Optica Publishing Group

Noise-free quantitative phase imaging in Gabor holography with conditional generative adversarial network

Open Access

Abstract

This paper shows that deep learning can eliminate the superimposed twin-image noise in phase images from a Gabor holographic setup. This is achieved by a conditional generative adversarial network (C-GAN), trained on input-output pairs of noisy phase images obtained from synthetic Gabor holography and the corresponding quantitative noise-free phase images obtained by off-axis digital holography. To train the model, Gabor holograms are generated from digital off-axis holograms by spatially shifting the real-image and twin-image bands in the frequency domain and then adding them to the DC term in the spatial domain. Finally, digital propagation of the Gabor hologram with the Fresnel approximation generates a superimposed phase image for the C-GAN model input. Two models were trained: a human red blood cell model and an elliptical cancer cell model. Following the training, several quantitative analyses were conducted on the biochemical properties and the similarity between actual noise-free phase images and the model output. Notably, we found that our model can recover other elliptical cell lines that were not observed during the training. Additionally, some misalignments can be compensated by the trained model; in particular, if the reconstruction distance is somewhat incorrect, the model can still retrieve in-focus images.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Gabor first suggested holography in 1948 [1]. Several years later, coherent laser sources emerged and allowed researchers to use the Gabor holography setup in different fields. Digital Gabor holography is particularly useful in conjunction with a numerical reconstruction algorithm for particle image analysis, 3D tracking, cell identification, or tracking swimming cells in a liquid flow [2–7]. The main advantage of Gabor holography is that it can be easily set up. The setup is compact, and the building cost is lower than that of other popular configurations since it requires only a few optical components.

However, Gabor holography suffers from a major limitation: an in-focus real image and an unfocused twin image are strongly superposed [8]. To resolve this problem, several instrumental and numerical solutions have been proposed. Instrumental approaches include phase shifting by adding wave plates, a piezo-electric transducer mirror, or an electro-optic modulator [9–16]. In this case, multiple holograms (usually two or three) corresponding to various phase differences between the object and reference light are recorded. Other phase recovery implementations require recording additional intensity information, such as two or three holograms at different distances [17–19]. The extra intensity data can help retrieve the missing phase information. However, this limits the imaging area and requires objects to stay immobile, or else small vibrations in the optical setup will disturb the results. This is particularly important for biological sample studies, especially in real-time flow cytometry applications. Iterative phase recovery has also been suggested for Gabor holography to remove unfocused twin-image noise [20–23]. Its main drawback is that it requires several back-and-forth propagations of light to recover the actual phase value. It also requires a convergence criterion or a number of iterations to be defined, which is generally unknown; this is particularly difficult for real-time biological sample analyses. Non-iterative approaches, diffraction-based approaches, and inverse problem solutions have also been suggested for phase recovery in Gabor holography [22–28]. Another approach in digital holographic microscopy is off-axis recording, in which the object wave and reference wave are not exactly on the same optical axis. A small tilt (a few degrees) is introduced between the object wave and reference wave and allows the twin image and real image to be separated by spatial filtering in the spectrum domain.
After numerical reconstruction of the off-axis hologram, a quantitative phase image related to the content and morphology of a biological sample can be obtained [29–32]. The off-axis setup requires several optical element adjustments for perfect imaging of a biological sample. Firstly, the optical path lengths of the reference wave and the object wave must be matched before recording the hologram of the biological sample. Secondly, any change made in the object arm needs readjustment in the reference arm. For example, if a microscope objective (MO) is inserted in the object arm, the same MO needs to be included in the reference arm; otherwise, aberrations must be compensated by adding extra terms to the original numerical propagation equation, which complicates it. An off-axis configuration also does not allow the whole bandwidth of a camera to be used, since the twin image and real image are recorded in separate, non-overlapping bands.

2. Proposed deep learning model for phase recovery

Convolutional neural network (CNN) and deep learning approaches have been proposed for several optical applications. Examples include virtual staining of non-stained samples [33], increasing spatial resolution in a large field of view in optical microscopy [34,35], color holographic microscopy with CNN [36], autofocusing and enhancing the depth-of-field in inline holography [37], lens-less computational imaging by deep learning [38], single-cell-based reconstruction distance estimation by a regression CNN model [39], super-resolution fringe patterns by deep learning holography [40], virtual refocusing in fluorescence microscopy to map 2D images to a 3D surface [41], and several other studies [42–44]. Deep-learning-based phase recovery by a residual CNN model was also suggested [45], but its application is limited because the reference noise-free phase images for the deep learning model are generated by the multi-height phase retrieval approach (8 holograms recorded at different sample-to-sensor distances). For biological samples, and particularly moving micro-objects (cancer cells and blood cells in flow cytometry applications), the method proposed in Ref. [45] is therefore of limited use. Another small drawback of that method is that the model output may be blurry. The reason is that the mean-squared error (MSE) is used to measure the difference between the model output and the ground-truth images; optimizing the network to minimize this error is equivalent to averaging, which ultimately causes blurry images. References [46,47] suggest that a noise-free phase image can be produced directly from the inline hologram without light propagation, but in some cases, when the contrast of the fringes is low, the inline hologram might be reconstructed poorly.

In this paper we show two main things. Firstly, we show that Gabor (or inline) holograms can be synthesized from off-axis holograms by shifting and adding the spectral bands. This method is important because it generates the identical Gabor (or inline) counterpart of an off-axis hologram for deep learning studies. Secondly, a popular image-to-image translation model is trained to convert noisy Gabor (or inline) phase images into noise-free phase images. To avoid requiring the biological sample to remain stationary during imaging, we suggest using off-axis holography and generating identical Gabor holograms with the method presented in Fig. 1. Our approach uses a convolutional neural network (a conditional generative adversarial network (C-GAN) model) to eliminate twin-image distortion in phase images obtained from the numerical reconstruction of Gabor holograms. The model is trained with pairs of phase images: a phase image obtained from Gabor holography (the model input) and the corresponding quantitative phase image obtained from off-axis digital holography (the desired model output). A novel method is used to replicate Gabor holograms from off-axis holograms, as shown in Fig. 1. This is essential because it provides a set of phase images of exactly the same microscopic object. The two phase images (a phase image obtained from a Gabor hologram and the identical noise-free quantitative phase image from off-axis holography) are fed into the model as input and output, and the model is then trained for a few hundred epochs.


Fig. 1. Scheme of the proposed method to generate superimposed noisy phase image and the corresponding noise-free high-contrast phase image. The original hologram is recorded in off-axis configuration (a) with its spectrum obtained by Fourier transform. The bandwidth of the real image, twin image, and zero-order noise in the frequency domain are selected separately by spatial filtering. After filtering, the real-image spectrum and twin-image spectrum are shifted to the center frequency. By applying the inverse Fourier transform, three holograms are provided. The intensities of the three holograms are added together, which is equivalent to a Gabor hologram (b). The Gabor hologram is numerically propagated at distance d, and then the noisy superimposed phase image (c) is the input for the C-GAN model. Fresnel propagation of H1 at distance d can provide a noise-free phase image to be used for the output of the C-GAN model (d).


The three bandwidths of the zero-order noise, real image, and twin image of the off-axis hologram are isolated separately by spatial filtering in the frequency domain. After frequency shifting of the real-image and twin-image spectra to the center, the three holograms (zero-order noise, real image, and twin image) are added together to generate the Gabor version of the off-axis hologram. Numerical propagation of the Gabor hologram provides the superimposed noisy phase image. We describe the optical equations, optical configurations, and the details of the deep learning model in the following subsections.
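The shift-filter-and-add procedure described above can be sketched with NumPy. The mask arrays and band-center offsets are illustrative placeholders; in practice they depend on the carrier frequency and bandwidth of the actual off-axis setup.

```python
import numpy as np

def synthesize_gabor(I_H, filt_real, filt_twin, filt_dc, shift_real, shift_twin):
    """Sketch: build a Gabor hologram from an off-axis one.
    filt_* are binary masks in the (fftshift-ed) frequency domain;
    shift_* are (row, col) offsets that move each band to the center."""
    S = np.fft.fftshift(np.fft.fft2(I_H))   # hologram spectrum, DC at center
    # Filter each band, shift the real/twin bands to the center frequency,
    # and transform back to the spatial domain.
    H1 = np.fft.ifft2(np.fft.ifftshift(np.roll(S * filt_real, shift_real, axis=(0, 1))))
    H2 = np.fft.ifft2(np.fft.ifftshift(np.roll(S * filt_twin, shift_twin, axis=(0, 1))))
    H0 = np.fft.ifft2(np.fft.ifftshift(S * filt_dc))  # zero-order band, no shift
    # Add the three holograms and keep only the amplitude part.
    return np.abs(H0 + H1 + H2)
```

A real pipeline would derive the masks and offsets from the measured positions of the ±1 orders in the spectrum; here they are passed in explicitly.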

2.1 Gabor hologram construction from off-axis holograms and optical details

The off-axis hologram between object wave O and reference wave R can be expressed as:

$${I_H} = {|O |^2} + {|R |^2} + {R^ \ast }O + {O^ \ast }R,$$
where O* and R* denote the complex conjugates of the object and reference beams, respectively. The small tilt angle θ between O and R allows complete isolation of the real image, twin image, and zero-order noise. Three spatial filters in the Fourier domain are used. The bandwidths of the real image and twin image are the same, while a filter with a smaller bandwidth is used for the zero-order noise. Finally, three holograms corresponding to the real image, twin image, and zero-order data are obtained, as shown in Fig. 2, with the following equations:
$$H_1 = \mathrm{IFFT}\{ \mathrm{FS}\{ \mathrm{FFT}(I_H) \times \mathrm{Filter}_{real} \} \},$$
$$H_2 = \mathrm{IFFT}\{ \mathrm{FS}\{ \mathrm{FFT}(I_H) \times \mathrm{Filter}_{twin} \} \},$$
$$H_0 = \mathrm{IFFT}\{ \mathrm{FFT}(I_H) \times \mathrm{Filter}_{noise} \},$$
$$GH = H_0 + H_1 + H_2,$$
where FFT and IFFT denote the Fourier and inverse Fourier transforms, respectively, and FS is the frequency shifting. To obtain the Gabor hologram, the three isolated holograms (H0, H1, and H2) are added together in the spatial domain, and only the amplitude part is preserved. Since the final image is phase-contrasted, the Gabor holograms or the off-axis holograms should be multiplied by the digital reference wave RD during the reconstruction process [29]. This is similar to classical holography, in which the recorded hologram is illuminated by the reference wave. The same RD is used for the numerical reconstruction of both the Gabor and off-axis hologram signals. When a microscope objective (MO) is inserted in the object arm, it introduces phase aberration in the off-axis configuration. This can be resolved numerically by multiplying the reconstructed wavefront by the computed complex conjugate of the phase aberration [29]. The reconstruction of each real-image complex field $({\Psi _{{H_1}}})$ and Gabor hologram complex field $({\Psi _G})$ can be expressed by the Fresnel approximation [29]. Eventually, the phase image from the Gabor hologram and the noise-free quantitative phase image of the off-axis hologram are respectively obtained from the argument of:
$$\phi_G(x,y) = \tan^{-1}\left\{ \frac{\mathrm{Im}[\Psi_G(m,n)]}{\mathrm{Re}[\Psi_G(m,n)]} \right\},\;\;\phi_{real}(x,y) = \tan^{-1}\left\{ \frac{\mathrm{Im}[\Psi_{H_1}(m,n)]}{\mathrm{Re}[\Psi_{H_1}(m,n)]} \right\}.$$
During the deep training of the model, ϕG is the input, and ϕreal is the desired output of the C-GAN model.
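As a sketch of the numerical propagation and phase extraction above, the following assumes a single-step Fresnel transfer function in the Fourier domain; the function and parameter names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def fresnel_propagate(field, d, wavelength, pixel):
    """Propagate a complex field over distance d via a Fresnel
    transfer function, then extract the phase as in Eq. (6)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, pixel)          # spatial frequencies (cycles/unit)
    fy = np.fft.fftfreq(ny, pixel)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function: exp(ikd) * exp(-i*pi*lambda*d*(fx^2 + fy^2))
    H = np.exp(1j * 2 * np.pi / wavelength * d) * \
        np.exp(-1j * np.pi * wavelength * d * (FX**2 + FY**2))
    psi = np.fft.ifft2(np.fft.fft2(field) * H)
    phase = np.arctan2(psi.imag, psi.real)  # argument of the complex field
    return psi, phase
```

Applying this to the Gabor hologram (multiplied by the digital reference wave) would yield the noisy superimposed phase ϕG; applying it to H1 yields ϕreal.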


Fig. 2. The off-axis hologram, its bandwidth, filtering, and spectrum shifting to the center generate three holograms for the real image, twin image, and zero-order noise. The intensity of the zero-order noise is adjusted for visualization.


In our optical configuration, the microscope magnification factor and field of view are 40×/0.75NA and 150µm, respectively, for red blood cell imaging. For cancer cells, 20×/0.4NA MO with a 280µm field of view was used. A 666nm laser source is delivered to the specimen plane with an intensity of ∼200µW/cm2, which is nearly six orders of magnitude less than intensities typically associated with confocal fluorescence microscopy. Interferograms are recorded by a Complementary Metal Oxide Semiconductor (CMOS) camera (Basler acA1920, 1920×1220 pixels of 5.86×5.86µm). Image reconstruction and all analyses were carried out using MATLAB software. Off-axis holograms were acquired on a commercially available DHM T-1001 from LynceeTec SA (Lausanne, Switzerland) and equipped with a motorized x-y stage (Märzhäuser Wetzlar GmbH & Co. KG, Wetzlar, Germany, ref. S429). A gallery of off-axis holograms, corresponding Gabor hologram, reconstructed phase image from the off-axis hologram, and reconstructed super-imposed phase image from the Gabor hologram is shown in Fig. 3.


Fig. 3. A gallery of the phase images obtained by the proposed Gabor hologram construction approach. For each off-axis phase image, a corresponding Gabor hologram was made. Both holograms were numerically reconstructed, and the phase images were used for the training and testing of our proposed deep learning model.


2.2 Convolutional neural network and deep learning

A convolutional neural network (CNN) is a class of deep neural networks made of artificial neurons, originally inspired by biological processes. A CNN generally includes an input layer, multiple hidden layers, and an output layer. The input to a CNN model is a matrix of size (image number) × (image height) × (image width) × (image depth). The hidden layers typically consist of a series of convolutional layers, each convolving its input with a set of learnable kernels and passing the result to the next layer; the input image is abstracted into a feature map after passing through one convolutional layer. During training, each filter is convolved across the width and height of the input volume, computing the dot product between the filter elements and the input values, and outputs a 2D activation map for that filter. The network learns filters that activate when a specific type of feature is detected at some spatial position in the input. Each neuron generates an output value by applying a specific function to the inputs coming from the previous layer's receptive field; this function is determined by a vector of weights and a bias. Learning in a neural network progresses by making iterative adjustments to these weights and biases [48].
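The feature-map computation described above can be illustrated with a minimal single-channel "valid" convolution with a bias and ReLU activation (a didactic sketch, not the framework implementation used in the paper):

```python
import numpy as np

def conv2d_valid(image, kernel, bias=0.0):
    """One learnable filter applied across the input: at each position,
    the dot product of the kernel with the underlying patch plus a bias,
    followed by a ReLU non-linearity, yields one activation map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel) + bias
    return np.maximum(out, 0.0)   # ReLU activation
```

A real convolutional layer stacks many such filters over multi-channel inputs and lets back-propagation adjust the kernel weights and biases.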

To resolve the blurriness effect of the previously suggested method, we propose using C-GAN, which has been used in several image-to-image translation studies [33,49–52]. The advantage of this model is that it uses a structured loss function, different from unstructured (pixel-to-pixel similarity) functions. Thus, it can be generalized for different image-to-image applications without changing the loss function or the structure of the model. The C-GAN consists of a U-net image generator and a PatchGAN classifier or discriminator. The generator (Fig. 4(a)) is trained to produce images that cannot be distinguished from real images by an adversarially trained discriminator. The discriminator (Fig. 4(b)) learns to classify between the generator's synthesized image (a "fake" image) and the "real" image (the noise-free phase image). In other words, the generator attempts to produce a noise-free image with the same statistical features as the off-axis noise-free phase image, while the discriminator tries to distinguish whether its input is the off-axis phase image or the generator's output. The training procedure seeks a state of equilibrium in which the generator's output and the off-axis phase image share very similar statistical distributions.


Fig. 4. Structure of the proposed model to recover phase values from Gabor holograms. (a) After digital propagation of a Gabor hologram, the phase image is fed into a U-net-shaped generator, which tries to remove the noise. (b) The Markovian discriminator receives two images (one is the output of the generator, and the other is the quantitative phase image). The discriminator outputs a probability indicating whether the image is "fake" or "real".


2.2.1 Model architecture

The proposed image-to-image translation is based on the concept of the conditional generative adversarial model (C-GAN) used in different applications [49–52]. The C-GAN consists of two main deep learning models: a generator and a discriminator. The discriminator serves as a tool to train the generator: the generator attempts to produce a noiseless image with the same statistical features as the off-axis phase image, while the discriminator tries to distinguish whether its input is the off-axis phase image or the generator output. The training procedure seeks a state of equilibrium in which the generator's output and the off-axis phase image share very similar statistical distributions. The generator's input and output image size is 768×768 pixels for the image-to-image translation (see Fig. 5(a)). This size was dictated by the memory of the GPU card used for training, which must hold a full batch of images; the C-GAN architecture itself allows larger images when hardware resources (the GPU in particular) are not a limitation. The generator consists of seven down-sampling and seven up-sampling units, following the general shape of a U-net. Down-sampling extracts the features of the input image for translation, and up-sampling reconstructs the image from the extracted features. Because down-sampling discards much of the high-frequency information of the original image, the up-sampled output would otherwise be blurry. Thus, skip connections are used to share high-frequency information between the input and output: each skip connection passes the information of the ith down-sampling layer to the (n−i)th up-sampling layer, reducing the blurry effects in the generated image.
The discriminator receives a 1536×768 input image (two concatenated images: the generator output and the real image), passes it through four convolution layers, and derives a 16×16-pixel patch (see Fig. 5(b)). The discriminator learns to distinguish between real and fake patches. Evaluating images by patches allows the model to be trained faster with fewer parameters.
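The down/up-sampling symmetry that makes these skip connections possible can be checked with a small shape-tracing sketch (illustrative only; the real generator also changes channel counts, which are omitted here):

```python
def unet_shapes(size=768, depth=7):
    """Trace feature-map sizes through a U-net of `depth` halving
    down-sampling steps and mirrored doubling up-sampling steps,
    asserting that each skip connection joins matching sizes."""
    down = [size]
    for _ in range(depth):
        down.append(down[-1] // 2)
    up, s = [], down[-1]
    for i in range(depth):
        s *= 2
        skip = down[depth - 1 - i]   # skip from the (n-i)th encoder layer
        assert s == skip, "skip connection requires matching spatial size"
        up.append(s)
    return down, up
```

For a 768×768 input and seven halvings, the bottleneck is 6×6, and each decoder stage concatenates with the encoder stage of equal size, so no resizing is needed at the skips.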


Fig. 5. (a) Generator architecture similar to U-net to recover noiseless phase image from the phase image obtained from the superimposed Gabor hologram, (b) discriminator to compare fake and real images with convolution layers. Tanh is the hyperbolic tangent.


3. Experimental results

Two C-GAN models were trained: 500 RBC phase images were used for the RBC model, and 1000 lung, skin, and breast cancer cell phase images for the elliptical cell model. The accuracy of each model was assessed with several unseen images. The training time was 15 hours and 16 minutes for the RBC model and 36 hours and 23 minutes for the cancer cell model. We used the Adam solver with adaptive momentum and the parameters β1=0.5 and β2=0.999. The number of epochs was 450 and the learning rate 0.0002, and an NVIDIA TITAN XP GPU was used for training. We chose 450 epochs because we observed no further changes in the generator's output images around epoch 400. Batch training with a batch size of 10 was used. The model was implemented on a computer with an Intel Xeon Gold 6134 CPU@3.20 GHz and 64 GB of RAM running Ubuntu 18.04.03, using Python version 3.6.8 and TensorFlow framework version 1.14.0. After training, we tested the trained generator with several images; network testing was also performed on an NVIDIA TITAN XP. The sample preparations and the objective function of the C-GAN model are explained in the Appendix (see Subsections A.1 and A.2). A gallery of test images is also shown in the Appendix (see Subsection A.3). For the RBC model, we also validated the trained model with real Gabor holograms recorded by blocking the reference wave of our optical configuration.

Figure 6(a) shows the development of the training according to the generator output. At each epoch, the generator, guided by the discriminator, is updated and produces a less noisy Gabor-reconstructed phase image. Figures 6(b) and 6(c) show the normalized loss functions of the generator and discriminator at different epochs.


Fig. 6. (a) Development of generator at different epochs. (b) Normalized loss for both generator and discriminator for RBC model. (c) Normalized loss for both generator and discriminator for cancer cell model.


3.1 Quantitative evaluations

To evaluate the performance, we calculated the MSE between the phase values obtained by the proposed method and off-axis quantitative phase imaging of the same image. The histograms of MSE for 2000 RBC images and 1000 cancer cell images (all cancer cells together) are shown in Figs. 7(a) and 7(b). Another criterion is the Structural Similarity Index (SSIM), which numerically evaluates the perceptual difference between two images. Figures 7(c) and 7(d) show histograms of SSIM for 2000 pairs of RBC images (a quantitative phase image from off-axis holography paired with the noise-free Gabor phase image from the deep learning model) and 1000 cancer cell images. One important advantage of quantitative phase imaging by digital holography is that biophysical and morphological features can be evaluated at the single-cell level. To validate the output of the deep-learning phase-recovery method, the dry mass of 200 cells was obtained at the single-cell level and compared with exactly the same cell in the off-axis quantitative phase image. The dry mass is directly related to the mean corpuscular hemoglobin (MCH) content since RBCs are mostly composed of hemoglobin. The dry mass is determined by integrating the phase over the projected surface area of the cell:

$$DM = \frac{{\phi \lambda \bar{S}}}{{2\pi \alpha }},$$
where $\bar{S}$ is the projected surface area of the cell, ϕ is the summation of all phase values within the projected surface area of the cell, and α is the specific refractive index increment factor (∼0.193 mL/g for RBCs and ∼0.2 mL/g for most cancer cells) [53,54]. Single-cell extraction is performed by binarization, using the same binary mask for our model's phase-image output and the quantitative phase image obtained by off-axis holography. A correlation analysis revealed that the MCH values obtained from off-axis quantitative phase imaging and from our proposed method are strongly correlated (see Fig. 7(e)). The Pearson product-moment correlation coefficient is 0.78 with a 99% confidence level under the t-test. The MCH value is 30 ± 5 pg (average ± STD) for the off-axis method and 28 ± 4 pg for the proposed method. Additionally, there is a significant correlation between the dry mass of cancer cells obtained from off-axis holography and from our proposed method (the two values are linearly related (y=0.92x+1.3); see Fig. 7(f)). The average dry mass values for off-axis holography and our model output are 31 ± 6 pg and 29.6 ± 5 pg, respectively.
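A discrete form of the dry-mass integral can be sketched as follows, summing the phase over a binary cell mask and weighting by the per-pixel projected area. This is an illustration under our own naming and unit conventions, not the authors' exact implementation:

```python
import numpy as np

def dry_mass(phase, mask, wavelength, pixel_area, alpha):
    """Dry mass as (lambda / (2*pi*alpha)) * sum(phase over cell) * pixel_area.
    `mask` is the binary projected surface of the cell; consistent units
    (e.g. wavelength in m, pixel_area in m^2, alpha in m^3/kg) are the
    caller's responsibility."""
    phi_sum = np.sum(phase[mask])          # total phase within the cell
    return wavelength * phi_sum * pixel_area / (2 * np.pi * alpha)
```

The same mask applied to both the model output and the off-axis phase image makes the two dry-mass values directly comparable, as done in the correlation analysis above.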


Fig. 7. (a) and (b) Histograms of phase MSE for RBCs and cancer cells, respectively. (c) and (d) Histograms of phase SSIM for RBCs and cancer cells, respectively, between the off-axis quantitative phase image and the proposed deep learning phase recovery in Gabor holography. (e) Single-cell mean corpuscular hemoglobin analysis between the off-axis phase image and our model. (f) Dry mass analysis between the off-axis phase image and our model output.


3.2 Evaluations for Gabor holography setup

In addition, to validate the proposed method, the off-axis optical setup was modified. A shutter was added to the off-axis configuration to block the reference wave, as shown in Fig. 8. In this case, only one beam (the object wave) illuminates the sample and reaches the camera, which yields a real Gabor hologram. After blocking the reference wave, several Gabor holograms were recorded; after numerical propagation, the superimposed phase values were fed into the model, and the noise-free phase images were obtained (one example is shown in Fig. 9). A gallery of the results is shown in the Appendix (see Fig. 11 in Appendix A.3).


Fig. 8. Gabor holography experimental setup. Gabor holographic configuration can be obtained by using a shutter to block the reference wave from the off-axis DHM.



Fig. 9. (a) A Gabor hologram recorded in Gabor configuration, (b) the spectrum of the hologram, (c) reconstructed phase image from the Gabor hologram, (d) noise-free result from our model.


We also measured the volume and MCH value of RBCs obtained from real Gabor holography. The Gabor holograms were numerically reconstructed, and the noise in the phase images was removed with the trained model. The volume and MCH are compared with the results obtained with off-axis holography in Table 1. The volume is obtained as follows:

$$V \cong {p^2}\sum\limits_{(i,j) \in {S_p}} {h(i,j)} ,$$
where the summation is performed over all pixels (i,j) belonging to the RBC's projected surface Sp, p is the pixel size in the reconstruction plane, and h(i,j) is the thickness value at pixel (i,j) obtained by:
$$h(x,y) = \frac{{\lambda \times \phi (x,y)}}{{2\pi ({{n_{RBC}} - {n_m}} )}},$$
where λ is the wavelength of the illumination light, ϕ(x,y) is the phase value at pixel (x,y), and nRBC=1.42 and nm=1.3334 are the refractive indices of the RBCs and the HEPA medium, respectively [55].
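The two equations combine into a short sketch; the default refractive indices match the values above, while the pixel size and mask are illustrative placeholders:

```python
import numpy as np

def rbc_volume(phase, mask, wavelength, pixel, n_rbc=1.42, n_m=1.3334):
    """Cell volume: convert phase to thickness h = lambda*phi / (2*pi*(n_rbc-n_m)),
    then sum h over the projected surface Sp (mask) weighted by pixel area p^2.
    Consistent length units for `wavelength` and `pixel` are assumed."""
    h = wavelength * phase / (2 * np.pi * (n_rbc - n_m))
    return pixel**2 * np.sum(h[mask])
```

Because the thickness conversion assumes a single uniform intracellular refractive index, the result is a morphological estimate rather than an exact volume for cells with internal refractive-index variations.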


Table 1. Volume and MCH value for off-axis holography and our model (n >70)

4. Discussions

Phase recovery in Gabor holography can be achieved by a CNN model and deep learning. The CNN learns to extract and separate the spatial features of the desired data (the real image) from the features of the noise (the twin image and other undesired interference terms) in the phase image. Remarkably, we also observed that the generator can recover the shape of cells that are partially unfocused (see Figs. 10(a) and 10(b)). This indicates that the deep learning model not only learns the global and local features of the real image at the whole-image level, but also compensates for other errors at the single-cell level. A quantitative analysis of this defocus recovery is shown in Fig. 12 in Appendix A.4. Additionally, our model can recover different cell lines that were not observed during the training process. In this case, we fed our model the Gabor phase image of bladder cancer cells (see Fig. 10(c)), and noise-free images were generated (see Fig. 10(d)). This suggests that the trained model can recover other cell lines of elliptical, roughly spherical cancer cells and, generally speaking, all types of elliptical cells, because these cells are very similar in shape regardless of the organ in which they are produced. We also trained another model with HeLa cancer cells cultured for several hours, with one hologram recorded every 10 minutes. These cells show vast shape variations during the culturing period as they attach to the culture glass, detach, and divide. The model was trained, and some results are shown in Fig. 13 in Appendix A.5. Non-deep-learning techniques mostly employ additional physical and intensity measurements to fully or partially remove the noise in the phase information, based on an iterative solution or an analysis satisfying the wave equation.
The proposed method requires no knowledge of the optical components and no modeling of coherent or semi-coherent light interference. It is near-instantaneous, which makes it suitable for high-throughput or high-content screening (around 0.02 seconds per phase image, averaged over 3000 images). This was achieved on a PC with 16 GB of RAM, an Intel Core i7-9700K 3.60 GHz CPU, and an NVIDIA GeForce RTX 2080 Ti GPU card.


Fig. 10. The proposed model can also recover single cells that are not in focus compared to other cells. (a) Two off-axis quantitative phase images (QPIs) where some cells are unfocused (see arrows). (b) Exactly the same image obtained by the deep learning method for noise-free Gabor holography. (c) Gabor phase image of bladder cancer cell that was never observed by the model while training. (d) Model output of cancer cell shown in (c). Two cells are magnified and plotted in 3D.


Several points need to be considered when applying deep learning to the study of biological samples with optical setups, because many variables affect the results, such as the reconstruction distance and sample-to-sample variations. According to our experiments, training separate models for similar biological samples is more efficient than a single model trained on a mix of samples with different morphologies. During training, the network tries to learn the features of each morphology, which sometimes differ from sample to sample, resulting in a poorly trained model. For example, biconcave RBCs usually consist of a dimple and a ring and are different from elliptical cancer cells with spherical geometry. It is also worth mentioning that for a well-trained model, many sample images at different sample-to-camera distances should be recorded. This is essential for providing a good training dataset: in Gabor holography, the superimposed phase image changes as the sample-to-camera distance varies, because the overlap between the twin image and the real image depends on this distance. Additionally, it is very important to have an estimate of the reconstruction distance for numerical propagation. Propagating the hologram to a wrong plane (too short or too far) will result in a phase image with patterns and features unknown to the trained model, in which case the model might fail to remove the superimposed noise. Training can be performed on a GPU server to speed up the procedure; for phase recovery on new, unobserved samples, a conventional PC with a GPU card is sufficient.

5. Conclusions

We presented a deep-learning-based approach to obtain noise-free quantitative phase images in Gabor holographic microscopy. To achieve this, a C-GAN was trained with several RBCs and elliptical cancer cells. The model input was generated with a novel approach that separately isolates the spectra of the real image, the twin image, and the zero-order noise of an off-axis digital hologram. The three resulting holograms are summed to construct the Gabor hologram, which is digitally reconstructed; the phase part is then fed into the C-GAN model. The desired output is the quantitative noise-free phase from off-axis holographic imaging. After training for several epochs, the proposed model was capable of removing the superimposed noise in actual Gabor holograms. Interestingly, our model was also able to overcome depth-of-focus limitations at the single-cell level and to recover elliptical cells that were not observed during training.
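The hologram-synthesis procedure summarized above (filter the real-image, twin-image, and zero-order bands of the off-axis spectrum, re-center the two side bands, and sum the resulting intensities) can be sketched as follows. This is a hedged NumPy illustration; the band centers and filter radius are assumptions that would have to be read off the actual off-axis spectrum:

```python
import numpy as np

def circular_mask(shape, center, radius):
    """Binary pass-band of given radius (in pixels) around `center`."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return ((yy - center[0])**2 + (xx - center[1])**2) <= radius**2

def gabor_from_offaxis(hologram, real_center, twin_center, radius):
    """Filter the real-image, twin-image, and zero-order bands of an
    off-axis hologram, shift the side bands to the center frequency,
    and sum the three intensities into a synthetic Gabor hologram."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    c = (hologram.shape[0] // 2, hologram.shape[1] // 2)

    def band(center, shift):
        sub = F * circular_mask(F.shape, center, radius)
        if shift:  # move the side band to the spectrum center
            sub = np.roll(sub, (c[0] - center[0], c[1] - center[1]),
                          axis=(0, 1))
        return np.fft.ifft2(np.fft.ifftshift(sub))

    h1 = band(real_center, shift=True)   # real-image hologram H1
    h2 = band(twin_center, shift=True)   # twin-image hologram H2
    h0 = band(c, shift=False)            # zero-order hologram H0
    return np.abs(h0)**2 + np.abs(h1)**2 + np.abs(h2)**2
```

The returned array plays the role of the Gabor hologram GH, which is then numerically propagated to produce the superimposed phase image for the model input.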

Appendix

This Appendix presents the sample preparation, the objective function of the learning model, a gallery of the images used to train and test the proposed model, a quantitative analysis of the model’s ability to recover unfocused results, and the Gabor phase image of a HeLa cancer cell together with the model’s output.

A.1 Cancer cell and red blood cell sample preparations

Cancer cells (A549 lung cancer cell line, SK-MEL-28 skin cancer cell line, and SK-BR3 human breast cancer cell line) were seeded in a 35 mm imaging dish with a polymer coverslip bottom and low walls (Ibidi 80136). They were incubated in standard tissue culture conditions of 37°C and 5% CO2 with 95% humidity. We used BI Roswell Park Memorial Institute (RPMI) 1640 Medium (ATCC 30-2001) supplemented with 10% Fetal Bovine Serum (FBS) (ATCC 30-2020) as the growth medium. For red blood cell preparations, approximately 5 ml of blood was collected from a healthy donor using a syringe. It was diluted at a ratio of 1:10 (v/v) in cold PBS buffer (pH 7.4, 138 mM NaCl, 27 mM KCl, 10 mM Na2HPO4, and 1 mM KH2PO4). Blood cells were sedimented by centrifuging at 200 g and 4°C for 10 min, following which the buffy coat was gently collected and washed once in PBS buffer for 2 min at 4°C. Finally, the isolated erythrocytes were suspended in HEPA buffer (280 mOsm, 15 mM HEPES pH 7.4, 130 mM NaCl, 5.4 mM KCl, 10 mM glucose, 1 mM CaCl2, 0.5 mM MgCl2, and 1 mg/ml bovine serum albumin) at 0.2% hematocrit. 1 ml of the erythrocyte suspension was diluted to 15 ml using HEPA buffer. Next, ∼30 µl of the final erythrocyte suspension was introduced into an imaging slide consisting of two coverslips. The bottom coverslip was coated with polyornithine to ensure that cells adhere to the coverslip surface. The glass was mounted on the DHM device and incubated for 10 minutes at 17°C under conditions of 5% CO2 and high humidity (Chamlide WP incubator system, LCI, Seoul, South Korea). This ensures that the cells adhere well to the glass.

A.2 C-GAN objective function

The generator and discriminator are composed of convolution-BatchNorm-Leaky ReLU blocks with a 4×4 filter size. To train the C-GAN, we used an adversarial loss combined with an L1 loss, following the objective of the Pix2Pix model [49]. The loss function of the conditional GAN is defined as:

$$L_{C\text{-}GAN}(G,D) = E_{x,y}[\log D(x,y)] + E_{x}[\log\{1 - D(x, G(x))\}],$$
where x and y denote, respectively, the noisy phase image from the Gabor hologram reconstruction and the noiseless phase image, G is the generator, and D is the discriminator. The generator tries to minimize this objective against an adversarial D that tries to maximize it. Additionally, the L1 loss encourages the generator to reproduce the overall structure of the image and its low-frequency content.
$$L_{L1}(G) = E_{x,y}[\|y - G(x)\|_{1}].$$
Therefore, the objective of the training is the following:
$$L^{\ast} = \arg\min_{G}\max_{D} L_{C\text{-}GAN}(G,D) + \lambda L_{L1}(G),$$
where λ is set to 100 in this study to reduce the sharp artifacts in image generation [49].
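As a numerical illustration of this combined objective, the generator-side loss for one batch can be evaluated as below. This is a NumPy sketch with synthetic discriminator outputs; `pix2pix_generator_loss` is a hypothetical helper, not the actual training code:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, y, g_x, lam=100.0, eps=1e-12):
    """Generator-side objective: the adversarial term
    E_x[log(1 - D(x, G(x)))] plus lam * L1(y, G(x))."""
    adv = np.mean(np.log(1.0 - d_fake + eps))   # generator minimizes this
    l1 = np.mean(np.abs(y - g_x))               # per-pixel L1 distance
    return adv + lam * l1

# synthetic batch: discriminator half-fooled, small reconstruction error
d_fake = np.full(4, 0.5)                        # D(x, G(x)) outputs
y = np.zeros((4, 8, 8))
g_x = np.full((4, 8, 8), 0.01)
loss = pix2pix_generator_loss(d_fake, y, g_x)   # approx log(0.5) + 100 * 0.01
```

With λ = 100, even a small per-pixel L1 error dominates the adversarial term, which is why this weighting suppresses sharp artifacts while keeping the overall structure.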

A.3 Test images and Gabor recorded holograms


Fig. 11. A gallery of the test images. (a) RBC Gabor phase image, (b) breast cancer cell, and (c) lung cancer cell. For each sample, an off-axis quantitative phase image is also shown for visualization. (d) A gallery of test images with Gabor holograms recorded using only the object wave (the reference wave was blocked). A Gabor hologram is shown alongside the reconstructed superimposed phase image and the noise-removed phase image.



Fig. 12. Representation of the model output when the input Gabor phase image is reconstructed with a deviated reconstruction distance. (a) Super-imposed phase images, (b) corresponding model outputs, (c) 3D mesh of one RBC.


A.4 Quantitative analysis of the model’s ability to recover unfocused results

If the correct focus distance is R, the Gabor hologram is propagated at R + 0.25R, R + 0.50R, R + 0.75R, and R + 1.0R, and the resulting superimposed phase image is fed to the model. If the focus deviates by about 50% of its correct value, the model can roughly preserve the shape of the RBC; for 75% and 100% deviations, the RBC’s biconcave shape is not preserved (see Fig. 12).
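The deviated-distance sweep described above amounts to the following (illustrative snippet; `R` is a placeholder focus distance, not a value from the paper):

```python
# sweep of deviated reconstruction distances, as described in the text
R = 12e-3                                   # placeholder focus (meters)
deviated = [R * (1 + f) for f in (0.25, 0.50, 0.75, 1.00)]
# each distance is used to propagate the Gabor hologram before the
# resulting superimposed phase image is fed to the trained model
```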

A.5 Gabor phase image of HeLa cancer cell and the model’s output


Fig. 13. Gabor phase image of a HeLa cancer cell and the model’s output. The cells go through several shape changes. (a) Noisy phase image, (b) noise-removed phase image.


Funding

National Research Foundation of Korea (NRF-2020R1A2C3006234, NRF-2015K1A1A2029224); Daegu Gyeongbuk Institute of Science and Technology (20-CoE-BT-02).

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

2. I. Moon, M. Daneshpanah, B. Javidi, and A. Stern, “Automated three-dimensional identification and tracking of micro/nano biological organisms by computational holographic microscopy,” Proc. IEEE 97(6), 990–1010 (2009). [CrossRef]  

3. B. Javidi, I. Moon, S. Yeom, and E. Carapezza, “Three-dimensional imaging and recognition of microorganism using single-exposure on-line (SEOL) digital holography,” Opt. Express 13(12), 4492–4506 (2005). [CrossRef]  

4. I. Moon and B. Javidi, “3-D visualization and identification of biological microorganisms using partially temporal incoherent light in-line computational holographic imaging,” IEEE Trans. Med. Imaging 27(12), 1782–1790 (2008). [CrossRef]  

5. G. Caprio, A. Mallahi, P. Ferraro, R. Dale, G. Coppola, B. Dale, G. Coppola, and F. Dubois, “4D tracking of clinical seminal samples for quantitative characterization of motility parameters,” Biomed. Opt. Express 5(3), 690–700 (2014). [CrossRef]  

6. P. Memmolo, L. Miccio, M. Paturzo, G. Caprio, G. Coppola, P. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photonics 7(4), 713–755 (2015). [CrossRef]  

7. M. Paturzo, V. Pagliarulo, V. Bianco, P. Memmolo, L. Miccio, F. Merola, and P. Ferraro, “Digital Holography, a metrological tool for quantitative analysis: Trends and future applications,” Optics and Lasers in Engineering 104, 32–47 (2018). [CrossRef]  

8. P. Picart and J. Leval, “General theoretical formulation of image formation in digital Fresnel holography,” J. Opt. Soc. Am. A 25(7), 1744–1761 (2008). [CrossRef]  

9. J. Barton, “Removing multiple scattering and twin images from holographic images,” Phys. Rev. Lett. 67(22), 3106–3109 (1991). [CrossRef]  

10. P. Guo and A. Devaney, “Digital microscopy using phase-shifting digital holography with two reference waves,” Opt. Lett. 29(8), 857–859 (2004). [CrossRef]  

11. L. Cai, Q. Liu, and X. Yang, “Phase-shift extraction and wave-front reconstruction in phase-shifting interferometry with arbitrary phase steps,” Opt. Lett. 28(19), 1808–1810 (2003). [CrossRef]  

12. S. Lai, B. King, and M. Neifeld, “Wave front reconstruction by means of phase-shifting digital in-line holography,” Opt. Commun. 173(1-6), 155–160 (2000). [CrossRef]  

13. G. Koren, D. Joyeux, and F. Polack, “Twin-image elimination in in-line holography of finite-support complex objects,” Opt. Lett. 16(24), 1979–1981 (1991). [CrossRef]  

14. T. Tahara, K. Ito, T. Kakue, M. Fujii, Y. Shimozato, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, “Parallel phase-shifting digital holographic microscopy,” Biomed. Opt. Express 1(2), 610–616 (2010). [CrossRef]  

15. T. Tahara, Y. Shimozato, Y. Awatsuji, K. Nishio, S. Ura, O. Matoba, and T. Kubota, “Spatial-carrier phase-shifting digital holography utilizing spatial frequency analysis for the correction of the phase-shift error,” Opt. Lett. 37(2), 148–150 (2012). [CrossRef]  

16. J. Liu and T. Poon, “Two-step-only quadrature phase-shifting digital holography,” Opt. Lett. 34(3), 250–252 (2009). [CrossRef]  

17. Y. Zhang, G. Pedrini, W. Osten, and H. Tiziani, “Reconstruction of in-line digital holograms from two intensity measurements,” Opt. Lett. 29(15), 1787–1789 (2004). [CrossRef]  

18. T. Xiao, H. Xu, Y. Zhang, J. Chen, and Z. Xu, “Digital image decoding for in-line X-ray holography using two holograms,” J. Mod. Opt. 45(2), 343–353 (1998). [CrossRef]  

19. Y. Zhang and X. Zhang, “Reconstruction of a complex object from two in-line holograms,” Opt. Express 11(6), 572–578 (2003). [CrossRef]  

20. L. Denis, C. Fournier, T. Fournel, and C. Ducottet, “Numerical suppression of the twin image in in-line holography of a volume of micro-objects,” Meas. Sci. Technol. 19(7), 074004 (2008). [CrossRef]  

21. T. Nakamura, K. Nitta, and O. Matoba, “Iterative algorithm of phase determination in digital holography for real-time recording of real objects,” Appl. Opt. 46(28), 6849–6853 (2007). [CrossRef]  

22. G. Koren, F. Polack, and D. Joyeux, “Iterative algorithms for twin-image elimination in in-line holography using finite-support constraints,” J. Opt. Soc. Am. A 10(3), 423–433 (1993). [CrossRef]  

23. T. Latychevskaia and H. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. 98(23), 233901 (2007). [CrossRef]  

24. J. Gire, L. Denis, C. Fournier, E. Thiébaut, F. Soulez, and C. Ducottet, “Digital holography of particles: benefits of the ‘inverse problem’ approach,” Meas. Sci. Technol. 19(7), 074005 (2008). [CrossRef]  

25. L. Xu, J. Miao, and A. Asundi, “Properties of digital holography based on in-line configuration,” Opt. Eng. 39(12), 3214–3219 (2000). [CrossRef]  

26. C. Cho, B. Choi, H. Kang, and S. Lee, “Numerical twin image suppression by nonlinear segmentation mask in digital holography,” Opt. Express 20(20), 22454–22464 (2012). [CrossRef]  

27. L. Onural and P. Scott, “Digital decoding of in-line holograms,” Opt. Eng. 26(11), 261124 (1987). [CrossRef]  

28. L. Denis, D. Lorenz, E. Thiébaut, C. Fournier, and D. Trede, “Inline hologram reconstruction with sparsity constraints,” Opt. Lett. 34(22), 3475–3477 (2009). [CrossRef]  

29. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38(34), 6994–7001 (1999). [CrossRef]  

30. T. Colomb, E. Cuche, F. Charrière, J. Kühn, N. Aspert, F. Montfort, P. Marquet, and C. Depeursinge, “Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation,” Appl. Opt. 45(5), 851–863 (2006). [CrossRef]  

31. B. Kemper and G. Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47(4), A52–A61 (2008). [CrossRef]  

32. A. Anand, I. Moon, and B. Javidi, “Automated disease identification with 3-D optical imaging: a medical diagnostic tool,” Proc. IEEE 105(5), 924–946 (2017). [CrossRef]  

33. Y. Rivenson, T. Liu, Z. Wei, Y. Zhang, K. Haan, and A. Ozcan, “PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning,” Light: Sci. Appl. 8(1), 23 (2019). [CrossRef]  

34. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

35. K. Haan, Y. Rivenson, Y. Wu, and A. Ozcan, “Deep-learning-based image reconstruction and enhancement in optical microscopy,” Proc. IEEE 108(1), 30–50 (2020). [CrossRef]  

36. T. Liu, Z. Wei, Y. Rivenson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019). [CrossRef]  

37. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

38. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

39. K. Jaferzadeh, S. Hwang, I. Moon, and B. Javidi, “No-search focus prediction at the single cell level in digital holographic imaging with deep convolutional neural network,” Biomed. Opt. Express 10(8), 4276–4289 (2019). [CrossRef]  

40. Z. Ren, H. So, and E. Lam, “Fringe pattern improvement and super-resolution using deep learning in digital holography,” IEEE Trans. Ind. Inf. 15(11), 6179–6186 (2019). [CrossRef]  

41. Y. Wu, Y. Rivenson, H. Wang, Y. Luo, E. Ben-David, L. Bentolila, C. Pritz, and A. Ozcan, “Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning,” Nat. Methods 16(12), 1323–1331 (2019). [CrossRef]  

42. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

43. Y. Rivenson, Y. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light: Sci. Appl. 8(1), 85 (2019). [CrossRef]  

44. X. Lin, Y. Rivenson, N. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

45. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

46. Y. Xue, S. Cheng, Y. Li, and L. Tian, “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019). [CrossRef]  

47. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018). [CrossRef]  

48. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

49. P. Isola, J. Zhu, T. Zhou, and A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2017).

50. P. Moeskops, M. Veta, M. Lafarge, K. Eppenhof, and J. Pluim, “Adversarial training and dilated convolutions for brain MRI segmentation,” in international workshop on deep learning in medical image analysis (2017).

51. F. Mahmood, D. Borders, R. Chen, G. McKay, K. Salimian, A. Baras, and N. Durr, “Deep adversarial training for multi-organ nuclei segmentation in histopathology images,” IEEE Trans. Med. Imaging (to be published).

52. Y. Xue, T. Xu, H. Zhang, L. Long, and X. Huang, “Segan: Adversarial network with multi-scale L1 loss for medical image segmentation,” Neuroinf. 16(3-4), 383–392 (2018). [CrossRef]  

53. R. Barer, “Determination of dry mass, thickness, solid and water concentration in living cells,” Nature 172(4389), 1097–1098 (1953). [CrossRef]  

54. B. Rappaz, E. Cano, T. Colomb, J. Kuhn, C. Depeursinge, V. Simanis, P. Magistretti, and P. Marquet, “Noninvasive characterization of the fission yeast cell cycle by monitoring dry mass with digital holographic microscopy,” J. Biomed. Opt. 14(3), 034049 (2009). [CrossRef]  

55. B. Rappaz, A. Barbul, Y. Emery, R. Korenstein, C. Depeursinge, P. Magistretti, and P. Marquet, “Comparative study of human erythrocytes by digital holographic microscopy, confocal microscopy, and impedance volume analyzer,” Cytometry, Part A 73A(10), 895–903 (2008). [CrossRef]  



Figures (13)

Fig. 1. Scheme of the proposed method to generate the superimposed noisy phase image and the corresponding noise-free high-contrast phase image. The original hologram is recorded in off-axis configuration (a) with its spectrum obtained by Fourier transform. The bandwidths of the real image, twin image, and zero-order noise in the frequency domain are selected separately by spatial filtering. After filtering, the real-image spectrum and twin-image spectrum are shifted to the center frequency. By applying the inverse Fourier transform, three holograms are provided. The intensities of the three holograms are added together, which is equivalent to a Gabor hologram (b). The Gabor hologram is numerically propagated at distance d, and the noisy superimposed phase image (c) is the input for the C-GAN model. Fresnel propagation of H1 at distance d provides a noise-free phase image to be used as the output of the C-GAN model (d).

Fig. 2. Off-axis hologram, its bandwidth, filtering, and spectrum shifting to the center generate three holograms for the real image, twin image, and zero-order noise. The intensity of the zero-order noise is adjusted for visualization.

Fig. 3. A gallery of the phase images obtained by the proposed Gabor hologram construction approach. For each off-axis phase image, a corresponding Gabor hologram was made. Both holograms were numerically reconstructed, and the phase images were used for the training and testing of our proposed deep learning model.

Fig. 4. Structure of the proposed model to recover phase values from Gabor holograms. (a) After digital propagation of a Gabor hologram, the phase image is fed into a U-net shape generator. This generator tries to remove the noise. (b) The Markovian discriminator receives images (one is the output of the generator, and the other one is the quantitative phase image). The discriminator outputs a probability value indicating whether the image is “fake” or “real”.

Fig. 5. (a) Generator architecture similar to U-net to recover a noiseless phase image from the phase image obtained from the superimposed Gabor hologram, (b) discriminator to compare fake and real images with convolution layers. Tanh is the hyperbolic tangent.

Fig. 6. (a) Development of the generator at different epochs. (b) Normalized loss for both generator and discriminator for the RBC model. (c) Normalized loss for both generator and discriminator for the cancer cell model.

Fig. 7. (a) and (b) Histograms of phase MSE for RBCs and cancer cells, respectively. (c) and (d) Histograms of phase SSIM for RBCs and cancer cells, respectively, between the off-axis quantitative phase image and the proposed deep learning phase recovery in Gabor holography. (e) Single-cell mean corpuscular hemoglobin analysis between the off-axis phase image and our model. (f) Dry mass analysis between the off-axis phase image and our model output.

Fig. 8. Gabor holography experimental setup. The Gabor holographic configuration can be obtained by using a shutter to block the reference wave from the off-axis DHM.

Fig. 9. (a) A Gabor hologram recorded in Gabor configuration, (b) the spectrum of the hologram, (c) reconstructed phase image from the Gabor hologram, (d) noise-free result from our model.

Fig. 10. The proposed model can also recover single cells that are out of focus relative to neighboring cells. (a) Two off-axis quantitative phase images (QPIs) in which some cells are unfocused (see arrows). (b) The same fields of view obtained by the deep-learning method for noise-free Gabor holography. (c) Gabor phase image of a bladder cancer cell that was never observed by the model during training. (d) Model output for the cancer cell shown in (c). Two cells are magnified and plotted in 3D.

Fig. 11. A gallery of the test images. (a) RBC Gabor phase image, (b) breast cancer cell, and (c) lung cancer cell. For each sample, an off-axis quantitative phase image is also shown for visualization. (d) A gallery of test images with Gabor holograms recorded using only the object wave (the reference wave was blocked). A Gabor hologram is shown alongside the reconstructed superimposed phase image and the noise-removed phase image.

Fig. 12. Representation of the model output when the input Gabor phase image is reconstructed with a deviated reconstruction distance. (a) Superimposed phase images, (b) corresponding model outputs, (c) 3D mesh of one RBC.

Fig. 13. Gabor phase image of a HeLa cancer cell and the model’s output. The cells go through several shape changes. (a) Noisy phase image, (b) noise-removed phase image.

Tables (1)

Table 1. Volume and MCH value for off-axis holography and our model (n >70)

Equations (12)

$$I_H = |O|^2 + |R|^2 + RO^{\ast} + OR^{\ast},$$
$$H_1 = \mathrm{IFFT}\{\mathrm{FS}\{\mathrm{FFT}(I_H) \times Filter_{real}\}\},$$
$$H_2 = \mathrm{IFFT}\{\mathrm{FS}\{\mathrm{FFT}(I_H) \times Filter_{twin}\}\},$$
$$H_0 = \mathrm{IFFT}\{\mathrm{FFT}(I_H) \times Filter_{noise}\},$$
$$GH = H_0 + H_1 + H_2,$$
$$\phi_G(x,y) = \tan^{-1}\!\left\{\frac{\mathrm{Im}[\Psi_G(m,n)]}{\mathrm{Re}[\Psi_G(m,n)]}\right\},\qquad \phi_{real}(x,y) = \tan^{-1}\!\left\{\frac{\mathrm{Im}[\Psi_{H_1}(m,n)]}{\mathrm{Re}[\Psi_{H_1}(m,n)]}\right\}.$$
$$DM = \frac{\bar{\phi}\,\lambda\,S}{2\pi\alpha},$$
$$V \approx p^{2}\sum_{(i,j)\in S_p} h(i,j),$$
$$h(x,y) = \frac{\lambda\,\phi(x,y)}{2\pi(n_{RBC} - n_m)},$$
$$L_{C\text{-}GAN}(G,D) = E_{x,y}[\log D(x,y)] + E_{x}[\log\{1 - D(x, G(x))\}],$$
$$L_{L1}(G) = E_{x,y}[\|y - G(x)\|_{1}].$$
$$L^{\ast} = \arg\min_{G}\max_{D} L_{C\text{-}GAN}(G,D) + \lambda L_{L1}(G),$$
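The thickness and dry-mass relations above can be evaluated directly from a phase image. This is a hedged sketch; the wavelength, refractive indices n_RBC and n_m, and specific refraction increment α below are typical literature values assumed for illustration, not the paper's exact parameters:

```python
import numpy as np

def rbc_thickness(phase, wavelength=680e-9, n_rbc=1.396, n_m=1.334):
    """h(x, y) = wavelength * phi(x, y) / (2*pi*(n_RBC - n_m))."""
    return wavelength * phase / (2 * np.pi * (n_rbc - n_m))

def dry_mass(phase, pixel_area, wavelength=680e-9, alpha=2e-4):
    """DM = mean(phi) * wavelength * S / (2*pi*alpha), with S the
    projected cell area and alpha the specific refraction increment
    (~0.2 ml/g, i.e. 2e-4 m^3/kg, an assumed typical value)."""
    S = phase.size * pixel_area               # projected area in m^2
    return np.mean(phase) * wavelength * S / (2 * np.pi * alpha)

# example: a uniform 2*pi-radian phase patch over a 10x10 segmented cell
phase = np.full((10, 10), 2 * np.pi)
h = rbc_thickness(phase)                      # thickness map in meters
```

The cell volume then follows from the listed relation by summing the thickness map over the segmented cell pixels and multiplying by the pixel area.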