
Deep learning-assisted low-cost autofluorescence microscopy for rapid slide-free imaging with virtual histological staining

Open Access

Abstract

Slide-free imaging techniques have shown great promise in improving the histological workflow. For example, computational high-throughput autofluorescence microscopy by pattern illumination (CHAMP) has achieved high resolution with a long depth of field, which, however, requires a costly ultraviolet laser. Here, simply using a low-cost light-emitting diode (LED), we propose a deep learning-assisted framework of enhanced widefield microscopy, termed EW-LED, to generate results similar to CHAMP (the learning target). Compared with CHAMP, EW-LED reduces the cost by 85× and shortens the image acquisition time and computation time by 36× and 17×, respectively. This framework can be applied to other imaging modalities, enhancing widefield images for better virtual histology.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Postoperative histological examination based on hematoxylin and eosin (H&E)-stained thin tissue slices prepared by the formalin-fixed and paraffin-embedded (FFPE) process is the gold standard in clinical diagnosis. However, this thin-slice preparation procedure, including formalin fixation, tissue processing, paraffin embedding, microtome sectioning, and tissue mounting on glass, is lengthy and laborious, prolonging diagnostic report generation from hours to days [1]. While frozen sectioning can serve as a rapid alternative for preparing thin tissue slices for intraoperative assessment, it is prone to freezing artifacts and sub-optimal sectioning of fatty tissues, affecting slide interpretation and diagnostic accuracy [2].

Recently, many slide-free imaging techniques using deep ultraviolet (deep-UV) light have been developed due to the great demand for rapid histopathology. Deep-UV light is highly absorbed by a variety of intrinsic biomolecules and fluorophores [3,4], which is helpful in providing absorption-based contrast with or without fluorescence. The short penetration depth of deep-UV also restricts excitation to the tissue surface, achieving high image contrast on the surface of non-sectioned tissue. UV-based photoacoustic microscopy (UV-PAM) has been demonstrated for label-free histological imaging of human breast and bone tissues based on the absorption contrast provided by deep-UV excitation [5–7]. However, a costly high-repetition-rate pulsed UV laser is required for high-throughput imaging due to its point-scanning mechanism. Among widefield fluorescence imaging techniques, microscopy with UV surface excitation (MUSE) has been applied for histological imaging of different human cancer tissues using UV-excitable exogenous fluorescence dyes [8]. Yet, integrating a fluorescence labeling procedure into current clinical practice is challenging.

To this end, different widefield label-free imaging techniques have recently been developed for slide-free histological imaging. For instance, dark-field reflectance ultraviolet microscopy (DRUM) [9] and computational high-throughput autofluorescence microscopy by pattern illumination (CHAMP) [10] both exploit the high deep-UV absorption of cell nuclei to provide nuclear contrast on fresh and unprocessed tissues. DRUM is a simple light-emitting diode (LED)-based widefield histological imaging technique that leverages dark-field reflectance contrast as a hematoxylin analog and single-emission-channel autofluorescence contrast as an eosin analog. CHAMP, in contrast, leverages the same nuclear absorption property while capturing a broad emission spectrum that covers a variety of endogenous fluorophores, including cellular metabolites (e.g., reduced nicotinamide adenine dinucleotide and flavins), structural proteins commonly found in the extracellular matrix (e.g., collagen, elastin), and aromatic amino acids (e.g., tyrosine, tryptophan, and phenylalanine) [11]. Integrated with a color transformation method, H&E-like virtually stained images can be generated to facilitate image interpretation from a histopathology perspective.

As a computational microscopy approach, CHAMP provides histological images with resolution improved by pattern illumination while preserving a large field of view and a long depth of field using a low numerical aperture (NA) objective lens, enabling rapid scanning and high tolerance to tissue surface topology. However, deep-UV pattern illumination also has its challenges. The pattern generation currently relies on a costly coherent light source (266-nm laser, ∼USD 38,500, WEDGE HF 266nm, Bright Solutions Srl.) to generate the speckle pattern by interference. Furthermore, deep-UV pattern illumination tools compatible with incoherent light sources, e.g., spatial light modulators or digital micromirror devices (DMDs) [12,13], are not commercially available yet. To the best of our knowledge, DMDs remain challenging to use in the UV-C range (i.e., 180–280 nm) due to the rapid degradation of DMD reflectivity [14].

Inspired by the strengths and limitations of CHAMP, here we propose EW-LED, a deep learning-assisted framework of enhanced widefield microscopy using a low-cost LED (UV-LED, ∼USD 470, M265L5, Thorlabs Inc.). EW-LED is a cascaded framework that applies a super-resolution algorithm for image enhancement, followed by a virtual staining algorithm to generate enhanced virtual histological images.

The generative adversarial network (GAN) has been widely applied to generate natural and realistic images in the deep learning field [15]. Deep learning models generally fall into two categories, supervised and unsupervised learning, which differ in their requirement for labeled data during training. For single-image super-resolution tasks, the super-resolution GAN (SRGAN) [16] is a supervised method that can recover realistic images at a 4× upscaling factor from low-resolution images through adversarial training of the generator and discriminator networks. Derived from SRGAN, the enhanced SRGAN (ESRGAN) [17] improves the network architecture, adversarial loss, and perceptual loss, showing better visual quality and more realistic textures than SRGAN.

For virtual staining tasks, deep learning simplifies the color transformation process by eliminating the need for prior knowledge about the intrinsic optical properties of different biomolecules that would otherwise be required to design a realistic pseudo-coloring approximation. Deep learning-based virtual staining has been successfully employed with different imaging modalities [7,10,18–20]. Supervised methods, such as pix2pix [21], have achieved virtual staining on images acquired from unstained thin tissue sections, e.g., transforming grayscale autofluorescence images into virtually stained images that are equivalent to H&E using labeled data [18]. While current supervised models require image registration to prepare well-aligned image pairs as labeled training data, obtaining well-aligned images of unprocessed tissue and the corresponding H&E-stained thin slice is still challenging. In contrast, unsupervised models enable deep learning-based virtual staining on unprocessed tissue without relying on well-aligned data. Unsupervised virtual staining has been widely implemented, including the use of cycleGAN [22] on UV-PAM, CHAMP, and MUSE images [7,10,19]. Recently, an unsupervised content-preserving transformation for optical microscopy (UTOM) model was developed to further improve the accuracy of color transformation by introducing saliency constraints [23] on deep-UV multispectral images [20]. In addition, while both supervised and unsupervised models can be used for thin-slice virtual staining, unsupervised models have an advantage for virtual staining of thick tissue, because obtaining an H&E-stained section of the exact tissue layer imaged on the thick tissue surface is challenging due to the difficulty of maintaining a consistent sample orientation during the embedding process and the trimming of the FFPE block.

In this demonstration with CHAMP, we employed ESRGAN as the super-resolution algorithm to transform the low-resolution images acquired under a widefield microscope with UV-LED excitation (termed W-LED images hereafter) into enhanced widefield images with improved resolution (termed EW-LED images hereafter), using the CHAMP images acquired with the laser (termed laser-CHAMP images hereafter) as the learning target.

In our implementation (Fig. 1), we acquired both the W-LED and laser-CHAMP images on the same system, a dual-mode autofluorescence microscope (Fig. 1(a)), to facilitate the image registration for paired training, which is needed by ESRGAN. Compared with CHAMP, this framework has three advantages: (1) it shortens the image acquisition from a sequence of pattern-illuminated images to a single image, (2) it replaces the computationally heavy iterative image reconstruction with a simple model inference with a short inference time, and (3) it enables the use of an 85× cheaper UV-LED to replace the costly UV laser while achieving enhanced virtual H&E by further processing the EW-LED images with a virtual staining algorithm (Fig. 1(b)). The ESRGAN network architecture is shown in Fig. 1(c).


Fig. 1. A super-resolution and virtual staining workflow based on W-LED images. (a) A dual-mode autofluorescence microscope with both UV-laser pattern illumination and UV-LED illumination, modified from CHAMP [10], is used to acquire paired training data of low-resolution W-LED images and high-resolution laser-CHAMP images obtained through pattern illumination. (b) Compared with the laser-CHAMP workflow, the EW-LED framework reduces the number of image acquisitions from a sequence of 36 images to a single image, and replaces the time-consuming iterative reconstruction with a time-efficient, well-trained generator from a GAN-based super-resolution model. The output of this model can be further transformed into a virtually stained H&E image. (c) The ESRGAN network is employed to transform the low-resolution W-LED images into EW-LED images with improved resolution.


Depending on the availability of paired training data and the complexity of the nuclear/cytoplasmic patterns, the EW-LED images can be further transformed into virtual H&E-stained images through supervised or unsupervised learning for histological diagnosis. Our experiments evaluated and demonstrated the potential of our EW-LED framework for producing high-quality EW-LED images, which show significantly enhanced virtual staining results on FFPE thin mouse brain tissue slices using supervised learning, and formalin-fixed and unprocessed thick mouse brain and human lung tissues using unsupervised learning. In this work, we demonstrated the potential of the EW-LED framework using CHAMP images as the image target for lateral resolution enhancement. However, this framework not only provides an alternative to address the challenge of deep-UV pattern generation but is also applicable to other autofluorescence/fluorescence imaging techniques, enhancing the widefield LED autofluorescence/fluorescence image for better virtual histological image generation.

2. Methods

2.1 Sample preparation

In this study, a comparison between W-LED, EW-LED, and laser-CHAMP images was demonstrated on FFPE mouse brain tissue slices, and on formalin-fixed and unprocessed thick mouse brain and human lung cancer samples. Mice were provided by the Animal and Plant Care Facility at the Hong Kong University of Science and Technology (HKUST). Human lung cancer tissues were obtained from lung cancer patients at Queen Mary Hospital. The animal experiments were conducted with the consent of the Animal Ethics Committee and under the medical surveillance of the Health, Safety and Environment Office at HKUST, whereas the experiments involving human tissues were carried out in conformity with a clinical research ethics review approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster (HKU/HA HKW) (reference number: UW 20–335).

For FFPE mouse brain sample preparation, 10% formalin solution was used to fix the harvested mouse brain for 24 hours. Then, the fixed sample was paraffin-embedded and sectioned into 5-µm thin slices by a microtome. The tissue slices were placed on a quartz slide, deparaffinized, and imaged by the CHAMP system integrated with both pattern illumination and LED configuration (Fig. 1(a)). To obtain the corresponding H&E-stained images, the same slice was processed with the standard histology protocol.

For thick mouse brain and human lung tissue preparation, the formalin-fixed and unprocessed samples were embedded in 2% agarose, sectioned into 500-µm thick slices by a vibratome, and placed on a quartz slide, which was subsequently sandwiched between two plastic membranes mounted in a sample holder. After imaging with our dual-mode autofluorescence microscope, the thick samples were processed and embedded in a paraffin block, followed by microtome sectioning into 5-µm thin tissue sections, deparaffinization, and H&E staining to acquire the histological image. All the above H&E-stained images were digitized into whole-slide images with a digital slide scanner (20×, NA = 0.75) (NanoZoomer-SQ, Hamamatsu Photonics K.K.).

2.2 Data acquisition and processing

The W-LED and laser-CHAMP images were captured under an inverted widefield dual-mode autofluorescence microscope consisting of a 4× objective lens (RMS4X, NA = 0.1, Thorlabs Inc.), an infinity-corrected tube lens (TTL180-A, Thorlabs Inc.), and a monochrome scientific complementary metal-oxide-semiconductor camera (PCO panda 4.2, 2048 × 2048 pixels, 6.5-µm pixel pitch, PCO Inc.) (Fig. 1(a)). The W-LED images were captured under the illumination of a 265-nm LED (M265L5, Thorlabs Inc.), which is focused onto the bottom surface of the specimen by a condenser lens (LA4725, Thorlabs Inc.). The raw laser-CHAMP images were acquired under speckle illumination generated by a 266-nm UV laser and a UV-fused silica ground-glass diffuser (DGUV10-600, Thorlabs Inc.), as described in the reported CHAMP system [10]. An image sequence of 36 images was acquired with a scanning interval of 1 µm for each field of view. The laser-CHAMP images, with a lateral resolution of ∼1.1 µm, were then generated via an iterative image reconstruction framework [10] from the raw speckle-illuminated images, with an up-sampling factor of three for the mouse brain tissues and two for the human lung tissues. Consequently, the image acquisition time and the image reconstruction time scale with the number of fields of view required to scan the entire sample. Therefore, we are motivated to simplify the image acquisition and reconstruction through the EW-LED framework by utilizing the laser-CHAMP image as a bridge to obtain the high-resolution training target for the super-resolution algorithm.

To prepare paired data for training ESRGAN, the W-LED and laser-CHAMP images were first globally registered by control-point registration, followed by a local registration at the image-patch level (286 × 286 pixels) via intensity-based registration with an affine transformation in MATLAB. The paired W-LED and laser-CHAMP images were then used for ESRGAN model training and evaluation.
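For illustration only, the sketch below shows an intensity-based affine registration step in Python using SimpleITK; the authors performed this step in MATLAB, so the library, similarity metric, optimizer settings, and the function name `affine_register` are assumptions rather than the actual implementation.

```python
# Hypothetical Python analogue of the local intensity-based affine registration
# (the paper used MATLAB); all settings below are illustrative assumptions.
import numpy as np
import SimpleITK as sitk

def affine_register(fixed_np: np.ndarray, moving_np: np.ndarray) -> np.ndarray:
    """Register a moving patch (e.g., W-LED) onto a fixed patch (e.g., laser-CHAMP)."""
    fixed = sitk.GetImageFromArray(fixed_np.astype(np.float32))
    moving = sitk.GetImageFromArray(moving_np.astype(np.float32))

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()                       # intensity-based similarity
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.AffineTransform(2),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    return sitk.GetArrayFromImage(resampled)
```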

For the thin mouse brain tissue slice (Fig. 2), the training dataset of the ESRGAN model includes 2,333 pairs of W-LED and laser-CHAMP image patches with a crop size of 252 × 252, randomly cropped from the registered pairs for data augmentation. The W-LED, laser-CHAMP, and generated EW-LED images were then pixel-aligned with the corresponding H&E-stained image of the same slice. Each W-LED, laser-CHAMP, and EW-LED dataset includes 2,333 autofluorescence and H&E-stained image pairs with a random crop size of 256 × 256, which were then used to individually train a supervised algorithm, pix2pix [21], for the downstream virtual staining task.
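For concreteness, a minimal sketch of paired random cropping is shown below, assuming the W-LED and laser-CHAMP images are already co-registered on the same pixel grid and that the low-resolution network input is obtained by down-sampling the W-LED crop by the upscaling factor; the function name and the naive strided down-sampling are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of paired random cropping for data augmentation.
import numpy as np

def paired_random_crop(w_led: np.ndarray, champ: np.ndarray,
                       crop: int = 252, scale: int = 3,
                       rng: np.random.Generator = np.random.default_rng()):
    """Return a (low-resolution input, high-resolution target) patch pair."""
    h, w = champ.shape
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    hr = champ[y:y + crop, x:x + crop]                     # laser-CHAMP target, 252 x 252
    lr = w_led[y:y + crop, x:x + crop][::scale, ::scale]   # W-LED input, 84 x 84 for 3x
    return lr, hr
```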


Fig. 2. Comparison of W-LED, EW-LED, laser-CHAMP, and corresponding virtually stained H&E (labeled as vHE in figures hereafter) images on an FFPE thin slice of mouse brain tissue. (a) EW-LED image of the mouse brain slice. (b) Corresponding virtually stained H&E image of (a). (c–e) Zoomed-in images of W-LED, EW-LED, and laser-CHAMP marked with a blue solid box in (a), respectively. (f–h) Virtually stained H&E images of (c–e), respectively. (i) Corresponding H&E-stained image of the blue solid region. The nuclei marked with green, yellow, and blue arrows can be found in the virtually stained H&E of EW-LED (g) and laser-CHAMP (h), as well as in the real H&E-stained image (i, labeled as HE). However, these nuclei are missing in the virtually stained H&E of the W-LED image (f).


For the formalin-fixed and unprocessed thick mouse brain tissues (Fig. 3), the ESRGAN model was trained with 1,791 pairs of W-LED and laser-CHAMP images with a random crop size of 252 × 252. The ESRGAN models for human lung tissue were trained with 1,346 image pairs for Fig. 4 and 584 image pairs for Fig. 5, with a random crop size of 256 × 256. We used a 2× up-sampling factor in the CHAMP reconstruction, which resulted in a smaller laser-CHAMP image size and hence fewer image pairs for training. Since the up-sampling factor defines the scale ratio between the low-resolution and high-resolution images, a larger inference patch size of the original low-resolution image can be used for a given crop size. For example, a 128 × 128 inference patch can be prepared and then 2× upscaled to 256 × 256 if the up-sampling factor is two, which results in a shorter inference time with fewer image patches for a given tissue size.


Fig. 3. Comparison of W-LED, EW-LED, laser-CHAMP, and corresponding virtually stained H&E images on a thick mouse brain tissue. (a) EW-LED image of the thick mouse brain tissue. (b) Corresponding virtually stained H&E image of (a). (c–e) Zoomed-in images of W-LED, EW-LED, and laser-CHAMP marked with an orange solid box in (a), respectively. (f–h) Virtually stained H&E images of (c–e), respectively. (i) Corresponding adjacent H&E-stained image of the orange solid region (labeled as adjacent HE for figures hereafter).


Fig. 4. (a–c) Comparison of W-LED, EW-LED, and laser-CHAMP images on thick human lung tissue. (d, e, j, k) Zoomed-in images of W-LED, EW-LED, laser-CHAMP, and H&E-stained histology images corresponding to the orange solid box marked in (a), respectively, showing the structure of bronchiole with epithelial cells marked with orange solid arrows. (f, g, l, m) Zoomed-in images of W-LED, EW-LED, laser-CHAMP, and H&E-stained histology images corresponding to the yellow dashed box marked in (a), respectively, showing the vascular wall. (h, i, n, o) Zoomed-in images of W-LED, EW-LED, laser-CHAMP, and H&E-stained histology images corresponding to the green dotted box marked in (a), respectively, showing the cell nuclei of individual alveolar macrophages.


Fig. 5. Comparison of the histology images generated by W-LED and EW-LED images with their corresponding H&E-stained image on a thick human lung adenocarcinoma. (a) EW-LED image of the human lung tissue. (b) Virtually stained H&E image of (a). (c, d) Zoomed-in images of W-LED and EW-LED images marked with a green solid box in (a), respectively. (e) Corresponding adjacent H&E-stained image of the green solid region. (f, g) Virtually stained H&E images of (c) and (d), respectively.


Since only an H&E layer adjacent to the imaged thick tissue surface can be obtained after sample preparation, an unsupervised algorithm, cycleGAN [22], was employed for the downstream virtual staining task, trained with unpaired autofluorescence images and adjacent H&E-stained images with a random crop size of 256 × 256. The training datasets of the thick mouse brain tissue for the three virtual staining models shared the same 2,231 H&E-stained image patches but different numbers of autofluorescence image patches: 1,893 W-LED, 1,817 laser-CHAMP, and 1,914 EW-LED image patches.

Because of the complex features of human cancer tissues, we employed an unsupervised one-sided algorithm for virtual staining of the human lung tissue to optimize the virtual staining performance [24]. The training dataset was composed of 5,000 randomly sampled image patches of size 256 × 256.

2.3 Network architecture

The architecture of ESRGAN includes one generator $G$ and one discriminator $D$. The generator $G$ infers the EW-LED image $\hat{y} = G(x)$ from the W-LED image $x$ with a 2× or 3× upscaling factor, according to the up-sampling factor used in the laser-CHAMP reconstruction. The discriminator $D$ is designed based on the relativistic GAN and is trained to distinguish the EW-LED image $\hat{y}$ from the original CHAMP image $y$ [25]. The adversarial losses of the generator, $\mathcal{L}_G^{adv}$, and the discriminator, $\mathcal{L}_D^{adv}$, are formulated with respect to the relativistic average discriminator and can be expressed with the binary cross-entropy loss:

$$\mathcal{L}_G^{adv} = -\,\mathbb{E}_y\!\left[\log\!\left(1 - \sigma\!\left(D(y) - \mathbb{E}_{\hat{y}}[D(\hat{y})]\right)\right)\right] - \mathbb{E}_{\hat{y}}\!\left[\log\!\left(\sigma\!\left(D(\hat{y}) - \mathbb{E}_{y}[D(y)]\right)\right)\right] \tag{1}$$
$$\mathcal{L}_D^{adv} = -\,\mathbb{E}_y\!\left[\log\!\left(\sigma\!\left(D(y) - \mathbb{E}_{\hat{y}}[D(\hat{y})]\right)\right)\right] - \mathbb{E}_{\hat{y}}\!\left[\log\!\left(1 - \sigma\!\left(D(\hat{y}) - \mathbb{E}_{y}[D(y)]\right)\right)\right] \tag{2}$$
where $D(y)$ and $D(\hat{y})$ denote the discriminator outputs for the original CHAMP image $y$ and the EW-LED image $\hat{y}$, respectively, $\sigma$ denotes the sigmoid function applied to the discriminator output, and $\mathbb{E}_y[\cdot]$ and $\mathbb{E}_{\hat{y}}[\cdot]$ denote averaging with respect to $y$ and $\hat{y}$, respectively.
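Equations (1) and (2) can be written compactly with the binary cross-entropy-with-logits function. The following is a minimal PyTorch sketch (not the authors' code); `d_real` and `d_fake` are assumed to be raw, pre-sigmoid discriminator outputs, and in a full training loop the discriminator outputs would be detached appropriately when updating the generator.

```python
# Minimal PyTorch sketch of the relativistic average adversarial losses in
# Eqs. (1)-(2); variable and function names are illustrative.
import torch
import torch.nn.functional as F

def relativistic_avg_losses(d_real: torch.Tensor, d_fake: torch.Tensor):
    """d_real = D(y), d_fake = D(y_hat): raw (pre-sigmoid) discriminator outputs."""
    real_vs_fake = d_real - d_fake.mean()   # D(y)     - E[D(y_hat)]
    fake_vs_real = d_fake - d_real.mean()   # D(y_hat) - E[D(y)]

    # Eq. (1): -E_y[log(1 - sigma(.))] - E_y_hat[log(sigma(.))]
    loss_g = (F.binary_cross_entropy_with_logits(real_vs_fake, torch.zeros_like(real_vs_fake))
              + F.binary_cross_entropy_with_logits(fake_vs_real, torch.ones_like(fake_vs_real)))

    # Eq. (2): -E_y[log(sigma(.))] - E_y_hat[log(1 - sigma(.))]
    loss_d = (F.binary_cross_entropy_with_logits(real_vs_fake, torch.ones_like(real_vs_fake))
              + F.binary_cross_entropy_with_logits(fake_vs_real, torch.zeros_like(fake_vs_real)))
    return loss_g, loss_d
```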

The EW-LED image $\hat{y}$ should share the same content as the original CHAMP image $y$. The L1 loss function is used to evaluate the distance between these two types of images:

$$\mathcal{L}_1 = \mathbb{E}_x\!\left[\, \| y - G(x) \|_1 \right] \tag{3}$$

To measure the perceptual similarity between the EW-LED and laser-CHAMP images, the perceptual loss is calculated from the features of the last convolutional layer of a pre-trained VGG19 network. The total generator loss is therefore:

$$\mathcal{L}_G = \mathcal{L}_{vgg} + \lambda\, \mathcal{L}_G^{adv} + \mathcal{L}_1 \tag{4}$$
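As a hedged sketch of Eqs. (3)–(4), the snippet below combines a pixel-wise L1 term, a VGG19 feature (perceptual) term, and the adversarial term with λ = 0.1; it assumes torchvision ≥ 0.13 for the `weights` argument and repeats the single-channel images to three channels for VGG19, which is an implementation assumption rather than a detail given in the paper.

```python
# Hedged sketch of the generator objective in Eqs. (3)-(4); layer slicing and
# channel handling are assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class VGG19Features(nn.Module):
    """Features up to the last convolutional layer (conv5_4) of a pre-trained VGG19."""
    def __init__(self):
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features[:35].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Repeat single-channel autofluorescence patches to 3 channels for VGG19.
        return self.features(x.repeat(1, 3, 1, 1))

def generator_loss(fake: torch.Tensor, real: torch.Tensor,
                   adv_loss_g: torch.Tensor, vgg: VGG19Features,
                   lam: float = 0.1) -> torch.Tensor:
    l1 = nn.functional.l1_loss(fake, real)                # Eq. (3)
    l_vgg = nn.functional.l1_loss(vgg(fake), vgg(real))   # perceptual term
    return l_vgg + lam * adv_loss_g + l1                  # Eq. (4)
```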

The generator architecture is based on SRResNet [16] with 12 basic blocks using Residual in Residual Dense Block (RRDB) [17]. The PixelShuffle [26] method with 2× or 3× upscaling factors was used in the up-sampling layer. The input W-LED images and the output EW-LED images are both grayscale images with one channel. The discriminator architecture is based on PatchGAN [21] to differentiate between EW-LED and laser-CHAMP images.
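For reference, a compact PyTorch sketch of such a generator is shown below: an RRDB trunk followed by a PixelShuffle up-sampling head with a 2× or 3× scale and single-channel input/output. The block count and single-channel I/O follow the text; growth channels, residual scaling, and activation choices are typical ESRGAN values and are assumptions here.

```python
# Compact, illustrative RRDB generator with a PixelShuffle head; hyperparameters
# other than the 12 blocks and 1-channel I/O are assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, nf: int = 64, gc: int = 32):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(nf + i * gc, gc if i < 4 else nf, 3, padding=1) for i in range(5)])
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:
                out = self.lrelu(out)
            feats.append(out)
        return x + 0.2 * feats[-1]          # residual scaling

class RRDB(nn.Module):
    def __init__(self, nf: int = 64):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(nf), DenseBlock(nf), DenseBlock(nf))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)

class Generator(nn.Module):
    def __init__(self, scale: int = 3, nf: int = 64, n_blocks: int = 12):
        super().__init__()
        self.head = nn.Conv2d(1, nf, 3, padding=1)
        self.trunk = nn.Sequential(*[RRDB(nf) for _ in range(n_blocks)],
                                   nn.Conv2d(nf, nf, 3, padding=1))
        self.upsample = nn.Sequential(
            nn.Conv2d(nf, nf * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),          # 2x or 3x, matching the CHAMP up-sampling
            nn.LeakyReLU(0.2, inplace=True))
        self.tail = nn.Conv2d(nf, 1, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.trunk(feat)       # global residual over the RRDB trunk
        return self.tail(self.upsample(feat))
```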

For the ESRGAN training settings, we set $\lambda = 0.1$ in Eq. (4). The generator and discriminator were trained with the Adam optimizer. The batch size is 4 and the model is trained for 200 epochs. The initial learning rate is 0.0002 and is halved at 100 epochs. To prepare the dataset for inference, the W-LED image patches (252 × 252, for the 3× upscaling factor) are cropped from the whole registered W-LED image with a step size of 107. These W-LED image patches are downsampled to the inference patch size (84 × 84, according to the 3× upscaling factor). The output EW-LED image patches (252 × 252) are stitched with the same step size using linear blending to smooth the edges.
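A minimal sketch of this tiled inference and linear-blending stitching is given below, assuming a grayscale float image, a 3× model, bicubic down-sampling to the inference size, and a separable triangular blending window; the window shape and the handling of image borders are assumptions, not details taken from the paper.

```python
# Hedged sketch of tiled inference with overlapping patches and linear blending.
import numpy as np
import torch
import torch.nn.functional as F

def tiled_inference(w_led: np.ndarray, generator: torch.nn.Module,
                    patch: int = 252, step: int = 107, scale: int = 3,
                    device: str = "cuda") -> np.ndarray:
    h, w = w_led.shape
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)

    # Separable triangular ramp so overlapping patches blend linearly at edges.
    ramp = np.minimum(np.arange(1, patch + 1), np.arange(patch, 0, -1)).astype(np.float32)
    blend = np.outer(ramp, ramp)

    generator.eval().to(device)
    with torch.no_grad():
        for y in range(0, h - patch + 1, step):
            for x in range(0, w - patch + 1, step):
                tile = torch.from_numpy(w_led[y:y + patch, x:x + patch]).float()
                tile = tile[None, None].to(device)
                # Down-sample to the inference patch size (84 x 84 for a 3x model).
                lr = F.interpolate(tile, size=(patch // scale, patch // scale),
                                   mode="bicubic", align_corners=False)
                sr = generator(lr).squeeze().cpu().numpy()   # back to 252 x 252
                out[y:y + patch, x:x + patch] += sr * blend
                weight[y:y + patch, x:x + patch] += blend
    return out / np.maximum(weight, 1e-8)
```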

The unsupervised one-sided algorithm used in virtual staining deviates from the cycleGAN-like two-sided framework and only requires a single generator and discriminator [24]. The generator used in this part is a U-shape neural network with shortcut connections [27], while the discriminator is inspired by the one proposed in pix2pixHD [28], a multiple-branch discriminator for better feature representation. As for the loss function, we used adversarial loss [15] for realistic image generation and identity loss [29] for image content preservation. The adversarial loss computes the cross entropy between the prediction of the discriminator and the image label (real staining or generated staining) as follows:

$$\mathcal{L}_G^{adv} = -\,\mathbb{E}_x\!\left[\log D(G(x))\right] \tag{5}$$
$$\mathcal{L}_D^{adv} = \mathbb{E}_x\!\left[\log D(G(x))\right] + \mathbb{E}_y\!\left[\log\!\left(1 - D(y)\right)\right] \tag{6}$$

The identity loss is the L1 norm between images sampled from the real staining domain and the corresponding generator output:

$$\mathcal{L}_{idt} = \mathbb{E}_y\!\left[\, \| y - G(y) \|_1 \right] \tag{7}$$

Then, the total loss for the generator is formulated as follows:

$$\mathcal{L}_G = \mathcal{L}_G^{adv} + \lambda\, \mathcal{L}_{idt} \tag{8}$$
where $\lambda$ here is set to 5.
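A minimal PyTorch sketch of Eqs. (5)–(8) follows, assuming the discriminator $D$ outputs probabilities (i.e., after a sigmoid) and adding a small epsilon for numerical stability; the names are illustrative and this is not the authors' implementation.

```python
# Hedged sketch of the one-sided virtual-staining losses in Eqs. (5)-(8).
import torch

def one_sided_losses(G, D, x_autofluor, y_he, lam: float = 5.0, eps: float = 1e-8):
    fake = G(x_autofluor)                                   # generated staining

    # Eq. (5): generator adversarial loss.
    loss_g_adv = -torch.log(D(fake) + eps).mean()

    # Eq. (7): identity loss keeps real H&E images unchanged by the generator.
    loss_idt = torch.abs(y_he - G(y_he)).mean()

    # Eq. (8): total generator loss with lambda = 5.
    loss_g = loss_g_adv + lam * loss_idt

    # Eq. (6): discriminator loss, pushing D(fake) towards 0 and D(real) towards 1.
    loss_d = (torch.log(D(fake.detach()) + eps).mean()
              + torch.log(1.0 - D(y_he) + eps).mean())
    return loss_g, loss_d
```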

2.4 Nuclei segmentation and counting

To perform nuclei counting, the real H&E and virtually stained H&E images were first segmented using a cell detection method with star-convex polygons as shape priors [30]. A pre-trained versatile H&E nuclei model was used for this segmentation task. Then, the number of nuclei was counted, and the nuclear density was calculated as the total cross-sectional area of nuclei over the total area of the hippocampus.
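As an illustrative sketch (not the authors' exact script), this step can be reproduced with the StarDist library and its pre-trained `2D_versatile_he` model; the helper name `nuclei_stats` and the separately provided hippocampus mask are assumptions.

```python
# Hedged sketch of nuclei segmentation and density computation with StarDist;
# the region mask (e.g., hippocampus) is assumed to be provided separately.
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

model = StarDist2D.from_pretrained("2D_versatile_he")   # pre-trained versatile H&E model

def nuclei_stats(he_rgb: np.ndarray, region_mask: np.ndarray):
    """he_rgb: real or virtual H&E RGB image; region_mask: boolean mask of the region."""
    labels, _ = model.predict_instances(normalize(he_rgb, 1, 99.8))
    labels = labels * region_mask                        # keep nuclei inside the region
    nuclei_ids = np.unique(labels)
    count = int((nuclei_ids > 0).sum())                  # number of detected nuclei
    # Nuclear density: total nuclear cross-sectional area over the region area.
    density = np.count_nonzero(labels) / np.count_nonzero(region_mask)
    return count, density
```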

3. Results

3.1 EW-LED imaging verified on thin mouse brain tissue slice

To validate the resolution improvement (Fig. 2(a)) and the subsequent virtual staining performance (Fig. 2(b)) of the EW-LED workflow, we acquired W-LED, EW-LED, laser-CHAMP, and corresponding H&E-stained images of an FFPE thin slice of mouse brain tissue. In Fig. 2, the EW-LED image (Fig. 2(d)) shows an obvious resolution improvement over the W-LED image (Fig. 2(c)), which is supported by the higher peak signal-to-noise ratio (PSNR: 22.50 vs. 20.17) and structural similarity index measure (SSIM: 0.608 vs. 0.570) of the EW-LED image compared with the corresponding W-LED image, with the laser-CHAMP image (Fig. 2(e)) as the baseline (Table 1). Three virtually stained H&E images were generated based on the W-LED image (Fig. 2(f)), EW-LED image (Fig. 2(g)), and laser-CHAMP image (Fig. 2(h)). Comparing the virtual staining outputs of the EW-LED and W-LED images, some nuclei that are missed in the virtually stained H&E of the W-LED image (indicated with arrows) can still be visualized in the virtually stained H&E of the EW-LED image. Using the real H&E-stained image (Fig. 2(i)) as the ground truth, the PSNR and SSIM of the virtually stained H&E of the EW-LED image are also higher than those of the W-LED image (Table 1), clearly demonstrating the importance of the ESRGAN step, which leads to the improved virtual staining of the EW-LED image over the W-LED image.
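The PSNR/SSIM comparisons above can be reproduced with standard implementations; the following is a short sketch using scikit-image (the paper does not state which implementation was used, so the library choice and the [0, 1] scaling convention are assumptions), applied to co-registered images.

```python
# Illustrative PSNR/SSIM computation with scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(test: np.ndarray, reference: np.ndarray):
    """Both inputs are co-registered images of the same shape, float in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, test, data_range=1.0)
    ssim = structural_similarity(reference, test, data_range=1.0,
                                 channel_axis=-1 if test.ndim == 3 else None)
    return psnr, ssim
```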


Table 1. Quantitative comparison between W-LED and EW-LED images with the laser-CHAMP image, and the corresponding virtually stained H&E images with the real H&E-stained image on the thin mouse brain tissue


3.2 EW-LED imaging demonstrated on thick mouse brain tissue

The proposed EW-LED imaging method with virtual staining was then further validated on formalin-fixed and unprocessed thick mouse brain tissue (Fig. 3(a), (b)). The zoomed-in EW-LED image (Fig. 3(d)) of a hippocampus region reveals densely packed cell nuclei that are unresolved in the W-LED image (Fig. 3(c)), which is also supported by a higher PSNR (20.96 vs. 20.35) and SSIM (0.809 vs. 0.746) (Table 2). Similarly, three virtually stained H&E images of the W-LED, EW-LED, and laser-CHAMP images (Fig. 3(f)–(h)) were generated. Since it is impossible to obtain an H&E-stained slice of the exact tissue layer imaged by our dual-mode autofluorescence microscope, only an adjacent H&E-stained layer was obtained as a reference (Fig. 3(i)). Similar nuclear packing density and morphology were observed among the virtually stained H&E of EW-LED, the virtually stained H&E of laser-CHAMP, and the adjacent H&E-stained image (Fig. 3(g)–(i)). However, a lower nuclear density and incorrect nuclear boundaries are shown in the virtually stained H&E of the W-LED image (Fig. 3(f)), resulting in differences in nuclei count and nuclear density in the hippocampus region (Table 2).


Table 2. Quantitative comparison between W-LED and EW-LED images with the laser-CHAMP image, and the nuclei count on the corresponding virtually stained and adjacent real H&E images of the thick mouse brain tissue

3.3 EW-LED and histology imaging of thick human lung tissue

To demonstrate the histology information provided by EW-LED images, we acquired W-LED (Fig. 4(a)), EW-LED (Fig. 4(b)), and laser-CHAMP (Fig. 4(c)) images of thick human lung tissue and compared them with the H&E-stained histology image. In Fig. 4, the structure of the bronchiole with surrounding epithelial cells marked with orange solid arrows (Fig. 4(e)), the vascular wall composed of muscle and other fibers (Fig. 4(g)), and the cell nuclei of individual alveolar macrophages (Fig. 4(i)) are revealed more clearly in the EW-LED images than in the W-LED images (Fig. 4(d), (f), (h)), which is also supported by the laser-CHAMP (Fig. 4(j), (l), (n)) and H&E-stained histology images (Fig. 4(k), (m), (o)).

3.4 EW-LED imaging and virtual staining of thick human lung tissue

We also tested the EW-LED framework for generating histology images based on EW-LED images of a thick human lung adenocarcinoma tissue (Fig. 5(a), (b)). Due to the sample complexity of human cancer tissue, we further optimized the virtual staining performance based on an unsupervised one-sided algorithm [24]. In Fig. 5, it is clear that the virtual staining output of the EW-LED image resembles the histology pattern of one of the subtypes of lung adenocarcinoma, acinar adenocarcinoma (Fig. 5(d), (g)), which is validated by its corresponding H&E-stained image (Fig. 5(e)). However, the W-LED image, with its comparatively low resolution, is not able to reproduce the round glandular structure (Fig. 5(c)); instead, the central luminal space appears filled with carbon particles and blood-like artifacts in its virtual staining output (Fig. 5(f)).

4. Discussion

In this work, we have validated that the proposed deep learning-assisted EW-LED framework allows widefield autofluorescence microscopy with a low-cost LED to achieve a substantial resolution improvement and is compatible with the subsequent virtual staining task to generate enhanced histology images of both animal and human tissues. As a proof of concept, we demonstrated that the proposed EW-LED imaging method is applicable to formalin-fixed and unprocessed thick tissue. Nonetheless, this imaging technique should also be applicable to fresh tissue, as supported by the fresh tissue images reported in [10]. There are also other promising widefield imaging methods, such as DRUM and quantitative oblique back-illumination microscopy (qOBM) [31], for virtual histology demonstrated on fresh tissue. As detailed in the introduction, imaging contrast is one of the differences between DRUM and our approach based on CHAMP. While DRUM leverages diffuse reflectance contrast for cell nuclei, our image contrast covers a broad emission band that can potentially provide more biomolecular information with the assistance of deep learning-based virtual staining.

Furthermore, with CHAMP as the demonstration, the EW-LED framework, as an LED-based widefield autofluorescence microscopy approach, also holds great promise to be generalized to other imaging methods. The high-resolution target image for training the super-resolution algorithm could also be derived from other methods, such as laser scanning confocal autofluorescence microscopy. This framework can also be employed with other deep-UV widefield microscopy techniques, such as MUSE and DRUM, which are likewise limited in accommodating tissue surface irregularities given the shallow depth of field associated with the high-NA objective lenses used for high-resolution imaging. Since the original ESRGAN was proposed for 4× upscaling, there is also potential to achieve higher resolution enhancement by acquiring target images with a 20× or higher-magnification objective lens. In this case, using an extended depth of field could be one approach to address the issue of surface irregularities. Our approach could simplify the z-stack image acquisition to a single image for each field of view at the expense of computational resources for deep learning-based image processing.

While deep-UV-based methods work well on excised tissue, non-UV histological imaging techniques, such as qOBM [31], might be more promising for in vivo applications compared with UV-based techniques. qOBM is also more powerful in its capability to provide 3D volumetric information. Yet, by utilizing quantitative phase contrast, qOBM's reconstruction also relies on the correct choice of transfer functions. Different transfer functions may be needed considering potential variations in refractive index between tissues, e.g., liver and brain. Autofluorescence image contrast is more related to intrinsic chemical composition, and the virtually stained images are reconstructed via deep learning, which may raise fewer concerns about parameter choices that depend on organs, tumor types, microenvironment status, etc.

Different virtual staining methods were used across samples. For thin tissue, pix2pix was used because aligned data can be obtained before and after staining of the same slide for fully supervised training [21]. However, it is challenging to obtain an H&E-stained section of the exact layer of the imaged thick tissue surface. Therefore, we used an unsupervised method, cycleGAN [22], for virtual staining of the thick mouse brain tissue. Although cycleGAN, as an unsupervised method, has strength in thick-tissue virtual staining without requiring aligned image pairs for training, we also noticed errors in the number and location of nucleolus structures in the virtually stained EW-LED image (Fig. 3(g)), possibly contributed by the limited resolution of the EW-LED image. Independent training of the two cascaded models, from super-resolution to virtual staining, may also have the potential downside of error accumulation. Apart from obtaining higher-resolution images as training targets, other potential improvements from the model training perspective could be using a super-resolution model with better image degradation modeling [32] and using end-to-end training for the cascaded model [33].

Furthermore, we also found virtual staining of more complex human samples challenging. Due to the feature complexity of human cancer tissue, we employed an unsupervised one-sided algorithm for virtual staining [24], which incorporates a region classification loss to improve the classification ability of the discriminator. While more and better data should help improve the virtual staining performance, a future direction could be to incorporate a certain degree of labeling short of pixel-wise alignment, e.g., by introducing classes for better feature learning.

Initially, we followed the CHAMP implementation using a 3× up-sampling factor for the mouse brain data. For a fixed sample size, an image target with a higher resolution requires more pixels to encode the information while satisfying the Nyquist sampling theorem. Conversely, the upscaling factor in the ESRGAN training, which in this work is defined by the up-sampling factor used in the CHAMP reconstruction, determines the input patch size on the low-resolution image and affects the subsequent inference time (e.g., a high upscaling factor leads to a small patch size and a large number of patches, hence a long inference time). Consequently, there is a trade-off between the inference time and the factor of resolution improvement. Using a 2× upscaling factor for the human lung data, the algorithm takes 12 s per 10 mm² for computation, which is ∼17× more computationally efficient than the iterative image reconstruction used in CHAMP [10]. The algorithm was run on a workstation with a Core i9-10980XE CPU @ 4.80 GHz, 8 × 32 GB RAM, and one NVIDIA GeForce RTX 3090 GPU.
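To make the trade-off concrete, the toy calculation below counts inference patches for a fixed output crop size at different upscaling factors; non-overlapping tiling is assumed only to illustrate the scaling (the actual pipeline uses overlapping patches with blending).

```python
# Back-of-the-envelope patch count vs. upscaling factor; non-overlapping tiling
# is an assumption made only to illustrate the scaling.
def n_patches(tissue_px_w: int, tissue_px_h: int, out_crop: int = 256, scale: int = 2) -> int:
    in_patch = out_crop // scale            # input patch size: 128 for 2x, ~85 for 3x
    return (tissue_px_w // in_patch) * (tissue_px_h // in_patch)

# The same low-resolution field needs ~2.25x more patches at 3x than at 2x.
print(n_patches(2048, 2048, scale=2))       # 256
print(n_patches(2048, 2048, scale=3))       # 576
```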

5. Conclusion

In conclusion, this work introduces a deep learning-assisted framework for enhancing low-quality widefield images with high-quality images as learning targets for better virtual histology. Using laser-CHAMP as the training target, we demonstrated the lateral resolution enhancement and the subsequent improvement in virtual staining by comparing the virtually stained images generated from W-LED, EW-LED, and laser-CHAMP images against real H&E-stained images as ground truth. Compared with CHAMP, although not limited to it, this framework has three key advantages: (1) it simplifies image acquisition by reducing the number of images required per field of view from a sequence of 36 to a single image (a 36× reduction), (2) it simplifies the computation to a single model inference, with a 17× reduction in computation time for generating the EW-LED image, and (3) it enables the use of an 85× cheaper light source (UV-LED) to replace the costly UV laser while preserving similar image quality. All these features show the great promise of the proposed EW-LED framework as a cost-effective alternative that is also applicable to other deep-UV or widefield imaging methods that would otherwise require a costly UV laser or sequential scanning of each field of view, simplifying the imaging system and potentially improving the acquisition or computation time for slide-free and label-free rapid histopathology.

Funding

Hong Kong University of Science and Technology (HKUST startup grant (R9421), HKUST-Kaisa grant (OKT21EG12)).

Disclosures

T. T. W. W. has a financial interest in PhoMedics Limited, which, however, did not support this work. W. D. and I. H. M. W. have a financial interest in V Path Limited, which, however, did not support this work. I. H. M. W., W. D., and T. T. W. W. have applied for a patent (US Provisional Patent Application No.: 63/428 127) related to the work reported in this manuscript.

Data availability

All data involved in this work, including raw/processed images provided in the manuscript, are available from the corresponding author upon request.

References

1. B. W. Maloney, D. McClatchy, B. Pogue, et al., “Review of methods for intraoperative margin detection for breast conserving surgery,” J. Biomed. Opt. 23(10), 1–19 (2018). [CrossRef]  

2. H. Jaafar, “Intra-operative frozen section consultation: concepts, applications and limitations,” Malays. J. Med. Sci. 13, 4–12 (2006).

3. S. Soltani, B. Cheng, A. O. Osunkoya, et al., “Deep UV Microscopy Identifies Prostatic Basal Cells: An Important Biomarker for Prostate Cancer Diagnostics,” BME Front. 2022, 9847962 (2022). [CrossRef]  

4. D.-K. Yao, “Optimal ultraviolet wavelength for in vivo photoacoustic imaging of cell nuclei,” J. Biomed. Opt 17(5), 056004 (2012). [CrossRef]  

5. T. T. W. Wong, R. Zhang, P. Hai, et al., “Fast label-free multilayered histology-like imaging of human breast cancer by photoacoustic microscopy,” Sci. Adv. 3(5), e1602168 (2017). [CrossRef]  

6. X. Li, L. Kang, Y. Zhang, et al., “High-speed label-free ultraviolet photoacoustic microscopy for histology-like imaging of unprocessed biological tissues,” Opt. Lett. 45(19), 5401 (2020). [CrossRef]  

7. R. Cao, S. D. Nelson, S. Davis, et al., “Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy,” Nat. Biomed. Eng. 7(2), 124–134 (2022). [CrossRef]  

8. F. Fereidouni, Z. T. Harmany, M. Tian, et al., “Microscopy with ultraviolet surface excitation for rapid slide-free histology,” Nat. Biomed. Eng. 1(12), 957–966 (2017). [CrossRef]  

9. S. Ye, J. Zou, C. Huang, et al., “Rapid and label-free histological imaging of unprocessed surgical tissues via dark-field reflectance ultraviolet microscopy,” iScience 26(1), 105849 (2023). [CrossRef]  

10. Y. Zhang, L. Kang, I. H. M. Wong, et al., “High-Throughput, Label-Free and Slide-Free Histological Imaging by Computational Microscopy and Unsupervised Learning,” Adv. Sci. 9(2), 2102358 (2022). [CrossRef]  

11. G. A. Wagnières, W. M. Star, and B. C. Wilson, “In Vivo Fluorescence Spectroscopy and Imaging for Oncological Applications,” Photochem. Photobiol. 68(5), 603–632 (1998). [CrossRef]  

12. D. Dan, M. Lei, B. Yao, et al., “DMD-based LED-illumination Super-resolution and optical sectioning microscopy,” Sci. Rep. 3(1), 1116 (2013). [CrossRef]  

13. J. Qian, M. Lei, D. Dan, et al., “Full-color structured illumination optical sectioning microscopy,” Sci. Rep. 5(1), 14513–10 (2015). [CrossRef]  

14. J. T. Fong, T. W. Winter, S. J. Jacobs, et al., “Advances in DMD-based UV application reliability below 320nm,” Proc. SPIE 7637, 763718 (2010).

15. I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems (Curran Associates, Inc., 2014), Vol. 27.

16. C. Ledig, L. Theis, F. Huszar, et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), Vol. 2, pp. 105–114.

17. X. Wang, K. Yu, S. Wu, et al., “ESRGAN: Enhanced super-resolution generative adversarial networks,” in Computer Vision – ECCV 2018 Workshops, Lect. Notes Comput. Sci. 11133 (2019).

18. Y. Rivenson, H. Wang, Z. Wei, et al., “Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning,” Nat. Biomed. Eng. 3(6), 466–477 (2019). [CrossRef]  

19. Z. Chen, W. Yu, I. H. M. Wong, et al., “Deep-learning-assisted microscopy with ultraviolet surface excitation for rapid slide-free histological imaging,” Biomed. Opt. Express 12(9), 5920 (2021). [CrossRef]  

20. S. Soltani, A. Ojaghi, H. Qiao, et al., “Prostate cancer histopathology using label-free multispectral deep-UV microscopy quantifies phenotypes of tumor aggressiveness and enables multiple diagnostic virtual stains,” Sci. Rep. 12(1), 1–17 (2022). [CrossRef]  

21. P. Isola, J.-Y. Zhu, T. Zhou, et al., “Image-to-Image Translation with Conditional Adversarial Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), pp. 5967–5976.

22. J. Y. Zhu, T. Park, P. Isola, et al., “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (ICCV) (2017), pp. 2242–2251.

23. X. Li, G. Zhang, H. Qiao, et al., “Unsupervised content-preserving transformation for optical microscopy,” Light: Sci. Appl. 10(1), 44 (2021). [CrossRef]  

24. L. Shi, I. H. M. Wong, C. T. K. Lo, et al., “One-side Virtual Histological Staining Model for Complex Human Samples,” in 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI) (2022), pp. 1–4.

25. A. Jolicoeur-Martineau, “The relativistic discriminator: a key element missing from standard GAN,” in 7th International Conference on Learning Representations (ICLR) (2019).

26. W. Shi, J. Caballero, F. Huszár, et al., “Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016) pp. 1874–1883.

27. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (Springer International Publishing, 2015), pp. 234–241.

28. T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, et al., “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8798–8807 (2018).

29. Y. Taigman, A. Polyak, and L. Wolf, “Unsupervised Cross-Domain Image Generation,” in 5th International Conference on Learning Representations, {ICLR} 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings (OpenReview.net, 2017).

30. U. Schmidt, M. Weigert, C. Broaddus, et al., “Cell Detection with Star-Convex Polygons,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2018 (Springer International Publishing, 2018), pp. 265–273.

31. T. M. Abraham, P. C. Costa, C. Filan, et al., “Label- and slide-free tissue histology using 3D epi-mode quantitative phase imaging and virtual hematoxylin and eosin staining,” Optica 10(12), 1605 (2023). [CrossRef]  

32. X. Wang, L. Xie, C. Dong, et al., “Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data,” Proc. IEEE Int. Conf. Comput. Vis., 1905–1914 (2021).

33. Y. Zhang, L. Huang, T. Liu, et al., “Virtual Staining of Defocused Autofluorescence Images of Unlabeled Tissue Using Deep Neural Networks,” Intell. Comput. 2022, 19 (2022). [CrossRef]  

