
Adaptive super-resolution enabled on-chip contact microscopy


Abstract

We demonstrate adaptive super-resolution based contact imaging on a CMOS chip, achieving subcellular spatial resolution over a large field of view of ∼24 mm2. Using regular LED illumination, we acquire a single low-resolution image of the objects placed in close proximity to the sensor at unit magnification. In this raw contact-mode lens-free image, the spatial resolution is limited by the pixel size of the sensor chip. We develop a hybrid supervised-unsupervised strategy to train a super-resolution network that circumvents the absence of in-situ ground truth, effectively recovering a much higher-resolution image of the objects and permitting sub-micron spatial resolution across the entire active area of the sensor chip. We demonstrate the success of this approach by imaging the proliferation dynamics of cells cultured directly on the chip.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Lensfree on-chip imaging directly samples the light transmitted through a specimen without any imaging lenses between the object and sensor planes [1–4]. By combining a small image sensor with microfluidics, lensfree on-chip microscopy [5–7] provides a simple and cost-effective solution for in-situ cell culture and microscopic imaging. Compared to conventional lens-based microscopy, such an imaging geometry is significantly simpler, much more compact and lightweight, and possesses a larger field of view (FOV). However, the obtained lensfree images suffer from particularly low resolution caused by the unit-magnification imaging mode. To increase the image resolution, computational methods known as image super-resolution are used to enhance the raw lensfree images. Fourier ptychographic microscopy stitches together a number of variably illuminated, low-resolution intensity images in Fourier space to produce a high-resolution image [8]. Synthetic aperture-based on-chip microscopy increases the effective NA of the reconstructed lensfree image by scanning illumination angles across the surface of a dome [9]. Pixel super-resolution algorithms estimate a maximum-likelihood solution by fusing low-resolution images with sub-pixel shifts [5,10]. Ptychographic modulation approaches encode spectral information into 2D intensity measurements by inserting a thin diffuser between the specimen and the monochromatic image sensor, and reconstruct a high-resolution image from a sequence of monochromatic scans [11,12]. Despite offering unique imaging capabilities, these multi-frame methods require special hardware such as patterned illumination or micro-channels, and the computation over multiple frames largely compromises the imaging speed. More recently, deep learning based single-image super-resolution techniques have greatly benefited microscopy imaging [13–16]. For on-chip image super-resolution, however, researchers can barely enjoy the convenience of these techniques, because in-situ high-resolution images are lacking as targets for training a data-driven super-resolution network.

Here we present a lensfree on-chip imaging device for both cell culturing and imaging (Fig. 1(a)), as well as a strategy for adaptively training a single-image super-resolution (SISR) neural network for on-chip imaging. We cultured cells on the sensor surface of our on-chip imaging device and simply acquired a single raw lensfree image of the cells (Fig. 1(b)). To enhance the raw lensfree image, traditional deep learning SISR methods require in-situ high-resolution (HR) images to train a neural network. In our case, however, the HR counterparts of the raw lensfree images are not available (Fig. 1(c); an ordinary microscope fails to capture in-situ HR images due to the opaque chip substrate). To train such a network, we capture non-in-situ HR images under an ordinary optical microscope, with the cells planted on a transparent slide instead of the chip surface. These HR images and their synthetic low-resolution (LR) versions are used to train an initial super-resolution network. However, this network cannot recover lensfree images because of the domain shift between microscopy images and lensfree images. To generalize its super-resolution ability to lensfree images, we introduce an auxiliary style-transfer model that turns the synthetic LR images into "fake" lensfree images, and further train the super-resolution network on these "fake" lensfree images and the HR microscopy images. Note that training the super-resolution model with such data is supervised, since the "fake" lensfree images generated from the synthetic LRs are exactly aligned with the HR microscopy images. The transfer module and the super-resolution module are trained jointly, each helping the other with its own task. An earlier work [17] on in-the-wild image super-resolution shares the same strategy as ours. The differences are that we pre-train the super-resolution module on the microscopy HR-LR image pairs to establish its initial SR ability, and that we focus on microscopy and lensfree on-chip imaging, which demand higher accuracy. For this we impose a saliency constraint to preserve microscopy morphology and eliminate interference during style transfer. Once training is done, the transfer module is no longer needed; the super-resolution module alone performs end-to-end resolution enhancement on novel real lensfree images acquired with our on-chip imaging device, instantly achieving subcellular resolution across the entire active area of the sensor chip.


Fig. 1. Our lensfree on-chip imaging device. (a) The on-chip device connected to a PC. (b) The lensfree image of cells obtained by the device. (c) In-situ high-magnification imaging of the cells on our device, captured under a 10$\times$ microscope. Details are not resolved due to the opaque chip substrate. Such images cannot be used for training a super-resolution neural network.


In summary, our contributions are:

  • 1. a compact cell culturing/imaging device with a FOV as large as the active area of the sensor chip ($\sim$24 mm$^{2}$);
  • 2. an adaptive super-resolution (AdaSR) strategy that increases the resolution of lensfree images without requiring in-situ targets for training, together with a saliency constraint for convincing reconstruction;
  • 3. imaging of the proliferation dynamics of cells cultured directly on the chip, with a large FOV and convincing high-resolution details.

2. Method

Our adaptive super-resolution (AdaSR) based contact imaging method consists of hardware for cell culturing and imaging, and an AdaSR network for super-resolution processing of the lensfree images.

2.1 Hardware fabrication

We used an image sensor (model: Aptina MT9P031) with a pixel pitch of 2.2 $\mathrm{\mu}$m and an active area of 5.70$\times$4.28 mm$^{2}$ for imaging (Fig. 1(a)). The readout part digitized the near-field optical signal of the cells cultured on the surface of the sensor. The obtained digital images were transmitted to a PC through a USB data cable. To enable direct cell culture and on-site lensfree imaging, we carefully removed the protective glass in front of the CMOS sensor and treated the sensor with oxygen plasma for 10 minutes to remove the micro-lenses on the chip surface, thus allowing the cells to attach in close proximity to the sensor surface. We then built a waterproof chamber onto the chip using PDMS adhesive to accommodate the culture medium necessary for long-term cell cultivation.

Since multi-frame acquisition is circumvented in our implementation, complex patterned illumination is not required in our setup. Empirically, we simply used an LED as the illumination source.

2.2 AdaSR

Due to the unit-magnification imaging mode, the raw lensfree image usually has a large FOV but contains few high-resolution details, which may be important for quantitative cell profiling. For example, a regular cell (about 10 $\mathrm{\mu}$m in diameter) covers only about 5$\times$5 pixels on the CMOS sensor of our device, which is definitely under-sampled. We developed the above-mentioned AdaSR strategy to recover a super-resolution image from the single-frame raw low-resolution lensfree measurement. We first obtained several HR images of cells cultured in a petri dish using a conventional microscope under 10$\times$ magnification (Fig. 2(a), step 1). LR images are synthesized through an image degradation model that approximates the optical transfer function of the imaging system (Fig. 2, step 2) [15]. Apart from the HRs and the synthetic LRs, we also captured several lensfree images of cultured cells of the same cell line with our portable on-chip device. These three types of images compose the whole training dataset, in which the microscopy HRs and LRs are matched one-to-one and share nothing but the cell type in common with the lensfree images. The goal is to find the resolution mapping from the lensfree images to the non-in-situ microscopy HR ones, which our adaptive learning strategy fulfills (Fig. 2(a), step 3; detailed explanation in Fig. 3). Afterwards, the well-trained super-resolution module is capable of enhancing raw lensfree measurements. Experimentally, a raw lensfree image of cultured cells can be quickly captured by our on-chip device (Fig. 2(b), step 1) and then input to the SR module for super-resolution recovery at real-time efficiency (Fig. 2(b), step 2). The resulting SR image keeps the large FOV of the unit-magnification measurement while recovering high-resolution details that are originally decimated by the sensor pixelation.
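
At inference, the enhancement reduces to a single forward pass through the trained SR module. The sketch below is a minimal PyTorch illustration under our own assumptions (the paper specifies neither a framework nor function names; `sr_net` stands for the trained super-resolution module):

```python
# Minimal inference sketch, assuming a trained PyTorch SR module `sr_net`
# and an 8-bit grayscale lensfree frame; names here are illustrative.
import numpy as np
import torch
from PIL import Image

def enhance_lensfree(sr_net: torch.nn.Module, path: str) -> np.ndarray:
    # Load the single raw lensfree frame and normalize to [0, 1].
    raw = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    x = torch.from_numpy(raw)[None, None]        # (1, 1, H, W) batch tensor
    sr_net.eval()
    with torch.no_grad():
        sr = sr_net(x)                           # 4x-upsampled reconstruction
    return sr.squeeze().clamp(0.0, 1.0).numpy()
```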


Fig. 2. Schematic of the adaptive super-resolution enabled contact microscopy. (a) The procedure of adaptive training. (b) On-chip contact imaging super-resolution reconstruction. A raw lensfree image with large FOV but low resolution is captured by our on-chip imaging device and instantly recovered by the well-trained AdaSR net, resulting in an image with a large FOV and abundant cellular details.



Fig. 3. Adaptive super-resolution training.


2.3 Training

The model is trained as shown in Fig. 3. It is worth noting that the proposed adaptive super-resolution strategy is not limited to a specific network architecture and can be seamlessly applied to other style-transfer and super-resolution networks. In our experiment, we chose one off-the-shelf network in each field: CycleGAN [18] as the style-transfer module, and the Residual Dense Network (RDN) [19] as the super-resolution module. The style-transfer module consists of two image generators: generator A takes synthetic LR microscopy images as input and outputs lensfree-style LRs, while generator B takes raw lensfree images as input and outputs microscopy-style lensfree images (Fig. 3, step 1). The style-transferred outputs are then fed to the respective other generator to turn them back to their original style, marked as "recovered" images (Fig. 3, step 2), which are compared to their original versions to compute the pixel-wise mean absolute error (MAE) as a cycle loss (Fig. 3, step 3),

$${ \begin{aligned} \mathrm{Loss}_{\mathrm{cycle}} & = \frac{1}{N}\sum_{i=1}^{N}\left( \left\| G_B\left( G_A\left( I_{i}^{LR} \right) \right) -I_{i}^{LR} \right\| _1 \right. \\ & \left. + \left\| G_A\left( G_B\left( I_{i}^{LF} \right) \right) -I_{i}^{LF} \right\| _1 \right) \end{aligned} }$$
where $N$ is the number of training images per batch, $G_A$ and $G_B$ are short for generators A and B, respectively, and $I_i^{LR}$ and $I_i^{LF}$ are the $i^{th}$ LR image and lensfree image in the training set, respectively. The recovered LRs should be as close as possible to the original synthetic microscopy LRs, meaning that the cycle loss is expected to approach zero; the same goes for the recovered lensfree images and the original lensfree images. Each generator is paired with a discriminator, which takes as input either the original image (i.e., the real image) or the style-transferred image (i.e., the fake image) in its domain and judges how likely the input is real by outputting a probability. The cross entropy between this probability and the ground truth (1 for real, 0 for fake) is computed as the GAN loss (step 4). For the discriminator to make correct decisions, the GAN loss should be as small as possible:
$${ \begin{aligned} \mathrm{Loss}_{\mathrm{GAN}}^{D} & =\frac{1}{N}\sum_{i=1}^{N}{\left( -\log\left( 1-D_A\left( G_A\left(I_{i}^{LR} \right)\right) \right) -\log\left( D_A\left(I_{i}^{LF} \right)\right) \right)} \\ & +\frac{1}{N}\sum_{i=1}^{N}{\left( -\log\left( 1-D_B\left( G_B\left(I_{i}^{LF}\right) \right) \right) -\log\left( D_B\left(I_{i}^{LR} \right)\right)\right)} \end{aligned} }$$
where $D_A$ and $D_B$ are the discriminators in the lensfree domain and the microscopy domain, respectively. The generator, in contrast, wants its fake images to look as "real" as possible so as to fool the discriminator; thus the GAN loss for the generators is formulated as:
$${ \begin{aligned} \mathrm{Loss}_{\mathrm{GAN}}^{G} & =\frac{1}{N}\sum_{i=1}^{N}{\left( -\log\left( D_A\left( G_A\left(I_{i}^{LR}\right) \right) \right)\right) } \\ & +\frac{1}{N}\sum_{i=1}^{N}{\left( -\log\left( D_B\left( G_B\left(I_{i}^{LF}\right) \right) \right) \right)} \end{aligned} }$$
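
For concreteness, the following is a minimal PyTorch sketch of the loss terms in Eqs. (1)–(3). The generators and discriminators are assumed to be CycleGAN-style modules whose discriminators output probabilities in (0, 1); the implementation details are our assumptions, not released code:

```python
# Hedged sketch of Eqs. (1)-(3); G_A, G_B, D_A, D_B are assumed CycleGAN
# generators/discriminators, and lr/lf are batches of LR and lensfree images.
import torch
import torch.nn.functional as F

EPS = 1e-8  # numerical guard inside the logarithms

def cycle_loss(G_A, G_B, lr, lf):
    # Eq. (1): each image must survive a round trip through both generators.
    return F.l1_loss(G_B(G_A(lr)), lr) + F.l1_loss(G_A(G_B(lf)), lf)

def gan_loss_discriminator(D, real, fake):
    # Eq. (2), one domain: score real images as 1 and generated ones as 0.
    return (-torch.log(1.0 - D(fake.detach()) + EPS)
            - torch.log(D(real) + EPS)).mean()

def gan_loss_generator(D, fake):
    # Eq. (3), one domain: the generator tries to make D call its output real.
    return (-torch.log(D(fake) + EPS)).mean()
```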

Unsupervised image-to-image style transfer tends to change the image content and generate arbitrary patterns, which is unacceptable for accurate microscopy imaging. To maintain cell contours and structures after style transfer, we use a saliency loss to further constrain the generators (step 5). Although CycleGAN introduced an identity loss, defined as the pixel-wise difference between the input and output of a generator, to encourage the mapping to preserve color composition [18], we find it unsuitable for our application, where the single-channel images in the two domains have close pixel intensity distributions. Instead, we define a saliency loss as the MAE between the edge pixels at the outlines of the images,

$${ \begin{aligned} \mathrm{Loss}_{\mathrm{saliency}} & =\frac{1}{N}\sum_{i=1}^{N}{\left( \left\| \Phi \left( G_A\left( I_{i}^{LR} \right) \right) -\Phi \left( I_{i}^{LR} \right) \right\| _1 \right.}\\ & \left. +\left\| \Phi \left( G_B\left( I_{i}^{LF} \right) \right) -\Phi \left( I_{i}^{LF} \right) \right\| _1 \right) \end{aligned} }$$
where $\Phi$ is the Sobel operator that detects edges in an image. The lensfree-style LRs generated by generator A are then fed into the initial SR net to be reconstructed into enhanced lensfree-style images (step 7), which are further compared to the HR microscopy images to compute the MAE as the SR loss (step 8),
$${ \mathrm{Loss}_{\mathrm{SR}}=\frac{1}{N}\sum_{i=1}^{N}{\left\| \Psi \left( G_A\left( I_{i}^{LR} \right) \right) -I_{i}^{HR} \right\| _1} }$$
where $\Psi$ is the SR net and $I_{i}^{HR}$ is the HR image. The SR loss is minimized with respect to both the SR net and generator A, encouraging generator A to output lensfree-style LRs that are as realistic as possible, and the SR net to perform precise super-resolution reconstruction on such lensfree-style images. The losses above were minimized by gradient descent, optimizing the parameters of whichever module each loss is responsible for.
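
A matching sketch of the saliency loss of Eq. (4) and the SR loss of Eq. (5) follows, again under the assumption of a PyTorch implementation; the Sobel operator $\Phi$ is realized as a pair of fixed convolution kernels:

```python
# Hedged sketch of Eqs. (4)-(5); inputs are (N, 1, H, W) grayscale batches,
# and `sr_net` is the RDN super-resolution module (Psi in the text).
import torch
import torch.nn.functional as F

_KX = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_KY = _KX.transpose(2, 3)

def sobel(img):
    # Gradient magnitude used as the edge map Phi(.) in Eq. (4).
    gx = F.conv2d(img, _KX, padding=1)
    gy = F.conv2d(img, _KY, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def saliency_loss(G_A, G_B, lr, lf):
    # Eq. (4): style transfer must preserve the edges of its input.
    return (F.l1_loss(sobel(G_A(lr)), sobel(lr))
            + F.l1_loss(sobel(G_B(lf)), sobel(lf)))

def sr_loss(sr_net, G_A, lr, hr):
    # Eq. (5): gradients flow into both the SR net and G_A (joint training).
    return F.l1_loss(sr_net(G_A(lr)), hr)
```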

3. Experiments

3.1 Training dataset

We captured 100 HR images of human umbilical vein endothelial cells (HUVECs) cultured in a petri dish and applied a degradation model of Gaussian blurring followed by bilinear 4$\times$ down-sampling to generate the LRs. The HRs and LRs were cropped into 1416 pairs, each containing an HR patch of 384$\times$384 pixels and an LR patch of 96$\times$96 pixels. We also captured lensfree images of the same cell type at 7 different time points ranging from 3 hours to 24 hours after planting. 1428 patches of size 96$\times$96 pixels were sampled from the raw lensfree images, excluding regions without any cells. These three types of image patches compose the whole training set, namely the HR-LR-lensfree dataset.
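
A minimal sketch of this LR synthesis step is shown below; the Gaussian blur width is our assumption, as the paper reports only the form of the degradation model:

```python
# Hedged sketch of the LR synthesis: Gaussian blurring followed by bilinear
# 4x down-sampling. The blur sigma is an assumed value, not a reported one.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def synthesize_lr(hr_patch: np.ndarray, factor: int = 4,
                  sigma: float = 1.5) -> np.ndarray:
    blurred = gaussian_filter(hr_patch.astype(np.float32), sigma=sigma)
    h, w = blurred.shape
    lr = Image.fromarray(blurred).resize((w // factor, h // factor),
                                         Image.BILINEAR)
    return np.asarray(lr)  # a 384x384 HR patch becomes a 96x96 LR patch
```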

3.2 Results

The efficacy of our adaptive super-resolution on-chip microscopy was first validated by imaging HUVECs. We performed on-chip cell culture with our lensfree device, complying with the standard protocol. After overnight cultivation, the cells were imaged on the chip at unit magnification and then 4$\times$ super-resolved by the well-trained AdaSR net.

Appearance. Figure 4(a) shows the entire FOV ($\sim$24 mm$^{2}$) of the super-resolved lensfree image. Zoom-in views of the cells are provided in Fig. 4(b)–(f). As a comparison, we also used several well-established models to recover the lensfree images, including robust super-resolution (RSR) [20], SRGAN [21], and CinCGAN [22]. Among these methods, RSR is a multi-frame super-resolution technique that requires no training. The others are learning-based methods and were trained with the same HR-LR-lensfree dataset as AdaSR, each taking inputs and targets according to its own demands. Specifically, the supervised SRGAN needed only the HR-LR pairs. CinCGAN was first trained with the LR-lensfree subset to learn to transfer lensfree images into LRs, and then fine-tuned on the whole HR-LR-lensfree dataset. At test time, 4 images were captured by the lensfree device; RSR used all 4 images as inputs, and the others used the average of the 4 images as input. Unsurprisingly, SRGAN could not reconstruct accurate structures but produced plenty of hallucinations (Fig. 4(b)–(f), $3^{rd}$ column) because of the domain shift between its training data and the lensfree images. RSR hardly introduced any resolution improvement (Fig. 4, $2^{nd}$ column). The two-stage CinCGAN totally failed to recover convincing content due to the absence of constraints (e.g., the saliency loss and the SR loss for the generators) for domain adaptation. Our method (Fig. 4(b)–(f), $5^{th}$ column), on the contrary, resolved high-resolution cellular structures with much more regular cell appearance. It is noteworthy that AdaSR also managed to suppress the mottled shade in the background (Fig. 4, red dotted lines) caused by floating non-adherent cells, generating a much cleaner background (Fig. 4, white dotted lines). We attribute this feature of AdaSR to the saliency loss; further discussion is given in the ablation study.


Fig. 4. Large-FOV and high-throughput contact imaging. (a) AdaSR-reconstructed large-FOV lensfree image of HUVECs after 12 hours' on-chip culture. (b)–(f) Zoom-in views of regions of interest (ROI) from the raw lensfree image and the SR images by several methods. Our method, AdaSR, is better than the others in image quality and details. AdaSR also suppressed the mottled shade in the background (red dotted lines) caused by floating non-adherent cells, generating a much cleaner background (white dotted lines). Scale bars: 500 $\mu$m for (a) and 50 $\mu$m for (b)–(f).


Cell counting. Due to the absence of ground truths, traditional pixel-intensity-based metrics cannot be used to evaluate the SR reconstructions of lensfree inputs. We therefore quantitatively assessed the quality of the reconstructed SR images by the accuracy of automatic cell counting. For the SR reconstructions of each method, we randomly selected 30 regions of 500$\times$600 pixels, followed the automatic cell counting pipeline in ImageJ [23] to obtain the cell population statistics of each region, and compared them with the manually counted results. We defined the counting error as:

$$E = \frac{\left| N_{real} - N_{auto} \right|}{N_{real}} \times 100\%$$
where $N_{real}$ and $N_{auto}$ are the manually and automatically counted cell numbers in an image, respectively. As shown in Fig. 5, cells are identified and segmented in the form of masks. Our method (Fig. 5(e)) increases the rate of successful identification and segmentation by a large margin compared to the others.
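
This metric is straightforward to compute once the two counts are available; a one-function sketch, with the counts supplied externally (e.g., from ImageJ and manual annotation):

```python
# Eq. (6) as code; n_real is the manual count, n_auto the automatic count.
def counting_error(n_real: int, n_auto: int) -> float:
    return abs(n_real - n_auto) / n_real * 100.0  # relative error in percent
```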


Fig. 5. Automatic cell counting. (a)–(e) An identical region from the raw lensfree image and the SR images by several methods, used to count the cell population with the automatic cell counting tool. Below each image is the mask of automatically identified cells. (f) The counting error on each image.


Resolution. Following the practice of [5], the optical resolution of AdaSR was investigated by measuring the full width at half maximum (FWHM) of the finest structures (Fig. 6(a)) and the distance between two closely spaced features (Fig. 6(b)) in the super-resolution reconstructions. Both cases establish that the resolution of AdaSR is about 1.6 $\mathrm{\mu}$m or better, compared to about 6 $\mathrm{\mu}$m resolution (as dictated by the Nyquist criterion) in the raw lensfree images. Although multi-frame methods may reach an even better resolution, their time costs for data acquisition and reconstruction both increase notably: as reference points, [5] and [11] used 200 and 300 low-resolution measurements to reach resolutions of about 660 nm and 550 nm, respectively. In contrast, our implementation requires only a single low-resolution frame. Moreover, the limit of our method could potentially be extended by more powerful networks trained on higher-resolution labels.
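
The FWHM measurement can be reproduced by fitting a Gaussian to a line intensity profile drawn across the finest resolved structure; the sketch below assumes SciPy and takes the SR-image pixel pitch as an input:

```python
# Hedged sketch of the FWHM resolution estimate from a line intensity
# profile; the Gaussian-fit approach and parameter names are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) + offset

def fwhm_um(profile: np.ndarray, pixel_um: float) -> float:
    # Fit a Gaussian to the sampled profile and convert sigma to FWHM.
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.max() - profile.min(), float(profile.argmax()),
          2.0, profile.min()]
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma) * pixel_um
```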


Fig. 6. Resolution assessment of AdaSR reconstruction. (a) Line intensity plot of the finest structures being resolved by AdaSR. The FWHM indicates the resolution achieved in AdaSR reconstruction. (b) Measured distance of two adjacent spherical structures in AdaSR reconstruction.


Mean opinion score testing. Following the practice of SRGAN [21], we also used the mean opinion score (MOS) to evaluate the quality of the SR reconstructions. Specifically, we asked 11 researchers majoring in cell biology to assign a score ranging from 1 (bad) to 5 (good) to the SR images of each method. For each method, we randomly sampled 30 regions of size 500$\times$600 pixels for evaluation. Given an SR image and the corresponding original lensfree input, the raters were asked to evaluate both the perceptual appearance and the structural fidelity of the SR image. The results are shown in Fig. 7. On both metrics, raters gave the highest scores to our AdaSR in most cases.


Fig. 7. MOS distribution heatmap. Images by each SR method were evaluated with scores ranging from 1 (bad) to 5 (good). Darker color means more scores fell into that level.


Dynamic imaging. Besides end-point detection, we also imaged the dynamics of living cells on the chip over time. The cultured cells were imaged on the chip every 3 hours until the endpoint of 18 hours. At each time point, we obtained a large-FOV lensfree image and reconstructed it with the pre-trained AdaSR network for subsequent cell counting. For each time point, a number of ROIs were selected to reveal the time-varying dynamics of cell proliferation, as shown in Fig. 8. Benefiting from the improved resolution, the cells indicated by the arrows in ROI 1 of Fig. 8 were observed to complete a cycle of division. The population change of the cells on the whole chip was also calculated over the entire culture period (Fig. 8(b)). In the first 9 hours of culture, the number of cells increased from $\sim$2358 to $\sim$2951, which is slower than the increase in the later period (from $\sim$3908 to $\sim$6246), probably due to the necessary process of cell adhesion at the initial stage.


Fig. 8. The proliferation dynamics of cells on chip. (a) AdaSR-enabled contact image after 12 hours' cell culture. Three ROIs are selected to reveal the time-varying dynamics of cell proliferation. (b) The variation of cell population over time.


3.3 Ablation study

Joint training. The joint training of the style-transfer module and the SR module makes the two parts cooperate better. To prove this, we trained the style-transfer module and the SR module separately; that is, the intermediate outputs of generator A were no longer the input to the SR module during training, nor did the optimization of the SR loss involve the parameters of generator A. Specifically, the style-transfer module was first trained with the unpaired LR-lensfree dataset in a totally unsupervised way, and then the SR module was trained with the paired HR/lensfree-style-LR dataset, where the lensfree-style LR images were generated from the LR images by the transfer module. At the inference stage, the raw lensfree image was directly input to the SR module for reconstruction. The results are marked as "Non-joint" in Fig. 9. The dramatic decline in image quality implies the indispensable role of joint training and the generator-related SR loss.
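
The difference between the two regimes amounts to whether gradients from the SR loss reach generator A. A minimal sketch of the distinction, reusing the assumed PyTorch modules from the sketches above:

```python
# Joint vs. non-joint update of Eq. (5); detach() cuts the gradient path
# from the SR loss back into G_A, reproducing the "Non-joint" ablation.
import torch.nn.functional as F

def sr_step(sr_net, G_A, lr, hr, joint: bool = True):
    fake_lf = G_A(lr)
    if not joint:
        fake_lf = fake_lf.detach()  # non-joint: SR loss no longer updates G_A
    return F.l1_loss(sr_net(fake_lf), hr)
```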


Fig. 9. The ablation study. From left to right: the raw lensfree images for test, the non-joint-training experiment, the non-saliency experiment, and the full-version AdaSR results.


Saliency loss. Unlike [17,22], which impose no explicit saliency constraints on the style-transfer module, our AdaSR has a saliency loss to help preserve image content after transfer. In brief, cells in the "fake" image are encouraged to share the same contours and micro-structures as the original real ones. We also conducted an ablation study for this design, training an AdaSR without optimizing the saliency loss. The results are shown as "Non-saliency" in Fig. 9. Apart from preserving cell morphology, the saliency loss has the additional benefit of eliminating the mottled shade caused by non-adherent cells. Based on the observation that adherent cells have steeper edges than non-adherent ones (see Fig. 10), we extracted the edges of the cell images using the gradient-based Sobel operator, on which adherent cells have higher responses than non-adherent ones. Consequently, by optimizing the saliency loss, defined as the difference of edge pixels between the original and transferred images, the networks were encouraged to maintain the adherent cells rather than the non-adherent ones. As a result, AdaSR is free from the disruptive mottled shades in the input lensfree images, whereas the non-saliency network generated artifacts around cells (see Fig. 9, Non-saliency and AdaSR).


Fig. 10. Sobel edges for calculating the saliency loss. The raw lensfree image (left) and the corresponding Sobel edges (right). Pixel intensity profiles at two lines in the raw image were plotted. The adherent cells (cyan line) have steeper edges than non-adherent ones (orange line).


4. Conclusion

We have demonstrated an adaptive super-resolution (AdaSR) based contact imaging method that computationally improves the resolution of raw lensfree images and thus increases the optical throughput for large-scale on-chip imaging. Our method largely simplifies the hardware setup and realizes large-FOV, high-resolution image reconstruction with high fidelity. The proposed AdaSR network automatically learns the mapping from low-resolution to high-resolution patterns from images easily acquired with a conventional lens-based microscope, and generalizes itself to the lensfree image domain. The proposed saliency loss ensures the preservation of content as well as the elimination of unwanted mottled shades. AdaSR-enabled contact imaging can be considered a compact culture/imaging device with a FOV as large as the active area of the sensor chip, the same as that of the raw lensfree images, and a maximum achieved resolution similar to that of a 10$\times$ objective of a traditional microscope. As a demonstration, we showed that our approach provides a chip-sized large FOV to accommodate large-scale cell culture and improved single-cell resolution adequate for quantitative analysis. The performance could be further improved by using a better sensor chip. In addition, the device is easily reproducible and highly cost-effective for resource-limited environments. Due to its compact form factor, it can be easily integrated into an incubator to conduct long-term, on-site observation of living cells, giving further insight into time-course studies of cell dynamics. These advancements render AdaSR-enabled contact imaging a potential tool for a variety of biomedical assays, such as in-vitro drug tests and stem cell differentiation and migration, in which cellular profiling at large scale is highly desired.

Funding

Thousand Young Talents Program of China; Research Program of Shenzhen (JCYJ20160429182424047); National Natural Science Foundation of China (21874052); National Key Research and Development Program of China (2017YFA0700500).

Acknowledgments

The authors thank Yi Li (College of Life Science and Technology, Huazhong University of Science and Technology) for helping conduct the MOS experiment.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. T.-W. Su, A. Erlinger, D. Tseng, and A. Ozcan, “Compact and light-weight automated semen analysis platform using lensfree on-chip microscopy,” Anal. Chem. 82(19), 8307–8312 (2010). [CrossRef]  

2. S. Seo, T.-W. Su, D. K. Tseng, A. Erlinger, and A. Ozcan, “Lensfree holographic imaging for on-chip cytometry and diagnostics,” Lab Chip 9(6), 777–787 (2009). [CrossRef]  

3. D. Jin, D. Wong, J. Li, Z. Luo, Y. Guo, B. Liu, Q. Wu, C. M. Ho, and P. Fei, “Compact wireless microscope for in-situ time course study of large scale cell dynamics within an incubator,” Sci. Rep. 5(1), 18483 (2015). [CrossRef]  

4. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18(1), 77–102 (2016). [CrossRef]  

5. G. Zheng, S. A. Lee, Y. Antebi, M. B. Elowitz, and C. Yang, “The epetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (spsm),” Proc. Natl. Acad. Sci. 108(41), 16889–16894 (2011). [CrossRef]  

6. S. A. Lee, G. Zheng, N. Mukherjee, and C. Yang, “On-chip continuous monitoring of motile microorganisms on an epetri platform,” Lab Chip 12(13), 2385–2390 (2012). [CrossRef]  

7. J. H. Jung and J. E. Lee, “Real-time bacterial microcolony counting using on-chip microscopy,” Sci. Rep. 6, 1–8 (2016). [CrossRef]  

8. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

9. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci Appl 4(3), e261 (2015). [CrossRef]  

10. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). [CrossRef]  

11. P. Song, R. Wang, J. Zhu, T. Wang, Z. Bian, Z. Zhang, K. Hoshino, M. Murphy, S. Jiang, C. Guo, and G. Zheng, “Super-resolved multispectral lensless microscopy via angle-tilted, wavelength-multiplexed ptychographic modulation,” Opt. Lett. 45(13), 3486–3489 (2020). [CrossRef]  

12. S. Jiang, J. Zhu, P. Song, C. Guo, Z. Bian, R. Wang, Y. Huang, S. Wang, H. Zhang, and G. Zheng, “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020). [CrossRef]  

13. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Sergovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

14. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

15. H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019). [CrossRef]  

16. H. Zhang, Y. Zhao, C. Fang, G. Li, M. Zhang, Y.-H. Zhang, and P. Fei, “Exceeding the limits of 3d fluorescence microscopy using a dual-stage-processing network,” Optica 7(11), 1627–1640 (2020). [CrossRef]  

17. S. Chen, Z. Han, E. Dai, X. Jia, Z. Liu, L. Xing, X. Zou, C. Xu, J. Liu, and Q. Tian, “Unsupervised image super-resolution with an indirect supervised path,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, (2020) pp. 468–469.

18. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, (2017) pp. 2223–2232.

19. Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2018) pp. 2472–2481.

20. A. Zomet, A. Rav-Acha, and S. Peleg, “Robust super-resolution,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1 (IEEE, 2001), pp. I–I.

21. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017) pp. 4681–4690.

22. Y. Yuan, S. Liu, J. Zhang, Y. Zhang, C. Dong, and L. Lin, “Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2018) pp. 701–710.

23. I. V. Grishagin, “Automatic cell counting with imagej,” Anal. Biochem. 473, 63–65 (2015). [CrossRef]  

Supplementary Material (1)

Supplement 1: implementation details
