
Classification of cell morphology with quantitative phase microscopy and machine learning


Abstract

We describe and compare two machine learning approaches for cell classification based on label-free quantitative phase imaging with transport of intensity equation methods. In the first approach, we design a multilevel integrated machine learning classifier that combines individual models such as an artificial neural network, an extreme learning machine, and generalized linear regression. In the second approach, we apply a pretrained convolutional neural network with transfer learning. As a validation, we evaluate both approaches on the classification of macrophages cultured in normal gravity and in microgravity, imaged with quantitative phase microscopy. The multilevel integrated classifier achieves an average accuracy of 93.1%, comparable to the 93.5% obtained by the convolutional neural network. The presented quantitative phase imaging system, together with the two classification approaches, could help biomedical scientists perform easy and accurate cell analysis.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recent research in cell biology has shown a tendency to analyze cellular morphology at the single-cell level [1], since individual morphology must be taken into account to accurately identify different cell types, which are regulated by cellular material [2], active properties [3], and the external environment [4]. Morphology is therefore widely used as a measure for cell classification. Cellular morphology can be obtained with various microscopy techniques, each providing insight at a different level.

Fluorescence microscopy, in particular, has revolutionized cellular imaging [5,6]. However, fluorescence imaging requires fluorescent dyes and probes, which are not available for all target molecules. Moreover, fluorescent stains may interfere with natural cellular functions. To overcome these issues, an alternative imaging technique is needed for noninvasive screening of cell morphology.

Alternatively, quantitative phase microscopy (QPM) is an imaging modality that does not require an exogenous contrast agent for live cell imaging [7]. It measures optical path length differences and thus offers a quantitative assessment of morphology. Compared with phase-contrast microscopy [8,9], QPM yields not only important biophysical cell parameters such as shape and volume, but also the quantitative intracellular refractive index, which relates to cell content and dry mass [10–12].

So far, interferometric and non-interferometric QPM techniques [13–18] have shown the ability to differentiate between various conditions of cells and tissues, based on parameters derived from the phase map and the visible cell morphology [11,19]. However, manual observation and evaluation of cell morphology in quantitative phase images makes the analysis time-consuming and subjective.

One approach to address these limitations is supervised machine learning [20]. Once the cells are segmented, the common step is to represent each of them by a set of features. The features are summarized into a feature vector, which serves as input to the supervised machine learning classifier. After the classifier is trained on the labeled data, it is able to distinguish between a defined set of cell classes. Supervised machine learning can process large datasets within a short computation time, and has been used in various studies, such as red blood cell classification [21], malaria identification [22], cell viability assessment [23], cellular drug responses [24], macrophage activation detection [25], classification between normal and stressed sperm [26], and cell death identification [27].

More recently, the advent of deep learning represents a step change in the ability to handle images. Deep learning, a subset of supervised machine learning, is a state-of-the-art tool for generic feature extraction. One advantage of deep convolutional neural networks (CNNs) is that they do not require manual, handcrafted feature extraction [28], paving the way for complex image recognition and classification. Recent studies show that deep learning methods based on QPM images are also growing rapidly in cell classification, for example for assessment of cell activation state [29], cancer diagnosis [30,31], identification of mammalian cell type [32], screening of anthrax spores [33], prediction of the best focus distance [34], and digital staining of histological samples [35].

Because deep learning learns to extract features during training and benefits from data augmentation techniques, deep learning classifiers usually achieve better performance than traditional machine learning classifiers [1,30,33]. Earlier studies have compared deep learning pipelines with various machine learning algorithms, including artificial neural networks, support vector machines, and logistic regression. However, deep learning usually requires long computation times. In a recent study [22], a multilevel ensemble classifier that integrates individual machine learning models achieved a high average accuracy. Such an integrated machine learning classifier thus offers the potential for deep-learning-level classification performance while requiring only seconds of computation time. While earlier studies have compared deep learning with individual machine learning classifiers for cell classification based on QPM [30,33], none has compared an integrated machine learning classifier with deep learning for the classification of live adherent macrophage cells.

In this work, we use the two approaches (integrated supervised machine learning and deep learning) and compare their performance on the classification of macrophage cells from QPM measurements. In the experiment, we use QPM based on the transport of intensity equation (TIE) to obtain phase images of macrophages cultured in normal gravity and in microgravity, respectively. The microgravity-cultured cells exhibit different morphologies, which define the categories for classification. We design a multilevel integrated supervised machine learning classifier that predicts the cell type from morphology-related features derived from the quantitative phase images. For comparison, we apply a pretrained deep convolutional neural network (CNN) with transfer learning to classify the cell type directly from the phase images. The multilevel integrated classifier shows a similarly high performance to the deep CNN in the automated identification of macrophages cultured in normal gravity and microgravity.

2. Materials and method

2.1 Experimental setup and validation

The experimental setup is built on a commercial bright-field inverted microscope. A halogen lamp serves as the illumination source; its spatial coherence is determined by the size of the condenser aperture and can be represented by the coherence parameter S, the ratio of the condenser to objective numerical apertures. In the experiment, the condenser aperture is adjusted to set S to about 0.3 [16,36,37]. The beam passing through the sample is focused by a microscope objective and projected through a tube lens onto a CCD (1280 × 960 pixels, 4.65 µm pixel size). The CCD is mounted on an axial scanning stage to take in-focus and out-of-focus images.

The TIE relates the derivative of the intensity along the optical axis to the object-plane phase [38]:

$$-k\frac{\partial I(x,y)}{\partial z} = \nabla_{\perp} \cdot \left[ I(x,y)\, \nabla_{\perp}\varphi(x,y) \right],$$
where (x, y) denotes the transverse spatial coordinates perpendicular to the optical axis, I(x, y) is the in-focus intensity, ∇⊥ is the 2-D gradient operator in the transverse plane, z is the position along the optical axis, k = 2π/λ is the wave number, and φ(x, y) is the phase of the sample. The longitudinal intensity derivative ∂I(x, y)/∂z can be approximated by a finite difference between two closely spaced planes [36].

To demonstrate the validity of this approach, we image a planoconvex micro-lens array (OPTON, MLA-2R250, 250 µm pitch) with a ×10/0.25 NA objective. Figures 1(b)–1(d) show the over-focus, in-focus, and under-focus images, respectively; the defocus distance is 500 µm. The 2-D phase map of the micro-lenses is retrieved from the through-focus intensity images by solving the two standard Poisson equations with Fourier transforms, implemented in MATLAB [39–41]. The phase distribution of a single micro-lens (in the black square of Fig. 1(b)) is shown in Fig. 1(e), and the corresponding 3-D rendering in Fig. 1(f). The height of the micro-lens measured by TIE along the red solid line and that measured by digital holographic microscopy (DHM) are plotted together in Fig. 1(g).
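A minimal NumPy sketch of this Fourier-based, two-step Poisson solution is given below. It assumes periodic boundary conditions (inherent to the FFT) and a simple regularization of the zero frequency; the function names and the intensity floor are illustrative choices, not the authors' MATLAB code, which can also use a DCT-based solver to suppress boundary artifacts [41].

```python
import numpy as np

def fft_poisson(f, dx):
    """Solve laplacian(psi) = f by FFT inversion (periodic boundaries;
    the zero-frequency term is undetermined and set to zero)."""
    ny, nx = f.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    denom = -4 * np.pi**2 * (FX**2 + FY**2)
    denom[0, 0] = 1.0                              # avoid division by zero (DC)
    psi_hat = np.fft.fft2(f) / denom
    psi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(psi_hat))

def tie_phase(I_under, I_in, I_over, dz, wavelength, dx):
    """Teague's two-Poisson solution of the TIE from three intensity
    images taken at -dz, 0, and +dz."""
    k = 2 * np.pi / wavelength
    dIdz = (I_over - I_under) / (2 * dz)           # central finite difference
    # Step 1: solve laplacian(psi) = -k dI/dz, where grad(psi) = I grad(phi).
    psi = fft_poisson(-k * dIdz, dx)
    gy, gx = np.gradient(psi, dx)
    # Step 2: recover phi from div(grad(psi) / I).
    I_safe = np.maximum(I_in, 1e-6 * I_in.max())   # guard against dark pixels
    div = (np.gradient(gx / I_safe, dx, axis=1)
           + np.gradient(gy / I_safe, dx, axis=0))
    return fft_poisson(div, dx)
```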


Fig. 1. (a) Experimental setup for quantitative phase microscopy based on TIE. A, B, C, under-focus, in-focus, over-focus planes. (b) Over-focus image; (c) in-focus image; (d) under-focus image; (e) phase map obtained by TIE; (f) 3D thickness profiles of (e); (g) comparison of micro-lens thickness obtained by TIE and DHM.


2.2 Cell culture

Macrophages are immune cells that form an essential component of anti-tumor, anti-viral, and anti-bacterial defense mechanisms [3]. Previous investigations on mammalian cells have shown that microgravity, whether experienced in space or simulated on earth, can cause severe cellular modifications [42,43].

Macrophages are first plated and cultured in MEM (Gibco) with 5% fetal bovine serum and 5% calf serum, including 1% penicillin/streptomycin, in a humidified atmosphere at 37°C and 5% CO2 for 24 h. The cells are then randomly divided into two groups, stationary control (SC) and experimental (E), with the cells covering only 30% to 40% of the chamber. This low cell density avoids superposition of cells and ensures a relatively homogeneous background for each single cell. After that, the E group is placed on a 2-D clinostat (China Astronaut Research and Training Centre, China) and rotated around a horizontal axis for 24 h of clinorotation, while the SC group is statically placed on the clinostat for 24 h. During the measurements, both groups of cells are taken off the clinostat and placed on the microscope stage under the same conditions: room temperature (∼25°C) and no CO2 control. The halogen lamp used for illumination has very low power (∼2 W), so heating of the sample is not a concern for this technique.

2.3 Image preprocessing and feature extraction

Because of spatial inhomogeneities of the surrounding medium, the area around living cells makes a non-zero contribution to the measured phase volume. To exclude this fluctuating background from the morphology feature calculation, we first apply a threshold operation [44] and then morphological opening to eliminate undesired, smaller background objects. The cells in the resulting binary images are segmented from the background by a marker-controlled watershed approach and identified as separate regions of interest (ROIs) after tracing the region boundaries [23,44–46]. In Fig. 2, the representative image contains two ROIs, the red one for cell 1 and the white one for cell 2. The phase map of cell 1 corresponding to the red ROI is cropped from the original phase image and centered in a square zero-matrix background (size: 256×256); this blank background is large enough for the widely spreading cells of the E group. Cell 2, corresponding to the white ROI, is cropped and centered in the same way. As a result, the two connected cells in the raw input phase image are segmented into two separate cells, as shown in Fig. 2. A sketch of the segmentation and cropping steps is given below.

To compare the performance of the multilevel integrated machine learning classifier and deep learning on the classification of macrophage cells from QPM measurements, the flowchart splits into two branches after cell segmentation. In one branch, each phase map is represented by a set of cell features, a procedure referred to as feature extraction. All extracted features are summarized into feature vectors, each representing one cell; each feature vector and its corresponding class label are later used to train the multilevel integrated machine learning algorithm. In the deep learning branch, each single-cell phase map (size: 256×256) is resized to 227×227. A three-channel data cube (size: 227×227×3) is then obtained by stacking the resized 2-D phase map into three color channels to meet the input requirement of CLSNet, the CNN described in Section 2.5. To reduce overfitting and increase the cardinality of the training data, data augmentation is applied to the three-channel data cubes [31].
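The following scikit-image sketch illustrates the segmentation and cropping chain described above. The Otsu threshold, the structuring-element radius, and the marker spacing (min_distance) are assumptions for illustration; the paper uses its own threshold operation [44] and smart-marker strategy [46].

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def segment_and_crop(phase, box=256):
    """Threshold, clean, and split touching cells with a marker-controlled
    watershed; return each cell centered on a (box x box) zero background."""
    mask = phase > threshold_otsu(phase)       # suppress fluctuating background
    mask = opening(mask, disk(3))              # remove small spurious objects
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=20, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=mask)   # one label per cell
    crops = []
    for region in regionprops(labels):
        r0, c0, r1, c1 = region.bbox
        cell = phase[r0:r1, c0:c1] * (labels[r0:r1, c0:c1] == region.label)
        canvas = np.zeros((box, box), dtype=phase.dtype)  # assumes cell fits
        ro, co = (box - cell.shape[0]) // 2, (box - cell.shape[1]) // 2
        canvas[ro:ro + cell.shape[0], co:co + cell.shape[1]] = cell
        crops.append(canvas)
    return crops
```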


Fig. 2. Flowchart of image processing, machine-learning and deep-learning steps.


2.4 Multilevel integrated supervised machine learning

Classification, as a category of supervised machine learning, aims to build a model that makes predictions after a self-learning procedure on labeled data. Each cell feature vector and its corresponding class label are used to train the machine learning models. The traditional supervised machine learning algorithms used here are briefly described below:

  • (1) Classification tree (CT): a tree structure built from a root node and leaf nodes;
  • (2) Generalized linear regression (GLR): uses the canonical link to relate the response distribution to the predictors;
  • (3) Random forest (RF): here the number of classification trees is set to 100 for better accuracy;
  • (4) Extreme learning machine (ELM): a feed-forward neural network with randomly generated hidden nodes;
  • (5) Artificial neural network (ANN): here a feed-forward back-propagation network with one hidden layer of 10 neurons.
To further improve the classification accuracy, a multilevel integrated classifier (MIC) is designed by fusing the above individual models [22,47]. This strong classifier comprises the five classifiers shown in Fig. 3 and makes reliable predictions as the data travel through all individual models. In stage 1, two classifiers (CT and GLR) are trained on 75% of the dataset and compute predictions on the remaining 25%. In stage 2, the samples correctly predicted by the two classifiers (CT and GLR) are added to the original training dataset, and this updated training dataset is used to train another two classifiers (RF and ELM); the samples misclassified by both CT and GLR serve as the testing dataset for RF and ELM. Similarly, in stage 3, the correct predictions of RF and ELM are added to the training dataset for the ANN, while their misclassified samples form the ANN's testing dataset. Finally, the machine-predicted class labels are compared with the true classes to estimate the identification accuracy. All data are analyzed in MATLAB.
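The staging logic can be sketched in Python with scikit-learn stand-ins: a decision tree for CT, logistic regression for GLR, and MLPs in place of the ELM and the ANN (scikit-learn has no ELM). The routing rule, where samples misclassified by every model in a stage move on while the rest reinforce the next training set, is our reading of the description above, so treat this as illustrative rather than the authors' MATLAB implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def multilevel_classify(X_train, y_train, X_test, y_test):
    """Three-stage integrated classifier: correctly handled test samples are
    added to the training set of the next stage; the rest are passed on."""
    stages = [
        [DecisionTreeClassifier(),                        # CT
         LogisticRegression(max_iter=1000)],              # GLR stand-in
        [RandomForestClassifier(n_estimators=100),        # RF, 100 trees
         MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000)],  # ELM stand-in
        [MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000)],  # ANN
    ]
    Xtr, ytr, Xte, yte = X_train, y_train, X_test, y_test
    n_correct = 0
    for models in stages:
        preds = [m.fit(Xtr, ytr).predict(Xte) for m in models]
        ok = np.any([p == yte for p in preds], axis=0)    # "true" predictions
        n_correct += ok.sum()
        Xtr = np.vstack([Xtr, Xte[ok]])                   # reinforce training set
        ytr = np.concatenate([ytr, yte[ok]])
        Xte, yte = Xte[~ok], yte[~ok]                     # forward the failures
        if len(yte) == 0:
            break
    return n_correct / len(y_test)  # samples still wrong at the end are errors
```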


Fig. 3. Architecture of the multilevel integrated classifier.


2.5 Deep convolutional neural network

CNNs are the most commonly used deep learning networks and are particularly helpful when information is encoded in spatial location, as in imaging applications. For cell classification, a CNN identifies patterns in the input images and trains a model based on the labels; the trained model can then classify cells in new, so-far unseen images. For comparison, a CNN classification network (CLSNet) is built by modifying AlexNet, which has been pretrained on millions of images from ImageNet, and fine-tuning it on our dataset using transfer learning [48]. The modifications are as follows:

  • (1) The last three layers of the original network (fully connected layer, softmax layer, and classification layer) are removed;
  • (2) Three new layers (fully connected, softmax, and classification) are added at the end of the network;
  • (3) The added fully connected layer is set to have two outputs;
  • (4) The weights of the newly added fully connected layer are randomly initialized.
The architecture of CLSNet is illustrated in Fig. 4. A phase image of a single cell passes through five stages of convolution and max-pooling, and the class label is obtained through the softmax layer. For example, in convolution layer C1, a kernel of size 11×11 is applied with a stride of 4 to each channel of the input image (size 227×227×3), and the results from the three channels are summed into one output. A total of 96 different kernels produce 96 output feature maps of size 55×55. At the end of the network, all feature maps are flattened into a one-dimensional vector, and the softmax function is used to classify SC and E cells. A sketch of the network modification is given below.
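In PyTorch, the corresponding modification of a pretrained AlexNet could look like the sketch below (the authors worked in MATLAB; the torchvision API shown here is our assumption of an equivalent):

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet with ImageNet-pretrained weights, then replace the final
# fully connected layer so the new head has two outputs (SC vs. E).
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
in_features = net.classifier[6].in_features     # 4096 in AlexNet
net.classifier[6] = nn.Linear(in_features, 2)   # new, randomly initialized layer
# At inference, a softmax over the two logits yields the class probabilities.
```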


Fig. 4. Architecture of the CLSNet. C1, C2, C3, C4, C5, convolutional layers. S, step distance.


3. Results and discussion

To validate the two proposed classification approaches, we use the QPM system described in Section 2.1 to obtain phase images of macrophages cultured under normal gravity and under simulated weightlessness, respectively. Figure 5 shows the global phase maps for the SC and E groups: the first row, Figs. 5(a1)–5(a3), shows macrophages in the SC group, and the second row, Figs. 5(b1)–5(b3), macrophages in the E group. The results show that the cell morphology of macrophages in the E group changes dramatically compared with that of the SC group, and these changes define the categories for classification.


Fig. 5. Phase images of macrophage cells in (a1-a3) and (b1-b3) for the SC and E groups, respectively.


Segmented single cells are obtained with the marker-controlled watershed algorithm described in detail in Section 2.3. Not all segmented cells are included in the dataset: we exclude highly overlapping cells whose segmentation boundary is unclear. However, when overlapping cells represent certain "types" of cells, such as cells during or just after mitosis, they are included after appropriate processing. For example, of the three cells in close proximity on the right-hand side of Fig. 5(b1), the two on the left are highly connected, so we consider only the rightmost one, which has a clear region boundary. In Fig. 5(b2), there are two pairs of cells in the division stage, so each pair is treated as a single entity. In the upper left corner of Fig. 5(b3), two daughter cells have just finished mitosis; because the boundary between the twin cells is clear, they are segmented into two separate cells. The resulting database contains 127 cells for the E group and 101 cells for the SC group.

First, to classify the two cell types with the multilevel integrated machine learning algorithm, sets of features are extracted from the phase images of single cells. After analyzing tens of cells in each group, twenty quantitative morphological features are calculated. To remove features that do not contribute to the differentiation between SC and E cells, we perform a t-test on each of the twenty features; 15 of them are considered highly significant, and these 15 parameters are used as predictor variables. The means and standard deviations for SC and E cells are compared in Table 1, where the discrepancies between the E and SC groups are apparent. On the whole, microgravity appears to stretch the cells morphologically and enlarge their phase volume. A sketch of the feature selection step follows.
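A minimal SciPy sketch of this significance filtering, assuming the conventional p < 0.05 criterion (the paper does not state its threshold):

```python
import numpy as np
from scipy import stats

def select_features(feats_sc, feats_e, alpha=0.05):
    """Two-sample t-test per feature; keep those whose SC and E means differ
    significantly. Inputs have shape (n_cells, n_features)."""
    _, p_values = stats.ttest_ind(feats_sc, feats_e, axis=0)
    return p_values < alpha, p_values   # boolean mask over the 20 features
```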


Table 1. Comparison of QPM features for SC and E cells.

Out of the 228-cell dataset used to discriminate between SC and E cells, 75% (170 cells) are used as the training dataset and 25% (58 cells) as the testing dataset for the six classifiers described in Section 2.4. Receiver operating characteristic (ROC) curves are then used to assess classifier performance. The performance measures (ROC, accuracy, and AUC) of each classification algorithm are shown in Fig. 6. The proposed multilevel integrated classifier (MIC) achieves higher accuracy than the five individual classifiers.
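With scikit-learn, the reported performance measures could be computed as follows (a sketch assuming each trained classifier exposes predict and predict_proba):

```python
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve

def evaluate(clf, X_test, y_test):
    """Accuracy, AUC, and ROC points for one trained binary classifier."""
    scores = clf.predict_proba(X_test)[:, 1]  # score for the positive (E) class
    fpr, tpr, _ = roc_curve(y_test, scores)
    return {"accuracy": accuracy_score(y_test, clf.predict(X_test)),
            "auc": roc_auc_score(y_test, scores),
            "roc": (fpr, tpr)}
```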


Fig. 6. Performance evaluation of machine learning classifiers for the SC and E cells.


Then, to compare the performance of CLSNet with the multilevel integrated machine learning algorithm, CLSNet is first trained on the original dataset without data augmentation. All phase images are preprocessed as described above. The dataset is split into two nonoverlapping subsets: 75% for training and 25% for validation. The training loss, validation loss, training accuracy, and validation accuracy are computed every 0.2 epoch during training. A momentum optimizer with momentum 0.9 and a batch size of 5 is used to minimize the loss, with an initial learning rate of 0.0001. Figures 7(a)–7(c) display the training progress and ROC curve of CLSNet. As seen in Figs. 7(b)–7(c), CLSNet trained without data augmentation reaches an accuracy of 84.1% and an AUC of 90%. Moreover, after about thirty epochs the validation loss gradually increases and the validation accuracy stops improving. These training settings can be expressed as in the sketch below.
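Continuing the PyTorch sketch from Section 2.5 (with net defined there), the reported settings translate roughly as below; the placeholder tensors stand in for the real three-channel phase cubes, and the epoch count is illustrative:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data standing in for the 227x227x3 phase cubes and labels.
train_set = TensorDataset(torch.randn(20, 3, 227, 227),
                          torch.randint(0, 2, (20,)))
loader = DataLoader(train_set, batch_size=5, shuffle=True)    # batch size 5
optimizer = torch.optim.SGD(net.parameters(), lr=1e-4, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()   # cross-entropy over softmax outputs

for epoch in range(30):   # validation loss was seen to rise after ~30 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(net(x), y)
        loss.backward()
        optimizer.step()
```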


Fig. 7. Learning performance and ROC curve of CLSNet without data augmentation and with augmented dataset, respectively.


Next, to reduce over-fitting, data augmentation is applied and the network is retrained on the augmented dataset. First, the original dataset of 228 cells is randomly split into a training set (75%) and a validation set (25%). The augmentation is then performed on the training set only: the augmented training set consists of the original, rotated, and horizontally flipped images, for a total of 1368 phase images. Performance measures of CLSNet with data augmentation are shown in Figs. 7(d)–7(e), and Fig. 7(f) shows the ROC for the testing dataset. The high validation accuracy of 93.1% and AUC of 97% indicate the ability of CLSNet to learn an optimal representation of the phase images without any of the predesigned features required by supervised machine learning. Augmentation decreases the loss and increases the accuracy; generating virtual images by flipping and by rotation to different angular positions increases the robustness of CLSNet.
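One augmentation scheme consistent with this description, and with an eightfold enlargement of the training set, combines the four 90° rotations of each cube with those of its horizontally flipped copy; the exact rotation angles are not specified in the text, so this is an assumption:

```python
import numpy as np

def augment(cube):
    """Return eight variants of a three-channel phase cube (H, W, 3): the four
    90-degree rotations of the original and of its horizontal flip."""
    variants = []
    for img in (cube, cube[:, ::-1, :]):    # original and horizontal flip
        for k in range(4):                  # 0, 90, 180, 270 degrees
            variants.append(np.rot90(img, k, axes=(0, 1)))
    return variants
```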

The classification procedure for both the multilevel integrated classifier and CLSNet is randomly repeated 10 times. The statistical results, including the mean and standard deviation of the accuracy and AUC, are given in Table 2. The multilevel integrated classifier reaches an accuracy of 93.1% ± 2.4% and an AUC of 98.5% ± 1.7% for the discrimination between macrophages cultured in normal gravity and in microgravity. This is comparable to CLSNet with phase data augmentation, whose overall accuracy and AUC are 93.5% ± 0.6% and 97.7% ± 0.9%, respectively. While the multilevel integrated classifier shows a somewhat less stable classification performance, its computation time is much shorter than that of CLSNet.


Table 2. Comparison of test accuracy and AUC for different classifiers.

The overall results demonstrate the effectiveness of the multilevel integrated classifier and of CNNs for accurate classification of cell morphologies; both approaches rely on the quantitative nature of the phase images. To further study the influence of the morphometric features on the classification, a new dataset of in-focus images, after the same data augmentation, is used to train CLSNet. The training progress and ROC curve are shown in Fig. 8. Compared with the performance of CLSNet with data augmentation in Fig. 7, the accuracy and AUC are reduced by 6.1% and 3%, respectively. This reduction is less pronounced than the values reported in previous studies of cell morphology classification [30,33]. Since microgravity stretches the cells morphologically, we infer that the morphometric features and the quantitative phase signal play equally important roles in differentiating between macrophages cultured in gravity and in microgravity.


Fig. 8. Learning performance and ROC curve of CLSNet with in-focus images.


4. Conclusion

We have applied two machine learning classification approaches to QPM images obtained by TIE, distinguishing between macrophages cultured under normal gravity and under microgravity. Our aim was to compare the performance of the designed multilevel integrated machine learning classifier with that of the pretrained CNN, CLSNet. The results show that the integrated classifier achieves a similarly high accuracy to CLSNet applied to QPM images with data augmentation for the automated identification of macrophages. The robustness of both approaches can be further improved by enlarging the image database and by adopting multimodal QPM, such as spectral [29], polarized [49], and tomographic [21] imaging, as stacked inputs to the network, to increase the amount of raw information available to the classifiers. Our future work will therefore be aimed at enlarging the database and at multimodal QPM for applying the approaches to different classification tasks.

Funding

National Natural Science Foundation of China (61927810); Joint Fund of the National Natural Science Foundation of China and China Academy of Engineering Physics (U1730137); Fundamental Research Funds for the Central Universities (3102019ghxm018).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. C. L. Chen, A. Mahjoubfar, L. C. Tai, I. K. Blaby, A. Huang, K. R. Niazi, and B. Jalali, “Deep Learning in Label-free Cell Classification,” Sci. Rep. 6(1), 21471 (2016). [CrossRef]  

2. V. K. Lam, T. C. Nguyen, B. M. Chung, G. Nehmetallah, and C. B. Raub, “Quantitative assessment of cancer cell morphology and motility using telecentric digital holographic microscopy and machine learning,” Cytometry, Part A 93(3), 334–345 (2018). [CrossRef]  

3. A. E. Ekpenyong, S. M. Man, S. Achouri, C. E. Bryant, J. Guck, and K. J. Chalut, “Bacterial infection of macrophages induces decrease in refractive index,” J. Biophotonics 6(5), 393–397 (2013). [CrossRef]  

4. H. Xu, J. Wu, Y. Weng, J. Zhang, and P. Shang, “Two-dimensional clinorotation influences cellular morphology, cytoskeleton and secretion of MLO-Y4 osteocyte-like cells,” Biologia (Cham, Switz.) 67(1), 255–262 (2012). [CrossRef]  

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

6. D. Dan, M. Lei, B. Yao, W. Wang, M. Winterhalder, A. Zumbusch, Y. Qi, L. Xia, S. Yan, Y. Yang, P. Gao, T. Ye, and W. Zhao, “DMD-based LED-illumination super-resolution and optical sectioning microscopy,” Sci. Rep. 3(1), 1116–1123 (2013). [CrossRef]  

7. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12(10), 578–589 (2018). [CrossRef]  

8. F. Zernike, “How I discovered phase contrast,” Science 121(3141), 345–349 (1955). [CrossRef]  

9. M. G. Nomarski, “Differential microinterferometer with polarized waves,” J. Phys. Radium 16, S9–S13 (1955).

10. V. L. Calin, M. Mihailescu, N. Mihale, A. V. Baluta, E. Kovacs, T. Savopol, and M. G. Moisescu, “Changes in optical properties of electroporated cells as revealed by digital holographic microscopy,” Biomed. Opt. Express 8(4), 2222–2234 (2017). [CrossRef]  

11. R. Cao, W. Xiao, X. Wu, L. Sun, and F. Pan, “Quantitative observations on cytoskeleton changes of osteocytes at different cell parts using digital holographic microscopy,” Biomed. Opt. Express 9(1), 72–85 (2018). [CrossRef]  

12. P. Girshovitz and N. T. Shaked, “Generalized cell morphological parameters based on interferometric phase microscopy and their application to cell life cycle characterization,” Biomed. Opt. Express 3(8), 1757–1773 (2012). [CrossRef]  

13. J. L. Di, Y. Song, T. L. Xi, J. W. Zhang, Y. Li, C. J. Ma, K. Q. Wang, and J. L. Zhao, “Dual-wavelength common-path digital holographic microscopy for quantitative phase imaging of biological cells,” Opt. Eng. 56(11), 111712 (2017). [CrossRef]  

14. J. L. Di, Y. Li, K. Q. Wang, and J. L. Zhao, “Quantitative and Dynamic Phase Imaging of Biological Cells by the Use of the Digital Holographic Microscopy Based on a Beam Displacer Unit,” IEEE Photonics J. 10(4), 1–10 (2018). [CrossRef]  

15. F. Pan, S. Liu, Z. Wang, P. Shang, and W. Xiao, “Digital holographic microscopy long-term and real-time monitoring of cell division and changes under simulated zero gravity,” Opt. Express 20(10), 11496–11505 (2012). [CrossRef]  

16. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Noninterferometric single-shot quantitative phase microscopy,” Opt. Lett. 38(18), 3538–3541 (2013). [CrossRef]  

17. Y. Li, J. Di, C. Ma, J. Zhang, J. Zhong, K. Wang, T. Xi, and J. Zhao, “Quantitative phase microscopy for cellular dynamics based on transport of intensity equation,” Opt. Express 26(1), 586–593 (2018). [CrossRef]  

18. Y. Li, J. Di, W. Wu, P. Shang, and J. Zhao, “Quantitative investigation on morphology and intracellular transport dynamics of migrating cells,” Appl. Opt. 58(34), G162–G168 (2019). [CrossRef]  

19. P. Girshovitz and N. T. Shaked, “Generalized cell morphological parameters based on interferometric phase microscopy and their application to cell life cycle characterization,” Biomed. Opt. Express 3(8), 1757–1773 (2012). [CrossRef]  

20. Y. Jo, H. Cho, S. Y. Lee, G. Choi, G. Kim, H. S. Min, and Y. Park, “Quantitative Phase Imaging and Artificial Intelligence: A Review,” IEEE J. Sel. Top. Quantum Electron. 25(1), 1–14 (2019). [CrossRef]  

21. H. Park, S. Lee, M. Ji, K. Kim, Y. Son, S. Jang, and Y. Park, “Measuring cell surface area and deformability of individual human red blood cells over blood storage using quantitative phase imaging,” Sci. Rep. 6(1), 34257 (2016). [CrossRef]  

22. N. Singla, V. Srivastava, and D. S. Mehta, “Development of full-field optical spatial coherence tomography system for automated identification of malaria using the multilevel ensemble classifier,” J. Biophotonics 11(5), e201700279 (2018). [CrossRef]  

23. L. Strbkova, D. Zicha, P. Vesely, and R. Chmelik, “Automated classification of cell morphology by coherence-controlled holographic microscopy,” J. Biomed. Opt. 22(08), 1–9 (2017). [CrossRef]  

24. H. Kobayashi, C. Lei, Y. Wu, A. Mao, Y. Jiang, B. Guo, Y. Ozeki, and K. Goda, “Label-free detection of cellular drug responses by high-throughput bright-field imaging and machine learning,” Sci. Rep. 7(1), 12454 (2017). [CrossRef]  

25. N. Pavillon, A. J. Hobro, S. Akira, and N. I. Smith, “Noninvasive detection of macrophage activation with single-cell resolution through machine learning,” Proc. Natl. Acad. Sci. U. S. A. 115(12), E2676–E2685 (2018). [CrossRef]  

26. V. Dubey, D. Popova, A. Ahmad, G. Acharya, P. Basnet, D. S. I. Mehta, and B. S. I. Ahluwalia, “Partially spatially coherent digital holographic microscopy and machine learning for quantitative analysis of human spermatozoa under oxidative stress condition,” Sci. Rep. 9(1), 3564 (2019). [CrossRef]  

27. A. V. Belashov, A. A. Zhikhoreva, T. N. Belyaeva, E. S. Kornilova, A. V. Salova, I. V. Semenova, and O. S. Vasyutinskii, “In vitro monitoring of photoinduced necrosis in HeLa cells using digital holographic microscopy and machine learning,” J. Opt. Soc. Am. A 37(2), 346–352 (2020). [CrossRef]  

28. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

29. S. H. Karandikar, C. Zhang, A. Meiyappan, I. Barman, C. Finck, P. K. Srivastava, and R. Pandey, “Reagent-Free and Rapid Assessment of T Cell Activation State Using Diffraction Phase Microscopy and Deep Learning,” Anal. Chem. 91(5), 3405–3411 (2019). [CrossRef]  

30. L. Zheng, K. Yu, S. Cai, Y. Wang, B. Zeng, and M. Xu, “Lung cancer diagnosis with quantitative DIC microscopy and a deep convolutional neural network,” Biomed. Opt. Express 10(5), 2446 (2019). [CrossRef]  

31. N. Singla, K. Dubey, and V. Srivastava, “Automated assessment of breast cancer margin in optical coherence tomography images via pretrained convolutional neural network,” J. Biophotonics 12(3), e201800255 (2019). [CrossRef]  

32. D. A. Van Valen, T. Kudo, K. M. Lane, D. N. Macklin, N. T. Quach, M. M. DeFelice, I. Maayan, Y. Tanouchi, E. A. Ashley, and M. W. Covert, “Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments,” PLoS Comput. Biol. 12(11), e1005177 (2016). [CrossRef]  

33. Y. Jo, S. Park, J. Jung, J. Yoon, H. Joo, M. H. Kim, S. J. Kang, M. C. Choi, S. Y. Lee, and Y. Park, “Holographic deep learning for rapid optical screening of anthrax spores,” Sci. Adv. 3(8), e1700606 (2017). [CrossRef]  

34. T. Pitkaaho, A. Manninen, and T. J. Naughton, “Focus prediction in digital holographic microscopy using deep convolutional neural networks,” Appl. Opt. 58(5), A202–A208 (2019). [CrossRef]  

35. Y. Rivenson, T. Liu, Z. Wei, Y. Zhang, K. de Haan, and A. Ozcan, “PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning,” Light: Sci. Appl. 8(1), 23 (2019). [CrossRef]  

36. C. Zuo, J. Sun, J. Li, J. Zhang, A. Asundi, and Q. Chen, “High-resolution transport-of-intensity quantitative phase microscopy with annular illumination,” Sci. Rep. 7(1), 7654–7674 (2017). [CrossRef]  

37. T. Chakraborty and J. C. Petruccelli, “Source diversity for transport of intensity phase imaging,” Opt. Express 25(8), 9122–9137 (2017). [CrossRef]  

38. M. R. Teague, "Deterministic phase retrieval: a Green's function solution," J. Opt. Soc. Am. 73(11), 1434–1441 (1983). [CrossRef]  

39. J. A. Schmalz, T. E. Gureyev, D. M. Paganin, and K. M. Pavlov, “Phase retrieval using radiation and matter-wave fields: Validity of Teague's method for solution of the transport-of-intensity equation,” Phys. Rev. A: At., Mol., Opt. Phys. 84(2), 023808 (2011). [CrossRef]  

40. S. Mehrabkhani, L. Wefelnberg, and T. Schneider, “Fourier-based solving approach for the transport-of-intensity equation with reduced restrictions,” Opt. Express 26(9), 11458–11470 (2018). [CrossRef]  

41. C. Zuo, Q. Chen, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform,” Opt. Express 22(8), 9220–9244 (2014). [CrossRef]  

42. K. Lang, C. Strell, B. Niggemann, K. S. Zänker, A. Hilliger, F. Engelmann, and O. Ullrich, “Real-Time Video-Microscopy of Migrating Immune Cells in Altered Gravity During Parabolic Flights,” Microgravity Sci. Technol. 22(1), 63–69 (2010). [CrossRef]  

43. D. Grimm, J. Grosse, M. Wehland, V. Mann, J. E. Reseland, A. Sundaresan, and T. J. Corydon, “The impact of microgravity on bone in humans,” Bone 87, 44–56 (2016). [CrossRef]  

44. F. Yi, I. Moon, B. Javidi, D. Boss, and P. Marquet, “Automated segmentation of multiple red blood cells with digital holographic microscopy,” J. Biomed. Opt. 18(2), 026006 (2013). [CrossRef]  

45. H. Alanazi, A. J. Canul, A. Garman, J. Quimby, and A. E. Vasdekis, "Robust Microbial Cell Segmentation by Optical-Phase Thresholding with Minimal Processing Requirements," Cytometry, Part A 91(5), 443–449 (2017). [CrossRef]  

46. C. F. Koyuncu, S. Arslan, I. Durmaz, R. Cetin-Atalay, and C. Gunduz-Demir, “Smart Markers for Watershed-Based Cell Segmentation,” PLoS One 7(11), e48664 (2012). [CrossRef]  

47. S. Rathor and R. S. Jadon, “Acoustic domain classification and recognition through ensemble based multilevel classification,” J. Ambient Intell. Human. Comput. 10(9), 3617–3627 (2019). [CrossRef]  

48. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

49. X. Tian, X. Tu, K. Della Croce, G. Yao, H. Cai, N. Brock, S. Pau, and R. Liang, “Multi-wavelength quantitative polarization and phase microscope,” Biomed. Opt. Express 10(4), 1638–1648 (2019). [CrossRef]  
