
Digital holographic deep learning of red blood cells for field-portable, rapid COVID-19 screening

Open Access

Abstract

Rapid screening of red blood cells for active COVID-19 infection is presented using a compact and field-portable, 3D-printed shearing digital holographic microscope. Video holograms of thin blood smears are recorded, individual red blood cells are segmented for feature extraction, and a bi-directional long short-term memory network is then used to classify healthy and COVID positive red blood cells based on their spatiotemporal behavior. Individuals are subsequently classified based on the simple majority of their cells’ classifications. The proposed system may be beneficial for under-resourced healthcare systems. To the best of our knowledge, this is the first report of digital holographic microscopy for rapid screening of COVID-19.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Coronavirus disease 2019 (COVID-19) is the infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease can present through a variety of symptoms, including fever, shortness of breath, persistent cough, and loss of taste or smell, and its severity can vary from mild or asymptomatic to critical, with severe cases requiring hospitalization and mechanical ventilation. The highly contagious infection spreads through respiratory droplets and close contact with an infected individual, which creates a need for widely available testing to identify and contain infections and curb the spread of the disease. Current diagnostic methods for COVID-19 include chest computed tomography (CT) scans in combination with clinical symptoms, polymerase chain reaction (PCR) testing, antigen testing, and antibody-based tests [1]. The first of these methods requires dedicated facilities not typically available outside larger hospitals, and the last is generally not regarded as effective in detecting early infections due to the delay between the onset of infection and the presence of detectable antibody levels. Antigen tests, which look for proteins specific to the virus, can be fast, highly specific, and beneficial for screening and point-of-care settings, but they are less sensitive than PCR tests, and their performance may vary by setting and circumstance [2]. Nucleic acid amplification tests such as PCR tests are the gold standard of testing, as they are both highly sensitive and highly specific. The disadvantages of PCR testing include the required laboratory facilities, turnaround times typically on the order of days (though they can be as short as a few hours), and potentially high false negative rates [1]. Given the need to identify and contain infections to limit further spread, the desire remains for alternative testing methods, especially accessible, rapid, point-of-care tests.

Along with the wide range of systemic effects that may be brought on by COVID-19, recent research suggests statistically significant differences in red blood cells (RBCs), particularly with increasingly severe cases of infection [3]. It was found that severe cases of COVID-19 were accompanied by significantly lower hemoglobin and hematocrit levels in comparison to moderate cases, and that increases in the RBC distribution width (RDW) followed increasing levels of disease severity [3]. Another study did not find significantly altered hematocrit levels but observed significantly altered lipid metabolism and suggested altered structural proteins may affect RBC deformability and contribute to the thromboembolic and coagulopathic complications of critically ill patients [4]. Furthermore, morphological changes such as high percentages of stomatocytes and knizocytes have been reported from blood smear observations of COVID positive patients [5]. These changes in RBC morphology suggest that examination of RBCs may be an important tool in the assessment of COVID-19 patients and may provide a potential avenue for disease screening technologies.

Digital holographic microscopy (DHM) is a quantitative phase imaging technique with prominent applications in cell imaging, cell classification, and disease identification [6–19]. DHM has drawn great interest due to its stain-free operation, numerical refocusing ability, and single-shot operation, making it a powerful tool for the investigation of biological samples. Notable applications of DHM include identification of diseases such as malaria [7], diabetes [11], and sickle cell disease [12,13]. Furthermore, DHM enables spatiotemporal analysis of live biological cells [12–17], bolstering its capacity for use in rapid disease screening and classification systems [12,13], especially when coupled with deep learning [13].

In this Letter, we propose a deep learning cell classification system that uses the reconstructed phase profiles output by a compact and field-portable shearing digital holographic microscope for the rapid screening of RBCs for COVID-19. Features are extracted from the phase profile of each segmented RBC at each time frame of the reconstructed video data, then input into a bi-directional long short-term memory (Bi-LSTM) network to classify the cells based on their spatiotemporal behavior. The system can successfully distinguish between diseased and healthy samples without chemical processing and with fast turnaround times. To the best of our knowledge, this is the first report of DHM for rapid COVID screening using the spatiotemporal behavior of RBCs.

In this study, all data were collected at the University of Connecticut Health Center using a compact, field-portable shearing-based digital holographic microscope [8,18]. Shearing-based interferometers provide a common-path arrangement resulting in high temporal stability [18] and enable the accurate measurement of RBC membrane fluctuations, which are expected to be in the range of 50–60 nm [20]. An optical diagram of the system is shown in Fig. 1(a), and the 3D-printed, field-portable microscopy system is shown in Fig. 1(b). The system comprises a red laser diode (1.2 mW, 635 nm, Thorlabs CPS635R), a 1D translation stage for axial positioning of the sample, a 40× (0.65 NA) objective lens, a glass plate to induce lateral shearing of the object wavefront, and a CMOS image sensor (Basler acA3800-14um). The image sensor has a 1.67 µm pixel size and ${{3840}} \times {{2748}}$ pixels. The theoretical resolution limit as determined by the Rayleigh criterion is 0.594 µm. The temporal stability of the system was assessed by recording ${{256}} \times {{256}}$ pixel video holograms of a plain glass slide in the system at the testing site, then calculating the pixel-wise standard deviation in the optical path lengths (OPLs) over time. The mean value over all pixels was calculated as 2.515 nm and is taken as the temporal stability of the system. Based on the shearing configuration [Fig. 1(a)], a collimated source beam emitted from the laser diode transilluminates the sample under inspection, which is then magnified by the microscope objective. Following the objective lens, the wavefront carrying the complex amplitude of the specimen is incident upon a glass plate, creating reflections from both its front and back surfaces separated by a lateral shear as determined by the glass plate thickness, the refractive index of the glass, and the angle of incidence [21]. The two reflected beams self-interfere to form an interference pattern at the sensor plane, wherein the fringe frequency of the system is determined by the radius of curvature of the wavefront, the source wavelength, and the lateral shear induced. Capturing of redundant information (i.e., sheared copies) can be avoided by having a lateral shear greater than the size of the sensor.
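For illustration, the stability figure reduces to a pixel-wise standard deviation over time followed by a spatial mean. The sketch below assumes the OPL maps of the cell-free slide have already been reconstructed into a NumPy array; the function name and the synthetic data are ours, not from the study.

```python
import numpy as np

def temporal_stability(opl_stack):
    """Temporal stability from a stack of OPL maps.

    opl_stack: array of shape (T, H, W) holding the optical path
    length (nm) reconstructed at each of T frames for a cell-free
    (plain glass) region of the slide.
    """
    per_pixel_std = opl_stack.std(axis=0)  # pixel-wise std of OPL over time
    return float(per_pixel_std.mean())     # mean over all pixels

# Synthetic check: 200 frames of a 256 x 256 region with ~2.5 nm of
# frame-to-frame OPL noise, comparable to the 2.515 nm reported here.
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 2.5, size=(200, 256, 256))
print(f"temporal stability ~ {temporal_stability(stack):.2f} nm")
```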


Fig. 1. (a) Optical configuration and (b) 3D-printed experimental system with dimensions ${{94}}\;{\rm{mm}} \times {{107}}\;{\rm{mm}} \times {190.5}\;{\rm{mm}}$.


Following hologram acquisition, the phase profiles of the cells are extracted using Fourier spectrum analysis [8,22]. The object spectrum is recovered from the hologram by filtering out the DC and conjugate terms in the Fourier domain. Then, from the recovered object complex amplitude $\tilde U(\xi,\eta)$, the object phase is calculated as $\Phi = \tan^{-1}[{\rm{Im}}\{\tilde U\}/{\rm{Re}}\{\tilde U\}]$, where ${\rm{Im}}\{\cdot\}$ and ${\rm{Re}}\{\cdot\}$ denote the imaginary and real parts, respectively. The phase is unwrapped using Goldstein’s branch-cut algorithm [23], and system aberrations are reduced by subtracting the phase captured from a cell-free region of the sample, leaving only the unwrapped object phase ${\Phi _{\textit{un}}}$. From the unwrapped phase, the OPL is computed as ${\rm{OPL}} = {\Phi _{\textit{un}}}[\lambda /(2\pi)]$.
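A minimal sketch of this reconstruction chain is given below, under stated assumptions: the sideband (object-term) location and window size are taken as known from inspection of the Fourier spectrum, and scikit-image’s unwrap_phase stands in for Goldstein’s branch-cut algorithm. All names are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift
from skimage.restoration import unwrap_phase  # stand-in for Goldstein's branch-cut method

def reconstruct_opl(hologram, bg_phase, center, half_size, wavelength_nm=635.0):
    """Recover an OPL map (nm) from one off-axis shearing hologram.

    center, half_size: location/extent of the +1-order sideband in the
    shifted Fourier domain; bg_phase: unwrapped phase of a cell-free
    region, subtracted to reduce system aberrations.
    """
    spectrum = fftshift(fft2(hologram))
    cy, cx = center
    # Filter out the DC and conjugate terms by keeping only one sideband.
    window = spectrum[cy - half_size:cy + half_size, cx - half_size:cx + half_size]
    # Re-center the sideband and return to the spatial domain.
    recentered = np.zeros_like(spectrum)
    h, w = spectrum.shape
    recentered[h // 2 - half_size:h // 2 + half_size,
               w // 2 - half_size:w // 2 + half_size] = window
    u = ifft2(ifftshift(recentered))            # object complex amplitude
    phase = np.arctan2(u.imag, u.real)          # wrapped phase, tan^-1(Im/Re)
    phase_un = unwrap_phase(phase) - bg_phase   # unwrap, remove aberrations
    return phase_un * wavelength_nm / (2 * np.pi)  # OPL in nm
```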

The OPL can be converted to the sample thickness given the refractive indices of the sample and background via $h = {\rm{OPL}}/\Delta n$, with $h$ being the thickness and $\Delta n$ the refractive index difference. However, as we cannot assume the refractive indices of disease states, the following analysis is conducted using the OPL values.
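As a worked example of this conversion (purely illustrative, since the analysis deliberately stays in OPL units), a typical healthy RBC-versus-plasma refractive index contrast of roughly 0.06 would map an OPL of 120 nm to a thickness of about 2 µm:

```python
def opl_to_thickness(opl_nm, delta_n=0.06):
    """h = OPL / delta_n. delta_n = 0.06 is only a typical healthy
    RBC-vs-plasma contrast, not a value assumed by this study."""
    return opl_nm / delta_n

print(opl_to_thickness(120.0))  # 2000.0 nm, i.e., ~2 um
```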

From the recorded video hologram data, individual cells are segmented in a supervised, partially automated manner and numerically reconstructed as described above to retrieve each cell’s phase profile; features extracted from the phase profiles are then input into the deep learning network for classification. Example video reconstructions are provided for both healthy and COVID positive RBCs in Visualization 1 and Visualization 2, respectively. For the extracted features, we use a combination of handcrafted and transfer-learned features. The handcrafted features are morphological features that describe the 3D shape of the phase profiles. In particular, for each frame of the reconstructed video data, we calculate from the segmented RBC the mean OPL, coefficient of variation, projected cell area, optical volume, cell thickness skewness, cell thickness kurtosis, cell perimeter, cell circularity, cell elongation, cell eccentricity, cell thickness entropy, maximum and minimum cell widths, and maximum and minimum OPL values [12,13,24]. Histograms for each of the morphological features are provided in Supplement 1, Fig. S1. Notable differences are apparent between populations in some cell features, such as optical volume, projected area, and cell widths. However, it is also apparent that no single feature provides sufficient information for classification, given the significant overlap between classes. Furthermore, while these handcrafted features are highly interpretable and provide a good description of a cell’s morphology, we have found that the inclusion of transfer-learned features [25] improves system performance by increasing the network’s capacity to learn [13]. Here, we use the feature vectors taken from the last fully connected layer of a DenseNet-201 convolutional neural network [26] pretrained on the ImageNet database to provide an additional 1000 extracted features for each segmented cell. These features do not have any intrinsic or interpretable meaning but are highly generalizable and can be useful in classification tasks. For each individual cell, the features are extracted from each frame of the video sequence and stored in a ${{1017}} \times f$ matrix, with 1017 being the number of extracted features and $f$ the number of video frames recorded. This matrix is input into an LSTM network [27–29], one form of recurrent neural network (RNN) designed to handle sequential data types such as time-varying data.
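The sketch below illustrates how such a per-frame feature vector could be assembled, combining an illustrative subset of the handcrafted morphological features with the 1000 outputs of an ImageNet-pretrained DenseNet-201 as provided by torchvision. The preprocessing (scaling the OPL map to [0, 1], channel replication, ImageNet normalization) and the chosen subset of morphological features are our assumptions, not specifications from the Letter.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.stats import skew, kurtosis
from skimage.measure import label, regionprops
from torchvision.models import densenet201, DenseNet201_Weights

# DenseNet-201 pretrained on ImageNet; the 1000-dim output of its final
# fully connected layer supplies the transfer-learned features.
cnn = densenet201(weights=DenseNet201_Weights.IMAGENET1K_V1).eval()
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def frame_features(opl, mask):
    """Feature vector for one segmented cell in one video frame.

    opl: 2D OPL map (nm); mask: boolean cell mask. The handcrafted part
    below is an illustrative subset of the morphological features
    listed above, not the complete set used in the study.
    """
    cell = opl[mask]
    props = regionprops(label(mask.astype(int)))[0]
    morph = np.array([
        cell.mean(),                          # mean OPL
        cell.std() / cell.mean(),             # coefficient of variation
        props.area,                           # projected cell area
        cell.sum(),                           # optical volume (integrated OPL)
        skew(cell), kurtosis(cell),           # thickness skewness / kurtosis
        props.perimeter,
        4 * np.pi * props.area / props.perimeter ** 2,  # circularity
        props.eccentricity,
        cell.max(), cell.min(),               # max / min OPL
    ])
    # Transfer-learned part: scale the OPL map to [0, 1], replicate to
    # three channels, resize, and normalize as for ImageNet inputs.
    img = torch.tensor(opl, dtype=torch.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    img = img.repeat(3, 1, 1).unsqueeze(0)
    img = F.interpolate(img, size=(224, 224), mode="bilinear",
                        align_corners=False)
    img = (img - IMAGENET_MEAN) / IMAGENET_STD
    with torch.no_grad():
        transfer = cnn(img).squeeze(0).numpy()
    return np.concatenate([morph, transfer])
```

Stacking these per-frame vectors over the $f$ frames of a cell’s video would yield the feature-by-frame matrix described above.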

In general, RNNs work by processing sequential data with a looped architecture that maintains some information from the previous data in the sequence [29]. In the LSTM network, information is passed through repeating blocks of the LSTM layer and updated by several interacting gates, where each gate is composed of a sigmoid function and a multiplication operation to determine how much information should be passed to the following block. The interacting gates are known as the forget gate (${f_t}$), the input gate (${i_t}$), and the output gate (${O_t}$), wherein these gates work together to update the cell state (${C_t}$) and output the updated hidden state (${h_t}$) to the following block. These defining interactions can be described mathematically as follows [27]:

$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f),$$
$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i),$$
$$\tilde{C}_t = \tanh(W_{xC} x_t + W_{hC} h_{t-1} + b_C),$$
$$C_t = f_t C_{t-1} + i_t \tilde{C}_t,$$
$$O_t = \sigma(W_{xO} x_t + W_{hO} h_{t-1} + b_O),$$
$$h_t = O_t \tanh(C_t),$$
where $i$, $f$, and $O$ are the input, forget, and output gates, respectively, $C$ is the cell state vector, $\sigma$ represents a sigmoidal function, $\tanh$ is the hyperbolic tangent function, ${\textbf{W}}_{\alpha\beta}$ and ${\textbf{b}}_\beta$, with $\alpha \in \{x,h\}$ and $\beta \in \{f,i,C,O\}$, are the weights and biases of the network, respectively, and $x_t$ is the input feature vector at time $t$. More specifically, we use a Bi-LSTM layer that simultaneously learns from both the forward and reverse directions [28]. Our deep-learning classification network is built with one Bi-LSTM layer that leads to a fully connected layer, followed by a softmax layer and then a classification layer. The Bi-LSTM network learns to differentiate between healthy and diseased RBCs by finding differences in the cells’ spatiotemporal behavior from changes over time in the features extracted at each frame of the recorded data. A flowchart of the classification strategy is provided in Fig. 2 [13].
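For concreteness, a direct NumPy transcription of Eqs. (1)–(6) for a single LSTM block update might look as follows; the weight and bias containers are illustrative, and in practice a framework’s Bi-LSTM layer performs these updates internally.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM block update, transcribing Eqs. (1)-(6).

    W maps gate name -> (W_x, W_h) weight pair with shapes (H, D) and
    (H, H); b maps gate name -> bias of shape (H,).
    """
    f_t = sigmoid(W["f"][0] @ x_t + W["f"][1] @ h_prev + b["f"])      # Eq. (1), forget gate
    i_t = sigmoid(W["i"][0] @ x_t + W["i"][1] @ h_prev + b["i"])      # Eq. (2), input gate
    c_tilde = np.tanh(W["C"][0] @ x_t + W["C"][1] @ h_prev + b["C"])  # Eq. (3), candidate state
    c_t = f_t * c_prev + i_t * c_tilde                                # Eq. (4), cell state
    o_t = sigmoid(W["O"][0] @ x_t + W["O"][1] @ h_prev + b["O"])      # Eq. (5), output gate
    h_t = o_t * np.tanh(c_t)                                          # Eq. (6), hidden state
    return h_t, c_t

# A bidirectional layer runs a second, independent LSTM over the
# reversed sequence and concatenates the two hidden states per step.
```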

Fig. 2. Overview diagram for the deep-learning-based digital holographic microscopy cell identification system. Video holograms are recorded using the system depicted in Fig. 1. Individual cells are segmented at each time frame, and features are extracted for input into an LSTM classifier. ${x_t}$ and ${h_t}$ denote, respectively, the input and output of a portion of an LSTM block at time-step $t$. LSTM, long short-term memory [13].


Our network was trained with the ADAM optimizer, a learning rate of 0.001, and a sequence length of 50. The hyperparameters for the number of epochs, minibatch size, number of hidden units, and dropout rate were chosen as 1, 128, 300, and 0.4, respectively, based on the results of a grid search using one patient’s data from each class. During the search, the results were averaged over 20 trials to account for random weight initialization [30]. Similarly, during testing, we averaged the results by training the network five times, wherein the models varied only by their initial randomized weights. After feature extraction, training took under 2 min, and classification of a single patient’s RBCs took less than 10 s using an NVIDIA Quadro P4000 GPU.
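In PyTorch terms, the described architecture and reported hyperparameters could be set up roughly as below. This is a sketch under our assumptions: the placement of dropout before the fully connected layer and the use of the final time step’s output are not specified in the Letter, and CrossEntropyLoss subsumes the softmax and classification layers.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bi-LSTM -> fully connected -> softmax, per the described network."""
    def __init__(self, n_features=1017, hidden=300, n_classes=2, dropout=0.4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)   # 300 hidden units per direction
        self.drop = nn.Dropout(dropout)           # dropout rate 0.4
        self.fc = nn.Linear(2 * hidden, n_classes)  # 2x: forward + reverse

    def forward(self, x):                         # x: (batch, 50 frames, 1017)
        out, _ = self.lstm(x)
        return self.fc(self.drop(out[:, -1, :]))  # class scores from last step

model = BiLSTMClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM, learning rate 0.001
criterion = nn.CrossEntropyLoss()  # folds in the softmax/classification layers
# Training: 1 epoch, minibatch size 128, sequences truncated/padded to 50 frames.
```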

Whole blood samples were obtained from study participants in K2EDTA spray-coated tubes to prevent clotting and were processed within 4 h of collection. Video holograms of thin blood smears were recorded using the system in Fig. 1(b) for 10 s at 20 frames per second. Following segmentation, the total dataset in this study consisted of 1474 RBCs, with 840 RBCs coming from 10 COVID positive subjects and 634 RBCs coming from 14 healthy healthcare workers. The COVID positive samples came from consenting hospitalized patients at UConn Health Center and were collected on days 3, 6, and 9 of each patient’s stay. Healthcare workers were considered healthy and eligible for inclusion in this study if they had a recent negative PCR test along with negative antibody serology results at the time of blood draw, or if they had tested PCR positive at least 90 days prior to the date of the blood draw. This study was conducted in accordance with UConn Health and UConn Storrs Institutional Review Board policies. All available clinical data for the human subjects in this study are provided in Tables S1 and S2 in Supplement 1.

Table 1. Classification Results for COVID Determination of RBCs

Table 2. Patient Level Classification Using a 0.5 Classification Threshold

The classification is carried out using a patient-wise cross-validation procedure, wherein each individual in the dataset is considered once as the test subject, with the remaining data used for training. During each training iteration, the dataset is balanced by removing instances of the majority class to prevent undue bias. The confusion matrix is provided in Table 1. We achieve 67.44% accuracy for all cells in the dataset, along with a 64.64% sensitivity (true positive rate) and 71.14% specificity (true negative rate). Furthermore, we achieve an area under the receiver operating characteristic curve (AUC) of 0.7373 and a Matthews correlation coefficient (MCC) of 0.3543. Given that the accuracy, AUC, and MCC of our classifier are all well above chance, we can be confident that the network is learning to distinguish between healthy and COVID positive RBCs. However, the goal of any diagnostic or disease screening system is ultimately to provide the health status of an individual. To this end, we classify each individual based on the simple majority of their cells’ classifications, with any tie classified as COVID positive. The results of this patient level classification are provided by the confusion matrix in Table 2, and a sketch of the evaluation procedure follows.
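Below is a sketch of this patient-wise (leave-one-patient-out) evaluation with class balancing and majority voting; train_fn and predict_fn are hypothetical stand-ins for Bi-LSTM training and per-cell inference.

```python
import numpy as np

def balance_classes(idx, labels, seed=0):
    """Down-sample the majority class among the given training indices."""
    rng = np.random.default_rng(seed)
    pos, neg = idx[labels[idx] == 1], idx[labels[idx] == 0]
    n = min(len(pos), len(neg))
    return np.concatenate([rng.choice(pos, n, replace=False),
                           rng.choice(neg, n, replace=False)])

def patient_cross_validation(features, cell_labels, patient_ids, train_fn, predict_fn):
    """Leave-one-patient-out evaluation with majority-vote patient calls.

    features: list of (1017, f) matrices, one per cell; cell_labels and
    patient_ids: arrays aligned with that list. train_fn/predict_fn are
    hypothetical hooks for the Bi-LSTM classifier.
    """
    patient_calls = {}
    for pid in np.unique(patient_ids):
        test = patient_ids == pid
        # Balance the training set by removing majority-class instances.
        idx = balance_classes(np.where(~test)[0], cell_labels)
        model = train_fn([features[i] for i in idx], cell_labels[idx])
        cell_preds = predict_fn(model, [features[i] for i in np.where(test)[0]])
        # Simple majority of the cells' calls; ties count as COVID positive.
        patient_calls[pid] = int(np.mean(cell_preds) >= 0.5)
    return patient_calls
```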

From Table 2, we achieve a patient level classification accuracy of 87.50% (80% sensitivity, 92.86% specificity), with 21/24 individuals correctly classified in terms of COVID-19 disease status; 8/10 COVID positive and 13/14 healthy individuals were correctly classified. The AUC for patient level classification was 0.9393, and the MCC was 0.7419. Furthermore, using an optimal threshold of 0.525, determined from AUC analysis, the classification accuracy is 91.67% with an MCC of 0.8366. These results indicate an ability to reliably distinguish between the healthy and COVID positive participants in this study.
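The Letter does not state the criterion behind the 0.525 threshold; one common choice is Youden’s J statistic over the ROC curve, sketched below with hypothetical per-patient scores (the fraction of a subject’s cells called positive).

```python
import numpy as np
from sklearn.metrics import roc_curve, matthews_corrcoef

# Hypothetical inputs: y_true holds each subject's ground truth
# (1 = COVID positive); scores hold the fraction of each subject's
# cells classified as positive.
y_true = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.80, 0.55, 0.40, 0.10, 0.30, 0.45])

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = thresholds[np.argmax(tpr - fpr)]  # Youden's J; one common optimality criterion
y_pred = (scores >= best).astype(int)
print(f"threshold={best:.3f}, MCC={matthews_corrcoef(y_true, y_pred):.3f}")
```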

In light of these promising preliminary results, several key limitations of this study must be discussed. First, all COVID positive data in this study came from subjects who had been hospitalized for at least three days. As such, our COVID positive cohort contains only individuals having at least a moderate case of COVID-19. This evaluation does not allow us to speculate on the performance of the system in asymptomatic or mild cases of COVID-19, which were not included in our study, though previous research has suggested the impact on RBCs becomes more apparent as disease severity increases [3–5]. Furthermore, we cannot know how early in the disease progression such a system would be beneficial. It is also important to reiterate that this disease screening system examines the behavior of RBCs in time to determine whether an individual is healthy or infected. This is in contrast to diagnostic systems such as PCR testing, which looks for the genetic material (RNA) of the virus, or serology testing, which checks antibody levels [1]. The advantages of the presented system include cost, accessibility, and time to results, but further investigations with larger patient pools, including mild or asymptomatic infections, are needed to validate the system performance more rigorously.

In summary, we have proposed a low-cost, portable, 3D-printed DHM system using deep learning for classification of RBCs from healthy and COVID-19 positive subjects. The shearing-based holographic system has a measured temporal stability of 2.515 nm, making it suitable for the study of RBC membrane fluctuations. Our dataset consisted of 1474 RBCs from 10 COVID-19 positive individuals and 14 healthy healthcare workers. Video holograms of RBCs were recorded on-site using the compact microscope, then cells were individually segmented, and features were extracted at each time frame of the video data. Time-varying feature vectors were input into a Bi-LSTM neural network that correctly classified 67.44% of all cells in this study. Using the simple majority of cell classifications for each individual, our system correctly identified 87.50% of all individuals (21/24; 8/10 COVID positive, 13/14 healthy) in this study. These results provide strong evidence of differences in RBC behavior between the two studied populations and of the ability to classify individuals based on RBC examination. Future work entails continued investigation of this approach for low-cost rapid COVID screening, which may be beneficial for under-resourced healthcare systems, and exploration of different cell types, such as white blood cells, for COVID classification [31].

Funding

UConn Office of Vice President of Research (COVID-RSF).

Acknowledgment

We thank Mike Kleinberg and the staff of Dr. Shen’s laboratory for clinical research support. A special thanks to UCHC healthcare workers for volunteering to take part in this study and for their ongoing work during the COVID-19 pandemic. T. O’Connor acknowledges support through the GAANN fellowship.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. F. Cui and H. S. Zhou, Biosens. Bioelectron. 165, 112349 (2020).
2. “Interim guidance for antigen testing for SARS-CoV-2,” https://www.cdc.gov/coronavirus/2019-ncov/lab/resources/antigen-tests-guidelines.html.
3. B. M. Henry, J. L. Benoit, S. Benoit, C. Pulvino, B. A. Berger, M. H. S. de Olivera, C. A. Crutchfield, and G. Lippi, Diagnostics 10, 618 (2020).
4. T. Thomas, D. Stefanoni, M. Dzieciatkowska, A. Issaian, T. Nemkov, R. C. Hill, R. O. Francis, K. E. Hudson, P. W. Buehler, J. C. Zimring, E. A. Hod, K. C. Hansen, S. L. Spitalnik, and A. D’Alessandro, J. Proteome Res. 19, 4455 (2020).
5. A. Berzuini, C. Bianco, A. C. Migliorini, M. Maggioni, L. Valenti, and D. Prati, Blood Transfus. 19, 34 (2021).
6. U. Schnars and W. Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques (Springer, 2005).
7. A. Anand, V. Chhaniwal, N. Patel, and B. Javidi, IEEE Photon. J. 4, 1456 (2012).
8. A. Anand, V. Chhaniwal, and B. Javidi, APL Photon. 3, 071101 (2018).
9. O. Matoba, X. Quan, P. Xia, Y. Awatsuji, and T. Nomura, Proc. IEEE 105, 906 (2017).
10. A. Anand, I. Moon, and B. Javidi, Proc. IEEE 105, 924 (2017).
11. A. Doblas, E. Roche, F. Ampudia-Blasco, M. Martinez-Corral, G. Saavedra, and J. Garcia-Sucerquia, J. Microsc. 261, 285 (2015).
12. B. Javidi, A. Markman, S. Rawat, T. O’Connor, A. Anand, and B. Andemariam, Opt. Express 26, 13614 (2018).
13. T. O’Connor, A. Anand, B. Andemariam, and B. Javidi, Biomed. Opt. Express 11, 4491 (2020).
14. K. Jaferzadeh, I. Moon, M. Bardyn, M. Prudent, J. Tissot, B. Rappaz, B. Javidi, G. Turcatti, and P. Marquet, Biomed. Opt. Express 9, 4714 (2018).
15. D. Midtvedt, E. Olsen, and F. Hook, Nat. Commun. 10, 340 (2019).
16. F. Dubois, C. Yourassowsky, O. Monnom, J. Legros, O. Debeir, P. Van Ham, R. Kiss, and C. Decaestecker, J. Biomed. Opt. 11, 054032 (2006).
17. M. Hejna, A. Jorapur, J. S. Song, and R. L. Judson, Sci. Rep. 7, 11943 (2017).
18. A. S. Singh, A. Anand, R. A. Leitgeb, and B. Javidi, Opt. Express 20, 23617 (2012).
19. Y. Jo, S. Park, J. Jung, J. Yoon, H. Joo, M. Kim, S. Kang, M. C. Choi, S. Y. Lee, and Y. Park, Sci. Adv. 3, e1700606 (2017).
20. S. Lee, H. Park, K. Kim, Y. Sohn, S. Jang, and Y. Park, Sci. Rep. 7, 1039 (2017).
21. R. Shukla and D. Malacara, Opt. Lasers Eng. 26, 1 (1997).
22. S. De Nicola, A. Finizio, G. Pierattini, P. Ferraro, and D. Alfieri, Opt. Express 13, 9935 (2005).
23. R. Goldstein, H. Zebker, and C. Werner, Radio Sci. 23, 713 (1988).
24. P. Girshovitz and N. T. Shaked, Biomed. Opt. Express 3, 1757 (2012).
25. Y. Tang, in Workshop on Representation Learning (ICML, 2013).
26. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4700–4708.
27. S. Hochreiter and J. Schmidhuber, Neural Comput. 9, 1735 (1997).
28. M. Schuster and K. Paliwal, IEEE Trans. Signal Process. 45, 2673 (1997).
29. Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015).
30. S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. (Prentice Hall, 1999).
31. M. Ugele, M. Weniger, M. Stanzel, M. Bassler, S. W. Krause, O. Friedrich, O. Hayden, and L. Richter, Adv. Sci. 5, 1800761 (2018).

Supplementary Material (3)

Supplement 1: Supplemental data for “Digital holographic deep learning of red blood cells for rapid COVID-19 screening.”
Visualization 1: Video reconstructions of red blood cells from healthy subjects, captured by the field-portable, compact shearing digital holographic microscope.
Visualization 2: Video reconstructions of red blood cells from COVID-19 patients, captured by the field-portable, compact shearing digital holographic microscope.

