Optica Publishing Group

Opportunities of optical and spectral technologies in intraoperative histopathology

Open Access

Abstract

Modern optical and spectral technologies are powerful approaches for the molecular characterization of tissues, enabling not only the delineation of pathological tissue but also label-free grading and staging of tumors in terms of computer-assisted histopathology. First, currently used tools for intraoperative tumor assessment are described. Next, the requirements for intraoperative tissue visualization from medical and optical points of view are specified. Then, optical and spectral techniques are introduced that are already approved or close to being used in standard clinical practice for ex vivo and in vivo monitoring, and proof-of-concept studies utilizing linear and nonlinear spectroscopy and imaging modalities are presented. Combining several spectroscopic mechanisms in multi-contrast approaches constitutes a further advance. Modern artificial intelligence and deep learning concepts have emerged to analyze spectroscopic and imaging datasets and have contributed to the progress of each technique. Finally, an outlook on opportunities and prospects of clinical translation is given.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION TO THE MEDICAL PATHOLOGICAL GOLD STANDARD AND ITS LIMITATIONS

Surgical excision is an established method to treat benign and malignant tumors. It was reported in 2020 [1] that 45% of patients diagnosed with malignant neoplasms in England were treated surgically. The main aims of surgery are to decrease the morbidity and mortality of patients despite its complications and to retain quality of life. Complete surgical removal of tumors with tumor-free margins is challenging, in particular if eloquent parts of the affected organ are to be preserved. Matters are further complicated by the fact that every tumor type has unique characteristics. Common challenges shared by most types of solid tumors include the detection of small primary, synchronous, or metachronous malignant deposits, the identification of affected lymph nodes, the precise delineation of tumor margins, and the detection of residual malignant tissues during surgical resection. Hence, an unmet need exists for techniques that assist the surgeon to decide precisely and intraoperatively how much tissue should be resected without leaving any lesion behind, to preserve normal tissues and important structures as much as possible, and to improve the surgical outcome in general.

Successful modalities for intraoperative tumor visualization need to overcome the challenges that surgeons face and should fulfill requirements regarding efficacy, speed, cost, presentation of information, complexity, safety, and the need for labels. The surgeon needs to know the macroscopic characteristics of the tumor, i.e., its location, size, and extent in three dimensions, its borders, the neighboring tissues, the blood and lymphatic supply of the tumor and the neighboring tissues, and nodal involvement, as well as the microscopic characteristics, including the type of the tissue, the grade of differentiation, and any residual microscopic tumor tissues. Although many of these characteristics should be known preoperatively, this information must also be available intraoperatively. A fixed set of criteria is difficult to define because certain criteria such as penetration and resolution may depend on the type of surgery and the type of application (in vivo versus ex vivo). Considering that a single modality hardly fulfills all requirements, combinations of modalities might be even more promising.

This review starts with a description of the state-of-the-art methods for intraoperative tumor assessment. Then, optical and spectral techniques are described that are already licensed or close to being used in standard clinical practice, followed by proof-of-concept studies of emerging standalone or multi-contrast techniques for intraoperative tissue assessment. All modalities can benefit from artificial intelligence (AI) and deep learning (DL) concepts. Current challenges and potential solutions are discussed to overcome technological hurdles and provide prospects for future clinical translation.

2. CURRENTLY USED METHODS FOR INTRAOPERATIVE TUMOR ASSESSMENT

Table 1 summarizes several methods currently used for intraoperative assessment of pathologies together with their advantages and limitations. Whereas magnetic resonance imaging (MRI), ultrasonography, and computed tomography offer intraoperative in vivo applications, frozen section analysis and imprint cytology (IC) are ex vivo procedures that are performed on excised tissues during surgery, usually by trained experts. Resolution, speed, and penetration are discussed in the text.


Table 1. Currently Used In Vivo and Ex Vivo Methods for Intraoperative Tumor Assessment and Their Advantages and Limitations

A. Intraoperative Magnetic Resonance Imaging

MRI scanners use strong magnetic fields, magnetic field gradients, and radio waves to generate images of the organs in the body. MRI provides better soft-tissue contrast, e.g., in the brain or abdomen, than x-ray-based computed tomography. Intraoperative MRI (iMRI) refers to an operating room configuration that enables surgeons to image the patient with an MRI scanner while the patient is undergoing surgery, most often in the context of brain tumors. One study of 288 patients at six neurosurgical centers concluded that gross total resection of low-grade gliomas is a positive prognostic factor and that progression-free survival does not depend on field strength [2]. iMRI is also applied during transsphenoidal procedures in which instruments are inserted through the nose to remove tumors in or near the pituitary gland [3]. Other fields of application include colorectal liver metastasis [4] and breast cancer [5]. Drawbacks of iMRI remain, such as the poorer image resolution of low-field units of less than 1 tesla (T), the high cost of specialized operating suites and instrumentation, and longer anesthesia and operating room times. Higher field strengths of 1.5 and 3 T provide better spatial and contrast resolution. The utility of high-resolution 3T-iMRI was demonstrated in 20 patients who underwent transsphenoidal resection [6]. MRI was obtained with 3 mm slice thickness before and after injection of a contrast agent, and the average iMRI acquisition time was 21.5 min (range 8–47 min). However, 3T-iMRI requires that the magnet be housed in an adjacent room; to obtain an image, either the patient is moved to the magnet or the magnet is moved to the patient via ceiling-mounted rails.

B. Intraoperative Ultrasonography

The usage of ultrasound to produce visual images for medicine is called medical ultrasonography. Relatively low-cost portable instruments that can be brought to the bedside, high acquisition speed, and good penetration depth make ultrasonography suitable for intraoperative tumor assessment. Ongoing software and technical developments enable high spatial resolution with tissue-differentiating properties [7]. Typical lateral and axial resolutions for frequencies between 2 and 15 MHz are 3.0–0.4 mm and 0.8–0.15 mm, respectively. Unfortunately, high-frequency ultrasound provides low tissue penetration. The depth of penetration is limited to approximately 200 wavelengths, corresponding to a depth of 12 cm for a 2.5 MHz transducer and 2 cm for a 15 MHz transducer. Many studies have described the intraoperative use of ultrasonography, especially for neurological tumors [8]. A retrospective study, albeit from a single center, showed that intraoperative ultrasonography (IOUS) significantly increased the rate of gross total resection of pituitary adenomas and decreased operation time and blood loss [9]. Another application of IOUS was reported for the assessment of hepatic tumors [10], highlighting its important role in localizing and characterizing lesions by providing high lesion-to-liver contrast. A multicentric randomized controlled trial showed that IOUS in breast surgery can significantly decrease the proportion of tumor-involved resection margins [11]. Although these works reported the wide use of IOUS, it still faces many challenges, including the appearance of artefacts, the dependence on the experience of the operator, the inaccessibility of some locations in the body (e.g., the air-filled lung), and the lack of chemical information.
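The 200-wavelength rule of thumb above maps directly onto the quoted depths; a minimal sketch of the arithmetic, assuming a typical soft-tissue speed of sound of 1540 m/s:

```python
# Rule-of-thumb ultrasound penetration (~200 wavelengths) in soft tissue.
# The speed of sound c = 1540 m/s is an assumed typical value, not from the text.

C_TISSUE = 1540.0  # m/s, typical speed of sound in soft tissue

def wavelength_mm(freq_mhz: float) -> float:
    """Acoustic wavelength in soft tissue, in millimeters."""
    return C_TISSUE / (freq_mhz * 1e6) * 1e3

def penetration_cm(freq_mhz: float, n_wavelengths: float = 200.0) -> float:
    """Approximate penetration depth as n_wavelengths * wavelength, in cm."""
    return n_wavelengths * wavelength_mm(freq_mhz) / 10.0

for f in (2.5, 15.0):
    print(f"{f:4.1f} MHz: wavelength = {wavelength_mm(f):.2f} mm, "
          f"depth ~ {penetration_cm(f):.1f} cm")
```

For 2.5 MHz this gives roughly 12 cm and for 15 MHz roughly 2 cm, consistent with the values stated above.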

C. Intraoperative Computed Tomography

Analogous to MRI, computed tomography (CT) is an established clinical imaging modality. A CT scan combines a series of x-ray images taken from different angles around the body and uses computer processing to create detailed internal images. However, only a few studies in the literature report the use of intraoperative computed tomography (iCT) because the development and application of iMRI are more advantageous. A first CT scanner system for intraoperative imaging was developed for neurosurgery in 1991 [12]. Head fixation devices were designed that allowed the patient to be scanned on the operating table preoperatively, intraoperatively, and immediately after surgery. State-of-the-art iCT systems enabled spatial resolutions of 0.8 mm and CT scanning with 0.4 mm slices in 2009 [13]. A newer generation of iCT scanners in cranial neurosurgery offers hardware and software improvements resulting in higher image quality [14]. With the advent of new multidetector iCT, soft tissue can be visualized far better than before, and its use in technically challenging neurosurgeries of 24 cranial and 8 spinal cases was reported [15]. Despite the advantages of CT in medicine, iCT faces many problems, including radiation risk, high cost, duration of acquisition, lack of chemical characterization of the tissue, and the design of the operating theater.

D. Intraoperative Frozen Section

Intraoperative frozen section (IFS) is a procedure that starts with the surgical excision of a part of the lesion. The specimen is frozen, and a tissue section is cut in a cryotome, stained using a rapid protocol, and subsequently assessed under a microscope by a pathologist. The result is a decision about the presence or absence of tumor cells in the specimen. The whole process takes approximately 20 min. If the sample has a positive margin with residual cancer cells, more tissue is resected, and the procedure might be repeated. Time-consuming standard histopathological techniques are applied postoperatively to obtain more accurate results. IFS has been widely used in oncological surgery to maximize tumor resection while preserving functional tissues in the vicinity of resection margins. Colorimetric histology, also termed histoplasmonics, uses full-sized plasmonically active microscope slides to translate subtle changes in the dielectric constant into striking color contrast when tissue sections are placed upon them [16]. This approach can be used as a novel alternative or adjunct to general staining, potentially also in IFS.

During radical prostatectomy, IFS enabled more frequent neurovascular bundle preservation and lower rates of positive surgical margins [17]. Other studies also showed the successful use of IFS to identify the invasion status of lung adenocarcinoma [18], to assess myometrial invasion in patients with endometrial cancer [19], and to diagnose sentinel lymph node metastasis in breast cancer patients [20]. In the context of skin cancer, in particular the most common basal cell carcinoma (BCC), a variant of IFS is known as Mohs surgery. Here, surgery is controlled by microscopy to obtain complete margin control during cancer removal. The cure rate with Mohs surgery cited by most studies is between 97% and 99.8% for primary BCC [21]. Disadvantages of IFS include underdiagnosis and overdiagnosis of cancer, added costs, deterioration of the sample at room temperature, prolonged operation time, dependency on the experience of the surgeon and the pathologist, and, most importantly, unrepresentative sampling.

E. Imprint Cytology

The principle of IC is to prepare a surgical specimen for pathological assessment by pressing a glass slide carefully against freshly cut tissue; the imprint is fixed, stained with hematoxylin and eosin (H&E), and evaluated for suspected malignancies by a pathologist. IC has been used for intraoperative assessment of the sentinel lymph node in breast cancer and appeared to be more sensitive than IFS in detecting sentinel lymph node metastatic activity [22]. IC was evaluated for upper aerodigestive tract malignancies [23] and head and neck malignancies [24] for intraoperative consultation in resource-limited countries where IFS is not available. This method provides immediate results at lower cost and with less need for infrastructure than IFS. However, IC cannot provide information about the depth of the lesion, and some tumors cannot be assessed.

3. OPTICAL AND LINEAR SPECTROSCOPIC TECHNIQUES FOR INTRAOPERATIVE TUMOR VISUALIZATION

Several optical and spectral techniques have been suggested for applications in intraoperative diagnosis. Table 2 gives an overview of their categories. The gray-shaded entries are already approved for tissue monitoring in clinical practice. More optical and spectral techniques are still under research and development. An overview of imaging techniques for head and neck cancer has also been presented in the context of perspectives, potentials, and trends of ex vivo and in vivo optical molecular pathology [25]. Linear spectroscopic techniques are based on the first-order susceptibility, which means that the signal intensities scale linearly with the intensity of the excitation. This holds true for the continuous emission of halogen light sources, light-emitting diodes, lasers, and thermal radiation sources probing light–matter interaction due to absorption, scattering, and fluorescence. Generally, in vivo applications in open surgery require coupling with surgical microscopes or wide-field video cameras, and minimally invasive applications require coupling with endoscopes or fiber optic probes. Ex vivo applications constitute a complementary approach to histopathology, IFS, and IC with the advantage of easier sample preparation and label-free contrast (with the exception of administered fluorescence labels). The wavelength determines penetration into tissue, which is lower for blue/green light and deeper for red/near-infrared (NIR) light due to scattering and absorption. The resolution depends on the optical magnification and confocality and is limited for microscopes by diffraction to near 300 nm.
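The roughly 300 nm diffraction limit quoted above follows from the Abbe criterion d = λ/(2·NA); a minimal sketch with illustrative values (green light and a high-NA objective, both assumptions, not taken from the text):

```python
# Abbe diffraction limit d = lambda / (2 * NA): why optical microscope
# resolution bottoms out near 300 nm for visible light. Values are illustrative.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Lateral diffraction-limited resolution in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (550 nm) with a high-NA objective (NA = 0.9), assumed values:
print(f"diffraction limit ~ {abbe_limit_nm(550, 0.9):.0f} nm")
```

With these assumed parameters the limit comes out just above 300 nm, matching the order of magnitude stated above.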


Table 2. Optical and Spectroscopic Techniques Approved in Clinical Practice (Shaded Gray) and Under Research for Intraoperative Tissue Assessment, and Their Progress by Deep Learning

All optical and photonic methods benefit from AI and deep learning (DL) concepts, in particular if big data are available from large numbers of independent samples. AI is defined as a computer algorithm that exhibits intelligence via decision-making. Machine learning is a branch of AI comprising algorithms that enable systems to learn from different types of datasets. DL is a branch of machine learning that uses neural networks with several layers to analyze data and provide output accordingly. DL applications in biomedical optics with a particular emphasis on image formation and analysis have been described for microscopy [26], optical coherence tomography (OCT), photoacoustic imaging (PAI), fluorescence lifetime imaging (FLIm), and more [27]. Each type of tissue and each organ has a unique spectral fingerprint in, e.g., hyperspectral imaging (HSI), Raman, and infrared (IR) data. With advancements in data processing, it should be possible to distinguish even more tissue types and features that were previously invisible to clinicians for diagnostic purposes. Each of the following subsections evaluates how the respective modality can meet the requirements of intraoperative histopathology. Augmentation by machine learning, as included in Table 2, can close potential gaps.

A. Optical Coherence Tomography

OCT is the optical analog of ultrasound and is based on low-coherence interferometry of light, typically in the NIR range to enhance penetration into the scattering medium. OCT probes the morphology of biological tissue (e.g., mucosa, ocular tissue) by optical scattering without contrast agents. OCT offers a penetration depth of 1–3 mm, an axial resolution of 0.5–10 µm, and a lateral resolution of 1–10 µm. Approved versions of OCT have been introduced for various clinical applications [28]. The median time to set up an intraoperative OCT system was 1.7 min, and the median time that surgery was paused was measured at 4.9 min per scan session [29].
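The quoted axial resolution range is governed by the source bandwidth: for a Gaussian spectrum, the coherence-length relation Δz = (2 ln 2/π)·λ₀²/Δλ applies. A small sketch with illustrative source parameters (840 nm center wavelength, 50 nm bandwidth, both assumed, not taken from the cited studies):

```python
import math

# OCT axial resolution from the source coherence length for a Gaussian
# spectrum: dz = (2 ln 2 / pi) * lambda0^2 / d_lambda.

def oct_axial_resolution_um(center_nm: float, bandwidth_nm: float) -> float:
    """Axial resolution in micrometers for a Gaussian source spectrum."""
    return (2.0 * math.log(2.0) / math.pi) * center_nm**2 / bandwidth_nm / 1e3

# Illustrative source: 840 nm center wavelength, 50 nm bandwidth (assumed):
print(f"axial resolution ~ {oct_axial_resolution_um(840, 50):.1f} um")
```

This assumed source yields about 6 µm, i.e., within the 0.5–10 µm range stated above; broader bandwidths push the resolution toward the lower end of that range.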

Intraoperative OCT is frequently used to guide ocular surgeries in the cornea, cataract, refractive, glaucoma, and various retinal conditions [30,31]. Functional extensions such as Doppler OCT are noninvasive imaging techniques that visualize and quantify microvasculature as well as elasticity in vivo and have been widely used in research and clinical applications, particularly in ophthalmology (see Fig. 1) [32]. OCT systems based on photonic integrated circuits (PICs) could enable a significant miniaturization of complex systems with a high degree of integration as well as low cost of goods [33]. Translational applications of OCT imaging for tissue classification in the upper aerodigestive tract, focusing on the larynx, have been described; delivering the beam through a single-mode optical fiber allows endoscopic approaches [34]. Besides this, the feasibility and diagnostic utility of full-field OCT for rapid intraoperative diagnosis was studied to examine breast and lymph node specimens during breast cancer surgery [35]. The results of 173 breast biopsies and 141 resected lymph nodes from 158 patients were compared to conventional processing and H&E staining. Sensitivity and specificity in breast cancer diagnosis were both near 85%, while nodal assessments showed a lower sensitivity of 67% and a specificity of 80%.
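Sensitivity and specificity as quoted throughout this review are simple confusion-matrix ratios; a minimal sketch with hypothetical counts chosen to reproduce the 67%/80% nodal figures above:

```python
# Diagnostic performance metrics from a confusion matrix.
# The counts below are hypothetical, not the actual study data.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical nodal assessment: 100 metastatic nodes with 67 detected,
# 100 benign nodes with 80 correctly called negative.
print(f"sensitivity = {sensitivity(67, 33):.0%}, "
      f"specificity = {specificity(80, 20):.0%}")
# prints: sensitivity = 67%, specificity = 80%
```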


Fig. 1. (A) Cross-sectional raw data showing elastic wave propagation of retinal layers at different time points for an ex vivo pig retina. (B) Elasticity map in rabbit retina in vivo. A different stiffness was demonstrated in different layers of the retina. (C) Doppler OCT image of rabbit cornea and crystalline lens. (D) and (E) Spatiotemporal Doppler OCT images of cornea and lens, respectively. (F) OCT structural and (G) Doppler OCT images of a human cadaver coronary artery. (H) Histological image and (I) close-up view of an atherosclerotic lesion. The red-colored region denoted by the blue arrow in (I) exhibits smaller phase and displacement and, therefore, indicates less elastic, stiffer tissue such as plaques. Scale bars, 1 mm. (Reprinted from [32] following the terms of Creative Commons CC-BY 4.0 license.)


An ultrahigh-resolution OCT system was used to develop a classification scheme for four breast tissue types based on convolutional neural networks (CNNs) [36]. The proposed CNN classified cancer versus noncancer with 94% accuracy. Canine soft tissue sarcoma was intraoperatively assessed by DL-enhanced OCT [37]. An automatic diagnosis system utilizing a ResNet-50 network was developed that can assist clinicians in real-time for OCT image interpretation of tissues at surgical margins. After training on 80 cancer and 80 normal images, the proposed method achieved an average accuracy of 97% on the validation dataset (20 cancer, 20 normal) and correct differentiation of all soft tissue sarcoma images of an independent test dataset (10 cancer, 10 normal). These studies demonstrated that DL approaches significantly improve classification performance compared to earlier reports [35].

B. Narrowband Imaging

Similar to OCT, narrowband imaging (NBI) [38] can also enhance the visualization of superficial tissue such as mucosa without contrast media. NBI employs blue (415 nm) and green (540 nm) wavelengths to improve the contrast of blood vessels because hemoglobin absorbs at these wavelengths. Depending on the confocal microscopic or non-confocal implementation, the typical penetration depth of optical absorption, scattering, and fluorescence is limited to 0.2 mm, the axial resolution to 3–10 µm, and the lateral resolution to 0.3–3 µm. NBI images can be recorded at video frame rates of 38 Hz [39].

Intraoperative NBI is frequently applied in transoral laser microsurgery of head and neck cancers. The usefulness of NBI for intraoperative assessment of the larynx mucosa in terms of specifying surgical margins was reported for 90 samples of suspected areas from 44 patients [40]. The authors concluded that NBI confirms suspicions raised under white light (WL) guidance and, importantly, reveals microlesions beyond the scope of WL. The combination of contact endoscopy with NBI allowed a highly contrasted, real-time intraoperative visualization of vascular changes of the vocal folds [41]. The evaluation of NBI data confirmed the association of perpendicular vascular changes with vocal fold cancer, dysplastic lesions, and papillomatosis. The application of laparoscopic NBI was reported for examining 35 subcapsular hepatic lesions and distinguishing between malignant and benign tumors [42]. Abnormal microvascular patterns were found in 90.9% of hepatocellular carcinomas and 77.8% of colorectal liver metastases, whereas neither normal sites nor benign lesions displayed microvascular abnormality.

A DL computer-aided diagnosis (CAD) system characterized tissue in NBI zoom imagery of Barrett’s esophagus (BE) [39]. Initial training started with 494,364 images of general endoscopic imagery, and the CAD system was further refined with three subsets. In total, 30,021 individual video frames were analyzed by the CAD system, yielding an accuracy of 83%, a sensitivity of 85%, and a specificity of 83%. Laryngeal squamous cell carcinoma (LSCC) was detected in both WL and NBI videolaryngoscopies based on the You-Only-Look-Once (YOLO) CNN [43]. Exactly 624 LSCC video frames were retrospectively extracted from 219 patients. The best LSCC detection results achieved a promising precision of 66% (positive predictive value). The average computation time per video frame was 26 ms. The same group of authors designed a novel DL segmentation model (SegMENT) relying on a CNN architecture. SegMENT outperformed other DL models on the LSCC datasets of NBI videoendoscopy of the upper aerodigestive tract [44]. These studies demonstrated that DL approaches enable analyzing larger numbers of video frames for training and validation of models and computing single video frames in less time, which is key to real-time applications.

C. Hyperspectral Imaging

The principle of hyperspectral imaging (HSI) is to scan multiple wavelengths in a digital image. Reflectance, absorption, and scattering of visible to NIR light over a range of 400–1000 nm can provide diagnostic information about tissue physiology, morphology, and composition. The longer the wavelength, the deeper the light can penetrate into the sample. NIR sensors can capture images from up to 6 mm under the surface. Each wavelength of light interacts differently with the material depending on the chemical composition, e.g., the amounts of water, oxygenated, deoxygenated, and total hemoglobin, lipids, and other molecules. Converting the reflectance values into colors creates a visual representation in image form. Challenges include acquiring HSI datasets at high resolution and high speed, rapidly processing vast amounts of data, and establishing a spectral database for relevant molecular biomarkers, tissue types, and their physiological properties. To realize HSI for the characterization of healthy and diseased tissues during surgery, video systems and endoscopes are combined with medical spectral imaging cameras scanning multiple wavelengths. Technical features of a medical HSI camera system include a spectral range of 500–1000 nm, a spectral resolution of 5 nm, a working distance of 50 cm, an HSI image resolution of 640 × 480 pixels, an image capture rate of 100 frames per second, an image capture time of 6.4 s, a capture rate of 2 per min, and an HSI image size of 16 cm × 11.5 cm [45]. HSI supports perfusion visualization based on the fluorophore indocyanine green (ICG, see also Section 3.F) [46]. But it goes further, offering quantified perfusion data [47] and localization of nerves [48] without the need for any color agents. Surgeons can use the HSI data to identify various types of tissue and determine the transection margin during colorectal resection [49]. Another intraoperative application of HSI is to visualize the delineation of brain tumors [50]. The developed demonstrator was composed of two hyperspectral cameras covering an extended spectral range of 400–1700 nm. An overview of the capabilities, current limitations, and future directions of HSI for intraoperative guidance was reported [51], and a list of more than 140 literature entries is given in [45].
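The oxy-/deoxyhemoglobin quantification behind such perfusion maps reduces, in its simplest form, to linear unmixing of absorbances at two wavelengths. A sketch in pure Python with placeholder extinction coefficients (the real tabulated values are not reproduced here):

```python
# Two-wavelength estimate of hemoglobin oxygen saturation, the kind of linear
# spectral unmixing behind HSI perfusion maps. Extinction coefficients below
# are placeholders, not tabulated hemoglobin values.

def unmix_so2(a1, a2, e_hbo2_1, e_hb_1, e_hbo2_2, e_hb_2):
    """Solve the 2x2 system A = E @ c for the two chromophore concentrations
    (Cramer's rule), then return SO2 = c_HbO2 / (c_HbO2 + c_Hb)."""
    det = e_hbo2_1 * e_hb_2 - e_hb_1 * e_hbo2_2
    c_hbo2 = (a1 * e_hb_2 - e_hb_1 * a2) / det
    c_hb = (e_hbo2_1 * a2 - a1 * e_hbo2_2) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic check: build absorbances for a known 75% saturation and recover it.
e = dict(e_hbo2_1=1.0, e_hb_1=3.0, e_hbo2_2=2.0, e_hb_2=1.0)
c_true = (0.75, 0.25)  # (c_HbO2, c_Hb), arbitrary units
a1 = e["e_hbo2_1"] * c_true[0] + e["e_hb_1"] * c_true[1]
a2 = e["e_hbo2_2"] * c_true[0] + e["e_hb_2"] * c_true[1]
print(f"recovered SO2 = {unmix_so2(a1, a2, **e):.2f}")
# prints: recovered SO2 = 0.75
```

Real HSI pipelines fit many wavelengths per pixel by least squares rather than two, but the per-pixel arithmetic is of this form.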

A DL-based framework was presented to process hyperspectral images of in vivo human brain tissue [52]. The proposed framework was able to generate a thematic map in which the parenchymal area of the brain was delineated and the location of the tumor was identified, providing guidance to the operating surgeon for a successful and precise tumor resection. An overall accuracy of 80% was achieved for multiclass classification, improving the results obtained with traditional support vector machine (SVM)-based approaches. HSI combined with DL allowed the automatic recognition of nerves and other tissue types in in vivo hyperspectral images [48]. In an animal model, eight anesthetized pigs underwent neck midline incisions, exposing several structures (nerve, artery, vein, muscle, fat, skin). A CNN achieved an overall average sensitivity of 91% and a specificity of 100%, validated with leave-one-animal-out cross-validation. The potential of HSI to evaluate liver viability in a hepatic artery occlusion model was combined with a DL-based model using CNNs, and an AI score was derived [53]. In the context of HSI, DL approaches improve classification accuracies and the automated recognition of tissue types.

D. Infrared Spectroscopy

IR spectroscopy probes vibrations by absorption of IR radiation. Due to the larger absorption cross sections in the mid-IR range (400–4000 cm⁻¹) compared to Raman scattering cross sections (see Section 3.G), the sensitivity is higher. However, the penetration of IR radiation is limited to a few micrometers, in particular for hydrated samples. Therefore, thin tissue sections (ca. 10 µm) are prepared on IR-transparent substrates (e.g., CaF₂) for measurement in transmission mode. Thicker samples can be investigated by IR spectroscopy in reflection mode. For samples in contact with an attenuated total reflection element, the penetration of IR radiation is limited to a few micrometers due to the evanescent wave propagation. As fundamental vibrations in the fingerprint range are more specific than combination and overtone vibrations, mid-IR spectroscopy is more frequently used for spectral histopathology than NIR spectroscopy (4000–12,500 cm⁻¹). IR images were first collected using Fourier transform infrared (FTIR) spectrometers with 64 × 64 or 128 × 128 MCT-based, liquid-nitrogen-cooled focal plane array detectors. Classification models were trained to determine the tumor grade of glioma brain tumors [54] and the primary tumor of brain metastases [55,56] using FTIR images. Relying on FTIR spectroscopic imaging and computation, stainless computed histopathology enabled a rapid, digital, quantitative, and non-perturbing visualization of morphology and multiple molecular epitopes simultaneously that mimics H&E and various immunohistochemical stains [57]. High-definition FTIR imaging accurately recognized eight distinct classes (naïve and memory B cells, T cells, erythrocytes, connective tissue, fibrovascular network, smooth muscle, and light- and dark-zone activated B cells) in healthy, reactive, and malignant lymph node biopsies using a random forest classifier [58]. FTIR imaging with machine learning was used to provide an objective classification pipeline for esophageal tissues pathologically classified as normal, esophagitis, dysplasia, Barrett’s disease, and cancer. Classification performances approached a receiver operating characteristic area under the curve (ROC-AUC) of 0.90 for binary classification tasks (e.g., normal versus Barrett’s), and isolated five-class classification delivered a ROC-AUC of ~0.69 [59].
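The ROC-AUC figures quoted for such classifiers can be computed directly as the probability that a randomly chosen diseased spectrum scores above a randomly chosen normal one (the Mann-Whitney formulation); a sketch with made-up classifier scores:

```python
# ROC area under the curve via pairwise comparisons (Mann-Whitney U
# formulation): the fraction of positive/negative pairs ranked correctly,
# counting ties as half. Scores below are made up for illustration.

def roc_auc(pos_scores, neg_scores):
    """Estimate ROC-AUC as P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for diseased (pos) and normal (neg) spectra:
pos = [0.9, 0.8, 0.7, 0.4]
neg = [0.6, 0.5, 0.3, 0.2]
print(f"AUC = {roc_auc(pos, neg):.3f}")
# prints: AUC = 0.875
```

An AUC of 1.0 means perfect ranking, 0.5 means chance, so the ~0.69 five-class result above sits well below the ~0.90 binary tasks.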

With the advent of mid-IR quantum cascade lasers (QCLs), faster IR imaging instruments were developed. Laser direct infrared (LDIR) microscopy enables discrete frequency infrared (DFIR) imaging 16 times faster than the fastest FTIR imaging instrument [60]. The principle of DFIR is to identify specific spectral features for accurate tissue identification and to probe them by rapid scanning of the QCL instead of collecting a full FTIR spectrum per pixel. Further progress was a confocal design in which refractive IR optics provide high-definition, rapid spatial scanning and discrete spectral tuning using a QCL source at 2 µm pixel size, with a high signal-to-noise ratio above 1300 and a 50-fold speed increase [61]. The performance of this instrument was demonstrated by the accurate analysis of a 100-case breast tissue cohort within one day. Tunable QCLs were combined with a microscope and a 480 × 480 pixel microbolometer array for rapid hyperspectral IR imaging of tissue sections within minutes [62]. This approach was applied to analyze 110 patients with stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity compared to the gold standard histopathology. An IR-optical hybrid (IR-OH) approach was conceptualized that measures molecular composition based on an optical microscope with wide-field interferometric detection of QCL-absorption-induced photothermal expansion [63]. IR-OH was shown to exceed state-of-the-art microscopy in coverage by a factor of 10 and in spatial resolution by a factor of 4, and to improve spectral consistency by mitigating the effects of scattering. IR-OH enables automated histopathological segmentation and the generation of computationally stained images resolving morphological details in color and spatial detail comparable to current pathology protocols, but without stains and human interpretation.

The architecture of CNNs is designed to process both spectral and spatial information, which can significantly improve classifier performance over per-pixel spectral classification. CNNs were reported to improve classifier performance for FTIR image data of six constituents in tissue microarrays [64]. FTIR imaging data of a 148-patient cohort were used to develop a DL classifier to estimate the grade of colorectal cancer [65]. Higher-resolution FTIR images with histopathological contrast were achieved using a deep CNN that integrates spatial and spectral features [66]. A generative adversarial-network-based approach was presented to reconstruct full datasets with low information loss from incomplete spatial and spectral data recorded by DFIR imaging, with a 20-fold speed-up for typical biomedical samples [67]. The application of stacked contractive autoencoders was investigated to preprocess IR microscopic pixel spectra, followed by supervised fine-tuning to obtain neural networks that can resolve tissue structures [68]. The robustness of the resulting classifier was validated by training a network on embedded tissue and transferring it to classify fresh frozen tissue. In the context of IR spectroscopy, DL approaches improve classifier performance by processing both spectral and spatial information and by reconstructing information from incomplete data.


Fig. 2. Photoacoustic (PA), ultrasound (US), and their overlaid images at representative positions along a pig coronary artery. The scale bar of 1 mm (right column) applies to all panels. The blue line denotes the boundary of the lumen, and the green line indicates the boundary of the neo-intima layer (boundary of the initial lumen). The yellow arrowed line in III indicates the acoustic shadowing behind an echogenic calcium nodule. (Reprinted from [73] following the terms of Creative Commons CC-BY 4.0 license.)


E. Photoacoustic Imaging

Photoacoustic (or optoacoustic) imaging (PAI) is based on the photoacoustic effect. Laser pulses illuminate biological tissues, and the energy of the absorbed photons induces a photothermal effect, which causes transient thermoelastic expansion and emission of ultrasonic waves that are detected by ultrasonic transducers. The optical absorption in tissues can be due to endogenous molecules such as hemoglobin or melanin, or to exogenously delivered contrast agents (see also Section 3.F). A photoacoustic tomography (PAT) system uses an unfocused ultrasound detector to acquire the photoacoustic signals, followed by image reconstruction. PAT enables a penetration depth of 50 mm, which is similar to ultrasonography. A photoacoustic microscopy (PAM) system uses a spherically focused ultrasound detector with 2D point-by-point scanning without the need for reconstruction algorithms. A PAM system was developed that enabled an axial resolution near 48 µm and a lateral resolution near 330 nm, and it provided label-free multilayered histology-like imaging of human breast specimens [69]. Ultraviolet laser illumination highlighted cell nuclei, providing contrast comparable to the hematoxylin labeling used in conventional histology. The high correlation of PAM images with conventional histologic images allowed rapid computation of diagnostic features such as nuclear size and packing density. PAT was applied, e.g., to human hepatic malignancies using intraoperative ICG fluorescence imaging [70] and to breast tumor margin assessment [71]. Optoacoustic imaging in gastroenterology can provide detailed information about tumor architecture, perfusion, and inflammation, and recent studies were summarized [72].
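The contrast mechanism described above is commonly summarized by the relation between the initial pressure rise and the local optical absorption; in the standard textbook form (assuming thermal and stress confinement, and not taken from the cited works),

$$p_0(\mathbf{r}) = \Gamma\,\mu_a(\mathbf{r})\,F(\mathbf{r}),$$

where $p_0$ is the initial acoustic pressure, $\Gamma$ the dimensionless Grüneisen parameter, ${\mu_a}$ the optical absorption coefficient, and $F$ the local laser fluence. Image reconstruction in PAT inverts the acoustic propagation of $p_0$ back to the distribution of absorbers.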

Intravascular photoacoustic/ultrasound (IVPA/US) is an emerging hybrid imaging modality that provides specific lipid detection and localization, while maintaining co-registered artery morphology, for diagnosis of vulnerable plaque in cardiovascular disease. A dual-frequency IVPA/US catheter was presented [73]. The low-frequency transducer provided enhanced photoacoustic sensitivity, while the high-frequency transducer maintained state-of-the-art spatial resolution for ultrasound imaging (Fig. 2).

Extraction of relevant tissue parameters from the raw data of PAI requires solving inverse image reconstruction problems, which have proven difficult. The advantages of DL methods for facilitating the clinical translation of PAI were summarized, such as extremely fast computation times and the fact that they can be adapted to any given problem [74,75]. An original paper described the reconstruction of photoacoustic images based on a deep CNN [76]. Artefacts caused by incomplete data were compensated in photoacoustic images by the combination of time-reversal and DL methods.


Fig. 3. Fluorescing bladder tumors, excited at 635 nm and seen in pseudo green color. (Adapted from [82] following the terms of Creative Commons CC-BY 4.0 license.)


F. Fluorescence

In the past decade, fluorescence-based modalities have attracted interest from researchers and clinicians because they allow better localization of lesions and assist surgeons in detecting and resecting tumors more precisely than established methods. The concept of this technique is to probe autofluorophores or to administer a fluorophore that is taken up by cancer cells and distributed to cancer tissue, e.g., by crossing the blood-brain barrier, which is compromised by a brain tumor. Subsequently, a microscope or an endoscopic system with an excitation source and a filter that transmits the fluorescence emission can visualize the tumor intraoperatively.

Autofluorescence is the emission of photons with longer wavelengths by inherent biomolecules after absorption of photons with shorter wavelengths [77]. NADPH, NADH (excitation 340 nm, fluorescence 450 nm), and flavins (excitation 380–490 nm, fluorescence 520–560 nm) are common autofluorophores in biological tissues. Ex vivo and in vivo applications without artificially added fluorescent markers have already been clinically approved. Reduced autofluorescence is usually observed in tumor tissue. However, quenching of autofluorescence by numerous processes limits its specificity. Labels or contrast media can improve specificity, which is utilized in induced fluorescence, confocal laser endomicroscopy, and NIR imaging [78]. So far, only a few markers are certified for in vivo use, including ICG, fluorescein (mainly as its derivative fluorescein isothiocyanate), and 5-aminolevulinic acid (5-ALA) [79]. 5-ALA is a precursor of protoporphyrin IX, which can be detected in tumors after metabolization by its red fluorescence upon excitation with blue light. This approach was applied as an intraoperative imaging method, e.g., to distinguish the tumor from surrounding normal brain in neurosurgery [80] and to detect bladder cancer [81]. A study of 10 patients with superficial bladder cancer who underwent photodynamic diagnosis with the use of a fluorescence video system was presented [82]. The general condition of the bladder was assessed, the main foci of the tumor were determined, and then a tumor resection was performed (see Fig. 3). Further examination and identification of the fluorescing boundaries of the tumor remaining after transurethral resection was performed to decide on a re-resection. Histology-like images by fluorescence confocal microscopy were used for improved detection of low-density tumor cell infiltration. Endomicroscopy implements a fluorescence confocal microscope in endoscopes for, e.g., imaging of the gastrointestinal tract.
The main absorption of ICG lies between 600 and 900 nm, and its main fluorescence emission in the NIR range between 750 and 950 nm. ICG has a half-life of 2.4 min [83], which makes the timing of ICG injection critical and provides fluorescence only for a limited time, e.g., as an intermittent indicator for blood circulation or for intraoperative navigation during sentinel lymph node biopsies.
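The reported 2.4 min half-life implies how quickly the ICG fluorescence window closes. The following minimal Python sketch illustrates this, assuming simple first-order (exponential) clearance; this model is our assumption and is not part of [83]:

```python
import math

HALF_LIFE_MIN = 2.4  # reported half-life of ICG in minutes [83]

def icg_fraction_remaining(t_min: float) -> float:
    """Fraction of the injected ICG remaining after t_min minutes,
    assuming first-order (exponential) clearance."""
    return math.exp(-math.log(2) * t_min / HALF_LIFE_MIN)

# After 5 half-lives (12 min), only about 3% of the dye remains,
# which illustrates why imaging must follow injection promptly.
print(round(icg_fraction_remaining(12.0), 3))
```

Such a back-of-the-envelope estimate motivates the intermittent use of ICG noted above, e.g., repeated injections for repeated perfusion checks.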

A light sheet microscope was designed for slide-free and non-destructive imaging of large, centimeter-sized tissue specimens [84]. Light sheet microscopy achieves optical sectioning and rejection of out-of-focus light by using a thin selective illumination plane, which generates a fluorescence signal that is imaged in the orthogonal direction. The utility of this technique was demonstrated for rapid intraoperative assessment of tumor margin surfaces, offering a speed of ${12.5}\;{\rm{s}}\;{\rm{per}}\;{\rm{cm}}^2$ at a pixel resolution of 1.5 µm. Microscopy with ultraviolet surface excitation (MUSE) was developed as another fluorescence-based, slide-free optical imaging system that is simple and cost-effective and offers imaging at 3–10 frames per second [85]. High-resolution MUSE images at 280 nm illumination can resemble those from standard histology slides and bright-field microscopes while also providing, within minutes, surface and color contrast not usually present in standard H&E preparations.

Compared to fluorescence intensity measurements, the fluorescence lifetime is invariant to intensity variations of the incoming light, does not depend on light attenuation in the detection path, is independent of the fluorophore concentration in the focus, and is less sensitive to photobleaching. A frequent variant uses megahertz laser pulses for excitation and time-correlated single photon counting for detection [86]. Four fluorescence lifetimes can also be registered by blue laser excitation, dielectric filters, delay lines, and micro-channel plates for sequential detection [87]. Fluorescence lifetime imaging (FLIm) has already been applied to enhance intraoperative decision-making, e.g., during robotic-assisted surgery of oropharyngeal cancer [88]. Fluorescence lifetimes and spectral intensity ratios were calculated for three spectral channels, producing six FLIm parameters. The multiparameter linear discriminant analysis approach provided superior discrimination compared to individual FLIm parameters for tissue imaged both in vivo and ex vivo.
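The lifetime itself is obtained by fitting the time-resolved decay; a common multi-exponential model (standard in the fluorescence lifetime literature, not specific to [86–88]) is

$$I(t) = \sum_i a_i\, e^{-t/\tau_i}, \qquad \bar{\tau} = \frac{\sum_i a_i \tau_i}{\sum_i a_i},$$

where $a_i$ are the amplitudes, $\tau_i$ the component lifetimes, and $\bar{\tau}$ the amplitude-weighted average lifetime typically reported per pixel. Because $\bar{\tau}$ depends only on the shape of the decay and not on its absolute scale, it is insensitive to excitation intensity and fluorophore concentration, which underlies the robustness of lifetime-based contrast.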

An approach called FLImBrush was reported as a robust method for the localization and visualization of intraoperative free-hand fiber-optic FLIm [89]. FLImBrush builds upon an existing method while employing DL-based image segmentation, block-matching-based motion correction, and interpolation-based visualization to address challenges such as the co-registration accuracy and interpretability of the acquired imaging data. Each of the main processing steps was shown to be capable of real-time processing (${\gt}{{30}}$ frames per second), highlighting the feasibility of FLImBrush for intraoperative imaging and surgical guidance. In another study, glioma patients were enrolled and injected with ICG for fluorescence-image-guided surgery. A total of 1874 tissue samples were collected during surgery of these patients, and fluorescence images in the second NIR window (NIR-II, 1000–1700 nm) were obtained. CNNs combined with NIR-II fluorescence imaging were explored to automatically provide a pathological diagnosis of glioma in situ in real time during patient surgery [90]. Deep CNNs were found to be better at capturing important information from fluorescence images than surgeons’ evaluations during patient surgery.

G. Spontaneous Raman Spectroscopy

Raman spectroscopy probes inherent molecular vibrations that are excited by inelastic light scattering. Its advantages include that no sample preparation and no extrinsic markers are required and that the spectrum can be acquired in aqueous solutions, non-destructively, and at submicrometer resolution. The spectrum of all Raman-active vibrations can be considered a specific fingerprint of the sample. Due to these properties, Raman-based approaches have been suggested as a clinical tool for ex vivo and in vivo diagnostic applications. Weak Raman cross sections and relatively slow imaging speed motivate combining spontaneous Raman spectroscopy with faster imaging modalities, which is the content of Sections 5.A.1 and 5.A.2. A protocol recently provided guidance on how to perform Raman spectral analysis including experimental design, data preprocessing, data learning, and model transfer [91].
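Raman bands are reported as shifts in wavenumbers relative to the excitation line; converting a shift to the absolute scattered wavelength is routine when choosing detectors and filters. A minimal Python sketch (the function name is our own) illustrates the standard conversion:

```python
def raman_scattered_wavelength_nm(excitation_nm: float, shift_cm1: float) -> float:
    """Absolute (Stokes) scattering wavelength for a given laser wavelength
    and Raman shift, using 1e7 nm*cm^-1 to convert between nm and cm^-1."""
    excitation_cm1 = 1e7 / excitation_nm        # laser line in wavenumbers
    scattered_cm1 = excitation_cm1 - shift_cm1  # Stokes-shifted line
    return 1e7 / scattered_cm1

# A fingerprint band at 1000 cm^-1 excited at 785 nm appears near 852 nm,
# while a high-wavenumber band at 2930 cm^-1 appears near 1019 nm,
# beyond the sensitivity of silicon CCDs.
print(round(raman_scattered_wavelength_nm(785, 1000), 1))
print(round(raman_scattered_wavelength_nm(785, 2930), 1))
```

This arithmetic explains, for instance, why high-wavenumber detection at 785 nm excitation calls for indium gallium arsenide cameras rather than silicon detectors.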

In the past decade, the number of intraoperative Raman spectroscopic applications to diagnose and characterize early cancer in different organs, such as the head and neck, colon, stomach, and brain, but also other pathologies, for example, inflammation and atherosclerotic plaques, has risen [92]. A Raman system was integrated into the robotic da Vinci surgical platform, and data were obtained from 20 whole human prostates immediately following radical prostatectomy [93]. With this dataset, prostatic and extraprostatic tissues were distinguished with an accuracy, sensitivity, and specificity above 90%. A Raman system with a 785 nm excitation laser and an indium gallium arsenide camera was capable of quantifying changes in water content by measuring the ratio of the water band to the total area of the high-wavenumber spectrum. This Raman approach was shown to accurately differentiate between tumor and non-tumor breast tissue, even in the presence of surgical pigments [94]. A recent clinical investigative study on bladder biopsies characterized the tumor grade ex vivo, using a compact fiber probe coupled to a Raman imaging system [95]. Although the evaluation of bladder biopsies was complicated by the heterogeneity of highly fluorescent bladder tissues, multivariate statistical analysis discriminated between non-tumor tissue, low-grade tumor, and high-grade tumor. To enlarge the field of view for rapid intraoperative cancer margin detection, a handheld macroscopic Raman imaging system was designed [96]. The system consisted of a line scanner employing a fiber bundle for 785 nm laser excitation and a second fiber bundle for hyperspectral detection of an area of $1\,\,\rm cm^{2}$. Another fiber-optic probe Raman imaging system was developed for real-time molecular virtual reality data visualization of chemical boundaries [97]. The proposed approach uses a computer-vision-based positional tracking system in conjunction with photometric stereo and augmented and mixed reality.
With a Raman spectral sampling frequency of 10 Hz, an extended tissue area can be imaged at a spatial resolution of 0.5 mm. Raman spectroscopy was implemented in a prototype instrument employing a fiber-optic needle probe for intraoperative assessment of resection margins in oral cancers [98,99]. This fiber-optic needle is driven into the specimen from the resection surface toward the tumor. Based on the Raman spectra collected along the insertion path, it is determined whether the needle tip is in healthy or tumor tissue. This takes a few seconds per measurement and enables objective measurement of resection margins without the need for grossing of the specimen.

A comprehensive framework for higher-throughput molecular imaging was presented via DL-enabled Raman spectroscopy that was trained on a large dataset of hyperspectral images with over 1.5 million spectra [100]. This framework could speed up Raman imaging by 40–90 times, enabling good quality cellular imaging with high resolution and high signal-to-noise ratio in under 1 min. Furthermore, lower-resolution Raman imaging could be sped up 160 times, which can be utilized for rapid screening of large tissue areas in spectral pathology. The specificity of Raman spectroscopy to assist Mohs surgery was improved from 84% to 92% for detecting BCC by integrating Raman spectra with reflectance confocal microscopy (RCM) [101]. The presented approach used a ResNet50 DL network trained on RCM images to identify false positive normal skin structures, such as hair follicles and epidermis, based on morphological and cytological details.

4. NONLINEAR OPTICAL TECHNIQUES FOR INTRAOPERATIVE TUMOR VISUALIZATION

Nonlinear optical techniques are based on higher-order susceptibilities that are much smaller than the first-order susceptibility. Therefore, they can only be observed at extremely high excitation intensities, which are achieved by pico- and femtosecond pulsed lasers at megahertz repetition rates. Signals can be enhanced by several orders of magnitude, which enables data collection within microseconds and image acquisition at real-time video frame rates using laser scanning microscopes. Second-harmonic generation (SHG) microscopy is a coherent second-order nonlinear scattering technique. Several processes are based on the third-order nonlinear susceptibility, of which two-photon excited fluorescence (TPEF), coherent anti-Stokes Raman scattering (CARS), and stimulated Raman scattering (SRS) will be discussed here.

A. Second-Harmonic Generation

In SHG, also referred to as frequency doubling, two photons typically lying in the NIR frequency region interact with the tissue, and one photon with twice the excitation frequency is generated. Large hyperpolarizability and bulk non-centrosymmetric molecular structures in the focal volume are required to generate strong SHG signals. Collagen-rich tissues such as skin, arteries, tendons, cornea, and bone, the organization of collagen or its alteration in the microenvironment of tumors and during fibrosis, as well as myosins and tubulins in muscle and cartilage, have frequently been studied by SHG due to their intense SHG signals.
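The scaling of the SHG signal follows directly from the second-order polarization; in the standard textbook form (not taken from a specific reference here),

$$I(2\omega) \propto \left|\chi^{(2)}\right|^2 I^2(\omega),$$

where $\chi^{(2)}$ is the second-order susceptibility, which vanishes in centrosymmetric media. This is why only ordered, non-centrosymmetric structures such as fibrillar collagen produce strong SHG, and why pulsed excitation with high peak intensity $I(\omega)$ is required.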

Collagen-associated changes due to several diseases such as cancer were studied by SHG imaging, and the visualization of collagen fibers in tissues has been reported in the majority of recent studies. Beyond tumor imaging, which will be shown in combination with other modalities (Section 5.B), SHG imaging has been used in many diseases such as osteogenesis imperfecta, dermal photoaging, pathological conditions of the cornea, atherosclerosis, Sjogren’s syndrome, spinal damage, and muscle disorders [102].

B. Two-Photon Excited Fluorescence

The principle of TPEF is to excite an electronic molecular transition by the simultaneous absorption of two photons, each with about half the energy, i.e., twice the wavelength, of the corresponding one-photon transition. Fluorophores are mainly excited in the focus. The advantages are an intrinsic confocality, which simplifies the optical setup, and reduced bleaching outside the focus. Compared to conventional one-photon fluorescence microscopy, TPEF enables imaging tissue sections deeper and at improved sectioning capability [103]. Frequent autofluorophores in tissue such as flavin adenine dinucleotide (FAD) and reduced nicotinamide adenine dinucleotide (NADH) monitor the metabolic activity of the sample. Autofluorescence of melanin and of structural proteins such as elastin, keratin, and collagen provides further information about the extracellular matrix and connective tissue. TPEF has been widely used in biomedical research including imaging of deep tissue structures [104]. Multiphoton microscopy based on autofluorescence visualized cellular features in 10 lymph node tissue sections, of which seven were positive for melanoma metastases [105]. A recent work used TPEF to detect prostate cancer [106]. Frozen sections were stained with nuclear and cytoplasmic/stromal fluorophores, and the fluorescence signals could be displayed using an H&E color scale facilitating pathologist interpretation. The higher detection accuracy and rapid specimen preparation compared to subsequently processed paraffin H&E inspection suggested that TPEF may be useful for intraoperative evaluation in radical prostatectomy.

C. Coherent Anti-Stokes Raman Scattering

CARS is a four-photon coherent Raman scattering process using pulsed excitation lasers. A pump photon ($p$) and a Stokes ($S$) photon with the frequencies ${\omega _p}$ and ${\omega _S}$ coherently drive molecular vibrations. A prerequisite is that the frequency difference between pump and Stokes photons corresponds to the frequency of a Raman band. An anti-Stokes photon with frequency ${\omega _{{\rm aS}}}$ can be generated by inelastic scattering of a third pump photon with frequency ${\omega _p}$ off these coherently driven molecular vibrations. CARS is implemented as a microscopic contrast modality by collinear alignment and focusing of the pump and Stokes beams with an objective lens of high numerical aperture onto the sample [107]. Inherent confocality is achieved because such tight focusing fulfills the phase matching condition in the sample only over a very short interaction length where the anti-Stokes signal is generated. In spite of very intense excitation pulses at MHz repetition rates, CARS is non-destructive to biological samples because (i) photons are just scattered, (ii) excitation wavelengths above 800 nm are applied in the optical window with minimum absorption, and (iii) laser pulses are pico- to femtosecond short. Compared to spontaneous Raman spectroscopy, autofluorescence does not disturb CARS because anti-Stokes photons have shorter wavelengths than pump and Stokes photons. Raman-resonant CARS signals can be obscured by a non-resonant background due to four-wave mixing effects, which limits the image contrast. The CARS spectrum is distorted due to interference between resonant and non-resonant spectral contributions, which requires further processing before comparison with a spontaneous Raman spectrum. Quantitation has to consider that CARS signals scale quadratically with the pump laser intensity and the concentration.
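The frequency relations described above can be written compactly; in standard notation consistent with the symbols in the text,

$$\omega_{{\rm aS}} = 2\omega_p - \omega_S, \qquad \omega_p - \omega_S = \Omega_{{\rm vib}}, \qquad I_{{\rm CARS}} \propto \left|\chi^{(3)}\right|^2 I_p^2\, I_S,$$

where $\Omega_{{\rm vib}}$ is the frequency of the probed Raman-active vibration and $\chi^{(3)}$ the third-order susceptibility. Since the resonant part of $\chi^{(3)}$ scales linearly with the number density of oscillators, the signal scales quadratically with concentration, as noted for quantitation.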

CARS provides chemical contrast in tissue sections without labels. Relevant texture features were automatically extracted from CARS images of cryotome tissue sections to analyze and predict healthy and tumor regions [108]. A perceptron algorithm for supervised learning of binary classifiers differentiated cancer tissue from normal tissue based on CARS images from unstained BCC skin samples. A knowledge-based CARS microscopy system was developed and applied to recognize specific patterns in images and classify cells and tissue structures. A machine learning approach extracted quantitative features in CARS images that described fibrils and cell morphology. This approach differentiated between non-neoplastic lung tissues and lung cancer and even between the subtypes of small and non-small cell carcinoma [109]. Tumor identification by CARS microscopy on bulk samples and in vivo has so far been verified retrospectively on histological sections, which only provides a gross reference for the interpretation of CARS images without matching at the cellular level. Fluorescent labels were exploited for direct assessment of the interpretation of CARS images of solid and infiltrative tumors [110]. Glioblastoma cells expressing green fluorescent protein (GFP) were used for induction of tumors in mice, and the neoplastic nature of cells imaged by CARS microscopy was unequivocally verified by addressing two-photon fluorescence of GFP on fresh brain slices and in vivo. The fluorescence of 5-aminolevulinic acid-induced protoporphyrin IX was used for identification of tumorous tissue in fresh unfixed biopsies of human glioblastoma. Distinctive morphological features of glioblastoma cells, i.e., larger nuclei, an evident nuclear membrane, and nucleoli, were identified in the CARS images of both mouse and human brain tumors. More intraoperative CARS studies are described in Section 5.B in the context of multimodal approaches combining SHG and TPEF.

D. Stimulated Raman Scattering

Besides CARS, SRS coherently enhances Raman signals using two intense laser pulses for excitation whose frequency difference corresponds to a vibrational band. Consequently, the intensity of the Stokes pulses increases (stimulated Raman gain), and the intensity of the pump pulses decreases (stimulated Raman loss). The SRS signal is detected by modulating one of the excitation beams and demodulating with a lock-in amplifier [111,112]. Advantages of SRS are the linear dependence of the signal intensity on excitation intensity and concentration, the absence of a non-resonant background, and the ability to measure SRS spectra that correspond to spontaneous Raman spectra. Brain tumor infiltration was detected with quantitative SRS microscopy probing the ${{\rm{CH}}_2}$ stretching vibrations at ${{2835}}\;{\rm{c}}{{\rm{m}}^{- 1}}$, representing the lipid fraction, and the ${{\rm{CH}}_3}$ stretching vibrations at ${{2930}}\;{\rm{c}}{{\rm{m}}^{- 1}}$, representing the protein fraction [113]. SRS revealed quantifiable alterations in tissue cellularity, axonal density, and protein/lipid ratio, and a classifier was trained based on these features in SRS images to detect tumor infiltration in 22 neurosurgical patients with 97.5% sensitivity and 98.5% specificity. Stimulated Raman histology (SRH) can provide intraoperative histologic images of fresh, unprocessed surgical specimens. The first application of SRS microscopy in the operating room was demonstrated using a portable fiber-laser-based microscope and unprocessed specimens from 101 neurosurgical patients [114]. An intraoperative consultation was simulated with specimens imaged using both SRH and standard H&E histology. SRH-based diagnosis achieved an accuracy exceeding 92%.
Next, image features such as nuclear density, tumor-associated macrophage infiltration, and nuclear morphology parameters were extracted from 3337 SRH fields, and a machine learning model was trained that correctly classified normal tissue and tumor tissue and determined the tumor grade in 25 fresh pediatric surgical specimens [115].
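The lipid/protein contrast of the two SRS channels can be turned into a per-pixel feature map; the following numpy sketch is our own illustration of the ratio feature described for [113], with hypothetical array names:

```python
import numpy as np

def protein_lipid_ratio(ch3_2930: np.ndarray, ch2_2835: np.ndarray,
                        eps: float = 1e-6) -> np.ndarray:
    """Per-pixel protein/lipid ratio from two SRS channels:
    CH3 stretch (~2930 cm^-1, protein) over CH2 stretch (~2835 cm^-1, lipid).
    eps avoids division by zero in signal-free pixels."""
    return ch3_2930 / (ch2_2835 + eps)

# Toy 2x2 example: lipid-rich, white-matter-like pixels give low ratios,
# protein-rich, tumor-like pixels give high ratios.
ch3 = np.array([[1.0, 4.0], [1.5, 6.0]])
ch2 = np.array([[2.0, 1.0], [3.0, 1.5]])
print(protein_lipid_ratio(ch3, ch2))
```

In practice such ratio maps would be combined with cellularity and axonal density measures before training a classifier, as done in the cited study.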

5. MULTIMODAL OPTICAL TECHNIQUES FOR INTRAOPERATIVE TUMOR VISUALIZATION

The motivation for multimodal imaging is to combine methods with complementary properties in order to increase speed and/or chemical information. The specific fingerprint of Raman spectroscopy can be combined with the rapid screening capability of less-specific optical imaging modalities like fluorescence imaging or OCT. Such combinations overcome the slow image acquisition of spontaneous Raman spectroscopy. Overview images of suspicious lesions guide Raman measurements at points-of-interest or in small regions-of-interest. While TPEF probes fluorophores, SHG is sensitive to fibrous proteins. The TPEF-SHG contrast can be further augmented by coherent Raman scattering, namely CARS and SRS, which are sensitive to hydrocarbons, in particular lipids. These three modalities can be collected simultaneously at the same axial and lateral resolution and at high speed, which makes the development of multimodal instruments very attractive.

A. Multimodal Spontaneous Raman Spectroscopy

1. Raman Spectroscopy and Fluorescence Imaging

The state of the art of Raman spectroscopy and fluorescence imaging was described, paying special attention to their combined intraoperative application in current clinical research [116]. It was noted that careful matching of Raman spectroscopic excitation wavelengths and fluorophore emission is required. For example, a laser excitation wavelength of 976 nm in combination with a novel low-noise InGaAs detector showed perfect elimination of laser-induced fluorescence of pigmented tissues and tumor-specific fluorescent agents and enabled Raman-based detection of specific differences in water concentration. In earlier work, autofluorescence images were automatically segmented to define sparse sampling points for Raman spectroscopy of BCC, which reduced the number of Raman spectra to ${{20}}\;{\rm{spectra}}\;{\rm{per}}\;{\rm{mm}}^2$, resulting in an acquisition time of 15 min for ${{25}}\;{\rm{mm}}^2$ and allowing objective diagnosis faster than frozen section histopathology [117]. A fiber-optic endoscopy system for real-time in vivo diagnosis of cancer in the esophagus combined multimodal guidance with WL reflectance imaging, NBI, and autofluorescence imaging with Raman spectroscopy at 785 nm excitation [118]. A bi-modal FLIm and Raman fiber-optic probe was applied to collect spectra from brain tissue of a living rat under a cranial window [119].


Fig. 4. Measurement setup and procedure. (A) Schematic overview of the setup, including acousto-optic modulator (AOM), galvanometer mirrors (GM), scan- and tube lens (SL and TL), dichroic mirrors (DM), mirror (M), microscope objective (MO), focus lenses (FL), filters (F), and analog photo-multiplier tubes (PMT). (B) Photo of portable FD1070 microscope. (C) Third-harmonic generation (THG), second-harmonic generation (SHG), and two-photon autofluorescence (2PEF) signals were separated by their detected wavelengths using appropriate filters, were depicted in green, red, and blue, respectively, and were combined into one THG/SHG/2PEF image. (D) Flow chart of the experiments: from lung tissue from hospital to THG/SHG/2PEF and histopathology images. (Reproduced from [126] following the terms of Creative Commons CC-BY 4.0 license.)


2. Raman Spectroscopy and OCT

An optical multimodal ex vivo approach combined OCT and Raman spectroscopy in tandem [120]. The chemical information derived from Raman spectroscopy and the texture parameters extracted from OCT images discriminated between colonic adenocarcinoma and normal colon. Sensitivity and specificity of the classifier were superior when using data from both modalities in comparison to using data from only one. The depth sensitivity of a confocal Raman setup was improved by combination with a time-domain OCT system [121]. The OCT sample arm was modified to allow for co-alignment of the OCT beam with the Raman probe beam. Raman spectra of resected goat mucosal tissue at 785 nm excitation were measured with a penetration of 900 µm, which was in a similar range as for the OCT system using a 1310 nm source.

B. Multimodal Multiphoton Imaging

Coherent Raman scattering, such as CARS and SRS, generates enhanced signals using intense picosecond or femtosecond pulses for excitation. The multiphoton modalities SHG and TPEF can be generated simultaneously, probing complementary information also without external markers and with inherent confocality for optical sectioning. Integration of appropriate filters, beam splitters, and additional detectors for each modality enables combining either CARS-TPEF-SHG or SRS-TPEF-SHG. However, integration of SRS with other multiphoton imaging modalities is more complex due to the lock-in detection scheme. Combining coherent Raman scattering approaches with other spectroscopic modalities has previously been described for molecular multi-contrast diagnosis of cancer cells and tissues [122].

1. TPEF and SHG

An overview of the diagnostic value, advantages, and challenges of the practical use of TPEF and SHG multiphoton microscopy in surgical oncology was given [123]. The technique can image fresh, frozen, or fixed tissues up to a depth of 1000 µm. Best results including functional imaging and virtual histochemistry were obtained by in vivo imaging or scanning fresh tissue immediately after excision. In the context of neurosurgery, multiphoton tomography of tumor-bearing mouse brains and native human tissue samples clearly differentiated tumor and adjacent brain tissue [124]. Multiphoton tomography was also used to visualize both the extracellular matrix and tumor cells in different morphological and molecular subtypes of human breast cancer [125]. SHG images quantitatively assessed differences in collagen quantity, while TPEF detected elastin fibers and amyloid proteins that may be used as biomarkers for the low-aggressive breast cancer subtype. A compact, portable multiphoton microscope combined SHG, TPEF, and third-harmonic generation (THG) to probe histopathological information within seconds in fresh unprocessed human tumorous and non-tumorous lung samples [126]. The setup and procedure are shown in Fig. 4. THG contributed to resolving cellular structures such as tumor cells, macrophages, and lymphocytes.

2. CARS-TPEF-SHG

A clinical CARS-TPEF-SHG microscope was developed with a ${1.2} \times {1.2}\;{\rm{mm}}^2$ field of view, an optimized NIR transmission of 60% along the excitation path, and a compact, turn-key fiber laser [127]. To reduce complexity, the picosecond pulses were tuned to fixed wavelengths for CARS imaging of the ${{\rm{CH}}_2}$ stretching vibrations at ${{2850}}\;{\rm{c}}{{\rm{m}}^{- 1}}$. The setup efficiently generated all three signals at the same time from the same spot to visualize morphological and biochemical features of tissue sections with high quality and at high resolution. Ex vivo images were rapidly acquired within minutes from SCC in head and neck samples or atherosclerotic plaques in aorta samples. Further clinical applications of this multimodal microscope were impaired at the time of development in 2013 due to the lack of automated image processing. In the last decade, progress has been achieved with the advent of modern machine learning concepts including AI and DL to exploit the full potential of such multimodal imaging approaches, which are described in Section 5.B.4.


Fig. 5. Building virtually stained images using stimulated Raman histology (SRH). (A)–(C) The data are acquired using nonlinear optical (NLO) microscopy via the two SRS channels, with (A) and (B) highlighting the lipid (CH2) and protein (CH3) distributions, respectively. (C) The SHG channel gives access to the collagen distribution. (D) A simple subtraction (B)–(A) reveals the nuclei distribution. (E)–(H) Lookup tables (LUTs) are applied to (A)–(C) to mimic hematoxylin, eosin, and saffron (HES) staining. (F) A LUT in pink shades is applied to (A) to copy the eosin stain. (G) A LUT in dark purple is applied to (D) to resemble the hematoxylin stain. Depending on the desired result (HE versus HES), a LUT in (E) pink shades or in (H) orange/brown shades (to virtually reproduce the saffron stain) is applied to (C). (J) Combining (E)–(G) produces an HE-like image, while merging (F)–(H) yields an HES-like virtually stained image (K). (I) HE image of the same section for comparison with SRH images (J) and (K). Scale bar 100 µm. (Reprinted from [138] following the terms of Creative Commons CC-BY 4.0 license.)


Head and neck SCC was studied by multimodal multiphoton microscopy [128]. Analysis of images from tissue sections predicted the classes cancer, epithelial tissue, other tissue, and background with 90% accuracy. It was concluded that the time frame of this approach, below 20 min, is comparable to that of conventional IFS (see Section 2.D). The same multimodal multiphoton approach assessed inflammatory bowel diseases such as Crohn’s disease and ulcerative colitis [129]. An optimized set of geometry- and intensity-related features in multimodal images was correlated to histological index levels using a linear classifier. Multivariate statistics could translate the information in CARS-SHG-TPEF images into computational, pseudo H&E stained images, which was demonstrated for murine colon sections [130]. Distinctive morphologic features were identified in 40 human intracranial tumors by label-free nonlinear multimodal microscopy. Besides high cellularity, the typical tumor features that were identified and quantified include intracellular and extracellular lipid droplets, aberrant vessels, extracellular matrix collagen, and diffuse TPEF. Nonlinear multimodal microscopy performed on fresh unprocessed biopsies confirmed that the technique can visualize tumor structures and discern normal from neoplastic tissue also under conditions close to in situ [131]. Recent papers applied multiphoton microscopy to assess margins for endoscopic submucosal dissection of early gastric cancer [132]. Discriminant function analysis combining multiphoton microscopy and optical coherence angiography (OCA) correctly differentiated between equivocal melanocytic lesions [133]. A multiphoton microscopy score was developed to quantitatively assess melanoma features at the cellular level. OCA revealed a higher vessel density and thicker blood vessels in invasive melanoma than in melanoma in situ and benign lesions.
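The linear-classifier and discriminant-analysis steps mentioned above can be sketched with Fisher's linear discriminant on two hypothetical per-image features. All feature names, values, and class separations below are invented for illustration; a real study would extract such features from the multimodal images.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-image features, e.g. a nuclear-density score and a
# mean TPEF intensity (units and class statistics invented).
healthy = rng.normal(loc=[5.0, 1.0], scale=0.8, size=(40, 2))
lesion = rng.normal(loc=[8.0, 2.0], scale=0.8, size=(40, 2))

# Fisher's linear discriminant: project onto w = Sw^-1 (m1 - m0),
# where Sw is the pooled within-class scatter matrix.
m0, m1 = healthy.mean(axis=0), lesion.mean(axis=0)
Sw = np.cov(healthy.T) * (len(healthy) - 1) + np.cov(lesion.T) * (len(lesion) - 1)
w = np.linalg.solve(Sw, m1 - m0)
threshold = 0.5 * (m0 + m1) @ w  # midpoint between projected class means

features = np.concatenate([healthy, lesion])
labels = np.r_[np.zeros(40), np.ones(40)]
pred = features @ w > threshold
accuracy = (pred == labels).mean()
```

The projection direction `w` weights correlated features jointly, which is what distinguishes discriminant analysis from thresholding each feature separately.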

An endoscopic fiber optic probe was developed for in vivo tissue screening by simultaneous CARS-TPEF-SHG imaging [134]. The probe design incorporated a gradient index lens and a coherent bundle of 10,000 light-guiding fiber cores that preserved the spatial relationship between the bundle’s entrance and output. Moving parts and driving currents were avoided at the probe head by shifting the laser scanner to the proximal probe end. The multimodal signals were collected in epi geometry and guided via fibers to the detection unit. The optics of a CARS endoscope was integrated into a rigid tube of 2.2 mm diameter and 187 mm length [135]. Brain tissue samples were imaged at 750 nm spatial resolution over a 250 µm field of view, and this endoscopic design was envisioned for use in stereotactic neurosurgery.


Fig. 6. Example of patch prediction for an unannotated image. The best ResNet50 models based on the LOPO-CV are utilized to detect breast cancer patches. Patches A and B receive the same prediction from ResNet50 models 8, 9, and 12, while different prediction results are obtained for patches C, D, and E. (Reprinted from [141] following the terms of the Creative Commons CC-BY 4.0 license.)


3. SRS-TPEF-SHG

An SRH approach was combined with SHG to generate virtual H&E images [136]. Results on cryosections evidenced an excellent agreement between SRH and H&E images, while those on biopsies established the relevance of SRH for rapid intraoperative histology to assist surgical decision-making. The generated SRH images provided both chemical and collagen information due to the SHG modality, which even mimics hematoxylin, eosin, and saffron (HES) histopathology staining [137]. HES staining requires extensive sample preparation on time scales that are not compatible with intraoperative situations. SRH and HES images were found to agree excellently for healthy, pre-cancerous, and cancerous colon and pancreas tissue sections acquired from the same patients. The SRH workflow is also compatible with protein expression analysis by immunohistochemistry and with the evaluation of genetic alterations by genome sequencing. This was demonstrated for SRH images of three normal tissues and two tumor tissues as well as control samples that were not imaged by SRH (see Fig. 5).
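The channel arithmetic behind such virtual staining (cf. Fig. 5) can be sketched in a few lines: the protein-minus-lipid difference approximates the nuclei distribution, and each component then attenuates white light with a stain-like color in a Beer-Lambert fashion. The absorption colors and scaling factor below are illustrative assumptions, not values from [136–138].

```python
import numpy as np

def virtual_he(ch2_lipid, ch3_protein):
    """Map two SRS channels (scaled to [0, 1]) to an H&E-like RGB image.

    Minimal sketch: the CH3 - CH2 difference stands in for the nuclei
    channel, and each component attenuates white light with a stain-like
    absorption color (Beer-Lambert). Colors and scaling are illustrative.
    """
    nuclei = np.clip(ch3_protein - ch2_lipid, 0.0, 1.0)
    hematoxylin_od = np.array([0.65, 0.70, 0.29])  # dark purple for nuclei
    eosin_od = np.array([0.07, 0.99, 0.11])        # pink for cytoplasm/lipids
    od = nuclei[..., None] * hematoxylin_od + ch2_lipid[..., None] * eosin_od
    rgb = np.exp(-2.5 * od)  # optical density 0 -> white background
    return (255.0 * rgb).clip(0, 255).astype(np.uint8)

# Example with random "channels"; real inputs would be registered SRS images.
img = virtual_he(np.random.rand(64, 64), np.random.rand(64, 64))
```

An SHG collagen channel could be added analogously with an orange/brown absorption color to mimic the saffron component of HES staining, as in panel (H) of Fig. 5.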

4. AI and DL Concepts in Multiphoton Imaging

Several papers demonstrated that multiphoton images from unstained tissues can be robustly classified using DL techniques. Four pre-trained CNNs were fine-tuned using over 200 murine tissue images based on combined SHG and TPEF contrast to classify the tissues as either healthy or associated with high-grade ovarian carcinoma with over 95% sensitivity and 97% specificity [139]. CARS imaging and DL were combined to automate differential lung cancer diagnosis [140]. Instead of laborious state-of-the-art CARS image processing, a DL algorithm automatically differentiated CARS images of cancerous and normal lung tissues with 89.2% accuracy. Two classification models utilizing the deep CNN ResNet50 were trained for breast cancer detection by nonlinear multimodal imaging [141]. The network was used either as a feature extractor or was fine-tuned as a classification model (see Fig. 6). DL models were also proposed for computational staining of tissue sections using CARS, TPEF, and SHG images [142]. Conditional generative adversarial networks were used in the supervised approach and cycle-consistent conditional generative adversarial networks in the unsupervised approach.
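A toy, self-contained sketch of the feature-extractor strategy from [141]: a fixed random projection stands in for the frozen ResNet50 backbone, and a nearest-centroid rule stands in for the trainable classification head. All data, dimensions, and class statistics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed random projection
# followed by a ReLU maps flattened 64x64 patches to 128-D features.
# The weights are never updated, mirroring the feature-extractor setting.
W_frozen = rng.normal(size=(64 * 64, 128))

def extract_features(patches):
    return np.maximum(patches.reshape(len(patches), -1) @ W_frozen, 0.0)

# Toy patches: "tumor" patches are slightly brighter on average.
normal = rng.normal(0.0, 1.0, size=(50, 64, 64))
tumor = rng.normal(0.5, 1.0, size=(50, 64, 64))
X = extract_features(np.concatenate([normal, tumor]))
y = np.r_[np.zeros(50), np.ones(50)]

# Simple trainable head: nearest class centroid in feature space.
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
accuracy = (pred == y).mean()
```

Fine-tuning, the second strategy in [141], would instead also update the backbone weights on the task data, which typically helps when the target domain differs strongly from the pre-training domain.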

A fingerprint spectroscopic SRS platform was reported that acquires a distortion-free SRS spectrum at ${{10}}\;{\rm{c}}{{\rm{m}}^{- 1}}$ spectral resolution in the spectral range ${\rm{400 {-} 1800}}\;{\rm{c}}{{\rm{m}}^{- 1}}$ within 20 µs. The signal-to-noise ratio was significantly improved by employing a spatial-spectral residual DL network, reaching a level comparable to that obtained with 100-fold integration [143]. A parallel workflow was reported that combined SRH and deep CNNs to predict diagnoses at the bedside in near real time in an automated fashion [144]. The CNN was trained on over 2.5 million SRH images and predicted brain tumor diagnoses in the operating room in less than 150 s. In a multicenter, prospective clinical trial with 278 cases, the CNN-based diagnosis of SRH images turned out to be non-inferior to pathologist-based interpretation of conventional histologic images (overall accuracy, 94.6% versus 93.9%).
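The benchmark of "100-fold integration" follows from the √N scaling of white noise under averaging: averaging 100 acquisitions improves the SNR about tenfold, which is the level the denoising network in [143] matched from a single acquisition. A minimal numeric check on a synthetic spectrum (band position and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
wn = np.linspace(400, 1800, 1400)                # wavenumber axis, cm^-1
clean = np.exp(-0.5 * ((wn - 1004) / 8.0) ** 2)  # synthetic Raman band

single = clean + rng.normal(0.0, 0.2, size=wn.size)        # one acquisition
averaged = clean + rng.normal(0.0, 0.2, size=(100, wn.size)).mean(axis=0)

# Noise std drops by ~sqrt(100) = 10, so the SNR rises by the same factor.
snr_gain = np.std(single - clean) / np.std(averaged - clean)
print(f"SNR gain from 100x averaging: {snr_gain:.1f}")  # ~10
```

The trade-off is acquisition time, which is exactly what a learned denoiser avoids by reaching a similar noise level from a single 20 µs sweep.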

6. CONCLUSION AND OUTLOOK

Spectroscopic modalities have successfully been applied to intraoperative tissue assessment in proof-of-principle studies that encompass ex vivo screening of unstained tissue sections and in vivo, real-time visualization of tissues. Machine learning algorithms provided contrast to distinguish between normal and pathological tissue, grade tumors, and generate pseudo H&E stained images. AI- and DL-based approaches contributed several improvements such as faster data processing, better spatial resolution, better signal-to-noise ratios, and better classification. Some improvements have been summarized here, and more are expected as this research field advances. As an outlook, a huge economic potential exists for easy-to-use intraoperative optical and spectral systems with robust data analysis that provide real benefits for patients’ health and care. Similar to autonomous cars, which are categorized into different levels based on their complexity, low-level applications should be targeted first to pave the way for further opportunities for optical and spectral technologies in intraoperative histopathology. Just as cars must follow common traffic rules, optical and spectral instruments for health care have to comply with medical device regulations, which should be considered for the target market from the beginning of development.

If multi-contrast, computer-based spectral histopathology further progresses with AI and DL concepts, integration with robotic surgery will become an option for clinical translation. A low-level intraoperative application might improve contrast and vision. Approaching cellular image resolution, it may even be possible to distinguish clusters of cancer cells, make tumor removal more precise, and protect noncancerous areas. In addition, patient risk should not be overlooked, meaning that lower-risk and more conservative applications, such as intrasurgical ex vivo rather than in vivo imaging, are pursued first to establish operating parameters in an environment more isolated from the surgical workflow. A high-level intraoperative application would utilize the superior accuracy of surgical robotics, compared to the capability of the surgeon’s hand, for tumor removal at the cellular level.

This review should create awareness and stimulate networking so that more optical and spectral technologies, complemented by AI and DL approaches, overcome the valley of death between proof of concept and successful commercialization. Tremendous progress in optical and spectral instrumentation has been reported in recent years. A bottleneck of most commercial systems is the lack of certified analysis platforms, a need that AI and DL concepts can address. A general challenge for their applicability is that most instruments are not optimized for high-throughput studies. A route toward larger datasets, a key requirement for AI and DL concepts, leads through automated instruments, standardized operating procedures, and multi-center studies that also include appropriate controls, references, and proper gold standards. The upload of experimental data together with metadata to cloud-based databases may be complicated by legislative regulations on the protection of patients’ data.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. E. Hope, “Chemotherapy, radiotherapy and surgical tumour resections in England,” https://www.gov.uk/government/statistics/chemotherapy-radiotherapy-and-surgical-tumour-resections-in-england/chemotherapy-radiotherapy-and-surgical-tumour-resections-in-england.

2. J. Coburger, A. Merkel, M. Scherer, et al., “Low-grade glioma surgery in intraoperative magnetic resonance imaging: results of a multicenter retrospective assessment of the German study group for intraoperative magnetic resonance imaging,” Neurosurgery 78, 775–786 (2016). [CrossRef]  

3. S. Berkmann, J. Fandino, B. Müller, L. Remonda, and H. Landolt, “Intraoperative MRI and endocrinological outcome of transsphenoidal surgery for non-functioning pituitary adenoma,” Acta Neurochir. 154, 639–647 (2012). [CrossRef]  

4. C. Riediger, V. Plodeck, J. Fritzmann, A. Pape, A. Kohler, B. Lachmann, T. Koch, J.-P. Kühn, R.-T. Hoffmann, and J. Weitz, “First application of intraoperative MRI of the liver during ALPPS procedure for colorectal liver metastases,” Langenbeck’s Arch. Surg. 405, 373–379 (2020). [CrossRef]  

5. M. Papa, T. Allweis, T. Karni, J. Sandbank, M. Konichezky, J. Diment, A. Guterman, M. Shapiro, Z. Peles, R. Maishar, A. Gur, E. Kolka, and R. Brem, “An intraoperative MRI system for margin assessment in breast conserving surgery: initial results from a novel technique,” J. Surg. Oncol. 114, 22–26 (2016). [CrossRef]  

6. H. A. Zaidi, K. De Los Reyes, G. Barkhoudarian, Z. N. Litvack, W. L. Bi, J. Rincon-Torroella, S. Mukundan, I. F. Dunn, and E. R. Laws, “The utility of high-resolution intraoperative MRI in endoscopic transsphenoidal surgery for pituitary macroadenomas: early experience in the advanced multimodality image guided operating suite,” Neurosurg. Focus 40, E18 (2016). [CrossRef]  

7. J. A. Jacobo, J. Avendaño, S. Moreno-Jimenez, S. Nuñez, and R. Mamani, “Basic principles of intraoperative ultrasound applied to brain tumor surgery,” Indian J. Neurosurg. 9, 135–140 (2020). [CrossRef]  

8. J. F. Sweeney, H. Smith, A. Taplin, E. Perloff, and M. A. Adamo, “Efficacy of intraoperative ultrasonography in neurosurgical tumor resection,” J. Neurosurg. Pediatr. 21, 504–510 (2018). [CrossRef]  

9. M. Alshareef, S. Lowe, Y. Park, and B. Frankel, “Utility of intraoperative ultrasonography for resection of pituitary adenomas: a comparative retrospective study,” Acta Neurochir. 163, 1725–1734 (2021). [CrossRef]  

10. A. M. Lucchese, A. N. Kalil, A. Schwengber, E. Suwa, and G. G. Rolim de Moura, “Usefulness of intraoperative ultrasonography in liver resections due to colon cancer metastasis,” Int. J. Surg. 20, 140–144 (2015). [CrossRef]  

11. N. M. A. Krekel, M. H. Haloua, A. M. F. Lopes Cardozo, R. H. de Wit, A. M. Bosch, L. M. de Widt-Levert, S. Muller, H. van der Veen, E. Bergers, E. S. M. de Lange de Klerk, S. Meijer, and M. P. van den Tol, “Intraoperative ultrasound guidance for palpable breast cancer excision (COBALT trial): a multicentre, randomised controlled trial,” Lancet Oncol. 14, 48–54 (2013). [CrossRef]  

12. H. Okudera, S. Kobayashi, K. Kyoshima, H. Gibo, T. Takemae, and K. Sugita, “Development of the operating computerized tomographic scanner system for neurosurgery,” Acta Neurochir. 111, 61–63 (1991). [CrossRef]  

13. M. Kornswiet, “Three-dimensional intraoperative image guidance using computed tomography,” https://bme240.eng.uci.edu/students/09s/mkornswi/Three-Dimensional_Intraoperative_Image_Guidance_Using_Computed_Tomography/Home_Page.html.

14. C. Schichor, N. Terpolilli, J. Thorsteinsdottir, and J.-C. Tonn, “Intraoperative computed tomography in cranial neurosurgery,” Neurosurg. Clin. N. Am. 28, 595–602 (2017). [CrossRef]  

15. M. Ashraf, N. Choudhary, S. S. Hussain, U. A. Kamboh, and N. Ashraf, “Role of intraoperative computed tomography scanner in modern neurosurgery–an early experience,” Surg. Neurol. Int. 11, 247 (2020). [CrossRef]  

16. E. Balaur, S. O. Toole, A. J. Spurling, G. B. Mann, B. Yeo, K. Harvey, C. Sadatnajafi, E. Hanssen, J. Orian, K. A. Nugent, B. S. Parker, and B. Abbey, “Colorimetric histology using plasmonically active microscope slides,” Nature 598, 65–71 (2021). [CrossRef]  

17. F. Preisser, L. Theissen, P. Wild, K. Bartelt, L. Kluth, J. Köllermann, M. Graefen, T. Steuber, H. Huland, D. Tilki, F. Roos, A. Becker, F. K. H. Chun, and P. Mandel, “Implementation of intraoperative frozen section during radical prostatectomy: short-term results from a German tertiary-care center,” Eur. Urol. Focus 7, 95–101 (2021). [CrossRef]  

18. F. Li, L. Yang, Y. Zhao, L. Yuan, S. Wang, and Y. Mao, “Intraoperative frozen section for identifying the invasion status of lung adenocarcinoma: a systematic review and meta-analysis,” Int. J. Surg. 72, 175–184 (2019). [CrossRef]  

19. J. L. Alcazar, J. Dominguez-Piriz, L. Juez, M. Caparros, and M. Jurado, “Intraoperative gross examination and intraoperative frozen section in patients with endometrial cancer for detecting deep myometrial invasion: a systematic review and meta-analysis,” Int. J. Gynecol. Cancer 26, 407–415 (2016). [CrossRef]  

20. A. A. Elshanbary, A. A. Awad, A. Abdelsalam, et al., “The diagnostic accuracy of intraoperative frozen section biopsy for diagnosis of sentinel lymph node metastasis in breast cancer patients: a meta-analysis,” Environ. Sci. Pollut. Res. 29, 47931–47941 (2022). [CrossRef]  

21. D. E. Rowe, R. J. Carroll, and C. L. Day Jr., “Mohs surgery is the treatment of choice for recurrent (previously treated) basal cell carcinoma,” J. Dermatol. Surg. Oncol. 15, 424–431 (1989). [CrossRef]  

22. T. Petropoulou, A. Kapoula, A. Mastoraki, A. Politi, E. Spanidou-Karvouni, I. Psychogios, I. Vassiliou, and N. Arkadopoulos, “Imprint cytology versus frozen section analysis for intraoperative assessment of sentinel lymph node in breast cancer,” Breast Cancer 9, 325–330 (2017). [CrossRef]  

23. H. Naveed, M. Abid, A. A. Hashmi, M. M. Edhi, A. K. Sheikh, G. Mudassir, and A. Khan, “Diagnostic accuracy of touch imprint cytology for head and neck malignancies: a useful intra-operative tool in resource limited countries,” BMC Clin. Pathol. 17, 25 (2017). [CrossRef]  

24. F. Sharin, V. R. Ekhar, R. N. Shelkar, and J. N. Vedi, “Role of intraoperative cytology in head and neck lesions: a prospective study,” Indian J. Otolaryngol. Head Neck Surg. 71, 724–728 (2019). [CrossRef]  

25. C. Krafft, F. von Eggeling, O. Guntinas-Lichius, A. Hartmann, M. J. Waldner, M. F. Neurath, and J. Popp, “Perspectives, potentials and trends of ex vivo and in vivo optical molecular pathology,” J. Biophoton. 11, e201700236 (2018). [CrossRef]  

26. F. Xing, Y. Xie, H. Su, F. Liu, and L. Yang, “Deep learning in microscopy image analysis: a survey,” IEEE Trans. Neural Netw. Learn. Syst. 29, 4550–4568 (2018). [CrossRef]  

27. L. Tian, B. Hunt, M. A. L. Bell, J. Yi, J. T. Smith, M. Ochoa, X. Intes, and N. J. Durr, “Deep learning in biomedical optics,” Lasers Surg. Med. 53, 748–775 (2021). [CrossRef]  

28. J. Fujimoto and E. Swanson, “The development, commercialization, and impact of optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 57, OCT1–OCT13 (2016). [CrossRef]  

29. J. P. Ehlers, W. J. Dupps, P. K. Kaiser, J. Goshe, R. P. Singh, D. Petkovsek, and S. K. Srivastava, “The prospective intraoperative and perioperative ophthalmic imaging with optical coherence tomography (PIONEER) study: 2-year results,” Am. J. Ophthalmol. 158, 999–1007 (2014). [CrossRef]  

30. A. Pujari, D. Agarwal, R. Chawla, A. Kumar, and N. Sharma, “Intraoperative optical coherence tomography guided ocular surgeries: critical analysis of clinical role and future perspectives,” Clin. Ophthalmol. 14, 2427–2440 (2020). [CrossRef]  

31. M. Everett, S. Magazzeni, T. Schmoll, and M. Kempe, “Optical coherence tomography: from technology to applications in ophthalmology,” Transl. Biophoton. 3, e202000012 (2021). [CrossRef]  

32. Y. Li, J. Chen, and Z. Chen, “Advances in doppler optical coherence tomography and angiography,” Transl. Biophoton. 1, e201900005 (2019). [CrossRef]  

33. E. A. Rank, A. Agneter, T. Schmoll, R. A. Leitgeb, and W. Drexler, “Miniaturizing optical coherence tomography,” Transl. Biophoton. 4, e202100007 (2022). [CrossRef]  

34. L. Wittig, C. Betz, and D. Eggert, “Optical coherence tomography for tissue classification of the larynx in an outpatient setting-a translational challenge on the verge of a resolution?” Transl. Biophoton. 3, e202000013 (2021). [CrossRef]  

35. H. Yang, S. Zhang, P. Liu, L. Cheng, F. Tong, H. Liu, S. Wang, M. Liu, C. Wang, Y. Peng, F. Xie, B. Zhou, Y. Cao, J. Guo, Y. Zhang, Y. Ma, D. Shen, P. Xi, and S. Wang, “Use of high-resolution full-field optical coherence tomography and dynamic cell imaging for rapid intraoperative diagnosis during breast cancer surgery,” Cancer 126, 3847–3856 (2020). [CrossRef]  

36. D. Mojahed, R. S. Ha, P. Chang, Y. Gan, X. Yao, B. Angelini, H. Hibshoosh, B. Taback, and C. P. Hendon, “Fully automated postlumpectomy breast margin assessment utilizing convolutional neural network based optical coherence tomography image classification method,” Acad. Radiol. 27, e81–e86 (2020). [CrossRef]  

37. Y. Ye, W. W. Sun, R. X. Xu, L. E. Selmic, and M. Sun, “Intraoperative assessment of canine soft tissue sarcoma by deep learning enhanced optical coherence tomography,” Vet. Comp. Oncol. 19, 624–631 (2021). [CrossRef]  

38. X.-G. Ni and G.-Q. Wang, “The role of narrow band imaging in head and neck cancers,” Curr. Oncol. Rep. 18, 10 (2016). [CrossRef]  

39. M. R. Struyvenberg, A. J. de Groof, J. van der Putten, F. van der Sommen, F. Baldaque-Silva, M. Omae, R. Pouw, R. Bisschops, M. Vieth, E. J. Schoon, W. L. Curvers, P. H. de With, and J. J. Bergman, “A computer-assisted algorithm for narrow-band imaging-based tissue characterization in Barrett’s esophagus,” Gastrointest. Endosc. 93, 89–98 (2021). [CrossRef]  

40. H. Klimza, J. Jackowska, C. Piazza, J. Banaszewski, and M. Wierzbicka, “The role of intraoperative narrow-band imaging in transoral laser microsurgery for early and moderately advanced glottic cancer,” Braz. J. Otorhinolaryngol. 85, 228–236 (2019). [CrossRef]  

41. N. Davaris, A. Lux, N. Esmaeili, A. Illanes, A. Boese, M. Friebe, and C. Arens, “Evaluation of vascular patterns using contact endoscopy and narrow-band imaging (CE-NBI) for the diagnosis of vocal fold malignancy,” Cancers 12, 248 (2020). [CrossRef]  

42. T. Aoki, K. Matsuda, D. A. Mansour, T. Koizumi, S. Goto, M. Watanabe, K. Otsuka, and M. Murakami, “Narrow-band imaging examination of microvascular architecture of subcapsular hepatic tumors,” J. Surg. Res. 261, 51–57 (2021). [CrossRef]  

43. M. A. Azam, C. Sampieri, A. Ioppi, S. Africano, A. Vallin, D. Mocellin, M. Fragale, L. Guastini, S. Moccia, C. Piazza, L. S. Mattos, and G. Peretti, “Deep learning applied to white light and narrow band imaging videolaryngoscopy: toward real-time laryngeal cancer detection,” Laryngoscope 132, 1798–1806 (2022). [CrossRef]  

44. M. Azam, C. Sampieri, A. Ioppi, P. Benzi, G. Giordano, M. De Vecchi, V. Campagnari, S. Li, L. Guastini, and A. Paderno, “Videomics of the upper aero-digestive tract cancer: deep learning applied to white light and narrow band imaging for automatic segmentation of endoscopic images,” Front. Oncol. 12, 900451 (2022). [CrossRef]  

45. DiaspectiveVision, “With TIVITA® 2.0, users are offered an optimized extracorporeal camera for diagnostic support,” https://diaspective-vision.com/en/produkt/tivita-2-0/.

46. S. Hennig, B. Jansen-Winkeln, H. Köhler, L. Knospe, C. Chalopin, M. Maktabi, A. Pfahl, J. Hoffmann, S. Kwast, I. Gockel, and Y. Moulla, “Novel intraoperative imaging of gastric tube perfusion during oncologic esophagectomy—a pilot study comparing hyperspectral imaging (HSI) and fluorescence imaging (FI) with indocyanine green (ICG),” Cancers 14, 97 (2022). [CrossRef]  

47. M. Barberio, A. Lapergola, S. Benedicenti, M. Mita, V. Barbieri, F. Rubichi, A. Altamura, G. Giaracuni, E. Tamburini, M. Diana, M. Pizzicannella, and M. G. Viola, “Intraoperative bowel perfusion quantification with hyperspectral imaging: a guidance tool for precision colorectal surgery,” Surg. Endosc. 36, 8520–8532 (2022). [CrossRef]  

48. M. Barberio, T. Collins, V. Bencteux, R. Nkusi, E. Felli, M. G. Viola, J. Marescaux, A. Hostettler, and M. Diana, “Deep learning analysis of in vivo hyperspectral images for automated intraoperative nerve detection,” Diagnostics 11, 1508 (2021). [CrossRef]  

49. B. Jansen-Winkeln, N. Holfert, H. Köhler, Y. Moulla, J. P. Takoh, S. M. Rabe, M. Mehdorn, M. Barberio, C. Chalopin, T. Neumuth, and I. Gockel, “Determination of the transection margin during colorectal resection with hyperspectral imaging (HSI),” Int. J. Colorectal Dis. 34, 731–739 (2019). [CrossRef]  

50. H. Fabelo, S. Ortega, R. Lazcano, et al., “An intraoperative visualization system using hyperspectral imaging to aid in brain tumor delineation,” Sensors 18, 430 (2018). [CrossRef]  

51. M. Barberio, S. Benedicenti, M. Pizzicannella, E. Felli, T. Collins, B. Jansen-Winkeln, J. Marescaux, M. G. Viola, and M. Diana, “Intraoperative guidance using hyperspectral imaging: a review for surgeons,” Diagnostics 11, 2066 (2021). [CrossRef]  

52. H. Fabelo, M. Halicek, S. Ortega, M. Shahedi, A. Szolna, J. F. Piñeiro, C. Sosa, A. J. O’Shanahan, S. Bisshopp, C. Espino, M. Márquez, M. Hernández, D. Carrera, J. Morera, G. M. Callico, R. Sarmiento, and B. Fei, “Deep learning-based framework for in vivo identification of glioblastoma tumor using hyperspectral images of human brain,” Sensors 19, 920 (2019). [CrossRef]  

53. E. Felli, M. Al-Taher, T. Collins, R. Nkusi, E. Felli, A. Baiocchini, V. Lindner, C. Vincent, M. Barberio, B. Geny, G. M. Ettorre, A. Hostettler, D. Mutter, S. Gioux, C. Schuster, J. Marescaux, J. Gracia-Sancho, and M. Diana, “Automatic liver viability scoring with deep learning and hyperspectral imaging,” Diagnostics 11, 1527 (2021). [CrossRef]  

54. S. B. Sobottka, K. D. Geiger, R. Salzer, G. Schackert, and C. Krafft, “Suitability of infrared spectroscopic imaging as an intraoperative tool in cerebral glioma surgery,” Anal. Bioanal. Chem. 393, 187–195 (2009). [CrossRef]  

55. N. Bergner, B. F. M. Romeike, R. Reichart, R. Kalff, C. Krafft, and J. U. Popp, “Tumor margin identification and prediction of the primary tumor from brain metastases using FTIR imaging and support vector machines,” Analyst 138, 3983–3990 (2013). [CrossRef]  

56. C. Krafft, L. Shapoval, S. B. Sobottka, K. D. Geiger, G. Schackert, and R. Salzer, “Identification of primary tumors of brain metastases by SIMCA classification of IR spectroscopic images,” Biochim. Biophys. Acta 1758, 883–891 (2006). [CrossRef]  

57. D. Mayerich, M. J. Walsh, A. Kadjacsy-Balla, P. S. Ray, S. M. Hewitt, and R. Bhargava, “Stain-less staining for computed histopathology,” Technology 03, 27–31 (2015). [CrossRef]  

58. L. S. Leslie, T. P. Wrobel, D. Mayerich, S. Bindra, R. Emmadi, and R. Bhargava, “High definition infrared spectroscopic imaging for lymph node histopathology,” PLoS One 10, e0127238 (2015). [CrossRef]  

59. A. Keogan, T. N. Q. Nguyen, J. J. Phelan, N. O. Farrell, N. Lynam-Lennon, B. Doyle, D. O’Toole, J. V. Reynolds, J. O’Sullivan, and A. D. Meade, “Chemical imaging and machine learning for sub-classification of oesophageal tissue histology,” Transl. Biophoton. 3, e202100004 (2021). [CrossRef]  

60. S. Tiwari, J. Raman, V. Reddy, A. Ghetler, R. P. Tella, Y. Han, C. R. Moon, C. D. Hoke, and R. Bhargava, “Towards translation of discrete frequency infrared spectroscopic imaging for digital histopathology of clinical biopsy samples,” Anal. Chem. 88, 10183–10190 (2016). [CrossRef]  

61. S. Mittal, K. Yeh, L. S. Leslie, S. Kenkel, A. Kajdacsy-Balla, and R. Bhargava, “Simultaneous cancer and tumor microenvironment subtyping using confocal infrared microscopy for all-digital molecular histopathology,” Proc. Natl. Acad. Sci. USA 115, E5651–E5660 (2018). [CrossRef]  

62. C. Kuepper, A. Kallenbach-Thieltges, H. Juette, A. Tannapfel, F. Großerueschkamp, and K. Gerwert, “Quantum cascade laser-based infrared microscopy for label-free and automated cancer classification in tissue sections,” Sci. Rep. 8, 7717 (2018). [CrossRef]  

63. M. Schnell, S. Mittal, K. Falahkheirkhah, A. Mittal, K. Yeh, S. Kenkel, A. Kajdacsy-Balla, P. S. Carney, and R. Bhargava, “All-digital histopathology by infrared-optical hybrid microscopy,” Proc. Natl. Acad. Sci. USA 117, 3388–3396 (2020). [CrossRef]  

64. S. Berisha, M. Lotfollahi, J. Jahanipour, I. Gurcan, M. J. Walsh, R. Bhargava, H. Van Nguyen, and D. Mayerich, “Deep learning for FTIR histology: leveraging spatial and spectral features with convolutional neural networks,” Analyst 144, 1642–1653 (2019). [CrossRef]  

65. S. Tiwari, K. Falahkheirkhah, G. Cheng, and R. Bhargava, “Colon cancer grading using infrared spectroscopic imaging-based deep learning,” Appl. Spectrosc. 76, 475–484 (2022). [CrossRef]  

66. M. Lotfollahi, S. Berisha, D. Daeinejad, and D. Mayerich, “Digital staining of high-definition Fourier transform infrared (FT-IR) images using deep learning,” Appl. Spectrosc. 73, 556–564 (2019). [CrossRef]  

67. K. Falahkheirkhah, K. Yeh, S. Mittal, L. Pfister, and R. Bhargava, “Deep learning-based protocols to enhance infrared imaging systems,” Chemometr. Intell. Lab. Syst. 217, 104390 (2021). [CrossRef]  

68. A. P. Raulf, J. Butke, C. Küpper, F. Großerueschkamp, K. Gerwert, and A. Mosig, “Deep representation learning for domain adaptable classification of infrared spectral imaging data,” Bioinformatics 36, 287–294 (2019). [CrossRef]  

69. T. T. W. Wong, R. Zhang, P. Hai, C. Zhang, M. A. Pleitez, R. L. Aft, D. V. Novack, and L. V. Wang, “Fast label-free multilayered histology-like imaging of human breast cancer by photoacoustic microscopy,” Sci. Adv. 3, e1602168 (2017). [CrossRef]  

70. A. Miyata, T. Ishizawa, M. Kamiya, A. Shimizu, J. Kaneko, H. Ijichi, J. Shibahara, M. Fukayama, Y. Midorikawa, Y. Urano, and N. Kokudo, “Photoacoustic tomography of human hepatic malignancies using intraoperative indocyanine green fluorescence imaging,” PLoS One 9, e112667 (2014). [CrossRef]  

71. R. Li, P. Wang, L. Lan, F. P. Lloyd, C. J. Goergen, S. Chen, and J.-X. Cheng, “Assessing breast tumor margin by multispectral photoacoustic tomography,” Biomed. Opt. Express 6, 1273–1281 (2015). [CrossRef]  

72. D. M. Klett and M. J. Waldner, “Optoacoustic imaging in gastroenterology,” Transl. Biophoton. 1, e201900002 (2019). [CrossRef]  

73. Y. Cao, M. Alloosh, M. Sturek, and J.-X. Cheng, “Highly sensitive lipid detection and localization in atherosclerotic plaque with a dual-frequency intravascular photoacoustic/ultrasound catheter,” Transl. Biophoton. 2, e202000004 (2020). [CrossRef]  

74. J. Gröhl, M. Schellenberg, K. Dreher, and L. Maier-Hein, “Deep learning for biomedical photoacoustic imaging: a review,” Photoacoustics 22, 100241 (2021). [CrossRef]  

75. C. Yang, H. Lan, F. Gao, and F. Gao, “Review of deep learning for photoacoustic imaging,” Photoacoustics 21, 100215 (2021). [CrossRef]  

76. P. Farnia, M. Mohammadi, E. Najafzadeh, M. Alimohamadi, B. Makkiabadi, and A. Ahmadian, “High-quality photoacoustic image reconstruction based on deep convolutional neural network: towards intra-operative photoacoustic imaging,” Biomed. Phys. Eng. Express 6, 045019 (2020). [CrossRef]  

77. A. Croce and G. Bottiroli, “Autofluorescence spectroscopy and imaging: a tool for biomedical research and diagnosis,” Eur. J. Histochem. 58, 2461 (2014). [CrossRef]  

78. A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. H. van de Velde, and J. V. Frangioni, “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10, 507–518 (2013). [CrossRef]  

79. M. Koch and V. Ntziachristos, “Advancing surgical vision with fluorescence imaging,” Annu. Rev. Med. 67, 153–164 (2016). [CrossRef]  

80. T. A. Eatz, D. G. Eichberg, V. M. Lu, L. Di, R. J. Komotar, and M. E. Ivan, “Intraoperative 5-ALA fluorescence-guided resection of high-grade glioma leads to greater extent of resection with better outcomes: a systematic review,” J. Neuro-Oncol. 156, 233–256 (2022). [CrossRef]  

81. A. Stenzl, H. Penkoff, E. Dajc-Sommerer, A. Zumbraegel, L. Hoeltl, M. Scholz, C. Riedl, J. Bugelnig, A. Hobisch, M. Burger, G. Mikuz, and U. Pichlmeier, “Detection and clinical outcome of urinary bladder cancer with 5-aminolevulinic acid-induced fluorescence cystoscopy,” Cancer 117, 938–947 (2011). [CrossRef]  

82. M. Loshchenov, A. Seregin, N. Kalyagina, E. Dadashev, A. Borodkin, A. Babaev, O. Loran, and V. Loschenov, “Fluorescence visualization of the borders of bladder tumors after TUR with quantitative determination of diagnostic contrast,” Transl. Biophoton. 2, e201900026 (2020). [CrossRef]  

83. A. Duprée, H. Rieß, C. Detter, E. S. Debus, and S. H. Wipper, “Utilization of indocynanine green fluorescent imaging (ICG-FI) for the assessment of microperfusion in vascular medicine,” Innov. Surg. Sci. 3, 193–201 (2018). [CrossRef]  

84. A. K. Glaser, N. P. Reder, Y. Chen, E. F. McCarty, C. Yin, L. Wei, Y. Wang, L. D. True, and J. T. C. Liu, “Light-sheet microscopy for slide-free non-destructive pathology of large clinical specimens,” Nat. Biomed. Eng. 1, 0084 (2017). [CrossRef]  

85. F. Fereidouni, Z. T. Harmany, M. Tian, A. Todd, J. A. Kintner, J. D. McPherson, A. D. Borowsky, J. Bishop, M. Lechpammer, S. G. Demos, and R. Levenson, “Microscopy with ultraviolet surface excitation for rapid slide-free histology,” Nat. Biomed. Eng. 1, 957–966 (2017). [CrossRef]  

86. W. Becker, The bh TCSPC Handbook (Becker & Hickl GmbH, 2017).

87. L. Marcu, “Fluorescence lifetime techniques in medical applications,” Ann. Biomed. Eng. 40, 304–331 (2012). [CrossRef]  

88. B. W. Weyers, M. Marsden, T. Sun, J. Bec, A. F. Bewley, R. F. Gandour-Edwards, M. G. Moore, D. G. Farwell, and L. Marcu, “Fluorescence lifetime imaging for intraoperative cancer delineation in transoral robotic surgery,” Transl. Biophoton. 1, e201900017 (2019). [CrossRef]  

89. M. Marsden, T. Fukazawa, Y.-C. Deng, B. W. Weyers, J. Bec, D. Gregory Farwell, and L. Marcu, “FLImBrush: dynamic visualization of intraoperative free-hand fiber-based fluorescence lifetime imaging,” Biomed. Opt. Express 11, 5166–5180 (2020). [CrossRef]  

90. B. Shen, Z. Zhang, X. Shi, C. Cao, Z. Zhang, Z. Hu, N. Ji, and J. Tian, “Real-time intraoperative glioma diagnosis using fluorescence imaging and deep convolutional neural networks,” Eur. J. Nucl. Med. Mol. Imaging 48, 3482–3492 (2021). [CrossRef]  

91. S. Guo, J. Popp, and T. Bocklitz, “Chemometric analysis in Raman spectroscopy from experimental design to machine learning–based modeling,” Nat. Protoc. 16, 5426–5459 (2021). [CrossRef]  

92. H. P. S. Heng, C. Shu, W. Zheng, K. Lin, and Z. Huang, “Advances in real-time fiber-optic Raman spectroscopy for early cancer diagnosis: pushing the frontier into clinical endoscopic applications,” Transl. Biophoton. 3, e202000018 (2021). [CrossRef]  

93. M. Pinto, K. Zorn, J.-P. Tremblay, J. Desroches, F. Dallaire, K. Aubertin, E. Marple, C. Kent, F. Leblond, D. Trudel, and F. Lesage, “Integration of a Raman spectroscopy system to a robotic-assisted surgical system for real-time tissue characterization during radical prostatectomy procedures,” J. Biomed. Opt. 24, 025001 (2019). [CrossRef]  

94. T. J. E. Hubbard, A. P. Dudgeon, D. J. Ferguson, A. C. Shore, and N. Stone, “Utilization of Raman spectroscopy to identify breast cancer from the water content in surgical samples containing blue dye,” Transl. Biophoton. 3, e202000023 (2021). [CrossRef]  

95. E. Cordero, J. Rüger, D. Marti, A. S. Mondol, T. Hasselager, K. Mogensen, G. G. Hermann, J. Popp, and I. W. Schie, “Bladder tissue characterization using probe-based Raman spectroscopy: evaluation of tissue heterogeneity and influence on the model prediction,” J. Biophoton. 13, e201960025 (2020). [CrossRef]  

96. F. Daoust, T. Nguyen, P. Orsini, J. Bismuth, M.-M. de Denus-Baillargeon, I. Veilleux, A. Wetter, P. McKoy, I. Dicaire, M. Massabki, K. Petrecca, and F. Leblond, “Handheld macroscopic Raman spectroscopy imaging instrument for machine-learning-based molecular tissue margins characterization,” J. Biomed. Opt. 26, 022911 (2021). [CrossRef]  

97. W. Yang, F. Knorr, I. Latka, M. Vogt, G. O. Hofmann, J. Popp, and I. W. Schie, “Real-time molecular imaging of near-surface tissue using Raman spectroscopy,” Light Sci. Appl. 11, 90 (2022). [CrossRef]  

98. Y. Aaboubout, E. Barroso, R. N. Soares, et al., “Intraoperative assessment of resection margins based on Raman spectroscopy in OCSCC surgery,” Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 132, e35–e36 (2021). [CrossRef]  

99. Y. Aaboubout, I. ten Hove, R. W. H. Smits, J. A. Hardillo, G. J. Puppels, and S. Koljenovic, “Specimen-driven intraoperative assessment of resection margins should be standard of care for oral cancer patients,” Oral Dis. 27, 111–116 (2021). [CrossRef]  

100. C. C. Horgan, M. Jensen, A. Nagelkerke, J.-P. St-Pierre, T. Vercauteren, M. M. Stevens, and M. S. Bergholt, “High-throughput molecular imaging via deep-learning-enabled Raman spectroscopy,” Anal. Chem. 93, 15850–15860 (2021). [CrossRef]  

101. M. Chen, X. Feng, M. Fox, J. Reichenberg, F. C. P. Lopes, K. Sebastian, M. Markey, and J. Tunnell, “Deep learning on reflectance confocal microscopy improves Raman spectral diagnosis of basal cell carcinoma,” J. Biomed. Opt. 27, 065004 (2022). [CrossRef]  

102. P. J. Campagnola and C.-Y. Dong, “Second harmonic generation microscopy: principles and applications to disease diagnosis,” Laser Photon. Rev. 5, 13–26 (2011). [CrossRef]  

103. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990). [CrossRef]  

104. D. R. Miller, J. W. Jarrett, A. M. Hassan, and A. K. Dunn, “Deep tissue imaging with multiphoton fluorescence microscopy,” Curr. Opin. Biomed. Eng. 4, 32–39 (2017). [CrossRef]  

105. D. Kantere, J. Siarov, S. De Lara, S. Parhizkar, R. Olofsson Bagge, A.-M. Wennberg Larkö, and M. B. Ericson, “Label-free laser scanning microscopy targeting sentinel lymph node diagnostics: a feasibility study ex vivo,” Transl. Biophoton. 2, e202000002 (2020). [CrossRef]  

106. L. C. Cahill, Y. Wu, T. Yoshitake, C. Ponchiardi, M. G. Giacomelli, A. A. Wagner, S. Rosen, and J. G. Fujimoto, “Nonlinear microscopy for detection of prostate cancer: analysis of sensitivity and specificity in radical prostatectomies,” Mod. Pathol. 33, 916–923 (2020). [CrossRef]  

107. A. Volkmer, “Vibrational imaging and microspectroscopies based on coherent anti-Stokes Raman scattering microscopy,” J. Phys. D 38, R59–R81 (2005). [CrossRef]  

108. F. B. Legesse, A. Medyukhina, S. Heuke, and J. Popp, “Texture analysis and classification in coherent anti-Stokes Raman scattering (CARS) microscopy images for automated detection of skin cancer,” Comput. Med. Imaging Graph. 43, 36–43 (2015). [CrossRef]  

109. L. S. Gao, F. Li, Y. Yang, J. Xing, A. A. Hammoudi, H. Zhao, Y. Fan, K. K. Wong, Z. Wang, and S. T. Wong, “On-the-spot lung cancer differential diagnosis by label-free, molecular vibrational imaging and knowledge-based classification,” J. Biomed. Opt. 16, 096004 (2011). [CrossRef]  

110. R. Galli, O. Uckermann, A. Temme, E. Leipnitz, M. Meinhardt, E. Koch, G. Schackert, G. Steiner, and M. Kirsch, “Assessing the efficacy of coherent anti-Stokes Raman scattering microscopy for the detection of infiltrating glioblastoma in fresh brain samples,” J. Biophoton. 10, 404–414 (2017). [CrossRef]  

111. C. W. Freudiger, W. Min, B. G. Saar, S. Lu, G. R. Holtom, C. He, J. C. Tsai, J. X. Kang, and X. S. Xie, “Label-free biomedical imaging with high sensitivity by stimulated Raman scattering microscopy,” Science 322, 1857–1861 (2008). [CrossRef]  

112. P. Nandakumar, A. Kovalev, and A. Volkmer, “Vibrational imaging based on stimulated Raman scattering microscopy,” New J. Phys. 11, 033026 (2009). [CrossRef]  

113. M. B. Ji, S. Lewis, S. Camelo-Piragua, S. H. Ramkissoon, M. Snuderl, S. Venneti, A. Fisher-Hubbard, M. Garrard, D. Fu, A. C. Wang, J. A. Heth, C. O. Maher, N. Sanai, T. D. Johnson, C. W. Freudiger, O. Sagher, X. S. Xie, and D. A. Orringer, “Detection of human brain tumor infiltration with quantitative stimulated Raman scattering microscopy,” Sci. Transl. Med. 7, 309ra163 (2015). [CrossRef]  

114. D. A. Orringer, B. Pandian, Y. S. Niknafs, et al., “Rapid intraoperative histology of unprocessed surgical specimens via fiber-laser-based stimulated Raman scattering microscopy,” Nat. Biomed. Eng. 1, 0027 (2017). [CrossRef]  

115. T. C. Hollon, S. Lewis, B. Pandian, Y. S. Niknafs, M. R. Garrard, H. Garton, C. O. Maher, K. McFadden, M. Snuderl, A. P. Lieberman, K. Muraszko, S. Camelo-Piragua, and D. A. Orringer, “Rapid intraoperative diagnosis of pediatric brain tumors using stimulated Raman histology,” Cancer Res. 78, 278–289 (2018). [CrossRef]  

116. L. J. Lauwerends, H. Abbasi, T. C. Bakker Schut, P. B. A. A. Van Driel, J. A. U. Hardillo, I. P. Santos, E. M. Barroso, S. Koljenović, A. L. Vahrmeijer, R. J. Baatenburg de Jong, G. J. Puppels, and S. Keereweer, “The complementary value of intraoperative fluorescence imaging and Raman spectroscopy for cancer surgery: combining the incompatibles,” Eur. J. Nucl. Med. Mol. Imaging 49, 2364–2376 (2022). [CrossRef]  

117. K. Kong, C. J. Rowlands, S. Varma, W. Perkins, I. H. Leach, A. A. Koloydenko, H. C. Williams, and I. Notingher, “Diagnosis of tumors during tissue-conserving surgery with integrated autofluorescence and Raman scattering microscopy,” Proc. Natl. Acad. Sci. USA 110, 15189–15194 (2013). [CrossRef]  

118. M. S. Bergholt, W. Zheng, K. Lin, K. Y. Ho, M. Teh, K. G. Yeoh, J. B. Y. So, and Z. Huang, “In vivo diagnosis of esophageal cancer using image-guided Raman endoscopy and biomolecular modeling,” Technol. Cancer Res. Treat. 10, 103–112 (2011). [CrossRef]  

119. S. Dochow, D. Ma, I. Latka, T. Bocklitz, B. Hartl, J. Bec, H. Fatakdawala, E. Marple, K. Urmey, S. Wachsmann-Hogiu, M. Schmitt, L. Marcu, and J. Popp, “Combined fiber probe for fluorescence lifetime and Raman spectroscopy,” Anal. Bioanal. Chem. 407, 8291–8301 (2015). [CrossRef]  

120. P. C. Ashok, B. B. Praveen, N. Bellini, A. Riches, K. Dholakia, and C. S. Herrington, “Multi-modal approach using Raman spectroscopy and optical coherence tomography for the discrimination of colonic adenocarcinoma from normal colon,” Biomed. Opt. Express 4, 2179–2186 (2013). [CrossRef]  

121. K. M. Khan, H. Krishna, S. K. Majumder, K. D. Rao, and P. K. Gupta, “Depth-sensitive Raman spectroscopy combined with optical coherence tomography for layered tissue analysis,” J. Biophoton. 7, 77–85 (2014). [CrossRef]  

122. C. Krafft and J. Popp, “Combination of spontaneous and coherent Raman scattering approaches with other spectroscopic modalities for molecular multi-contrast cancer diagnosis,” in Multimodal Optical Diagnostics of Cancer, V. V. Tuchin, J. Popp, and V. Zakharov, eds. (Springer International Publishing, 2020), pp. 325–358.

123. T. T. König, J. Goedeke, and O. J. Muensterer, “Multiphoton microscopy in surgical oncology- a systematic review and guide for clinical translatability,” Surg. Oncol. 31, 119–131 (2019). [CrossRef]  

124. S. R. Kantelhardt, D. Kalasauskas, K. König, E. Kim, M. Weinigel, A. Uchugonova, and A. Giese, “In vivo multiphoton tomography and fluorescence lifetime imaging of human brain tumor tissue,” J. Neuro-Oncol. 127, 473–482 (2016). [CrossRef]  

125. E. V. Gubarkova, V. V. Elagin, V. V. Dudenkova, S. S. Kuznetsov, M. M. Karabut, A. L. Potapov, D. A. Vorontsov, A. Y. Vorontsov, M. A. Sirotkina, E. V. Zagaynova, and N. D. Gladkova, “Multiphoton tomography in differentiation of morphological and molecular subtypes of breast cancer: a quantitative analysis,” J. Biophoton. 14, e202000471 (2021). [CrossRef]  

126. L. M. G. van Huizen, T. Radonic, F. van Mourik, D. Seinstra, C. Dickhoff, J. M. A. Daniels, I. Bahce, J. T. Annema, and M. L. Groot, “Compact portable multiphoton microscopy reveals histopathological hallmarks of unprocessed lung tumor tissue in real time,” Transl. Biophoton. 2, e202000009 (2020). [CrossRef]  

127. T. Meyer, M. Baumgartl, T. Gottschall, T. Pascher, A. Wuttig, C. Matthaus, B. F. M. Romeike, B. R. Brehm, J. Limpert, A. Tuennermann, O. Guntinas-Lichius, B. Dietzek, and J. Popp, “A compact microscope setup for multimodal nonlinear imaging in clinics and its application to disease diagnostics,” Analyst 138, 4048–4057 (2013). [CrossRef]  

128. S. Heuke, O. Chernavskaia, T. Bocklitz, F. B. Legesse, T. Meyer, D. Akimov, O. Dirsch, G. Ernst, F. von Eggeling, and I. Petersen, “Multimodal nonlinear microscopy of head and neck carcinoma—toward surgery assisting frozen section analysis,” Head Neck 38, 1545–1552 (2016). [CrossRef]  

129. O. Chernavskaia, S. Heuke, M. Vieth, O. Friedrich, S. Schürmann, R. Atreya, A. Stallmach, M. F. Neurath, M. Waldner, I. Petersen, M. Schmitt, T. Bocklitz, and J. Popp, “Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging,” Sci. Rep. 6, 29239 (2016). [CrossRef]  

130. T. W. Bocklitz, F. S. Salah, N. Vogler, S. Heuke, O. Chernavskaia, C. Schmidt, M. J. Waldner, F. R. Greten, R. Bräuer, and M. Schmitt, “Pseudo-HE images derived from CARS/TPEF/SHG multimodal imaging in combination with Raman-spectroscopy as a pathological screening tool,” BMC Cancer 16, 534 (2016). [CrossRef]  

131. R. Galli, O. Uckermann, T. Sehm, E. Leipnitz, C. Hartmann, F. Sahm, E. Koch, G. Schackert, G. Steiner, and M. Kirsch, “Identification of distinctive features in human intracranial tumors by label-free nonlinear multimodal microscopy,” J. Biophoton. 12, e201800465 (2019). [CrossRef]  

132. X. Zheng, N. Zuo, H. Lin, L. Zheng, M. Ni, G. Wu, J. Chen, and S. Zhuo, “Margin diagnosis for endoscopic submucosal dissection of early gastric cancer using multiphoton microscopy,” Surg. Endosc. 34, 408–416 (2020). [CrossRef]  

133. V. Elagin, E. Gubarkova, O. Garanina, D. Davydova, N. Orlinskaya, L. Matveev, I. Klemenova, I. Shlivko, M. Shirmanova, and E. Zagaynova, “In vivo multimodal optical imaging of dermoscopic equivocal melanocytic skin lesions,” Sci. Rep. 11, 1405 (2021). [CrossRef]  

134. A. Lukic, S. Dochow, H. Bae, G. Matz, I. Latka, B. Messerschmidt, M. Schmitt, and J. Popp, “Endoscopic fiber probe for nonlinear spectroscopic imaging,” Optica 4, 496–501 (2017). [CrossRef]  

135. P. Zirak, G. Matz, B. Messerschmidt, T. Meyer, M. Schmitt, J. Popp, O. Uckermann, R. Galli, M. Kirsch, and M. Winterhalder, “A rigid coherent anti-Stokes Raman scattering endoscope with high resolution and a large field of view,” APL Photon. 3, 092409 (2018). [CrossRef]  

136. B. Sarri, F. Poizat, S. Heuke, J. Wojak, F. Franchi, F. Caillol, M. Giovannini, and H. Rigneault, “Stimulated Raman histology: one to one comparison with standard hematoxylin and eosin staining,” Biomed. Opt. Express 10, 5378–5384 (2019). [CrossRef]  

137. B. Sarri, R. Canonge, X. Audier, E. Simon, J. Wojak, F. Caillol, C. Cador, D. Marguet, F. Poizat, M. Giovannini, and H. Rigneault, “Fast stimulated Raman and second harmonic generation imaging for intraoperative gastro-intestinal cancer detection,” Sci. Rep. 9, 10052 (2019). [CrossRef]  

138. B. Sarri, R. Appay, S. Heuke, F. Poizat, F. Franchi, S. Boissonneau, F. Caillol, H. Dufour, D. Figarella-Branger, M. Giovaninni, and H. Rigneault, “Observation of the compatibility of stimulated Raman histology with pathology workflow and genome sequencing,” Transl. Biophoton. 3, e202000020 (2021). [CrossRef]  

139. M. Huttunen, A. Hassan, C. McCloskey, S. Fasih, J. Upham, B. Vanderhyden, R. Boyd, and S. Murugkar, “Automated classification of multiphoton microscopy images of ovarian tissue using deep learning,” J. Biomed. Opt. 23, 066002 (2018). [CrossRef]  

140. S. Weng, X. Xu, J. Li, and S. T. Wong, “Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer,” J. Biomed. Opt. 22, 106017 (2017). [CrossRef]  

141. N. Ali, E. Quansah, K. Köhler, T. Meyer, M. Schmitt, J. Popp, A. Niendorf, and T. Bocklitz, “Automatic label-free detection of breast cancer using nonlinear multimodal imaging and the convolutional neural network ResNet50,” Transl. Biophoton. 1, e201900003 (2019). [CrossRef]  

142. P. Pradhan, T. Meyer, M. Vieth, A. Stallmach, M. Waldner, M. Schmitt, J. Popp, and T. Bocklitz, “Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning,” Biomed. Opt. Express 12, 2280–2298 (2021). [CrossRef]  

143. H. Lin, H. J. Lee, N. Tague, J.-B. Lugagne, C. Zong, F. Deng, J. Shin, L. Tian, W. Wong, M. J. Dunlop, and J.-X. Cheng, “Microsecond fingerprint stimulated Raman spectroscopic imaging by ultrafast tuning and spatial-spectral learning,” Nat. Commun. 12, 3052 (2021). [CrossRef]  

144. T. C. Hollon, B. Pandian, A. R. Adapa, et al., “Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks,” Nat. Med. 26, 52–58 (2020). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (6)

Fig. 1. (A) Cross-sectional raw data showing elastic wave propagation of retinal layers at different time points for an ex vivo pig retina. (B) Elasticity map in rabbit retina in vivo. A different stiffness was demonstrated in different layers of the retina. (C) Doppler OCT image of rabbit cornea and crystalline lens. (D) and (E) Spatiotemporal Doppler OCT images of cornea and lens, respectively. (F) OCT structural and (G) Doppler OCT images of a human cadaver coronary artery. (H) Histological image and (I) close-up view of an atherosclerotic lesion. The red-colored region denoted by the blue arrow in (I) exhibits smaller phase and displacement and, therefore, indicates less elastic, stiffer tissue such as plaques. Scale bars, 1 mm. (Reprinted from [32] following the terms of Creative Commons CC-BY 4.0 license.)
Fig. 2. Photoacoustic (PA), ultrasound (US), and their overlaid images at representative positions along a pig coronary artery. The scale bar of 1 mm (right column) applies to all panels. The blue line denotes the boundary of the lumen, and the green line indicates the boundary of the neo-intima layer (boundary of the initial lumen). The yellow arrowed line in III indicates the acoustic shadowing behind an echogenic calcium nodule. (Reprinted from [73] following the terms of Creative Commons CC-BY 4.0 license.)
Fig. 3. Fluorescing bladder tumors, excited at 635 nm and shown in pseudo-color green. (Adapted from [82] following the terms of Creative Commons CC-BY 4.0 license.)
Fig. 4. Measurement setup and procedure. (A) Schematic overview of the setup, including acousto-optic modulator (AOM), galvanometer mirrors (GM), scan and tube lenses (SL and TL), dichroic mirrors (DM), mirror (M), microscope objective (MO), focus lenses (FL), filters (F), and analog photo-multiplier tubes (PMT). (B) Photograph of the portable FD1070 microscope. (C) Third-harmonic generation (THG), second-harmonic generation (SHG), and two-photon autofluorescence (2PEF) signals were separated by their detected wavelengths using appropriate filters, depicted in green, red, and blue, respectively, and combined into one THG/SHG/2PEF image. (D) Flow chart of the experiments, from lung tissue received from the hospital to THG/SHG/2PEF and histopathology images. (Reproduced from [126] following the terms of Creative Commons CC-BY 4.0 license.)
Fig. 5. Building virtually stained images using stimulated Raman histology (SRH). (A)–(C) The data are acquired using nonlinear optical microscopy (NLO) via the two SRS channels, with (A) and (B) highlighting the lipid (CH2) and protein (CH3) distributions, respectively. (C) The SHG channel gives access to the collagen distribution. (D) A simple subtraction (B)–(A) reveals the nuclei distribution. (E)–(H) Lookup tables (LUTs) are applied to (A)–(C) to mimic hematoxylin, eosin, and saffron (HES) staining. (F) A LUT in pink shades is applied to (A) to mimic the eosin stain. (G) A LUT in dark purple is applied to (D) to resemble the hematoxylin stain. Depending on the desired result (HE versus HES), a LUT in (E) pink shades or in (H) orange/brown shades, virtually reproducing the saffron stain, is applied to (C). (J) Combining (E)–(G) produces an HE-like image, while merging (F)–(H) yields a virtually stained HES-like image (K). (I) HE image of the same section for comparison with the SRH images (J) and (K). Scale bar, 100 µm. (Reprinted from [138] following the terms of Creative Commons CC-BY 4.0 license.)
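The virtual-staining recipe described in the caption, a channel subtraction to isolate nuclei followed by stain-colored lookup tables blended into one image, can be sketched in a few lines. This is a minimal illustration only: the stain colors and the multiplicative blending are our own assumptions, not the LUTs published in [138].

```python
import numpy as np

def srh_virtual_stain(ch2, ch3, shg):
    """Sketch of SRH virtual staining from three grayscale channels.

    ch2, ch3, shg: 2-D float arrays in [0, 1] holding the lipid (CH2),
    protein (CH3), and collagen (SHG) signals, respectively.
    Returns an RGB array mimicking an HES-stained section. The stain
    colors below are illustrative placeholders.
    """
    # Nuclei stand out in the protein-minus-lipid difference image.
    nuclei = np.clip(ch3 - ch2, 0.0, 1.0)

    # A flat-color "LUT": white background darkened toward the stain
    # color as the channel signal increases.
    def apply_lut(channel, stain_rgb):
        stain = np.asarray(stain_rgb, dtype=float)
        return 1.0 - channel[..., None] * (1.0 - stain)

    eosin = apply_lut(ch2, (0.91, 0.46, 0.66))           # pink shades
    hematoxylin = apply_lut(nuclei, (0.30, 0.20, 0.50))  # dark purple
    saffron = apply_lut(shg, (0.80, 0.55, 0.20))         # orange/brown

    # Multiplicative blending approximates stacked absorbing stains.
    return eosin * hematoxylin * saffron
```

Dropping the saffron factor (or passing a zero SHG channel) yields an HE-like rather than HES-like rendering, mirroring the choice between panels (J) and (K).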
Fig. 6. Example of patch prediction for an unannotated image. The best ResNet50 models based on the LOPO-CV are utilized to detect breast cancer patches. Patches A and B are predicted identically by ResNet50 models 8, 9, and 12, while different prediction results are obtained for patches C, D, and E. (Reprinted from [141] following the terms of Creative Commons CC-BY 4.0 license.)

Tables (2)


Table 1. Currently Used In Vivo and Ex Vivo Methods for Intraoperative Tumor Assessment and Their Advantages and Limitations


Table 2. Optical and Spectroscopic Techniques Approved in Clinical Practice (Shaded Gray) and Under Research Used for Intraoperative Tissue Assessment and Their Progress by Deep Learning
