Methods for real-time feature-guided image fusion of intrasurgical volumetric optical coherence tomography with digital microscopy

Abstract

4D-microscope-integrated optical coherence tomography (4D-MIOCT) is an emergent multimodal imaging technology in which live volumetric OCT (4D-OCT) is implemented in tandem with standard stereo color microscopy. 4D-OCT provides ophthalmic surgeons with many useful visual cues not available in standard microscopy; however, it is challenging for the surgeon to effectively integrate cues from simultaneous-but-separate imaging in real time. In this work, we demonstrate progress towards solving this challenge via the fusion of data from each modality guided by segmented 3D features. In this way, a more readily interpretable visualization that combines and registers important cues from both modalities is presented to the surgeon.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

Robert M. Trout, Christian Viehland, Jianwei D. Li, William Raynor, Al-Hafeez Dhalla, Lejla Vajzovic, Anthony N. Kuo, Cynthia A. Toth, and Joseph A. Izatt, "Methods for real-time feature-guided image fusion of intrasurgical volumetric optical coherence tomography with digital microscopy: publisher’s note," Biomed. Opt. Express 14, 5047-5047 (2023)
https://opg.optica.org/boe/abstract.cfm?uri=boe-14-10-5047

25 August 2023: A minor correction was made to an affiliation.

1. Introduction

Optical coherence tomography (OCT) is a depth-resolved imaging technique capable of acquiring high-resolution, 3-dimensional (3D) reflectance profiles of target objects [1]. Since its inception, it has been recognized that these characteristics make it particularly potent as an ophthalmic imaging modality, with its unprecedented ability to provide sub-surface visualization of ocular anatomy [2–4]. This has also led to investigation of OCT imaging as a tool in surgery, where it was displayed side-by-side with the standard surgical microscopy view [5–14]. As imaging speeds continued to increase, acquisition of live intraoperative cross-sections (OCT B-scans) and live intraoperative volumetric images (4D-OCT) became possible. This advanced volumetric visualization modality could potentially further augment intrasurgical OCT, providing greater transverse context to the depth information in single cross-sectional B-scans [15,16]. However, despite the increased functionality of 4D-OCT, it has yet to serve as a wholesale replacement for surgical microscopy due to its lower frame rate, higher latency, reduced field of view, and lack of traditional color imaging cues. Consequently, in current implementations 4D-OCT is displayed alongside traditional color surgical microscopy like its 2D MIOCT predecessors, forming a multimodal imaging technique termed 4D microscope-integrated optical coherence tomography (4D-MIOCT), which has demonstrated potential for clinical utility in a variety of applications in ophthalmic surgery [16–40].

While 4D-MIOCT appears to have clinical promise, there remains potential for improvement in the processing and presentation of the data to the clinical user. Several efforts have been made to address this, targeting different areas of visualization improvement. These include more effective methods of communicating important 3D information encoded in the volumetric OCT data, such as depth-based shading for improved depth perception, boundary-based shading for enhanced 3D structural visualization, and virtual reality interfaces for a more flexible and contextualized viewing experience [41–47].

In this work, we focus on the challenges posed by the need for the surgeon to integrate multiple visual channels when utilizing 4D-MIOCT for surgical guidance. As mentioned, visualizations from each mode are displayed separately side-by-side, requiring the surgeon to incur significant cognitive overhead in identifying features of interest, mentally registering them across separate images, and integrating the information presented to make clinical decisions [48,49]. Because of this, surgeons tend to focus on the microscopy channel the majority of the time, as this is the visualization they are most accustomed to, utilizing the OCT channel only for a narrow set of specific maneuvers. Lowering the mental overhead necessary to integrate OCT visual information with standard microscopy could allow the surgeon to use OCT guidance more effectively throughout surgery. A potential solution to this problem is to remove the spatial separation between the channels when displaying them, eliminating the need to mentally register them. However, an optimal implementation of this proposal requires careful consideration of which visual cues from each modality are important, and how they might be combined within the reduced bandwidth of a single image while remaining appropriately represented.

To approach this problem, we first consider several key requirements that must be fulfilled by an imaging modality applied to real-time guidance. First, the surgical target needs to be properly localized within the greater context of its environment, such that the surgeon is able to effectively acquire and navigate to it. This demands a sufficiently large field of view (FOV) and/or visualization of anatomical landmarks such that the broader positional relationship between the tool and target is clear. Second, it should possess fine resolution beyond that required to localize the target as described previously, to the degree that the necessary manipulation of the target can be performed precisely and accurately. Finally, this display must update with sufficiently high frame rate and low latency that the dynamics of both navigation and target interaction are clearly reproduced.

Projecting these requirements into the context of retinal 4D-MIOCT surgery, the cues of importance from OCT were identified as 3D tissue features (curves, indentations, bumps, thicknesses, etc.), subsurface features (choroidal vessels), and interactions between surgical tools and tissues. In color microscopy imaging, the useful cues consist of the color feature contrast inherent in retinal vessels and colored tools, fundus tissue hue, and the contextualization provided by visualization across larger axial and transverse ranges. In order to capture these cues and effectively preserve them within a single visual, we have developed a pipeline for the feature-informed fusion of the volumetric OCT and color digital microscopy channels of 4D-MIOCT, utilizing shaders which effectively communicate these cues without occluding one another. In doing so, we present a single image channel combining the important visual cues from the separate modalities of 4D-MIOCT for a more interpretable and informative means of surgical guidance.

2. Methods

2.1 Intrasurgical ophthalmic imaging system

For 4D-MIOCT image data acquisition, we used a previously developed custom high-speed swept-source OCT system [39,40].

Per Fig. 1(a), our custom-built OCT engine uses a standard transmissive Mach-Zehnder interferometer configuration with a 400 kHz “ping-pong” swept laser centered at 1050 nm (Axsun Technologies Inc.). The scanner is a Leica Microsystems EnFocus OCT scanner modified for high-speed imaging. For patient safety, an electromechanical shutter is installed at the interferometer input to block the laser output while the system is not scanning. Standard OCT scan configurations consist of ‘fast’ and ‘dense’ volumes. The associated scan parameters were 200 A-scans per B-scan x 80 B-scans per volume for ‘fast’ and 700 A-scans per B-scan x 256 B-scans per volume for ‘dense’. The scan field of view at the retina for ‘fast’ scans was 3 × 3 mm or 5 × 5 mm, depending on the surgeon’s need for denser detail or increased field of view, respectively. ‘Dense’ scans had a field of view of 7 × 7 mm. Clinical operation rates for each configuration were measured at 7.0 and 1.5 volumes per second, as measured from the start time of each galvanometer scan waveform observed on an oscilloscope; these rates are lower than those expected from a 400 kHz sweep rate due to sweeps ignored during B-scan flyback and software delay between volume acquisitions. Color microscopy video was recorded by an Ngenuity visualization system camera (Alcon AG) alongside corresponding volumetric OCT data for a selection of posterior segment surgical cases.

Fig. 1. 4D-MIOCT system used for collecting intraoperative volumetric OCT and color microscopy video. (a) Block diagram of the imaging system; PD = photodetector, FBG = fiber Bragg grating, PC = polarization controller, 3D TV = 3-dimensional television. (b) Head of the imaging system containing the microscope head (blue) and OCT scanner (red). (c) Operating room photo of the deployed system during a retinal surgery case, including the microscope head (blue), OCT scanner (red), and Ngenuity 3D TV (green). Note that in current practice, all of the visual data on the screen (3D OCT render, en face OCT, OCT B-scan, and color microscopy) exist in separate spatial locations, requiring the surgeon to mentally integrate these disparate representations of the surgical field.
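As a rough cross-check of the clinical rates quoted above, the ideal volume rate is simply the 400 kHz sweep rate divided by the number of A-scans per volume. The short sketch below (Python; a back-of-envelope aid, not part of the system software) shows how far the measured rates fall below this bound due to flyback and inter-volume software delays:

```python
# Back-of-envelope volume rates for the 400 kHz swept source, ignoring
# sweeps lost to B-scan flyback and software delay between volumes.
SWEEP_RATE_HZ = 400_000

def ideal_volume_rate(ascans_per_bscan: int, bscans_per_volume: int) -> float:
    """Upper bound on volumes per second if every sweep yielded an A-scan."""
    return SWEEP_RATE_HZ / (ascans_per_bscan * bscans_per_volume)

print(f"fast  (200 x 80):  {ideal_volume_rate(200, 80):.1f} vol/s ideal vs. 7.0 measured")
print(f"dense (700 x 256): {ideal_volume_rate(700, 256):.2f} vol/s ideal vs. 1.5 measured")
```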

2.2 Image fusion pipeline

Fused rendering of acquired image data was conducted post-acquisition in a custom-built software module based on the Vortex library [50] running on an Intel Core i9-9900K CPU @ 3.60 GHz and an NVIDIA GeForce RTX 2060 GPU. Registration of the image channels was conducted manually by optimizing an affine registration transform to minimize the distance between shared features of the image modalities, specifically retinal blood vessel branches and the surgical instrument. The fusion pipeline then began with Gaussian filtering of the structural volumetric OCT data to reduce noise and improve feature contrast (Fig. 2(a, b)). Next, 3D features of interest were segmented from the filtered OCT volume for rendering. These features consisted of three surfaces: the tool, the retinal surface (specifically the internal limiting membrane (ILM)), and the subretinal retinal pigment epithelium (RPE) surface. To start, the ‘first surface’ consisting of the combined tool and retinal surfaces was detected (Fig. 2(c-f)). First, the filtered volume data (Fig. 2(a, b)) was thresholded to avoid segmentation artifacts in the vignetted, lower-intensity periphery of the OCT field of view (Fig. 2(c)), and the axial gradient magnitude of the result was computed (Fig. 2(d)). The first surface was then detected per A-scan as the shallowest depth where the gradient was greater than a preset threshold (Fig. 2(e)). This threshold value was fixed for all fused imaging. The result was then used for detection of the ‘second surface’ containing the RPE (Fig. 2(g-j)). After computing the axial gradient from the unthresholded volume data to avoid artifacts from lateral nonuniformity in intensity within the retina, the rear of the first surface was identified as the shallowest depth beneath the first surface where the gradient value was less than a gradient threshold (Fig. 2(g, h)). The second surface was then detected as the shallowest depth beneath the rear of the first surface where the gradient value was greater than a gradient threshold (Fig. 2(i)). All aforementioned threshold values were initially optimized for surface fidelity and then remained fixed for all data fusion. Finally, the resultant first and second surface segmentations were median-filtered and masked for vignetted portions in the periphery of the field of view, yielding the final surfaces to be used in rendering (Fig. 2(f, j)).

Fig. 2. Feature segmentation pipeline illustrated using recorded data from a soft-tip surgical tool in proximity to the retina. 3D OCT data (a) with a selected cross-sectional region indicated by the dashed blue frame. This region was then filtered (b). The first surface in depth consisting of the retinal surface was segmented from the thresholded result (c-f), using the axial gradient (d) to detect the surface boundary (e, dotted green line). This boundary was then used in the detection of the second surface from the axial gradient of the data (g). After detecting the rear of the first surface (h, dotted yellow line), the second surface was detected from the axial gradient below the rear of the first surface (i, dotted red line). Both detected surfaces were then median filtered and masked for the vignetted peripheral portions to yield the final depth map surfaces for use in rendering with the dashed blue lines representing the location of the cross-section (f, j).
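The following is a minimal sketch of this per-A-scan, gradient-threshold segmentation (Python/NumPy). The array layout `vol[bscan, ascan, depth]`, the specific threshold values, and the use of a negative-gradient criterion for the rear of the first surface are illustrative assumptions rather than the exact parameters of the pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def segment_surfaces(vol, t_int=0.10, t_grad=0.05):
    """Detect the 'first' (tool/ILM) and 'second' (RPE) surfaces per A-scan.
    vol: OCT volume, shape (bscans, ascans, depth), intensities in [0, 1].
    Threshold values here are illustrative placeholders."""
    smoothed = gaussian_filter(vol, sigma=1.5)       # noise suppression
    grad = np.gradient(smoothed, axis=-1)            # axial gradient

    # First surface: shallowest depth where the gradient of the
    # intensity-thresholded volume exceeds t_grad.
    grad_thresh = np.where(smoothed > t_int, grad, 0.0)
    first = np.argmax(grad_thresh > t_grad, axis=-1)

    # Rear of first surface: shallowest depth below `first` where the
    # unthresholded gradient drops below a (negative) threshold.
    z = np.arange(vol.shape[-1])[None, None, :]
    rear = np.argmax((z > first[..., None]) & (grad < -t_grad), axis=-1)

    # Second surface: shallowest depth below the rear where the gradient
    # again exceeds t_grad (argmax returns 0 when nothing is found; such
    # A-scans should be masked along with the vignetted periphery).
    second = np.argmax((z > rear[..., None]) & (grad > t_grad), axis=-1)

    # Median-filter the resulting depth maps to reject outliers.
    return median_filter(first, size=5), median_filter(second, size=5)
```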

Following the segmentation of these surfaces, rendering of the fused visualization was achieved via image-order ray casting into the surfaces [51], which were then shaded based on the identity and orientation of the surface, the coincident volume data, and the coincident color image data. A summary of the logic governing this shading is detailed in the process flow diagram of Fig. 3.

Fig. 3. Process flow diagram detailing the logic used in determining what shading was applied for a ray cast during fused image rendering from 4D-MIOCT data.
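In outline, the per-ray branch logic of Fig. 3 can be sketched as below (Python). The −1 sentinel for “no surface detected” and the injected shader callables are assumptions of this sketch; the shader bodies themselves are sketched alongside the Fresnel and Phong expressions later in this section:

```python
import numpy as np
from typing import Callable

def shade_pixel(py: int, px: int,
                first: np.ndarray, second: np.ndarray,      # depth maps; -1 = none
                color_img: np.ndarray, periph: np.ndarray,  # registered color data
                surface_shader: Callable[[int, int], float],
                subsurface_shader: Callable[[int, int], np.ndarray]) -> np.ndarray:
    """Shading for one rendered pixel, following the Fig. 3 process flow."""
    if first[py, px] < 0:
        # The ray intercepts no volume data: shade from the peripheral 2D
        # microscopy plane positioned at RPE depth (if enabled).
        return periph[py, px]
    shade = surface_shader(py, px)           # translucent Fresnel + Phong shader
    if second[py, px] < 0:
        # No second surface below: tools cast shadows that obscure the RPE,
        # so classify the point as tool and add microscopy color directly.
        return shade + color_img[py, px]
    # Second surface present: the point is retina; add the subsurface shader
    # (microscopy color modulated by sub-RPE OCT intensity), then terminate.
    return shade + subsurface_shader(py, px)
```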

In detail, using the previously segmented surfaces (Fig. 4(a, f)), for each pixel in the rendered image, a ray was cast into the data. If the ray intercepted the first surface (Fig. 4(b)), a shader with a translucent appearance was applied to optimally communicate the 3D features of the surface while still maintaining a degree of transparency, permitting visualization of subsurface shading of color and structural features of the second surface at the RPE. This shader was computed as the combination of a Fresnel and a specular Phong shader (Fig. 4(b-e)) [52,53].

Fig. 4. Surface shading pipeline; the sample was a soft-tip tool contacting the retina. The first surface (a) was intercepted by the raycaster (b), and the Fresnel (c) and specular Phong (d) shaders were computed and summed for each ray intercept (e). The second surface and the portion of the first belonging to the tool (f) were likewise intercepted by the raycaster (g), then shaded by the average subsurface OCT intensity (h) and color microscopy image (i) to produce the subsurface shader (j). The combination of (e) and (j) then yields the fused render (k). In the fused OCT and color microscopy render (k), the light blue is the soft-tip tool, the nearly transparent surface is the ILM, and the layer below that is the RPE and deeper structures (choroid); the dark linear area under the tool is due to tool shadowing from the OCT channel. This visual can be further augmented with the rendering of the remaining periphery of the microscopy image (l) or with the upper retinal surface color-shaded based on the retinal thickness (m), as computed from the axial difference of the segmented surfaces (a, f).

The value of a Fresnel shader depends on the orientation of the surface relative to the viewing direction, yielding higher intensities the more parallel the viewing direction is to the surface. In our application, this shader enhanced object silhouettes and surface visibility at low viewing angles. The Fresnel shader here was computed as

$$I_F = A_F \left| \vec{N} \otimes \vec{V} \right|^n$$
where $A_F$ is the relative weighting of the shader, $\vec{V}$ is the unit vector defined by the direction of viewing, $\vec{N}$ is the surface normal, and $n$ is the power, or ‘sharpness’, of the shader response to the angle made by the viewing direction and surface normal. Visually optimal values for $A_F$ and $n$ for this case were found to be 0.4 and 10, respectively (Fig. 4(c)).
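Below is a minimal sketch of this shader (Python), interpreting the $\otimes$ operator above as a vector cross product, whose magnitude grows as the view direction approaches the surface plane; that interpretation is an assumption consistent with the behavior just described, not a detail stated in the text:

```python
import numpy as np

def fresnel_shader(normal, view, A_F=0.4, n=10):
    """Fresnel-style silhouette term: A_F * |N x V|^n, largest when the
    view direction is nearly parallel to the surface. A_F and n are the
    visually optimized values reported above."""
    N = np.asarray(normal, dtype=float)
    V = np.asarray(view, dtype=float)
    N /= np.linalg.norm(N)
    V /= np.linalg.norm(V)
    return A_F * np.linalg.norm(np.cross(N, V)) ** n
```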

The contribution of the specular Phong shader at each surface intercept was computed as the summation over the contributions of N specular illumination sources:

$$I_P = \sum_{i = 1}^{N} A_{P,i} \left( \vec{R}_i \cdot \vec{V} \right)^{n_i}$$
where $i$ specifies the illumination source, $A_{P,i}$ is the relative weighting of the source, $n_i$ is the power, or ‘sharpness’, of the source, and $\vec{R}_i$ is the unit vector defined by the specular angle of the source. For each source, the specular vector $\vec{R}$ was computed as
$$\vec{R} = 2\left( \vec{N} \cdot \vec{L} \right)\vec{N} - \vec{L}$$
where $\vec{L}$ is the unit vector defined by the lighting direction of the source. In our case, two different specular illumination sources were modeled and summed for optimal visualization. Both shared $A_P$ and $n$ values of 0.4 and 40, respectively, varying only in their lighting directions $\vec{L}$. The lighting direction of one source was tracked to be coaxial with the viewing direction, while the second was tracked to the specular angle of incidence defined by the viewing direction and the horizontal plane (Fig. 4(d)).
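A matching sketch for the specular Phong term follows (Python); clamping negative $\vec{R}_i \cdot \vec{V}$ lobes to zero is standard practice in Phong shading but an assumption here, as the text does not specify it:

```python
import numpy as np

def phong_shader(normal, view, light_dirs, A_P=0.4, n=40):
    """Specular Phong contribution summed over illumination sources.
    light_dirs: lighting direction vectors, one coaxial with the view
    and one at the specular angle with the horizontal, per the text."""
    N = np.asarray(normal, dtype=float)
    V = np.asarray(view, dtype=float)
    N /= np.linalg.norm(N)
    V /= np.linalg.norm(V)
    total = 0.0
    for L in light_dirs:
        L = np.asarray(L, dtype=float)
        L /= np.linalg.norm(L)
        R = 2.0 * np.dot(N, L) * N - L   # specular reflection vector
        total += A_P * max(np.dot(R, V), 0.0) ** n
    return total
```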

After the translucent surface shader was computed from the sum of the Fresnel shader $I_F$ and Phong shader $I_P$ (Fig. 4(e)), subsurface rendering began with a check for whether the second surface was present directly below the current position of the ray. If no surface was detected, it was assumed that this first-surface point belonged to a surgical tool, on the basis that tools cast shadows below themselves, obscuring the second surface. In this case, subsurface shading took place, adding color from the microscopy image to the shading accumulated by the ray for the tool. The ray was then terminated and the shading it acquired was returned to its corresponding pixel in the output image.

In the alternative case where the second surface was detected below the first surface intersection, it was assumed that the RPE was present, and the identity of the intercepted surface point was the retinal surface rather than the surgical tool. In this case, the ray continued propagating, and if it intercepted the second surface (Fig. 4(g)), the average OCT signal intensity over an axial range of approximately 200 microns beneath the intersection point was sampled (Fig. 4(h)), along with the color at the corresponding position in the microscopy image (Fig. 4(i)). The average OCT intensity was then used to modulate the intensity of the sampled color, resulting in a shader containing both color and structural OCT information, namely shadowing from choroidal vasculature imaged in OCT (Fig. 4(j)).
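A sketch of this subsurface sampling step is below (Python). The conversion of the ~200 micron range into voxels depends on the axial pixel pitch, so `extent_px` is an illustrative placeholder, and `color_img` is assumed to be pre-registered to the volume’s lateral grid:

```python
import numpy as np

def subsurface_shader(vol, color_img, b, a, z_rpe, extent_px=20):
    """Modulate the registered microscopy color (RGB) by the mean OCT
    intensity sampled ~200 um beneath the second-surface intercept,
    carrying choroidal-vessel shadowing into the fused shade."""
    z0 = int(z_rpe)
    z1 = min(z0 + extent_px, vol.shape[-1])
    mean_intensity = vol[b, a, z0:z1].mean()   # sub-RPE OCT signal
    return mean_intensity * color_img[b, a]    # intensity-scaled color
```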

Finally, this shader was summed with the translucent first-surface shader computed previously, the ray was terminated, and its shade was returned to the corresponding pixel of the output image. Once all of the rays had terminated, the resulting fused image was displayed (Fig. 4(k)). Here the logic driving the selection of features and their shaders is illustrated: the transparent ILM surface is visualized by a translucent shader, communicating surface topology while maintaining visibility of the subsurface RPE. In this way, the features present in the color- and OCT-based shaders of the RPE can be visualized coincident with the ILM topology, all within a familiar context, as in a photoreal scenario the ILM is indeed transparent, while the RPE is responsible for the majority of color contrast. The context provided by this result can be further augmented with the simple addition of the remaining peripheral data in the microscopy image (Fig. 4(l)). This is achieved by applying microscopy color shading to rays which do not intercept volume data (to avoid occluding volume visualization) but do intercept the 2D microscopy image plane. This plane is positioned at a depth shared by the RPE as a constant approximation to the true depth position of the color source. The axial location of the 2D plane was selected as preferred by the surgeons, compared to a mid- or superficial retinal location. Working at the retinal surface across the 3D-to-2D junction would be challenging for a surgeon; however, the OCT field of view would be shifted to the site of work, typical of how the surgeon re-centers the microscope field of view via foot pedals while working throughout surgery. An improvement in feature detail can also be realized through the use of a retinal thickness channel (Fig. 4(m)), computed simply as the axial difference between the segmented surfaces (Fig. 4(a, f)). In this case, the difference values are linearly mapped to values of the standard ‘jet’ colormap.
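A sketch of the thickness channel is given below (Python/Matplotlib); the normalization to the observed thickness range and the masking of pixels lacking a valid surface pair are assumptions about details the text leaves open:

```python
import numpy as np
from matplotlib import cm

def thickness_shader(first, second, valid):
    """Linearly map retinal thickness (second - first, in pixels) onto
    the standard 'jet' colormap. `valid` masks vignetted or tool pixels
    where thickness is undefined (an assumed masking convention)."""
    thickness = (second - first).astype(float)
    t = thickness[valid]
    span = max(t.max() - t.min(), 1e-6)
    norm = np.clip((thickness - t.min()) / span, 0.0, 1.0)
    rgb = cm.jet(norm)[..., :3]   # drop the alpha channel
    rgb[~valid] = 0.0             # black where thickness is undefined
    return rgb
```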

3. Results

3.1 Fused visualization

Recorded surgical 4D-MIOCT image sequences were selected post-operatively to examine the visual impact of image fusion and measure its performance. Three representative time series of data collected in the ‘fast’ scan configuration (340 A-scans per B-scan x 80 B-scans per volume) were selected for a time-resolved demonstration of a variety of potentially useful characteristics of fused imaging. These include suction of a retinal edema near the fovea by a soft-tip tool, a membrane scrape using a finesse loop tool, and a forceps grasp of the retinal surface.

3.1.1 Soft-tip tool suction

Figure 5(a) presents four frames of interest in an instance of image fusion applied over the course of a soft-tip suction maneuver at the retina, including the microscopy image (row 1), 4D-OCT (row 2, rendering methodology from Viehland et al. [42]), and fusion without (row 3) and with (row 4) peripheral microscopy rendering. Over the course of the 2.4-second maneuver, the tool approaches (columns 1, 2), contacts (column 3, blue arrow), and releases (column 4) the retina. Examining the first frame of the fusion result (column 1, row 3), several features of interest are evident. The surface indentation of the fovea (red arrow) is all but invisible in the microscopy view, but is visualized in the fused view courtesy of the translucent shading of the 3D surface from the OCT imaging. Additionally, subsurface choroidal vessel visibility (green arrow) is enhanced by the subsurface modulation of the image intensity by the OCT signal, while more superficial retinal vessels (yellow arrow) are also visualized. From the color image, tool contrast is much improved, greater contextualization is provided by the increased FOV (row 4), and a generally more familiar, photorealistic appearance is presented to the surgeon.

Fig. 5. (a) Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach (columns 1, 2), contact (column 3), and withdrawal (column 4) of a soft-tip tool at the retina (Visualization 1). 2-D color microscopy (row 1) and 4D-OCT (row 2) are fused into a single visualization without (row 3) and with (row 4) peripheral color image. Visualized features of interest include the fovea (red arrow), retinal vessels (yellow arrow), choroidal vessels (green arrow), and tool-tissue contact (blue arrow). (b) 2-D color image (left) and en face fused render without translucent surface shader (right) for comparison of color feature translation in the fused render, notably instrument color (purple arrow) and retinal vessel (yellow arrow) contrast. (c) OCT volume render (left) and translucent surface shader (right) for comparison of 3-D surface feature translation in the fused render, notably the topology of the foveal depression (red arrow).

While the fused channel does manage to combine cues of interest into a single visual, it is worth acknowledging the tradeoff apparent here. Namely, the surface topology of the retina is visible but somewhat less clear than that presented in row 2. This is a necessary consequence of superimposing two coincident features; that is, in introducing transparency to the retinal surface, the visibility of subsurface features and color is increased at the expense of the visibility of the retinal surface itself. While a 1:2 composition ratio between Fig. 4(e) and (j) is applied here for a photoreal appearance, it is conceivable that this balance may need to change depending on the particular phase of surgery, with (e) better suited to target manipulation and (j) to target acquisition. Consequently, to enable a clearer evaluation of the translation of color and 3D OCT features into the fused visualization, alternative visualizations of timepoint t = 1.6 s are provided in Fig. 5(b) and (c). Panel (b) presents the 2-D color image alongside the fused visual from row 4 of panel (a), but rendered from the en face direction without the translucent surface shader. In this way, a closer comparison may be drawn between the two visuals, and it may be verified that color features of interest, including instrument color contrast (purple arrow) and retinal vessels (yellow arrow), have been translated into the fused visualization. Panel (c) serves a similar purpose for the translation of 3D surface features evident in the OCT volume visuals of panel (a), row 2. Here the OCT volume data rendered via Viehland’s method (c, left) is displayed alongside the translucent surface shader channel of the fused visual. 3D features of interest in the tissue surface, like the foveal pit (red arrow) visualized in Viehland’s method, are translated into the surface shader of the fused method.

3.1.2 Finesse loop scrape

Figure 6(a) depicts the scraping of the retinal surface by a finesse loop tool as it approaches, contacts, scrapes, and retracts (columns 1-4, respectively). As in the soft-tip example, the fused visualization in row 3 exhibits a combination of the useful cues from each image channel. In particular, the color image shows the brightening of the tissue in the vicinity of the tool contact point (green arrow). This brightening is a result of blanching the tissue of blood, which serves as a useful cue of tool contact replicated in the fused image. Further, the indentation in the retinal surface left behind following the scrape (column 4, red arrow) is visualized from the OCT surface shading. This is an important feature that may be indicative of retinal damage due to the scrape, as scrapes should not result in retinal deformations that remain after the tool is withdrawn. Finally, the fused visual maintains the continuity of the vascular network visible in the color image across the OCT data (row 3), with the subsurface OCT shading visualizing vessel shadowing (yellow arrow) not evident in the renders of row 2, improving the contextualization of the 3D data. Unfortunately, in addition to these features, an undesirable artifact is also evident in the visualization: the thin polyamide loop portion of the instrument has been shaded with the translucent shader intended for the ILM surface. This is a result of segmentation failure; the loop’s profile is too thin to shadow deeper structures strongly enough for it to be classified as a tool with a threshold that does not compromise RPE detection in other parts of the scene. In this case, when forced to trade off between the fidelity of tool and retinal appearance, retinal appearance was prioritized. As with the corresponding panels of Fig. 5, panels (b) and (c) serve to enable clearer evaluation of the translation of color and 3D OCT features into the fused visualization. In this case (t = 1.6 s), (b) once again demonstrates correspondence of vascular features in the 2D microscopy (left, black arrow) with those in the fused visual (right). In fact, the additional shading from the OCT data appears to increase the vessel contrast beyond that in the 2D microscopy image. Panel (c) also presents feature correspondence in the tissue indentation from compression by the instrument (white arrow); however, the definition of the feature is inferior in the fused method compared to the Viehland method. In this scenario, fused visualization relies more heavily on video context to support scene comprehension of the reflective shader.

Fig. 6. (a) Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach, contact, scrape and withdrawal (columns 1-4 respectively) of a finesse loop at the retina (Visualization 2). 2-D color microscopy (row 1) and 4D-OCT (row 2) are fused into a single visualization with peripheral color image (row 3). Visualized features of interest include tissue blanching (green arrow), persisting retinal deformation (red arrow), and vessels (yellow arrow). (b) 2-D color image (left) and en face fused render without translucent surface shader (right) for comparison of color feature translation in the fused render, notably retinal vessel (black arrow) contrast. (c) OCT volume render (left) and translucent surface shader (right) for comparison of 3-D surface feature translation in the fused render, notably the topology of the indentation of the tissue surface by the instrument (white arrow).

3.1.3 Forceps grasp

Figure 7(a) visualizes the approach, halt, grasp, and withdrawal (columns 1-4, respectively) of a pair of microsurgical forceps. In this case, the OCT scan size was reduced to increase the scan density within the FOV. While the two previous examples used larger scan sizes of 5 × 5 mm, here the size is only 3 × 3 mm. This reduction in the lateral FOV emphasizes the utility of peripheral color rendering, which recovers the lost context of the extent of the surrounding retina and tool body. In this way, the surgeon can maintain their localization of the target area and retain awareness of the tool even as it exits the OCT FOV, while leveraging the increased 3D detail afforded by the denser OCT scan. Once again, panel (b) (t = 0.0 s) demonstrates correspondence of vascular features in the 2D microscopy (black arrow), while panel (c) presents feature correspondence in the form of tissue surface bulges from subsurface vessels (white arrow).

Fig. 7. (a) Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/3 × 3 mm) for the approach, contact, grasp and withdrawal (columns 1-4 respectively) of forceps at the retina (Visualization 3). 2-D color microscopy (row 1) and 4D-OCT (row 2) are fused into a single visualization with peripheral color image (row 3). (b) 2-D color image (left) and en face fused render without translucent surface shader (right) for comparison of color feature translation in the fused render, notably retinal vessel (black arrow) contrast. (c) OCT volume render (left) and translucent surface shader (right) for comparison of 3-D surface feature translation in the fused render, notably the topology of the raised portions of the tissue due to subsurface vessels (white arrow).

3.2 Retinal thickness visualization

As mentioned previously, an alternative surface shader may be derived from the segmented surface data in the form of a retinal thickness shader. This rendering can provide enhanced feature detail compared to the fused shaders, at the expense of photorealism.

3.2.1 Soft-tip tool suction

As reflected in the imaging of Fig. 8, the retinal thickness shader (row 2) demonstrates its usefulness in providing enhanced visualization of the 3D retinal morphology encoded in the OCT data.

Fig. 8. Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach (column 1, 2), contact (column 3), and withdrawal (column 4) of a soft-tip tool at the retina (Visualization 1). Segmented surfaces from 4D-OCT data (row 1) are used to compute retinal thickness for use in surface shading (row 2).

The fovea is defined with high contrast as the retina thins in its vicinity, while the asymmetry of the thickness of the regions flanking the fovea is revealed. Furthermore, the dynamics of the interaction between the soft-tip tool and the retinal tissue are clearly visible, with the suction drawing the surface up towards the tool, thickening the retina in its vicinity as it comes into contact and subsequently releases it.

3.2.2 Finesse loop scrape

Similar to the soft tip case, the thickness shader in Fig. 9 also reveals enhanced visualization of 3D features over the course of the finesse loop scrape.

Fig. 9. Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach (column 1), contact (column 2), scrape (column 3) and withdrawal (column 4) of a finesse loop tool at the retina (Visualization 2). Segmented surfaces from 4D-OCT data (row 1) are used to compute retinal thickness for use in surface shading (row 2). Shader enhances visualization of retinal deformation dynamics indicative of potential damage (purple arrows).

In this case, the thickness shader provides more detail concerning the potential retinal damage (purple arrows). In the third frame, the bunching of the retina and the contact pressure of the loop are visible as retinal thickening and thinning, respectively. This deformation of the retina is clearly seen to persist after the withdrawal of the loop in frame 4, alerting the surgeon to potentially excessive compression of the retina. Unfortunately, artifacts once again manifest in the tool shading due to the inaccurate segmentation of the loop discussed previously in Section 3.1.2. As the tool surface is not part of the retina, the desired behavior is the masking of the tool surface in this shader to avoid confusion. However, the misclassification of the loop as ILM, and of the ILM in the tool’s shadow as RPE, results in the tool surface being shaded with the retinal thickness shader rather than being masked. In this case, the shader hue is a function of the depth between the tool and the ILM. This is most clearly illustrated in the thickness visual of column 2. Portions of the tool not yet in contact with the retina are shaded with the retinal thickness shader according to the distance of the tool from the retina. However, once the tool contacts the retina and is effectively coincident with the ILM, it is then shaded according to the retinal thickness, producing a discontinuity in the shader along the loop (the deep blue band on the portion of the tool just above the retina changes abruptly to yellow-green on the portion in contact).

While the thickness shader in these two instances is presented here in isolation for clarity, it could also be composited with any of the previously detailed fusion shaders to add improved feature detail at the expense of photorealism.

3.3 Performance

Performance of the 4D-MIOCT fusion pipeline was measured and compared with that of Viehland et al. [42] using NVIDIA’s Visual Profiler tool.

Using the reported total execution times in Table 1, we computed the volume rate as the inverse of these values and examined the feasibility of the fused pipeline for real-time application by comparing it with the data acquisition rate of our 4D-MIOCT system. The theoretical maximum acquisition rates (with a 400 kHz sweep rate) for fast scan (340 A-scans per B-scan x 80 B-scans) and dense scan (840 A-scans per B-scan x 256 B-scans) volumes were 14.7 and 1.9 volumes per second, respectively, as compared with 22.2 and 10.6 for the fusion method, and 72.0 and 12.3 for Viehland’s method.

Table 1. Fusion Pipeline Performance Breakdown (n = 8 OCT volumes)
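As a consistency check on these numbers, the per-volume execution time is the inverse of the volume rate, and the added latency quoted in the Discussion follows from the difference in execution times between methods; the sketch below (Python, a back-of-envelope aid rather than profiler output) reproduces those figures:

```python
# Cross-check of Section 3.3 / Table 1 rates: per-volume execution time is
# the inverse of the volume rate, and the fused method's added latency is
# its execution-time difference versus Viehland's method [42].
rates = {  # volumes per second
    "fast":  {"acq": 14.7, "fused": 22.2, "viehland": 72.0},
    "dense": {"acq": 1.9,  "fused": 10.6, "viehland": 12.3},
}
for cfg, r in rates.items():
    t_fused = 1e3 / r["fused"]      # ms per volume, fused pipeline
    t_prev = 1e3 / r["viehland"]    # ms per volume, prior pipeline
    print(f"{cfg}: fused {t_fused:.0f} ms/vol "
          f"(+{t_fused - t_prev:.0f} ms vs. prior method), "
          f"{r['fused'] / r['acq']:.1f}x the acquisition rate")
```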

4. Discussion

While both methods appear feasible for real-time execution in our system, there was a disparity in performance between the fused and Viehland methods. The disparity can be chiefly attributed to their differences in filtering and rendering. In the fast scan case, the fused method was slower due to the branching behavior of its raycasting kernel. These branches occur from conditionals (if…else) based on the identity of the intercepted surface, an inefficiency in kernel design due to the need to devote a kernel to each branch possibility for a single ray cast. However, this slowdown scales with ray number, not volume size. Because of this, the relative disparity in performance was reduced at larger volume sizes like those acquired in the dense scanning configuration.

Beyond relative performance differences within the visualization pipeline, there are further processes which must be considered when extrapolating real-time performance from these data, as they can significantly lower the operational volume rate compared to the maximum theoretical rate, depending on their computational complexity and the hardware they are executed on. These processes include stereo pair rendering and image registration, as well as latency between data acquisition and visualization. Stereo pair rendering is a critical feature of surgical image guidance which provides stereo depth perception to the clinician, while the performance data here are reported for the rendering of only one perspective of the scene. However, it would be quite feasible to scale the process to a stereo pipeline without loss of performance by utilizing, for example, additional graphics processing units (GPUs). In this way, the two separate rendering passes required for the two perspectives could be executed in parallel on their respective hardware. A similar strategy has been applied to data acquisition and storage: execution on highly parallelized hardware such as GPUs enables the processing of one set of data while the next is being acquired. In this way, if the fusion method can execute on data more rapidly than data acquisition, visualization will not fall behind acquisition. However, a fixed delay (latency) will exist proportional to the execution time of the fusion method, and although the method can keep up with the existing system acquisition rate, it will impose a small but noticeable amount of additional latency compared to Viehland’s previous method: 31 and 13 additional milliseconds for the fast and dense scan configurations, respectively, as computed from the difference in overall execution time.

Regarding image registration between the OCT and microscopy image channels, the manual registration applied in this work is clearly not an acceptable method for real-time use; a faster, automated technique is necessary. Image processing techniques for the multimodal registration of OCT and microscopy have been demonstrated previously, but thus far have not achieved sufficient speeds [54–56]. Alternatively, registration can be achieved via the optical alignment of each modality. This strategy has been implemented in commercial surgical MIOCT systems including the OPMI LUMERA 700 (Zeiss Meditec) and EnFocus (Leica Microsystems). In this way, the degrees of freedom between the optical paths of the two channels can be constrained to the extent that the additional computational load imposed by registration can be limited to a one-time alignment lasting the life of the system.

In addition, while this work demonstrates the proof of principle and feasibility of fused visualization of the image data, there are various potential adverse use cases and pipeline optimization opportunities that merit discussion for future work. While the peripheral 2D color rendering affords greater contextualization beyond the OCT field of view, continuity across the image boundaries is maintained only for RPE-depth features, due to the placement of the 2D image plane at the RPE depth. For features in the color image not necessarily positioned at the RPE, like instruments, there could be an undesirable discontinuity in the perceived depth of the tool when moving in and out of the OCT field of view, which could be further exacerbated when the viewing angle is in a more lateral direction (e.g., more parallel to the 2D image plane). While this point should be weighed against the value of improved lateral position awareness outside the OCT field of view, depth perception is a critical aspect of surgical guidance, thus any potential adverse effects on it should be considered carefully. Visual conditions may exist where it is more appropriate to turn off rendering of the peripheral 2D image data for the sake of depth comprehension. Despite these very real concerns about potential confusion for the surgeon, surgeons responded very positively to these fused images as being far more useful than separate images of OCT and color video adjacent to each other on a viewing screen [24,39,40]. Their response was that, both viewed as video and as still images, the fused images provided useful information to the surgeon. While the projection into the “shadow” of the instrument is visible, this was not distracting to a surgeon who knew what was happening. Its visibility depended on the angle of view; many aspects of lighting and visualization in conventional surgical viewing, such as glare and shadow, vary as the surgeon positions instruments and lighting during the case. This is similar, and has a useful surface lighting effect which the surgeon can also control.

The segmentation techniques utilized here, although fast, are simple and not necessarily robust; in addition to the artifacts detailed previously, additional development is necessary where visualization of more structurally complex scenarios is desired. In the current implementation, the first two surfaces are segmented, classified as tool, ILM, or RPE, and rendered accordingly. While these are undoubtedly important structures to visualize in surgery, there are many more intraretinal layers between them resolved by OCT but not segmented for display in this implementation. Methods exist for the rapid and robust segmentation of these layers in addition to the ILM and RPE [57–62], and it is conceivable that alternative visualizations including them could prove useful in surgery. Beyond this, there are several surgical circumstances that can introduce (membrane separation from peels), remove (macular holes), or otherwise modify (transplantations) layer structures relative to the normal retinal structure assumed by the method here. In such cases, the surface classification strategy employed here would become inaccurate, potentially producing visualizations which are confusing or aberrant in appearance. It is possible that more sophisticated segmentation strategies could be applied to address this issue [63–66].
Further, there are more challenging imaging conditions not yet examined here which are known to negatively impact standard OCT volume visuals, and which will likely have a similar impact on the method detailed here. These conditions include saturation artifacts, lens reflection artifacts, signal attenuation (roll-off, defocus, vignetting, low object reflectivity), and tissue pathologies which manifest in more complicated 3D structures. In the future, it will be necessary to examine the impact this wider range of potential conditions and samples has on segmentation in order to obtain a representative measure of clinical reliability. Finally, while the rendering pipeline was implemented as an image-order raycaster, it is fundamentally a surface renderer rather than a volume renderer, and as such there is opportunity for improved rendering speed if object-order rasterization of the surfaces were used rather than the image-order raycasting required for volumetric rendering. Such speed improvements may not be strictly necessary for the system detailed here, whose acquisition rate is limited by both optical throughput and ocular safety considerations [67], including consideration of other sources of ocular illumination during surgery. Regardless, even if acquisition speed remains the same, the increased execution speed provided by these changes would still be welcome for the resulting reduction in observed latency between data acquisition and image display.

Finally, given the close relationship of the work with its clinical context, the next logical step would be to conduct clinical validation testing – specifically, to quantify the degree of benefit provided by the visualization technique compared to previous methods in a context representative of the surgical environment. This involves conducting surgical tasks like object approach, contact, and grasp in a controlled but representative environment, such as an ex vivo porcine eye. During the tasks, quantitative (execution time, number of adverse contacts, positioning precision and accuracy) and qualitative (user preference and task load surveys) data can be collected as performance metrics to compare different visualization methods [24,44,68–71]. Additionally, it would be ideal to extend this technique from retinal imaging to anterior segment imaging. Retinal surgery is a critical part of ophthalmic intervention, but it shares the space with anterior procedures including cataract surgery and intraocular lens implantation. We have found that anterior applications share many of the same challenges of 4D-MIOCT outlined here; therefore, strong motivation exists for adapting the visualization principles realized in this work to anterior segment surgery.

5. Conclusion

In this work, we have demonstrated the potential of fused rendering of 4D-MIOCT data to improve its clinical utility and interpretability. Color image information is combined with subsurface OCT features like choroidal vessels via modulation of the color intensity by the local OCT signal intensity. In the case of smaller-FOV OCT scans, rendering of the peripheral color image surrounding the OCT volume provides greater contextualization and localization to the surgeon. These cues are combined with surface topology visualization via Fresnel- and Phong-type shaders, which effectively communicate retinal surface features without causing a detrimental degree of occlusion of the subsurface image. Additionally, other useful non-photorealistic visuals can be derived from this pipeline, including retinal thickness mapping for high-detail visualization of retinal morphology and tool-tissue interactions.

Funding

National Institutes of Health (5U01EY028079-05).

Acknowledgments

The authors thank the staff at Duke Eye Center for their assistance in study coordination, clinical operations, and data acquisition.

Disclosures

RT: Duke University (P), CV: Duke University (P), JDL: Duke University (P), WR: Duke University Medical Center (P), AD: Duke University (P), Leica Microsystems (P, R), Alcon Inc. (C), LV: Duke University Medical Center (P), AK: Duke University Medical Center (P), Leica Microsystems (P), CT: Duke University Medical Center (P), Alcon Inc. (R), Emmes Inc. (C), Theia Imaging LLC (I), JAI: Duke University (P), Leica Microsystems (P, R), Alcon Inc. (C).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications, 2nd ed. (Springer International Publishing, 2015). [CrossRef]  

2. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

3. E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto, “In vivo retinal imaging by optical coherence tomography,” Opt. Lett. 18(21), 1864–1866 (1993). [CrossRef]  

4. J. A. Izatt, M. R. Hee, E. A. Swanson, C. P. Lin, D. Huang, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto, “Micrometer-scale resolution imaging of the anterior eye in vivo with optical coherence tomography,” Arch. Ophthalmol. 112(12), 1584–1589 (1994). [CrossRef]  

5. J. R. Wilkins, C. A. Puliafito, M. R. Hee, J. S. Duker, E. Reichel, J. G. Coker, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto, “Characterization of epiretinal membranes using optical coherence tomography,” Ophthalmology 103(12), 2142–2151 (1996). [CrossRef]  

6. C. A. Toth, R. Birngruber, S. A. Boppart, M. R. Hee, J. G. Fujimoto, C. D. Dicarlo, E. A. Swanson, C. P. Cain, D. G. Narayan, G. D. Noojin, and W. P. Roach, “Argon laser retinal lesions evaluated in vivo by optical coherence tomography,” Am. J. Ophthalmol. 123(2), 188–198 (1997). [CrossRef]  

7. G. Ripandelli, A. M. Coppé, S. Bonini, R. Giannini, S. Curci, E. Costi, and M. Stirpe, “Morphological evaluation of full-thickness idiopathic macular holes by optical coherence tomography,” Eur. J. Ophthalmol. 9(3), 212–216 (1999). [CrossRef]  

8. K. Mikajiri, A. A. Okada, M. Ohji, T. Morimoto, S. Sato, A. Hayashi, S. Kusaka, Y. Saito, and Y. Tano, “Analysis of vitrectomy for idiopathic macular hole by optical coherence tomography,” Am. J. Ophthalmol. 128(5), 655–657 (1999). [CrossRef]  

9. R. P. Gallemore, M. J. Jumper, B. W. McCuen, G. J. Jaffe, E. A. Postel, and C. A. Toth, “Diagnosis of vitreoretinal adhesions in macular disease with optical coherence tomography,” Retina 20(2), 115–120 (2000). [CrossRef]

10. P. Massin, C. Allouch, B. Haouchine, F. Metge, M. Paques, L. Tangui, A. Erginay, and A. Gaudric, “Optical coherence tomography of idiopathic macular epiretinal membranes before and after surgery,” Am. J. Ophthalmol. 130(6), 732–739 (2000). [CrossRef]  

11. G. Geerling, M. Müller, C. Winter, H. Hoerauf, S. Oelckers, H. Laqua, and R. Birngruber, “Intraoperative 2-dimensional optical coherence tomography as a new tool for anterior segment surgery,” Arch. Ophthalmol. 123(2), 253–257 (2005). [CrossRef]  

12. Y. K. Tao, J. P. Ehlers, C. A. Toth, and J. A. Izatt, “Intraoperative spectral domain optical coherence tomography for vitreoretinal surgery,” Opt. Lett. 35(20), 3315–3317 (2010). [CrossRef]  

13. J. P. Ehlers, W. J. Dupps, P. K. Kaiser, J. Goshe, R. P. Singh, D. Petkovsek, and S. K. Srivastava, “The Prospective Intraoperative and Perioperative Ophthalmic ImagiNg With Optical CoherEncE TomogRaphy (PIONEER) Study: 2-Year Results,” Am. J. Ophthalmol. 158(5), 999–1007.e1 (2014). [CrossRef]  

14. J. P. Ehlers, “Intraoperative optical coherence tomography: past, present, and future,” Eye 30(2), 193–201 (2016). [CrossRef]  

15. O. Carrasco-Zevallos, B. Keller, C. Viehland, L. Shen, B. Todorich, C. Shieh, A. Kuo, C. Toth, and J. A. Izatt, “4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers,” Proc. SPIE 9693, 969306 (2016). [CrossRef]  

16. B. Todorich, C. Shieh, P. J. Desouza, O. M. Carrasco-Zevallos, D. L. Cunefare, S. S. Stinnett, J. A. Izatt, S. Farsiu, P. Mruthyunjaya, A. N. Kuo, and C. A. Toth, “Impact of microscope-integrated OCT on ophthalmology resident performance of anterior segment surgical maneuvers in model eyes,” Invest. Ophthalmol. Visual Sci. 57(9), OCT146 (2016). [CrossRef]  

17. C. Toth, L. Shen, O. Carrasco-Zevallos, B. Keller, C. Viehland, P. Hahn, A. Kuo, and J. Izatt, “Real-time volumetric (4D) optical coherence tomography visualization of vitreoretinal surgery,” in (2015).

18. O. Carrasco-Zevallos, B. Keller, Christian Viehland, Liangbo Shen, Gar Waterman, Philip Desouza, Paul Hahn, Anthony N. Kuo, Cynthia A. Toth, and Joseph A. Izatt, “Swept-source microscope integrated optical coherence tomography for real-time 3D imaging of ophthalmic human surgery,” in Ophthalmic Technologies XXV (2015).

19. C. A. Toth, O. Carrasco-Zevallos, B. Keller, L. Shen, C. Viehland, D. H. Nam, P. Hahn, A. N. Kuo, and J. A. Izatt, “Surgically integrated swept source optical coherence tomography (SSOCT) to guide vitreoretinal (VR) surgery,” Invest. Ophthalmol. Visual Sci. 56, 3512 (2015).

20. O. Carrasco-Zevallos, B. Keller, C. Viehland, L. B. Shen, G. Waterman, C. Chukwurah, P. Hahn, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Real-time 4D stereoscopic visualization of human ophthalmic surgery with swept-source microscope integrated optical coherence tomography,” Invest. Ophthalmol. Visual Sci. 56, 4085 (2015).

21. D. S. Grewal, P. K. Bhullar, N. D. Pasricha, O. M. Carrasco-Zevallos, C. Viehland, B. Keller, L. Shen, J. A. Izatt, A. N. Kuo, C. A. Toth, and P. Mruthyunjaya, “Intraoperative 4D microscope-integrated optical coherence tomography guided transvitreal retinochoroidal biopsy for choroidal melanoma,” Retina 37(4), 796–799 (2017). [CrossRef]  

22. Oscar Carrasco-Zevallos, Brenton Keller, Christian Viehland, Liangbo Shen, Bozho Todorich, Christine Shieh, Anthony Kuo, Cynthia Toth, and Joseph A. Izatt, “Guidance of Human Ophthalmic Microsurgery with 4D Microscope Integrated OCT,” in Ophthalmic Technologies XXVI (2016).

23. O. M. Carrasco-Zevallos, B. Keller, C. Viehland, L. B. Shen, M. I. Seider, J. A. Izatt, and C. A. Toth, “Optical coherence tomography for retinal surgery: perioperative analysis to real-time four-dimensional image-guided surgery,” Invest. Ophthalmol. Visual Sci. 57(9), OCT37–OCT50 (2016). [CrossRef]  

24. O. M. Carrasco-Zevallos, B. Keller, C. Viehland, L. Shen, G. Waterman, B. Todorich, C. Shieh, P. Hahn, S. Farsiu, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Live volumetric (4D) visualization and guidance of in vivo human ophthalmic surgery with intraoperative optical coherence tomography,” Sci. Rep. 6(1), 31689 (2016). [CrossRef]  

25. P. K. Bhullar, N. D. Pasricha, O. M. Zevallos-Carrasco, C. Viehland, B. Keller, M. B. Daluvoy, P. Challa, S. F. Freedman, J. A. Izatt, C. A. Toth, and A. N. Kuo, “4D Microscope-Integrated OCT to visualize depth-related steps during anterior segment and external eye procedures,” Invest. Ophthalmol. Visual Sci. 57, 467 (2016).

26. N. D. Pasricha, P. K. Bhullar, C. Shieh, C. Viehland, O. M. Carrasco-Zevallos, B. Keller, J. A. Izatt, C. A. Toth, P. Challa, and A. N. Kuo, “Four-dimensional microscope- integrated optical coherence tomography to enhance visualization in glaucoma surgeries,” Indian J. Ophthalmol. 65, 57–59 (2017). [CrossRef]  

27. O. M. Carrasco-Zevallos, C. Viehland, B. Keller, M. Draelos, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Review of intraoperative optical coherence tomography: technology and applications [Invited],” Biomed. Opt. Express 8(3), 1607–1637 (2017). [CrossRef]  

28. O. Carrasco-Zevallos, C. Viehland, B. Keller, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Microscope-integrated OCT at 800 kHz line rate for high speed 4D imaging of ophthalmic surgery,” Invest. Ophthalmol. Visual Sci. 58, 3813 (2017).

29. H. Gabr, X. Chen, T. H. Mahmoud, L. Vajzovic, S. T. Hsu, A. Dandridge, K. Sleiman, O. Carrasco-Zevallos, C. Viehland, J. A. Izatt, and C. A. Toth, “Visualization from microscope-integrated swept-source OCT in vitreoretinal surgery for diabetic tractional retinal detachment,” Retina 38, S110–S120 (2018). [CrossRef]

30. K. Sleiman, L. Vajzovic, O. Carrasco-Zevallos, M. Klingeborn, A. Dandridge, C. Viehland, C. B. Rickman, J. A. Izatt, and C. A. Toth, “Four-dimensional microscope-integrated optical coherence tomography (4D MIOCT) guidance in subretinal surgery,” Retina 39, S194–S198 (2019). [CrossRef]

31. L. Vajzovic, K. Sleiman, A. Dandridge, O. Carrasco-Zevallos, C. Viehland, A. Maminishkis, J. Amaral, K. Bharti, C. A. Toth, and J. A. Izatt, “Subretinal therapy delivery technique guided by intraoperative 4-dimensional microscope-integrated optical coherence tomography,” Invest. Ophthalmol. Visual Sci. 58, 3122 (2017).

32. C. D. Lu, N. K. Waheed, A. Witkin, C. R. Baumal, J. J. Liu, B. Potsaid, A. Joseph, V. Jayaraman, A. Cable, K. Chan, J. S. Duker, and J. G. Fujimoto, “Microscope-integrated intraoperative ultrahigh-speed swept-source optical coherence tomography for widefield retinal and anterior segment imaging,” Ophthalmic Surg. Lasers Imaging Retina 49(2), 94–102 (2018). [CrossRef]  

33. M. I. Seider, O. M. Carrasco-Zevallos, R. Gunther, C. Viehland, B. Keller, L. Shen, P. Hahn, T. H. Mahmoud, A. Dandridge, J. A. Izatt, and C. A. Toth, “Real-time volumetric imaging of vitreoretinal surgery with a prototype microscope-integrated swept-source OCT device,” Ophthalmol. Retina 2(5), 401–410 (2018). [CrossRef]  

34. J. P. Kolb, W. Draxinger, J. Klee, T. Pfeiffer, M. Eibl, T. Klein, W. Wieser, and R. Huber, “Live video rate volumetric OCT imaging of the retina with multi-MHz A-scan rates,” PLoS One 14, e0213144 (2019). [CrossRef]  

35. A. Pujari, D. Agarwal, R. Chawla, A. Kumar, and N. Sharma, “Intraoperative optical coherence tomography guided ocular surgeries: critical analysis of clinical role and future perspectives,” Clin. Ophthalmol. 14, 2427–2440 (2020). [CrossRef]  

36. I. Laíns, J. C. Wang, Y. Cui, R. Katz, F. Vingopoulos, G. Staurenghi, D. G. Vavvas, J. W. Miller, and J. B. Miller, “Retinal applications of swept source optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA),” Prog. Retinal Eye Res. 84, 100951 (2021). [CrossRef]  

37. A. Pujari, D. Agarwal, and N. Sharma, “Clinical role of swept source optical coherence tomography in anterior segment diseases: a review,” Semin. Ophthalmol. 36(8), 684–691 (2021). [CrossRef]  

38. A. Britten, P. Matten, J. Weiss, M. Niederleithner, H. Roodaki, B. Sorg, N. Hecker-Denschlag, W. Drexler, R. A. Leitgeb, and T. Schmoll, “Surgical microscope integrated MHz SS-OCT with live volumetric visualization,” Biomed. Opt. Express 14(2), 846–865 (2023). [CrossRef]  

39. C. Viehland, J. D. Li, A.-H. Dhalla, W. Raynor, L. Vajzovic, A. N. Kuo, C. Toth, and J. A. Izatt, “High Speed Volumetric Intrasurgical Optical Coherence Tomography at 400 kHz with Real Time, 4D Visualization of Surgical Maneuvers,” Invest. Ophthalmol. Visual Sci. 61, 3244 (2020).

40. J. D. Li, C. Viehland, A.-H. Dhalla, W. Raynor, R. Trout, A. N. Kuo, C. A. Toth, L. M. Vajzovic, and J. A. Izatt, “Intraoperative optical coherence tomography with 4D visualization of surgical maneuvers and quantitative measurements of intraocular structures,” in Ophthalmic Technologies XXXII (SPIE, 2022), Vol. PC11941, p. PC119410S.

41. L. B. Shen, O. Carrasco-Zevallos, B. Keller, C. Viehland, G. Waterman, P. S. Hahn, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography,” Biomed. Opt. Express 7(5), 1711–1726 (2016). [CrossRef]  

42. C. Viehland, B. Keller, O. M. Carrasco-Zevallos, D. Nankivil, L. B. Shen, S. Mangalesh, D. T. Viet, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Enhanced volumetric visualization for real time 4D intraoperative ophthalmic swept-source OCT,” Biomed. Opt. Express 7(5), 1815–1829 (2016). [CrossRef]  

43. M. Draelos, B. Keller, C. Viehland, O. M. Carrasco-Zevallos, A. Kuo, and J. Izatt, “Real-time visualization and interaction with static and live optical coherence tomography volumes in immersive virtual reality,” Biomed. Opt. Express 9(6), 2825–2843 (2018). [CrossRef]  

44. I. D. Bleicher, M. Jackson-Atogi, C. Viehland, H. Gabr, J. A. Izatt, and C. A. Toth, “Depth-Based, Motion-Stabilized Colorization of Microscope-Integrated Optical Coherence Tomography Volumes for Microscope-Independent Microsurgery,” Trans. Vis. Sci. Tech. 7(6), 1 (2018). [CrossRef]  

45. W. Draxinger, Y. Miura, C. Grill, T. Pfeiffer, and R. Huber, “A real-time video-rate 4D MHz-OCT microscope with high definition and low latency virtual reality display,” in Clinical and Preclinical Optical Diagnostics II (Optica Publishing Group, 2019), paper 11078_1.

46. J. Weiss, U. Eck, M. A. Nasseri, M. Maier, A. Eslami, and N. Navab, “Layer-Aware iOCT Volume Rendering for Retinal Surgery,” in Eurographics Workshop on Visual Computing for Biomedicine (2019).

47. J. Weiss, M. Sommersperger, A. Nasseri, A. Eslami, U. Eck, and N. Navab, “Processing-Aware Real-Time Rendering for Optimized Tissue Visualization in Intraoperative 4D OCT,” Med. Image Comput. Comput. Assist. Interv. 12265, 267–276 (2020). [CrossRef]  

48. M. T. El-Haddad and Y. K. Tao, “Advances in intraoperative optical coherence tomography for surgical guidance,” Curr. Opin. Biomed. Eng. 3, 37–48 (2017). [CrossRef]  

49. C. Posarelli, F. Sartini, G. Casini, A. Passani, M. D. Toro, G. Vella, and M. Figus, “What is the impact of intraoperative microscope-integrated OCT in ophthalmic surgery? Relevant applications and outcomes. A systematic review,” J. Clin. Med. 9(6), 1682 (2020). [CrossRef]  

50. M. Draelos, “Vortex - An open-source library for building real-time OCT engines in C++ or Python,” (2022).

51. A. Appel, “Some techniques for shading machine renderings of solids,” in Proceedings of the April 30–May 2, 1968, Spring Joint Computer Conference, AFIPS ’68 (Spring) (Association for Computing Machinery, 1968), pp. 37–45.

52. B. T. Phong, “Illumination for computer generated pictures,” Commun. ACM 18(6), 311–317 (1975). [CrossRef]  

53. R. Fosner, Real-Time Shader Programming (Morgan Kaufmann, 2003).

54. Y. Li, G. Gregori, R. W. Knighton, B. J. Lujan, and P. J. Rosenfeld, “Registration of OCT fundus images with color fundus photographs based on blood vessel ridges,” Opt. Express 19(1), 7–16 (2011). [CrossRef]  

55. N. Padmasini and R. Umamaheswari, “Detection of neovascularisation using K-means clustering through registration of peripapillary OCT and fundus retinal images,” in 2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC) (2016), pp. 1–4.

56. M. S. Miri, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal registration of SD-OCT volumes and fundus photographs using histograms of oriented gradients,” Biomed. Opt. Express 7(12), 5252–5267 (2016). [CrossRef]  

57. Y. Huang, R. Asaria, D. Stoyanov, M. Sarunic, and S. Bano, “PseudoSegRT: efficient pseudo-labelling for intraoperative OCT segmentation,” Int. J. Comput. Assist. Radiol. Surg. 2023, 2928 (2023). [CrossRef]  

58. Q. Li, S. Li, Z. He, H. Guan, R. Chen, Y. Xu, T. Wang, S. Qi, J. Mei, and W. Wang, “DeepRetina: Layer Segmentation of Retina in OCT Images Using Deep Learning,” Trans. Vis. Sci. Tech. 9(2), 61 (2020). [CrossRef]  

59. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef]  

60. S. Borkovkina, A. Camino, W. Janpongsri, M. V. Sarunic, and Y. Jian, “Real-time retinal layer segmentation of OCT volumes with GPU accelerated inferencing using a compressed, low-latency neural network,” Biomed. Opt. Express 11(7), 3968 (2020). [CrossRef]  

61. S. Dehghani, M. Sommersperger, P. Zhang, A. Martin-Gomez, B. Busam, P. Gehlbach, N. Navab, M. A. Nasseri, and I. Iordachita, “Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing,” arXiv, arXiv:2301.07204 (2023). [CrossRef]  

62. M. Sommersperger, J. Weiss, M. A. Nasseri, P. Gehlbach, I. Iordachita, and N. Navab, “Real-time tool to layer distance estimation for robotic subretinal injection using intraoperative 4D OCT,” Biomed. Opt. Express 12(2), 1085–1104 (2021). [CrossRef]  

63. M. Gende, J. de Moura, J. Novo, and M. Ortega, “End-to-end multi-task learning approaches for the joint epiretinal membrane segmentation and screening in OCT images,” Comput. Med. Imaging Graph. 98, 102068 (2022). [CrossRef]  

64. M. Gende, J. De Moura, J. Novo, P. Charlón, and M. Ortega, “Automatic Segmentation and Intuitive Visualisation of the Epiretinal Membrane in 3D OCT Images Using Deep Convolutional Approaches,” IEEE Access 9, 75993–76004 (2021). [CrossRef]  

65. O. L. C. Mendes, A. R. Lucena, D. R. Lucena, T. S. Cavalcante, and A. R. D. Alexandria, “Automatic Segmentation of Macular Holes in Optical Coherence Tomography Images: A review,” AIS 1(1), 163–185 (2020). [CrossRef]  

66. A. Stankiewicz, T. Marciniak, A. Dabrowski, M. Stopa, P. Rakowicz, and E. Marciniak, “Novel full-automatic approach for segmentation of epiretinal membrane from 3D OCT images,” in 2017 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) (2017), pp. 100–105.

67. International Electrotechnical Commission, “IEC 60825-1:2014 (Ed. 3.0): Safety of laser products – Part 1: Equipment classification and requirements,” (2014).

68. S. G. Hart and L. E. Staveland, “Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research,” in Advances in Psychology, P. A. Hancock and N. Meshkati, eds., Human Mental Workload (North-Holland, 1988), Vol. 52, pp. 139–183. [CrossRef]  

69. J. V. Rossi, D. Verma, G. Y. Fujii, R. R. Lakhanpal, S. L. Wu, M. S. Humayun, and E. J. De Juan, “Virtual vitreoretinal surgical simulator as a training tool,” Retina 24(2), 231–236 (2004). [CrossRef]  

70. S. L. Cremers, A. N. Lora, and Z. K. Ferrufino-Ponce, “Global Rating Assessment of Skills in Intraocular Surgery (GRASIS),” Ophthalmology 112(10), 1655–1660 (2005). [CrossRef]  

71. M. R. Wilson, J. M. Poolton, N. Malhotra, K. Ngo, E. Bright, and R. S. W. Masters, “Development and Validation of a Surgical Workload Measure: The Surgery Task Load Index (SURG-TLX),” World J. Surg. 35(9), 1961–1969 (2011). [CrossRef]  

Supplementary Material (3)

Visualization 1: Real-time human retinal intrasurgical imaging of suction by a soft-tip tool visualized via (a) digital ophthalmic surgical microscopy, (b) 3D-OCT data rendered with Viehland's method, (c) retinal thickness from segmented 3D-OCT data, and (d) fusion of 2D color microscopy with 3D-OCT data.
Visualization 2: Real-time human retinal intrasurgical imaging of a tissue scrape performed by a finesse loop tool visualized via (a) digital ophthalmic surgical microscopy, (b) 3D-OCT data rendered with Viehland's method, (c) retinal thickness from segmented 3D-OCT data, and (d) fusion of 2D color microscopy with 3D-OCT data.
Visualization 3: Real-time human retinal intrasurgical imaging of a tissue grasp performed by a forceps tool visualized via (a) digital ophthalmic surgical microscopy, (b) 3D-OCT data rendered with Viehland's method, (c) retinal thickness from segmented 3D-OCT data, and (d) fusion of 2D color microscopy with 3D-OCT data.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (9)

Fig. 1. 4D-MIOCT system used for collecting intraoperative volumetric OCT and color microscopy video. (a) Block diagram of the imaging system; PD = photodetector, FBG = fiber Bragg grating, PC = polarization controller, 3D TV = 3-dimensional television. (b) Head of the imaging system containing the microscope head (blue) and OCT scanner (red). (c) Operating room photo of the deployed system during a retinal surgery case, including the microscope head (blue), OCT scanner (red), and Ngenuity 3D TV (green). Note that in current practice, all the visual data on the screen (3D OCT render, en face OCT, OCT B-scan, and color microscopy) exist in separate spatial locations, requiring the surgeon to mentally integrate these disparate representations of the surgical field.
Fig. 2. Feature segmentation pipeline illustrated using recorded data from a soft-tip surgical tool in proximity to the retina. 3D OCT data (a) with a selected cross-sectional region indicated by the dashed blue frame. This region was then filtered (b). The first surface in depth, consisting of the retinal surface, was segmented from the thresholded result (c–f), using the axial gradient (d) to detect the surface boundary (e, dotted green line). This boundary was then used in the detection of the second surface from the axial gradient of the data (g). After detecting the rear of the first surface (h, dotted yellow line), the second surface was detected from the axial gradient below the rear of the first surface (i, dotted red line). Both detected surfaces were then median filtered and masked for the vignetted peripheral portions to yield the final depth map surfaces for use in rendering, with the dashed blue lines representing the location of the cross-section (f, j).
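As a rough illustration of this style of segmentation, the following Python/NumPy sketch detects two surfaces per A-scan from the axial gradient of a thresholded volume and median-filters the resulting depth maps. The function and parameter names (find_surfaces, threshold, min_separation) are illustrative assumptions, not the authors' implementation.

    # Minimal two-surface segmentation sketch (assumed names and parameters).
    import numpy as np
    from scipy.ndimage import median_filter

    def find_surfaces(volume, threshold=0.1, min_separation=5):
        """Return per-A-scan depth indices of the first and second surfaces.

        volume: NumPy array shaped (x, y, z), z being the axial (depth) axis.
        """
        # Suppress noise below the threshold before edge detection.
        vol = np.where(volume >= threshold, volume, 0.0)
        # Axial gradient highlights bright-over-dark boundaries along depth.
        grad = np.diff(vol, axis=2)
        # First surface: first positive gradient in each A-scan.
        first = np.argmax(grad > 0, axis=2)
        depth = np.arange(grad.shape[2])[None, None, :]
        # Rear of the first surface: first negative gradient below its front.
        below_first = depth > first[:, :, None]
        rear = np.argmax((grad < 0) & below_first, axis=2)
        # Second surface: next positive gradient at least min_separation deeper.
        below_rear = depth > (rear + min_separation)[:, :, None]
        second = np.argmax((grad > 0) & below_rear, axis=2)
        # Median-filter both depth maps to reject segmentation outliers.
        return median_filter(first, size=3), median_filter(second, size=3)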
Fig. 3. Process flow diagram detailing the logic used to determine which shading was applied for each cast ray during fused image rendering from 4D-MIOCT data.
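The per-ray branching itself is only shown diagrammatically; a minimal sketch of one plausible reading, reconstructed from the shader roles described in Fig. 4, might look like the following. The branch structure and names are assumptions, not the paper's code.

    # Hedged sketch of a per-ray shading decision (assumed branch structure).
    def shade_ray(hit_first, hit_second, is_tool,
                  surface_shade, subsurface_shade,
                  background=(0.0, 0.0, 0.0)):
        # Rays striking the segmented tissue surface get the translucent
        # Fresnel + Phong surface shader.
        if hit_first and not is_tool:
            return surface_shade
        # Rays striking the tool or the second surface get the subsurface
        # shader (microscope color modulated by mean OCT intensity).
        if hit_second or is_tool:
            return subsurface_shade
        # Rays that miss all segmented surfaces show the background.
        return background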
Fig. 4. Surface shading pipeline; the sample was a soft-tip tool contacting the retina. The first surface (a) was intercepted by the raycaster (b), and the Fresnel (c) and specular Phong (d) shaders were computed and summed for each ray intercept (e). The second surface and the portion of the first belonging to the tool (f) were likewise intercepted by the raycaster (g) and shaded by the average subsurface OCT intensity (h) and color microscopy image (i) to produce the subsurface shader (j). The combination of (e) and (j) then yields the fused render (k). In the fused OCT and color microscopy render (k), the light blue is the soft-tip tool, the nearly transparent surface is the ILM, and the layer below that is the RPE and deeper structures (choroid); the dark linear area under the tool is due to tool shadowing from the OCT channel. This visual can be further augmented with the rendering of the remaining periphery of the microscopy image (l) or with the upper retinal surface color-shaded based on the retinal thickness (m), as computed from the axial difference of the segmented surfaces (a, f).
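A minimal sketch of how these shader outputs could be combined per pixel, assuming precomputed Fresnel, Phong, mean-OCT-intensity, and microscope-color terms; the additive blend and the surface_alpha weight are illustrative assumptions rather than the paper's compositing rule.

    # Hedged per-pixel fusion sketch (assumed blend and weight).
    import numpy as np

    def fuse_pixel(fresnel, phong, oct_mean, micro_rgb, surface_alpha=0.4):
        # Subsurface shader (j): microscope color modulated by mean OCT intensity.
        subsurface = micro_rgb * oct_mean
        # Surface shader (e): Fresnel rim term plus specular Phong highlight.
        surface = np.clip(fresnel + phong, 0.0, 1.0)
        # Fused render (k): translucent surface contribution over the subsurface.
        return np.clip(subsurface + surface_alpha * surface, 0.0, 1.0)

    # Example: a bright highlight over a reddish retinal pixel.
    print(fuse_pixel(0.2, 0.5, 0.8, np.array([0.9, 0.4, 0.4])))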
Fig. 5. (a) Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach (columns 1, 2), contact (column 3), and withdrawal (column 4) of a soft-tip tool at the retina (Visualization 1). 2-D color microscopy (row 1) and 4D-OCT (row 2) are fused into a single visualization without (row 3) and with (row 4) the peripheral color image. Visualized features of interest include the fovea (red arrow), retinal vessels (yellow arrow), choroidal vessels (green arrow), and tool-tissue contact (blue arrow). (b) 2-D color image (left) and en face fused render without the translucent surface shader (right) for comparison of color feature translation in the fused render, notably instrument color (purple arrow) and retinal vessel (yellow arrow) contrast. (c) OCT volume render (left) and translucent surface shader (right) for comparison of 3-D surface feature translation in the fused render, notably the topology of the foveal depression (red arrow).
Fig. 6. (a) Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach, contact, scrape, and withdrawal (columns 1-4, respectively) of a finesse loop at the retina (Visualization 2). 2-D color microscopy (row 1) and 4D-OCT (row 2) are fused into a single visualization with the peripheral color image (row 3). Visualized features of interest include tissue blanching (green arrow), persisting retinal deformation (red arrow), and vessels (yellow arrow). (b) 2-D color image (left) and en face fused render without the translucent surface shader (right) for comparison of color feature translation in the fused render, notably retinal vessel (black arrow) contrast. (c) OCT volume render (left) and translucent surface shader (right) for comparison of 3-D surface feature translation in the fused render, notably the topology of the indentation of the tissue surface by the instrument (white arrow).
Fig. 7. (a) Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/3 × 3 mm) for the approach, contact, grasp, and withdrawal (columns 1-4, respectively) of forceps at the retina (Visualization 3). 2-D color microscopy (row 1) and 4D-OCT (row 2) are fused into a single visualization with the peripheral color image (row 3). (b) 2-D color image (left) and en face fused render without the translucent surface shader (right) for comparison of color feature translation in the fused render, notably retinal vessel (black arrow) contrast. (c) OCT volume render (left) and translucent surface shader (right) for comparison of 3-D surface feature translation in the fused render, notably the topology of the raised portions of the tissue due to subsurface vessels (white arrow).
Fig. 8. Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach (columns 1, 2), contact (column 3), and withdrawal (column 4) of a soft-tip tool at the retina (Visualization 1). Segmented surfaces from 4D-OCT data (row 1) are used to compute retinal thickness for use in surface shading (row 2).
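A small sketch of the thickness shading used here, assuming the two segmented depth maps from Fig. 2 and an axial pixel size; axial_um_per_px, max_um, and the viridis colormap are assumptions for illustration, not the paper's calibration or color scale.

    # Hedged retinal-thickness shading sketch (assumed scale and colormap).
    import numpy as np
    import matplotlib.cm as cm

    def thickness_rgb(first_surface, second_surface,
                      axial_um_per_px=4.0, max_um=600.0):
        # Thickness is the axial index difference between the segmented
        # surfaces, scaled by the (assumed) axial pixel size in microns.
        thickness_um = (second_surface - first_surface) * axial_um_per_px
        # Normalize and map through a colormap for use as a surface shade.
        return cm.viridis(np.clip(thickness_um / max_um, 0.0, 1.0))[..., :3]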
Fig. 9. Time series intrasurgical 4D-MIOCT image data (size 340 × 80 A-scans/5 × 5 mm) for the approach (column 1), contact (column 2), scrape (column 3), and withdrawal (column 4) of a finesse loop tool at the retina (Visualization 2). Segmented surfaces from 4D-OCT data (row 1) are used to compute retinal thickness for use in surface shading (row 2). The shader enhances visualization of retinal deformation dynamics indicative of potential damage (purple arrows).

Tables (1)

Table 1. Fusion Pipeline Performance Breakdown (n = 8 OCT volumes)

Equations (3)

Equations on this page are rendered with MathJax.

$$I_F = A_F \left| \mathbf{N} \cdot \mathbf{V} \right|^{n}$$
$$I_P = \sum_{i=1}^{N} A_{p_i} \left( \mathbf{R}_i \cdot \mathbf{V} \right)^{n_i}$$
$$\mathbf{R} = 2\left(\mathbf{N} \cdot \mathbf{L}\right)\mathbf{N} - \mathbf{L}$$
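A brief numeric sanity check of the three shading equations, assuming N, V, and L are unit vectors (surface normal, view, and light directions), A and n are shader amplitudes and exponents, and R is the reflection of L about N; this is illustrative Python, not code from the paper.

    # Evaluate the Fresnel, Phong, and reflection formulas for sample vectors.
    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    N = normalize(np.array([0.0, 0.0, 1.0]))   # surface normal
    V = normalize(np.array([0.3, 0.0, 1.0]))   # view direction
    L = normalize(np.array([-0.5, 0.2, 1.0]))  # light direction

    # Reflection vector: R = 2(N.L)N - L
    R = 2.0 * np.dot(N, L) * N - L

    A_F, n_F = 1.0, 2.0
    I_F = A_F * abs(np.dot(N, V)) ** n_F       # Fresnel-style surface term

    A_p, n_p = 1.0, 16.0
    I_P = A_p * max(np.dot(R, V), 0.0) ** n_p  # Phong specular term (one light)

    print(f"I_F = {I_F:.3f}, I_P = {I_P:.3f}")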