Microscope integrated optical coherence tomography system combined with augmented reality

Open Access

Abstract

One of the disadvantages of microscope-integrated optical coherence tomography (MI-OCT) systems is that medical images acquired via different modalities are usually displayed independently. Hence, surgeons have to match two-dimensional and three-dimensional images of the same operative region subjectively. In this paper, we propose a simple registration method to overcome this problem by using guided laser points. This method combines augmented reality with an existing MI-OCT system. The basis of our idea is to introduce a guiding laser into the system, which allows us to identify fiducials in microscopic images. First, the applied voltages of the scanning galvanometer mirror are used to calculate the fiducials’ coordinates in an OCT model. After gathering the coordinates of corresponding points, the homography matrix and camera parameters are used to superimpose a reconstructed model on microscopic images. After performing experiments with artificial and animal eyes, we successfully obtain two-dimensional microscopic images of scanning regions with depth information. Moreover, the registration error is 0.04 mm, which is within the limits of medical and surgical errors. Our proposed method could have many potential applications in ophthalmic procedures.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT), as a non-invasive and high-resolution imaging modality, has become a popular technology for intraoperative guidance in ophthalmology [1]. The integration of OCT systems into conventional surgical microscopes provides real-time cross-sectional and volumetric images for surgeons, which effectively simplifies ophthalmic surgery and increases its precision [2–4]. In 2010, the use of a spectral-domain microscope-integrated OCT (SD-MIOCT) prototype in the operating room was first described by Tao et al. [5], who demonstrated the remarkable performance of the SD-MIOCT system during retinal and macular surgery. Since then, MI-OCT systems have seen dramatic developments [6–10]. The information provided by OCT during surgery enables precise lesion targeting and trauma minimization. However, the registration of anatomical structures and OCT images remains a non-negligible obstacle restricting the widespread application of MI-OCT systems. Subjective judgment and rich surgical experience are required to estimate the spatial relationship between microscopic images and OCT images preoperatively, intraoperatively, and postoperatively, which adds complexity and steepens the learning curve. There remains a need for an efficient method that can register images taken with different modalities. Thus, considerable effort has been invested in the registration of multi-modality images to provide fused images for surgeons [11–13].

Augmented reality (AR) is an alternative approach for providing registered images in surgery. AR precisely displays computer-generated images, such as CT, MRI, and OCT, on the patient’s anatomical structures, so the drawbacks caused by unregistered images can be resolved by superimposing multi-modality images on the operative region. AR has been used in several studies and clinical operations over the past three decades. Vassallo et al. proposed a novel AR-based method to classify blood vessels: color-coded spectral entropy and signal power maps were fused with the microscopic view to help distinguish blood vessels in cerebrovascular surgery [14]. Umebayashi et al. described a navigation system that merged the real-time microscopic view with a keyhole tunnel model to identify the appropriate drilling direction; the system simplifies the procedure and increases the surgical success rate during anterior and posterior cervical laminectomy [15]. Pratt et al. used the HoloLens to project computed tomography angiography images into surgical views, precisely locating perforating vessels during vascular pedicle flap surgery; the HoloLens-assisted system significantly reduced procedure time and improved perioperative outcomes [16]. At present, the application of AR in MI-OCT systems is rare, and the majority of studies have focused on viewing microscopic images and 2D/3D OCT images simultaneously. Ehlers et al. integrated a three-dimensional digital visualization system into an MI-OCT system; B-scan images and three-dimensional video were combined on a high-definition 3D monitor, revealing subtle retinal alterations during small-gauge vitrectomy procedures [11]. Lee et al. overlaid the cross-sectional OCT image on the microscope view using a beam projector, precisely presenting the status of mastoid inflammation removal and detailed information about the graft during tympanomastoidectomy [12]. Seider et al. used a stereoscopic heads-up display to feed B-scan images and volume models of the retina and intraocular instruments directly to surgeons during macular surgery [13]. However, although displaying 2D/3D OCT and microscopic images simultaneously has been demonstrated, little attention has been paid to image registration.

In this paper, we present a modified MI-OCT system combined with AR technology. The 3D model reconstructed from OCT scanning data is superimposed precisely on 2D microscopic images. Since the fused images contain anatomical structures from the microscope and spatial information from OCT, effectively making the operative region transparent to the surgeon's eye, the registration problem mentioned above is solved sufficiently. The intuitive spatial information enhances preoperative planning, improves postoperative outcomes, and reduces operative time. Meanwhile, a fiducial-based registration method was demonstrated. Feature-based registration, intensity-based registration, segmentation-based registration, and fluoroscopy-based registration are the four traditional medical image registration approaches [17]. Feature-based registration is widely used in navigation systems for its high accuracy and robustness [18]. Among the many features that can be used for registration, the fiducial-based method is the most effective and simple approach: compared with surface-based and curve-based methods, it offers high accuracy and does not require a good initial pre-registration [19,20]. Therefore, fiducial-based registration was chosen for intraoperative navigation with our MI-OCT system. However, in ophthalmic surgery, it is complicated and dangerous to adhere or implant markers in the eyeballs. Therefore, to generate fiducials, we added a guiding laser to our MI-OCT system. By sharing the optical path with the MI-OCT system, the guiding laser marks fiducials visibly and accurately. Once the fiducials are obtained from the microscopic images and OCT images, the correspondence between points is easily established, which realizes the registration of images from the two modalities.

2. Experiment setup

Figure 1 shows our custom-built swept-source microscope-integrated OCT (SS-MIOCT) device. It is composed of two units: a surgical microscope unit and an optical coherence tomography unit. The surgical microscope unit captures and outputs 2D or 3D images of the operative region; it consists of a binocular camera, a surgical microscope, an illumination source, and an optical zoom unit. During long surgeries, the conventional design of a surgical microscope places great strain on the surgeon’s neck, which negatively affects surgery implementation. To reduce this strain, we apply a binocular camera (MCC-500MDC, SONY, Japan) with a short time delay (less than 100 ms), which outputs real-time microscopic images to additional display devices. Simultaneously, a companion 3D monitor (LMD-4251TD, SONY, Japan) displays the binocular microscopic images, which significantly enhances depth perception when viewed with circularly polarized 3D glasses [21]. Detailed parameters of the binocular camera are given in Table 1. The light source illuminates the operation area, and surgeons can adjust the brightness according to the field of vision. Since the whole optical system is integrated with the robotic arm, the illumination unit lights the operative region more efficiently than conventional head-mounted light sources [22]. The magnification of the microscope, controlled by a foot pedal, ranges from 4.5× to 27×, corresponding to a field of view (FOV) from 45 mm to 8 mm.

Fig. 1. Optical schematic and photograph of the SS-MIOCT showing surgical microscope (gold), SS-OCT (red), and illumination (blue). BS: beam splitter, MG: magnification, DM: dichroic mirror, OL: objective lens, M: mirror, Gxy: galvanometer scanner, CL: collimator, PC: polarization controller, WDM: wavelength division multiplexer, CIR: circulator, CP: coupler, BD: balanced detector.


Table 1. MCC-500MDC specifications

The OCT unit is composed of a swept source, a guiding source, scanning galvanometer mirrors, and a balanced detector. The swept source (HSL-20, Santec, Japan) is centered at a wavelength of 1310 nm with a bandwidth of 90 nm and an A-line rate of 100 kHz. A super-luminescent diode (ADR-1805, SFOLT, China), with a wavelength of 640 nm, is combined into the sample arm of the OCT and used as a guiding source to mark the imaging region. Beam direction and lateral scanning are controlled by an X-Y galvanometer (GVS002, Thorlabs, USA) and a waveform generation card (USB-6356, National Instruments). The optical interference signal is detected by a balanced detector (PDB480C-AC, Thorlabs, USA), then acquired and processed by a data acquisition device (ATS9360, AlazarTech) and a high-performance computer (CPU: E5-2620 v4 @ 2.1 GHz; RAM: 32 GB; graphics card: GTX 1080 Ti). The axial resolution and imaging range of the SS-OCT are 9.8 μm and 6 mm, respectively, and the maximum lateral range is 14.3 mm.

The back-scattered OCT light from the operative region returns along the original path to realize OCT 3D imaging. The reflected guiding light from the same region passes through the dichroic mirror. Part of it returns to the optical coherence tomography unit while the rest enters the surgical microscope unit to target fiducials.

3. Methodology

The goal of superimposing OCT images on microscopic images is to integrate invisible OCT spatial information into anatomical structures. This generally requires three steps: fiducial location, camera calibration, and image fusion. Each step is described in detail in the following sub-sections, and the whole registration procedure is illustrated in the flowchart of Fig. 2.

Fig. 2. Flowchart of registration.

3.1 Fiducial location

The purpose of the fiducial location step is to mark the coordinates of each corresponding fiducial. It was implemented in both the two-dimensional microscopic images and the three-dimensional OCT model.

Fiducials in the two-dimensional microscopic images were located by an extraction algorithm. In this paper, an SLD laser was used to mark the two-dimensional fiducials. With the help of the waveform generation card and its supporting software (Measurement & Automation Explorer, National Instruments), the fiducials’ locations can be controlled by setting the scanning voltages of the X-Y galvanometer. After being captured by the medical camera, the two-dimensional fiducials are extracted using the HSV color space and the Circle Hough Transform (CHT) [23]. Meanwhile, ideal coordinates, calculated from the voltages of the galvanometer scanning mirrors via Eq. (1), were used to evaluate the performance of the extraction method.

$$\begin{aligned} x &= W/2 - {v_1} \times {mmpv}/{mmpd}\\ y &= H/2 - {v_2} \times {mmpv}/{mmpd} \end{aligned}$$

In Eq. (1), H and W are the numbers of pixels in the vertical and horizontal directions, respectively. Since the scanning origin is set at the center of the image, these offsets must be added to the ideal fiducial coordinates. Additionally, v1 and v2 are the scanning voltages of the two galvanometer axes, mmpv is the displacement per volt, in mm/V, and mmpd is the distance between adjacent pixels, in mm.
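To make these two operations concrete, the following Python/OpenCV sketch computes the ideal fiducial coordinates of Eq. (1) and extracts the guiding-laser spot with HSV thresholding and the Circle Hough Transform. It is a minimal illustration rather than our implementation: the HSV bounds for the 640 nm red guiding laser and the Hough parameters are placeholder values that would need tuning to the actual camera.

```python
import cv2
import numpy as np

def ideal_fiducial(v1, v2, W, H, mmpv, mmpd):
    """Ideal pixel coordinates of a fiducial from the galvanometer
    voltages, per Eq. (1); the scan origin is the image center."""
    x = W / 2 - v1 * mmpv / mmpd
    y = H / 2 - v2 * mmpv / mmpd
    return x, y

def extract_laser_point(bgr):
    """Locate the red guiding-laser spot with HSV thresholding and the
    Circle Hough Transform; returns the spot center in pixels, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 in HSV, so two bands are combined.
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=10, minRadius=2, maxRadius=30)
    if circles is None:
        return None
    x, y, _r = circles[0][0]  # strongest candidate circle: center and radius
    return float(x), float(y)
```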

After the fiducials were located in the two-dimensional microscopic images, volume scan data were used to reconstruct the three-dimensional OCT model. For volume acquisition, the laser points were replaced by raster scanning lines. With the three-dimensional scanning data, the ray casting algorithm [24] was used to reconstruct the three-dimensional OCT model. The three-dimensional fiducials were located by voltage calculation and boundary extraction; the x and y coordinates were calculated by Eq. (2).

$$\begin{aligned} x &= ({v_x} + {v_{x_0}}) \times fxDirectionspeed\\ y &= ({v_y} + {v_{y_0}}) \times fyDirectionspeed \end{aligned}$$

In Eq. (2), x and y are the 2D coordinates of the fiducials, in pixels; vx and vy are the applied voltages of the galvanometer scanning mirrors; vx0 and vy0 are the initial voltages of the galvanometer scanning mirrors, which set the initial scanning position; and fxDirectionspeed and fyDirectionspeed are the displacements per volt in the x and y directions, respectively, in mm/V. A 2D OCT slice was selected by the y coordinate, and the anisotropic diffusion algorithm together with Canny edge detection provided the z coordinate.
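The depth lookup can be sketched as follows, assuming an 8-bit grayscale B-scan selected by the y coordinate. A Gaussian blur stands in for the anisotropic diffusion filter used in our pipeline, and the Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def surface_z(bscan_u8, x_px):
    """z (row) coordinate of the first tissue boundary along the A-line
    at column x_px of an 8-bit grayscale OCT B-scan."""
    # Denoise; a Gaussian blur stands in for anisotropic diffusion here.
    smoothed = cv2.GaussianBlur(bscan_u8, (5, 5), 0)
    edges = cv2.Canny(smoothed, 50, 150)
    rows = np.flatnonzero(edges[:, int(x_px)])  # edge pixels along depth
    return int(rows[0]) if rows.size else None  # topmost edge = surface
```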

3.2 Camera calibration

The camera calibration step was based on Zhang’s camera calibration method [25]. An OCT coordinate system was used in place of the global coordinate system. Because microscopic images are 2D, we assumed the model plane lay at z = Z in the OCT coordinate system, where Z is the z coordinate of the chosen OCT slice; to simplify the calculation, we set Z to zero. The relationship between fiducials in the 2D microscopic images and the 3D OCT model is given by Eq. (3).

$$s\begin{bmatrix} u\\ v\\ 1 \end{bmatrix} = A\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} X\\ Y\\ 0\\ 1 \end{bmatrix}$$

In Eq. (3), s is an arbitrary scale factor, and $[u\;v\;1]^T$ and $[X\;Y\;0\;1]^T$ denote the augmented vectors of the 2D and 3D fiducials, respectively. A is the camera intrinsic matrix, and $[R\;t]$ is the camera extrinsic matrix. At least four pairs of corresponding fiducials are needed for the calculation, and using more fiducials improves the registration accuracy. The relationship was expressed in the form of a homography matrix, which was estimated with the 8-point algorithm and the direct linear transformation (DLT). Singular value decomposition and Cholesky decomposition were then used to calculate the camera intrinsic and extrinsic parameters. To correct camera distortion, the Levenberg-Marquardt minimization method was used to optimize the parameters obtained from the homography and camera matrices.
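For reference, OpenCV's planar calibration follows the same Zhang-style pipeline: per-view homographies are estimated internally by DLT, the intrinsics are recovered in closed form, and all parameters, including distortion, are refined by Levenberg-Marquardt. The sketch below assumes the fiducial pairs are stored as per-view NumPy arrays; it is an illustration under those assumptions, not our implementation.

```python
import cv2
import numpy as np

def calibrate(obj_pts_views, img_pts_views, image_size):
    """Zhang-style calibration from coplanar fiducials (z = 0 OCT plane).
    obj_pts_views: list of (N, 3) float32 arrays of 3D fiducials per view;
    img_pts_views: list of (N, 2) float32 arrays of 2D fiducials per view;
    N >= 4 per view, with several views for a stable solution."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts_views, img_pts_views, image_size, None, None)
    return K, dist, rvecs, tvecs

# Single-view OCT-plane -> image homography via DLT (for the overlay itself):
# H, _ = cv2.findHomography(obj_pts_views[0][:, :2], img_pts_views[0])
```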

3.3 Image fusion

The image fusion stage combined the microscopic image with the OCT z-slice image to produce the final image. The camera parameters given by the calibration step were assigned to an OpenGL camera, placing the two kinds of images in the same coordinate system. In this paper, the two images were superimposed directly after an affine transformation, with the microscopic image as the reference image and the OCT slice as the floating image; the transparency of the floating image was adjusted according to the importance of its information.
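Our system performs this step by assigning the calibrated parameters to an OpenGL camera; a purely 2D equivalent can be sketched with OpenCV, assuming the plane-to-image homography H from the calibration step is available. The floating OCT slice is warped into the microscope frame and alpha-blended over the reference image.

```python
import cv2

def fuse(microscope_bgr, oct_slice_bgr, H, alpha=0.4):
    """Warp the floating OCT slice into the reference microscope frame
    with homography H, then blend; alpha is the adjustable transparency."""
    h, w = microscope_bgr.shape[:2]
    warped = cv2.warpPerspective(oct_slice_bgr, H, (w, h))
    return cv2.addWeighted(microscope_bgr, 1.0 - alpha, warped, alpha, 0)
```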

4. Results and discussion

To examine the performance of the registration algorithm in the MI-OCT system, we tested the proposed method by imaging artificial eyes and animal eyes both in vitro and in vivo. First, laser points were extracted from the microscopic images. As shown in Fig. 3, our algorithm located laser points on various objects, including an artificial eye, a finger, and pig eyes in vitro.

Fig. 3. Extraction in microscopic images. (a)–(c) Microscopic images before extraction. (d)–(f) Laser points after extraction. (g)–(i) Microscopic images after extraction.

The performance of the extraction is shown in Figs. 3(g), 3(h), and 3(i). Because the laser beam quality was limited, an extended laser spot appears instead of an ideal laser point. The original laser spot is marked by a green point, and the boundary of the spot by a blue circle. As Fig. 3 shows, a non-negligible red spot is visible in the images. This spot was generated by our light path and had a negative influence on the extraction; the illumination also affected the extraction. To improve the extraction performance, the extraction area was limited to the OCT scanning area, marked by a yellow square covering 7.168 mm × 7.168 mm. The extracted laser point coordinates were then compared with the ideal coordinates calculated by Eq. (1). The residuals of the extracted laser points are shown in Fig. 4.

Fig. 4. Residuals of extracted laser points.

As can be seen from Fig. 4, the average residual was around 0.23 mm over 40 images. A few residuals over 0.25 mm were caused by the height variation of the eyeball’s surface. During data collection, the scanning origin was set at the center of the image to acquire a larger scanning area; consequently, laser points located on the edge of the eyeball suffered from image distortion, which introduced larger extraction errors. Fortunately, surgeons generally place the operative region at the center of the scan, which eliminates this primary source of error. In addition, numerical errors and beam quality also affect the extraction. Eq. (2) and edge detection were then used to calculate the 3D coordinates. As mentioned earlier, at least four pairs of 2D and 3D coordinates are needed to obtain the homography matrix and camera parameters, and at least four pairs were used in the camera calibration step of our experiments.
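The residual statistics of Fig. 4 amount to a per-point comparison of the two coordinate sets; a minimal sketch (with mmpd as defined for Eq. (1)) could look like this:

```python
import numpy as np

def extraction_residuals(extracted_px, ideal_px, mmpd):
    """Euclidean residuals (mm) between extracted and ideal fiducial
    pixel coordinates; mmpd converts pixel distance to millimeters."""
    d = np.asarray(extracted_px, float) - np.asarray(ideal_px, float)
    return np.hypot(d[:, 0], d[:, 1]) * mmpd

# e.g., residuals = extraction_residuals(pts_cht, pts_eq1, mmpd)
# residuals.mean() was about 0.23 mm over our 40 test images.
```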

The results after calibration are illustrated in Fig. 5 and Fig. 6. Since microscopic images are 2D whereas the OCT model is a 3D volumetric image, several methods have been proposed to bring them into the same dimensionality. Typical methods include rendering microscopic images as texture, reconstructing a 3D model from the microscopic images, and placing the 3D model in the background of the microscopic images. In this paper, we captured 2D OCT images carrying different depth information. After the camera parameters were updated in the OpenGL camera, the 2D OCT images were superimposed on the microscopic images.

Fig. 5. Artificial eye registration results of different depths. (a)–(c) Original microscopic images. (d)–(f) OCT slices from different depths: (d) slice = 218; (e) slice = 245; (f) slice = 264; (g)–(i) Registration results.

Fig. 6. Rabbit eye registration of different depths. (a)–(c) Original microscopic images. (d)–(f) OCT volume clipping images from different depths: (d) depth = 2.996 mm; (e) depth = 5.012 mm; (f) depth = 10.892 mm; (g)–(i) Registration results.

Figure 5 shows the preliminary registration results when different OCT slices were superimposed on the microscopic images. The red square marks our scanning area; in this experiment it was 14.336 mm × 14.336 mm. Figures 5(g), 5(h), and 5(i) present the registration results for slices 218, 245, and 264, respectively. The material distribution of the artificial eye is shown clearly, and different structures can be identified in slices at different depths. Experiments on animal eyes in vivo were subsequently performed.

Figure 6 shows the registration results for rabbit eyes in vivo, with OCT volume clipping images from different depths superimposed on the microscopic images. Figures 6(g), 6(h), and 6(i) show the registration results at depths of 2.996 mm, 5.012 mm, and 10.892 mm, respectively. Anatomical structures and tissues from different depths are visible in the registration results, and the boundaries of the eyelid, pupil, and iris match remarkably well. The registration results were relatively accurate owing to the camera parameter assignment and optimization performed at the camera calibration step. The OCT volumes are rendered at 512 × 512 pixels (10 mm in width and 6 mm in depth) at 60 fps. It is important to note that motion artifacts are a crucial problem during the volume imaging process. Since our registration is currently non-real-time, only a single OCT volume is needed for each registration, so the effects of motion artifacts are minimized. Moreover, the eyeballs were fixed by surgeons during the in vivo experiments, which effectively reduces breathing jitter and eyeball tremor. In subsequent real-time registration, motion artifacts will become a non-negligible factor, and we will explore methods to reduce them in future studies. Furthermore, the reflection and scattering properties of different anatomical structures and tissues have a negative effect on registration accuracy: OCT volumetric imaging is strongly affected by reflection and multiple scattering in the depth direction [26], which decreases the registration accuracy. Although efficient methods such as ray tracing [27–29] have been proposed to minimize this influence, we decided to explore an appropriate strategy for intraoperative navigation; the relevant work will be presented in our future research.

Afterward, a precision analysis was carried out by registering a calibration board. It is hard to estimate the registration error in in vivo and in vitro experiments: the primary subjects of our experiments are eyeballs, for which corresponding features between microscopic images and OCT images are scarce [30], and whose complex, non-rigid structure makes it dangerous to adhere or implant markers [31]. The calibration board was chosen for the precision analysis because of its rich features, simple extraction, and low cost. Grid corner offsets show the residual error of the registration.

The OCT slice and registration results are shown in Figs. 7(b) and 7(c). The calibration board was a 10 × 7 grid, and each square in the grid was 2 mm × 2 mm. The blue square in Fig. 7(c) marks the OCT scanning area, which covers 7.168 mm × 7.168 mm. The offsets of the grid corners are obvious and sufficient to evaluate the performance of the registration result.

Fig. 7. Registration results of the calibration board. (a) Microscopic image. (b) OCT slice. (c) Registration result.

Ten images of the calibration board, collected from different angles, were used to perform the calibration. After offsets were calculated over 200 grid corners, the distribution of the standard error was obtained, as shown in Fig. 8. The average corner offset was 0.04 mm, which meets the medical standard for surgical error (less than 0.1 mm). Therefore, the registration method described above can satisfy the initial needs of surgical applications.
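One way such corner offsets could be computed is sketched below, assuming the board corners in the OCT slice are known in pixel coordinates and the plane-to-image homography H from calibration is at hand; a 10 × 7 square grid exposes a 9 × 6 pattern of inner corners, and both corner sets are assumed to share the same ordering.

```python
import cv2
import numpy as np

def corner_offsets(oct_corners_px, microscope_bgr, H,
                   pattern=(9, 6), mm_per_px=1.0):
    """Per-corner registration offsets (mm): grid corners mapped from the
    OCT slice through H versus corners detected in the microscopic image."""
    gray = cv2.cvtColor(microscope_bgr, cv2.COLOR_BGR2GRAY)
    found, detected = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    projected = cv2.perspectiveTransform(
        np.asarray(oct_corners_px, np.float32).reshape(-1, 1, 2), H)
    d = projected.reshape(-1, 2) - detected.reshape(-1, 2)
    return np.hypot(d[:, 0], d[:, 1]) * mm_per_px  # offsets in mm
```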

Fig. 8. Standard error of registration.

In addition, the registration accuracy decreases as the residuals of the extracted laser points increase: as shown in Fig. 9, the corner offset grows with the extraction residual.

Fig. 9. Relationship between extraction residual and corner offset.

In this paper, we applied an augmented reality method to an MI-OCT system. By adding a guiding light to the system, a highly accurate registration method was developed; it makes it possible to obtain fiducials on the eyes or skin surfaces. The homography matrix and the parameter settings of the virtual camera were used to register 2D microscopic images with OCT slices extracted from the 3D model. During the experiments, we verified that structural information from different depths in the scanning area can be superimposed simultaneously on the 2D microscopic image. This provides surgeons with multi-dimensional information about the operative region, thus greatly benefiting preoperative planning and postoperative evaluation. In addition, the proposed method shows tissue structures and the positions of the surgical instruments in real time, so surgeons have access to real-time location information from either the microscope imaging or an external monitor. The average error in the accuracy analysis experiment was 0.04 mm, indicating that the registration method developed in this paper satisfies existing surgical standards.

The proposed method can register 2D microscopic images and OCT slice images (with depth information) under static conditions, for example during temporary pauses before, during, and after surgery. For dynamic registration, the static registration steps can be iterated.

5. Conclusion

To summarize, we added a guiding laser to an existing custom-built SS-MIOCT system and obtained noticeably improved registration results. By utilizing a homography matrix algorithm and setting the virtual camera with different parameters, images obtained with two different modalities, namely 2D microscopic images and 2D OCT slices, were fused precisely under static conditions. The average registration error was 0.04 mm, making the method a powerful tool for surgical assistance, such as preoperative planning, intraoperative navigation, and postoperative evaluation. The proposed method also has considerable potential for other surgical applications. Compared with other display modes of MI-OCT systems, the registration method in this paper is more intuitive and vivid: the registration results present OCT and microscopic information simultaneously, and details and structures from different depths are shown to surgeons directly. During surgical navigation, subjective judgment and imagination regarding the operative region become unnecessary. Subsequent research will focus on real-time registration under dynamic conditions.

Funding

National Key Research and Development Program of China (2016YFF0102000, 2016YFF0102002, 2016YFF0102003); Key Research Program of Frontier Sciences (QYZDB-SSW-JSC03); Jiangsu Science and Technology Plan Program (BE2018667).

Acknowledgements

We would like to thank Jiang Chunhui’s research group for assistance with the experiments.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. O. M. Carrasco-Zevallos, C. Viehland, B. Keller, M. Draelos, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Review of intraoperative optical coherence tomography: technology and applications,” Biomed. Opt. Express 8(3), 1607–1637 (2017). [CrossRef]  

2. C. Shieh, P. DeSouza, O. Carrasco-Zevallos, D. Cunefare, J. A. Izatt, S. Farsiu, P. Mruthyunjaya, A. N. Kuo, and C. A. Toth, “Impact of microscope integrated OCT on ophthalmology resident performance of anterior segment maneuvers in model eyes,” Invest. Ophthalmol. Visual Sci. 57(7), 4086 (2016). [CrossRef]  

3. N. D. Pasricha, C. Shieh, O. M. Carrasco-Zevallos, B. Keller, J. A. Izatt, C. A. Toth, and A. N. Kuo, “Real-time microscope-integrated OCT to improve visualization in DSAEK for advanced bullous keratopathy,” Cornea 34(12), 1606–1610 (2015). [CrossRef]  

4. C. A. Toth, O. Carrasco-Zevallos, B. Keller, L. Shen, C. Viehland, D. H. Nam, P. Hahn, A. N. Kuo, and J. A. Izatt, “Surgically integrated swept source optical coherence tomography (SSOCT) to guide vitreoretinal (VR) surgery,” Invest. Ophthalmol. Vis. Sci. 56(7), 3512 (2015).

5. Y. K. Tao, J. P. Ehlers, C. A. Toth, and J. A. Izatt, “Intraoperative spectral domain optical coherence tomography for vitreoretinal surgery,” Opt. Lett. 35(20), 3315–3317 (2010). [CrossRef]  

6. F. T. Nguyen, A. M. Zysk, E. J. Chaney, J. G. Kotynek, U. J. Oliphant, F. J. Bellafiore, K. M. Rowland, P. A. Johnson, and S. A. Boppart, “Intraoperative evaluation of breast tumor margins with optical coherence tomography,” Cancer Res. 69(22), 8790–8796 (2009). [CrossRef]  

7. C. C. Wykoff, A. M. Berrocal, A. C. Schefler, S. R. Uhlhorn, M. Ruggeri, and D. Hess, “Intraoperative OCT of a full-thickness macular hole before and after internal limiting membrane peeling,” Ophthalmic Surg Lasers Imaging 41(1), 7–11 (2010). [CrossRef]  

8. S. Siebelmann, M. Hermann, T. Dietlein, B. Bachmann, P. Steven, and C. Cursiefen, “Intraoperative optical coherence tomography in children with anterior segment anomalies,” Ophthalmology 122(12), 2582–2584 (2015). [CrossRef]  

9. J. P. Ehlers, P. K. Kaiser, and S. K. Srivastava, “Intraoperative optical coherence tomography using the RESCAN 700: preliminary results from the DISCOVER study,” Br. J. Ophthalmol. 98(10), 1329–1332 (2014). [CrossRef]  

10. P. N. Dayani, R. Maldonado, S. Farsiu, and C. A. Toth, “Intraoperative use of handheld spectral domain optical coherence tomography imaging in macular surgery,” Retina 29(10), 1457–1468 (2009). [CrossRef]  

11. J. P. Ehlers, A. Uchida, and S. K. Srivastava, “The Integrative Surgical Theater: Combining Intraoperative Optical Coherence Tomography and 3D Digital Visualization for Vitreoretinal Surgery in the DISCOVER Study,” Retina 38(1), S88–S96 (2018). [CrossRef]  

12. J. Lee, R. E. Wijesinghe, D. Jeon, P. Kim, Y.-H. Choung, J. H. Jang, M. Jeon, and J. Kim, “Clinical Utility of Intraoperative Tympanomastoidectomy Assessment Using a Surgical Microscope Integrated with an Optical Coherence Tomography,” Sci. Rep. 8(1), 17432 (2018). [CrossRef]  

13. M. I. Seider, O. M. Carrasco-Zevallos, R. Gunther, C. Viehland, B. Keller, L. Shen, P. Hahn, T. H. Mahmoud, A. Dandridge, J. A. Izatt, and C. A. Toth, “Real-time volumetric imaging of vitreoretinal surgery with a prototype microscope-integrated swept-source OCT device,” Ophthalmology Retina 2(5), 401–410 (2018). [CrossRef]  

14. R. Vassallo, H. Kasuya, B. W. Y. Lo, T. Peters, and Y. Xiao, “Augmented reality guidance in cerebrovascular surgery using microscopic video enhancement,” Healthc. Technol. Lett. 5(5), 158–161 (2018). [CrossRef]  

15. D. Umebayashi, Y. Yamamoto, Y. Nakajima, N. Fukaya, and M. Hara, “Augmented Reality Visualization-guided Microscopic Spine Surgery: Transvertebral Anterior Cervical Foraminotomy and Posterior Foraminotomy,” JAAOS Glob. Res. Rev. 2(4), e008 (2018). [CrossRef]  

16. P. Pratt, M. Ives, G. Lawton, J. Simmons, N. Radev, L. Spyropoulou, and D. Amiras, “Through the HoloLens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels,” Eur. Radiol. Exp. 2(1), 2 (2018). [CrossRef]  

17. F. Alam, S. U. Rahman, S. Ullah, and K. Gulati, “Medical image registration in image guided surgery: Issues, challenges and research opportunities,” Biocybern. Biomed. Eng. 38(1), 71–89 (2018). [CrossRef]  

18. S. Y. Guan, T. M. Wang, C. Meng, and J.-C. Wang, “A Review of Point Feature Based Medical Image Registration,” Chin. J. Mech. Eng. 31(1), 76 (2018). [CrossRef]  

19. Q. Zhao, S. Pizer, M. Niethammer, and J. Rosenman, “Geometric-feature-based spectral graph matching in pharyngeal surface registration,” Med. Image Comput. Comput. Assist. Interv. 8673, 259–266 (2014). [CrossRef]  

20. F. Alam, S. U. Rahman, S. Khusro, S. Ullah, and A. Khalil, “Evaluation of Medical Image Registration Techniques Based on Nature and Domain of the Transformation,” J. Med. Imaging Radiat. Sci. 47(2), 178–193 (2016). [CrossRef]  

21. S. M. Jung, J. U. Park, S. C. Lee, W.-S. Kim, M.-S. Yang, I.-B. Kang, and I.-J. Chung, A Novel Polarizer Glasses-Type 3D Displays with an Active Retarder, Society for Information Display International Symposium (Oxford, UK, Blackwell Publishing Ltd) 2009, 40(1): 348–351.

22. D. Li, J. Babcock, and D. J. Parkhurst, openEyes: a low-cost head-mounted eye-tracking solution, Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, 2006: 95–100. [CrossRef]  

23. J. Illingworth and J. Kittler, “The adaptive Hough transform,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9(5), 690–698 (1987). [CrossRef]  

24. S. D. Roth, “Ray casting for modeling solids,” Comput. Gr. Image Process. 18(2), 109–144 (1982). [CrossRef]  

25. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

26. P. Artal, “Optics of the eye and its impact in vision: a tutorial,” Adv. Opt. Photonics 6(3), 340–367 (2014). [CrossRef]  

27. S. Schedin, P. Hallberg, and A. Behndig, “Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus,” Appl. Opt. 55(3), 507–514 (2016). [CrossRef]  

28. S. P. Chong, T. Zhang, A. Kho, M. T. Bernucci, A. Dubra, and V. J. Srinivasan, “Ultrahigh resolution retinal imaging by visible light OCT with longitudinal achromatization,” Biomed. Opt. Express 9(4), 1477–1491 (2018). [CrossRef]  

29. A. Langenbucher, N. Szentmáry, J. Weisensee, A. Cayless, R. Menapace, and P. Hoffmann, “Back-calculation of Keratometer Index Based On OCT Data and Raytracing - a Monte Carlo Simulation,” Acta Ophthalmol., 2021. [CrossRef]  

30. F. Menduni, L. N. Davies, D. Madrid-Costa, A. Fratini, and J. S. Wolffsohn, “Characterisation of the porcine eyeball as an in-vitro model for dry eye,” Cont. Lens Anterior Eye 41(1), 13–17 (2018). [CrossRef]  

31. M. N. Delyfer, D. Gaucher, M. Govare, A. Cougnard-Grégoire, J. F. Korobelnik, S. Ajana, S. Mohand-Saïd, S. Ayello-Scheer, F. Rezaiguia-Studer, H. Dollfus, J. A. Sahel, and P. O. Barale, “Adapted Surgical Procedure for Argus II Retinal Implantation: Feasibility, Safety, Efficiency, and Postoperative Anatomic Findings,” Ophthalmol. Retina 2(4), 276–287 (2018). [CrossRef]  
