
Improved FAST algorithm for non-uniform rotational distortion correction in OCT endoscopic imaging


Abstract

Optical coherence tomography (OCT) is widely used for endoscopic imaging of endoluminal organs because of its high imaging accuracy and resolution. However, OCT endoscopic imaging suffers from non-uniform rotational distortion (NURD), which can be caused by many factors, such as irregular motor rotation and changes in friction between the probe and the sheath. Correcting this distortion is essential to obtaining high-quality optical coherence tomography angiography (OCTA) images. There are two main approaches to correcting NURD: hardware-based and algorithm-based. Hardware-based methods can be costly and challenging to implement, and may not eliminate NURD. Algorithm-based methods, such as image registration, can correct NURD effectively but are prone to propagating NURD between frames. To address this issue, we process frames with a coarse and a fine registration step. The coarse registration generates a new reference frame by filtering out A-scans that may carry NURD, and the fine registration uses this frame to achieve the final correction. In addition, we improve the Features from Accelerated Segment Test (FAST) algorithm and apply it in both registration steps. Four evaluation metrics were used for the experimental results: signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index measure (SSIM). Compared with scale-invariant feature transform (SIFT), speeded-up robust features (SURF), oriented FAST and rotated BRIEF (ORB), intensity-based (cross-correlation), and optical flow algorithms, our algorithm achieves higher similarity between corrected frames. Moreover, noise in the OCTA data is better suppressed, and vascular information is well preserved. Our image registration-based algorithm reduces NURD propagation between B-scan frames and improves the imaging quality of OCT endoscopic images.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

12 April 2023: A correction was made to the author affiliations.

1. Introduction

Optical coherence tomography (OCT) is a non-invasive optical imaging technique with high resolution [1–4]. Early OCT instruments were mainly used in ophthalmology because of the good light transmittance of the eye. With the development of optical imaging technology, endoscopic imaging has also advanced. For example, intracoronary OCT is used to guide interventional procedures: its high-speed imaging provides the surgeon with information for proper stent placement, avoiding malapposition and large stent-edge dissections [5,6]. The advent of SS-OCT has greatly improved the quality of esophageal imaging and is of considerable help in predicting esophageal adenocarcinoma (EAC) [7,8]. OCT is also increasingly being used for vascular imaging [9–11].

Figure 1 shows the process of OCT endoscopic imaging. First, the OCT endoscope is placed deep into the area to be scanned and connected by a catheter. The probe is then rotated by a distal or proximal micro motor [12] to achieve endoscopic imaging. Finally, scanning is completed through continuous rotation and pullback. Inevitable motor vibration and non-uniformity of the probe's rotation speed during scanning distort the images, a problem known as non-uniform rotational distortion (NURD).


Fig. 1. Process of OCT endoscopic imaging. The probe keeps pulling back while it is rotating. The yellow arrow shows the direction of pulling back, and the white one shows the direction of rotation.


Figure 2 illustrates endoscopic OCT frames with NURD. The leading causes of NURD include the narrow scanning environment of vessels, vessel calcification, and various frictions among the motor, catheter, and sheath. These disrupt the smooth rotation of the probe and cause variations in A-scan spacing within the B-scan. Therefore, an effective method is needed to correct NURD in endoscopic OCT imaging. Current research on NURD in OCT endoscopic imaging falls into two main categories: hardware-based improvements and algorithm-based correction methods. Early OCT scanners used a distal motor to rotate the probe. Some researchers apply a proximal micro-motor to reduce the vibration of the distal motor and the friction between the probe and the catheter sheath [13]. Li et al. use a high-speed proximal micro-motor to reduce friction variation during rotation [14]. To address friction between the probe and catheter, Liao et al. design a manipulable catheter sheath that controls the movement of the inner probe through bending, translation, and rotation of the sheath [15]. However, other factors can also cause NURD, such as involuntary movements of the patient and excessive curvature of the scanning area, so hardware improvements cannot eliminate the problem. Some researchers therefore propose algorithm-based correction methods, mainly based on image registration. van Soest et al. present a method that aligns a sequence of images by globally optimizing the match between individual lines in subsequent frames [16]. Mavadia-Shukla et al. employ spatial frequency analysis to select and remove distortions [17]. Most registration-based NURD correction algorithms share a common problem: there is no standard reference frame, and using any OCT frame that itself contains NURD as the reference may propagate the distortion [18]. We propose a feature-based registration method to correct NURD in OCT endoscopic imaging. First, the first frame of the OCT frame set is taken as the reference frame, and the remaining frames are floating frames. Both are split into chunks and passed to a coarse registration step that calculates the offset between corresponding chunks. The chunk offsets of the floating frames are then linearly interpolated to obtain the offsets of all A-scans. Next, the offsets of the A-scans at the same position across floating frames are filtered using 95% confidence intervals, and the filtered offsets at each position are averaged to form a spatial transformation matrix. Finally, a new reference frame is generated from this matrix, and all frames are registered again against it to obtain the final correction. We also improve the FAST algorithm by grading its detection strategy. Our approach is evaluated on data sets from a specially designed microfluidic phantom and produces suitable correction results for data collected at different flow rates and rotation speeds.


Fig. 2. Endoscopic OCT frames with NURD. A, An OCT frame under endoscopic imaging. B, Differences between different frames scanned at the same position (The figure shows part of the area in the frame), with NURD present near the orange dashed line.


Our contributions are as follows:

  • 1. A feature-based registration method is proposed to correct the NURD problem in OCT endoscope imaging.
  • 2. To handle the propagation of NURD, the offsets obtained from coarse registration are filtered using 95% confidence intervals and then averaged to generate a new reference frame.
  • 3. We improve the quality of feature points by grading the detection strategy of the FAST [19] algorithm.

2. Methods

This section describes the NURD correction method in detail, including the overall scheme, data acquisition, coarse registration, evaluation metrics, and the improved FAST algorithm.

As shown in Fig. 3, the scheme of the proposed method includes two parts: coarse registration and fine registration. In the coarse registration, the input frames are split into chunks, and feature points in each chunk are detected and matched to obtain the offset of each chunk. Next, the offsets of all A-scans are obtained by linearly interpolating each chunk's offset. Then the offsets of all A-scans are filtered using 95% confidence intervals and averaged to obtain the spatial transformation matrix $M_r$. Finally, the reference image is spatially transformed according to matrix $M_r$ to obtain the new reference frame $F_0\_ci$.


Fig. 3. The proposed scheme for correcting NURD in endoscopic OCT imaging data. The coarse registration step aims to generate a new reference frame $F_0\_ci$ (ci denotes the confidence interval). The fine registration uses the new reference frame to register all B-scan frames and thus generate the final OCTA image.


Figure 4 illustrates the filtering process using confidence intervals. In the coarse registration, we obtain the offset of each A-scan by linear interpolation. In the filtering step, we calculate the 95% confidence interval for the offsets of the A-scans at the same position in all B-scans, which filters out some of the samples, and then average the remaining offsets. All A-scan positions in the B-scan are filtered in this way in turn. After filtering, the resulting values form the spatial transformation matrix $M_r$.
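As a concrete illustration, the following Python sketch applies this filtering to a stack of per-A-scan offsets (a minimal sketch, assuming NumPy/SciPy; `offsets` is a hypothetical array of shape (n_frames, n_ascans) produced by coarse registration, and the 95% interval is read as mean ± z·std of the samples, one possible interpretation of the filter):

```python
import numpy as np
from scipy import stats

def ci_filtered_mean(offsets: np.ndarray, confidence: float = 0.95) -> np.ndarray:
    """For each A-scan position, drop offset samples falling outside the
    95% interval (mean +/- z*std, an assumed reading of the paper's filter)
    and average the surviving samples."""
    z = stats.norm.ppf(0.5 + confidence / 2)     # ~1.96 for 95%
    mean = offsets.mean(axis=0)
    std = offsets.std(axis=0, ddof=1)
    keep = np.abs(offsets - mean) <= z * std     # per-sample boolean mask
    # Mean over the surviving samples at each A-scan position.
    filtered = np.where(keep, offsets, np.nan)
    return np.nanmean(filtered, axis=0)          # per-A-scan values forming M_r

# Example: 50 B-scans, 1024 A-scans, with a few simulated NURD-distorted frames.
rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 0.5, size=(50, 1024))
offsets[[3, 17, 40], :] += 5.0                   # simulated NURD outliers
m_r_row = ci_filtered_mean(offsets)
```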


Fig. 4. The process for handling offsets using confidence intervals. The red brackets mark the A-scans at the same position, and the green arrow indicates the direction of filtering and averaging. The offsets of all A-scans are filtered and averaged separately, suppressing the propagation of NURD during registration.


After the coarse registration step, the new reference frame is used in the fine registration step. Fine registration shares several steps with coarse registration, such as splitting the frames, matching feature points in each chunk, and computing the spatial transformation matrix by linear interpolation. Unlike coarse registration, the spatial transformation matrices in the fine registration step transform each floating frame to generate corrected frames, which are then used to generate the OCTA results. The OCTA generated from the registered OCT frames consists of three main components: the static component, noise, and the vessel cross-sections. We evaluated the corrected results in two respects: SNR and PSNR for the noise of the OCTA, and SSIM and MSE for the average similarity between corrected frames. In the registration, the chunk width varies from a quarter of the B-scan length down to 8 pixels, the smallest width at which feature points can still be detected, halving at each step. The splitting range is the same in the coarse and fine registration steps. In fine registration, each new reference frame is tested separately with every chunk size in the splitting range, and the correction results are evaluated; the setting with the best evaluation result is recorded.
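For concreteness, the sketch below enumerates this splitting-size schedule and evaluates a corrected frame with the four metrics (a minimal sketch assuming scikit-image; the SNR definition used here, mean over standard deviation in decibels, is an assumption and may differ from the paper's exact definition):

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def chunk_widths(frame_len: int, min_width: int = 8):
    """Chunk widths from a quarter of the B-scan length down to 8 pixels,
    halving at every step."""
    w = frame_len // 4
    while w >= min_width:
        yield w
        w //= 2

def evaluate(ref: np.ndarray, corrected: np.ndarray) -> dict:
    """SSIM/MSE measure similarity between frames; PSNR/SNR gauge noise."""
    data_range = float(ref.max() - ref.min())
    return {
        "MSE": mean_squared_error(ref, corrected),
        "SSIM": structural_similarity(ref, corrected, data_range=data_range),
        "PSNR": peak_signal_noise_ratio(ref, corrected, data_range=data_range),
        "SNR": 20 * np.log10(corrected.mean() / corrected.std()),  # assumed definition
    }

print(list(chunk_widths(1024)))   # [256, 128, 64, 32, 16, 8]
```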

2.1 Data acquisition

The experimental data were scanned in the Optical Laboratory, College of Science, Shanghai Institute of Technology. The scanner, in essence, includes a single-mode fiber for beam delivery, micro-optics to focus (and deflect) the beam, and a beam scanning device [20]. The lab-built OCT system consists of a fiber-based Michelson interferometer with a 1310 nm center wavelength swept laser (Axsun, AXP50125), which sweeps at a rate of 100 kHz over a broad spectral bandwidth of approximately 100 nm. The output light from the laser is split by a 90/10 coupler into two beams that enter two circulators (Thorlabs, CIR-1310-50-APC), respectively. One beam is collimated and then focused by a lens onto the reflective surface of the plane mirror in the reference arm. The other beam enters the imaging catheter, allowing light to be transmitted to and collected from the sample. The backscattered light from the sample arm and the reference arm interferes in a 50/50 coupler (Thorlabs, TW1300R5A2), and the interference signal is detected by a balanced detector (Thorlabs, PDB480C-AC), recorded via a 12-bit data acquisition board (Alazar, ATS9350), and transmitted to a computer, where it is processed into a complex-domain depth-encoded signal. The OCT system provides an axial resolution of 11 $\mu$m and an imaging depth of 3.7 mm (in tissue, refractive index n = 1.35). The catheter provides a lateral resolution of 16 $\mu$m.

In the scanning of a specially designed microfluidic phantom, we set various flow rates, probe rotation speeds, and image resolutions. As shown in Table 1, the flow-rate settings include no fluid (i.e., air) and 13 groups from 0.5 mm/s to 32 mm/s; the angular velocities include four values from 10 rps to 100 rps; and the image resolutions include 1024*1024, 2048*1024, 6144*1024, and 4096*512 pixels. OCT data were acquired at 14 positions in the experiment; 20 frames were acquired per group when the rotation speed was 40 rps, and 50 frames per group at 60 rps, 80 rps, and 100 rps.


Table 1. Setting of experimental acquisition

2.2 Coarse registration

In OCT endoscopic imaging, a set of B-scan data is acquired from the same position of the phantom. However, due to the existence of NURD, there is an offset between each A-scan and its actual position. In addition, NURD is mainly present in the transverse portion of OCT endoscopic imaging, and there is almost no distortion in the longitudinal portion [21].

Therefore, in this study, only the transverse offset on the B-scan is considered in the registration process. We vertically split each frame into several chunks, as shown in Fig. 5. First, the reference frame is split vertically into $n_r$ chunks of width $p/n_r$ pixels (where $p$ is the length of the frame). Second, the floating frame is also split equally into $n_r$ chunks of width $p/n_r + m$ (the default $m$ is 50% of the chunk width in the reference frame), which ensures that each floating chunk covers the entire corresponding reference chunk during registration. The size of $m$ affects the robustness of the algorithm, and its value depends on the size of the frame and the maximum offset in the data. Finally, feature point detection and matching are performed for the corresponding chunks in the reference and floating frames. After the feature points are matched, the offset of each chunk is obtained. In the coarse registration process, each chunk's offset is represented by the offset of the first A-scan in that chunk. We use linear interpolation (Formula (1)) to calculate the offsets of all A-scans in each chunk.

$$O = \frac{O_0(P_1-P)+O_1(P-P_0)}{P_1-P_0}$$
where $O$ denotes the offset of an A-scan lying between adjacent chunks, $O_0$ and $P_0$ denote the offset and horizontal pixel coordinate of the first A-scan in one chunk, and $O_1$ and $P_1$ denote those of the first A-scan in the next chunk. By linearly interpolating across all chunks, the offsets of all A-scans in a frame form a spatial transformation matrix. After performing linear interpolation for all frames, we obtain a set of spatial transformation matrices. The final spatial transformation matrix and the new reference frame are obtained by filtering and averaging these matrices with confidence intervals.
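Since Formula (1) is ordinary linear interpolation between the anchor A-scans of adjacent chunks, NumPy's `np.interp` realizes it directly (a minimal sketch; the chunk coordinates and offsets below are illustrative values, not measured data):

```python
import numpy as np

# Horizontal coordinates P_i of the first A-scan of each chunk, and the
# matched offsets O_i from feature matching (illustrative values only).
chunk_starts  = np.array([0, 256, 512, 768, 1023])
chunk_offsets = np.array([0.0, 2.5, -1.0, 3.0, 0.5])

# Between two anchors this evaluates exactly
# O = (O_0*(P_1 - P) + O_1*(P - P_0)) / (P_1 - P_0).
ascan_offsets = np.interp(np.arange(1024), chunk_starts, chunk_offsets)
```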


Fig. 5. Overview of the splitting strategy of our method. In the splitting of the reference frame and floating frame, the first chunk is registered to the first A-scan, the last chunk is registered to the last A-scan, and the rest of the chunks are registered to the middle A-scan.
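A minimal sketch of this splitting strategy (assumptions: frames are NumPy arrays of shape (depth, width), $n_r$ divides the frame width, and the floating chunks are widened symmetrically by $m$ and clamped at the borders; all names are illustrative):

```python
import numpy as np

def split_frames(ref: np.ndarray, flo: np.ndarray, n_r: int, m: int | None = None):
    """Split the reference frame into n_r chunks of width p/n_r and the
    floating frame into n_r wider chunks of width up to p/n_r + m, so each
    floating chunk fully covers its reference chunk during matching."""
    p = ref.shape[1]
    w = p // n_r
    m = w // 2 if m is None else m            # default m: 50% of chunk width
    pairs = []
    for i in range(n_r):
        r_chunk = ref[:, i * w:(i + 1) * w]
        lo = max(0, i * w - m // 2)           # clamp at the frame borders
        hi = min(p, (i + 1) * w + m - m // 2)
        pairs.append((r_chunk, flo[:, lo:hi]))
    return pairs

chunks = split_frames(np.zeros((1024, 1024)), np.zeros((1024, 1024)), n_r=8)
print(chunks[1][0].shape, chunks[1][1].shape)   # (1024, 128) (1024, 192)
```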


2.3 Improved FAST feature detecting algorithm

FAST (Features from Accelerated Segment Test) is a fast feature point extraction algorithm proposed by Rosten and Drummond [19]. Compared with SIFT [22] and SURF [23], the FAST algorithm runs very fast. OCT endoscopic imaging involves many frames and feature points; therefore, an efficient detector such as the FAST algorithm is required.

As shown in Fig. 6, for a pixel $p$ to be detected as an interest point, $N$ contiguous pixels out of the 16 on the circle must all be brighter or all darker than $I_p$ (the intensity of pixel $p$) by a threshold $t$ (the default $t$ is 20% of $I_p$, and the default $N$ is 12). However, in feature point detection on OCT data, the FAST algorithm yields many duplicate and useless feature points. We improve its detection strategy by expanding the original three classes of points to five: $darker_1$, $darker_2$, $brighter_1$, $brighter_2$, and $similar$, as shown in Formula (2).

$$S_{p\to x} = \left\{\begin{array}{l} d_2,I_{p\to x}\leq I_p-t_2 (darker2) \\ d_1,I_{p\to x}\leq I_p-t_1 (darker1) \\ s,I_p-t_1<I_{p\to x}<I_p+t_1 (similar)\\ b_1,I_p+t_1\leq I_{p\to x} (brighter1)\\ b_2,I_p+t_2\leq I_{p\to x} (brighter2) \end{array}\right.$$
where $S_{p\to x}$ is the state, $t_i$ is a threshold, $I_p$ is the intensity of the pixel $p$, $I_{p\to x}$ is the intensity of the pixel $x$ (a point on the circle in Fig. 6), and $d_1$, $d_2$, $s$, $b_1$, $b_2$ denote $darker_1$, $darker_2$, $similar$, $brighter_1$, $brighter_2$, respectively. In the FAST algorithm, the default threshold $t$ is usually 20% of the pixel intensity of point $p$ [19]. To further grade the bright and dark points, we set $t_1$ = 20%*$I_p$ and $t_2$ = 40%*$I_p$. Whether point $p$ is a feature point is decided according to Formula (3).
$$F_p = \left\{\begin{array}{l} 1,(d_1>{=}4\;and\;d_2 >{=}8)\; or\; (b_1 >{=}4\;and\;b_2 >{=}8)\\ 0,else \end{array}\right.$$
where $F_p$ indicates whether point $p$ is a feature point. A point is defined as a feature point when 4 consecutive points belong to $darker_1$ and 8 consecutive points belong to $darker_2$, or 4 consecutive points belong to $brighter_1$ and 8 consecutive points belong to $brighter_2$. These parameters were chosen after several experimental comparisons.
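The graded test can be sketched as follows (a minimal sketch, assuming NumPy; the five bands of Formula (2) are treated as disjoint intensity bands, and the "consecutive points" rule of Formula (3) is checked as circular runs around the Bresenham circle — one possible reading of the formulas):

```python
import numpy as np

# The 16 offsets of the radius-3 Bresenham circle used by FAST.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def classify(ix: float, ip: float) -> str:
    """Assign one of the five states of Formula (2), with t1=0.2*Ip, t2=0.4*Ip."""
    t1, t2 = 0.2 * ip, 0.4 * ip
    if ix <= ip - t2:
        return "d2"
    if ix <= ip - t1:
        return "d1"
    if ix >= ip + t2:
        return "b2"
    if ix >= ip + t1:
        return "b1"
    return "s"

def max_circular_run(states: list, target: str) -> int:
    """Longest circular run of `target` around the 16-point circle."""
    best = run = 0
    for s in states + states:               # doubling handles the wraparound
        run = run + 1 if s == target else 0
        best = max(best, run)
    return min(best, len(states))

def is_feature(img: np.ndarray, y: int, x: int) -> bool:
    """Formula (3): 4 consecutive darker1 points and 8 consecutive darker2
    points, or the analogous brighter condition."""
    ip = float(img[y, x])
    states = [classify(float(img[y + dy, x + dx]), ip) for dy, dx in CIRCLE]
    dark = max_circular_run(states, "d1") >= 4 and max_circular_run(states, "d2") >= 8
    bright = max_circular_run(states, "b1") >= 4 and max_circular_run(states, "b2") >= 8
    return dark or bright

# Example: a bright pixel with 8 strongly darker and 4 mildly darker neighbors.
img = np.full((16, 16), 200.0)
for k in range(8):
    dy, dx = CIRCLE[k]; img[8 + dy, 8 + dx] = 100.0   # darker2 band
for k in range(8, 12):
    dy, dx = CIRCLE[k]; img[8 + dy, 8 + dx] = 150.0   # darker1 band
print(is_feature(img, 8, 8))   # True
```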


Fig. 6. The process of detecting feature points by the FAST algorithm. P is the point of interest to be tested, surrounded by the Bresenham arc, marking the 16 points p→x that will be compared.


3. Results

Four resolutions of OCT endoscopic frames, 1024*1024, 2048*1024, 6144*1024, and 4096*512 pixels, were tested separately in the experiments. This section describes the treatment of the NURD propagation problem and the analysis of the results, the performance of the improved FAST algorithm, and the comparison and evaluation of the correction results. In our experiments, the computing platform ran Ubuntu 20.04 with an Intel i5-9400F CPU.

3.1 Experiments of NURD propagation

To deal with the NURD propagation problem, we selected the first frame in the dataset as the reference frame and first performed coarse registration. Since each set of frames was collected repeatedly at the same position, the A-scans at the same position should theoretically be aligned. However, some offset can exist between these A-scans due to the presence of NURD in the frames.

To minimize the propagation of NURD between frames, we filtered all A-scans using 95% confidence intervals, following the coarse registration process in Fig. 3. First, all B-scans are split into chunks. Next, the offsets of the chunks in all reference and floating frames are calculated, and the offsets of all A-scans are obtained by interpolation. Finally, each set of A-scan offsets is filtered using a 95% confidence interval. This filtering excludes A-scans with possible NURD problems, reducing the impact of NURD propagation on the correction results. In the experiments, we found that in a set of 50 frames of OCT data, 2-4 samples are removed when all A-scans are filtered with 95% confidence intervals. To compare the performance before and after using the confidence interval, Fig. 7-A compares the offsets calculated at the same A-scan position by the plain mean and by the mean after confidence-interval filtering. To show the difference more visually, the difference between the two curves is plotted in Fig. 7-B; the maximum difference is nearly 1.25 pixels.


Fig. 7. Comparison of offsets before and after filtering of 95% confidence intervals (The experiment used the phantom data with the size of 1024*2048 pixels.). A, The offset of the A-scan at the same position of the B-scan is compared using the mean before and after the confidence interval filtering. B, The plot of the difference between the two curves in panel A. The blue line is the curve of offset mean filtered by the confidence interval filter, and the yellow one is the curve of the offset mean.


We applied three approaches to the NURD propagation problem separately, and the evaluation results of the corrected data are shown in Table 2. The approach using confidence-interval filtering followed by averaging performed best, while using only the mean of the offsets calculated by coarse registration also suppressed NURD propagation well.


Table 2. Results of different approaches to NURD propagation

3.2 Performance of improved FAST algorithm

In the experiment, we tested the performance of four feature point detection algorithms, SIFT, SURF, FAST, and ORB, on chunks of different sizes, as shown in Fig. 8. During feature point detection, the ORB algorithm cannot work when the image width is too small (for example, a chunk size of 16 pixels) because of the window size of the detector and the number of scales [24]. The results of the FAST and ORB algorithms contain many duplicate feature points. The SIFT and SURF algorithms detect relatively more feature points, some of which are invalid or erroneously placed in feature-free regions, but the overall quality of their feature points is relatively high. Our improved FAST algorithm guarantees both the quality and the quantity of feature points. When the default threshold of the FAST algorithm is used on our data, too many useless feature points are detected; we therefore adjusted the value of N (from 6 to 14) and selected two representative results for the statistics.


Fig. 8. Detection results of different feature point detection algorithms. Upper part, results on a 64*300 pixels frame. Bottom part, results on an 80*300 pixels frame. The yellow arrows mark some erroneous feature points.


Table 3 shows the number of feature points and each algorithm's time cost. The FAST algorithm takes the least time, and our improved FAST algorithm has similar execution efficiency. Compared with the SIFT, SURF, and FAST algorithms, our improved FAST algorithm detects only 20%-50% as many feature points. Although our improvement increases the complexity of the FAST algorithm, it maintains its detection efficiency.


Table 3. Performance of several feature point detection algorithms on OCT data

3.3 Experiments of NURD corrections

The SIFT, SURF, FAST, and ORB algorithms and a cross-correlation-based (intensity-based) algorithm were compared under different chunk sizes and numbers. In the cross-correlation algorithm, we chose gradient descent as the optimization function. In addition, we used an optical flow algorithm [25] for comparison. Since longitudinal distortion is ignored in our experiments, we averaged the transverse offsets calculated by the optical flow algorithm along each column: the average of the pixel offsets in a column is the offset of that A-scan, and these offsets form a spatial transformation matrix. The optical flow result is obtained by spatially transforming the floating frames with the matrix obtained by this strategy and generating the final OCTA. In the optical flow algorithm, we set the radius of the window considered around each pixel to 8 pixels, and the number of times the floating frame is warped to 10. We evaluated the overall noise of the generated OCTA frames by SNR and PSNR, and the mean SSIM and MSE between the corrected B-scans, as shown in Table 4.
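The column-averaging strategy can be sketched as follows (assumptions: OpenCV is available, and Farneback dense optical flow is used here as a stand-in for the Lucas-Kanade implementation of Ref. [25]; the window and iteration parameters only approximate the settings quoted above, since Farneback's parameters have different semantics):

```python
import cv2
import numpy as np

def ascan_offsets_from_flow(ref: np.ndarray, flo: np.ndarray) -> np.ndarray:
    """Compute a dense flow field, discard the longitudinal component, and
    average the transverse component over each column: one offset per A-scan."""
    flow = cv2.calcOpticalFlowFarneback(
        ref, flo, None,
        pyr_scale=0.5, levels=3,
        winsize=8,        # approximates the 8-pixel window quoted in the text
        iterations=10,    # stands in for the 10 warps quoted in the text
        poly_n=5, poly_sigma=1.1, flags=0)
    return flow[..., 0].mean(axis=0)     # x-component averaged per column

# Usage: offsets = ascan_offsets_from_flow(ref_u8, flo_u8)  # uint8 grayscale frames
```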


Table 4. Performance comparison with state-of-the-art methods (2048*1024 pixels phantom data)

Among the feature point-based algorithms, the FAST and ORB algorithms have the poorest correction results. We tried adjusting the threshold of the FAST algorithm during the experiments: when N was reduced, the number of detected feature points became large and the correction results worsened. The ORB algorithm uses the same feature point detection strategy as FAST; therefore, its correction results were also poor. In the evaluation of noise in the OCTA frames, our method performs best in both SNR and PSNR. In the evaluation of similarity between frames, ours performs best in SSIM and second best in MSE, close to the best result, which is achieved by the SURF algorithm.

Figure 9 compares the results of OCTA correction using frames of 1024*1024 pixels. Ideally, there should be only two white dots at the dashed yellow mark in the OCTA frame. Poor registration between frames or the presence of NURD propagation leads to noise and incorrect vascular information in the OCTA images. Although the SURF algorithm performed best when evaluated by MSE, it still shows a small amount of significant noise in the OCTA image. Because many pixels vary little in intensity between frames, the optical flow algorithm ignores many of the changing pixels and ultimately performs the worst among the compared algorithms. The experimental results show that our method is superior to the other methods: the improved FAST algorithm reduces the number of feature points while guaranteeing detection quality, and our method outperforms the SIFT and SURF algorithms both in correction efficiency and in the corrected results.


Fig. 9. Comparison of feature detection algorithms and our method. A, the OCTA result without correction. B-H, the OCTA results corrected with the SIFT, SURF, FAST (N=12), and ORB algorithms, optical flow, cross-correlation, and our method, respectively. Obvious noise is marked with a white dashed line, and cross-sectional information about the vessels is marked with a yellow dashed line.


4. Discussion

In this paper, we propose an improved FAST algorithm to correct NURD in endoscopic OCT imaging. It achieves good results in suppressing the propagation of NURD between OCT frames during registration. In addition, we use a new grading strategy to improve the FAST algorithm while maintaining its efficiency. On several sets of phantom data, our method achieves good results. In future work, we will experiment on in vivo data and further optimize the correction speed of our algorithm.

Funding

Collaborative Innovation Fund of Shanghai Institute of Technology (XTCX2022-04); Natural Science Foundation of Shanghai (20ZR1455600); Science and Technology Commission of Shanghai Municipality (19441905800); National Natural Science Foundation of China (61675134, 62175156, 81827807).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, "Optical coherence tomography - principles and applications," Rep. Prog. Phys. 66(2), 239–303 (2003).

2. S. Aumann, S. Donner, J. Fischer, and F. Müller, "Optical coherence tomography (OCT): principle and technical realization," in High Resolution Imaging in Microscopy and Ophthalmology (Springer, 2019), pp. 59–85.

3. W. Drexler, M. Liu, A. Kumar, T. Kamali, A. Unterhuber, and R. A. Leitgeb, "Optical coherence tomography today: speed, contrast, and multimodality," J. Biomed. Opt. 19(7), 071412 (2014).

4. J. Fujimoto and E. Swanson, "The development, commercialization, and impact of optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 57(9), OCT1–OCT13 (2016).

5. D. A. Jones, K. S. Rathod, S. Koganti, et al., "Angiography alone versus angiography plus optical coherence tomography to guide percutaneous coronary intervention: outcomes from the Pan-London PCI cohort," JACC: Cardiovasc. Interv. 11(14), 1313–1321 (2018).

6. Z. A. Ali, A. Maehara, P. Généreux, et al., "Optical coherence tomography compared with intravascular ultrasound and with angiography to guide coronary stent implantation (ILUMIEN III: OPTIMIZE PCI): a randomised controlled trial," The Lancet 388(10060), 2618–2628 (2016).

7. M. Ulrich, L. Themstrup, N. de Carvalho, et al., "Dynamic optical coherence tomography in dermatology," Dermatology 232(3), 298–311 (2016).

8. D. Kohli, M. Schubert, A. Zfass, and T. Shah, "Performance characteristics of optical coherence tomography in assessment of Barrett's esophagus and esophageal cancer: systematic review," Dis. Esophagus 30(11), 1–8 (2017).

9. Y. Huang, Q. Zhang, M. R. Thorell, L. An, M. K. Durbin, M. Laron, U. Sharma, G. Gregori, P. J. Rosenfeld, and R. K. Wang, "Swept-source OCT angiography of the retinal vasculature using intensity differentiation-based optical microangiography algorithms," Ophthalmic Surg. Lasers Imaging Retina 45(5), 382–389 (2014).

10. J. de Moura, J. Novo, P. Charlón, N. Barreira, and M. Ortega, "Enhanced visualization of the retinal vasculature using depth information in OCT," Med. Biol. Eng. Comput. 55(12), 2209–2225 (2017).

11. S. T. Hsu, X. Chen, H. T. Ngo, et al., "Imaging infant retinal vasculature with OCT angiography," Ophthalmol. Retina 3(1), 95–96 (2019).

12. J. Zhang, T. Nguyen, B. Potsaid, et al., "Multi-MHz MEMS-VCSEL swept-source optical coherence tomography for endoscopic structural and angiographic imaging with miniaturized brushless motor probes," Biomed. Opt. Express 12(4), 2384–2403 (2021).

13. R. N. Shah, S. Kretschmer, J. Nehlich, Ç. Ataman, and H. Zappe, "Compact OCT probe for flexible endoscopy enabled by piezoelectric scanning of a fiber/lens assembly," in MOEMS and Miniaturized Systems XVIII, vol. 10931 (SPIE, 2019), pp. 76–82.

14. J. Li, M. de Groot, F. Helderman, J. Mo, J. M. Daniels, K. Grünberg, T. G. Sutedja, and J. F. de Boer, "High speed miniature motorized endoscopic probe for optical frequency domain imaging," Opt. Express 20(22), 24132–24138 (2012).

15. G. Liao, O. C. Mora, B. Rosa, D. D'Allaba, A. Asch, P. Fiorini, M. de Mathelin, F. Nageotte, and M. J. Gora, "Endoscopic optical coherence tomography volumetric scanning method with deep frame stream stabilization," Scanning 20, 30 (2020).

16. G. van Soest, J. Bosch, and A. van der Steen, "Alignment of intravascular optical coherence tomography movies affected by non-uniform rotation distortion," in Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine XII, vol. 6847 (SPIE, 2008), pp. 278–285.

17. J. Mavadia-Shukla, J. Zhang, K. Li, and X. Li, "Stick-slip nonuniform rotation distortion correction in distal scanning optical coherence tomography catheters," J. Innovative Opt. Health Sci. 13(06), 2050030 (2020).

18. A. Shirazi, "Analytical and experimental investigations on non-uniform rotational distortion (NURD) correction," Ph.D. thesis, UC Irvine (2017).

19. E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in European Conference on Computer Vision (Springer, 2006), pp. 430–443.

20. M. J. Gora, M. J. Suter, G. J. Tearney, and X. Li, "Endoscopic optical coherence tomography: technologies and clinical applications," Biomed. Opt. Express 8(5), 2405–2444 (2017).

21. T. H. Nguyen, O. O. Ahsen, K. Liang, J. Zhang, H. Mashimo, and J. G. Fujimoto, "Correction of circumferential and longitudinal motion distortion in high-speed catheter/endoscope-based optical coherence tomography," Biomed. Opt. Express 12(1), 226–246 (2021).

22. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision 60(2), 91–110 (2004).

23. H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in European Conference on Computer Vision (Springer, 2006), pp. 404–417.

24. E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in 2011 International Conference on Computer Vision (IEEE, 2011), pp. 2564–2571.

25. A. Plyer, G. Le Besnerais, and F. Champagnat, "Massively parallel Lucas-Kanade optical flow for real-time video processing applications," J. Real-Time Image Process. 11(4), 713–730 (2016).
