Unpaired intra-operative OCT (iOCT) video super-resolution with contrastive learning

Open Access

Abstract

Regenerative therapies show promise in reversing sight loss caused by degenerative eye diseases. Their precise subretinal delivery can be facilitated by robotic systems together with Intra-operative Optical Coherence Tomography (iOCT). However, iOCT’s real-time retinal layer information is compromised by inferior image quality. To address this limitation, we introduce an unpaired video super-resolution methodology for iOCT quality enhancement. A recurrent network is proposed to leverage temporal information from iOCT sequences and spatial information from pre-operatively acquired OCT images. Additionally, a patchwise contrastive loss enables unpaired super-resolution. Extensive quantitative analysis demonstrates that our approach outperforms existing state-of-the-art iOCT super-resolution models. Furthermore, ablation studies showcase the importance of temporal aggregation and contrastive loss in elevating iOCT quality. A qualitative study involving expert clinicians also confirms this improvement. The comprehensive evaluation demonstrates our method’s potential to enhance iOCT image quality, thereby facilitating successful guidance for regenerative therapies.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Regenerative therapies [1,2] are considered promising treatments for degenerative eye diseases such as Age-Related Macular Degeneration (AMD) [3]. AMD is a leading cause of visual impairment for elderly individuals in developed countries [4]. The effectiveness of these therapies heavily relies on the precise delivery of novel therapeutic agents, either sub-retinally or intra-retinally. Consequently, robotic systems accompanied by exceptional visualization capabilities may provide the level of precision required during the implantation [5]. Intra-operative Optical Coherence Tomography (iOCT) can significantly assist such vitreoretinal surgeries by capturing cross-sectional images of the retina to guide therapy delivery.

In the pre-operative setting, Optical Coherence Tomography (OCT) is widely used to non-invasively visualize distinctive retinal layers, offering valuable diagnostic capabilities. OCT systems use light in the near-infrared spectral range to capture cross-sectional images and subsequently apply spatiotemporal signal averaging to provide OCT images of high quality. This enables clinicians to effectively differentiate between various retinal layers and pathologies. However, the long acquisition time of pre-operative OCT scans renders them unsuitable for real-time visualization during interventions. On the other hand, iOCT provides the advantage of real-time acquisition of cross-sectional retinal images in 2D and 3D with micrometer resolution. Studies indicate that iOCT can assist the precise delivery of therapeutics in the subretinal space [6]. Nonetheless, this real-time acquisition comes at the expense of compromised image quality, as evidenced by high levels of speckle noise [7] and low signal strength [8], thereby limiting its interventional utility. The purpose of our work is to augment the capabilities of current iOCT systems by computationally enhancing the iOCT image quality without requiring costly hardware upgrades.

Several approaches have been proposed for OCT image quality enhancement, including diffusion-based [9], registration-based [10] and segmentation-based [11] methods, along with Wiener filters [8], non-local filters [12] and sparse coding [13]. These methods can successfully enhance OCT image quality by augmenting retinal layer spatial details and mitigating speckle noise, while preserving the image content. However, they rely solely on information from a single iOCT image without employing a learning-based mapping to a high-quality domain. This strategy might not be sufficient for simultaneous noise reduction and generation of high-frequency information and finer details, which are vital for enhancing the overall quality of the denoised images. Furthermore, the computational cost and the long scan acquisition time prohibit the use of similar approaches for real-time iOCT quality enhancement. Throughout this paper, "super-resolution" and "quality enhancement" are used interchangeably, following common usage in the literature.

Generative models such as Generative Adversarial Networks (GANs) [14] have been successfully used in image quality enhancement or image domain translation tasks using natural image datasets [15–17]. Moreover, these models have been applied to various medical image modalities such as PET [18], CT [19], OCTA [20,21], OCT [22–24] and iOCT [25–27]. However, many of these methods [20,21,24] artificially generate paired LR and HR datasets by simulating the degradation process with conventional filters. It is important to acknowledge that the super-resolution challenge we address involves a more substantial gap between the LR and HR domains. This is because iOCT images are acquired in real-time during surgical procedures, which can lead to dynamic changes in tissue conditions, lighting variations, motion artifacts and high noise levels. These factors can introduce substantial variations in image appearance and quality, creating a domain that cannot be easily simulated by conventional filters. Also, iOCT images exhibit surgical instrument interactions, a characteristic absent in pre-operative OCT diagnostic images. Furthermore, the existing methods focus on single-image quality enhancement, disregarding temporal consistency and thereby leading to inhomogeneous enhancement when applied to videos. While numerous works have been proposed for video quality enhancement [28], they necessitate well-aligned input and target videos, a condition which often remains unmet in the medical imaging domain, particularly in iOCT.

This research explores unpaired iOCT video super-resolution with the objective of improving the image quality of low-resolution (LR) iOCT video frames by leveraging information from high-resolution (HR) pre-operatively acquired OCT images. We propose an adversarial framework which contains a bidirectional recurrent neural network (VSR model) as generator to ensure temporal consistency, trained with a multilayer, patchwise contrastive loss to enable super-resolution between unpaired LR and HR datasets. Our VSR model adopts critical components from state-of-the-art VSR models such as BasicVSR [29] for efficient alignment and fusion of temporal information. The vast majority of the proposed VSR models (including BasicVSR) are trained on artificially paired video datasets using pixel-level supervision. As simultaneous acquisition of real LR and HR iOCT videos is not possible, we ease the requirement of pixel-level supervision by using a contrastive loss. Patchwise Contrastive Learning [30] enables unpaired super-resolution by preserving the content through maximization of the mutual information between corresponding LR and generated HR patches. To establish the effectiveness of the proposed framework, we provide extensive quantitative analysis showcasing our model’s superior performance against current state-of-the-art iOCT super-resolution techniques. Additionally, to further support our design choices, we conducted ablation studies, revealing that the aggregation of temporal information and the use of a multilayer patchwise contrastive loss play crucial roles in image quality enhancement and structure preservation, respectively.

Therefore, our contributions can be summarized as follows: First, we have achieved a new state-of-the-art performance in iOCT super-resolution, thereby augmenting the capabilities of this imaging modality. Second, we propose a novel methodology for training VSR models in an unpaired setting, enabling the application of these models in domains where paired datasets are limited, such as the medical imaging domain. Also, to facilitate super-resolution research in Optical Coherence Tomography and to provide the broader research community with a valuable resource for experimentation and further advancements, our source code will be available online at: https://github.com/RViMLab/BOE2023-iOCT-video-super-resolution.

2. Methods

In this section, we present the clinical data utilized in our study and we introduce our proposed unpaired video super-resolution framework. We outline its design and highlight its key features, providing implementation details for a comprehensive understanding of the methodology.

2.1 Datasets

We used an internal database which contains intra-operative videos and pre-operative OCT data, accompanied by vitreoretinal surgery videos (see Fig. 1). The dataset encompasses a diverse cohort of $61$ patients presenting various pathologies including: Macular hole, Epiretinal membrane, Optic disc pit maculopathy, Floaters, Choroideremia, Myopic foveoschisis, Vitreomacular traction and Synchysis scintillans. The data acquisition process adhered to the principles outlined in the Declaration of Helsinki (1983 Revision). The intra-operative OCT (iOCT) video sequences used in our research consist of 5 frames per sequence, with a resolution of 304$\times$448 pixels. Each iOCT frame has a variable width of 3–16 mm and a height of 2 mm. These sequences were acquired during the vitreoretinal surgeries using a RESCAN 700 mounted on the Zeiss OPMI LUMERA 700. The patients’ pre-operative OCT (preOCT) scans (resolution of 512$\times$1024$\times$128 voxels), acquired using a Cirrus 5000, are each sliced into 128 two-dimensional (2D) images. Each 2D image has a width of 6 mm and a height of 2 mm. The extracted 2D images subsequently serve as constituents of the high-resolution (HR) domain.

Fig. 1. Dataset overview: (a) Microscope view during vitreoretinal surgery accompanied with low-resolution (LR) intra-operative OCT video frames. (b) Pre-operatively acquired OCT 3D volume and 2D high-resolution (HR) preOCT image. Unpaired video super-resolution is performed between iOCT videos and preOCT images.

The dataset, 9676 iOCT video sequences and 7808 preOCT images, was split randomly into: training set (43 patients, 70%), test set (9 patients, 15%) and validation set (9 patients, 15%). The data of each patient was used only in one set. The various pathologies present in the dataset are representatively distributed between both the training and test sets, thus maintaining a satisfactory level of pathological diversity across the partitions. Please note that the data can be provided to interested researchers via a data-sharing agreement, while there are parallel endeavours to make it public.
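As a practical note, the patient-level split described above can be reproduced with a short script. The following is a minimal sketch under our own assumptions (a hypothetical list of anonymized patient identifiers and a fixed random seed); it is illustrative rather than the exact partitioning code used in this work.

```python
import random

def split_by_patient(patient_ids, seed=0, train_frac=0.70, val_frac=0.15):
    """Randomly assign whole patients to train/validation/test partitions.

    Each patient's data ends up in exactly one partition, mirroring the
    70/15/15 split described above. `patient_ids` is a hypothetical list
    of anonymized patient identifiers.
    """
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    n_train = round(train_frac * len(ids))
    n_val = round(val_frac * len(ids))
    return {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

# Example: 61 patients -> roughly 43 / 9 / 9
splits = split_by_patient([f"patient_{i:02d}" for i in range(61)])
```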

2.2 Unpaired video super-resolution

The task addressed in this work is unpaired video super-resolution of iOCT. Given a LR iOCT video sequence $x \in X \subset \mathbb {R}^{T\times H\times W\times C}$, with $T$ being its temporal extent, $H$ and $W$ being height and width respectively, $C$ denoting its channels ($C=1$) and each frame denoted by $x_t$ (of shape $H\times W\times 1$), the goals of our video super-resolution approach are three-fold:

First, $\forall x_t$ we aim to generate a high resolution image $\widehat y_{t}$ with the appearance and image quality of the target domain $Y \subset \mathbb {R}^{H\times W\times 1}$. Second, temporal information from previous input frames $\{x_{t-N},\ldots, x_{t-1}\}$, within a certain time window of size $N<T$, must be efficiently incorporated to generate an image $\widehat y_{t}$ of enhanced quality. Third, $\widehat y_{t}$ must retain the structural information contained in the original LR iOCT input $x_t$. We now discuss how each of these objectives is addressed by the proposed approach.

2.3 Appearance mapping using adversarial training

To enforce that the appearance and image quality of the HR domain $Y$ is transferred to a LR input video frame $x_t$, we resort to adversarial training, given its established effectiveness for domain mapping problems [16]. Our model consists of a generator $G$, shown in Fig. 2, and a PatchGAN discriminator $D$ [17] for which the following standard adversarial objective is optimized:

$$L_{GAN}= \mathbb{E}_{y \sim Y} \log D(y) + \mathbb{E}_{x\sim X}\log(1 - D(G(x)))$$

Fig. 2. The architecture of the Bidirectional recurrent VSR network used as generator in adversarial training. The input LR iOCT sequence $x_{[t-N:t]}$ passes through: downsampling, optical flow, warping, reconstruction, aggregation and upsampling modules both in backward and forward direction generating HR $\widehat y_{[t-N:t]}$ output.

By minimizing this loss, $G$ learns to generate video frames $\widehat y_{t}$ that are indistinguishable by $D$ from real images $y \in Y$. Note that, in the above equation, the generator receives sequences of frames $x \in X$ as input, while the discriminator operates on single frames. This choice is discussed next.
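For illustration, the adversarial objective above can be realized with a PatchGAN-style discriminator and a binary cross-entropy loss on its patch logits. The PyTorch sketch below is a simplified stand-in; the toy three-layer discriminator and its channel widths are our own assumptions, not the exact configuration of [17].

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Minimal PatchGAN-style discriminator: outputs a grid of real/fake logits."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # patch logits
        )

    def forward(self, x):
        return self.net(x)

bce = nn.BCEWithLogitsLoss()

def d_loss(D, real_hr, fake_hr):
    # Discriminator: real preOCT frames -> 1, generated frames -> 0.
    logits_real = D(real_hr)
    logits_fake = D(fake_hr.detach())
    return bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))

def g_adv_loss(D, fake_hr):
    # Generator tries to make D classify generated frames as real.
    logits_fake = D(fake_hr)
    return bce(logits_fake, torch.ones_like(logits_fake))
```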

2.4 Incorporating temporal information in the generator

To incorporate temporal information, we condition the generation of $\widehat y_{t}$ on a time window of the input video sequence $x$. More specifically, given a sequence of $N$ consecutive LR iOCT frames $x_{[t-N:t]}$, we obtain the corresponding super-resolved frame as $\widehat y_{t} = G(x_{[t-N:t]})$. In this work, $G$ (shown in Fig. 2) is a typical bidirectional recurrent network that adopts essential components from the foundational work of BasicVSR [29]. Recurrent VSR networks such as BasicVSR have showcased excellent performance in natural video super-resolution by spatially increasing the input’s pixel resolution (by a factor of $2$ or $4$); we therefore integrate their key bidirectional propagation, flow estimation and warping modules within our proposed framework to incorporate temporal information. Importantly, while [29] proposes these architectural designs for settings where pixel-level supervision is available, we explore their applicability and importance in the more challenging task (Sec. 2.2) where no such paired data are available and which additionally includes a domain mapping component.

First, the input sequence $x_{[t-N:t]}$ is downsampled and subsequently passed through the backward propagation branch. Backward propagation includes optical flow estimation, spatial warping and reconstruction modules. Particularly, as shown in Fig. 3 (left), during the backward propagation, optical flow is estimated between $x_{t+1}$ and $x_{t}$ and is utilized for spatial alignment by performing warping on the propagated features $h_{t+1}^{b}$ from the adjacent $t+1$ frame. The aligned features are then passed through multiple residual blocks (reconstruction module) for refinement as commonly used in super-resolution networks [15]. The equations that describe the functioning of the backward branch are the following:

$$OF_{t}^{b} = OF(x_{t},x_{t+1})$$
$$h_{t}^b = Rec(Warp(OF_{t}^{b},h_{t+1}^b),x_{t})$$
where $OF$, $Warp$, $Rec$ denote optical flow estimation, warping and reconstruction modules respectively. $b$ refers to backward propagation and $h$ pertains to the propagated features.
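In practice, the $Warp$ operation amounts to backward warping of the propagated features with the estimated flow, typically via bilinear grid sampling. The helper below is an illustrative sketch assuming the flow is given in pixel units with shape $B\times 2\times H\times W$; it is not necessarily identical to the warping used in our implementation.

```python
import torch
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Warp features `feat` (B, C, H, W) with optical flow `flow` (B, 2, H, W).

    The flow is assumed to give per-pixel (dx, dy) displacements in pixels;
    sampling coordinates are normalized to [-1, 1] for grid_sample.
    """
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    grid_x = xs.unsqueeze(0) + flow[:, 0]          # (B, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    grid_x = 2.0 * grid_x / max(w - 1, 1) - 1.0    # normalize to [-1, 1]
    grid_y = 2.0 * grid_y / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (B, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```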

Fig. 3. Bidirectional feature propagation: Backward Branch (Left), Forward branch (Right). $OF$, $Warp$, $Rec$ refer to optical flow estimation, spatial warping and reconstruction modules respectively.

Subsequently, the downsampled $x_{[t-N:t]}$ sequence is fed into the forward propagation branch (see Fig. 3 (right)), which contains components similar to those of the backward propagation. However, in the forward branch, the input sequence is utilized in reverse order. A similar set of equations applies in the forward case:

$$OF_{t}^{f} = OF(x_{t},x_{t-1})$$
$$h_{t}^f = Rec(Warp(OF_{t}^{f},h_{t-1}^f),x_{t})$$
where $f$ refers to the forward branch.

Finally, we perform aggregation and upsampling operations to the outputs obtained from both backward and forward branches to generate a HR output, denoted $\widehat y_{t}$, for every $x_t$:

$$\widehat y_{t} = Up(Aggr(h_{t}^f,h_{t}^b))$$
where $Aggr$ and $Up$ pertain to the Aggregation and Upsampling modules.
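Putting Eqs. (2)–(6) together, the generator's bidirectional propagation can be summarized in the following PyTorch-style sketch. It is structural only: `downsample`, `spynet`, `reconstruct_b`, `reconstruct_f`, `aggregate` and `upsample` are placeholders for the modules described above (their interfaces are our assumptions), and `flow_warp` refers to the helper sketched after Eq. (3).

```python
import torch

def generator_forward(x_seq, downsample, spynet, reconstruct_b,
                      reconstruct_f, aggregate, upsample):
    """Bidirectional recurrent super-resolution of an LR iOCT sequence.

    x_seq: (B, T, 1, H, W) low-resolution frames.
    Returns: (B, T, 1, H', W') super-resolved frames.
    """
    feats = [downsample(x_seq[:, t]) for t in range(x_seq.size(1))]
    T = len(feats)

    # Backward propagation: t+1 -> t (Eqs. (2)-(3)).
    h_b = [None] * T
    prev = torch.zeros_like(feats[-1])
    for t in range(T - 1, -1, -1):
        if t < T - 1:
            flow = spynet(feats[t], feats[t + 1])   # flow between adjacent (downsampled) frames
            prev = flow_warp(prev, flow)            # align propagated features
        prev = reconstruct_b(torch.cat([prev, feats[t]], dim=1))
        h_b[t] = prev

    # Forward propagation: t-1 -> t (Eqs. (4)-(5)).
    out = []
    prev = torch.zeros_like(feats[0])
    for t in range(T):
        if t > 0:
            flow = spynet(feats[t], feats[t - 1])
            prev = flow_warp(prev, flow)
        prev = reconstruct_f(torch.cat([prev, feats[t]], dim=1))
        # Eq. (6): aggregation and upsampling (upsample ends with tanh, see Sec. 2.6).
        out.append(upsample(aggregate(torch.cat([prev, h_b[t]], dim=1))))
    return torch.stack(out, dim=1)
```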

2.5 Preserving structural information

The adversarial loss alone does not guarantee that structural information in $x_t$, such as the exact location and shape of surgically targeted retinal layers is preserved during this mapping. Thus, we require an additional constraint that explicitly enforces spatial and structural consistency between $x_t$ and $\widehat y_{t}$.

To this end, we apply a multi-layer patch-wise contrastive loss [30] that enforces that the local and global spatial structure of the input frame $x_t$ is preserved in $\widehat y_{t}$ by maximizing the mutual information between corresponding input and generated patches.

To obtain such features from a common embedding space, both the input video $x$ and the generated images $\widehat y$ are passed through the generator to extract features from multiple layers indexed by $L = \{l_{0}, l_{1}, l_{2}, l_{3}, l_{4}\}$ (denoted by black lines in Fig. 4), comprising both early high-resolution layers that encode local information and deeper lower-resolution layers that produce more contextualized features.

Fig. 4. Multilayer Patchwise Contrastive Loss using part of the video super-resolution generator as encoder. Both $x$ and $\widehat y_{t}$ are encoded in features. ${l_{0}, l_{1},l_{2},l_{3},l_{4}}$ are the multiple layers that we extracted features from. The features of a generated output patch should be closer to the features of the corresponding patch in the input image (associating) compared to random patches from different locations (dissociating).

Then, for each layer $l$, we randomly sample a subset of the features extracted by the generator from $x_{[t-N:t]}$ denoted as $\{u_i\}^l$. We also select their spatially corresponding counterparts $\{v_i\}^l$ from features extracted by the generator from $\widehat y_{t}$ and treat them as the positive samples of $\{u_i\}^l$. Finally, as negatives we sample a subset of the features extracted by the generator from $x_{[t-N:t]}$ from different spatial locations than those of the two previous sets, denoted as $\{n_i\}^l$. Then the contrastive loss is defined as:

$$L_{Contr}={-} \sum_{l\in L} \sum_{i=1}^{P^{(l)}} \log \frac{\exp(u_i\cdot v_i/\tau )}{\exp(u_i\cdot v_i/\tau ) + \sum_{k=1}^{K^{(l)}} \exp(u_i\cdot n_k/\tau )}$$
where $P^{(l)}=|\{u_i\}^l| = |\{v_i\}^l|$, $K^{(l)} = |\{n_i\}^l|$ ($|\cdot |$ being cardinality) and $\tau = 0.07$.
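For a single layer $l$, Eq. (7) corresponds to an InfoNCE-style cross-entropy in which the positive pair is the spatially matching patch. A minimal PyTorch sketch is given below; in practice the sampled features are usually passed through a small projection head and the loss is summed over the selected layers, and averaging over patches here (instead of summing) is an implementation choice of this sketch.

```python
import torch
import torch.nn.functional as F

def patchwise_nce(u, v, n, tau=0.07):
    """Patchwise contrastive (InfoNCE) loss for one layer.

    u: (P, D) features sampled from the input patches x_[t-N:t]
    v: (P, D) spatially corresponding features from the generated frame (positives)
    n: (K, D) features from other spatial locations of the input (negatives)
    """
    u, v, n = (F.normalize(f, dim=1) for f in (u, v, n))
    pos = (u * v).sum(dim=1, keepdim=True) / tau            # (P, 1) positive similarities
    neg = u @ n.t() / tau                                    # (P, K) negative similarities
    logits = torch.cat([pos, neg], dim=1)                    # positive is class index 0
    labels = torch.zeros(u.size(0), dtype=torch.long, device=u.device)
    return F.cross_entropy(logits, labels)
```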

To further encourage structure consistency between input and output, we use a perceptual loss [15] $L_{Perc}$. Our complete objective then becomes:

$$L_{total} = \lambda _{1} L_{GAN} + \lambda _{2} L_{Contr} + \lambda _{3} L_{Perc}$$
where $\lambda _{1}$, $\lambda _{2}$ and $\lambda _{3}$ denote the weights assigned to $L_{GAN}$, $L_{Contr}$ and $L_{Perc}$ losses respectively.

2.6 Implementation details

To train the recurrent VSR model, we used a strided $1{\times }1$ convolution for downsampling and the pre-trained SPyNet [31] as the optical flow estimation module. For the reconstruction module, we used 10 residual blocks of 128 feature channels with instance normalization [32] and ReLU non-linearities. For aggregation, we used feature concatenation, and for upsampling, pixel-shuffle [33]; a hyperbolic tangent (tanh) is used as the last layer of the generator to produce outputs in $[-1,1]$.
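For concreteness, a residual block with instance normalization and a pixel-shuffle upsampling stage of the kind described above could be written as follows. The channel counts and the $\times 2$ upsampling factor in this sketch are illustrative assumptions rather than the exact configuration.

```python
import torch.nn as nn

class ResBlockIN(nn.Module):
    """Residual block with instance normalization and ReLU, as in the reconstruction module."""
    def __init__(self, ch=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class PixelShuffleUp(nn.Module):
    """x2 upsampling via sub-pixel convolution [33], followed by a tanh output layer."""
    def __init__(self, ch=128, out_ch=1):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.up(x)
```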

The learning rate for all the modules is set to $10^{-4}$, adopting a linear decay policy after 50 epochs. Our model is trained using the Adam optimizer and a batch size of 8 for a total of 200 epochs. We used an NVIDIA Tesla V100 SXM3 32 GB GPU for our experiments.

In order to calculate the multi-layer, patch-based contrastive loss, we extract features from 5 layers: the grayscale pixels, the downsampling convolution, the first convolution, the second and the fourth residual block of the reconstruction module. This approach aligns with the application of the contrastive loss, as outlined in the foundational work [30], as well as the utilization of the VGG loss, as demonstrated in [15]. Both methods employ layers ranging from pixel-level up to a deep convolutional layer. We base our approach on the same rationale.

Furthermore, we choose $\lambda _{1}=1$, $\lambda _{2}=20$ and $\lambda _{3}=1$. The coefficients were experimentally determined, aligning with the methodology outlined in the original paper [30], wherein $\lambda _{1}=1$ and $\lambda _{2}=10$ were selected. Through our own exploration, we identified that an increased $\lambda _{2}$, specifically $\lambda _{2}=20$, better preserves structural information between the input and the generated output.
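The training configuration above (Adam, learning rate $10^{-4}$ with linear decay after epoch 50, 200 epochs, and loss weights $\lambda_1=1$, $\lambda_2=20$, $\lambda_3=1$) can be expressed roughly as in the sketch below; the two convolutional placeholders merely stand in for the generator and discriminator defined earlier.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the generator and PatchGAN discriminator defined above.
generator = nn.Conv2d(1, 1, 3, padding=1)
discriminator = nn.Conv2d(1, 1, 4, stride=2, padding=1)

LAMBDA_GAN, LAMBDA_CONTR, LAMBDA_PERC = 1.0, 20.0, 1.0   # lambda_1, lambda_2, lambda_3
N_EPOCHS, DECAY_START = 200, 50

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def linear_decay(epoch):
    # Constant learning rate for the first 50 epochs, then linear decay towards 0 at epoch 200.
    if epoch < DECAY_START:
        return 1.0
    return max(0.0, 1.0 - (epoch - DECAY_START) / float(N_EPOCHS - DECAY_START))

sched_g = torch.optim.lr_scheduler.LambdaLR(opt_g, lr_lambda=linear_decay)
sched_d = torch.optim.lr_scheduler.LambdaLR(opt_d, lr_lambda=linear_decay)

def total_loss(l_gan, l_contr, l_perc):
    # Eq. (8): weighted sum of the adversarial, contrastive and perceptual terms.
    return LAMBDA_GAN * l_gan + LAMBDA_CONTR * l_contr + LAMBDA_PERC * l_perc
```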

3. Experiments

In this section, we report the results derived from the quantitative analysis performed to assess the efficacy of our approach. We employed nine different no-reference image quality metrics for evaluation purposes. Additionally, we conducted ablation studies to highlight the key contributions of our methodology.

3.1 Quantitative analysis

We conducted an evaluation to assess the image quality improvement between real iOCT video frames and those generated by our iOCT video super-resolution methodology. We utilized a test set comprising a total of 1738 iOCT video sequences, extracted from iOCT vitreoretinal surgery videos of 9 patients, none of whom were included in the training set. Since aligned ground truth HR video sequences are not available, we used nine different no-reference IQA metrics to extensively assess the image quality. These metrics include Fréchet Inception Distance (FID) [34], Kernel Inception Distance (KID) [35], the perceptual loss function ${\ell }_{feat}$ [25], Global Contrast Factor (GCF) [36], Fast Noise Estimation (FNE) [37], Precision and Recall (P&R) [38] and Density and Coverage (D&C) [39]. To gauge the impact of our SR method, we calculated $|\Delta$GCF$|$, which quantifies the absolute difference in the mean GCF values between SR images and preOCT images, as well as $|\Delta$FNE$|$, which measures the absolute difference in the mean FNE values between SR and preOCT images. These evaluation metrics enable a comprehensive assessment of the image quality enhancement achieved by our methodology in the absence of available ground truth HR video sequences.
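Among these metrics, FNE is perhaps the least familiar: it estimates the noise standard deviation of an image from a single Laplacian-difference convolution [37]. The sketch below illustrates this estimator and the $|\Delta$FNE$|$ statistic; preprocessing details (e.g. intensity ranges) are assumptions and may differ from our exact evaluation pipeline.

```python
import numpy as np
from scipy.signal import convolve2d

# Immerkaer's noise-variance kernel [37].
_KERNEL = np.array([[1, -2, 1],
                    [-2, 4, -2],
                    [1, -2, 1]], dtype=np.float64)

def fast_noise_estimate(img):
    """Estimate the noise standard deviation of a 2D grayscale image (Immerkaer, 1996)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    conv = convolve2d(img, _KERNEL, mode="valid")
    return np.sqrt(np.pi / 2.0) * np.abs(conv).sum() / (6.0 * (w - 2) * (h - 2))

def delta_fne(sr_images, preoct_images):
    """|ΔFNE|: absolute difference of mean noise estimates between SR and preOCT sets."""
    mean_sr = np.mean([fast_noise_estimate(im) for im in sr_images])
    mean_pre = np.mean([fast_noise_estimate(im) for im in preoct_images])
    return abs(mean_sr - mean_pre)
```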

Table 1 summarizes the results of our quantitative analysis. Our unpaired video super-resolution approach demonstrates superior performance compared to all the other iOCT super-resolution methods, ranking first in six out of nine metrics. We conducted comparisons with the state-of-the-art [25–27] and also with the single-image super-resolution model using contrastive learning (SRCUT), as proposed in the original paper [30]. It is important to note that CycleGAN [16], the state-of-the-art two-sided image-to-image translation method, failed to generate images of satisfactory quality. Therefore, we excluded it from the comparisons.

Table 1. Quantitative analysis on the generated HR OCT images comparing different single-image SR approaches with our VSR approach. Arrows show whether higher/lower is better.

Our approach showcases remarkable performance in perceptual metrics such as FID, KID and ${\ell }_{feat}$. Lower FID values indicate lower distance and, consequently, higher similarity between the distributions of the generated SR and real preOCT images in a deep feature domain, encompassing both low-level and high-level features. Therefore, our methodology generates SR iOCT images of the highest quality as they closely resemble the images of the preOCT domain, which contains high levels of spatial detail and diagnostic utility (see also Fig. 5, Fig. 6 and Fig. 7). Furthermore, the KID values, a metric similar to but with certain advantages over FID, along with the ${\ell }_{feat}$ values, indicate the highest level of quality enhancement achieved by our method in terms of perceptual quality. As the generated images highly resemble preOCT images, they are characterized by finer, high-frequency details, avoiding noticeable blurriness and discontinuities in anatomical structures. This is attributed to the establishment of a stable adversarial training, which has shown remarkable effectiveness in domain mapping problems. Moreover, it is noteworthy that the omission of L1 and L2 loss functions from training likely contributed to reducing blurriness in the results, as these losses are recognized for their potential to introduce blurriness into the generated output.
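FID and KID compare the feature statistics of the generated and preOCT image sets in an Inception embedding space. One possible way to compute them is via the torchmetrics library, as sketched below; the conversion to 3-channel uint8 tensors and the KID subset size are assumptions of this sketch, not necessarily the settings used in our evaluation.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

def to_uint8_rgb(gray_batch):
    # (B, 1, H, W) grayscale tensors in [0, 1] -> (B, 3, H, W) uint8, as the metrics expect.
    return (gray_batch.clamp(0, 1) * 255).to(torch.uint8).repeat(1, 3, 1, 1)

def fid_kid(preoct_loader, sr_loader):
    """Compute FID and KID between HR preOCT images (real) and generated SR frames (fake)."""
    fid = FrechetInceptionDistance(feature=2048)
    kid = KernelInceptionDistance(subset_size=100)
    for batch in preoct_loader:                 # batches of HR preOCT images
        imgs = to_uint8_rgb(batch)
        fid.update(imgs, real=True)
        kid.update(imgs, real=True)
    for batch in sr_loader:                     # batches of generated SR iOCT frames
        imgs = to_uint8_rgb(batch)
        fid.update(imgs, real=False)
        kid.update(imgs, real=False)
    kid_mean, kid_std = kid.compute()
    return fid.compute().item(), (kid_mean.item(), kid_std.item())
```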

Fig. 5. Visual comparison between different SR methods. From top to bottom: LR iOCT images, SR using our UVSR approach, SRCUT and SR using [26]. See also Visualization 1, Visualization 2, Visualization 3, Visualization 4, Visualization 5, Visualization 6, Visualization 7 and Visualization 8 for video comparisons.

Fig. 6. Visual comparison of different SR methods. From top to bottom: LR iOCT images, SR using our UVSR approach, SRCUT and SR using [26].

Fig. 7. Visual results of our proposed method in challenging scenarios encountered during vitreoretinal surgeries. (a) Macular hole: Our method effectively preserves the original structural elements and maintains the interaction between the retina and surgical tools as seen in the input. (b) ERM Peeling: The output distinctly illustrates the presence of epiretinal membrane (ERM) in unpeeled retinal areas, while accurately depicting the absence of ERM in the peeled regions. Remarkably, the model achieves an acceptable level of generalization, even when faced with inputs that lie outside the usual realm of its training and testing data.

Moreover, recognising that perceptual metrics may not be able to capture and assess low-level characteristics, we provide an analysis of the $|\Delta GCF|$ and $|\Delta FNE|$ values, where our method achieves the lowest (best) and the second lowest values, respectively. This divergence appears reasonable as noise and contrast values do not necessarily depend on each other or have to follow the same trend. Notably, the worse $|\Delta FNE|$ value of our method could be attributed to FNE’s inclination to identify thin lines as noise, as stated in [37]. Nevertheless, our method demonstrates improved $|\Delta GCF|$ and $|\Delta FNE|$ values compared to previous methods, ensuring that the contrast extracted from various resolution levels (GCF) and the noise levels (FNE) in SR images closely resemble the corresponding levels in the HR preOCT images, which represent the highest quality images in our dataset.

Additionally, to address the limitations of using single-number perceptual metrics, such as FID, in separating fidelity and diversity, both significant aspects of generative models’ quality, we utilize precision and recall (P&R) metrics. Our method achieves the second highest (best) values in both metrics indicating that our generated SR iOCT images highly resemble the real preOCT images (P) while covering their full variability (R).

Similarly, we incorporate Density and Coverage (D&C) metrics, an improved version of P&R, in our evaluation. Our method attains the highest D and C values among other SR models, further supporting the capability of our network to generate images with high fidelity and diversity concerning the HR domain.
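Both metric pairs operate on deep embeddings of the real and generated image sets. One convenient option is the prdc package released alongside [39], as in the sketch below; the choice of feature extractor and of `nearest_k`, as well as the random placeholder embeddings, are illustrative assumptions.

```python
import numpy as np
from prdc import compute_prdc  # package released with [39]

def fidelity_diversity(real_features: np.ndarray, fake_features: np.ndarray, nearest_k: int = 5):
    """Precision/Recall and Density/Coverage between preOCT and generated SR embeddings.

    real_features / fake_features: (N, D) deep embeddings (e.g. Inception features)
    of the HR preOCT images and of the generated SR iOCT frames, respectively.
    """
    return compute_prdc(real_features=real_features,
                        fake_features=fake_features,
                        nearest_k=nearest_k)

# Example with random placeholder embeddings (stand-ins for real feature matrices):
metrics = fidelity_diversity(np.random.rand(1000, 2048), np.random.rand(1000, 2048))
print(metrics)  # {'precision': ..., 'recall': ..., 'density': ..., 'coverage': ...}
```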

3.2 Human evaluation study

To further validate our video super-resolution pipeline, we performed a human evaluation study. Our survey included 20 pairs of LR images and images generated through our methodology, randomly selected from the test set. We asked 6 retinal doctors/surgeons to evaluate these image pairs by assigning a score between 1 (strongly disagree) and 5 (strongly agree) on the following questions:

  • Q1: Can you notice an improvement in the delineation of RPE/Bruch's vs. IS/OS junction in the generated image? (A1: 4.58$\pm$0.29)
  • Q2: Do you agree that the generated image does NOT contain artificially hallucinated structures? (A2: 4.52$\pm$0.05)
  • Q3: Can you notice an improvement in the delineation of the ILM vs. RNFL in the generated image? (A3: 4.38$\pm$0.22)

The responses, denoted as A1, A2, A3, each accompanied by their respective mean values and standard deviations, demonstrate that the clinicians recognized the improved delineation of RPE vs IS/OS junction (Q1), the absence of artificially hallucinated structures (Q2) and the enhanced delineation of ILM vs RNFL (Q3). Compared to the qualitative results reported in [27], our analysis indicates a greater level of preference and agreement among clinicians regarding important characteristics of our generated images. Across the three questions, we note a substantial elevation in the mean values, exceeding 0.5, compared to the results in [27]. Visual results are shown in Fig. 5 and Fig. 6, complementing the findings of our survey.

3.3 Importance of different loss terms for iOCT VSR

To study the importance of each of the $3$ terms of the overall loss function (Eq. (8)) that was used for training the proposed unsupervised video super-resolution model, we conducted a series of ablation experiments, each evaluating the effect of removing a loss term. The results are presented in Table 2 (Fig. 8). We have omitted the ${\ell }_{feat}$ and P&R values due to spatial constraints. Instead, we have included FID and D&C values as they represent similar aspects of the evaluation.

Fig. 8. Visual comparison between different training strategies. From left to right: LR iOCT images, ours (using both $L_{Contr}$ and $L_{Perc}$), ours w/o $L_{Perc}$ and ours w/o $L_{Contr}$ ($\lambda_3=5$).

Table 2. Quantitative analysis on the generated HR OCT images using different training strategies.

Removing the perceptual loss (w/o $L_{Perc}$) leads to a moderate drop on all metrics. Removing the contrastive term while keeping the perceptual term’s weight at its default value (w/o $L_{Contr}$, $\lambda _3=1$) results in significantly worse super-resolution results, as reported in Table 2.

Furthermore, removing the contrastive loss term and adjusting the weights for the perceptual loss (w/o $L_{Contr}$, $\lambda _3=5$ and w/o $L_{Contr}$, $\lambda _3=10$) improves performance but still results in slightly worse perceptual quality metrics (as measured by FID and KID). Importantly, in these cases the obtained $|\Delta$GCF$|$ and $|\Delta$FNE$|$ are higher, indicating that the contrast and noise levels of the preOCT domain are better captured by the model when using our complete loss function ("ours"). This observation highlights the importance of the contrastive loss in maintaining consistent perceptual quality as well as contrast and noise levels.

Finally, removing the adversarial loss term (w/o $L_{GAN}$) significantly impairs performance with respect to all metrics, hinting that it is crucial for generating images that capture both the high- and low-level aspects of the HR preOCT domain.

3.4 Importance of aggregating temporal information

In this section, we highlight the importance of using multiple images with feature propagation between neighbouring frames for iOCT super-resolution. We initially perform inference of our model while setting the output of the feature warping module (depicted as the output of the blue box denoted 'Warp' in Fig. 3) equal to zero. The rationale behind this ablation is to assess whether the reconstruction module can generate high quality images without utilizing warped features from preceding frames. As reported in Table 3, in the absence of feature propagation from previous frames, the model 'w/o feat_prop' achieves worse results on all metrics.
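In implementation terms, this 'w/o feat_prop' ablation only requires replacing the warped features with zeros before they enter the reconstruction module at inference time. A hypothetical helper illustrating this, meant to be dropped into the generator sketch of Sec. 2.4 right after the `flow_warp` call, is shown below.

```python
import torch

def maybe_zero_warped(warped_feats, ablate_feature_propagation: bool):
    """Ablation helper: replace warped features with zeros to disable feature propagation.

    Used at inference time in place of the Warp output (Fig. 3) to test whether the
    reconstruction module alone can produce high-quality frames.
    """
    if ablate_feature_propagation:
        return torch.zeros_like(warped_feats)
    return warped_feats
```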

Table 3. Ablation study on the importance of aggregating temporal information.

Furthermore, to explore the effect of aggregating temporal information on the ultimate super-resolution outcome, we vary the number of input frames. As shown in Table 3 and the $2^{nd}$ row in Fig. 9, our model exhibits a gradual enhancement in performance with the inclusion of additional frames in the input sequence, particularly evident in perceptual metrics such as FID and KID as well as the D&C metrics. However, concerning $|\Delta$GCF$|$ and $|\Delta$FNE$|$, we observe that these values do not exhibit a monotonic improvement with the inclusion of more frames. This observation can be attributed to the fact that our recurrent network was trained on sequences of five frames and has learned to capture and utilize the temporal context within that specific sequence length. As a result, during inference, when we reduce the sequence length, we lose some of the contextual information, potentially leading to a negative impact on the network’s ability to make accurate predictions.

Fig. 9. Visual comparison between different training strategies across varying input sequence lengths. From left to right: different timestamps of the input sequence. From top to bottom: LR iOCT images, ours (using both $L_{Contr}$ and $L_{Perc}$), ours w/o $L_{Contr}$ ($\lambda_3=5$) and ours w/o $L_{Perc}$.

We also investigated whether our trained model can effectively handle longer image sequences (ranging from 6 to 9 frames) compared to the different training strategies, as shown in Fig. 9. The FID and $|\Delta$GCF$|$ values for each method are presented in Figs. 10(a) and 10(b), respectively. In Fig. 10(a), we observe that our method (green curve), utilizing the contrastive loss, consistently generates images of relatively similar quality, as indicated by the relatively constant FID values across different numbers of input frames. On the contrary, when trained without the contrastive loss (magenta curve), we achieved acceptable results (low FID) solely when using a sequence length close to the training length (5 frames), while higher or lower sequence lengths resulted in worse performance (high FID). Similarly, removing the perceptual loss (orange curve) leads to higher FID values, degrading the perceptual quality of the generated images across time.

Fig. 10. Ablation on temporal information: Graphs illustrating the performance of our methodology with contrastive loss (green curve), without contrastive loss (w/o $L_{Contr}$ ($\lambda_3=5$)) (magenta curve) and without perceptual loss (w/o $L_{Perc}$) (orange curve) across varying input sequence lengths in terms of image quality metrics.

Similarly, regarding $|\Delta$GCF$|$, our approach (green) maintains the contrast values at consistent levels across different sequence lengths. An analogous trend is observed in the orange curve, which was trained with the contrastive loss but without the perceptual loss. Conversely, our model trained without the contrastive loss exhibited high $|\Delta$GCF$|$ values for sequence lengths below 4 and above 7.

These findings further reinforce the efficacy of our methodology, particularly in handling longer input sequences. According to these graphs, our methodology consistently generates images of high quality, particularly when handling image sequences of varying lengths, while preserving image quality and contrast characteristics. The integration of the contrastive loss assists the generator in learning intermediate representations that encapsulate meaningful semantic or structural information, thereby facilitating the aggregation of propagated features and the reconstruction of high-quality images. Particularly for longer input sequences, the generator is able to effectively propagate and aggregate long-term information. On the contrary, our model trained without the contrastive loss exhibits signs of overfitting, as it may have become overly reliant on using the entire 5-image context for predictions and may face challenges when that context is reduced, leading to a decline in performance.

At this point, we would like to note that the selection of $5$ frames was made to reduce the training duration in comparison to lengthier input sequences, thus affording us the opportunity to undertake an expanded number of experiments to comprehensively assess our work’s efficacy. It is important to acknowledge that the adoption of extended image sequences for training purposes, to capture long-term information, remains a promising avenue for future investigation.

3.5 Comparison with denoising methods

In this section, we undertake a comparative analysis of our super-resolution method against conventional denoising filters in order to illustrate the denoising efficacy of our work, which forms part of the broader objective of image quality enhancement.

For evaluation, we selected two state-of-the-art speckle reduction methods for OCT images: Non-local means (NLM) [40] and Block-matching and 3D filtering (BM3D) [41]. These methods have been evaluated in previous studies [8,42–44] for their denoising capabilities. The NLM implementation provided by scikit-image [45] was employed, and we conducted tests with two variations: one utilizing the estimated noise standard deviation (NLM ($\tilde {\sigma }$)) and one without incorporating such estimation (NLM). Additionally, the BM3D approach was assessed using experimentally defined values of $\sigma =0.05$ and $\sigma =0.1$, as shown in the accompanying Table 4.
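For reference, these baselines can be approximately reproduced with the scikit-image NLM implementation and the publicly available bm3d Python package, as sketched below; apart from the reported $\sigma$ values, the parameter choices (e.g. default NLM patch sizes) are assumptions of this sketch.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
import bm3d  # pip install bm3d

def denoise_baselines(img):
    """Apply the NLM and BM3D baselines to a grayscale iOCT frame with values in [0, 1]."""
    img = np.asarray(img, dtype=np.float64)

    sigma_est = estimate_sigma(img)   # noise std estimate used for the NLM(sigma~) variant
    return {
        "NLM": denoise_nl_means(img, fast_mode=True),
        "NLM_sigma": denoise_nl_means(img, sigma=sigma_est, fast_mode=True),
        "BM3D_0.05": bm3d.bm3d(img, sigma_psd=0.05),
        "BM3D_0.1": bm3d.bm3d(img, sigma_psd=0.1),
    }
```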

Table 4. Quantitative analysis on the generated HR OCT images using conventional denoising filters.

Fig. 11. Visual comparison between our method and different conventional denoising approaches. From left to right: LR iOCT images, our method, denoising using BM3D ($\sigma =0.05$), BM3D ($\sigma =0.1$), NLM and NLM ($\tilde {\sigma }$).

Based on the data presented in Table 4 (see also Fig. 11), it is evident that denoised images obtained through conventional techniques do not yield images of comparable quality to those in the preOCT domain. Although these filters may indeed succeed in effectively reducing noise, they are unable to furnish valuable information regarding retinal layers and other crucial structures pertinent to vitreoretinal surgeries, a feat accomplished by our super-resolution method.

4. Conclusion

In this study, we proposed an unpaired video super-resolution approach for intra-operative Optical Coherence Tomography (iOCT) videos acquired during vitreoretinal surgeries. Through extensive quantitative analysis, ablation studies and a qualitative analysis by expert clinicians, we demonstrated that our approach can effectively enhance the iOCT image quality achieving a new state-of-the-art performance. Furthermore, we showed that video super-resolution models when trained with multilayer patchwise contrastive loss can effectively aggregate temporal information even in an unpaired scenario.

Potentially, our approach may be applicable in different intra-operative modalities, such as ultrasound and intra-operative MRI as well as research in 3D/4D OCT. Particularly, in the context of 3D/4D OCT, our work has the potential to surpass the outcomes reported in existing studies [43,46], as we utilize pre-operatively acquired OCT scans as HR domain to train our super-resolution network. These scans represent more than merely denoised OCT data that are used in [43,46]; rather, they have undergone comprehensive offline processing, thereby furnishing scans of high-fidelity information and diagnostic utility. Furthermore, the underlying architecture of our recurrent video super-resolution algorithm, which leverages temporal information from a sequence of images, bears the potential to increase the consistency between the generated outputs of adjacent 2D frames within the 3D OCT and thus contribute to an elevation in the overall consistency of the volumetric representation.

However, we acknowledge that our work has several limitations. First, our method exhibits slower runtime compared to existing works that perform single-image super-resolution. In our research, we chose to focus on the architectural innovation and training strategy to address a critical gap in the field of unpaired medical video super-resolution. To the best of our knowledge, this study is the first to address the application of video super-resolution between unpaired datasets in the medical imaging domain. Moreover, the absence of pre-operative OCT images that illustrate interactions with surgical instruments, a phenomenon quite common during surgeries and iOCT scans, could potentially impact the performance of our model. Additionally, the unavailability of pixel-wise ground truth data poses constraints on our ability to incorporate spatial and temporal reference-based metrics that could further support the evaluation of our work.

In conclusion, our work can be used as a strong foundation for future research and optimization efforts. Our future work will explore improvements of our method’s computational efficiency, incorporation of tool interactions within the pre-operative domain, extension of input sequences capturing long-term dependencies, utilization of additional spatial and temporal metrics and ultimately deployment of our iOCT video super-resolution approach into the clinic.

Funding

Wellcome/EPSRC Centre for Medical Engineering (WT 203148/Z/16/Z); King's internally funded Centre for Doctoral Training.

Acknowledgments

We would like to thank T. Soomro, A. Makuloluwa, T. Jackson and P. Keane for participating in our qualitative analysis.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. There are parallel endeavours to make them public.

References

1. H. Nazari, L. Zhang, D. Zhu, G. J. Chader, P. Falabella, F. Stefanini, T. Rowland, D. O. Clegg, A. H. Kashani, D. R. Hinton, and M. S. Humayun, “Stem cell based therapies for age-related macular degeneration: the promises and the challenges,” Prog. Retinal Eye Res. 48, 1–39 (2015). [CrossRef]  

2. L. da Cruz, K. Fynes, O. Georgiadis, et al., “Phase 1 clinical study of an embryonic stem cell–derived retinal pigment epithelium patch in age-related macular degeneration,” Nat. Biotechnol. 36(4), 328–337 (2018). [CrossRef]  

3. E. K. de Jong, M. J. Geerlings, and A. I. den Hollander, “Age-related macular degeneration,” Genetics and Genomics of Eye Disease (ScienceDirect, 2020), 155–180.

4. W. L. Wong, X. Su, X. Li, C. M. G. Cheung, R. Klein, C.-Y. Cheng, and T. Y. Wong, “Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis,” The Lancet Glob. Health 2(2), e106–e116 (2014). [CrossRef]  

5. P. Cornelissen, M. Ourak, G. Borghesan, D. Reynaerts, and E. Vander Poorten, “Towards real-time estimation of a spherical eye model based on a single fiber OCT,” in 2019 19th International Conference on Advanced Robotics (ICAR), (IEEE, 2019), pp. 666–672.

6. K. Xue, M. Groppe, A. Salvetti, and R. MacLaren, “Technique of retinal gene therapy: delivery of viral vector into the subretinal space,” Eye 31(9), 1308–1316 (2017). [CrossRef]  

7. C. Viehland, B. Keller, O. M. Carrasco-Zevallos, D. Nankivil, L. Shen, S. Mangalesh, D. T. Viet, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Enhanced volumetric visualization for real time 4D intraoperative ophthalmic swept-source OCT,” Biomed. Opt. Express 7(5), 1815 (2016). [CrossRef]  

8. A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24(7), 1901–1910 (2007). [CrossRef]  

9. R. Bernardes, C. Maduro, P. Serranho, A. Araújo, S. Barbeiro, and J. Cunha-Vaz, “Improved adaptive complex diffusion despeckling filter,” Opt. Express 18(23), 24048–24059 (2010). [CrossRef]  

10. B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005). [CrossRef]  

11. L. Fang, S. Li, D. Cunefare, and S. Farsiu, “Segmentation based sparse reconstruction of optical coherence tomography images,” IEEE Trans. Med. Imaging 36(2), 407–421 (2017). [CrossRef]  

12. A. Smitha, I. Febin, and P. Jidesh, “A retinex based non-local total generalized variation framework for OCT image restoration,” Biomed. Signal Process. Control. 71, 103234 (2022). [CrossRef]  

13. X. Zhang, Z. Li, N. Nan, and X. Wang, “Denoising algorithm of OCT images via sparse representation based on noise estimation and global dictionary,” Opt. Express 30(4), 5788–5802 (2022). [CrossRef]  

14. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Advances in Neural Information Processing Systems, 7 (2014).

15. J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European conference on computer vision, (Springer, 2016), pp. 694–711.

16. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, (2017), pp. 2223–2232.

17. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 1125–1134.

18. J. Xu, E. Gong, J. Pauly, and G. Zaharchuk, “200x low-dose PET reconstruction using deep learning,” arXiv:1712.04119 (2017). [CrossRef]  

19. Q. Yang, P. Yan, Y. Zhang, H. Yu, Y. Shi, X. Mou, M. K. Kalra, Y. Zhang, L. Sun, and G. Wang, “Low-dose ct image denoising using a generative adversarial network with wasserstein distance and perceptual loss,” IEEE Trans. Med. Imaging 37(6), 1348–1357 (2018). [CrossRef]  

20. X. Yuan, Y. Huang, L. An, et al., “Image enhancement of wide-field retinal optical coherence tomography angiography by super-resolution angiogram reconstruction generative adversarial network,” Biomed. Signal Process. Control. 78, 103957 (2022). [CrossRef]  

21. C. Zeng, S. Yuan, and Q. Chen, “Unpaired and self-supervised optical coherence tomography angiography super-resolution,” in Chinese Conference on Pattern Recognition and Computer Vision (PRCV), (Springer, 2022), pp. 117–126.

22. S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019). [CrossRef]  

23. K. J. Halupka, B. J. Antony, M. H. Lee, K. A. Lucy, R. S. Rai, H. Ishikawa, G. Wollstein, J. S. Schuman, and R. Garnavi, “Retinal optical coherence tomography image enhancement via deep learning,” Biomed. Opt. Express 9(12), 6205–6221 (2018). [CrossRef]  

24. Z. Yuan, D. Yang, W. Wang, J. Zhao, and Y. Liang, “Self super-resolution of optical coherence tomography images based on deep learning,” Opt. Express 31(17), 27566–27581 (2023). [CrossRef]  

25. C. Komninos, T. Pissas, B. Flores, E. Bloch, T. Vercauteren, S. Ourselin, L. D. Cruz, and C. Bergeles, “Intra-operative OCT (iOCT) image quality enhancement: A super-resolution approach using high quality iOCT 3d scans,” in International Workshop on Ophthalmic Medical Image Analysis, (Springer, 2021), pp. 21–31.

26. C. Komninos, T. Pissas, B. Flores, E. Bloch, T. Vercauteren, S. Ourselin, L. D. Cruz, and C. Bergeles, “Intra-operative OCT (iOCT) super resolution: A two-stage methodology leveraging high quality pre-operative OCT scans,” in Ophthalmic Medical Image Analysis: 9th International Workshop, OMIA 2022, Held in Conjunction with MICCAI 2022, Singapore, Singapore, September 22, 2022, Proceedings, (Springer, 2022), pp. 105–114.

27. C. Komninos, T. Pissas, L. Mekki, B. Flores, E. Bloch, T. Vercauteren, S. Ourselin, L. Da Cruz, and C. Bergeles, “Surgical biomicroscopy-guided intra-operative optical coherence tomography (iOCT) image super-resolution,” Int. J. Comput. Assist. Radiol. Surg. 17(5), 877–883 (2022). [CrossRef]  

28. X. Wang, K. C. Chan, K. Yu, C. Dong, and C. Change Loy, “Edvr: Video restoration with enhanced deformable convolutional networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, (2019), pp. 0–0.

29. K. C. Chan, X. Wang, K. Yu, C. Dong, and C. C. Loy, “Basicvsr: The search for essential components in video super-resolution and beyond,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, (2021), pp. 4947–4956.

30. T. Park, A. A. Efros, R. Zhang, and J.-Y. Zhu, “Contrastive learning for unpaired image-to-image translation,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX 16, (Springer, 2020), pp. 319–345.

31. A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 4161–4170.

32. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv:1607.08022 (2016). [CrossRef]  

33. W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 1874–1883.

34. M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “Gans trained by a two time-scale update rule converge to a local Nash equilibrium,” Advances in Neural Information Processing Systems, 30 (2017).

35. M. Bińkowski, D. J. Sutherland, M. Arbel, and A. Gretton, “Demystifying MMD GANs,” arXiv:1801.01401 (2018). [CrossRef]  

36. K. Matkovic, L. Neumann, A. Neumann, T. Psik, and W. Purgathofer, “Global contrast factor-a new approach to image contrast,” Comput. Aesthet. 2005, 159–167 (2005). [CrossRef]  

37. J. Immerkaer, “Fast noise variance estimation,” Comput. Vision Image Understanding 64(2), 300–302 (1996). [CrossRef]  

38. M. S. Sajjadi, O. Bachem, M. Lucic, O. Bousquet, and S. Gelly, “Assessing generative models via precision and recall,” Advances in Neural Information Processing Systems, 31 (2018).

39. M. F. Naeem, S. J. Oh, Y. Uh, Y. Choi, and J. Yoo, “Reliable fidelity and diversity metrics for generative models,” in International Conference on Machine Learning, (PMLR, 2020), pp. 7176–7185.

40. A. Buades, B. Coll, and J.-M. Morel, “Non-local means denoising,” Image Process. On Line 1, 208–212 (2011). [CrossRef]  

41. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Process. 16(8), 2080–2095 (2007). [CrossRef]  

42. H. Yu, J. Gao, and A. Li, “Probability-based non-local means filter for speckle noise suppression in optical coherence tomography images,” Opt. Lett. 41(5), 994–997 (2016). [CrossRef]  

43. J. Nienhaus, P. Matten, A. Britten, J. Scherer, E. Höck, A. Freytag, W. Drexler, R. A. Leitgeb, T. Schlegl, and T. Schmoll, “Live 4D-OCT denoising with self-supervised deep learning,” Sci. Rep. 13(1), 5760 (2023). [CrossRef]  

44. Q. Zhou, M. Wen, B. Yu, C. Lou, M. Ding, and X. Zhang, “Self-supervised transformer based non-local means despeckling of optical coherence tomography images,” Biomed. Signal Process. Control. 80, 104348 (2023). [CrossRef]  

45. S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014). [CrossRef]  

46. J. Weiss, M. Sommersperger, A. Nasseri, A. Eslami, U. Eck, and N. Navab, “Processing-aware real-time rendering for optimized tissue visualization in intraoperative 4D OCT,” in Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V 23, (Springer, 2020), pp. 267–276.

Supplementary Material (8)

Visualization 1       Patient 1: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on the [22] cited method on the right.
Visualization 2       Patient 1: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on [21] cited method on the right.
Visualization 3       Patient 1: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on SRCUT method (explained in the paper) on the right.
Visualization 4       Patient 1: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on OUR method on the right.
Visualization 5       Patient 2: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on [22] cited method on the right.
Visualization 6       Patient 2: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on [21] cited method on the right.
Visualization 7       Patient 2: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on SRCUT method (described in the main paper) on the right.
Visualization 8       Patient 2: The video visualizes the real iOCT video during vitreoretinal surgeries on the left and the super-resolved video based on OUR method on the right.
