Sm-Net OCT: a deep-learning-based speckle-modulating optical coherence tomography

Abstract

Speckle imposes obvious limitations on the resolving capability of optical coherence tomography (OCT), and speckle-modulating OCT can reduce speckle to an arbitrary degree. However, speckle-modulating OCT seriously reduces the imaging sensitivity and temporal resolution of the OCT system while reducing speckle. Here, we propose a deep-learning-based speckle-modulating OCT, termed Sm-Net OCT, which deeply integrates a conventional OCT setup with a generative adversarial network trained on a customized large speckle-modulating OCT dataset containing massive speckle patterns. The dataset was obtained by rebuilding the aforementioned conventional OCT setup into a speckle-modulating OCT and imaging with different scanning parameters. Experimental results demonstrate that the proposed Sm-Net OCT can effectively obtain high-quality OCT images free of electronic noise and speckle, and overcome the sensitivity and temporal-resolution penalties of conventional speckle-modulating OCT. The proposed Sm-Net OCT can significantly improve the adaptability and practicality of OCT imaging and expand its application fields.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is an emerging biomedical imaging technology that enables micron-scale, cross-sectional, and 3D imaging of biological tissues noninvasively [1]. Optical coherence tomography is widely used in clinical applications, including ophthalmology [2-7], cardiology [8], endoscopy [9-11], dermatology [12], and dentistry [13]. Optical coherence tomography, based on interferometry techniques, uses the coherent detection of backscattered light to image tissue morphology [14]. However, this coherence inevitably generates speckle noise [15-17] that significantly degrades spatial resolution and prevents the imaging technique from achieving its full resolving potential. Throughout the development of OCT, various hardware-based and software-based methods have been proposed to denoise OCT images and improve its resolving power.

Conventional hardware-based approaches modify the acquisition system to produce uncorrelated speckle patterns within or between B-scans, including angular [18,19], spatial [20,21], polarization, and frequency compounding [22]. These hardware-based methods can partly remove speckle noise, but they increase system complexity, and some also require repeated scans that degrade the temporal resolution of the system. More software-based approaches have been proposed. Straightforward ones include image averaging and digital filtering with low-pass [23] or median filters [24]. More sophisticated approaches include wavelet-based methods [25-27], block-matching and 3D collaborative filtering [28,29], dictionary learning [30-32], and so on. These software-based denoising methods have shown better performance; however, they still cannot remove speckle thoroughly or recover information lost to speckle, their post-processing is time-intensive, and they tend to decrease the spatial resolution of OCT images, over-smoothing or losing meaningful subtle features.

Recently, two lines of work have made significant progress in reducing speckle in OCT. One is speckle-modulating OCT [33]. By using a moving diffuser, speckle-modulating OCT can acquire an unlimited number of uncorrelated speckle patterns and effectively remove speckle noise without degrading the spatial resolution of the images, which helps it clarify and reveal structures that are otherwise obscured or undetectable [33,34]. The other is deep learning networks, such as generative adversarial networks (GANs), which have been used to denoise OCT images [35,36]. Deep learning has played a dominant role in natural image processing over the past few years and has already demonstrated great power on OCT denoising tasks [37,38]. However, both approaches have obvious shortcomings. Speckle-modulating OCT places a moving diffuser in the optical path and has to perform repeated B-scans, which seriously reduces the imaging sensitivity and temporal resolution of the OCT system. The performance of deep learning networks relies heavily on the training and ground-truth datasets, and existing works lack a suitable dataset. Averaged retinal OCT images have been used as the training dataset in most deep-learning-based OCT denoising work, but this kind of training dataset has two problems: first, eyeball motion makes such in-vivo data difficult to acquire; second, the speckle patterns in retinal OCT images are limited, making the dataset unsuitable for other OCT setups and samples, because speckle patterns also depend on scanning volume sizes and sample structures. Our previous work [39] first proposed using datasets from speckle-modulating OCT as the training dataset; however, its results still contained speckle noise, because of the limited quantity and quality of the speckle patterns obtained and the lack of deep integration between the OCT setup and the GAN network.

In this paper, we propose a deep-learning-based speckle-modulating OCT, termed Sm-Net OCT, that overcomes the shortcomings of conventional speckle-modulating OCT when reducing speckle. The proposed Sm-Net OCT deeply integrates a conventional OCT setup with a generative adversarial network trained on a customized large speckle-modulating OCT dataset. This dataset, containing massive speckle patterns, was obtained from different samples imaged with different parameters, using the aforementioned conventional OCT setup rebuilt into a speckle-modulating OCT; it is substantially larger and more diverse than the dataset of our previous work. Here we performed experiments on Scotch tape and pork meat to demonstrate the robust performance of the proposed Sm-Net OCT.

2. Methods

2.1 Speckle patterns and customized speckle-modulating OCT dataset

Optical coherence tomography is based on the interference between the backscattered light from the sample arm and the light from the reference arm, which generates speckle. Two main processes influence the spatial coherence of the backscattered light: (1) multiple backscattering of the beam inside and outside of the desired sample volume and (2) random delays of the forward-propagating and returning beam caused by multiple forward scattering [17]. Therefore, different numbers of scattering events, which depend on volume sizes and sample structure, produce different speckle patterns [40,41].

Previous work [33] on speckle-modulating OCT verified that speckle can be removed if enough uncorrelated speckle patterns are acquired. However, at each B-scan position, hardware-based speckle-modulating OCT has to scan repeatedly with a moving diffuser to obtain enough uncorrelated speckle patterns for a speckle-free OCT frame. Recently, deep learning networks have shown a powerful ability to extract image characteristics and generate new images, which suggests they can be used to extract the speckle-pattern characteristics of a conventional OCT setup. Here we propose Sm-Net OCT, a deep-learning-based speckle-modulating optical coherence tomography, built by deeply integrating a conventional OCT setup with a GAN. To extract the speckle-pattern characteristics of the conventional OCT setup, the GAN was trained with a customized large speckle-modulating OCT dataset containing massive speckle patterns, obtained by rebuilding the conventional OCT setup into a speckle-modulating OCT. In this way, the proposed Sm-Net OCT can produce a speckle-free OCT frame at each B-scan position without the repeated scanning that reduces sensitivity and temporal resolution.
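As a numerical aside, the 1/√N statistics underlying this averaging strategy are easy to reproduce. The following minimal NumPy sketch (illustrative only, not part of the Sm-Net OCT pipeline) simulates fully developed speckle and shows the speckle contrast of an N-frame average falling roughly as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_frame(shape=(256, 256)):
    """One fully developed speckle intensity pattern: the squared
    magnitude of a circular complex Gaussian field, normalized to
    unit mean intensity."""
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    intensity = np.abs(field) ** 2
    return intensity / intensity.mean()

for n in (1, 10, 100):
    # Average n statistically independent speckle realizations.
    avg = np.mean([speckle_frame() for _ in range(n)], axis=0)
    contrast = avg.std() / avg.mean()  # speckle contrast C = sigma / mean
    print(f"N = {n:3d}: contrast = {contrast:.3f} "
          f"(theory 1/sqrt(N) = {1 / np.sqrt(n):.3f})")
```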

In detail, the conventional OCT system was first rebuilt into a conventional speckle-modulating OCT, and thirty different types of samples were imaged with the rebuilt setup using multiple scan patterns. A broadband light source (cBLMD-T-850-HP, Superlum) with a center wavelength of $\lambda_c = 850\ {\textrm{nm}}$ and a full-width-at-half-maximum bandwidth of $\Delta \lambda = 165\ {\textrm{nm}}$ was used, together with a 2048-pixel spectrometer (Cobra-S 800, Wasatch Photonics). The other main components were a 2-axis galvanometer scanner (GVSM002-EC/M, Thorlabs), a 2 × 2 850-nm wideband fiber optic coupler (TW850R5A2, Thorlabs), two identical collimators (TC25APC-850, Thorlabs), and two polarization controllers (FPC030, Thorlabs). To rebuild the conventional OCT into a speckle-modulating OCT, a 4f system composed of two B-coated lenses (AC254-100-B, Thorlabs) with a focal length of f = 100 mm was added to the sample arm, and a B-coated diffuser (DG10-1500-B, Thorlabs) was moved in the conjugate image plane created by the 4f imaging system, as shown in Fig. 1. To obtain different speckle patterns, 4X, 10X, and 20X objectives (Olympus) were used when imaging the samples. To obtain massive speckle patterns of the OCT setup for the deep learning networks, we imaged 30 different samples, Scotch tape and fresh biological tissues from various meats, fruits, and vegetables, as listed in Table 1. For each sample, we imaged different parts using the rebuilt speckle-modulating OCT setup with different scan patterns.

Fig. 1. Schematic of the speckle-modulating OCT rebuilt from the conventional OCT. FC: fiber coupler; PC: polarization controller; GS: galvanometer scanner; L1-L3: lenses; C1, C2: collimators.

Table 1. Tissue samples imaged using the rebuilt speckle-modulating OCT setup

Multiple scan patterns were used to make the speckle-pattern dataset more general. For each sample, we used different objective lenses to acquire images at several different locations with different scanning ranges, exposure times (25.0 µs and 50.0 µs), and A-line numbers, and for each acquisition we repeatedly scanned the same position 100 times. The scan patterns used for each exposure time are listed in Table 2.

Table 2. Scan patterns used to obtain the speckle-modulating OCT dataset

2.2 Generative adversarial networks and loss functions

Here we used a GAN comprising two parts: the generator network G and the discriminator network D, as Fig. 2 shows. The generator produces the prediction, and the discriminator judges the generator's output. The goal of the generator is to fool the discriminator by producing speckle-free images that the discriminator cannot distinguish from the ground truth, while the discriminator becomes increasingly adept at identifying generated images. Backpropagation is applied to both networks to optimize their performance; D and G are trained alternately by fixing one and updating the other. Training terminates when the generator produces images that the discriminator cannot distinguish from the ground truth.

Fig. 2. GAN architecture. (a) Generator network G and (b) discriminator network D.

Figure 2(a) shows the detailed structure of the generator network, which takes speckled OCT images as input. The generator has 16 identical residual blocks, a number chosen to balance training time against GAN performance. As shown in Fig. 2(a), each residual block has two convolutional layers (Conv) with 3 × 3 kernels, each followed by a batch normalization layer (BN), and LeakyReLU serves as the residual blocks' activation function. Each residual block ends with an elementwise sum layer that adds the block's input to its output features; this summation preserves input feature information and minimizes information loss, as illustrated by the sketch below.
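Since the paper specifies the block topology but not the layer widths, the following TensorFlow/Keras sketch of the residual block and generator is only a rough illustration; the filter count (64) and the input/output convolutions are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """One residual block from Fig. 2(a): two 3x3 Conv + BN pairs with
    LeakyReLU activation, closed by an elementwise sum with the input."""
    skip = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.LeakyReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.Add()([skip, y])  # preserve input feature information

def build_generator(n_blocks=16, filters=64):
    """Generator G: 16 identical residual blocks between assumed
    input/output convolutions; maps a speckled B-scan to a
    speckle-free estimate of the same size."""
    inp = layers.Input(shape=(None, None, 1))
    x = layers.Conv2D(filters, 3, padding="same")(inp)
    x = layers.LeakyReLU()(x)
    for _ in range(n_blocks):
        x = residual_block(x, filters)
    out = layers.Conv2D(1, 3, padding="same")(x)
    return tf.keras.Model(inp, out, name="G")
```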

The generator loss is defined as a weighted sum of a content loss ${L_C}$ and an adversarial loss ${L_A}$, as Eq. (1) shows. The content loss compares feature differences between the generator's output images and the ground-truth images; here the pixel-wise mean squared error (MSE) loss was used as the content loss for Sm-Net, as Eq. (2) shows.

$${L_G} = {L_C} + {10^{ - 3}} \times {L_A}$$
$$L_C^{MSE} = \frac{1}{{W \times H}}\mathop \sum \nolimits_{i = 1}^W \mathop \sum \nolimits_{j = 1}^H {[{y_{i,j}} - G{({\hat{y}})_{i,j}}]^2}$$
where W and H indicate the image width and height, respectively, and $\hat{y}$ and $y$ represent the input image and ground-truth image, respectively, with subscripts $i,j$ indexing pixels. Because the MSE loss over-smooths some structures in the generated images, leading to feature loss, a VGG-19 network was used to extract feature maps from the ground-truth images and the generator's output images separately. The VGG content loss $L_C^{VGG}$ is defined as the Euclidean distance between these two feature maps; this loss is therefore also called the VGG loss, and its definition is shown in Eq. (3),
$$L_C^{VGG} = \frac{1}{{W \times H}}\mathop \sum \nolimits_{i = 1}^W \mathop \sum \nolimits_{j = 1}^H {\{ \varphi {(y)_{i,j}} - \varphi {[G({\hat{y}} )]_{i,j}}\} ^2}$$
where ϕ represents the VGG-19 feature-extraction operator acting on an input image.
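As a sketch of Eq. (3), the operator ϕ can be built from Keras' pretrained VGG-19; the choice of feature layer (block5_conv4), the grayscale-to-RGB tiling, and the omission of VGG input preprocessing are assumptions, since the paper does not specify them.

```python
import tensorflow as tf

# Pretrained VGG-19 truncated at an assumed feature layer; frozen during training.
_vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
_phi = tf.keras.Model(_vgg.input, _vgg.get_layer("block5_conv4").output)
_phi.trainable = False

def vgg_content_loss(y_true, y_pred):
    """Eq. (3): mean squared distance between VGG-19 feature maps of the
    ground truth and the generator output.  Grayscale OCT images are
    tiled to three channels to match the VGG input."""
    feat_true = _phi(tf.image.grayscale_to_rgb(y_true))
    feat_pred = _phi(tf.image.grayscale_to_rgb(y_pred))
    return tf.reduce_mean(tf.square(feat_true - feat_pred))
```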

The content loss measures the difference between the generator output and the ground truth, driving the quality of the generator's output toward the ground-truth images. The adversarial loss ${L_A}$, complementary to the content loss, helps the GAN generate images that preserve features of the input images; it is expressed as Eq. (4).

$${L_A} = - \mathop \sum \nolimits_{n = 1}^N \log D[{G({\hat{y}})}]$$
where N is the total number of training images.
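A minimal sketch of Eqs. (1) and (4) follows, with the batch mean standing in for the sum over all N training images and a small constant guarding the logarithm (both assumed implementation details):

```python
import tensorflow as tf

EPS = 1e-8  # numerical guard inside the logarithm (assumed detail)

def adversarial_loss(d_fake):
    """Eq. (4): L_A = -sum log D(G(y_hat)), averaged over the batch."""
    return -tf.reduce_mean(tf.math.log(d_fake + EPS))

def generator_loss(content_loss, d_fake):
    """Eq. (1): L_G = L_C + 1e-3 * L_A."""
    return content_loss + 1e-3 * adversarial_loss(d_fake)
```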

The discriminator network, whose structure is shown in Fig. 2(b), estimates the quality of the generator's output images. Its central part consists of seven convolutional blocks with identical structures. Each block includes three layers: a convolutional layer with 3 × 3 kernels, a batch normalization layer, and a LeakyReLU layer. After these convolutional blocks, two dense layers followed by a sigmoid activation function output the classification probability. We tested different numbers of convolutional blocks to balance total training time against discriminator performance, and seven blocks gave the best trade-off.
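The following Keras sketch mirrors this description; the filter counts, strides, input size, and dense-layer width are assumptions, as the paper gives only the block structure.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(input_shape=(256, 256, 1)):
    """Discriminator D from Fig. 2(b): seven identical Conv(3x3) + BN +
    LeakyReLU blocks, then two dense layers with a sigmoid output."""
    inp = layers.Input(shape=input_shape)
    x = inp
    filters = 64
    for i in range(7):
        stride = 2 if i % 2 == 1 else 1  # assumed downsampling pattern
        x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
        if i % 2 == 1:
            filters = min(filters * 2, 512)
    x = layers.Flatten()(x)
    x = layers.Dense(1024)(x)            # first dense layer (width assumed)
    x = layers.LeakyReLU()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # classification probability
    return tf.keras.Model(inp, out, name="D")
```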

The discriminator loss, used to update the discriminator's weights during training, is expressed as Eq. (5). The generator's output images and the ground-truth images are fed to the discriminator independently: if the input is a ground-truth image, the discriminator output is labeled real; if the input is a generator output, it is labeled fake. The discriminator loss is calculated from these real and fake outputs.

$${L_D} = \mathop \sum \nolimits_{n = 1}^N \{ \log [D(y)] - \log \{ 1 - D[{G({\hat{y}})}]\}\}$$
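A corresponding sketch of the discriminator update term, written in the standard minimizable GAN form (the exact sign convention relative to Eq. (5) is an assumption):

```python
import tensorflow as tf

EPS = 1e-8  # numerical guard inside the logarithms (assumed detail)

def discriminator_loss(d_real, d_fake):
    """Discriminator loss built from the real output D(y) and the fake
    output D(G(y_hat)), averaged over the batch."""
    return -tf.reduce_mean(tf.math.log(d_real + EPS)
                           + tf.math.log(1.0 - d_fake + EPS))
```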

2.3 Training details

To obtain the software part of the Sm-Net OCT, we first pre-processed the customized speckle-modulating dataset to obtain the training data, consisting of input images with speckle and the corresponding ground-truth images, as Fig. 3 shows. The speckle images were obtained by splitting each 100-frame speckle-modulating image set, and each individual speckle image was paired with a ground-truth image obtained by averaging the 100 speckle-modulating frames, as shown in Fig. 3(a). The training data were then used to train the GAN, and the well-trained GAN model was integrated with the conventional OCT setup to finally obtain the Sm-Net OCT, as shown in Fig. 3(b).
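In sketch form, the pairing step of Fig. 3(a) amounts to averaging each 100-frame stack and replicating the result as the target for every frame (the array layout is assumed):

```python
import numpy as np

def make_training_pairs(stack):
    """stack: (100, H, W) array of repeated speckle-modulating B-scans
    acquired at one position.  Each individual speckled frame is paired
    with the 100-frame average, which serves as its ground truth."""
    ground_truth = stack.mean(axis=0)          # speckle-reduced target
    inputs = stack                             # individual speckled frames
    targets = np.repeat(ground_truth[None, ...], len(stack), axis=0)
    return inputs, targets
```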

Fig. 3. Training data preparation flow and Sm-Net OCT. (a) Processing flow of training data preparation; (b) Sm-Net OCT.

With the pre-processed customized speckle-modulating dataset, the GAN model was trained on an NVIDIA GeForce RTX 2080 Ti GPU in a TensorFlow-based environment. Our training process employed a two-step strategy, which improves training efficiency and avoids undesired local optima. The generator needs a longer training time than the discriminator because of its much deeper structure and far larger number of trainable parameters; pre-training it also provides a good starting point, leading to quick convergence and better performance in the second training step under this transfer-learning strategy. To quickly find these initial values, only the generator was trained during the first stage, with the content loss in Eq. (1) set to the MSE loss of Eq. (2). During this step, we used the Adam optimizer to train the generator. The initial learning rate was set to $10^{-5}$ and decreased by a factor of 10 every $10^{5}$ training iterations.
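A minimal sketch of this first stage, assuming a tf.keras training loop; the staircase schedule encodes the stated 10x learning-rate decay every $10^{5}$ iterations:

```python
import tensorflow as tf

# Stage 1: generator-only pretraining with the MSE content loss (Eq. (2)).
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-5,
    decay_steps=100_000,   # every 1e5 iterations ...
    decay_rate=0.1,        # ... multiply the learning rate by 0.1
    staircase=True)
g_optimizer = tf.keras.optimizers.Adam(schedule)

@tf.function
def pretrain_step(generator, speckled, ground_truth):
    with tf.GradientTape() as tape:
        pred = generator(speckled, training=True)
        loss = tf.reduce_mean(tf.square(ground_truth - pred))  # MSE, Eq. (2)
    grads = tape.gradient(loss, generator.trainable_variables)
    g_optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return loss
```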

The second training stage used a transfer-learning strategy to fine-tune the network: the generator was initialized with the output weights from the first stage, and both the generator and the discriminator were trained. During this stage, the VGG content loss of Eq. (3), instead of the MSE loss, was used as the content loss in Eq. (1) to help generate low-noise images without over-smoothing. The second stage used the same optimizer and initial learning rate as the first, with a decay rate of $10^{-6}$. With this two-step strategy, the Sm-Net model converged quickly, and the output images preserved the features of the input images with minimal information loss.
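A sketch of one second-stage iteration, reusing the generator, discriminator, and loss helpers sketched above (the separate optimizers and their settings are assumptions):

```python
import tensorflow as tf

# Second-stage optimizers; learning-rate settings are assumptions.
g_optimizer2 = tf.keras.optimizers.Adam(1e-5)
d_optimizer2 = tf.keras.optimizers.Adam(1e-5)

@tf.function
def adversarial_step(generator, discriminator, speckled, ground_truth):
    """One alternating update: D with G fixed, then G with D fixed,
    using the VGG content loss (Eq. (3)) plus 1e-3 times the
    adversarial loss, per Eq. (1)."""
    # Update the discriminator while the generator is fixed.
    with tf.GradientTape() as d_tape:
        fake = generator(speckled, training=False)
        d_loss = discriminator_loss(discriminator(ground_truth, training=True),
                                    discriminator(fake, training=True))
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    d_optimizer2.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # Update the generator while the discriminator is fixed.
    with tf.GradientTape() as g_tape:
        fake = generator(speckled, training=True)
        g_loss = (vgg_content_loss(ground_truth, fake)
                  + 1e-3 * adversarial_loss(discriminator(fake, training=False)))
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    g_optimizer2.apply_gradients(zip(g_grads, generator.trainable_variables))
    return g_loss, d_loss
```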

3. Results

3.1 Customized speckle-modulating OCT dataset

Some speckle-modulating OCT images and the corresponding ground-truth images are shown in Fig. 4. Each group of cross-sectional images was acquired 100 times at the same sample location while the optical diffuser was moving, so the speckle pattern changed randomly from frame to frame. The corresponding ground-truth image was obtained by averaging the 100 repeated speckle-modulating cross-sections, which significantly reduced the speckle noise.

We scanned 10 positions on each sample in Table 1 using the scan patterns shown in Table 2, collecting 3,600 speckle-modulating OCT sub-datasets for the customized speckle-modulating OCT dataset. Each sub-dataset comprised 100 repeated B-scan frames, all of which were used as either training or validation data; no images from the same sub-dataset were used for both. Augmentation operations, such as image flips and contrast adjustment, were applied to the remaining 600,000 images to increase the training data size. After augmentation, a total of 960,000 OCT images were used for training, and 120,000 OCT images were used as validation data.
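In sketch form, such paired augmentation might look as follows (the flip probability and contrast range are assumed values; eager-mode TensorFlow):

```python
import tensorflow as tf

def augment(speckled, ground_truth):
    """Paired augmentation: the flips and contrast adjustment named in
    the text, applied identically to the input and its ground truth so
    the pair stays consistent."""
    if tf.random.uniform(()) > 0.5:  # horizontal flip, assumed p = 0.5
        speckled = tf.image.flip_left_right(speckled)
        ground_truth = tf.image.flip_left_right(ground_truth)
    factor = tf.random.uniform((), 0.8, 1.2)  # assumed contrast range
    speckled = tf.image.adjust_contrast(speckled, factor)
    ground_truth = tf.image.adjust_contrast(ground_truth, factor)
    return speckled, ground_truth
```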

Fig. 4. Speckle OCT images and their corresponding ground-truth images of different samples obtained from speckle-modulating OCT using 10X objective, 500 A-lines per B-scan, and different scan ranges. (a)-(d) are the speckle OCT images and (e)-(h) are the corresponding ground truth images. (a) and (b) are speckle OCT images of Scotch tape using different scan ranges, respectively, and (e) and (f) are their corresponding ground-truth images, respectively. (c) and (d) are speckle OCT images of pork meat using different scan ranges, respectively, and (g) and (h) are their corresponding ground-truth images, respectively.

3.2 Results of Sm-Net OCT

To demonstrate the performance of Sm-Net OCT, we imaged samples using both conventional OCT and Sm-Net OCT. Figure 5 shows Scotch tape images obtained from the two systems, and Fig. 6 shows pork meat images. To demonstrate that the proposed Sm-Net OCT has conspicuous advantages over conventional speckle-modulating OCT, we also imaged samples using conventional OCT, conventional speckle-modulating OCT, and Sm-Net OCT, as shown in Fig. 7, and measured the system sensitivities and temporal resolutions, as shown in Table 3.

Fig. 5. Scotch tape images obtained from conventional OCT (a) and Sm-Net OCT (b). (c) and (d) are close-up views of the boxed areas in (a), and (e) and (f) are close-up views of the boxed areas in (b), respectively.

Fig. 6. Pork meat images obtained from conventional OCT (a) and Sm-Net OCT (b). (c) and (d) are close-up views of the boxed areas in (a), and (e) and (f) are close-up views of the boxed areas in (b), respectively.

Fig. 7. Scotch tape images obtained from (a) conventional OCT, (b) Sm-Net OCT, and (c) speckle-modulating OCT. Pork meat images obtained from (e) conventional OCT, (f) Sm-Net OCT, and (g) speckle-modulating OCT. The same data-processing parameters, such as the gray threshold, were used for the conventional OCT, speckle-modulating OCT, and Sm-Net OCT images. To obtain better image quality for speckle-modulating OCT, its setup was refocused when scanning.

Table 3. Sensitivity and imaging time (IT) of different systems at a 25.0 µs exposure time

Figures 5(a), 5(c), and 5(d) are images of Scotch tape obtained from conventional OCT, while Figs. 5(b), 5(e), and 5(f) are from Sm-Net OCT. Figures 6(a), 6(c), and 6(d) are images of pork meat obtained from conventional OCT, while Figs. 6(b), 6(e), and 6(f) are from Sm-Net OCT. Compared with Fig. 5(a) and Fig. 6(a), Fig. 5(b) and Fig. 6(b) show that Sm-Net OCT can effectively remove both speckle noise and electronic noise and reveal detailed structural information that was lost to speckle. In Fig. 5(b), the speckle within the tape has been removed efficiently, whereas it is easily observed in Fig. 5(a). Figure 6(b) shows small structures that are lost in Fig. 6(a) because of speckle.

Figure 7 contains images of Scotch tape and pork meat obtained from conventional OCT, Sm-Net OCT, and conventional speckle-modulating OCT. Both speckle-modulating OCT and Sm-Net OCT effectively remove noise, including speckle, and show more structural details than conventional OCT. However, because of its reduced system sensitivity, speckle-modulating OCT has an obvious imaging-depth disadvantage compared with Sm-Net OCT, as Figs. 7(b), 7(c), 7(f), and 7(g) show. In Figs. 7(c) and 7(g), conventional speckle-modulating OCT images much shallower than conventional OCT and Sm-Net OCT because the diffuser reduces the light power in the sample arm and thus the system sensitivity, as Table 3 shows.

Sm-Net OCT also has an obvious temporal-resolution advantage over conventional speckle-modulating OCT, as shown in Table 3. Here the imaging time comprised the camera exposure time and the galvanometer flyback time, and conventional speckle-modulating OCT imaging used 100 repeated 800-A-line B-scans. Table 3 shows that speckle-modulating OCT drastically increases the imaging time and reduces the temporal resolution because of its repeated scanning, whereas Sm-Net OCT adds only a small data-processing time that could be further reduced with parallel computing in the future. The physical scanning times of conventional OCT and Sm-Net OCT are therefore the same; Sm-Net OCT needs only some extra processing time, thanks to the prior deep integration of the conventional OCT setup and the trained GAN. With these advantages, we also performed 3D speckle-free imaging of pork meat using the proposed Sm-Net OCT, as shown in Supplementary Movie S1 (see Visualization 1), which contains all cross-sections of the 3D dataset. The 3D Sm-Net OCT dataset of pork meat demonstrates that 3D speckle-free imaging is far more convenient with Sm-Net OCT than with conventional speckle-modulating OCT.
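To make the scale of this difference concrete, a rough per-B-scan-position estimate from the stated parameters (25.0 µs exposure, 800 A-lines, 100 repeats; the flyback time $t_{fb}$ is not specified and is left symbolic):

$$T_{\textrm{conv}} \approx 800 \times 25\ \mu\textrm{s} + t_{fb} = 20\ \textrm{ms} + t_{fb},\qquad T_{\textrm{SM}} \approx 100 \times (20\ \textrm{ms} + t_{fb}) = 2\ \textrm{s} + 100\,t_{fb}$$

so conventional speckle-modulating OCT spends roughly two orders of magnitude longer at each B-scan position, whereas Sm-Net OCT keeps the single-scan acquisition time and adds only GAN inference.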

4. Discussions and conclusion

In this work, we revisited speckle in OCT imaging and pointed out the shortcomings of previous software-based and hardware-based speckle-removal methods. By revisiting speckle and speckle-modulating OCT, and combining them with the strengths of deep learning, we proposed a deep-learning-based speckle-modulating OCT, the Sm-Net OCT. By deeply integrating a conventional OCT setup with a GAN trained on the customized large dataset containing massive speckle patterns, the proposed Sm-Net OCT overcomes the sensitivity and temporal-resolution limitations of speckle-modulating OCT and provides higher-quality images with a deeper imaging depth.

We should point out that the obtained speckle patterns play a key role in the performance of the Sm-Net OCT, because the deep learning network must extract the speckle characteristics as fully as possible to achieve deep-learning-based speckle modulation. At present, conventional speckle-modulating OCT still resolves better in some cases, and future work should analyze the number of speckle patterns needed and further improve the performance of Sm-Net OCT. In addition, the number of repeated-scan speckle-modulating images used also affects performance: previous research [33] showed that more than 25 repeat-scan speckle OCT images are needed for adequate speckle modulation. To balance time consumption against the performance of the trained network, we used 30 images from each 100-frame repeated set; a more suitable choice may achieve a better balance between the performance and the time cost of the Sm-Net OCT.

In conclusion, we proposed a deep-learning-based speckle-modulating OCT, Sm-Net OCT, which overcomes the shortcomings of conventional speckle-modulating OCT. By reducing speckle and electronic noise while maintaining sensitivity and temporal resolution, the proposed Sm-Net OCT can significantly improve the adaptability and practicality of OCT.

Funding

National Natural Science Foundation of China (61905036, 61575037, 61927821); China Postdoctoral Science Foundation (2021T140090, 2019M663465).

Acknowledgment

We thank Chao Zhou and all members of Z-lab for their help and support.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

2. W. Drexler, U. Morgner, R. K. Ghanta, F. X. Kärtner, J. S. Schuman, and J. G. Fujimoto, “Ultrahigh-resolution ophthalmic optical coherence tomography,” Nat. Med. 7(4), 502–507 (2001). [CrossRef]  

3. O. Carrasco-Zevallos, B. Keller, C. Viehland, L. Shen, G. Waterman, B. Todorich, C. Shieh, P. Hahn, S. Farsiu, and A. Kuo, “Live volumetric (4D) visualization and guidance of in vivo human ophthalmic surgery with intraoperative optical coherence tomography,” Sci. Rep. 6(1), 31689 (2016). [CrossRef]  

4. J. Campbell, M. Zhang, T. Hwang, S. Bailey, D. Wilson, Y. Jia, and D. Huang, “Detailed vascular anatomy of the human retina by projection-resolved optical coherence tomography angiography,” Sci. Rep. 7(1), 42201 (2017). [CrossRef]  

5. G. Cennamo, M. Romano, M. Breve, N. Velotti, M. Reibaldi, and G. De Crecchio, “Evaluation of choroidal tumors with optical coherence tomography: enhanced depth imaging and OCT-angiography features,” Eye 31(6), 906–915 (2017). [CrossRef]  

6. E. M. Frohman, J. G. Fujimoto, T. C. Frohman, P. A. Calabresi, G. Cutter, and L. J. Balcer, “Optical coherence tomography: a window into the mechanisms of multiple sclerosis,” Nat. Rev. Neurol. 4(12), 664–675 (2008). [CrossRef]

7. M. Ibrahim, Y. Sepah, R. Symons, R. Channa, E. Hatef, A. Khwaja, M. Bittencourt, J. Heo, D. Do, and Q. Nguyen, “Spectral-and time-domain optical coherence tomography measurements of macular thickness in normal eyes and in eyes with diabetic macular edema,” Eye 26(3), 454–462 (2012). [CrossRef]  

8. L. Liu, J. A. Gardecki, S. K. Nadkarni, J. D. Toussaint, Y. Yagi, B. E. Bouma, and G. J. Tearney, “Imaging the subcellular structure of human coronary atherosclerosis using micro-optical coherence tomography,” Nat. Med. 17(8), 1010–1014 (2011). [CrossRef]  

9. H. Pahlevaninezhad, M. Khorasaninejad, Y.-W. Huang, Z. Shi, L. P. Hariri, D. C. Adams, V. Ding, A. Zhu, C.-W. Qiu, and F. Capasso, “Nano-optic endoscope for high-resolution optical coherence tomography in vivo,” Nat. Photonics 12(9), 540–547 (2018). [CrossRef]  

10. W. Yuan, R. Brown, W. Mitzner, L. Yarmus, and X. Li, “Super-achromatic monolithic microprobe for ultrahigh-resolution endoscopic optical coherence tomography at 800 nm,” Nat. Commun. 8(1), 1531 (2017). [CrossRef]  

11. D. C. Adler, Y. Chen, R. Huber, J. Schmitt, J. Connolly, and J. G. Fujimoto, “Three-dimensional endomicroscopy using optical coherence tomography,” Nat. Photonics 1(12), 709–716 (2007). [CrossRef]  

12. M. Ulrich, L. Themstrup, N. De Carvalho, M. Manfredi, C. Grana, S. Ciardo, R. Kästle, J. Holmes, R. Whitehead, and G. B. Jemec, “Dynamic optical coherence tomography in dermatology,” Dermatology 232(3), 298–311 (2016). [CrossRef]  

13. E. B. Shokouhi, M. Razani, A. Gupta, and N. Tabatabaei, “Comparative study on the detection of early dental caries using thermo-photonic lock-in imaging and optical coherence tomography,” Biomed. Opt. Express 9(9), 3983–3997 (2018). [CrossRef]  

14. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography (Springer International Publishing, 2015). [CrossRef]

15. M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25(8), 545–547 (2000). [CrossRef]  

16. J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography: an overview,” in Saratov Fall Meeting '98: Light Scattering Technologies for Mechanics, Biomedicine, and Material Science (International Society for Optics and Photonics, 1999), pp. 450–462.

17. J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–105 (1999). [CrossRef]  

18. N. Iftimia, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography by “path length encoded” angular compounding,” J. Biomed. Opt. 8(2), 260–263 (2003). [CrossRef]

19. A. Desjardins, B. Vakoc, W.-Y. Oh, S. Motaghiannezam, G. Tearney, and B. Bouma, “Angle-resolved optical coherence tomography with sequential angular selectivity for speckle reduction,” Opt. Express 15(10), 6200–6209 (2007). [CrossRef]  

20. B. F. Kennedy, T. R. Hillman, A. Curatolo, and D. D. Sampson, “Speckle reduction in optical coherence tomography by strain compounding,” Opt. Lett. 35(14), 2445–2447 (2010). [CrossRef]  

21. D. Alonso-Caneiro, S. A. Read, and M. J. Collins, “Speckle reduction in optical coherence tomography imaging by affine-motion image registration,” J. Biomed. Opt. 16(11), 116027 (2011). [CrossRef]  

22. M. Pircher, E. Götzinger, R. A. Leitgeb, A. F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8(3), 565–570 (2003). [CrossRef]  

23. M. R. Hee, J. A. Izatt, E. A. Swanson, D. Huang, J. S. Schuman, C. P. Lin, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography of the human retina,” Arch. Ophthalmol. 113(3), 325–332 (1995). [CrossRef]  

24. K. L. Boyer, A. Herzog, and C. Roberts, “Automatic recovery of the optic nervehead geometry in optical coherence tomography,” IEEE Trans. Med. Imaging 25(5), 553–570 (2006). [CrossRef]  

25. S. Chitchian, M. A. Fiddy, and N. M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009). [CrossRef]  

26. Z. Jian, L. Yu, B. Rao, B. J. Tromberg, and Z. Chen, “Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform,” Opt. Express 18(2), 1024–1032 (2010). [CrossRef]  

27. F. Zaki, Y. Wang, H. Su, X. Yuan, and X. Liu, “Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography,” Biomed. Opt. Express 8(5), 2720–2731 (2017). [CrossRef]  

28. B. Chong and Y.-K. Zhu, “Speckle reduction in optical coherence tomography images of human finger skin by wavelet modified BM3D filter,” Opt. Commun. 291, 461–469 (2013). [CrossRef]  

29. L. Wang, Z. Meng, X. S. Yao, T. Liu, Y. Su, and M. Qin, “Adaptive speckle reduction in OCT volume data based on block-matching and 3-D filtering,” IEEE Photonics Technol. Lett. 24(20), 1802–1804 (2012). [CrossRef]  

30. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]  

31. M. Esmaeili, A. M. Dehnavi, H. Rabbani, and F. Hajizadeh, “Speckle noise reduction in optical coherence tomography using two-dimensional curvelet-based dictionary learning,” J. Med. Signals Sens. 7(2), 86 (2017). [CrossRef]

32. M. Esmaeili, A. M. Dehnavi, F. Hajizadeh, and H. Rabbani, “Three-dimensional curvelet-based dictionary learning for speckle noise removal of optical coherence tomography,” Biomed. Opt. Express 11(2), 586–608 (2020). [CrossRef]  

33. O. Liba, M. D. Lew, E. D. SoRelle, R. Dutta, D. Sen, D. M. Moshfeghi, S. Chu, and A. de la Zerda, “Speckle-modulating optical coherence tomography in living mice and humans,” Nat. Commun. 8, 15845 (2017). [CrossRef]

34. D. Yecies, O. Liba, E. D. SoRelle, R. Dutta, E. Yuan, H. Vogel, G. A. Grant, and A. de la Zerda, “Speckle modulation enables high-resolution wide-field human brain tumor margin detection and in vivo murine neuroimaging,” Sci. Rep. 9(1), 10388–9 (2019). [CrossRef]  

35. Y. Huang, Z. Lu, Z. Shao, M. Ran, J. Zhou, L. Fang, and Y. Zhang, “Simultaneous denoising and super-resolution of optical coherence tomography images based on generative adversarial network,” Opt. Express 27(9), 12289–12307 (2019). [CrossRef]  

36. Z. Chen, Z. Zeng, H. Shen, X. Zheng, P. Dai, and P. Ouyang, “DN-GAN: Denoising generative adversarial networks for speckle noise reduction in optical coherence tomography images,” Biomed. Signal Process. Control 55, 101632 (2020). [CrossRef]

37. Y. Ma, X. Chen, W. Zhu, X. Cheng, D. Xiang, and F. Shi, “Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN,” Biomed. Opt. Express 9(11), 5129–5146 (2018). [CrossRef]  

38. N. A. Kande, R. Dakshane, A. Dukkipati, and P. K. Yalavarthy, “SiameseGAN: A generative model for denoising of spectral domain optical coherence tomography images,” IEEE Trans. Med. Imaging (2020).

39. Z. Dong, G. Liu, G. Ni, J. Jerwick, L. Duan, and C. Zhou, “Optical coherence tomography image denoising using a generative adversarial network with speckle modulation,” J. Biophotonics 13, e201960135 (2020). [CrossRef]  

40. T. S. Tkaczyk, K. W. Gossage, and J. K. Barton, “Speckle image properties in optical coherence tomography,” in Coherence Domain Optical Methods in Biomedical Science and Clinical Applications VI (International Society for Optics and Photonics, 2002), pp. 59–70.

41. B. Karamata, K. Hassler, M. Laubscher, and T. Lasser, “Speckle statistics in optical coherence tomography,” J. Opt. Soc. Am. A 22(4), 593–596 (2005). [CrossRef]  

Supplementary Material (1)

Visualization 1: Supplementary Movie S1, Sm-Net OCT 3D dataset of pork meat (AVI).
