
Convolutional neural network-based common-path optical coherence tomography A-scan boundary-tracking training and validation using a parallel Monte Carlo synthetic dataset


Abstract

We present a parallel Monte Carlo (MC) simulation platform for rapidly generating synthetic common-path optical coherence tomography (CP-OCT) A-scan image datasets for image-guided needle insertion. The computation time of the method was evaluated on different hardware configurations, and 100,000 A-scan images were generated based on 50 different eye models. The synthetic dataset was used to train an end-to-end convolutional neural network (Ascan-Net) to localize the Descemet’s membrane (DM) during needle insertion. The trained Ascan-Net was tested on A-scan images collected from ex-vivo human and porcine corneas as well as on simulated data, and it shows improved tracking accuracy compared to the results obtained with a Canny-edge detector.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) has been shown to provide high-resolution, three-dimensional images of biological samples in real time [1,2]. In ophthalmology, OCT has been used to visualize the internal structure and morphology of ocular tissues, which makes it possible to monitor the tissue’s physiological changes during diagnosis, surgery, and therapy [3–6]. Handheld OCT was first introduced into the operating room in 2001 [7], and microscope-integrated OCT systems have since been developed for better imaging reproducibility [8]. Various ophthalmic procedures use volumetric OCT for surgical guidance, such as glaucoma filtering procedures [9], corneal transplant surgery [10–13], cataract surgery [14], macular surgery [15], and subretinal injection [16]. In recent years, machine learning techniques have been applied to OCT images to track and segment target tissues such as the retina [17–22].

Model-driven approaches such as intensity-gradient and graph-based methods have been proposed for OCT image segmentation [17]. Deep learning, especially the convolutional neural network (CNN), is another promising technique for segmenting OCT images, and many algorithms have been proposed to segment the multiple layers of the retina. Wang et al. [18] proposed a boundary-aware U-Net for retinal layer segmentation and compared the results to various methods including U-Net and ReLayNet. A CNN has also been used to analyze the A-scan signal from a common-path (CP) OCT system for layer tracking and subretinal injection on retinal samples [19]. CP-OCT is an OCT variant that uses a single-arm interferometer in which the same light path serves as both the reference and sample arm [23]. Since CP-OCT can have an arbitrary sample/reference arm length, it can be integrated into various tools such as tweezers, cannulas, and needles for tool tracking [24–26].

Unlike the retina, corneal tissue has fewer layers, consisting of the epithelium, Bowman’s layer, stroma, Descemet’s membrane (DM), and endothelium; apart from the stroma, these layers are very thin, generally less than 10 microns. A customized CNN called CorneaNet [20] was proposed by Dos Santos et al. for corneal layer segmentation. A modified capsule network algorithm was used to identify the three major boundaries of the corneal layers [21]. Wang et al. proposed BG-CNN [22] to effectively extract different corneal layers for disease analysis. The major challenge in applying a CNN to this specific CP-OCT A-scan, however, is the lack of training datasets. A well-trained network normally requires thousands of OCT images from different eyes with carefully annotated ground truth. Due to the unique setup of the fiber-integrated needle, the acquired A-scan images differ from publicly available OCT datasets; moreover, the fiber moves together with the needle as it is inserted into the corneal tissue, which changes the total thickness of the tissue in front of the fiber. The only way to obtain enough data would be to conduct experiments on real corneal tissue of varying quality, and both healthy and pathological corneas would be needed to build a high-quality training dataset.

Interpretation of the CP-OCT A-scan signal is also not an easy task. The photons contributing to the DM signal have gone through multiple scattering events, and the signal is further attenuated by the limited detection angle and radius of the fiber. Labeling the DM location in a CP-OCT A-scan therefore requires a trained expert and is a time-consuming process, and achieving pixel-level labeling accuracy is unrealistic, which directly limits the final performance of the trained network.

Instead of acquiring training data only from experiments, however, realistic CP-OCT images and datasets can also be generated by Monte Carlo (MC) simulation. Using MC modeling of light transport in multi-layered tissues, simulated OCT images have been successfully generated [27,28]. Over the years, the MC algorithm has been refined to simulate OCT images from different perspectives and for different purposes. Kirillin et al. simulated OCT B-scan images using MC modeling based on the polarization vector approach [29]. OCT images with embedded objects [30] and angiographic OCT images [31] can also be produced by MC simulation, and a specific method for MC simulation of Fourier-domain OCT images has been proposed [32].

In this study, we combine MC simulation and a CNN to provide a better way to train for, and interpret, the CP-OCT A-scan signal for possible application in corneal transplant surgery. Specifically, this work focuses on OCT-guided needle insertion for deep anterior lamellar keratoplasty (DALK), as illustrated in Fig. 1(a). The DALK procedure is currently performed by skilled surgeons through operating microscopes, using freehand techniques and manually operated precision micro-instruments. The key step is to insert the needle toward the DM at around 90% of the corneal thickness and inject air to separate the stromal tissue from the DM, forming a ‘big bubble’ [33]. The reported perforation rate is around 11.7% [34], which largely depends on the surgeon’s skill and experience level.

Fig. 1. (a) Illustration of the DALK with the B-scan image of the bovine cornea, (b) Schematic of the CP-OCT fiber sensor integrated needle, (c) A-scan image of the cornea from the CP-OCT system, (d) A-scan image of the cornea from the standard OCT system.

To overcome these human factors, a CP-OCT distal-sensor-integrated needle was developed to provide real-time needle position feedback [26]. The optical fiber used for imaging is integrated inside a 27-gauge needle as shown in Fig. 1(b), and the half-ball lens at the front keeps the reference beam power constant while the needle and fiber are inside the cornea. With this setup, an A-scan image of the cornea can be acquired along the needle insertion direction, as shown in Fig. 1(c). In a standard OCT system with a separate reference arm, the laser beam exiting the fiber is focused by a scanning lens or objective onto the sample perpendicular to the beam propagation direction; an example A-scan image is shown in Fig. 1(d). Compared to the A-scan acquired from the standard OCT system, the image quality of the CP-OCT A-scan, especially the signal-to-noise ratio (SNR) at the DM, is lower. This is because the lensed fiber has a smaller collection angle and aperture than the scanning lens or objective used in the standard B-scan OCT system. Also, the fiber-integrated needle is inserted at 60 degrees from vertical for surgical reasons, which significantly reduces the ballistic photons collected by the fiber; by contrast, the strong signal peaks from the epithelium layer and DM in Fig. 1(d) indicate the abundance of ballistic photons in the standard geometry. To better utilize the CP-OCT system for autonomous surgical guidance or motion compensation, it is crucial to label and track the DM layer precisely.

To address the training problem, we initially proposed using MC simulation to generate a synthetic training dataset with pixel-level ground truth for the CNN, and then using the trained network to predict the DM location on data collected during real experiments. The proof-of-concept work was published by the authors in conference proceedings [35]. Here we expand on that earlier work, propose a parallel MC CP-OCT simulation platform, and discuss the results in detail. The computation time of the proposed method was evaluated on different hardware configurations and compared to a single-thread version. The CP-OCT A-scan images generated during needle insertion were analyzed and compared to data from real experiments. We propose an end-to-end convolutional neural network for tracking the DM layer (Ascan-Net). The Ascan-Net was trained on the generated synthetic data and tested on two datasets collected from real experiments and one MC-generated dataset. The final results are discussed and compared with a Canny-edge-detector-based tracking method. The Ascan-Net was further fine-tuned on additional A-scan images from the ex-vivo human cornea to evaluate the domain shift between the different datasets.

2. Methods

2.1 Importance-sampling-based Monte Carlo simulation for CP-OCT A-scan images

The diagram of the proposed simulation method is given in Fig. 2. Before the algorithm starts, a detailed tissue profile, including the layer structure and the optical properties of each layer, is specified. The basic concept of simulating light propagation in multilayered tissue is the sequential propagation of random photon trajectories. The properties of each layer are defined by the absorption coefficient (µa), scattering coefficient (µs), and scattering anisotropy (g). A photon packet of weight w is injected with a propagation direction normal to the surface of the tissue. The step size of the photon packet between two successive elastic scattering events is given by Eq. (1).

Fig. 2. Diagram of the parallel MC CP-OCT simulation algorithm.

$$s = - \ln (\xi )/{\mu _t}.$$

Here, the step size is determined by the random number ξ, which is uniformly distributed in [0,1], and the attenuation coefficient is defined as µt = µa + µs. The photon then moves by the step size along its current direction vector. If the photon remains in the same medium, it moves on to the next stage; if the photon packet crosses a boundary, the reflection is calculated and the remaining step size is updated according to the parameters of the new layer. Once the current step is finished, the absorption-caused weight loss is calculated. If the weight of the photon packet falls below a certain threshold, the photon enters a “Russian roulette” stage in which it is given a chance of survival so that energy is conserved. If the photon survives, or its weight is still above the threshold, it undergoes a scattering event in which its new direction vector is determined by the Henyey-Greenstein (HG) scattering function, Eq. (2).

$${f_{HG}}(\cos \theta ) = \frac{{1 - {g^2}}}{{2{{(1 + {g^2} - 2g\cos \theta )}^{3/2}}}}.$$

The angle θ is the angle between the incoming photon propagation direction and the scattered direction. It can be sampled by inverting the cumulative distribution of the HG scattering function, as in Eq. (3), where ξ is again a random number in [0,1] and g is the anisotropy of the tissue. For most biological tissues, g is close to 1, which makes photons tend to scatter in the forward direction (θ close to 0 degrees) [36].

$$\cos \theta = \frac{1}{{2g}}\left[ {1 + {g^2} - {{\left( {\frac{{1 - {g^2}}}{{1 - g + 2g\xi }}} \right)}^2}} \right],g \ne 0.$$
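To make the sampling steps concrete, a minimal Python sketch of Eqs. (1) and (3) is given below. This is our own illustration, not the authors’ simulation code; the function names and the NumPy random generator are our choices.

```python
# Minimal sketch of the free-path and HG-angle sampling, Eqs. (1)-(3);
# illustrative only, with mu_a, mu_s, g taken from a layer's properties.
import numpy as np

rng = np.random.default_rng()

def sample_step(mu_a, mu_s):
    """Free path between scattering events, Eq. (1): s = -ln(xi)/mu_t."""
    mu_t = mu_a + mu_s
    # 1 - random() lies in (0, 1], avoiding log(0)
    return -np.log(1.0 - rng.random()) / mu_t

def sample_hg_cos_theta(g):
    """Cosine of the HG scattering angle via the inverse CDF, Eq. (3)."""
    xi = rng.random()
    if g == 0.0:                      # isotropic limit
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)
```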

Because of this strong forward scattering, together with the small collection aperture, detecting a photon back at the launch plane is an extremely rare event, happening only about once per 1,000-10,000 launched photon packets. Most photon packets going through the simulation do not contribute to the reconstruction of the OCT image, so the simulation must run for a very long time to produce a decent image, and the cost increases significantly as the tissue structure becomes more complicated. To improve the efficiency of the standard MC simulation, importance sampling is a common method for increasing the probability of the events of interest [30,37]. Specifically, we use an updated scattering function, Eq. (4), to ensure that the propagation direction of the photon after random scattering points toward the collecting fiber, i.e., the angle between the new direction and the incident direction is no less than 90 degrees.

$${f_B}(\cos {\theta _B}) = {\left( {1 - \frac{{1 - a}}{{\sqrt {{a^2} + 1} }}} \right)^{ - 1}}\frac{{a({1 - a} )}}{{{{({1 + {a^2} - 2a\cos {\theta_B}} )}^{3/2}}}}.$$

Here, a is the bias coefficient, which ranges from 0 to 1. Using the modified function, the new expression for the scattering angle is given in Eq. (5), where ξ is again a random number in [0,1] that preserves the randomness of the scattering. As a approaches 1, the biased scattering angle approaches 0 degrees regardless of the value of ξ, making the photon tend to return to the fiber. In this way, most photons can be redirected toward the fiber probe after scattering.

$$\cos {\theta _B} = \frac{1}{{2a}}\left\{ {{a^2} + 1 - {{\left[ {\xi (\frac{1}{{1 - a}} - \frac{1}{{\sqrt {{a^2} + 1} }}) + \frac{1}{{\sqrt {{a^2} + 1} }}} \right]}^{ - 2}}} \right\},a \ne 0.$$

At each scattering event, a probability p determines whether the photon undergoes biased or unbiased scattering. To compensate the photon weight, the likelihood ratio of each scattering event is calculated according to Eq. (6), and the total likelihood ratio is the product of the likelihood ratios of all the independent scattering events the photon packet has undergone.

$${L_j} = \frac{{{f_{HG}}(\cos {\theta _s})}}{{p{f_B}(\cos {\theta _B}) + (1 - p){f_{HG}}(\cos {\theta _s})}}.$$
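The importance-sampling step can be sketched in the same illustrative Python style (again not the authors’ code). Here θs is measured from the incident direction, θB from the direction toward the fiber, and p is the probability of choosing a biased event; the total correction carried by a packet is the running product of these per-event ratios.

```python
# Hedged sketch of Eqs. (4)-(6); not the authors' implementation.
import numpy as np

rng = np.random.default_rng()

def f_hg(cos_t, g):
    """HG phase function, Eq. (2)."""
    return (1 - g**2) / (2 * (1 + g**2 - 2 * g * cos_t) ** 1.5)

def f_biased(cos_tb, a):
    """Biased phase function, Eq. (4); a in (0, 1)."""
    norm = 1.0 / (1 - (1 - a) / np.sqrt(a**2 + 1))
    return norm * a * (1 - a) / (1 + a**2 - 2 * a * cos_tb) ** 1.5

def sample_biased_cos(a):
    """Inverse-CDF sampling of the biased angle, Eq. (5)."""
    xi = rng.random()
    inv = 1.0 / np.sqrt(a**2 + 1)
    bracket = xi * (1.0 / (1 - a) - inv) + inv
    return (a**2 + 1 - bracket**-2) / (2 * a)

def likelihood_ratio(cos_ts, cos_tb, g, a, p):
    """Per-event weight correction, Eq. (6); the total ratio is the
    product of these values over all scattering events of a packet."""
    num = f_hg(cos_ts, g)
    return num / (p * f_biased(cos_tb, a) + (1 - p) * num)
```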

After the scattering event, the packet is given a new step size and the above steps are repeated. A photon packet exits the simulation loop when it has lost all of its weight or has left the tissue boundary. Because of scattering, some photons exit through the top boundary. To determine which photons contribute to the CP-OCT signal, they are divided into two groups. The first group, denoted Class I diffuse reflectance (P1), comprises the ballistic and quasi-ballistic photons reflected inside the tissue; Class II diffuse reflectance (P2) comprises the photons that have undergone multiple scattering events. The two types are separated by applying the spatial and temporal filters given in Eq. (7):

$${r_i} < {r_{fiber}},{\Phi _i} < {\Phi _{fiber}},\left\{ \begin{array}{l} {P_1}:|{{z_{opl}} - 2{z_{\max }}} |< ({l_c}/2)\\ {P_2}:|{{z_{opl}} - 2{z_{\max }}} |> ({l_c}/2) \end{array} \right..$$

Once a photon returns to the emitting surface, it must meet two criteria to contribute to the OCT image: the ith photon’s radial distance (ri) must be within the fiber collection radius (rfiber), and its exit angle (Φi) must be within the acceptance angle of the fiber (Φfiber). For a Class I photon, the difference between the optical path length (zopl) and twice the maximum depth reached (zmax) must be less than half the coherence length; for a Class II photon the opposite holds, meaning there is a mismatch between path length and depth. By applying these conditions, we can distinguish the photon types and allocate each photon to a certain depth by its path length. This is important for simulating the CP-OCT A-scan image because the Class I contribution is significantly reduced by the small detection angle and radius of the fiber probe. Combining the photon types described above, the reconstructed CP-OCT signal at a certain probing depth z is given by Eq. (8).

$$I(z) = {I_0}\left( {\alpha \sum\limits_{i = 1}^{{N_{ph - I}}} {\sqrt {{w_i}{L_i}} \exp ({ - {{(\Delta L/{l_c})}^2}} )} + \sum\limits_{i = 1}^{{N_{ph - II}}} {\sqrt {{w_i}{L_i}} \exp ({ - {{(\Delta L/{l_c})}^2}} )} } \right).$$

This equation is based on a time-domain OCT model [31] for simplicity, in which the reference is scanned across different depths. Here, wi is the photon reflection ratio from the MC simulation, lc is the coherence length of the OCT system, Li is the total likelihood ratio, and ΔL is the path difference between the optical path length and the reference location. In our simulation, we match the results to our CP-OCT A-scan signal by setting the MC parameters according to our system parameters, which are listed in Table 1.

Table 1. List of parameters for the MC simulation
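A hedged sketch of how Eqs. (7) and (8) turn the detected photons into an A-scan is shown below. The photon record fields and the fiber parameters are placeholders of our own (the actual values come from Table 1); the loop applies the acceptance test, classifies each photon by the coherence criterion, and accumulates the coherence-gated contributions at every probing depth.

```python
# Illustrative sketch of Eqs. (7)-(8); photon records and fiber
# parameters are placeholders, not the authors' data structures.
import numpy as np

R_FIBER = 62.5e-6    # fiber collection radius (placeholder; see Table 1)
PHI_FIBER = 0.2      # fiber acceptance angle in rad (placeholder)

def reconstruct_ascan(photons, z_axis, l_c, alpha, i0=1.0):
    """photons: dicts with weight w, total likelihood L, optical path
    length z_opl, maximum depth z_max, exit radius r, exit angle phi."""
    signal = np.zeros_like(z_axis)
    for ph in photons:
        # spatial and angular acceptance of the fiber probe, Eq. (7)
        if ph["r"] >= R_FIBER or ph["phi"] >= PHI_FIBER:
            continue
        # Class I if the path length matches twice the maximum depth
        # to within half the coherence length; Class II otherwise
        class1 = abs(ph["z_opl"] - 2.0 * ph["z_max"]) < l_c / 2.0
        amp = np.sqrt(ph["w"] * ph["L"])
        # coherence gate around each probing depth z (time-domain model)
        delta_l = ph["z_opl"] - 2.0 * z_axis
        gate = np.exp(-((delta_l / l_c) ** 2))
        signal += (alpha if class1 else 1.0) * amp * gate
    return i0 * signal
```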

2.2 Accelerated simulation by parallel computing

With the implementation of importance sampling, the simulation time is reduced. However, generating a single A-scan is still very slow considering that we want to generate tens of thousands of A-scan images for the training dataset; the method is limited by photon packets being launched one at a time. To address this, we implemented a parallel algorithm that launches multiple photon packets simultaneously on the graphics processing unit (GPU). Several works have used CUDA or other parallel computing tools to accelerate MC simulations of photon migration in tissue and achieved results consistent with the single-thread version at much higher speed [38–40]. The challenge in simulating multiple photon packets simultaneously lies in the nature of the MC simulation: each photon packet follows a different path, which causes thread divergence, a potential performance killer. All the divergent functions in our MC CP-OCT simulation, such as the reflection calculation, are kept as brief as possible, which helps each thread return to the main execution path with minimal delay. A parallel random number generator ensures that all photon packets have completely independent sequences of random numbers, and all threads are synchronized before the photon weights are accumulated. The other processing steps, including importance sampling, photon classification, and calculation of the final OCT signal intensity, are also implemented in CUDA.
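As a rough illustration of the one-packet-per-thread pattern, the sketch below uses numba.cuda as a stand-in for the authors’ CUDA implementation. The propagation body is deliberately a stub (the absorption update is a placeholder), but it shows the independent per-thread RNG streams, the bounds check, and the kernel launch geometry discussed above.

```python
# Simplified numba.cuda sketch of the one-photon-packet-per-thread
# pattern; the propagation body is a placeholder, not the full MC loop.
import math
import numpy as np
from numba import cuda
from numba.cuda.random import (create_xoroshiro128p_states,
                               xoroshiro128p_uniform_float32)

@cuda.jit
def launch_photons(rng_states, mu_t, w_threshold, weights_out):
    tid = cuda.grid(1)
    if tid >= weights_out.size:
        return
    w = 1.0
    # each thread propagates its own packet; divergent branches
    # (boundaries, roulette) should stay short to limit divergence
    while w > w_threshold:
        xi = xoroshiro128p_uniform_float32(rng_states, tid)
        s = -math.log(xi + 1e-12) / mu_t    # free path, Eq. (1)
        # ... move by s, handle boundaries, scatter (Eqs. (2)-(6)) ...
        w *= 0.9                             # placeholder absorption step
    weights_out[tid] = w

n_photons = 1 << 20                          # launch ~1M packets at once
rng_states = create_xoroshiro128p_states(n_photons, seed=1234)
weights = cuda.device_array(n_photons, dtype=np.float32)
threads_per_block = 256
blocks = (n_photons + threads_per_block - 1) // threads_per_block
launch_photons[blocks, threads_per_block](rng_states, 5.0, 1e-4, weights)
host_weights = weights.copy_to_host()        # synchronize and read back
```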

To generate a comprehensive training dataset with enough A-scan images, the first consideration is an appropriate cornea model for the MC CP-OCT simulation. The typical optical properties and thicknesses of the corneal layers are given in Table 2, based on previous studies [41–43] and our observations during experiments on ex-vivo eyes. However, a single set of parameters is insufficient to represent corneal tissue, since each eye’s A-scan signal may differ; for pathological tissue in particular, the optical parameters may deviate substantially from the standard values. Thus, our parallel MC CP-OCT simulation platform simulates different eye models by randomly picking the optical properties and layer thicknesses within a certain range. Specifically, the scattering and absorption coefficients of the stroma are varied within ±1.0 cm−1 of the given standard values. Only the properties of the stroma were varied because the stroma accounts for over 90% of the total corneal thickness, the DM and endothelium are very thin layers, and many diseases requiring corneal transplant involve reduced transparency of the stromal tissue. To keep the corneal model consistent with the experimental data collected during the ex-vivo studies, we ignore the epithelium and Bowman’s layer, since the signals from these layers are suppressed during ex-vivo imaging.

To simulate the CP-OCT A-scan images during insertion for each eye model, the thickness profile of the cornea was changed to represent different needle locations: once the fiber probe touches the corneal tissue, the thickness of the stromal tissue in front of it decreases as the needle is inserted. Following this setup, the parallel CP-OCT MC simulation was run 5 times at each location, after which the stroma thickness was decreased to represent the needle moving forward. In the simulation, the thickness of the stroma changes from 1500 µm to 420 µm with a step size equal to the grid resolution along the z-axis, 2.7 µm. These thickness values were selected based on two factors. First, because the fiber and needle move toward the DM layer at 60 degrees, the actual tissue thickness in front of the depth-sensing fiber is twice the vertical thickness. Second, the needle tip and the CP-OCT A-scan reference plane have an offset of 550 µm to lower the insertion resistance, so the distance between the needle tip and the DM layer is the distance measured from the A-scan minus this offset. The stroma thicknesses we set therefore represent the needle tip starting 950 µm away from the DM and advancing until it perforates the DM (−130 µm). The ground truth of the DM location is recorded at the same time each A-scan image is generated. By combining parallel computing with this automatic parameter selection, we can generate a massive number of CP-OCT A-scan images of needle insertion on different eye models.

Table 2. Thickness and optical properties of corneal layers
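The randomized model generation and insertion sweep can be summarized in a short sketch; the baseline coefficients below are placeholders standing in for the Table 2 values, and the helper names are ours. Note that 400 needle locations at 5 repeats each reproduce the 2,000 A-scans per eye model used in Section 2.4.

```python
# Sketch of randomized eye-model generation and the insertion sweep;
# baseline coefficients are placeholders for the Table 2 values.
import numpy as np

rng = np.random.default_rng(0)
GRID_DZ_UM = 2.7                      # axial grid resolution (um)

def random_eye_model(mu_s0=100.0, mu_a0=1.5):
    """Perturb only the stroma by +/- 1.0 cm^-1 around the baseline;
    the thin DM/endothelium layers keep their baseline values."""
    return {
        "stroma_mu_s": mu_s0 + rng.uniform(-1.0, 1.0),   # cm^-1
        "stroma_mu_a": mu_a0 + rng.uniform(-1.0, 1.0),   # cm^-1
    }

def stroma_thickness_sweep(start_um=1500.0, stop_um=420.0, repeats=5):
    """Stroma thickness in front of the fiber as the needle advances;
    5 A-scans are simulated at each needle location."""
    n_steps = round((start_um - stop_um) / GRID_DZ_UM)   # 400 locations
    for i in range(n_steps):
        for _ in range(repeats):
            yield start_um - i * GRID_DZ_UM

models = [random_eye_model() for _ in range(50)]         # 50 eye models
n_per_model = sum(1 for _ in stroma_thickness_sweep())   # 2000 A-scans
```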

2.3 End-to-end neural network for DM tracking

We designed the end-to-end Ascan-Net for tracking the DM as illustrated in Fig. 3. The network follows an encoder-localization structure. Because different eye models have different parameters, and the A-scan of the corneal tissue changes across the different needle-insertion stages, it is important to select features that contribute to localizing the DM. The input A-scan has a size of 1×1024, and we designed a 1-D feature extraction module (FEM) to extract features from the input signal. The FEM has three units: a 1-D convolution layer, batch normalization, and a rectified linear unit (ReLU). The convolution layer has a kernel size of 4, and the number of channels is increased by a factor of 4 to preserve different features. Each FEM decreases the length of the input A-scan while increasing the number of channels, encoding the A-scan into a deep representation of all features. By stacking five FEMs, we obtain a final multi-channel representation of the A-scan with a size of 256×1×1. The output features are then flattened to reduce the dimensionality, and a dropout unit with probability 0.1 is applied to the flattened features to decrease redundancy; this helps reduce overfitting and improves generalization. The localization step is handled by two fully connected (FC) layers, which effectively combine all the input features and output the location of the DM layer. The first FC layer reduces the feature size from 256 to 16 and feeds the second FC layer, whose final output is the DM location expressed as a pixel index in the original A-scan, in the range [1, 1024]. The application we target is real-time OCT image guidance for DALK, where a higher sensing rate and lower computational cost are always favorable; we therefore use a single A-scan rather than consecutive A-scans as input and keep the network design straightforward. Using a single A-scan as input also allows a fair comparison with the previously developed A-scan tracking method.

Fig. 3. The architecture of the end-to-end Ascan-Net for DM layer tracking.
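One plausible PyTorch reading of this architecture is sketched below. The kernel/stride choice and the exact channel progression are our assumptions, picked so that five FEMs map a 1×1024 input to the stated 256×1 encoding; this should not be taken as the authors’ released code.

```python
# Plausible PyTorch sketch of the Ascan-Net; the channel progression
# is an assumption chosen to match the stated 256x1 encoding.
import torch
import torch.nn as nn

def fem(c_in, c_out):
    """Feature extraction module: Conv1d + BatchNorm + ReLU.
    Kernel/stride 4 shrinks the length 1024 -> 1 over five modules."""
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=4, stride=4),
        nn.BatchNorm1d(c_out),
        nn.ReLU(inplace=True),
    )

class AscanNet(nn.Module):
    def __init__(self):
        super().__init__()
        chs = [1, 4, 16, 64, 256, 256]        # assumed progression
        self.encoder = nn.Sequential(
            *[fem(chs[i], chs[i + 1]) for i in range(5)]
        )
        self.head = nn.Sequential(
            nn.Flatten(),                      # 256 x 1 -> 256
            nn.Dropout(p=0.1),
            nn.Linear(256, 16),
            nn.Linear(16, 1),                  # DM location in [1, 1024]
        )

    def forward(self, x):                      # x: (batch, 1, 1024)
        return self.head(self.encoder(x))

# quick shape check
out = AscanNet()(torch.randn(2, 1, 1024))
print(out.shape)                               # torch.Size([2, 1])
```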

2.4 Training, testing, and evaluation of the proposed Ascan-Net

A total of 50 different eye models were generated, each containing 2,000 A-scan images. The ground truth of the DM location was generated during the simulation and used as the label for training. Data from five eye models were randomly selected as the validation set to ensure that the model was not overfitting, and the remaining 45 eye models were used as the training set. Each A-scan image was standardized so that the whole dataset had zero mean and unit variance, and the training and validation sets were then shuffled. The mean-squared-error (MSE) loss L, defined in Eq. (9), was used to train the network by computing the squared error between the ground-truth DM location (xi) and the location predicted by the Ascan-Net ($\hat{x}_i$).

$$L = \frac{1}{n}\sum\limits_{i = 1}^n {{{({{x_i} - {{\hat{x}}_i}} )}^2}} .$$

The network was optimized with the Adam optimizer [44] using β1 = 0.9, β2 = 0.999, ɛ = 1e-8, and zero weight decay. The initial learning rate was set to 0.01, and a learning-rate scheduler halved it whenever the total loss did not improve for 5 epochs. The network was trained for 150 epochs with a batch size of 32 on a workstation with an Intel i7-9750H 2.60 GHz CPU, 16 GB of memory, and an Nvidia RTX 2070 Max-Q GPU.
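A minimal training-loop sketch with the stated hyperparameters follows; the random tensors stand in for the standardized synthetic dataset and are not real data.

```python
# Minimal training-loop sketch with the stated hyperparameters;
# the random tensors stand in for the standardized MC dataset.
import torch
import torch.nn as nn

model = AscanNet()                        # from the sketch above
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(
        torch.randn(1024, 1, 1024),        # stand-in A-scans
        torch.rand(1024) * 1024.0),        # stand-in DM pixel labels
    batch_size=32, shuffle=True)

criterion = nn.MSELoss()                   # Eq. (9)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01,
                             betas=(0.9, 0.999), eps=1e-8,
                             weight_decay=0)
# halve the learning rate if the loss has not improved for 5 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=5)

for epoch in range(150):
    model.train()
    epoch_loss = 0.0
    for ascan, dm_px in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(ascan).squeeze(1), dm_px)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss)
```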

For the testing datasets, we used data collected from ex-vivo experiments. One dataset of 707 A-scan images was collected from ex-vivo human cornea imaging with a CP-OCT probe-integrated needle inserted by a step motor, as described in [26]. The other testing set, with 374 A-scan images, was collected from porcine cornea using the freehand needle-insertion technique performed by an experienced surgeon. In both datasets, the fiber was set inside a 27G needle with a 550 µm offset from the needle tip and approached the cornea at 60 degrees from vertical. The needle tip was guided as close as possible to the DM in both ex-vivo tests, and the A-scan images were standardized in the same way for testing; no other preprocessing or filtering was applied to these testing sets. The DM locations in these A-scan images were carefully labeled for use as the reference in the evaluation, and the final tracking error was defined as the absolute error between the Ascan-Net output and the reference location. A third, MC-generated dataset containing 1,032 A-scan images, with the DM located evenly between 1080 µm and 550 µm to match the DM locations in the ex-vivo datasets, was also used for testing; the eye model used to generate this dataset differs from the models used for the training set. To further validate the performance, we compared the results with the previously developed Canny-edge-detector-based DM tracking algorithm, which first denoises each A-scan and then analyzes the intensity profile by back-searching for certain peak points, as described in [26].
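The reported metric can be reproduced in a few lines; note that the 2.7 µm-per-pixel conversion is our assumption, taken from the axial grid resolution quoted in Section 2.2, and the example pixel values are arbitrary.

```python
# Sketch of the evaluation metric: mean +/- std of the absolute
# tracking error; the 2.7 um/pixel conversion is an assumption.
import numpy as np

def tracking_error_um(pred_px, ref_px, um_per_px=2.7):
    err = np.abs(np.asarray(pred_px, float) - np.asarray(ref_px, float))
    err_um = err * um_per_px
    return err_um.mean(), err_um.std()

mean_err, std_err = tracking_error_um([410, 402, 398], [407, 399, 401])
print(f"{mean_err:.2f} +/- {std_err:.2f} um")
```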

Furthermore, to investigate the domain shift between the two ex-vivo datasets and the MC-simulated dataset, we fine-tuned the model on another 500 A-scan images from the ex-vivo human cornea acquired during needle insertion. A total of 45 epochs were used to tune the model, with the initial learning rate reduced to 0.001, and 5% of the training data was held out as a validation set to ensure that the tuning converged without overfitting. The fine-tuned model was then tested on the same two ex-vivo datasets and the MC-simulated dataset to compare the change in performance.

3. Results

3.1 MC-simulation-generated A-scan images based on different eye models

We first evaluated the computation time of our parallel CP-OCT MC simulation algorithm on different hardware configurations: an RTX 2070 Max-Q (2560 CUDA cores, 1770 MHz), a GTX 1070 Ti (2432 CUDA cores, 1607 MHz), and an MX 150 (384 CUDA cores, 1532 MHz). Although the CPU paired with each GPU was different, most of the workload of the parallel MC CP-OCT simulation platform was on the GPU, and CPU usage remained below 30% during all tests. For comparison, the single-thread version of the code was evaluated on an Intel i7-9750H 2.60 GHz CPU. Each configuration generated 100 A-scan images, repeated 3 times, with the timer placed at the very beginning and end of the simulation. The averaged computation time for a single A-scan is plotted in Fig. 4. The fastest time for a single A-scan image is 0.45 seconds, on the RTX 2070 Max-Q. With fewer CUDA cores and a lower clock frequency, the computation time increases slightly to 0.6 seconds on the GTX 1070 Ti. With the entry-level dedicated MX 150, the average time was 3.15 seconds, which is still 8 times faster than the single-thread CPU version, which took 25.51 seconds. With the help of the GPU, the computation time is greatly reduced, which is critical for generating large amounts of data.

Fig. 4. Comparison of the averaged computation time for a single A-scan image on different configurations.

Next, we compared the results simulated from the normal eye model using both the parallel and single-thread versions with data acquired from the ex-vivo studies. As plotted in Fig. 5, we chose two specific locations for evaluation: the DM located at 1150 µm from the reference plane of the sensor, and at 750 µm. When the DM is at 1150 µm, the SNR at the DM is very low because most of the photons are scattered or absorbed by the thick stromal tissue in front of it. When the DM location changes to 750 µm, the DM structure can be easily identified in the A-scan from the porcine cornea and in the A-scan generated on the GPU, while in the other two A-scans it is still very hard to distinguish the DM from the stroma. Because of the nature of the OCT signal and the MC simulation, even with the same anatomical structure, the simulated or acquired A-scans will differ; this is a natural consequence of the statistical nature of coherent interference. Comparing the A-scan signals from the ex-vivo porcine/human corneas with the simulated results from the GPU and CPU, we can see that although the details are not identical, they follow a common pattern determined by the structure of the cornea. This shows that our algorithm can effectively simulate A-scan signals at different locations and that the results are comparable to images acquired in the ex-vivo experiments.

Fig. 5. Comparison of A-scan images with the DM located at 1170 µm and 750 µm from (a) the ex-vivo human cornea, (b) the ex-vivo porcine cornea, (c) MC simulation performed on the CPU, and (d) MC simulation performed on the GPU.

Figure 6 shows three M-mode images and the corresponding A-scan images with the DM located at 720 µm, simulated from three different eye models. From left to right, the first represents the baseline model using the original input parameters. The second model has a lower stromal scattering coefficient, and in the corresponding A-scan the DM produces a strong reflection. The third shows corneal stromal tissue with a higher scattering coefficient, and the overall image is brighter than the other two; this could represent a cornea with certain diseases, such as Schnyder corneal dystrophy (SCD), which gives the cornea a cloudiness that may reduce vision [45]. With randomized parameter selection, corneas under different conditions can be simulated automatically. A total of 100,000 A-scan images based on 50 different eye models were generated on the RTX 2070 Max-Q in 20 minutes.

Fig. 6. Examples from three different eye models: (a) M-mode images, (b) A-scan images with the DM located at 720 µm (red square).

3.2 Performance of the proposed Ascan-Net

The generated A-scan images were used to train the Ascan-Net, which took 50 minutes on the aforementioned workstation. For the Canny-edge-detector-based method, we manually tuned parameters such as the threshold value and Gaussian filter size for the best tracking results. To evaluate the overall performance of the proposed Ascan-Net, we use the absolute tracking error between the tracking results and the reference DM location labeled in the testing dataset; the results are summarized in Table 3. On the ex-vivo human cornea dataset, the Ascan-Net achieves an average tracking error of 8.45 µm with a standard deviation (Std) of 7.58 µm, compared to 14.72 ± 16.22 µm for the Canny-edge detector. The Ascan-Net also performs better on the ex-vivo porcine dataset, achieving a tracking error of 11.89 ± 10.60 µm compared to 21.53 ± 23.42 µm. On the MC-generated dataset, the average tracking error is 6.90 ± 7.39 µm, compared to 16.61 ± 8.11 µm for the Canny-edge detector. This result is better than on the two ex-vivo datasets, which could be due to the smaller gap between the simulated A-scan images even though different eye models were used.

Table 3. Quantitative performance comparison of DM position tracking accuracy

The performance of the Ascan-Net fine-tuned on the ex-vivo human dataset is also listed in Table 3. On the ex-vivo human dataset, the performance improved slightly compared to the model trained purely on simulated data, indicating that the simulated data represent the ex-vivo human model sufficiently well and that the domain gap between the simulated and ex-vivo human datasets is small. On the ex-vivo porcine dataset, the average tracking error increased slightly from 11.89 µm to 16.76 µm. These results suggest that after fine-tuning, the Ascan-Net is more adapted to the ex-vivo human dataset and that a certain domain gap exists between the two ex-vivo datasets; however, the tracking accuracy achieved on both ex-vivo datasets is still better than that of the Canny-edge-detector-based method. Lastly, on the MC-generated dataset, the average tracking error increased significantly from 6.90 µm to 16.35 µm, similar to the Canny-edge-detector result; this could be because the eye model used to generate those A-scans differs considerably from the A-scans of an ex-vivo human cornea. Comparing overall performance across all three testing datasets, the model trained purely on the MC-generated dataset achieved better tracking accuracy, which suggests that the MC-generated training dataset provides a comprehensive representation of different eye models and datasets.

It is noticeable that both the Ascan-Net and the Canny-edge detector performed better on the ex-vivo human dataset than on the ex-vivo porcine dataset. We plot the overall tracking results together with the M-mode images of the testing datasets in Fig. 7 and selected four regions of interest (ROIs) for further analysis. As shown in Fig. 7(a), the M-mode image from the ex-vivo human testing dataset has an overall smoother boundary because the needle was inserted by a step motor with better motion control. The ex-vivo porcine dataset, on the other hand, was acquired with a handheld, freehand-inserted needle, where the surgeon’s hand tremor is unavoidable; this is evident at the beginning of the insertion, as shown in ROI 3. Because of the low SNR at the DM during the initial insertion (ROIs 1 and 3), the tracking error there is expected to be larger than average; however, the Ascan-Net follows the reference better in both ROIs, while the Canny-edge method deviates considerably from the reference location. ROIs 2 and 4 were chosen at the end of the insertion, where both methods improve; the tracking results from the Ascan-Net are closer to the labeled reference in the ex-vivo porcine dataset, and even more clearly so in the ex-vivo human dataset.

Fig. 7. M-mode images with the tracking results from the Ascan-Net and the Canny-edge detector on (a) the ex-vivo human dataset and (b) the ex-vivo porcine dataset.

We also plot the estimated DM location against the reference location for both ex-vivo datasets in Fig. 8 to examine the correlations. From Figs. 8(a) and 8(c), the predicted DM locations correlate well with the expert-labeled reference locations from 550 µm to 1150 µm. In Figs. 8(b) and 8(d), the Canny-edge detector’s results are less consistent, and they are worse on the ex-vivo porcine dataset because of the hand motion. Another factor to consider is the small variations in needle-insertion angle and position: even when these parameters are set before insertion, the tissue deforms as the needle advances, changing the relative angle and position between the fiber and the tissue. This effect is stronger when the needle is inserted freehand, as in the ex-vivo porcine dataset, where hand tremor prevented the surgeon from keeping the needle and fiber strictly at 60 degrees from vertical. However, since the collection angle and radius of the fiber are relatively small, the detected photons remain very limited even with small deviations from the nominal position, so the match to our MC-generated data is not strongly affected. The tracking results suggest that the Ascan-Net is more robust to low-SNR A-scans, to different corneal tissues, and to small variations in insertion angle and position.

Fig. 8. Scatter plots comparing (a) the proposed network and the reference on the ex-vivo human cornea, (b) the Canny-edge detector and the reference on the ex-vivo human cornea, (c) the proposed network and the reference on the ex-vivo porcine cornea, and (d) the Canny-edge detector and the reference on the ex-vivo porcine cornea.

The goal of needle guidance for DALK is to bring the needle as close as possible to the DM, which makes tracking accuracy even more critical when the needle is very close to the DM. To further evaluate performance, we calculated the tracking results for DM locations within 700 µm, corresponding to the needle tip being within 150 µm of the DM; the results are summarized in Table 4. Compared to the full results, both the mean and Std of the tracking error on the two ex-vivo datasets were reduced, owing to the SNR improvement as the DM moves closer to the fiber. The difference between the two methods is clear on the ex-vivo human dataset, consistent with the overall performance; on the ex-vivo porcine dataset, the difference is smaller because of the quality of that dataset. On the MC-generated dataset, the Ascan-Net improves its accuracy to 3.72 ± 3.26 µm, while the Canny-edge detector’s average error improves to 12.57 µm with an Std of 1.84 µm; this indicates that the Canny-edge method misidentified the DM location, leading to a nearly constant offset in its tracking results.

Table 4. Quantitative performance comparison of DM position tracking accuracy within 700 µm

4. Conclusion

In this work, we propose and evaluate a parallel MC CP-OCT simulation platform that can generate A-scan images of a wide range of corneal models during needle insertion. The generated images agree with the experimental A-scan images from ex-vivo eyes. Using these synthetic datasets to train the proposed end-to-end Ascan-Net, we achieved lower absolute tracking error in locating the DM than the Canny-edge-detector-based method on both the experimentally obtained and MC-generated datasets. The results also suggest that the Ascan-Net tracks the DM better across different corneal tissues and is more robust to noise and to small variations in insertion angle and position. Moreover, compared with the fine-tuned Ascan-Net, the model trained on the MC-generated dataset achieved better tracking accuracy across datasets, indicating that the synthetic data provide a comprehensive representation of different eye models. Future work will focus on implementing the Ascan-Net for real-time processing and on providing the DM location for autonomous needle insertion. More complex neural network designs could be evaluated to further improve the tracking accuracy. The MC CP-OCT simulation platform can also generate consecutive A-scan images or B-scan images as training datasets for different types of applications, and more broadly it can produce a wide range of datasets for other applications or structures for rapid and accurate CNN training and validation.

Funding

National Eye Institute; National Institutes of Health (1R01EY032127-01).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. Zhang and J. U. Kang, “Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system,” Opt. Express 18(11), 11772–11784 (2010).

2. Y. Huang, X. Liu, and J. U. Kang, “Real-time 3D and 4D Fourier domain Doppler optical coherence tomography based on dual graphics processing units,” Biomed. Opt. Express 3(9), 2162–2174 (2012).

3. J. U. Kang, Y. Huang, J. Cha, K. Zhang, Z. Ibrahim, W. A. Lee, G. Brandacher, and P. Gehlbach, “Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries,” J. Biomed. Opt. 17(8), 081403 (2012).

4. M. Wojtkowski, V. Srinivasan, J. G. Fujimoto, T. Ko, J. S. Schuman, A. Kowalczyk, and J. S. Duker, “Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography,” Ophthalmology 112(10), 1734–1746 (2005).

5. I. I. Bussel, G. Wollstein, and J. S. Schuman, “OCT for glaucoma diagnosis, screening and detection of glaucoma progression,” Br. J. Ophthalmol. 98(Suppl 2), ii15–ii19 (2014).

6. N. Z. Gregori, B. L. Lam, and J. L. Davis, “Intraoperative use of microscope-integrated optical coherence tomography for subretinal gene therapy delivery,” Retina 39(1), S9–S12 (2019).

7. S. Radhakrishnan, A. M. Rollins, J. E. Roth, S. Yazdanfar, V. Westphal, D. S. Bardenstein, and J. A. Izatt, “Real-time optical coherence tomography of the anterior segment at 1310 nm,” Arch. Ophthalmol. 119(8), 1179–1185 (2001).

8. J. Ehlers, “Intraoperative optical coherence tomography: past, present, and future,” Eye 30(2), 193–201 (2016).

9. R. S. Kumar, M. U. Jariwala, A. V. Sathidevi, J. P. Venugopal, N. K. Puttaiah, R. Balu, Dhanaraj Rao A S, and R. Shetty, “A pilot study on feasibility and effectiveness of intraoperative spectral-domain optical coherence tomography in glaucoma procedures,” Transl. Vis. Sci. Technol. 4(2), 2 (2015).

10. J. P. Ehlers, Y. S. Modi, P. E. Pecen, J. Goshe, W. J. Dupps, A. Rachitskaya, S. Sharma, A. Yuan, R. Singh, P. K. Kaiser, J. L. Reese, C. Calabrise, A. Watts, and S. K. Srivastava, “The DISCOVER study 3-year results: feasibility and usefulness of microscope-integrated intraoperative OCT during ophthalmic surgery,” Ophthalmology 125(7), 1014–1027 (2018).

11. A. Singh, N. Gupta, A. Ganger, D. Singh, S. Kashyap, and R. Tandon, “Sutureless customized lamellar corneal transplant in a patient with gelatinous drop-like corneal dystrophy,” Exp. Clin. Transplant. 17(6), 844–848 (2019).

12. N. Sharma, N. Aron, P. Kakkar, and J. S. Titiyal, “Continuous intraoperative OCT guided management of post-deep anterior lamellar keratoplasty Descemet’s membrane detachment,” Saudi J. Ophthalmol. 30(2), 133–136 (2016).

13. L. De Benito-Llopis, J. S. Mehta, R. I. Angunawela, M. Ang, and D. T. Tan, “Intraoperative anterior segment optical coherence tomography: a novel assessment tool during deep anterior lamellar keratoplasty,” Am. J. Ophthalmol. 157(2), 334–341.e3 (2014).

14. N. S. Anisimova, L. B. Arbisser, N. F. Shilova, M. A. Melnik, A. V. Belodedova, B. Knyazer, and B. E. Malyugin, “Anterior vitreous detachment: risk factor for intraoperative complications during phacoemulsification,” J. Cataract Refract. Surg. 46, 55–62 (2020).

15. C. I. Falkner-Radler, C. Glittenberg, M. Gabriel, and S. Binder, “Intrasurgical microscope-integrated spectral domain optical coherence tomography-assisted membrane peeling,” Retina 35(10), 2100–2106 (2015).

16. J. P. Ehlers, D. S. Petkovsek, A. Yuan, R. P. Singh, and S. K. Srivastava, “Intrasurgical assessment of subretinal tPA injection for submacular hemorrhage in the PIONEER study utilizing intraoperative OCT,” Ophthalmic Surg. Lasers Imaging Retina 46(3), 327–332 (2015).

17. R. Kafieh, H. Rabbani, and S. Kermani, “A review of algorithms for segmentation of optical coherence tomography from retina,” J. Med. Signals Sens. 3(1), 45 (2013).

18. B. Wang, W. Wei, S. Qiu, S. Wang, D. Li, and H. He, “Boundary aware U-Net for retinal layers segmentation in optical coherence tomography images,” IEEE J. Biomed. Health Inform. 25(8), 3029–3040 (2021).

19. S. Lee and J. U. Kang, “CNN-based CP-OCT sensor integrated with a subretinal injector for retinal boundary tracking and injection guidance,” J. Biomed. Opt. 26(06), 068001 (2021).

20. V. A. Dos Santos, L. Schmetterer, H. Stegmann, M. Pfister, A. Messner, G. Schmidinger, G. Garhofer, and R. M. Werkmeister, “CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning,” Biomed. Opt. Express 10(2), 622–641 (2019).

21. H. J. D. Koresh, S. Chacko, and M. Periyanayagi, “A modified capsule network algorithm for OCT corneal image segmentation,” Pattern Recognit. Lett. 143, 104–112 (2021).

22. L. Wang, M. Shen, Q. Chang, C. Shi, Y. Zhou, and J. Pu, “BG-CNN: a boundary guided convolutional neural network for corneal layer segmentation from optical coherence tomography,” in Proceedings of the 2020 5th International Conference on Biomedical Signal and Image Processing (2020), pp. 1–6.

23. X. Liu, X. Li, D.-H. Kim, I. Ilev, and J. U. Kang, “Fiber-optic Fourier-domain common-path OCT,” Chin. Opt. Lett. 6(12), 899–901 (2008).

24. J. U. Kang and G. W. Cheon, “Demonstration of subretinal injection using common-path swept source OCT guided microinjector,” Appl. Sci. 8(8), 1287 (2018).

25. G. W. Cheon, B. Gonenc, R. H. Taylor, P. L. Gehlbach, and J. U. Kang, “Motorized microforceps with active motion guidance based on common-path SSOCT for epiretinal membranectomy,” IEEE/ASME Trans. Mechatron. 22(6), 2440–2448 (2017).

26. S. Guo, N. R. Sarfaraz, W. G. Gensheimer, A. Krieger, and J. U. Kang, “Demonstration of optical coherence tomography guided big bubble technique for deep anterior lamellar keratoplasty (DALK),” Sensors 20(2), 428 (2020).

27. L. Wang, S. L. Jacques, and L. Zheng, “MCML—Monte Carlo modeling of light transport in multi-layered tissues,” Comput. Methods Programs Biomed. 47(2), 131–146 (1995).

28. G. Yao and L. V. Wang, “Monte Carlo simulation of an optical coherence tomography signal in homogeneous turbid media,” Phys. Med. Biol. 44(9), 2307–2320 (1999).

29. M. Kirillin, I. Meglinski, V. Kuzmin, E. Sergeeva, and R. Myllylä, “Simulation of optical coherence tomography images by Monte Carlo modeling based on polarization vector approach,” Opt. Express 18(21), 21714–21724 (2010).

30. V. Periyasamy and M. Pramanik, “Importance sampling-based Monte Carlo simulation of time-domain optical coherence tomography with embedded objects,” Appl. Opt. 55(11), 2921–2929 (2016).

31. A. E. Hartinger, A. S. Nam, I. Chico-Calero, and B. J. Vakoc, “Monte Carlo modeling of angiographic optical coherence tomography,” Biomed. Opt. Express 5(12), 4338–4349 (2014).

32. Y. Wang and L. Bai, “Accurate Monte Carlo simulation of frequency-domain optical coherence tomography,” Int. J. Numer. Methods Biomed. Eng. 35(4), e3177 (2019).

33. M. Anwar and K. D. Teichmann, “Big-bubble technique to bare Descemet’s membrane in anterior lamellar keratoplasty,” J. Cataract Refract. Surg. 28(3), 398–403 (2002).

34. M. Ünal, B. Bilgin, I. Yucel, Y. Akar, and C. Apaydin, “Conversion to deep anterior lamellar keratoplasty (DALK): learning curve with big-bubble technique,” Ophthalmic Surg. Lasers Imaging 41(6), 642–650 (2010).

35. S. Guo and J. U. Kang, “Convolutional neural network-based optical coherence tomography (OCT) A-scan segmentation and tracking platform using advanced Monte Carlo simulation,” in Novel Techniques in Microscopy (Optical Society of America, 2021), paper JW1A-16.

36. D. Chicea and L. Chicea, “On light scattering anisotropy of biological fluids (urine) characterization,” Romanian J. Phys. 52, 383 (2007).

37. I. T. Lima, A. Kalra, H. E. Hernández-Figueroa, and S. S. Sherif, “Fast calculation of multipath diffusive reflectance in optical coherence tomography,” Biomed. Opt. Express 3(4), 692–700 (2012).

38. T. Young-Schultz, S. Brown, L. Lilge, and V. Betz, “FullMonteCUDA: a fast, flexible, and accurate GPU-accelerated Monte Carlo simulator for light propagation in turbid media,” Biomed. Opt. Express 10(9), 4711–4726 (2019).

39. E. Alerstam, T. Svensson, and S. Andersson-Engels, “Parallel computing with graphics processing units for high-speed Monte Carlo simulation of photon migration,” J. Biomed. Opt. 13(6), 060504 (2008).

40. E. Alerstam, W. C. Y. Lo, T. D. Han, J. Rose, S. Andersson-Engels, and L. Lilge, “Next-generation acceleration and code optimization for light transport in turbid media using GPUs,” Biomed. Opt. Express 1(2), 658–675 (2010).

41. A. V. Yuzhakov, A. P. Sviridov, O. I. Baum, E. M. Shcherbakov, and E. N. Sobol, “Optical characteristics of the cornea and sclera and their alterations under the effect of nondestructive 1.56-µm laser radiation,” J. Biomed. Opt. 18(5), 058003 (2013).

42. S. Patel and L. Tutchenko, “The refractive index of the human cornea: a review,” Contact Lens Anterior Eye 42(5), 575–580 (2019).

43. S. V. Patel, J. W. McLaren, D. O. Hodge, and W. M. Bourne, “Normal human keratocyte density and corneal thickness measurement by using confocal microscopy in vivo,” Invest. Ophthalmol. Vis. Sci. 42, 333–339 (2001).

44. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference on Learning Representations (ICLR) (2015).

45. S. Siebelmann, P. Scholz, S. Sonnenschein, B. Bachmann, M. Matthaei, C. Cursiefen, and L. M. Heindl, “Anterior segment optical coherence tomography for the diagnosis of corneal dystrophies according to the IC3D classification,” Surv. Ophthalmol. 63(3), 365–380 (2018).
