Multi-phase level set algorithm based on fully convolutional networks (FCN-MLS) for retinal layer segmentation in SD-OCT images with central serous chorioretinopathy (CSC)

Open Access

Abstract

As a function of spatial position in optical coherence tomography (OCT) images, retinal layer thickness is an important diagnostic indicator for many retinal diseases, and reliable segmentation of the retinal layers is necessary for extracting useful clinical information. However, manual segmentation of these layers is time-consuming and prone to bias, while speckle noise, low image contrast, retinal detachment, and irregular morphological features make automatic segmentation challenging. To alleviate these challenges, in this paper we propose a new coarse-to-fine framework combining a fully convolutional network (FCN) with a multi-phase level set (named FCN-MLS) for automatic segmentation of nine boundaries in retinal SD-OCT images. In the coarse stage, the FCN learns the characteristics of specific retinal layer boundaries and classifies four retinal layers. These boundaries are then extracted, and the remaining boundaries are initialized from prior information about retinal layer thickness. To prevent the segmented interfaces from overlapping, a regional restriction technique is used in the multi-phase level set as it evolves the boundaries, achieving fine segmentation of the nine retinal layers. Experimental results on 1280 B-scans show that the proposed method can segment the nine retinal boundaries accurately. Compared with manual delineation, the overall mean absolute boundary location difference and the overall mean absolute thickness difference were 5.88 ± 2.38 μm and 5.81 ± 2.19 μm, respectively, showing good consistency with manual segmentation by physicians. Our method also outperforms state-of-the-art methods.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Spectral domain optical coherence tomography (SD-OCT) [1] is of great value for evaluating retinal cross-sections in vivo. Accurate detection and segmentation of retinal structures in SD-OCT is crucial for the diagnosis, prediction, and monitoring of retinal diseases. Central serous chorioretinopathy (CSC) is the fourth most common retinopathy after age-related macular degeneration, diabetic retinopathy, and branch retinal vein occlusion. Impaired retinal pigment epithelium (RPE) barrier function leads to serous RPE and/or neuroretinal detachment. On OCT images, the main manifestation is an arching of the boundary of the myoid and ellipsoid of inner segments (BMEIS); the arched area is the CSC lesion area (see Fig. 1(b)). Figure 1(a) shows a B-scan of a normal retina. Most patients are young adults who present with mildly decreased vision; objects appear deformed and smaller, accompanied by changes in color vision, and dark spots appear in the center of the visual field. The patient's hyperopic refraction also changes due to serous detachment in the macular area. The disease relapses easily, and repeated recurrence can cause irreversible damage to visual function. It is therefore important to clearly identify the CSC boundaries, which can help doctors diagnose, predict, and monitor central serous chorioretinopathy. However, manual segmentation of OCT retinal layers is tedious, time consuming, and often irreproducible, so many computer-aided segmentation methods have been developed.

Fig. 1 B-scans of OCT retina. (a) A B-scan of a normal retina. (b) A B-scan of a retina with CSC.

However, automatic segmentation of retinal layers is challenging for two reasons: (1) local contrast between the boundaries of different retinal layers is low (see Fig. 1); and (2) retinal layers often have irregular morphological features (see Fig. 1).

ILM: inner limiting membrane; RNFL: retinal nerve fiber layer; GCL: ganglion cell layer; IPL: inner plexiform layer; INL: inner nuclear layer; OPL: outer plexiform layer; ONL: outer nuclear layer; BMEIS: boundary of myoid and ellipsoid of inner segments; IB-OPR: the lower boundary related to CSC; OB-RPE: outer boundary of retinal pigment epithelium; CL1: the layer between BMEIS and IB-OPR; CL2: the layer between IB-OPR and OB-RPE.

The left side of each figure corresponds to the retinal layer boundaries: 1. ILM; 2. RNFL-GCL; 3. GCL-IPL; 4. IPL-INL; 5. INL-OPL; 6. OPL-HFL; 7. BMEIS; 8. IB-OPR; 9. OB-RPE.

The right side of each figure corresponds to the retinal layers: vitreous; RNFL; GCL; IPL; INL; OPL; ONL; CL1; CL2; choroid.

Generally, the existing retinal layer segmentation algorithms can be divided into two categories: fixed mathematical model-based methods [2–13] and machine learning-based methods [14–19].

Among the fixed mathematical model-based methods, Mayer et al. [2] presented a method to determine RNFL thickness in OCT images based on anisotropic noise suppression and deformable splines. Mujat et al. [3] presented a retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied to scans from both normal healthy subjects and glaucoma patients. Farsiu et al. [4] presented an automatic method for detection and segmentation of drusen in retinal images captured via high speed SD-OCT systems. Niu et al. [5] presented an automated algorithm to segment geographic atrophy (GA) regions in SD-OCT images. These active contour-based methods effectively handle topological changes in the retinal structure, but they rely on initialization and require strong prior information about retinal layer position and thickness. Chiu et al. [6] presented an automatic approach for segmenting seven retinal layers in SD-OCT images using graph theory and dynamic programming. Chiu et al. [7] presented a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming. LaRocca et al. [8] presented an automatic approach for segmenting corneal layer boundaries in SD-OCT images using graph theory and dynamic programming. Quellec et al. [9] presented an automated method for segmenting 3D fluid and fluid-associated abnormalities in the retina from 3D OCT images of subjects suffering from exudative age-related macular degeneration. Keller et al. [10] introduced a metric in graph search and demonstrated its application to segmenting retinal OCT images of macular pathology. Tian et al. [11] presented an automatic OCT retinal image analysis algorithm to accurately segment OCT volume data in the macular region. Srinivasan et al. [12] presented an automatic approach for segmenting retinal layers in SD-OCT images using sparsity-based denoising, support vector machines, graph theory, and dynamic programming. Karri et al. [13] presented an algorithm for layer-specific edge detection in retinal OCT images through a structured learning algorithm that simultaneously identifies individual layers and their corresponding edges. However, these graph-based surface segmentation and contour modeling methods require professionally designed, application-specific transformations, including cost functions, constraints, and model parameters.

Machine learning-based methods (e.g., support vector machines, neural networks, and random forests) are trained to extract features from each layer or its boundaries to determine layer boundaries [14,15]. Lang et al. [14] built a random forest classifier to segment eight retinal layers in macular cube images acquired by OCT; the classifier learns the boundary pixels between layers, producing an accurate probability map for each boundary, which is then processed to finalize the boundaries. McDonough et al. [15] proposed a method by which the boundaries of retinal layers in OCT images can be identified from a simple initial user input; it is trained to identify points within each layer, from which the boundaries between the retinal layers are estimated, focusing on training neural networks to identify the layers themselves rather than the boundaries. With the development of deep learning, deep neural networks [16–18] have come to be considered a very powerful tool in computer vision. Deep learning methods matured from the convolutional neural network (CNN) of Krizhevsky et al. [19], which automatically learns a series of increasingly complex features and related classifiers directly from the training data. In recent years, CNN-based methods have been proposed as an alternative to traditional machine learning for normal and pathological classification of OCT images. Fang et al. [20] proposed a new convolutional neural network and graph-based search method for automatically segmenting nine layer boundaries in non-exudative AMD OCT images; this was the first CNN to segment the inner layers of the retina. Shah et al. [21] presented a method for multiple surface segmentation using CNNs and applied it to retinal layer segmentation in normal and intermediate AMD OCT scans at the B-scan level in a single pass. Apostolopoulos et al. [22] proposed a fully convolutional architecture that combines dilated residual blocks in an asymmetric U-shape configuration and can segment multiple layers of highly pathological eyes in one shot, demonstrating its effectiveness on patients with advanced AMD. Ben-Cohen et al. [23] explored a U-Net-based fully convolutional model, training the network to segment different layers, extracting probability maps of each layer for each test case, and then using a Sobel edge filter together with graph search to detect each layer boundary. Roy et al. [24] proposed a new fully convolutional deep architecture (ReLayNet) for semantic segmentation of retinal OCT B-scans into 7 retinal layers and fluid masses, and substantiated its effectiveness on a publicly available benchmark. Although these frameworks proved effective, they depend on the availability of large annotated data sets.

Level set algorithms have also been applied successfully. Yazdanpanah et al. [25] presented a semi-automated segmentation algorithm to detect intra-retinal layers in OCT images acquired from rodent models of retinal degeneration: a multi-phase framework with a circular shape prior models the boundaries of the retinal layers and estimates the shape parameters using least squares, and a contextual scheme balances the weights of different terms in the energy function. Novosel et al. [26] presented a segmentation method that operates on attenuation coefficients and incorporates anatomical knowledge about the retina, simultaneously segmenting the retinal layers via a new flexible coupling approach. Novosel et al. [27] presented an approach to jointly segment retinal layers and lesions in eyes with topology-disrupting retinal diseases using a loosely coupled level set framework.

The prior information about the middle boundaries of the retinal layers is clear, and the relative positional relationship between these boundaries is minimally affected by retinopathy. This prior knowledge can therefore be used to initialize the level set functions, which alleviates the main disadvantage of level set algorithms, namely their dependence on initialization, and allows the level set algorithm and the deep learning network to be combined effectively. Inspired by this, we propose a new segmentation model for retinal layers in SD-OCT images with CSC. The combination of an FCN model and a multi-phase level set method (named FCN-MLS) produces semantically accurate predictions and detailed segmentation maps with reasonable computational efficiency. We first decompose the training OCT images into blocks, which are divided into five classes. The FCN automatically extracts features centered on the retinal layer boundaries, and a soft-max layer is trained to separate the five classes, from which five boundaries are extracted. The remaining boundaries are initialized based on prior information about retinal layer thickness. Finally, we use a multi-phase level set model to evolve the contours to the final boundaries. In summary, our work makes two main contributions:

  • (1) The multi-phase level set model is combined with the FCN network to implement coarse-to-fine segmentation of nine retinal layers simultaneously.
  • (2) To alleviate problems caused by low contrast and blurred layer boundaries, regional restriction is employed to avoid overlap of the segmentation interfaces and obtain accurate segmentation of the retinal layers.

In the following, we first introduce the deep-learning-based model for OCT image segmentation; two evaluation metrics, the mean absolute boundary difference (mabd) and the mean absolute thickness difference (matd), are then employed to evaluate the experimental results on OCT images of normal eyes and eyes with central serous chorioretinopathy.

2. Method

The proposed framework consists of three stages: pre-processing, FCN-based layer boundary classification, and layer segmentation by a distance regularized level set evolution (DRLSE) algorithm. The flowchart of the proposed method is illustrated in Fig. 2.

Fig. 2 Flowchart of the proposed method.

2.1 Pre-processing

In practical applications, because of the high scattering of biological tissues, OCT image quality is strongly affected by speckle noise, which blurs image details and reduces image clarity. To preserve boundary information while denoising the retinal image, we use an improved anisotropic bilateral filter based on the filter of Tomasi et al. [28], with σ = 2 and half-size = 4.
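As a rough sketch of this step (the mapping of the paper's parameters onto OpenCV's bilateral filter is our assumption: a half-size of 4 gives a 9 × 9 window, and the range sigma is an illustrative placeholder):

```python
import cv2
import numpy as np

def denoise_bscan(bscan: np.ndarray) -> np.ndarray:
    """Bilateral filtering of a single OCT B-scan (sketch).

    Assumptions: the paper's half-size of 4 corresponds to a 9x9
    window (d = 2*4 + 1 = 9); sigma = 2 is used as the spatial sigma,
    and the range sigma (sigmaColor) is an illustrative guess.
    """
    img = bscan.astype(np.float32)
    return cv2.bilateralFilter(img, d=9, sigmaColor=25.0, sigmaSpace=2.0)
```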

2.2 Coarse layer boundary classification by FCN

FCN, proposed by Long et al. [29], transforms the fully connected layers of a CNN into convolutional layers and adds deconvolution layers after the last convolutional layer to restore the original image size. The category of each pixel is computed through the soft-max layer, so the output of the FCN has the same size as the original image.

We adopted the FCN-8s network structure of Long et al. [29], based on VGG-16 [30], to segment the retinal layers coarsely. The model consists of 13 convolutional layers, 5 pooling layers, ReLU activations [31], and 3 fully connected layers with corresponding deconvolution layers. A cross-entropy loss function is adopted for the FCN. The basic formula is:

$$L_{\log} = -\sum_{i,j} \omega(x_{i,j})\, y_{l}(x_{i,j}) \log p_{l}(x_{i,j}) \qquad (1)$$

where $\omega$ is the weight assigned to pixel $x_{i,j}$ of the OCT image; $y_{l}$ is the true label of pixel $x_{i,j}$: if $x_{i,j}$ belongs to class $l$, then $y_{l} = 1$, otherwise $y_{l} = 0$; $p_{l}$ is the predicted probability of class $l$; and $i, j$ range over the image.
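A minimal numpy sketch of this weighted cross-entropy (Eq. (1)); the per-pixel weighting scheme is our assumption, since the paper does not specify it:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, weights, eps=1e-12):
    """Weighted pixel-wise cross-entropy (Eq. (1)), a sketch.

    probs:   (H, W, C) soft-max probabilities from the FCN
    labels:  (H, W) integer class map (0..C-1), the ground truth
    weights: (H, W) per-pixel weights; up-weighting boundary pixels
             is a common choice, but the exact scheme is an assumption
    """
    H, W, C = probs.shape
    onehot = np.eye(C)[labels]                  # (H, W, C) indicator y_l
    logp = np.log(probs + eps)                  # avoid log(0)
    per_pixel = -(onehot * logp).sum(axis=-1)   # -sum_l y_l log p_l
    return (weights * per_pixel).sum()
```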

2.3 Fine layer segmentation by level set functions with regional restriction

2.3.1 Layer segmentation by level set functions

The initialization of the level set function $z = \phi(x, y)$ [32,33] is based on the classification results of the FCN. The five boundaries whose gradient information changes significantly are selected as the zero level ($\phi = 0$) of the level set function, which is the target contour of the segmentation. The positive level ($\phi > 0$) contains the upper part of the retinal layers, and the region below the boundary is the negative level ($\phi < 0$) (see Figs. 3 and 4). The right side of Figs. 3 and 4 shows the projection of the level set function onto a two-dimensional plane. The remaining four boundaries are then initialized based on prior information about retinal layer thickness in OCT.
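As an illustration of this initialization, here is a minimal sketch that builds φ as the signed vertical distance to one boundary curve (the signed-distance construction is our assumption; any function with the stated sign convention would do):

```python
import numpy as np

def init_level_set(boundary_rows, shape):
    """Initialize phi as signed vertical distance to a boundary curve.

    boundary_rows: (W,) row index of the boundary in each column
                   (e.g., extracted from the FCN class map)
    shape:         (H, W) image size
    Returns phi with phi > 0 above the boundary, phi = 0 on it,
    and phi < 0 below it, matching the convention of Figs. 3-4.
    """
    H, W = shape
    rows = np.arange(H)[:, None]            # (H, 1) row coordinates
    phi = boundary_rows[None, :] - rows     # positive above the curve
    return phi.astype(np.float64)
```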

Fig. 3 Schematic diagram of level set function initialization and its projection onto a two-dimensional plane.

Fig. 4 Schematic diagram of the proposed level set function initialization and its projection onto a two-dimensional plane. (The blue area represents the positive level of the level set function and the red area represents the negative level.)

In the evolution of the level set function, an edge-based geometric active contour model is adopted to obtain the segmentation of the remaining boundaries. The energy function is defined as:

$$\varepsilon(\phi) = \mu R(\phi) + \alpha L(\phi) + \beta S(\phi) \qquad (2)$$

$$L(\phi) \triangleq \int_{\Omega} g\, \delta(\phi)\, |\nabla \phi|\, dx \qquad (3)$$

$$S(\phi) \triangleq \int_{\Omega} g\, H(-\phi)\, dx \qquad (4)$$

$$g \triangleq \frac{1}{1 + |\nabla G_{\sigma} * I|^{2}} \qquad (5)$$

$$\delta_{\varepsilon}(\eta) = \begin{cases} \dfrac{1}{2\varepsilon}\left[1 + \cos\left(\dfrac{\pi\eta}{\varepsilon}\right)\right], & |\eta| \le \varepsilon \\ 0, & |\eta| > \varepsilon \end{cases} \qquad (6)$$

$$H_{\varepsilon}(\eta) = \begin{cases} \dfrac{1}{2}\left(1 + \dfrac{\eta}{\varepsilon} + \dfrac{1}{\pi}\sin\left(\dfrac{\pi\eta}{\varepsilon}\right)\right), & |\eta| \le \varepsilon \\ 1, & \eta > \varepsilon \\ 0, & \eta < -\varepsilon \end{cases} \qquad (7)$$

where $R(\phi)$ is the distance regularization term of DRLSE [33], and $\alpha > 0$ and $\beta \in \mathbb{R}$ are the coefficients of the energy terms $L(\phi)$ and $S(\phi)$. $g$ is the edge indicator function defined in Eq. (5), $I$ is an image on a domain $\Omega$, and $G_{\sigma}$ is a Gaussian kernel with standard deviation $\sigma$. The Dirac delta function $\delta$ and the Heaviside function $H$ in $L$ and $S$ are approximated by the smooth functions $\delta_{\varepsilon}$ (Eq. (6)) and $H_{\varepsilon}$ (Eq. (7)), as in many level set methods; note that $\delta_{\varepsilon}$ is the derivative of $H_{\varepsilon}$, i.e., $H_{\varepsilon}' = \delta_{\varepsilon}$. The parameter $\varepsilon$ is usually set to 1.5. The algorithm aims at minimizing the energy function (Eq. (2)). In the evolution process, $L(\phi)$ computes the line integral of the function $g$ along the zero level contour of $\phi$ through the Dirac delta function $\delta$. Parameterizing the zero level set of $\phi$ as a contour $C: [0,1] \to \Omega$, $L(\phi)$ can be expressed as the line integral $\int_0^1 g(C(s))\,|C'(s)|\,ds$; $L(\phi)$ is minimized when the zero-level contour of $\phi$ is located at the retinal layer boundaries. In the evolution of the level set, $S(\phi)$ computes a weighted area of the region $\Omega_{\phi} \triangleq \{\eta : \phi(\eta) < 0\}$, which is introduced to accelerate the motion of the zero-level contour, especially when the initial contour is far from the target boundary. Specifically, if the initial contour is below the retinal layer boundary, the coefficient $\beta$ of the weighted area term should be positive, making the zero-level contour move upward during the evolution; if the initial contour is above the boundary, $\beta$ should be negative to drive the contour downward. When the zero-level contour reaches the retinal layer boundary, the value of $g$ approaches zero, halting the evolution.
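For concreteness, here is a simplified numerical sketch of this evolution: a plain gradient-descent step on αL(φ) + βS(φ), with the distance regularization term R(φ) omitted for brevity and all parameter values illustrative:

```python
import numpy as np
from scipy import ndimage

def edge_indicator(image, sigma=1.5):
    """g = 1 / (1 + |grad(G_sigma * I)|^2), Eq. (5)."""
    smoothed = ndimage.gaussian_filter(image.astype(np.float64), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)

def dirac(phi, eps=1.5):
    """Smoothed Dirac delta, Eq. (6)."""
    d = (0.5 / eps) * (1.0 + np.cos(np.pi * phi / eps))
    return np.where(np.abs(phi) <= eps, d, 0.0)

def evolve_step(phi, g, alpha=5.0, beta=1.5, dt=1.0):
    """One descent step on alpha*L(phi) + beta*S(phi), R(phi) omitted."""
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-10
    nx, ny = gx / mag, gy / mag
    # edge term: delta(phi) * div(g * grad(phi)/|grad(phi)|)
    div = np.gradient(g * nx, axis=1) + np.gradient(g * ny, axis=0)
    d = dirac(phi)
    dphi = alpha * d * div + beta * g * d   # edge term + weighted area term
    return phi + dt * dphi
```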

2.3.2 Regional restriction

Because the gradient information at the middle retinal layer interfaces is weak, the optimal solutions obtained by evolving multiple level set functions may overlap. To avoid overlapping interfaces, we therefore adopt the idea of regional restriction coupling (see Fig. 5): the evolution region of each level set function is kept no lower than the minimum of the upper boundary of the interface to be segmented, and at the same time no higher than the maximum of its lower boundary. As the level set functions of the upper and lower interfaces evolve, the admissible evolution region of the interface being segmented is updated in real time.
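A minimal sketch of this restriction (assuming, as above, that each boundary is stored as one row index per column and that rows increase downward; the paper's exact coupling rule may differ):

```python
import numpy as np

def restrict_region(phi, upper_rows, lower_rows, margin=1):
    """Clamp the zero level of phi between two neighboring boundaries.

    upper_rows, lower_rows: (W,) row indices of the already-evolved
    neighboring boundaries (rows increase downward). The zero crossing
    of phi in each column is forced to stay strictly between them.
    """
    H, W = phi.shape
    rows = np.arange(H)[:, None]
    lo = upper_rows[None, :] + margin   # cannot rise above the upper boundary
    hi = lower_rows[None, :] - margin   # cannot sink below the lower boundary
    # Force phi positive above the admissible band and negative below it,
    # so the zero-level contour stays inside [lo, hi].
    phi = np.where(rows < lo, np.maximum(phi, 1.0), phi)
    phi = np.where(rows > hi, np.minimum(phi, -1.0), phi)
    return phi
```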

Fig. 5 Principle of regional restriction.

3. Experimental results and analysis

The FCN model is implemented with the MATLAB MatConvNet toolkit. The CPU is an Intel Core i7-4980HQ at 2.80 GHz and the GPU is an NVIDIA GeForce GTX 980M. The hyperparameters of the training phase are: 50 epochs in total, a batch size of 10, and an initial learning rate of 0.0001. Ten SD-OCT cubes are selected as experimental subjects, including 5 abnormal eyes with CSC (640 B-scans) and 5 normal eyes (640 B-scans), giving a data set of 1280 B-scan images in total. All cubes were obtained using a Cirrus SD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA); each cube contains 128 contiguous 512 × 1024-pixel B-scans. Manual segmentation of the retinal boundaries by an experienced grader is taken as the ground truth. 860 B-scans were randomly selected as the training set, 360 B-scans as the validation set, and the remaining 60 B-scans as the test set. Figure 6 shows an example of the FCN classification and the corresponding extracted boundaries. Class 0 represents the background; Class 1 the ILM layer; Class 2 the area between RNFL-GCL and BMEIS; Class 3 the CSC lesion area; and Class 4 the CL2 layer between IB-OPR and OB-RPE, for a total of 5 classes (see Fig. 6(a)). The five extracted boundaries are ILM, RNFL-GCL, BMEIS, IB-OPR, and OB-RPE (see Fig. 6(b)); the remaining boundaries, GCL-IPL, IPL-INL, INL-OPL, and OPL-HFL, are then initialized (see Fig. 6(c)).

Fig. 6 Automatic surface detection. (a) Layer classification through the FCN; (b) initialization interface; (c) remaining layer initialization based on retinal layer thickness.

3.1 Quantitative evaluation

In order to quantitatively evaluate the performance of the proposed method, we compared our segmentation results with the FCN-based method of Long et al. [29], the graph-based segmentation of Chiu et al. [6], OCT-Explorer [34,35], and ReLayNet proposed by Roy et al. [24]. The results show that the similarity to the manual delineations is higher for the proposed method than for the others.

3.1.1. Evaluation of the layer boundary position

We use the mean absolute boundary difference (mabd) metric proposed by Niu et al. [36] to estimate the boundary difference between segmentation methods, defined as:

$$mabd = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{512}\left|C_{b1}^{i} - C_{b2}^{i}\right| \qquad (8)$$

where $N$ is the width of each B-scan, and $C_{b1}^{i}$ and $C_{b2}^{i}$ are the axial coordinates of the same boundary at the $i$-th column of the B-scan produced by the manual segmentation and by the method under comparison, respectively.

The standard deviation (SD) can be expressed as:

$$sd = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{512}\left|C_{b1}^{i} - C_{b2}^{i}\right| - mabd\right)^{2}} \qquad (9)$$
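A numpy sketch of both metrics (the 1/512 scaling follows Eqs. (8) and (9); conversion to micrometers, which the tables report, would additionally require the device's axial pixel pitch and is left out here):

```python
import numpy as np

def boundary_metrics(manual_rows, auto_rows, scale=1.0 / 512.0):
    """mabd and sd between two boundary traces (Eqs. (8)-(9)).

    manual_rows, auto_rows: (N,) axial boundary coordinates per
    A-scan, from manual segmentation and the compared method.
    """
    diff = scale * np.abs(manual_rows - auto_rows)
    mabd = diff.mean()
    sd = np.sqrt(((diff - mabd) ** 2).mean())
    return mabd, sd
```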

We performed comparisons on 4 normal SD-OCT cubes and 5 SD-OCT cubes with CSC lesions. The mean absolute boundary position differences and standard deviations are shown in Tables 1 and 2, respectively.

Table 1. Mean absolute boundary positioning differences (mean ± SD) calculated from the abnormal images with CSC, in micrometers.

Table 2. Mean absolute boundary positioning differences (mean ± SD) calculated from the normal images, in micrometers.

As can be seen from Tables 1 and 2, the overall mean absolute boundary position differences of the proposed method on the retinal images with CSC lesions and on the normal images are 5.88 ± 2.38 and 5.59 ± 3.21, respectively, significantly better than Chiu et al. [6], OCT-Explorer [34,35], Long et al. [29], and Roy et al. [24]. For OCT images with CSC lesions, the results produced by OCT-Explorer are inaccurate. Since the public implementation of the graph-based algorithm [6] does not segment the GCL-IPL and IB-OPR boundaries, the corresponding mean absolute differences are reported as empty. The segmentation of the middle retinal boundaries by the FCN network alone is not ideal, although its segmentation of boundaries with obvious gradient changes is better than that of Chiu et al. [6] and OCT-Explorer [34,35]. For low-contrast OCT images, Roy et al. [24] achieve slightly better segmentation than the FCN network, but still cannot separate the IPL and INL layers well. Our method first uses the FCN to segment the boundaries with distinct gradient changes, so these specific boundaries are segmented best; the remaining boundaries are then located from prior knowledge of layer thickness and the classification result, which is more accurate. In the gradient-based level set evolution, the regional restriction technique avoids overlap between interfaces, and the segmentation there is also excellent. Therefore, both the overall mean absolute boundary difference of our method and the per-boundary differences have the smallest errors, i.e., the highest similarity to the ground truth.

3.1.2. Quantitative evaluation of retina thickness

Since changes due to fundus disease are mainly manifested as changes in retinal layer thickness, reliable measurement of retinal thickness is critical for quantifying the extent of fundus disease. We use the mean absolute thickness difference (matd) metric proposed by Niu et al. [36] to estimate the layer differences between segmentation methods. The thickness of each retinal layer is calculated from the positions of its adjacent boundaries, and the metric is defined as:

$$matd = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{512}\left|l_{j1}^{i} - l_{j2}^{i}\right| \qquad (10)$$

where $N$ is the width of each B-scan and $l_{j}^{i}$ is the thickness of the $j$-th retinal layer in the $i$-th column, calculated from the positions of the adjacent boundaries:

$$l_{j} = C_{b_{j}} - C_{b_{j-1}} \qquad (11)$$

where $j$ indexes the retinal layers; the nine segmented boundaries yield eight layer thicknesses.
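A sketch of Eqs. (10) and (11) under the same per-column boundary representation used above:

```python
import numpy as np

def layer_thicknesses(boundaries):
    """Per-column thickness of each layer from stacked boundary traces.

    boundaries: (9, N) axial positions of the nine boundaries,
    ordered from ILM (top) to OB-RPE (bottom).
    Returns an (8, N) array of inter-boundary thicknesses (Eq. (11)).
    """
    return np.diff(boundaries, axis=0)

def matd(manual_bounds, auto_bounds, scale=1.0 / 512.0):
    """Mean absolute thickness difference per layer (Eq. (10))."""
    t1 = layer_thicknesses(manual_bounds)
    t2 = layer_thicknesses(auto_bounds)
    return (scale * np.abs(t1 - t2)).mean(axis=1)   # one value per layer
```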

Tables 3 and 4 show the mean absolute thickness differences and standard deviations of the retinal layer thicknesses compared with the other methods.

Table 3. Mean absolute thickness differences (mean ± SD) calculated from the abnormal images with CSC, in micrometers.

Table 4. Mean absolute thickness differences (mean ± SD) calculated from the normal eyes' images, in micrometers.

As can be seen from Tables 3 and 4, for retinal layer segmentation with CSC lesions and for normal retinas, the overall mean absolute thickness differences of our method were 5.81 ± 2.16 and 4.78 ± 2.85, respectively, the smallest errors relative to manual segmentation.

3.2. Qualitative evaluation

Figure 7 compares the FCN's full classification with classification of only the specific boundaries. As the first row shows, with full classification the middle boundaries, whose gradient information changes little, cannot be accurately located, and the segmentation is not ideal; the initialization would then require many interpolation and fitting operations. Therefore, we use the FCN to segment only the five critical boundaries with significant gradient changes. The second row shows that this classification is more accurate. Each column corresponds to the same B-scan: the first column shows a normal retinal image, and the second and third columns show retinal images with CSC lesions in the foveal region.

Fig. 7 The first row (a), (b), (c) shows the results of the FCN full classification; the second row (d), (e), (f) shows the results when only the required interfaces are classified by the FCN.

It can be seen that, since the gradient changes between the specific boundaries are obvious and do not interfere with each other, segmenting fewer boundaries gives a better result. However, there is still slight misclassification in the foveal area, which the subsequent fine level set segmentation resolves.

To evaluate the quality of the segmentation directly, Fig. 8 shows cross-sectional images from the same cube segmented by four different methods. The graph-theory-based method of Chiu et al. [6] has the worst segmentation, while OCT-Explorer [34,35] performs poorly at the upper boundary of the CSC lesion. It is worth mentioning that the overall segmentation of the ReLayNet network proposed by Roy et al. [24] is fairly good, but the GCL-IPL and IPL-INL boundaries overlap. This is why most deep learning networks do not segment this layer: the average thickness of the IPL layer is small, its gradient information changes little, and it is difficult to distinguish when training on low-contrast OCT images. This underlines the importance of regional restriction. In addition, the IB-OPR boundary cannot be segmented from gradient information alone. In contrast, our approach is better suited to CSC lesion segmentation and is the closest to manual segmentation.

Fig. 8 (a)-(e) compare the method of Chiu et al. [6], OCT-Explorer [34,35], Roy et al. [24], our proposed method, and the manual segmentation result.

Since layer segmentation is performed over the entire image area, we calculated a 2D position map (see Fig. 9) for each retinal boundary; the position of each layer boundary is given by its depth from the top of the cross-sectional image. Figures 9(a)-9(i) show the 2D position maps of the ILM, RNFL-GCL, GCL-IPL, IPL-INL, INL-OPL, OPL-HFL, BMEIS, IB-OPR, and OB-RPE boundaries, respectively. Subtracting the position of the ILM boundary from the position of the RPE boundary gives the total retinal thickness map (see Fig. 9(j)), where the color scale represents the distance between the two boundaries.

Fig. 9 (a)-(i) are the boundary position maps of a cube; (j) is the total thickness map.

As can be seen from Fig. 9, the color bar on the right side of each map represents the coordinate value of the boundary: the deeper the yellow, the larger the value. Starting from the ILM boundary, the value of each successive boundary gradually increases. In addition, the values at the center of each position map are small, which indicates that the boundary around the fovea is correctly segmented.

A cube represents one patient's data set, and Fig. 10 shows 3-D visualizations of each segmented boundary over the entire cube. Figures 10(a)-10(i) show the 3-D visualizations of the segmentation results for the nine boundaries ILM, RNFL-GCL, GCL-IPL, IPL-INL, INL-OPL, OPL-HFL, BMEIS, IB-OPR, and OB-RPE, respectively, and Fig. 10(j) shows the 3-D visualization of all boundaries together.

 figure: Fig. 10

Fig. 10 The 3-D surfaces visualization of a cube.

Download Full Size | PDF

It can be seen from Figs. 10(a)-10(i) that the coordinate values of the successive 3-D visualizations decrease, indicating that the position of each segmented boundary in the OCT retinal image is progressively lower. In addition, Fig. 10(j) clearly shows the positional relationships of all the segmented boundaries, and it can be seen that there is no overlap between them.

4. Conclusion

In this paper, a new joint automatic segmentation model for SD-OCT retinal layers and CSC lesions is proposed. The framework is based on an FCN network and a multi-phase level set method. First, the retinal layers are coarsely classified by the FCN, and this classification initializes the level set functions. Then, using prior information about retinal layer positions and thicknesses, the multi-phase level set method automatically segments nine boundaries of the OCT image. The model guarantees continuous retinal layer boundaries in all images and provides an efficient segmentation model for low-quality images with low contrast and blurred boundaries. Compared with the traditional level set method, overlap between retinal layer boundaries is avoided, and compared with using a neural network model alone, the need for large annotated training sets is effectively reduced.

Funding

National Natural Science Foundation of China (No.61471226, 61802234, 61701192), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (No.JQ201516), and the Taishan Scholars Project of Shandong Province, Primary Research and Development Plan of Shandong Province (No.2018GGX101018), Natural Science Foundation of Shandong Province (No.ZR2019QF007, ZR2017QF004), China Postdoctoral Project (No.40411583, 2017M612178).

Disclosures

The authors declare there are no conflicts of interest.

References

1. J. F. de Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Opt. Lett. 28(21), 2067–2069 (2003). [CrossRef]   [PubMed]  

2. M. A. Mayer, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express 1(5), 1358–1383 (2010). [CrossRef]   [PubMed]  

3. M. Mujat, R. Chan, B. Cense, B. Park, C. Joo, T. Akkin, T. Chen, and J. de Boer, “Retinal nerve fiber layer thickness map determined from optical coherence tomography images,” Opt. Express 13(23), 9480–9491 (2005). [CrossRef]   [PubMed]  

4. S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, “Fast detection and segmentation of drusen in retinal optical coherence tomography images,” in Ophthalmic Technologies XVIII (International Society for Optics and Photonics, 2008), p. 68440D.

5. S. Niu, L. de Sisternes, Q. Chen, T. Leng, and D. L. Rubin, “Automated geographic atrophy segmentation for SD-OCT images using region-based C-V model via local similarity factor,” Biomed. Opt. Express 7(2), 581–600 (2016). [CrossRef]   [PubMed]  

6. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

7. S. J. Chiu, C. A. Toth, C. Bowes Rickman, J. A. Izatt, and S. Farsiu, “Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming,” Biomed. Opt. Express 3(5), 1127–1140 (2012). [CrossRef]   [PubMed]  

8. F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, “Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming,” Biomed. Opt. Express 2(6), 1524–1538 (2011). [CrossRef]   [PubMed]  

9. G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abràmoff, and M. Sonka, “Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010). [CrossRef]   [PubMed]  

10. B. Keller, D. Cunefare, D. S. Grewal, T. H. Mahmoud, J. A. Izatt, and S. Farsiu, “Length-adaptive graph search for automatic segmentation of pathological features in optical coherence tomography images,” J. Biomed. Opt. 21(7), 076015 (2016). [CrossRef]   [PubMed]  

11. J. Tian, B. Varga, G. M. Somfai, W.-H. Lee, W. E. Smiddy, and D. C. DeBuc, “Real-time automatic segmentation of optical coherence tomography volume data of the macular region,” PLoS One 10(8), e0133908 (2015). [CrossRef]   [PubMed]  

12. P. P. Srinivasan, S. J. Heflin, J. A. Izatt, V. Y. Arshavsky, and S. Farsiu, “Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology,” Biomed. Opt. Express 5(2), 348–365 (2014). [CrossRef]   [PubMed]  

13. S. P. Karri, D. Chakraborthi, and J. Chatterjee, “Learning layer-specific edges for segmenting retinal layers with large deformations,” Biomed. Opt. Express 7(7), 2888–2901 (2016). [CrossRef]   [PubMed]  

14. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). [CrossRef]   [PubMed]  

15. K. McDonough, I. Kolmanovsky, and I. V. Glybina, “A neural network approach to retinal layer boundary identification from optical coherence tomography images,” in 2015 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB) (IEEE, 2015), pp. 1–8. [CrossRef]

16. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]

17. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313(5786), 504–507 (2006). [CrossRef]

18. G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18(7), 1527–1554 (2006). [CrossRef]   [PubMed]  

19. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 60, 1097–1105 (2012).

20. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef]   [PubMed]  

21. A. Shah, L. Zhou, M. D. Abrámoff, and X. Wu, “Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images,” Biomed. Opt. Express 9(9), 4509–4526 (2018). [CrossRef]   [PubMed]  

22. S. Apostolopoulos, S. De Zanet, C. Ciller, S. Wolf, and R. Sznitman, “Pathological OCT retinal layer segmentation using branch residual u-shape networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2017), pp. 294–301. [CrossRef]

23. A. Ben-Cohen, D. Mark, I. Kovler, D. Zur, A. Barak, M. Iglicki, and R. Soferman, “Retinal layers segmentation using fully convolutional network in OCT images,” RSIP Vision (2017).

24. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef]   [PubMed]  

25. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef]   [PubMed]  

26. J. Novosel, G. Thepass, H. G. Lemij, J. F. de Boer, K. A. Vermeer, and L. J. van Vliet, “Loosely coupled level sets for simultaneous 3D retinal layer segmentation in optical coherence tomography,” Med. Image Anal. 26(1), 146–158 (2015). [CrossRef]   [PubMed]  

27. J. Novosel, K. A. Vermeer, J. H. de Jong, Ziyuan Wang, and L. J. van Vliet, “Joint segmentation of retinal layers and focal lesions in 3-D OCT data of topologically disrupted retinas,” IEEE Trans. Med. Imaging 36(6), 1276–1286 (2017). [CrossRef]   [PubMed]  

28. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Sixth International Conference on Computer Vision (ICCV) (1998), pp. 839–846.

29. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

30. W. Yu, K. Yang, Y. Bai, T. Xiao, H. Yao, and Y. Rui, “Visualizing and comparing AlexNet and VGG using deconvolutional layers,” in Proceedings of the 33rd International Conference on Machine Learning (2016).

31. V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10) (2010), pp. 807–814.

32. C. Li, C. Xu, C. Gui, and M. D. Fox, “Level set evolution without re-initialization: a new variational formulation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005) (IEEE, 2005), pp. 430–436.

33. C. Li, C. Xu, C. Gui, and M. D. Fox, “Distance regularized level set evolution and its application to image segmentation,” IEEE Trans. Image Process. 19(12), 3243–3254 (2010). [CrossRef]   [PubMed]  

34. B. Antony, M. D. Abràmoff, L. Tang, W. D. Ramdas, J. R. Vingerling, N. M. Jansonius, K. Lee, Y. H. Kwon, M. Sonka, and M. K. Garvin, “Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images,” Biomed. Opt. Express 2(8), 2403–2416 (2011). [CrossRef]   [PubMed]  

35. M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Retinal imaging and image analysis,” IEEE Rev. Biomed. Eng. 3, 169–208 (2010). [CrossRef]   [PubMed]  

36. S. Niu, Q. Chen, L. de Sisternes, D. L. Rubin, W. Zhang, and Q. Liu, “Automated retinal layers segmentation in SD-OCT images using dual-gradient and spatial correlation smoothness constraint,” Comput. Biol. Med. 54, 116–128 (2014). [CrossRef]   [PubMed]  
