Optica Publishing Group

Intra-retinal layer segmentation in optical coherence tomography images

Open Access

Abstract

Retinal layer thickness, evaluated as a function of spatial position from optical coherence tomography (OCT) images, is an important diagnostic marker for many retinal diseases. However, due to factors such as speckle noise, low image contrast, and irregularly shaped morphological features such as retinal detachments, macular holes, and drusen, accurate segmentation of individual retinal layers is difficult. To address this issue, a computational method for retinal layer segmentation from OCT images is presented. An efficient two-step kernel-based optimization scheme is employed to first identify the approximate locations of the individual layers, which are then refined to obtain accurate segmentation results. The performance of the algorithm was tested on a set of retinal images acquired in-vivo from healthy and diseased rodent models with a high speed, high resolution OCT system. Experimental results show that the proposed approach provides accurate segmentation of OCT images affected by speckle noise, even under sub-optimal conditions of low image contrast and in the presence of irregularly shaped structural features.

©2009 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) is a powerful imaging technique, capable of acquiring non-invasive, high resolution, 3D images of the structural composition of biological tissue [1,2]. One important biomedical application of OCT is in ophthalmology, where high resolution volumetric retinal imaging allows for clinical diagnosis and investigation of retinal diseases [3]. Morphological characteristics that can be viewed and quantified from OCT tomograms, such as the thickness of individual intra-retinal layers and the shape, spatial distribution and optical properties of other structural features such as drusen, cysts, macular holes, and blood vessels, can be used as markers for retinal disease diagnostics and clinical investigation. For example, retinal nerve fiber layer (RNFL) thickness as a function of spatial position in the retina is an important marker for the clinical diagnosis of glaucoma [4, 5].

The accurate segmentation and thickness measurement of retinal layers from OCT tomograms is a fundamental problem that enhances the diagnostic process for retinal disease analysis [6]. This segmentation step can be very challenging for several reasons. Since OCT is based on the detection of the interference of partially coherent optical beams, OCT tomograms are subject to the presence of speckle [7]. Although speckle depends on the local optical properties of the imaged object and carries information about the object's structural composition, it also carries a strong noise component. Speckle noise in OCT images hinders the precise identification of the boundaries of layers or other structural features, either through direct observation or through segmentation algorithms. Furthermore, low optical contrast in some OCT retinal images, related either to sub-optimal imaging conditions or observed directly below blood vessels as a result of the strong haemoglobin absorption of light, can also cause segmentation algorithms to fail or lose precision.

Given the difficulty of the problem and the time-consuming nature of manual segmentation by trained experts, a number of approaches have been proposed to segment the individual retinal layers [6, 8–16]. Fernández et al. [6] proposed to first apply complex diffusion filtering to reduce speckle noise and then determine the individual retinal layers based on intensity peaks. Ishikawa et al. [8] proposed the application of a modified mean filter to reduce speckle noise, followed by an adaptive thresholding scheme based on the reflectivity histogram of each A-scan line. Similarly, Ahlers et al. [9] employed adaptive thresholding and intensity peak detection to determine individual retinal layers, with morphological filtering applied to the thresholded results. However, these methods rely on intensity consistency within the individual layers, an assumption that may not hold in situations characterized by low image contrast or the presence of blood vessels and other morphological features within the retina. Fabritius et al. [10] proposed an efficient maximum intensity search-based approach to segmenting the macula based on identification of the internal limiting membrane (ILM) and the retinal pigment epithelium (RPE), which is less sensitive to intensity variations. Götzinger et al. [16] attempted to alleviate some of the issues associated with image contrast variation by utilizing polarization sensitive OCT (PS-OCT) [17]. While certain layers such as the RPE and the RNFL become better defined in polarization sensitive OCT images, the disadvantage of this approach is that it requires a PS-OCT instrument.

Mujat et al. [11] proposed an active contour based method, where Gaussian and anisotropic diffusion filtering techniques are employed to reduce speckle noise prior to determining the boundary contour along the RNFL based on the extracted edge gradient information. While less prone to the effects of illumination variations, this automatic active contour approach is highly sensitive to the presence of blood vessels and other morphological features of the retina. Furthermore, the speckle noise reduction methods employed by all of the aforementioned segmentation methods perform poorly under the high speckle noise associated with OCT retinal tomograms, and have poor structural and edge preservation in such situations. Therefore, the previously used combinations of speckle denoising and segmentation algorithms are only appropriate for segmenting high contrast and well-defined retinal layers such as the retinal nerve fibre layer or the RPE. Haeker et al. [13] and Niemeijer et al. [14] utilized a minimum-cost closed set approach to identify the individual retinal layers based on a linear combination of domain-specific cost functions. Finally, Garvin et al. [15] employed an optimal graph search method that attempts to find a closed set in a geometric 3-D graph minimizing the associated costs and constraints.

This paper proposes an algorithm for the segmentation of all intra-retinal layers in OCT images. The proposed method employs a novel approach that uses an external force derived from the image gradient through an adaptive vector-valued kernel function to account for the presence of speckle noise in a direct fashion. A new dynamic programming-based force balance equation is introduced to identify the continuous retinal layers within the OCT retinal tomograms. The proposed approach is highly efficient and allows for the accurate segmentation of all intra-retinal layers.

2. Methods

2.1. Theory

The traditional active contour, first proposed by Kass et al. [18], is an energy-minimizing spline v(s), s ∈ [0,1], whose energy functional is defined as

E = \int_{s=0}^{1} \Big( \underbrace{\alpha\, |v'(s)|^{2} + \beta\, |v''(s)|^{2}}_{\text{Internal energy}} \; \underbrace{-\,\lambda\, |\nabla I(v(s))|}_{\text{External energy}} \Big)\, ds, \qquad (1)

where α, β and λ are usually implemented as constant weight factors for the internal and external energies, and ∇I is the gradient of the image. Kass et al. followed an iterative approach in an Euler framework to find a curve v*(s) that minimizes E. By replacing −|∇I(v(s))| with ψ(|∇I(v(s))|) and setting β = 0 [19], Eq. (1) can be rewritten as

E = \int_{s=0}^{1} \Big( \alpha\, |v'(s)|^{2} + \lambda\, \psi\big(|\nabla I(v(s))|\big)^{2} \Big)\, ds, \qquad (2)

where ψ is a strictly decreasing function and can be expressed as,

\psi(x) = \frac{1}{1 + \tau x^{2}}, \qquad (3)

where τ is a control constant (set to τ = 5 in our experiments). Caselles et al. [19] introduced a new approach, called the geodesic active contour, to minimize Eq. (2) in a Riemannian manifold as follows:

v^{*} = \arg\min_{v} \int_{s=0}^{1} \psi\big(|\nabla I(v(s))|\big)\, |v'(s)|\, ds. \qquad (4)

In the discrete domain, Eq. (4) can be represented as

v^{*} = \arg\min_{v_{1}, v_{2}, \ldots, v_{n}} \sum_{i=1}^{q} \psi_{i}\, \partial s_{i}, \qquad (5)

where \psi_{i} = \psi(|\nabla I(v(s_{i}))|) and \partial s_{i} = \| v(s_{i-1}) - v(s_{i}) \|_{2}.
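The discrete energy of Eq. (5) is straightforward to evaluate numerically. The sketch below is a hypothetical Python/NumPy illustration (the authors' implementation is in MATLAB): it computes the edge-stopping function ψ of Eq. (3) and the discrete geodesic energy Σψ_i ∂s_i for a candidate curve. In practice, ψ would be sampled from the gradient of a regularized (denoised) image.

```python
import numpy as np

def edge_stopping(grad_mag, tau=5.0):
    """Strictly decreasing edge-stopping function psi(x) = 1 / (1 + tau * x^2), Eq. (3)."""
    return 1.0 / (1.0 + tau * np.asarray(grad_mag, dtype=float) ** 2)

def discrete_geodesic_energy(curve, psi_map):
    """Discrete geodesic energy sum(psi_i * ds_i) of Eq. (5).

    curve   : sequence of (row, col) node positions along a candidate layer
    psi_map : 2-D array of psi values precomputed from the image gradient
    """
    pts = np.asarray(curve, dtype=float)
    # ds_i = ||v(s_{i-1}) - v(s_i)||_2 between consecutive nodes
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    psi = psi_map[pts[1:, 0].astype(int), pts[1:, 1].astype(int)]
    return float(np.sum(psi * ds))
```

The energy is small along strong, smooth edges (where ψ is small), which is exactly what the minimization in Eq. (5) exploits.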

There are two main difficulties with solving Eq. (4) in the context of retinal layer segmentation in OCT imagery. The first is determining the initial conditions such that the solution converges to the individual retinal layers. The second is setting up the external force term to account for speckle noise and other artifacts inherent in OCT imagery in a direct manner, so as to improve segmentation accuracy. To address the first issue, the proposed method employs a sparse dynamic programming method to find the rough, approximate locations of the retinal layers, as described below.

Let a sparse trellis be defined on the entire image, consisting of Q vertical normals, where each normal contains U nodes. An example of such a trellis overlaid on an OCT image is shown in Fig. 1(a); the trellis can be defined mathematically as

v = \{ v_{1}, v_{2}, \ldots, v_{n} \}, \quad \text{where } n = U \cdot Q. \qquad (6)

The goal is to obtain the approximate locations of the z retinal layers v_{m}^{*}, m = 1, 2, \ldots, z, lying on the sparse trellis such that

v_{1}^{*} = \arg\min_{v} \Big( \sum \partial s_{i}\, \psi_{i} \Big), \qquad (7)
v_{2}^{*} = \arg\min_{v \setminus v_{1}^{*}} \Big( \sum \partial s_{i}\, \psi_{i} \Big), \qquad (8)
v_{3}^{*} = \arg\min_{v \setminus \{v_{1}^{*}, v_{2}^{*}\}} \Big( \sum \partial s_{i}\, \psi_{i} \Big), \qquad (9)
\vdots
v_{z}^{*} = \arg\min_{v \setminus \{v_{1}^{*}, v_{2}^{*}, \ldots, v_{z-1}^{*}\}} \Big( \sum \partial s_{i}\, \psi_{i} \Big). \qquad (10)

To solve this constrained optimization problem, a Viterbi-based dynamic programming approach [20, 21] is employed to obtain the z retinal layers v_{1}^{*}, v_{2}^{*}, \ldots, v_{z}^{*}. The approximate locations of the retinal layers determined for an OCT image are shown in Fig. 1(b). These approximate locations are then used as initial conditions for a set of local retinal layer optimization problems that determine the precise locations of the retinal layers.
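A minimal sketch of the Viterbi-style minimization for a single layer on the sparse trellis is shown below (a hypothetical Python/NumPy illustration; the formulation used by the paper is the one cited in [20, 21]). Each trellis node carries a potential ψ_i, transitions between consecutive normals are weighted by ψ_i ∂s_i, and the minimum-cost path is recovered by backtracking. The remaining layers v_2^*, …, v_z^* can then be obtained by re-running the search after excluding (e.g., masking out) the nodes of the previously found layers.

```python
import numpy as np

def viterbi_layer(psi, ys, xs):
    """Minimum-cost path across a trellis of Q normals with U nodes each.

    psi : (Q, U) external potentials psi_i at each trellis node
    ys  : (Q, U) vertical (row) coordinate of each node
    xs  : (Q,)   horizontal (column) coordinate of each normal
    Returns the index of the chosen node on every normal.
    """
    Q, U = psi.shape
    cost = np.full((Q, U), np.inf)
    back = np.zeros((Q, U), dtype=int)
    cost[0] = psi[0]
    for i in range(1, Q):
        # transition cost psi_i * ds_i for every node pair on normals i-1 and i
        dy = ys[i][None, :] - ys[i - 1][:, None]
        dx = xs[i] - xs[i - 1]
        ds = np.sqrt(dy ** 2 + dx ** 2)                  # (U_prev, U_cur)
        total = cost[i - 1][:, None] + psi[i][None, :] * ds
        back[i] = np.argmin(total, axis=0)               # best predecessor per node
        cost[i] = np.min(total, axis=0)
    # backtrack the optimal node sequence
    path = [int(np.argmin(cost[-1]))]
    for i in range(Q - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]
```

With a (Q, U) trellis this runs in O(U²Q), matching the first term of the complexity quoted below for the two-step scheme.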


Fig. 1. Demonstration of the initial layer local approximation step on an OCT image. (a) The sparse trellis (red lines with green nodes) is overlaid on the OCT image. A possible solution for a retinal layer is shown in blue. (b) The approximate locations of the retinal layers (cyan lines) overlaid on the OCT image.


For each layer, a local optimization strategy is employed to determine its precise location in the OCT image, and can be described as follows. First, a dense trellis V is defined along the approximate location of the retinal layer v *. The dense trellis V consists of q discrete normals with u discrete points each, as shown in Fig. 2, and can be defined as

V = \{ V_{ij} \} = \{ (x_{ij}, y_{ij}) \}, \quad i \in [1, q],\; j \in [1, u]. \qquad (11)

The goal then is to obtain an open curve V^{*} that represents the precise location of the retinal layer boundary in the OCT image. Such an open curve is typically obtained by minimizing the sum of the products of the external potentials ψ(V_{ij}) along the boundary and the corresponding Euclidean arc-lengths ∂s_{i}, as expressed in Eq. (5). However, finding such an optimal boundary in an exhaustive manner has tremendously high computational complexity. Therefore, a Viterbi-based dynamic programming optimization method [20, 21] is used to find the optimal boundary in an efficient manner. The total algorithmic complexity of the proposed two-step dynamic programming method is O(z(U^{2}Q + u^{2}q)).
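As an illustration of the dense trellis construction, the sketch below (a simplified, hypothetical Python/NumPy illustration) places u search nodes on each of q normals centred on the approximate layer location; here the normals are taken as vertical A-scan columns for simplicity, whereas in general they follow the layer's local geometry.

```python
import numpy as np

def dense_trellis(approx_rows, cols, u=11, spacing=1.0):
    """Vertical search normals of u nodes centred on the approximate layer.

    approx_rows : (q,) approximate row position of the layer on each column
    cols        : (q,) column positions of the q normals
    Returns ys (q, u) row coordinates and xs (q,) column coordinates.
    """
    # symmetric offsets about the approximate layer position
    offsets = (np.arange(u) - u // 2) * spacing
    ys = np.asarray(approx_rows, dtype=float)[:, None] + offsets[None, :]
    xs = np.asarray(cols, dtype=float)
    return ys, xs
```

The resulting (q, u) node grid can then be searched with the same Viterbi-style dynamic programming used on the sparse trellis, at O(u²q) cost per layer.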


Fig. 2. Demonstration of the precise layer boundary optimization step on an OCT image. The dense trellis (red lines with green nodes) for a particular retinal layer is overlaid on the OCT image. The approximate location of the retinal layer used as an initial condition for the optimization is shown in cyan, while a possible solution for a retinal layer is shown in blue.


To address the second difficulty, associated with setting up the external force term, the proposed method incorporates an adaptive vector-valued kernel function in the precise layer boundary optimization step to account for speckle noise and other artifacts inherent in OCT imagery in a direct manner. The two-step kernel-based optimization scheme employed for determining the optimal boundary representing the retinal layer being segmented can be described as follows. In the first step, the likelihood that a node along a normal i is a boundary point, denoted as p(I|V_{ij}), is computed for all nodes. The position of the node with maximum probability is marked as v_m(s) and can be written as

v_{m}(s) = \arg\max_{j \in [1, u]} \big( p(I \mid V_{ij}) \big), \quad i \in [1, q], \qquad (12)

where

p(I \mid V_{ij}) = \frac{1}{Z_{\mathrm{ext}}} \exp\big( -\psi(V_{ij}) \big), \qquad (13)

where Z_{ext} is a normalization constant that makes p(I|V_{ij}) a probability distribution function along the normal i. In the second step, a smoothness constraint derived from the spatial and external force distributions is enforced on the boundary v_m(s) to account for speckle noise and other OCT-related artifacts, and to obtain a continuous boundary along the retinal layer being segmented. The smoothness constraint is incorporated directly into the boundary v_m(s) by convolving the boundary with an adaptive kernel function h,

v^{*}(s) = \int_{-\kappa}^{\kappa} h(k)\, v_{m}(s - k)\, dk, \qquad (14)

where h(k) is a product of spatial and gradient penalties,

h(k) = \frac{1}{z_{s}} \exp\!\left( -\left( \frac{\| v_{m}(s) - v_{m}(s-k) \|_{2}}{\sigma_{s}} \right)^{\!2} \right) \frac{1}{z_{t}} \exp\!\left( -\left( \frac{\| \psi(v_{m}(s)) - \psi(v_{m}(s-k)) \|_{2}}{\sigma_{t}} \right)^{\!2} \right), \qquad (15)

where z_s and z_t are normalization constants, and the parameters σ_s and σ_t represent the spatial and intensity standard deviations, respectively. In our experiments, σ_s = 4 and σ_t = 0.1 were used, as they provide accurate segmentation results. ψ is computed in a manner such that the gradient ∇I along the tangent is diffused, while the gradient along the normal is not diffused at all. To achieve this goal, we regularize the image I using an anisotropic diffusion kernel [22] and compute the gradient of the regularized image. The kernel is used to enforce curve continuity constraints such that a smooth, continuous retinal layer boundary can be obtained. A regularization parameter K1 = 0.1 and 20 scales are used for the anisotropic diffusion.
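The two refinement steps above can be sketched as follows (a hypothetical Python/NumPy illustration with the paper's parameter values σ_s = 4 and σ_t = 0.1). Since exp(−ψ) is monotone, the likelihood argmax of Eqs. (12)–(13) reduces to an argmin of ψ, and the kernel normalization of Eq. (15) is folded into a single division that plays the role of z_s and z_t.

```python
import numpy as np

def refine_boundary(psi, ys, kappa=5, sigma_s=4.0, sigma_t=0.1):
    """Two-step boundary refinement sketch.

    Step 1: on each normal, pick the node maximising p = exp(-psi)/Z.
    Step 2: smooth the picked boundary with the adaptive kernel h(k),
            a product of spatial and gradient (intensity) penalties.
    psi : (q, u) external potentials on the dense trellis
    ys  : (q, u) row coordinate of each trellis node
    """
    q, u = psi.shape
    # Step 1: maximum-likelihood node per normal (argmax exp(-psi) == argmin psi)
    j_star = np.argmin(psi, axis=1)
    vm = ys[np.arange(q), j_star]            # boundary row positions v_m(s)
    psi_m = psi[np.arange(q), j_star]        # external potential along the boundary
    # Step 2: convolve v_m with the adaptive kernel h(k) over a window of +/- kappa
    v_star = np.empty(q)
    for s in range(q):
        k = np.arange(max(0, s - kappa), min(q, s + kappa + 1))
        w_spatial = np.exp(-((vm[s] - vm[k]) / sigma_s) ** 2)
        w_grad = np.exp(-((psi_m[s] - psi_m[k]) / sigma_t) ** 2)
        h = w_spatial * w_grad
        v_star[s] = np.sum(h * vm[k]) / np.sum(h)
    return v_star
```

An outlier node (large jump in v_m or ψ) receives a small kernel weight from its neighbours and is pulled back toward the locally consistent boundary, which is how the kernel suppresses speckle-induced jitter.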

2.2. Experimental Verification

To evaluate the effectiveness of the proposed approach to retinal layer segmentation from OCT retinal tomograms, the method was applied to a set of OCT images acquired in-vivo from healthy and diseased rodent retinas. The images were acquired with a research grade, high speed, high resolution OCT system operating in the 1060 nm wavelength range. A detailed description of the system can be found in [23]. Briefly, the OCT system utilizes a spectral domain design, is powered by a super-luminescent diode (Superlum Ltd., λc = 1020 nm, Δλ = 110 nm, Pout = 10 mW), and acquires data with a 1024-pixel InGaAs linear array camera (SUI, Goodrich) at a 47 kHz acquisition rate. The OCT system provides 3 µm axial and 5 µm lateral resolution in the rat retina and 101 dB sensitivity for 1.3 mW optical power incident on the cornea. 2D and 3D images were acquired in-vivo from healthy rat retinas and rat retinas with drug-induced photoreceptor degeneration. All imaging procedures were carried out in accordance with an approved ethics protocol established by the University of Waterloo Ethics Review Board. Only raw (unprocessed) rat retina OCT images were used for testing the performance of the algorithm. The segmentation approach was implemented in MATLAB and tested on an Intel Pentium 4 2.4 GHz machine with 1 GB of RAM. The total execution time of the proposed algorithm is approximately 5 seconds per image.
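The quoted 3 µm axial resolution is consistent with the theoretical value for a Gaussian-spectrum source, δz = (2 ln 2/π)·λ_c²/Δλ, scaled by the tissue refractive index. A quick check, assuming a typical retinal refractive index of n ≈ 1.38 (this value is an assumption, not stated in the paper):

```python
import math

lambda_c = 1.020   # centre wavelength in micrometres (from the paper)
d_lambda = 0.110   # FWHM bandwidth in micrometres (from the paper)
n_retina = 1.38    # assumed typical refractive index of retinal tissue

# Axial resolution of a Gaussian-spectrum OCT source, in air and in tissue
dz_air = (2 * math.log(2) / math.pi) * lambda_c ** 2 / d_lambda
dz_tissue = dz_air / n_retina
print(f"axial resolution: {dz_air:.1f} um in air, {dz_tissue:.1f} um in retina")
# prints "axial resolution: 4.2 um in air, 3.0 um in retina"
```

The in-tissue value of ≈3 µm agrees with the system specification quoted above.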

3. Results and Discussion

The novel segmentation algorithm was tested on a large set of OCT images acquired from healthy and diseased rat retinas. Fig. 3 shows a representative, unprocessed image of the rat retina [Fig. 3(a)] and a version of the same image segmented with the novel algorithm [Fig. 3(b)]. The original image shows the layered structure of the rat retina with individual layers clearly visible. This representative image was selected specifically to contain a large blood vessel on the surface of the retina (red arrow), so that the performance of the segmentation algorithm could be explored in the area directly below the blood vessel, where image contrast is severely reduced due to haemoglobin absorption of light. The image was also selected to contain irregular features of high reflectivity and image contrast near the retinal layer boundaries, such as the small capillaries embedded in the inner plexiform layer (IPL) and outer plexiform layer (OPL), to test the performance of the layer segmentation algorithm with such irregularities present. Cross-sections of the capillaries appear as black circular features marked with the yellow arrows in Fig. 3(a). The segmented image in Fig. 3(b) shows that the novel algorithm can correctly identify and segment all intra-retinal layers regardless of the variation of image contrast and the presence of irregular features in the OCT tomogram. A closer look at the region marked with the green box in Fig. 3(b), magnified by 4x in Fig. 3(c) (original image) and Fig. 3(d) (segmented image), demonstrates that the segmentation code correctly identifies the boundaries of very thin retinal layers, such as the OPL and the external limiting membrane (ELM), even in regions with severely reduced image SNR and contrast.


Fig. 3. OCT cross-sectional tomograms (1000×250 pixels) of a healthy rat retina acquired in-vivo. Fig. 3(a) shows the raw (unprocessed) image, with a large blood vessel located on the retinal surface (red arrow) and cross-sections of tiny capillaries embedded in the inner and outer plexiform layers of the retina visible as black circular features (yellow arrows). Fig. 3(c) and Fig. 3(d) show 4x magnification of the region marked with the green box in Fig. 3(b). Fig. 3(e) and Fig. 3(f) show 4x magnification of the region marked with the yellow box in Fig. 3(b), containing cross-sections of retinal capillaries.



Fig. 4. OCT cross-sectional tomograms (1000×250 pixels) of a diseased rat retina acquired in-vivo. Fig. 4(a) shows the original, unprocessed image, while the segmented image is shown in Fig. 4(b). The red arrow points to clusters of PR and RPE debris.


An expanded and 4x magnified copy of the region in Fig. 3(b) marked with the yellow box is shown in Fig. 3(e) (original image) and Fig. 3(f) (segmented image). Close comparison of the two magnified images shows that the new segmentation algorithm cannot separate highly reflective image features positioned directly at the interface between two retinal layers from the layer boundary. On such occasions the algorithm closely fits the outlines of the highly reflective feature and includes it in the boundaries of the retinal layer with higher optical reflectivity.

Layer segmentation of thick retinal layers, such as the IPL and the outer nuclear layer (ONL), is fairly straightforward when the images are acquired in healthy retinas characterized by well defined, parallel layers. However, diseased retinas contain a variety of morphological features, such as macular holes, detachments, drusen, etc., that vary in size, shape and image contrast and interrupt the regular layered structure of the retina. To test the capability of the proposed algorithm to properly segment retinal layers in diseased retinas, we have tested the code on OCT images acquired from rat retinas with drug-induced photoreceptor degeneration. A representative raw image of the diseased rat retina is shown in Fig. 4(a). The photoreceptor (PR) degeneration is characterized by complete disintegration of the ELM and the inner and outer segments (IS/OS) of the PR, with local damage to the RPE, clustering of the PR and RPE debris, and global damage to the ONL and OPL. The overall image contrast is also reduced as compared to the healthy retinal image displayed in Fig. 3(a). Even under such sub-optimal conditions, the novel segmentation algorithm is capable of correctly identifying the boundaries of the NFL and the IPL, as well as the position of the RPE layer. It also accurately outlines the debris clusters.

4. Conclusion

In this paper, a novel algorithm for segmenting individual retinal layers in retinal OCT imagery is proposed. The proposed algorithm was demonstrated to achieve accurate intra-retinal segmentation on retinal OCT imagery under low image contrast and in the presence of irregularly shaped structural features. The proposed method shows great potential for quantitative analysis of retinal layer thickness.

Acknowledgment

The authors would like to thank S. Hariri, A. Moayed, and S. Shakheel for assistance with the retinal image acquisition and data processing. This research has been sponsored by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Ontario Centres of Excellence.

References and links

1. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, and J. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]   [PubMed]  

2. A.F. Fercher, “Optical coherence tomography,” J. Biomed. Opt. 1, 157 (1996). [CrossRef]  

3. J. Fujimoto, W. Drexler, J. Schuman, and C. Hitzenberger, “Optical Coherence Tomography (OCT) in ophthalmology: introduction,” Opt. Express 17(5), 3978–3979 (2009), http://www.opticsinfobase.org/ abstract.cfm?uri=oe-17-5-3978. [CrossRef]  

4. C. Leung, C. Cheung, R. Weinreb, K. Qiu, S. Liu, H. Li, G. Xu, N. Fan, C. Pang, R. Tse, and D. Lam, “Evaluation of retinal nerve fiber layer progression in glaucoma with optical coherence tomography guided progression analysis (GPA),” Invest. Ophthalmol. Visual Sci. (2009), http://www.iovs.org/cgi/rapidpdf/iovs.09-3468v1.pdf.

5. S. Taliantzis, D. Papaconstantinou, C. Koutsandrea, M. Moschos, M. Apostolopoulos, and G. Georgopoulos, “Comparative studies of RNFL thickness measured by OCT with global index of visual fields in patients with ocular hypertension and early open angle glaucoma,” Clin. Ophthalmol. 3, 373–379 (2009). [CrossRef]   [PubMed]  

6. D. Fernández, H. Salinas, and C. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005), http://www.opticsinfobase.org/ oe/abstract.cfm?uri=oe-13-25-10200. [CrossRef]  

7. J. Schmitt, S. Xiang, and K. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4, 95–105 (1999). [CrossRef]  

8. H. Ishikawa, D. Stein, G. Wollstein, S. Beaton, J. Fujimoto, and J. Schuman, “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Visual Sci. 46(6), 2012–2017 (2005). [CrossRef]  

9. C. Ahlers, C. Simader, W. Geitzenauer, G. Stock, P. Stetson, S. Dastmalchi, and U. Schmidt-Erfurth, “Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography,” Br. J. Ophthalmol. 92, 197–203 (2008). [CrossRef]  

10. T. Fabritius, S. Makita, M. Miura, R. Myllylä, and Y. Yasuno, “Automated segmentation of the macula by optical coherence tomography,” Opt. Express 17(18), 15659–15669 (2009), http://www.opticsinfobase. org/abstract.cfm?URI=oe-17-18-15659. [CrossRef]  

11. M. Mujat, R. C. Chan, B. Cense, B. H. Park, C. Joo, T. Akkin, T. C. Chen, and J. F. de Boer, “Retinal nerve fiber layer thickness map determined from optical coherence tomography images,” Opt. Express 13, 9480–9491 (2005), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-13-23-9480. [CrossRef]   [PubMed]  

12. M. Szkulmowski, M. Wojtkowski, B. Sikorski, T. Bajraszewski, V. J. Srinivasan, A. Szkulmowska, J. J. Kaluzny, J. G. Fujimoto, and A. Kowalczyk, “Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies,” J. Biomed. Opt. 12(4), 041207 (2007). [CrossRef]  

13. M. Haeker, M. Sonka, R. Kardon, V. A. Shah, X. Wu, and M. Abràmoff, “Automated segmentation of intraretinal layers from macular optical coherence tomography images,” Proc. SPIE Medical Imaging 6512 (2007).

14. M. Niemeijer, M. Garvin, B. van Ginneken, M. Sonka, and M. Abràmoff, “Vessel segmentation in 3D spectral OCT scans of the retina,” Proc. SPIE 6914, 69141R (2008). [CrossRef]  

15. M. Garvin, M. Abràmoff, R. Kardon, S. Russell, X. Wu, and M. Sonka, “Intraretinal Layer Segmentation of Macular Optical Coherence Tomography Images Using Optimal 3-D Graph Search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef]  

16. E. Götzinger, M. Pircher, W. Geitzenauer, C. Ahlers, B. Baumann, S. Michels, U. Schmidt-Erfurth, and C. Hitzenberger, “Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography,” Opt. Express 16(21), 16410–16422 (2008), http://www.opticsinfobase.org/abstract. cfm?URI=oe-16-21-16410. [CrossRef]  

17. M. Hee, D. Huang, E. Swanson, and J. Fujimoto, “Polarization-sensitive low-coherence reflectometer for birefringence characterization and ranging,” J. Opt. Soc. Am. B-Opt. Phys. 9(6), 903–908 (1992), http://www. opticsinfobase.org/abstract.cfm?URI=josab-9-6-903. [CrossRef]  

18. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” Int. J. Comput. Vision 1(4), 321–331 (1988). [CrossRef]  

19. V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” Int. J. Comput. Vision 22(1), 61–97 (1997). [CrossRef]  

20. A. Mishra, P. Fieguth, and D. Clausi, “Accurate boundary localization using dynamic programming on snake,” Proc. Canadian Conference on Computer and Robot Vision, 261–268 (2008). [CrossRef]  

21. A. Mishra, P. Fieguth, and D. Clausi, “Robust snake convergence based on dynamic programming,” Proc. IEEE International Conference on Image Processing , 1092–1095 (2008).

22. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990). [CrossRef]  

23. P. Puvanathasan, P. Forbes, Z. Ren, D. Malchow, S. Boyd, and K. Bizheva, “High-speed, high-resolution Fourier-domain optical coherence tomography system for retinal imaging in the 1060 nm wavelength region,” Opt. Lett. 33, 2479–2481 (2008). [PubMed]  
