Optica Publishing Group

Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

Open Access

Abstract

The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images were discriminated between groups of non-diabetic subjects and diabetic subjects at different stages of DR. The automated method’s discrimination rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring.

© 2016 Optical Society of America

1. Introduction

Diabetic retinopathy (DR) is a major cause of vision loss in working age adults [1]. DR is considered a microvascular disease and the earliest signs of microvasculopathies occur at the level of the small blood vessels or capillaries. In fact, retinal tissue ischemia due to capillary non-perfusion and macular edema due to increased vascular permeability cause vision loss in DR. Currently, discrimination of stages of DR is based on clinical visual examination of the retinal tissue for signs of microaneurysms, hard exudates, hemorrhages, and other pathologies. Additionally, clinical fluorescein angiography allows visualization of retinal vascular perfusion and leakage, and optical coherence tomography provides quantitative assessment of retinal thickening due to macular edema. In recent years, several research imaging techniques have been developed to assess retinal microvasculature abnormalities in diabetic subjects [2–7]. Furthermore, methods for fractal analysis have been developed that relate the pattern of major retinal blood vessels to age, blood pressure, and diabetes [8–10]. Recently, optical coherence tomography angiography has become available for imaging of the retinal microvasculature [11,12]. Application of this technology to DR subjects has demonstrated alterations in the retinal microvasculature, including capillary non-perfusion, microaneurysms, and pre-retinal neovascularization [13–15].

Since diabetes is a systemic disease, it is expected that microvasculopathies present in the retinal tissue are also evident, at least in part, in the microvasculature of other tissues. One such tissue is the conjunctiva, a densely vascularized mucous membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. Indeed, several studies have evaluated and reported microvascular abnormalities in the conjunctiva of diabetic subjects, similar to those reported in the retinal tissue [16–23].

Several techniques have been developed for imaging of the conjunctival microvasculature, including the Retinal Function Imager [24], orthogonal polarization spectral imaging [25], slit lamp biomicroscopy [26,27], and intravital microscopy [28,29]. Furthermore, the conjunctival microvasculature has been assessed quantitatively using automated [30] and semi-automated [26,31] software. However, methods for discriminating health and disease based on evaluation of the conjunctival microvasculature have not been reported. In the current study, we present application of a method [32] for fine structure analysis of conjunctival microvasculature images for discriminating subjects at progressive stages of DR.

2. Method

2.1 Subjects

The study was approved by the institutional review board of the University of Illinois at Chicago. The study was explained to subjects and informed consents were obtained in accordance with the Declaration of Helsinki. A total of 76 subjects participated in the study. The subjects underwent a dilated retinal examination by a retinal specialist who used conventional clinical categories to classify each subject as a non-diabetic control (NC) (N = 22), diabetic without clinical retinopathy (NDR) (N = 17), with non-proliferative diabetic retinopathy (NPDR) (N = 17), or with proliferative diabetic retinopathy (PDR) (N = 20). The subjects were 27 males and 49 females. Images of one eye of each subject were included. The subjects’ ages (mean ± SD) were 63 ± 12 years, 57 ± 13 years, 62 ± 8 years, and 53 ± 10 years in the NC, NDR, NPDR, and PDR groups, respectively (P = 0.02). PDR subjects were significantly younger than NC subjects (P = 0.03).

2.2 Image acquisition

Imaging was performed using our previously described system EyeFlow [30], consisting of a slit lamp biomicroscope (2X magnification) coupled to a digital camera (Prosilica GT, AVT, Exton, PA). Active camera sensor size was 8.8 mm × 6.6 mm with a fill factor of 100%, and approximate quantum efficiency of 50%. The conjunctiva was illuminated by white light passed through a green filter (540 ± 4 nm) which enhanced image contrast. Each image was 1024 × 1360 pixels at 3.12 µm/pix on the object plane, and thus covered a 3.2 mm × 4.2 mm area of the conjunctiva. Contiguous conjunctival microvasculature images with approximately 10% overlap were acquired at regions temporal to the limbus.

2.3 Image processing

Conjunctival microvasculature images were montaged to generate a single mosaic image using MosaicJ, a semi-automated image processing plug-in for ImageJ (ImageJ 1.48V). Final adjustments to generate a seamless mosaic image were performed by the plug-in. The mosaic image displayed a conjunctival microvasculature region of up to 9.6 mm × 12.6 mm. An example of a cropped conjunctival mosaic image in a diabetic (PDR) subject is shown in Fig. 1. The presence of illumination artifacts and image blur due to eye movement precluded the use of the entire mosaic for analysis. Thus, from the mosaic image, a conjunctival region of interest (ROI) (1000 × 1000 pixels) covering a 3.1 mm × 3.1 mm area was selected. The ROI showed a densely vascularized region with good focus, devoid of illumination artifacts. Examples of two selected ROIs are outlined by squares overlaid on the mosaic image shown in Fig. 1.


Fig. 1 An example of a cropped mosaic image of the conjunctiva of a diabetic (PDR) subject. Regions with light illumination artifacts and blur are visualized. Two 3.1 mm × 3.1 mm regions of interest (ROIs) that were selected for analysis are outlined by squares.


2.4 Automated image discrimination

Conjunctival image discrimination was performed based on a previously reported fine structure image analysis (FSIA) method [32], using a customized algorithm written in MATLAB (Release 2015b, MathWorks, Inc., Natick, MA, USA). The motivation for FSIA is the long and successful history of time series analysis (TSA) in rigorously modeling multiple time series that are visually indistinguishable [33]. The mathematical basis of TSA is ordinary differential equations identified by applying ordinary least squares (OLS) regression to difference equation approximations [34]. It is shown in [32] that images can be considered solutions to partial difference equations such as the general autoregressive one given in Eq. (1). Further, it is shown that a Kronecker matrix-to-vector transformation applied to Eq. (1) results in an OLS format of the equation to which TSA can be applied [35]. Briefly, pixels of each conjunctival ROI were shifted by 1 or 2 pixels row-wise, column-wise, and along the diagonal to yield 8 unique shifted versions of the original image. Shifting the pixels allows evaluation of fluctuations in intensity values over each pixel’s neighborhood. Since these shifts are at most 2 pixels out of a thousand, every shifted image appears visually identical to the original, quite analogous to the TSA visual experience. Each shifted image was vectorized by stacking the columns of the 2D image into a 1D vector, which occupied one column of a matrix. A column of ones was prepended as the leftmost column of this matrix to account for sample means, which improved discrimination by removing the parameter estimation bias attributable to a nonzero sample mean [33]. The model image was defined as the weighted sum of the shifted images, as shown in Eq. (1).

$$y_{i,j} = \sum_{k=0}^{2} \sum_{l=0}^{2} b_{kl}\, y_{i-k,\,j-l} + u_{i,j}, \quad k + l > 0 \qquad (1)$$

where $y_{i,j}$ is the modeled image, $y_{i-k,j-l}$ are the shifted images, $b_{kl}$ are coefficients to be estimated, and $u_{i,j}$ is a two-dimensional random process error with zero mean. OLS regression was performed to compute the coefficients $b_{kl}$ by minimizing the variance of $u_{i,j}$. If $y_0$ is the vectorized matrix $y_{i,j}$, $X = [x_0\; x_1 \ldots x_8]$ is the matrix of vectorized shifted images $y_{i-k,j-l}$, and $x_0$ is a column of ones, then the vector $b$ of $b_{kl}$ parameters is

$$b = (X^T X)^{-1} X^T y_0 \qquad (2)$$

The vector b represents the coefficients which minimize the variance between the original and shifted images. The coefficient vectors obtained from images in one group of subjects are expected to share common features which can be used to characterize that group.
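The paper’s implementation is in MATLAB; as a hedged illustration, the construction of Eqs. (1) and (2) can be sketched in Python/numpy (function names and the 2-pixel border trimming are our own choices, not taken from the paper’s code):

```python
import numpy as np

def fsia_design_matrix(img):
    """Build the OLS design matrix from the 8 shifted copies of an image.

    For shifts (k, l) in {0, 1, 2}^2 with k + l > 0, the model regresses
    y[i, j] on y[i-k, j-l] as in Eq. (1). A 2-pixel border is trimmed so
    every shifted copy is defined on the same interior region.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    y0 = img[2:, 2:].ravel(order="F")          # target: vectorized y[i, j]
    cols = [np.ones_like(y0)]                  # column of ones for the sample mean
    for k in range(3):
        for l in range(3):
            if k + l == 0:
                continue                       # skip the unshifted image
            cols.append(img[2 - k:h - k, 2 - l:w - l].ravel(order="F"))
    X = np.stack(cols, axis=1)                 # (pixels, 9) design matrix
    return X, y0

def fsia_coefficients(img):
    """OLS estimate b = (X^T X)^{-1} X^T y0, via a stable least-squares solve."""
    X, y0 = fsia_design_matrix(img)
    b, *_ = np.linalg.lstsq(X, y0, rcond=None)
    return b                                   # 9 parameters per image
```

For a 1000 × 1000 ROI this yields roughly 10^6 regression samples per image, condensed into 9 coefficients.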

Computed OLS coefficients for images in each group of size Ni, i = NC, NDR, NPDR, and PDR were assembled into matrices (of size Ni subjects by 9 estimated bkl parameters) to be used for discrimination. For every two groups of subjects, a Fisher’s Linear Discriminator (FLD) was employed to compute a projection vector (v) which projects the bkl parameters of each image onto a scalar z - projection axis. The maximum separation of the sample means of the projections was obtained with the v which satisfied the FLD eigenvector identity [36]. For two comparison groups of images, with N1 and N2 subjects, let their respective OLS coefficients be assembled in matrices B1 and B2. The “pooled sample” or combined matrix Bp is B1 stacked on B2. For an n by k matrix B with n samples of k parameters, let Bm be B with its column sample means subtracted. Then the estimated covariance matrix of B is Ω = BmTBm/(n-1). The optimizing projection vector v, which satisfies the eigenvector identity of the B1, B2, and Bp covariance matrices, was computed using Eq. (3).

$$(n_1 \Omega_p - n_2 \Omega_1 - n_3 \Omega_2)\, v = \gamma_1 (n_2 \Omega_1 + n_3 \Omega_2)\, v \qquad (3)$$

where $\gamma_1$ is the only non-zero eigenvalue, $n_1 = N_1 + N_2 - 1$, $n_2 = N_1 - 1$, and $n_3 = N_2 - 1$. The FLD vector v maximizes the absolute difference between the sample means of the two groups, normalized by the sum of the covariances of the groups.
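Equation (3) is a generalized eigenproblem; a minimal numpy sketch of solving it is below (names are ours; the scale and sign of v are arbitrary, since only the projection direction matters):

```python
import numpy as np

def fld_projection(B1, B2):
    """Solve the Eq. (3) generalized eigenproblem for the FLD direction v.

    B1, B2: (N1, 9) and (N2, 9) matrices of per-image OLS coefficients.
    Returns v and the scalar z-projections of each group.
    """
    N1, N2 = B1.shape[0], B2.shape[0]
    n1, n2, n3 = N1 + N2 - 1, N1 - 1, N2 - 1
    Bp = np.vstack([B1, B2])                      # pooled sample
    cov = lambda B: np.cov(B, rowvar=False)       # Bm^T Bm / (n - 1)
    lhs = n1 * cov(Bp) - n2 * cov(B1) - n3 * cov(B2)
    rhs = n2 * cov(B1) + n3 * cov(B2)
    # Reduce (lhs) v = gamma (rhs) v to a standard eigenproblem.
    w, V = np.linalg.eig(np.linalg.solve(rhs, lhs))
    v = np.real(V[:, np.argmax(np.real(w))])      # eigenvector of gamma_1
    return v, B1 @ v, B2 @ v
```

Projecting each subject’s 9 coefficients onto v collapses the comparison of two groups to a one-dimensional separation problem.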

For discrimination of each group pair, a graphical presentation was provided for visualization and interpretation. The z - projection values (x-axis) were plotted against their corresponding probability density values (y-axis).

The Kolmogorov–Smirnov (KS) test was used to verify that z - projections in each group were normally distributed [36], hence allowing the use of Kullback-Leibler discrimination (KLD) statistics. KLD statistics are a special case of the Neyman-Pearson log-likelihood ratio hypothesis test. Applied to 2 normally distributed z - projection density functions, f1(z) and f2(z), for example as seen in Fig. 2 through Fig. 4, KLD statistics are values of a discrimination function L(z) given by Eq. (4).

$$L_{1,2}(z) = \ln\left(\frac{f_1(z)}{f_2(z)}\right) = \ln\left(\frac{s_2}{s_1}\right) + \frac{(z - m_2)^2}{2 s_2^2} - \frac{(z - m_1)^2}{2 s_1^2} \qquad (4)$$

where $m_i$ and $s_i$, $i = 1, 2$, are the sample means and standard deviations of the hypothesized z - projection distributions. If the groups are perfectly separated, L1 values for all cases in group 1 are positive, L2 values for all cases in group 2 are negative, and L2,1 = -L1,2. Misclassified group 1 z - projections have negative L1 values and misclassified group 2 z - projections have positive L2 values. The larger the L1 value for a group 1 case, the more likely it is a true positive; the smaller the L2 value for a group 2 case, the more likely it is a true negative.
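The discrimination function of Eq. (4) is simply the log-likelihood ratio of two fitted normal densities; a direct transcription (function name is ours):

```python
import numpy as np

def kld_scores(z, m1, s1, m2, s2):
    """L_{1,2}(z) = ln(f1(z)/f2(z)) for normal densities with the given
    sample means (m1, m2) and standard deviations (s1, s2), per Eq. (4)."""
    z = np.asarray(z, dtype=float)
    return (np.log(s2 / s1)
            + (z - m2) ** 2 / (2 * s2 ** 2)
            - (z - m1) ** 2 / (2 * s1 ** 2))
```

For example, a group 1 case at z = m1 with m1 = 0, m2 = 3, and s1 = s2 = 1 scores L = 4.5 > 0 and is therefore classified as group 1.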


Fig. 2 Probability densities of z - projections, and L1, L2, L3, and L4 values between non-diabetic subjects (NC, squares) and diabetic subjects (NDR, NPDR, and PDR, triangles). a) NC group 1 and NDR group 2, b) NC group 1 and NPDR group 3, c) NC group 1 and PDR group 4. Misclassified cases in group 1 have negative L1 values and misclassified cases in groups 2, 3, and 4 have positive L2, L3, and L4 values, respectively. The larger L1 values and the smaller L2, L3, and L4 values denote more likely true positive and true negative cases, respectively.


The automated method was applied to images obtained in 4 groups of subjects, namely, NC (group 1), NDR (group 2), NPDR (group 3), and PDR (group 4). This resulted in comparison of 6 group pairs. The discrimination rate of each group pair was calculated as the ratio of the number of correctly classified cases to the total number of cases.
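Given the sign conventions above (correct group 1 cases have L1 > 0, correct group 2 cases have L2 < 0), the rate computation reduces to a count; a sketch with our own naming:

```python
import numpy as np

def discrimination_rate(L1, L2):
    """Fraction of correctly classified cases in a group pair: group 1
    cases are correct when L1 > 0, group 2 cases when L2 < 0."""
    correct = int((np.asarray(L1) > 0).sum() + (np.asarray(L2) < 0).sum())
    return correct / (len(L1) + len(L2))
```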

2.5 Image discrimination validation

Two tests were performed to establish the validity of the method. First, the effect of ROI selection on the discrimination rate was investigated by applying the method twice for discrimination of all group pairs, using 2 different non-overlapping ROIs for each subject. Second, a negative control test was performed by applying the discrimination method to subgroups of group 1 (NC). The subgroups (groups 1a and 1b) were created by randomly dividing group 1 into two groups of equal size.

2.6 Human observer image discrimination

Expert retinal specialists (N. P. Blair and F. Chau) with experience in retinal vascular diseases served as human observers and performed image discrimination. The human observers were masked to the grouping of the subjects and the result of the automated discrimination method. They visually inspected images of the conjunctival microvasculature in paired groups and assigned each image to one of the 2 groups. The discrimination rates were calculated similar to the automated method.

3. Results

The KS test verified normal distribution of z – projections in all groups (p < 0.0001). Figure 2 shows probability densities of z – projections, and the KLD statistics between non-diabetic and diabetic groups. Figure 2(a) displays discrimination results between group 1 (NC) and group 2 (NDR). The automated method discrimination rate was 72% (28/39) with 4 and 7 misclassifications in group 1 and group 2, respectively. The range of L1 values for correctly discriminated cases in group 1 was between 0 and 2. The range of L2 values for correctly discriminated cases in group 2 was between −5 and 0. Figure 2(b) displays discrimination results between group 1 (NC) and group 3 (NPDR). The automated method discrimination rate was 90% (35/39) with 2 misclassifications in each group. The range of L1 values for correctly discriminated cases in group 1 was between 0 and 9. The range of L3 values for correctly discriminated cases in group 3 was between −3 and 0. Figure 2(c) displays discrimination results between group 1 (NC) and group 4 (PDR). The automated method discrimination rate was 95% (40/42) with 0 and 2 misclassifications in group 1 and group 4, respectively. The range of L1 values for correctly discriminated cases in group 1 was between 0 and 5. The range of L4 values for correctly discriminated cases in group 4 was between −11 and 0.

As listed in Table 1, using a second set of ROIs, the discrimination rates were 72%, 85%, and 93% between group 1 (NC) and group 2 (NDR) and between group 1 (NC) and group 3 (NPDR) and between group 1 (NC) and group 4 (PDR), respectively. The difference between discrimination rates determined using different ROIs was on average 2%. As expected, the discrimination rate was lowest between NC and NDR with no retinopathy, and highest between NC and PDR with the most advanced retinopathy.


Table 1. Discrimination rates derived by the automated method and both human observers

Figure 3 shows probability densities of z - projections, and the KLD statistics between diabetic groups. Figure 3(a) displays discrimination results between group 2 (NDR) and group 3 (NPDR). The automated method discrimination rate was 91% (31/34) with 1 and 2 misclassifications in group 2 and group 3, respectively. The range of L2 values for correctly discriminated cases in group 2 was between 0 and 6. The range of L3 values for correctly discriminated cases in group 3 was between −11 and 0. Figure 3(b) displays discrimination results between group 2 (NDR) and group 4 (PDR). The automated method discrimination rate was 84% (31/37) with 2 and 4 misclassifications in group 2 and group 4, respectively. The range of L2 values for correctly discriminated cases in group 2 was between 0 and 4. The range of L4 values for correctly discriminated cases in group 4 was between −7 and 0. Figure 3(c) displays discrimination results between group 3 (NPDR) and group 4 (PDR). The automated method discrimination rate was 95% (35/37) with 0 and 2 misclassifications in group 3 and group 4, respectively. The range of L3 values for correctly discriminated cases in group 3 was between 0 and 3. The range of L4 values for correctly discriminated cases in group 4 was between −23 and 0.


Fig. 3 Probability densities of z - projections, and L2, L3, and L4 values between diabetic groups. a) NDR group 2 (squares) and NPDR group 3 (triangles). Misclassified cases in groups 2 and 3 have negative L2 values and positive L3 values, respectively. b) NDR group 2 (squares) and PDR group 4 (triangles). Misclassified cases in groups 2 and 4 have negative L2 values and positive L4 values, respectively. c) NPDR group 3 (squares) and PDR group 4 (triangles). Misclassified cases in groups 3 and 4 have negative L3 values and positive L4 values, respectively. The larger L2 values denote more likely true positive cases and the smaller L3 and L4 values denote more likely true negative cases, except for comparison of groups 3 and 4, in which the larger L3 values denote more likely true positive cases.


As listed in Table 1, using a second set of ROIs, the discrimination rates were 82%, 82%, and 97% between group 2 (NDR) and group 3 (NPDR), between group 2 (NDR) and group 4 (PDR), and between group 3 (NPDR) and group 4 (PDR), respectively. The difference between discrimination rates determined using different ROIs was on average 4%.

Results of discrimination obtained by the negative control test in group 1a (NC) and group 1b (NC) are shown in Fig. 4. The automated method discrimination rate was 55% (12/22) with 6 and 4 misclassifications in group 1a and group 1b, respectively. The range of L1a values for correctly discriminated cases in group 1a was between 0 and 4. The range of L1b values for correctly discriminated cases in group 1b was between −2 and 0.


Fig. 4 Probability densities of z - projections, and L1a, L1b values between non-diabetic groups. Images in NC subjects were randomly stratified into 2 groups of equal size, group 1a (squares) and group 1b (triangles). Misclassified cases in groups 1a and 1b have negative L1a values and positive L1b values, respectively. The larger L1a values and the smaller L1b values denote more likely true positive and true negative cases, respectively.


Conjunctival image discrimination rates derived by the automated method and both human observers are summarized in Table 1. The two human observers’ discrimination rates between NC and each of the 3 diabetic groups (NDR, NPDR, and PDR) were 56% and 56%, 56% and 59%, and 45% and 67%, respectively. The human observers’ discrimination rates comparing NDR with NPDR and NDR with PDR were 59% and 62%, and 62% and 57%, respectively. Comparison of the NPDR and PDR groups yielded discrimination rates of 59% and 54% for the 2 observers. The human observers’ discrimination rates were on average 56% and 59%, similar to discriminating images by chance. The discrimination rates derived by the automated method were consistently higher than those determined by both human observers.

4. Discussion and conclusion

Assessment of the conjunctival microvasculature can potentially provide information about microvascular abnormalities due to systemic vascular diseases. In the current study, we demonstrated application of an automated method for discrimination of conjunctival microvasculature images according to stages of DR. Furthermore, quantitative assessment of the strength of discrimination (i.e., the likelihood that an image is correctly discriminated) was demonstrated using KLD statistics. The automated method was validated first by demonstrating that discrimination was largely independent of ROI selection, and second, by showing a considerably lower discrimination rate between two groups of control subjects.

The accuracy of discriminating PDR subjects was over 90% by the automated method, similar to the previously reported application of the method for classification of magnetic resonance images (MRI) of normal and demented brains [32]. The discrimination rates of the automated method for clinical and non-clinical DR were over 80% and 70%, respectively. All automated discrimination rates were higher than rates determined by the human observers. The lower discrimination rates obtained by the human observers suggest that the automated method can identify alterations in the microvasculature undetected by a trained observer.

Since DR is a progressive microvascular disease, it is important to detect and monitor the presence of abnormalities at early stages of retinopathy. Current clinical techniques require a dilated retinal exam by a specialist, which may not be accessible or affordable to many diabetic people. The availability of non-invasive conjunctival microvasculature imaging and an automated image analysis technique can be potentially useful for quick and frequent screening of subjects and referral to a specialist for monitoring and treatment.

Previous studies have reported conjunctival microvascular alterations in diabetic subjects [17–23]. However, the current study is the first to our knowledge to apply a fine structure analysis method for discrimination of conjunctival microvasculature images according to DR stages. Compared to retinal examination, conjunctival imaging takes a few seconds, does not require pupil dilation, and is more cost-efficient. Another advantage of the method is its rapid image analysis, which required less than 4 seconds to analyze all images in the group pairs on a 1.3 GHz system with 8 GB RAM. This enables classification of very large image data sets.

The discrimination method detects global alterations in the microvascular network that are not visually detectable, to determine the probability that an image belongs to a certain group. The method consists of in-depth mathematical and statistical analysis of pixel-by-pixel intensity variations in the entire image, enabling computer-based discrimination to detect features and patterns that may not be visually discernible by human observers. Considering each pixel as an independent variable, each image contains 10^6 features that can contribute to image classification. Therefore, the automated method of image discrimination is different from trained retinal specialists who examine the gross anatomy of blood vessels, such as vessel dilation, obliteration, and increased tortuosity. For example, the method may detect features such as vessel wall thickening and stiffening which are not evident by visual inspection of images, but can influence the results obtained by the automated discrimination method. Future studies are needed to determine the specific image features that influence automated image discrimination rates. Additionally, application of the method requires good conjunctival image quality, which can be affected by eye movement and curvature. A potential solution is to incorporate a more rapid image acquisition system coupled with an autofocus lens. In the current study, there was a significant difference in age between NC and PDR subjects. Future studies are needed to determine the effects of age and other confounding factors on the discrimination results. Finally, further studies with larger sample sizes are needed to validate these preliminary results and establish the sensitivity of the method for screening of DR subjects. Nevertheless, the findings of the current study demonstrated the feasibility of applying an automated image analysis method to conjunctival microvasculature images for discrimination of stages of DR.
Due to the accessibility of conjunctiva for direct imaging, this method shows promise for DR screening and monitoring.

Acknowledgments

Supported by NIH grants DK104393 and EY001792, Senior Scientific Investigator (MS) and departmental awards from Research to Prevent Blindness. The authors would like to acknowledge the contributions of Anthony Felder and Michael Tan for image acquisition and Nicole Camardo for image preparation.

References and links

1. Centers for Disease Control and Prevention, “National diabetes statistics report: estimates of diabetes and its burden in the United States, 2014,” Atlanta, GA: US Department of Health and Human Services (2014).

2. M. Lombardo, M. Parravano, S. Serrao, P. Ducoli, M. Stirpe, and G. Lombardo, “Analysis of retinal capillaries in patients with type 1 diabetes and nonproliferative diabetic retinopathy using adaptive optics imaging,” Retina 33(8), 1630–1639 (2013). [CrossRef]   [PubMed]  

3. B. Antal and A. Hajdu, “An ensemble-based system for microaneurysm detection and diabetic retinopathy grading,” IEEE Trans. Biomed. Eng. 59(6), 1720–1726 (2012). [CrossRef]   [PubMed]  

4. R. Bernardes, S. Nunes, I. Pereira, T. Torrent, A. Rosa, D. Coelho, and J. Cunha-Vaz, “Computer-assisted microaneurysm turnover in the early stages of diabetic retinopathy,” Ophthalmologica 223(5), 284–291 (2009). [CrossRef]   [PubMed]  

5. A. D. Fleming, K. A. Goatman, S. Philip, G. J. Williams, G. J. Prescott, G. S. Scotland, P. McNamee, G. P. Leese, W. N. Wykes, P. F. Sharp, J. A. Olson, and Scottish Diabetic Retinopathy Clinical Research Network, “The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy,” Br. J. Ophthalmol. 94(6), 706–711 (2010). [CrossRef]   [PubMed]  

6. M. D. Abràmoff, J. C. Folk, D. P. Han, J. D. Walker, D. F. Williams, S. R. Russell, P. Massin, B. Cochener, P. Gain, L. Tang, M. Lamard, D. C. Moga, G. Quellec, and M. Niemeijer, “Automated analysis of retinal images for detection of referable diabetic retinopathy,” JAMA Ophthalmol. 131(3), 351–357 (2013). [CrossRef]   [PubMed]  

7. C. Y. Cheung, W. Hsu, M. L. Lee, J. J. Wang, P. Mitchell, Q. P. Lau, H. Hamzah, M. Ho, and T. Y. Wong, “A new method to measure peripheral retinal vascular caliber over an extended area,” Microcirculation 17(7), 495–503 (2010). [PubMed]  

8. J. Grauslund, A. Green, R. Kawasaki, L. Hodgson, A. K. Sjølie, and T. Y. Wong, “Retinal vascular fractals and microvascular and macrovascular complications in type 1 diabetes,” Ophthalmology 117(7), 1400–1405 (2010). [CrossRef]   [PubMed]  

9. G. Liew, J. J. Wang, N. Cheung, Y. P. Zhang, W. Hsu, M. L. Lee, P. Mitchell, G. Tikellis, B. Taylor, and T. Y. Wong, “The retinal vasculature as a fractal: methodology, reliability, and relationship to blood pressure,” Ophthalmology 115(11), 1951–1956 (2008). [CrossRef]   [PubMed]  

10. M. Z. C. Azemin, D. K. Kumar, T. Y. Wong, J. J. Wang, P. Mitchell, R. Kawasaki, and H. Wu, “Age-related rarefaction in the fractal dimension of retinal vessel,” Neurobiol. Aging 33(1), 194 (2012). [CrossRef]   [PubMed]  

11. D. Huang, Y. Jia, and S. S. Gao, “Principles of optical coherence tomography angiography,” in OCT Angiography Atlas, B. Lumbroso, D. Huang, P. Rosenfeld, C. Chen, M. Rispoli, and A. Romano, eds. (Jaypee Brothers Medical Publishers, New Delhi, 2015).

12. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. U.S.A. 112(18), E2395–E2402 (2015). [CrossRef]   [PubMed]  

13. A. Ishibazawa, T. Nagaoka, A. Takahashi, T. Omae, T. Tani, K. Sogawa, H. Yokota, and A. Yoshida, “Optical coherence tomography angiography in diabetic retinopathy: a prospective pilot study,” Am. J. Ophthalmol. 160(1), 35–44 (2015). [CrossRef]   [PubMed]  

14. T. S. Hwang, S. S. Gao, L. Liu, A. K. Lauer, S. T. Bailey, C. J. Flaxel, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of capillary nonperfusion using optical coherence tomography angiography in diabetic retinopathy,” JAMA Ophthalmol. 134(4), 367–373 (2016). [CrossRef]   [PubMed]  

15. M. Zhang, J. Wang, A. D. Pechauer, T. S. Hwang, S. S. Gao, L. Liu, L. Liu, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Advanced image processing for optical coherence tomographic angiography of macular diseases,” Biomed. Opt. Express 6(12), 4661–4675 (2015). [CrossRef]   [PubMed]  

16. C. G. Owen, R. S. Newsom, A. R. Rudnicka, T. J. Ellis, and E. G. Woodward, “Vascular response of the bulbar conjunctiva to diabetes and elevated blood pressure,” Ophthalmology 112(10), 1801–1808 (2005). [CrossRef]   [PubMed]  

17. A. T. Cheung, A. R. Price, P. L. Duong, S. Ramanujam, J. Gut, E. C. Larkin, P. C. Chen, and D. M. Wilson, “Microvascular abnormalities in pediatric diabetic patients,” Microvasc. Res. 63(3), 252–258 (2002). [CrossRef]   [PubMed]  

18. C. G. Owen, R. S. Newsom, A. R. Rudnicka, S. A. Barman, E. G. Woodward, and T. J. Ellis, “Diabetes and the tortuosity of vessels of the bulbar conjunctiva,” Ophthalmology 115(6), e27–e32 (2008). [CrossRef]   [PubMed]  

19. N. Stuebiger, W. Smiddy, J. Wang, H. Jiang, and D. C. DeBuc, “Assessment of conjunctival microangiopathy in a patient with diabetes mellitus using the Retinal Function Imager,” J. Clin. Exp. Ophthalmol. 6(1), 400 (2015).

20. W. J. To, D. G. Telander, M. E. Lloyd, P. C. Chen, and A. T. Cheung, “Correlation of conjunctival microangiopathy with retinopathy in type-2 diabetes mellitus (T2DM) patients,” Clin. Hemorheol. Microcirc. 47(2), 131–141 (2011).

21. A. T. Cheung, S. Ramanujam, D. A. Greer, L. F. Kumagai, and T. T. Aoki, “Microvascular abnormalities in the bulbar conjunctiva of patients with type 2 diabetes mellitus,” Endocr. Pract. 7(5), 358–363 (2001).

22. D. M. Worthen, B. M. Fenton, P. Rosen, and B. Zweifach, “Morphometry of diabetic conjunctival blood vessels,” Ophthalmology 88(7), 655–657 (1981).

23. S. Devaraj, A. T. Cheung, I. Jialal, S. C. Griffen, D. Nguyen, N. Glaser, and T. Aoki, “Evidence of increased inflammation and microcirculatory abnormalities in patients with type 1 diabetes and their role in microvascular complications,” Diabetes 56(11), 2790–2796 (2007).

24. H. Jiang, Y. Ye, D. C. DeBuc, B. L. Lam, T. Rundek, A. Tao, Y. Shao, and J. Wang, “Human conjunctival microvasculature assessed with a retinal function imager (RFI),” Microvasc. Res. 85, 134–137 (2013).

25. R. van Zijderveld, C. Ince, and R. O. Schlingemann, “Orthogonal polarization spectral imaging of conjunctival microcirculation,” Graefes Arch. Clin. Exp. Ophthalmol. 252(5), 773–779 (2014).

26. M. Shahidi, J. Wanek, B. Gaynes, and T. Wu, “Quantitative assessment of conjunctival microvascular circulation of the human eye,” Microvasc. Res. 79(2), 109–113 (2010).

27. A. G. Koutsiaris, S. V. Tachmitzi, N. Batis, M. G. Kotoula, C. H. Karabatsas, E. Tsironi, and D. Z. Chatzoulis, “Volume flow and wall shear stress quantification in the human conjunctival capillaries and post-capillary venules in vivo,” Biorheology 44(5-6), 375–386 (2007).

28. A. T. Cheung, R. V. Perez, and P. C. Chen, “Improvements in diabetic microangiopathy after successful simultaneous pancreas-kidney transplantation: a computer-assisted intravital microscopy study on the conjunctival microcirculation,” Transplantation 68(7), 927–932 (1999).

29. A. T. Cheung, P. C. Chen, E. C. Larkin, P. L. Duong, S. Ramanujam, F. Tablin, and T. Wun, “Microvascular abnormalities in sickle cell disease: a computer-assisted intravital microscopy study,” Blood 99(11), 3999–4005 (2002).

30. M. M. Khansari, J. Wanek, A. E. Felder, N. Camardo, and M. Shahidi, “Automated assessment of hemodynamics in the conjunctival microvasculature network,” IEEE Trans. Med. Imaging 35(2), 605–611 (2016).

31. H. Jiang, J. Zhong, D. C. DeBuc, A. Tao, Z. Xu, B. L. Lam, C. Liu, and J. Wang, “Functional slit lamp biomicroscopy for imaging bulbar conjunctival microvasculature in contact lens wearers,” Microvasc. Res. 92, 62–71 (2014).

32. W. O’Neill, R. Penn, M. Werner, and J. Thomas, “A theory of fine structure image models with an application to detection and classification of dementia,” Quant. Imaging Med. Surg. 5(3), 356–367 (2015).

33. G. E. P. Box, G. M. Jenkins, and G. C. Reinsel, Time Series Analysis: Forecasting and Control (Prentice Hall, Upper Saddle River, N.J., 1994).

34. N. Draper and H. Smith, Applied Regression Analysis (Wiley and Sons, New York, 1981).

35. J. Magnus, Linear Structures (Oxford University Press, 1988).

36. S. S. Wilks, Mathematical Statistics (Wiley, 1963).

Figures (4)

Fig. 1 An example of a cropped mosaic image of the conjunctiva of a diabetic (PDR) subject. Regions with illumination artifacts and blur are visible. Two 3.1 mm × 3.1 mm regions of interest (ROIs) that were selected for analysis are outlined by squares.
Fig. 2 Probability densities of z-projections, and L1, L2, L3, and L4 values between non-diabetic subjects (NC, squares) and diabetic subjects (NDR, NPDR, and PDR, triangles). a) NC group 1 and NDR group 2, b) NC group 1 and NPDR group 3, c) NC group 1 and PDR group 4. Misclassified cases in group 1 have negative L1 values and misclassified cases in groups 2, 3, and 4 have positive L2, L3, and L4 values, respectively. Larger L1 values and smaller L2, L3, and L4 values denote more likely true positive and true negative cases, respectively.
Fig. 3 Probability densities of z-projections, and L2, L3, and L4 values between diabetic groups. a) NDR group 2 (squares) and NPDR group 3 (triangles). Misclassified cases in groups 2 and 3 have negative L2 values and positive L3 values, respectively. b) NDR group 2 (squares) and PDR group 4 (triangles). Misclassified cases in groups 2 and 4 have negative L2 values and positive L4 values, respectively. c) NPDR group 3 (squares) and PDR group 4 (triangles). Misclassified cases in groups 3 and 4 have negative L3 values and positive L4 values, respectively. Larger L2 values denote more likely true positive cases and smaller L3 and L4 values denote more likely true negative cases, except for the comparison of groups 3 and 4, in which larger L3 values denote more likely true positive cases.
Fig. 4 Probability densities of z-projections, and L1a, L1b values between non-diabetic groups. Images in NC subjects were randomly stratified into 2 groups of equal size, group 1a (squares) and group 1b (triangles). Misclassified cases in groups 1a and 1b have negative L1a values and positive L1b values, respectively. Larger L1a values and smaller L1b values denote more likely true positive and true negative cases, respectively.

Tables (1)

Table 1 Discrimination rates derived by the automated method and both human observers

Equations (4)

$$y_{i,j}=\sum_{k=0}^{2}\sum_{l=0}^{2}b_{kl}\,y_{i-k,\,j-l}+u_{i,j},\qquad k+l>0 \tag{1}$$
$$b=\left(X^{T}X\right)^{-1}X^{T}y_{0} \tag{2}$$
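As an illustrative sketch (not the authors' code), the causal 3×3 autoregressive pixel model and its ordinary least squares coefficient estimate above can be implemented in a few lines of NumPy. The function name and the image handling are assumptions; only the regression itself follows the equations.

```python
import numpy as np

def fit_fine_structure(img):
    """Fit the causal 3x3 2-D autoregressive model by ordinary least
    squares. `img` is a 2-D grayscale array (e.g., one ROI). Returns
    the 8 coefficients b_kl for lags (k, l), 0 <= k, l <= 2, k + l > 0."""
    rows, cols = img.shape
    lags = [(k, l) for k in range(3) for l in range(3) if k + l > 0]
    # Dependent pixels y_{i,j}, excluding the two-pixel causal border.
    y0 = img[2:, 2:].ravel()
    # Design matrix: one column per causal neighbor y_{i-k, j-l}.
    X = np.column_stack([img[2 - k : rows - k, 2 - l : cols - l].ravel()
                         for (k, l) in lags])
    # Least squares solution b = (X^T X)^{-1} X^T y0.
    b, *_ = np.linalg.lstsq(X, y0, rcond=None)
    return b
```

In the paper's pipeline, one such coefficient vector per region of interest would serve as the feature vector passed on to the discriminant analysis.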
$$\left(n_{1}\Omega_{p}-n_{2}\Omega_{1}-n_{3}\Omega_{2}\right)v=\gamma_{1}\left(n_{2}\Omega_{1}+n_{3}\Omega_{2}\right)v \tag{3}$$
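The discriminant direction v is the leading solution of a generalized eigenproblem of this kind. The sketch below uses the textbook Fisher form, between-group scatter against within-group scatter (S_b v = γ S_w v); the specific weighting of the Ω matrices by the group sizes n_i in the paper's formulation may differ, so treat the scatter definitions here as assumptions.

```python
import numpy as np

def fisher_direction(groups):
    """Leading Fisher discriminant direction v solving S_b v = gamma S_w v,
    with within-group scatter S_w and between-group scatter S_b.
    `groups` is a list of (n_g, p) arrays of feature vectors."""
    X = np.vstack(groups)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    Sw = np.zeros((p, p))  # within-group scatter
    Sb = np.zeros((p, p))  # between-group scatter
    for g in groups:
        m = g.mean(axis=0)
        Sw += (g - m).T @ (g - m)
        d = (m - grand_mean)[:, None]
        Sb += len(g) * (d @ d.T)
    # Reduce to an ordinary eigenproblem: S_w^{-1} S_b v = gamma v.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    v = vecs[:, np.argmax(vals.real)].real
    return v / np.linalg.norm(v)
```

Projecting each subject's feature vector onto v yields the scalar z-projections whose densities are compared in Figs. 2–4.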
$$L_{1,2}(z)=\ln\!\left(\frac{f_{1}(z)}{f_{2}(z)}\right)=\ln\!\left(\frac{s_{2}}{s_{1}}\right)+\frac{(z-m_{2})^{2}}{2s_{2}^{2}}-\frac{(z-m_{1})^{2}}{2s_{1}^{2}} \tag{4}$$
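The classification statistic above is the log-ratio of two univariate normal densities with means m1, m2 and standard deviations s1, s2, evaluated at a subject's z-projection. A direct transcription (function name assumed):

```python
import numpy as np

def llr(z, m1, s1, m2, s2):
    """Log-likelihood ratio L_{1,2}(z) for two univariate normal
    densities f1 ~ N(m1, s1^2) and f2 ~ N(m2, s2^2). Positive values
    favor group 1; negative values favor group 2."""
    return (np.log(s2 / s1)
            + (z - m2) ** 2 / (2 * s2 ** 2)
            - (z - m1) ** 2 / (2 * s1 ** 2))
```

With equal standard deviations, L vanishes at the midpoint between the two means, which is the decision boundary; this matches the sign conventions used to flag misclassified cases in the figure captions.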