
Portable widefield fundus camera with high dynamic range imaging capability

Open Access

Abstract

Fundus photography is indispensable for the clinical detection and management of eye diseases. Low image contrast and small field of view (FOV) are common limitations of conventional fundus photography, making it difficult to detect subtle abnormalities at the early stages of eye diseases. Further improvements in image contrast and FOV coverage are important for early disease detection and reliable treatment assessment. We report here a portable, wide FOV fundus camera with high dynamic range (HDR) imaging capability. Miniaturized indirect ophthalmoscopy illumination was employed to achieve the portable design for nonmydriatic, widefield fundus photography. Orthogonal polarization control was used to eliminate illumination reflectance artifacts. With independent power controls, three fundus images were sequentially acquired and fused to achieve HDR function for local image contrast enhancement. A 101° eye-angle (67° visual-angle) snapshot FOV was achieved for nonmydriatic fundus photography. The effective FOV was readily expanded up to 190° eye-angle (134° visual-angle) with the aid of a fixation target without the need for pharmacologic pupillary dilation. The effectiveness of HDR imaging was validated with both normal healthy and pathologic eyes, compared to a conventional fundus camera.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Vision-threatening eye diseases, such as diabetic retinopathy (DR), glaucoma, and age-related macular degeneration (AMD), affect approximately 400 million people worldwide, and the number of people with these conditions is projected to increase to 560 million by the year 2045 [1–3]. Early detection and prompt treatment assessment are important to prevent vision loss due to these diseases. Since eye conditions can affect both the peripheral and central retina, widefield fundus photography is important for the clinical management of eye diseases. Scanning laser ophthalmoscopy (SLO) systems, such as the Optomap (Optos, Marlborough, MA, USA) and Eidon (Icare USA Inc., Raleigh, NC, USA), can provide widefield and ultra-widefield fundus imaging [4–6] by combining two or more laser wavelengths. However, the sophisticated scanning and illumination devices needed for SLOs make them typically bulky and expensive. As almost 89% of visually impaired patients live in low- and middle-income countries (LMICs) [7], where eye care providers are scarce, a portable fundus camera that facilitates affordable telemedicine is preferable.

Currently available commercial portable fundus cameras have a limited field of view (FOV) and image contrast, which are common limitations of annular trans-pupillary illumination [8,9]. The FOV of conventional fundus cameras is typically around 45°–67.5° eye-angle (30°–45° visual-angle) [10]. In conventional fundus cameras, the illumination light is delivered through the annular-shaped peripheral region of the available pupil, and the imaging light from the back of the eye is collected through the center of the pupil (Fig. 1(a)). At the pupil plane, a proper buffer zone between the illumination and imaging windows is required to prevent reflection artifacts from the cornea and crystalline lens [11]. At the retina plane, the region for imaging must be fully covered by the illumination light delivered through the annular-shaped window. Therefore, the regions used for light illumination and imaging observation must be carefully balanced. This tradeoff limits the FOV, so pharmacologic pupillary dilation, which is stressful for both patient and operator, is often required to expand the FOV for comprehensive eye examination [12].

Fig. 1. (a) Schematic illustration of conventional transpupillary illumination with annular pattern in traditional fundus cameras and (b) miniaturized indirect illumination with simplified single-point pattern.

Another common challenge of fundus photography is that the luminance range needed to capture sufficient detail in the human fundus may exceed the dynamic range of the camera sensor. For example, the light reflected from the optic nerve head is multiple orders of magnitude greater than that from the macular region because of the different densities of pigmented cells and nerve fibers [13]. Therefore, when the illumination power is adjusted to image the peripapillary region, the macular region falls below the noise floor of the camera sensor. Adjusting the power level for better macula imaging often leads to saturation in the peripapillary region. Structural details residing below the noise floor or in saturated regions cannot be recovered. Because compromised image contrast and inhomogeneity can severely hinder grading of retinal disease [14], a more holistic image is preferable. Numerous studies have sought to improve fundus image quality by correcting scene inhomogeneity or enhancing the contrast between retinal segments [15,16]. However, these algorithms perform poorly at preserving details in saturated regions and can create false colors that cause misdiagnosis.

Miniaturized indirect illumination (Fig. 1(b)) provides a simple workaround for the limitation of the available pupil, making it an alternative solution for developing widefield portable fundus cameras [17]. In this work, we present a miniaturized indirect illumination-based fundus camera that achieves portable, nonmydriatic, widefield fundus imaging. Orthogonal polarization control is used to eliminate the back-reflected light artifacts encountered in miniaturized indirect illumination-based fundus systems [17,18]. High dynamic range (HDR) imaging capability is integrated to extend the dynamic range and enhance the contrast of fundus images for better visualization of the retina and retinal disease biomarkers.

In fundus imaging, the conventional unit for FOV evaluation is the visual-angle degree. However, recently emerging widefield fundus imagers such as the Optomap (Optos Inc., Marlborough, MA, USA), ICON (Neolight, Pleasanton, CA, USA), and Retcam (Natus Medical Systems, Pleasanton, CA, USA) use the eye-angle degree as the unit of measurement. For easy comparison, we provide both visual-angle and eye-angle values in this article [19].

2. Materials and methods

2.1 Experimental setup

Figure 2 illustrates the optical layout (Fig. 2(a)) and a representative photograph (Fig. 2(c)) of the system during imaging. The camera lens (CL) (8 mm f/2.5 micro video lens, 58-203, Edmund Optics Inc., Barrington, NJ) and light source (LS) are situated on the same plane, which is conjugate to the pupil of the eye. The ophthalmic lens (OL) (25 mm focal length) creates an image of the retina at the retina conjugate plane (RCP), which is relayed to the camera sensor (CS) (FL3-U3-120S3C-C, FLIR Integrated Imaging Solutions Inc., Richmond, Canada). The magnification from the eye pupil to the CL-LS plane is set to 4X, and that from the retina to the sensor is 0.22X. Assuming a 4 mm pupil diameter under room lighting without pupillary dilation, this corresponds to a 16 mm diameter region available in the CL-LS plane to accommodate the camera lens and light source (layout shown separately in Fig. 2(b)). The light source contains an 810 nm LED (M850LP1, Thorlabs Inc., Newton, NJ, USA) for near infrared (NIR) imaging guidance and a broadband (FWHM 104 nm) LED with center wavelength at 565 nm (M565D2, Thorlabs Inc., Newton, NJ, USA) for color fundus imaging. The 565 nm illumination is optimal for visualizing retinal vasculature and differentiating the arterioles and the venules [20]. To understand the back-reflected light distribution without polarization control, non-sequential ray tracing was implemented in Zemax OpticStudio 18.7 (ZEMAX LLC., Kirkland, WA, USA). Figure 2(d) shows the reflectance artifact pattern at the camera sensor. There are two bright spots at the center with maximum intensity, as well as reflected light rays distributed throughout the whole sensor plane. Hence, apart from two saturated spots at the center of the image, there would be an overall haze in the fundus image if the back-reflected light were not rejected. Figure 2(e) shows a fundus image taken without any polarization control, in which the reflectance artifacts and overall haze can be observed. Hence, orthogonal polarizers P1 and P2 (LPVISE050-A, Thorlabs Inc., Newton, NJ, USA) are placed in front of the light source and camera lens, respectively, to achieve cross-polarized illumination and detection and thereby remove back-reflected light from the ophthalmic lens surface facing the camera lens.
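As a quick numerical check of the conjugate-plane geometry described above, the short Python sketch below computes the footprint available at the CL-LS plane and the retina-to-sensor scaling. The magnification values are those stated in the text; the variable names are illustrative only.

```python
# Minimal sketch (not from the system software): conjugate-plane geometry check.

pupil_diameter_mm = 4.0      # nonmydriatic pupil under room light, per the text
pupil_to_plane_mag = 4.0     # magnification from eye pupil to CL-LS plane
retina_to_sensor_mag = 0.22  # magnification from retina to camera sensor

# Diameter available at the CL-LS plane for placing the camera lens and LEDs
cl_ls_footprint_mm = pupil_diameter_mm * pupil_to_plane_mag
print(f"CL-LS plane footprint: {cl_ls_footprint_mm:.0f} mm")  # 16 mm

# A 1 mm retinal feature maps to 0.22 mm at the sensor plane
print(f"1 mm on the retina -> {1.0 * retina_to_sensor_mag:.2f} mm on the sensor")
```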

Fig. 2. (a) Optical diagram of the imaging system. (b) Layout of CL-LS plane. (c) Photographic illustration of the imaging operation. (d) Back reflection pattern at the sensor. (e) Representative fundus image with back reflection artifact (marked with white arrow).

2.2 High dynamic range imaging

In signal theory, the dynamic range is expressed as the ratio of the largest measurable signal to the smallest measurable signal. In the case of imaging, the signal is the light reflected from a surface and falling onto the sensor. The sensor is composed of pixels, and each pixel generates an electronic signal proportional to the number of photons falling onto it. If the amount of light on a pixel is equal to its full well capacity (${Q_{max}}$), the pixel is saturated, and a further increase in light intensity will not change the output value. Similarly, there is a minimum amount of energy ($Q_{min}^\ast$) that must fall onto a pixel to generate any signal. Moreover, due to the sensor's inherent noise, the actual minimum energy that produces a perceptible signal output is higher; we will call it ${Q_{min}}$. Therefore, the dynamic range of the sensor is

$$D = \frac{{{Q_{max}}}}{{{Q_{min}}}}$$

It is evident that if part of a scene reflects less energy than ${Q_{min}}$ or more energy than ${Q_{max}}$, the sensor will not be able to capture those details. The actual dynamic range does not depend on the number of bits in the image created by the sensor but is determined by the pixel size and the material properties of the sensor. Since a tradeoff is inevitable among sensor resolution, pixel size, and imaging speed, the dynamic range of a digital camera is limited. HDR imaging is a technique to extend the dynamic range of an imaging system beyond that of the digital camera sensor.
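To make the ratio above concrete, the following sketch evaluates the dynamic range for a hypothetical sensor and expresses it in decibels and photographic stops; the full-well and noise-floor values are assumed for illustration and are not specifications of the camera used in this study.

```python
import math

# Hypothetical sensor values (assumed, not measured from the reported system)
full_well_e = 20000.0   # Q_max: full-well capacity, electrons
noise_floor_e = 10.0    # Q_min: noise-limited minimum signal, electrons

dynamic_range = full_well_e / noise_floor_e          # D = Q_max / Q_min
dynamic_range_db = 20.0 * math.log10(dynamic_range)  # conventional dB expression
dynamic_range_stops = math.log2(dynamic_range)       # photographic "stops"

print(f"D = {dynamic_range:.0f} "
      f"({dynamic_range_db:.1f} dB, {dynamic_range_stops:.1f} stops)")
```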

Figure 3 illustrates the basic principle of HDR imaging. The horizontal bars show the luminance range of the scene under dim, moderate, and bright illumination conditions. $LD{R_1}$, $LD{R_2}$ and $LD{R_3}$ are the corresponding images of the scene. Under dim illumination, the region marked by the yellow square in $LD{R_1}$ is below the noise floor, so its details are imperceptible. By increasing the scene illumination, the brightness of this region can be lifted above the noise floor, as shown in $LD{R_3}$. However, as the scene illumination increases, regions that were bright in $LD{R_1}$ can rise above saturation, as depicted by the region marked with the blue square in $LD{R_3}$. Taken together, $LD{R_1}$, $LD{R_2}$ and $LD{R_3}$ improve the visualization of details present in the scene, although individually each shows compromised visualization due to saturation or background noise. The HDR image combines the information preserved across the image set and produces a composite image containing crucial details in both highlights and shadows.

Fig. 3. Schematic illustration of the HDR imaging principle.

If N images of a scene are taken under N different illumination conditions, the intensity value of each pixel of the HDR image ${L_{ij}}$ can be estimated using this formula [21]:

$${L_{ij}} = \frac{{\mathop \sum \nolimits_{k = 1}^N \frac{{{f^{ - 1}}({z_{ij}^k} )\,w({z_{ij}^k} )}}{{\Delta {x_k}}}}}{{\mathop \sum \nolimits_{k = 1}^N w({z_{ij}^k} )}}$$
where $z_{ij}^k$ is the pixel value at location $({i,j} )$ in the k-th LDR image and $w({z_{ij}^k} )$ is the weighting function associated with that pixel. These weights determine how much each LDR pixel contributes to the corresponding HDR pixel. The camera response function is denoted as f. LDR images can be taken by varying an illumination parameter, such as exposure time or flash power. $\Delta {x_k}$ denotes the value of the varied parameter for the k-th image, and each pixel of an LDR image is divided by the respective parameter value (exposure time or illumination power) for normalization. The weighting function is calculated from the camera response function [22–24].

Theoretically, increasing the number of LDR images would reduce the effect of sensor noise and thus increase the details preserved [25]. However, for nonmydriatic fundus imaging, the total acquisition time should be below the pupillary reflex time, which is around 150 ms. Therefore, we took three images of the eye, each with a 35 ms exposure time, and varied the illumination power. The rationale for changing the illumination power instead of the exposure time is as follows. First, there is a hardware overhead delay when the exposure setting is changed between two acquisitions. Second, the long exposure time needed to acquire the high-intensity LDR image causes motion blur in some cases. The power levels for the three images were set experimentally after calibration with subjects of different eye pigmentation levels. Increasing the illumination power by a factor of two with each acquisition worked best for this study. With this configuration, the shadow, midtone, and highlight information were all reasonably preserved in the HDR images.
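As a concrete illustration of the fusion formula above, the following Python sketch combines three power-bracketed LDR frames into an HDR estimate. It assumes a linear camera response (so the inverse response is the identity) and uses a simple hat-shaped weighting function; the actual camera response and weights would be obtained by calibration [22–24], so this is a simplified stand-in rather than the MATLAB implementation used in the study.

```python
import numpy as np

def fuse_hdr(ldr_images, powers, eps=1e-8):
    """Fuse power-bracketed LDR frames into an HDR radiance estimate.

    ldr_images : list of float arrays scaled to [0, 1]
    powers     : illumination power used for each frame (e.g., 2, 4, 8 mW),
                 playing the role of the varied parameter in the formula above
    """
    numerator = np.zeros_like(ldr_images[0], dtype=np.float64)
    denominator = np.zeros_like(ldr_images[0], dtype=np.float64)

    for z, dx in zip(ldr_images, powers):
        # Hat-shaped weighting: trust midtones, down-weight pixels near the
        # noise floor (0) or saturation (1); a calibrated weighting function
        # derived from the camera response would be used in practice.
        w = 1.0 - np.abs(2.0 * z - 1.0)
        numerator += w * (z / dx)   # linear response assumed: f^{-1}(z) = z
        denominator += w

    return numerator / (denominator + eps)

# Hypothetical usage with three frames acquired at doubled power levels:
# hdr = fuse_hdr([img_low, img_mid, img_high], powers=[2.0, 4.0, 8.0])
```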

2.3 Human subjects and imaging protocol

This study was approved by the Institutional Review Board of the University of Illinois at Chicago and followed the ethical standards stated in the Declaration of Helsinki. Informed consent was obtained from each subject before the experiment. It was confirmed that none of the subjects had any history of seizure, since the experiment involved bright flashes of light. The minimum pupil size required by our system is 4 mm, which is readily available in dimly lit room conditions. Alignment and focusing of the system were performed using NIR light, which does not stimulate the pupillary reflex. After focusing, three sequential visible-light images were taken with the illumination power doubled at each acquisition (e.g., 2 mW, 4 mW, and 8 mW at the pupil plane for the subject shown in Fig. 4) and with an exposure time of 35 ms for each image. A LabVIEW interface was created to stream the live view for the alignment procedure and to take the sequential images. After capturing the image sequence, HDR images were created with a custom-built MATLAB program. To compare image quality, fundus images of the same subjects were afterwards taken with a commercially available portable fundus camera, the Pictor Plus (Volk, Mentor, OH, USA). For FOV extension, a dimly lit external LED fixation target was placed in front of the subjects, and they were instructed to follow the target with the eye that was not being imaged. After taking a fundus image with the macula at the center, two images were captured by shifting the imaging location approximately 30°–33° visual-angle (45°–50° eye-angle) away from the macula-centered image in the left and right directions. Afterwards, four images were taken by shifting the macula-centered image location approximately 10°–15° visual-angle (15°–22.5° eye-angle) diagonally.

2.4 Light safety

The ISO standard "Ophthalmic Instruments—Fundus Cameras" (10940:2009) [26] was used to quantitatively evaluate the photochemical safety of the retina. The ISO standard allows a radiant exposure of 10 J/cm² on the retina, which is 10 times lower than the retinal photochemical damage threshold. It also provides a photochemical hazard weighting function for calculating the weighted irradiance. We neglect any light scattered by the cornea or the crystalline lens and assume that all light falling on the cornea reaches the retina. For visible light, we calculated the weighted irradiance of each flash by considering the spectrum of the LED and the photochemical hazard function. Afterwards, we summed the products of weighted irradiance and exposure time for all three flashes to get the radiant exposure on the retina for each acquisition. The radiant exposure of each acquisition was calculated to be $4.71 \times {10^{ - 6}}$ J/cm², which is well below the safety limit.

For the NIR guidance illumination, the weighted irradiance was calculated to be 0.06 mW/cm². We calculated the permitted time for continuous guidance at the NIR wavelength with this formula:

$${T_{max}} = \frac{{\textrm{Maximum allowed radiant exposure}}}{{\textrm{Calculated weighted irradiance}}}$$

The maximum exposure time for continuous illumination with NIR light is 46.3 hours.
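The arithmetic behind these two safety checks is summarized in the short sketch below. The weighted irradiance and radiant exposure values are taken from the text; using the 10 J/cm² allowance as the numerator of the NIR calculation is an assumption consistent with the reported 46.3-hour result, and the spectral weighting against the ISO hazard function is not reproduced here.

```python
# Light-safety arithmetic as described above; the spectral weighting of the
# LED emission against the ISO photochemical hazard function is not shown.

RETINAL_EXPOSURE_LIMIT_J_CM2 = 10.0          # ISO-allowed radiant exposure

# Visible-light flashes: radiant exposure per three-flash acquisition
radiant_exposure_per_acquisition = 4.71e-6   # J/cm^2, value reported in the text
print(radiant_exposure_per_acquisition < RETINAL_EXPOSURE_LIMIT_J_CM2)  # True

# NIR guidance: permitted continuous illumination time
nir_weighted_irradiance_w_cm2 = 0.06e-3      # 0.06 mW/cm^2, value reported in the text
t_max_seconds = RETINAL_EXPOSURE_LIMIT_J_CM2 / nir_weighted_irradiance_w_cm2
print(f"T_max = {t_max_seconds / 3600:.1f} hours")  # ~46.3 hours
```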

3. Results

Figure 4 illustrates the imaging procedure performed on a healthy subject. The NIR guidance is illustrated in Fig. 4(a). Because the design wavelength of the polarizers is in the visible region, the NIR image has back-reflection artifacts. Since the NIR wavelength was used exclusively for guidance, these reflection artifacts were not an issue. Three LDR images taken with three illumination power levels are shown in Fig. 4(b)-4(d). The visible-light images are free of any reflection artifacts.

Fig. 4. (a) NIR guidance image. (b) LDR image using low power flash. (c) LDR image using medium power flash. (d) LDR image using high power flash. (e) HDR image composed of the LDR images.

Details in different fundus regions are preserved in different images of the image set. For example, the low-power image shown in Fig. 4(b) preserves the information at the optic disc, but the other regions are barely recognizable. The LDR image taken with the medium power level in Fig. 4(c) preserves the structural details of the nerve fibers around the optic disc. Finally, the high-power LDR image in Fig. 4(d) preserves the details near the macula and the periphery, but the optic disc is saturated. Notably, none of the LDR images is holistic, and each excludes crucial details of the retina that are irrecoverable because they are either below the noise floor or saturated. Figure 4(e) is the HDR image created from the LDR images. All of the above-mentioned information contained in the set of LDR images is preserved in the single HDR image.

Figures 5(a) and 5(b) show the LDR images and the HDR image from a patient diagnosed with DR. A small section around the optic disc (marked with a blue square) was cropped from the LDR and HDR images, as shown in Fig. 5(c). Similarly, a more peripheral region (marked with a yellow square) was also selected, as shown in Fig. 5(d). It is evident from Figs. 5(c) and 5(d) that the HDR image preserves the information of small vessel growth (neovascularization, marked with a black arrowhead) overlying the optic disc, as well as microaneurysms (marked with a white arrowhead) present in the periphery, giving clinicians a comprehensive understanding of the stage of DR in this patient. None of the LDR images preserves both features at the same time.

Fig. 5. (a) LDR images of an eye diagnosed with DR with different illumination power levels. (b) HDR image of the eye. (c) Cropped portion marked with blue squares in LDR and HDR images. (d) Cropped portion marked with yellow squares in LDR and HDR images.

For comparative evaluation, Fig. 6 and Fig. 7 show HDR images captured with our prototype system (Fig. 2) and images from a commercial portable fundus camera, the Volk Pictor Plus (Volk, Mentor, OH), of patients diagnosed with DR and AMD, respectively. A small portion of the fundus showing microaneurysms was cropped from Fig. 6(a) and Fig. 6(b), and the corresponding hot colormaps are presented in Fig. 6(c). It is evident from the hot colormaps that the contrast of the microaneurysms (marked with black circles) is much better in the image taken with the prototype system. In a similar manner, a section around the macula of a patient diagnosed with AMD was cropped (Fig. 7). The pigment clumping (black spots) and RPE atrophy (white granular structures) are clearly more recognizable in the HDR image, as is evident from the respective hot colormaps.

Fig. 6. (a) HDR image of a patient diagnosed with DR. (b) Image from Volk Pictor Plus from the same subject. (c) Hot colormap of the cropped regions marked with yellow squares.

Fig. 7. (a) HDR image of a patient diagnosed with AMD. (b) Image from Volk Pictor Plus from the same subject. (c) Hot colormap of the cropped region marked with yellow square.

To demonstrate the capability of imaging the peripheral retina, we took seven images with the aid of an external fixation target and merged them together (Fig. 8(a)). A comparative image taken with the ultra-widefield SLO Optomap is shown in Fig. 8(b). The FOV of the image in Fig. 8(a) is estimated to be 190° eye-angle (134° visual-angle) horizontally and 146° eye-angle (100° visual-angle) vertically. From Fig. 8, it is evident that all major features present in the horizontal direction of the Optomap SLO image (marked with white arrows 1-6 in both images) are also preserved in the image taken with the portable HDR fundus camera. In the vertical direction, the presence of eyelashes can block the scanning laser light of the Optomap. Without the scanning requirement, the snapshot HDR fundus camera is not affected by the eyelashes and thus can image peripheral structures (yellow arrows, Fig. 8(a)) beyond the region accessible to the Optomap (Fig. 8(b)). The subjects of Fig. 4 and Fig. 8 are Asian, and the remaining images are from subjects of Hispanic background.

Fig. 8. (a) Ultra-widefield HDR image by merging seven HDR images together. (b) Representative image taken with Optomap.

4. Discussion

In summary, we have demonstrated a portable widefield fundus camera with nonmydriatic HDR imaging capability. The HDR fundus camera employs miniaturized indirect ophthalmoscopy illumination to enable a portable design for widefield fundus photography. NIR light was used for image focusing and guidance. Pulsed visible-light illumination with flexible power control allowed rapid, nonmydriatic imaging within the time window before the visible illumination caused a pupillary response.

To overcome the FOV limitation of traditional fundus cameras with transpupillary illumination, trans-pars-planar and trans-palpebral illumination have been demonstrated to increase the FOV up to the ora serrata, i.e., the far edge of the retina [27–30]. Nevertheless, the illumination efficiency depends on the light wavelength, the illumination location, and the pigmentation level of the subject. Further investigation is required to standardize the trans-pars-planar and trans-palpebral illumination and imaging protocols for clinical deployment. Miniaturized indirect illumination has been demonstrated as an alternative illumination strategy for developing widefield portable fundus cameras. However, the illumination reflectance artifact and light efficiency inhomogeneity limit the fundus image contrast. By rotating the ophthalmic lens, Toslak et al. [17] captured two frames with lens artifacts at different locations and merged them to obtain an artifact-free image. The moving components used in that system hindered its application as a portable device. In this study, we employed orthogonal polarization control of the illumination and imaging paths to eliminate the back-reflected light. Since there was no moving component in the system, a portable fundus camera design was readily achieved with simplified indirect illumination.

HDR imaging is well established in the field of digital imaging for expanding the dynamic range and enhancing the contrast of images. For example, most smartphone cameras offer an HDR imaging option. However, HDR imaging has not been previously reported in fundus cameras. In ordinary imaging situations, the exposure times can be flexibly controlled to optimize the visibility of low- and high-brightness components in sub-frames for subsequent HDR processing. For nonmydriatic fundus imaging, however, the available acquisition time is limited by the pupillary response caused by visible-light illumination. In this study, we combined NIR light guidance and rapid visible-light power control to meet the requirement of nonmydriatic fundus imaging. With the demonstrated HDR function, the portable widefield fundus camera showed superior capability to reveal pathological markers such as microaneurysms caused by DR (Fig. 6), and pigment clumping and RPE atrophy caused by AMD (Fig. 7), compared to the commercial portable fundus camera Volk Pictor Plus (Volk, Mentor, OH). It should be noted that the Volk Pictor Plus uses a broadband white light source. Light attenuation due to scattering and absorption in biological tissues is known to be wavelength dependent. For fundus imaging, short-wavelength light, such as blue or green, has a much lower efficiency than red light with a relatively long wavelength [28]. Therefore, the images taken with the Volk Pictor Plus are consistently red dominant (Fig. 6(b) and Fig. 7(b)). In contrast, the light source used in our prototype system is an LED with a center wavelength at 565 nm (M565D2, Thorlabs Inc., Newton, NJ, USA) and a 104 nm bandwidth (FWHM). In other words, the LED in the prototype system is green dominated to optimize the visibility of retinal structures [27].

For comprehensive eye examination, fundus imaging of both the central and peripheral regions is important. Our portable HDR fundus camera has a 101° eye-angle (67° visual-angle) snapshot FOV for nonmydriatic fundus photography. In coordination with a fixation target, the FOV could be extended up to 190° eye-angle (134° visual-angle) horizontally and 146° eye-angle (100° visual-angle) vertically to visualize the retinal periphery. We conducted comparative imaging and analysis with the portable HDR fundus camera and the ultra-widefield SLO Optomap imager to evaluate the FOV and image contrast. As shown in Fig. 8, the portable HDR fundus camera showed similar imaging performance, and even better capability to reach the peripheral fundus in the vertical direction. Widefield SLOs are well established for improved clinical management of eye diseases compared to traditional fundus cameras. However, their high cost and bulky design make them challenging for telemedicine applications, which are particularly important for rural and underserved areas with limited access to experienced ophthalmologists and expensive medical devices. A low-cost, portable, widefield fundus camera may offer a unique opportunity to foster telemedicine ophthalmology and reduce disparities of medical care in rural and underserved areas.

Although the HDR function is demonstrated here with a lab prototype nonmydriatic device, we anticipate that the same HDR function can be implemented in clinical fundus cameras. In principle, multiple images with different light exposures are required to selectively capture the details in different fundus regions for subsequent HDR processing. For nonmydriatic fundus photography, all images should be captured before the pupillary response caused by visible-light illumination. For mydriatic fundus photography, the required image set can be readily acquired with either exposure time or illumination power control, and, after image registration, the HDR fundus image can be generated.

5. Conclusion

A portable, widefield fundus camera with nonmydriatic HDR imaging capability has been demonstrated and validated with both normal healthy and pathologic eyes. Miniaturized indirect ophthalmoscopy illumination was employed to achieve a wide FOV, and orthogonal polarization control was used to eliminate illumination reflectance artifacts. Flash-bracketed image acquisition and HDR processing were validated for high-contrast fundus imaging. The portable HDR fundus camera provided superior capability to reveal pathological markers compared to a conventional fundus camera. Because of the contrast improvement from the HDR function, the fundus image contrast is comparable to that of the SLO Optomap imager. This portable, widefield, nonmydriatic HDR fundus camera promises a unique solution to facilitate affordable telemedicine.

Funding

Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago; Research to Prevent Blindness; National Eye Institute (P30 EY001792, R01 EY023522, R01 EY029673, R01 EY030101, R01 EY030842, R44 EY028786).

Disclosures

Patent applications for wide field fundus illumination and photography.

Data availability

All data for supporting the conclusion have been included in the manuscript. Other data may be obtained from the authors upon reasonable request.

References

1. Z. L. Teo, Y. C. Tham, M. Yu, M. L. Chee, T. H. Rim, N. Cheung, M. M. Bikbov, Y. X. Wang, Y. Tang, Y. Lu, I. Y. Wong, D. S. W. Ting, G. S. W. Tan, J. B. Jonas, C. Sabanayagam, T. Y. Wong, and C. Y. Cheng, “Global prevalence of diabetic retinopathy and projection of burden through 2045: systematic review and meta-analysis,” Ophthalmology 128(11), 1580–1591 (2021). [CrossRef]  

2. K. C. Allison, D. Patel, and O. Alabi, “Epidemiology of glaucoma: the past, present, and predictions for the future,” Cureus 12, e11686 (2020). [CrossRef]  

3. W. L. Wong, X. Su, X. Li, C. M. Cheung, R. Klein, C. Y. Cheng, and T. Y. Wong, “Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis,” Lancet Global Health 2(2), e106–e116 (2014). [CrossRef]  

4. N. Quinn, L. Csincsik, E. Flynn, C. A. Curcio, S. Kiss, S. R. Sadda, R. Hogg, T. Peto, and I. Lengyel, “The clinical relevance of visualising the peripheral retina,” Prog. Retinal Eye Res. 68, 83–109 (2019). [CrossRef]  

5. X. Yao, T. Son, and J. Ma, “Developing portable widefield fundus camera for teleophthalmology: technical challenges and potential solutions,” Exp. Biol. Med. 247(4), 289–299 (2022). [CrossRef]  

6. A. Nagiel, R. A. Lalane, S. R. Sadda, and S. D. Schwartz, “Ultra-widefield fundus imaging: a review of clinical applications and future trends,” Retina 36(4), 660–678 (2016). [CrossRef]  

7. P. Ackland, S. Resnikoff, and R. Bourne, “World blindness and visual impairment: despite many successes, the problem is growing,” Community Eye Health 30, 71–73 (2017).

8. K. Tran, T. A. Mendel, K. L. Holbrook, and P. A. Yates, “Construction of an inexpensive, hand-held fundus camera through modification of a consumer “point-and-shoot” camera,” Invest. Ophthalmol. Visual Sci. 53(12), 7600–7607 (2012). [CrossRef]  

9. M. De Smet, “Handheld portable fundus imaging system and method,” (Google Patents, U.S. Patent Application No. 13/405,809.).

10. N. Panwar, P. Huang, J. Lee, P. A. Keane, T. S. Chuan, A. Richhariya, S. Teoh, T. H. Lim, and R. Agrawal, “Fundus photography in the 21st century—a review of recent technological advances and their implications for worldwide healthcare,” Telemedicine and e-Health 22(3), 198–208 (2016). [CrossRef]  

11. E. DeHoog and J. Schwiegerling, “Optimal parameters for retinal illumination and imaging in fundus cameras,” Appl. Opt. 47(36), 6769–6777 (2008). [CrossRef]  

12. D. A. Salz and A. J. Witkin, “Imaging in diabetic retinopathy,” Middle East Afr. J Ophthalmol 22(2), 145–150 (2015). [CrossRef]  

13. L. A. Remington, “Retina,” in Clinical Anatomy and Physiology of the Visual System (Third Edition), L. A. Remington, ed. (Butterworth-Heinemann, 2012), Chapter 4, pp. 61–92.

14. A. Youssif, A. Ghalwash, and A. Ghoneim, “A comparative evaluation of preprocessing methods for automatic detection of retinal anatomy,” in Proceedings of the Fifth International Conference on Informatics and Systems (INFOS 07), pp. 24–30 (2007).

15. R. Kolar, J. Odstrcilik, J. Jan, and V. Harabis, “Illumination correction and contrast equalization in colour fundus images,” European Signal Processing Conference, 298–302 (2011).

16. S. K. Saha, D. Xiao, and Y. Kanagasingam, “A novel method for correcting non-uniform/poor illumination of color fundus photographs,” J Digit Imaging 31(4), 553–561 (2018). [CrossRef]  

17. D. Toslak, C. Liu, M. N. Alam, and X. Yao, “Near-infrared light-guided miniaturized indirect ophthalmoscopy for nonmydriatic wide-field fundus photography,” Opt. Lett. 43(11), 2551–2554 (2018). [CrossRef]  

18. D. Toslak, A. Ayata, C. Liu, M. K. Erol, and X. Yao, “Wide-field smartphone fundus video camera based on miniaturized indirect ophthalmoscopy,” Retina 38(2), 438–441 (2018). [CrossRef]  

19. X. Yao, D. Toslak, T. Son, and J. Ma, “Understanding the relationship between visual-angle and eye-angle for reliable determination of the field-of-view in ultra-wide field fundus photography,” Biomed. Opt. Express 12(10), 6651–6659 (2021). [CrossRef]  

20. T. Alterini, F. Díaz-Doutón, F. Burgos-Fernández, L. González, C. Mateo, and M. Vilaseca, “Fast visible and extended near-infrared multispectral fundus camera,” J. Biomed. Opt. 24, 096007 (2019). [CrossRef]  

21. E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting (Morgan Kaufmann, 2010).

22. S. Mann and R. Picard, “On being ‘undigital’ with digital cameras: extending dynamic range by combining differently exposed pictures,” Proc. IS & T Annual Meeting 48 (1996).

23. T. Mitsunaga and S. K. Nayar, “Radiometric self calibration,” in Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149) (1999), Vol. 1, pp. 374–380.

24. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in ACM SIGGRAPH 2008 Classes (2008), pp. 1–10.

25. A. A. Bell, D. Meyer-Ebrecht, A. Bocking, and T. Aach, “HDR-microscopy of cell specimens: imaging and image analysis,” 2007 Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers, 1303–1307 (2007).

26. International Organization for Standardization, “Ophthalmic instruments—fundamental requirements and test methods—part 2: light hazard protection,” (ISO, Geneva, Switzerland, 2007).

27. D. Toslak, T. Son, M. K. Erol, H. Kim, T.-H. Kim, R. V. P. Chan, and X. Yao, “Portable ultra-widefield fundus camera for multispectral imaging of the retina and choroid,” Biomed. Opt. Express 11(11), 6281–6292 (2020). [CrossRef]  

28. T. Son, J. Ma, D. Toslak, A. Rossi, H. Kim, R. V. P. Chan, and X. Yao, “Light color efficiency-balanced trans-palpebral illumination for widefield fundus photography of the retina and choroid,” Sci. Rep. 12(1), 13850 (2022). [CrossRef]  

29. B. Wang, D. Toslak, M. N. Alam, R. V. P. Chan, and X. Yao, “Contact-free trans-pars-planar illumination enables snapshot fundus camera for nonmydriatic wide field photography,” Sci. Rep. 8(1), 8768 (2018). [CrossRef]  

30. D. Toslak, F. Chau, M. Erol, C. Liu, R. Chan, T. Son, and X. Yao, “Trans-pars-planar illumination enables a 200° ultra-wide field pediatric fundus camera for easy examination of the retina,” Biomed. Opt. Express 11(1), 68 (2020). [CrossRef]  
