
Spatial loss factor for the analysis of accommodation depth cue on near-eye light field displays

Open Access

Abstract

To address the vergence-accommodation conflict problem, a generalized model combining the human visual system with a 4D light field display system is proposed. The model includes the key factors of the light field display system, such as retinal resolution, the central depth plane (CDP), and the proposed spatial loss factor. Based on this model, the spatial resolution of the target plane within the depth of field (DOF) is quantitatively evaluated. The results show that the inconsistency of spatial resolution across the DOF leads to an unstable eye accommodation response. Based on the foveal resolution limit, the resolution of the perceived retinal images and the accommodation response are evaluated and simulated in terms of the spatial loss factor. The simulation results verify that a near-eye light field display (NE-LFD) configuration with a spatial loss factor greater than 0.8 for 2 by 2 views, or a minimum spatial loss factor of 0.6 for 3 by 3 views, is able to render nearly correct focus cues and accommodation response.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Physiological cues, such as binocular disparity, motion parallax, and accommodation, help humans perceive real depth information from a natural scene. Researchers have tried to imitate this process and to evaluate 3D visual perception for near-eye light field displays (NE-LFDs). For example, a 3D scene can be rendered by a pair of two-dimensional (2D) perspective images [1] with binocular disparities, or by three-dimensional (3D) stereo images [2] with correct or nearly correct focus cues. The perceived retinal images are reconstructed by gathering these viewpoints and fusing them in the brain. Although such devices can provide compelling depth perception, their common limitation is the lack of accurate monocular focus cues, accommodation response, and retinal blur. This leads to the well-known vergence-accommodation conflict (VAC) problem and to visual fatigue [3–6]. Eye vergence [7] refers to the ability of the human visual system to change gaze direction and hold the perceived image of a target object in the real world on the fovea through the eye movement system. Eye accommodation, on the other hand, refers to the ability of the eye to focus on a fixated object and form a sharp retinal image by changing the refractive power of the crystalline lens through contraction and relaxation of the ciliary muscles. Moreover, the retinal image appears blurry when the eye’s accommodation distance differs from the depth of the object.

To focus on a fixated object, two or more beams of light carrying different perspective views of the object must stimulate the retina through the pupil. Moreover, the angular separation of these perspective views should be around 0.2°∼0.4° to ensure a sufficiently high angular resolution to support the accommodation response [8]. Holographic displays [9–11] can reconstruct the accurate wavefront of a real object via phase and amplitude modulation, but such systems demand complex computation and optical elements, which works against the miniaturization of 3D near-eye displays (3D-NEDs). For multi-focal-plane displays, including additive [12,13] and multiplicative displays [14,15], the DOF of the virtual object is severely limited by the gaps between adjacent display units. Note that a large DOF and powerful modulation of light rays can effectively suppress the VAC problem. The light field display method based on a micro-lens array (MLA) is considered one of the most feasible approaches for 3D display. However, the optical efficiency and bandwidth of an MLA or micro-pinhole array (MPA) are limited.

In this paper, a spatial loss factor is proposed for the analysis of the accommodation depth cue on near-eye light field displays. The resolution of the perceived retinal images and the accommodation response are evaluated and simulated. To the best of our knowledge, this is the first analysis of the monocular depth perception of near-eye light field displays based on a spatial loss factor. Our primary technical contributions are:

  • A generalized model coupling the human visual system with a 4D light field display system is built around the proposed spatial loss factor. It incorporates all key factors, including retinal resolution, the central depth plane (CDP), and the angular and spatial resolution of the light field system.
  • The spatial resolution of the target light field rendered by an NE-LFD is quantitatively evaluated. The results show that the inconsistency of spatial resolution across the depth of field (DOF) leads to an unstable eye accommodation response. Based on this analysis, the spatial loss factor is defined for the first time to evaluate the rendering capability of an NE-LFD.
  • The resolution of the perceived retinal images and the accommodation response are evaluated and simulated with the generalized model. The relationship between the resolvable spot size of the perceived images and the key system parameters is fully evaluated by comparing the light distribution on the retina for different spatial loss factors.
This paper is organized as follows. Section 2.1 describes the generalized model of near-eye light field displays. Section 2.2 focuses on the model-based view sampling optimization. Section 3 and Section 4 present the simulation setup and results, and the conclusions, respectively.

2. Analysis of near-eye light field displays based on visual characteristics of retina

To overcome the VAC problem and provide a comfortable viewing experience, the relationship between the resolvable spot size of the perceived retinal images and the key system parameters is fully evaluated and optimized.

2.1 Generalized model of near-eye light field displays

In an ideal NE-LFD, the display system provides sufficiently high angular and spatial resolution to render the target light field with correct focus cues. Figure 1(a) shows a schematic layout of a generalized monocular NE-LFD, which mainly consists of the visual characteristics of the retina and the CDP rendered by the modulated display system. In natural vision, for a sharply perceived retinal image, the depth of the accommodation response must be close to the simulated distance of the target point in the real world, as shown in Fig. 1(b). For a conventional near-eye display, however, the virtual images are rendered by the display system within a limited depth range, or even on a 2D flat surface at the CDP, as shown by the orange line in Fig. 1(c). The stimuli for accommodation may therefore conflict when the eye tries to focus on virtual images in front of or behind the CDP. Figure 1(d) shows that a multi-layer display, which consists of multiple transparent liquid crystal panels or multiple display units reflected through half mirrors, can provide multiple depth planes (the red solid line) close to the natural response (black dotted line), but the stepped depth planes lead to fragmentation of the target light field. An integral imaging (II)-based light field display, which can render a continuous depth of field, has the ability to provide nearly correct focus cues. However, the depth of the accommodation response is not perfectly consistent with the distance of the target image. The offset between the two curves is the accommodation response error, defined as the mismatch between the actual accommodation response and the accommodation cue rendered by the NE-LFD [3], as shown in Fig. 1(e). The threshold of acceptable accommodation error is below 0.3 diopters in theoretical analysis [16,17]. Figure 1(f) shows that the accommodation error is related to the shift depth Δz of the target image from the CDP and to the view density: the error decreases both as the absolute value of the shift depth Δz decreases and as the view density increases. However, the contrast of the perceived retinal images is negatively related to the view density [6], as shown in Fig. 1(g). A systematic modeling and optimization of near-eye light field displays based on the human visual system is therefore indispensable to properly alleviate the VAC problem and create a comfortable viewing experience.

Fig. 1. (a) A schematic layout of a generalized monocular NE-LFD, which mainly consists of the visual characteristics of the retina and the CDP rendered by the modulated display system, (b) natural vision, (c) conventional near-eye display, (d) multi-layer display, (e) integral imaging (II)-based light field display, (f) accommodation error versus shift depth, (g) contrast of perceived images versus shift depth.


Based on the above analysis, a generalized model that simulates the formation of the perceived retinal image and the accommodation response is built from the 4D light field function L(x, y, u, v), as shown in Fig. 2. A display system is placed at a distance de in front of the pupil D(u, v); it consists of a screen panel with pixel size P and a modulation layer (such as a micro-lens array (MLA) or micro-pinhole array (MPA)) with pitch Wl, separated by a gap dl. For this configuration, the magnified pixel size on the CDP (x, y) is PCDP, and the pixel size on a plane shifted by a depth Δz from the CDP is PΔz; they are calculated by Eq. (1) and Eq. (2), respectively. The resolution limits of the display system lead to mis-sampling and aliasing effects in conventional NE-LFD systems.

$$p_{CDP} = \frac{z_{CDP}}{d_l}\,p$$
$$p_{\Delta z} = \begin{cases} \dfrac{w_l}{z_{CDP}}\Delta z + \dfrac{z_{CDP} + \Delta z}{z_{CDP}}MP, & \Delta z \le 0\\[6pt] \dfrac{w_l}{z_{CDP}}\Delta z + \dfrac{z_{CDP} - \Delta z}{z_{CDP}}MP, & \Delta z > 0 \end{cases}$$
where M is the system magnification at the CDP. If the spatial resolution of the target plane is defined as the number of magnified pixels in the field of view, the spatial loss factor $\eta$ is obtained by dividing NΔz on the plane shifted by Δz from the CDP by Np on the CDP:
$$\eta = \frac{N_{\Delta z}}{N_p} = \frac{\dfrac{2 z_{\Delta z}\tan(\alpha/2)}{p_{\Delta z}}}{\dfrac{2 z_{CDP}\tan(\alpha/2)}{p_{CDP}}} = \frac{z_{\Delta z}\, p_{CDP}}{z_{CDP}\, p_{\Delta z}}$$
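For concreteness, Eqs. (1)–(3) can be transcribed directly into code. The sketch below is a minimal numerical transcription: the pixel size, lenslet pitch, gap, and CDP distance are placeholder values chosen only to exercise the formulas (they are not the paper's configuration), and the sign convention for Δz follows Eq. (2) as printed.

import numpy as np

def p_cdp(p, z_cdp, d_l):
    """Eq. (1): magnified pixel size on the CDP."""
    return z_cdp / d_l * p

def p_dz(dz, p, z_cdp, d_l, w_l):
    """Eq. (2): pixel footprint on a plane shifted by dz from the CDP."""
    M = z_cdp / d_l                      # system magnification at the CDP
    if dz <= 0:
        return w_l / z_cdp * dz + (z_cdp + dz) / z_cdp * M * p
    return w_l / z_cdp * dz + (z_cdp - dz) / z_cdp * M * p

def spatial_loss_factor(dz, p, z_cdp, d_l, w_l):
    """Eq. (3): eta = (z_dz * p_CDP) / (z_CDP * p_dz)."""
    z_dz = z_cdp + dz
    return z_dz * p_cdp(p, z_cdp, d_l) / (z_cdp * p_dz(dz, p, z_cdp, d_l, w_l))

# Illustrative sweep of the depth shift dz (in mm) around an assumed CDP at 1 m.
p, d_l, w_l, z_cdp = 0.12, 3.0, 1.2, 1000.0          # mm (placeholder values)
for dz in np.linspace(-300.0, 300.0, 7):
    eta = spatial_loss_factor(dz, p, z_cdp, d_l, w_l)
    print(f"dz = {dz:+6.1f} mm -> eta = {eta:.3f}")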

Fig. 2. Schematic of the configuration of the proposed generalized model.


The spatial loss factor can be regarded as a scaling factor: it defines the ratio of the footprint of each elemental view to the view pitch. Figure 3 plots the spatial loss factor $\eta$ as a function of the depth shift of the reconstruction from the CDP over −1 D to 1 D; the depth of the target images was shifted from the CDP within ±1 diopter in increments of 0.2 diopters. The red line shows that the virtual image is reconstructed with the highest resolution when it is rendered at the CDP, as illustrated by the upper disk in Fig. 3. The spatial resolution is reduced to 60 percent of that on the CDP when the depth of the target image is shifted by 0.65 D from the CDP, as illustrated by the lower disk. In theory, retinal blur varies consistently with changes in scene depth, and the maximum contrast of the perceived image is achieved when the actual accommodation depth equals the depth of the target object. In a near-eye light field display system, however, the spatial resolution of the target image on the CDP is clearly superior to that on planes away from the CDP, and this inconsistency of spatial resolution across the DOF leads to an unstable eye accommodation response. When an NE-LFD renders focus cues for a target point located far from the CDP, this phenomenon pulls the accommodative response toward the CDP, which also results in a limited depth range and eye accommodation error.

Fig. 3. Plot of the spatial loss factor as a function of the depth shift of reconstruction from CDP from −1D to 1D.


Based on the generalized model of the near-eye 4D light field display system, a set of viewpoints must be distributed within the pupil [8] in order to address the VAC problem and create a comfortable viewing experience. We therefore assume that the total number of viewpoints (NOV) is N, and the 2D slice of the 4D light field function L(x, y, u, v) can be expressed as L(x, u), where the parameters u and x are calculated from [6] as

$$u_n = \left(n - \frac{N-1}{2}\right)\Delta u$$
$$x_n = \frac{d_m\,\Delta z + h\, z_{CDP}}{z_{CDP} + \Delta z}$$
where n denotes the nth viewpoint and Δu is the displacement between two footprints of elemental views on the pupil. A spatial loss factor of 1 corresponds to the ideal state in which there is no loss of the spatial resolution rendered by the NE-LFD on the CDP within ±1 D. In other words, the NE-LFD could provide correct focus cues without accommodation error, and the target light field would be rendered by collimated beams. However, this is impossible, or exceedingly difficult, for a conventional NE-LFD with a fixed CDP because of diffraction and the limitations of the display device. Furthermore, increasing the spatial loss factor directly changes the effective working NA of the NE-LFD, which increases the spatial resolution and the depth of field at the expense of the brightness and angular resolution of the system. We assume that the eyeball is an ideal imaging system. The stimulus location Sn on the retina of the ray that emits from position x on the CDP and passes through un on the pupil is calculated by
$$S_n = u_n - T_{retina}\, T_{pupil} \begin{bmatrix} u_n \\ \alpha_n \end{bmatrix}$$
where Tpupil and Tretina are the transfer matrices of the crystalline lens and the vitreous humor, respectively, and ${\alpha _n}$ is the incident angle of the nth viewpoint, calculated from Eq. (4) and Eq. (5). The resolvable spot size on the retina (RSSR) of the target point shifted by Δz from the CDP is then calculated by
$$S_{\Delta z} = \left( \max(S_n) - \min(S_n) \right)\Big|_{\,n \in [1, N],\ \Delta u \in [0, D]}$$
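The chain from Eq. (4) to Eq. (7) can be sketched with a simple paraxial model. In the sketch below, the eye is reduced to a thin lens plus a fixed lens-to-retina distance, so the transfer operators Tpupil and Tretina become elementary ray-transfer steps; this is an assumption for illustration only, since the simulations in Section 3 use the Arizona eye model. The symbol d_m in Eq. (5) is read here as the viewpoint coordinate u_n and h as the lateral position of the target point, which is our interpretation of symbols not defined in the text.

import numpy as np

D_RETINA_MM = 17.0   # assumed reduced-eye lens-to-retina distance (paraxial sketch)

def viewpoints(n_views, delta_u):
    """Eq. (4): viewpoint positions u_n across the pupil."""
    n = np.arange(n_views)
    return (n - (n_views - 1) / 2.0) * delta_u

def rssr(n_views, delta_u, z_cdp, dz, z_acc, h=0.0, d_retina=D_RETINA_MM):
    """Eqs. (5)-(7): retinal spread (RSSR) of a target point at lateral position h
    and depth z_cdp + dz, rendered by n_views viewpoints on the CDP and viewed
    by a thin-lens eye accommodated to distance z_acc (distances in mm)."""
    u = viewpoints(n_views, delta_u)
    # Eq. (5): each view draws the target point at x_n on the CDP
    # (d_m is taken to be the viewpoint position u_n -- an assumption).
    x = (u * dz + h * z_cdp) / (z_cdp + dz)
    # Paraxial incident angle of each chief ray at the pupil.
    alpha = (u - x) / z_cdp
    # Thin-lens eye accommodated to z_acc: 1/f = 1/d_retina + 1/z_acc.
    f = 1.0 / (1.0 / d_retina + 1.0 / z_acc)
    # Eq. (6) analogue: refraction at the lens, then propagation to the retina.
    s = u + d_retina * (alpha - u / f)
    # Eq. (7): resolvable spot size on the retina.
    return s.max() - s.min()

# Example: CDP at 1 D (1000 mm), target 100 mm beyond the CDP, 3 viewpoints
# spaced 2 mm apart across an assumed 4 mm pupil.
print(rssr(n_views=3, delta_u=2.0, z_cdp=1000.0, dz=100.0, z_acc=1000.0))  # eye on CDP
print(rssr(n_views=3, delta_u=2.0, z_cdp=1000.0, dz=100.0, z_acc=1100.0))  # eye on target

As expected, the spot collapses when the assumed eye accommodates at the rendered target depth, because Eq. (5) forces all elemental rays through that point.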
The perceived retinal image quality of an NE-LFD can be estimated from the resolvable spot size on the retina for a specific configuration. It should be noted that the angular resolution of around 10∼15 pixels per degree (ppd) provided by most current commercial VR/AR displays is far below the visual acuity of a normal person with 20/20 vision, which is about 1 arcmin, namely 60 ppd, in the highest-acuity part of the fovea region (∼±5°) [18,19]. This makes such near-eye displays unable to provide accurate focus cues and accommodation response. The relationship between the resolvable spot size of the perceived retinal images and the key system parameters should therefore be fully evaluated.

2.2 Evaluation results

Based on the above analysis, the number of viewpoints, the depths of the target point and of accommodation, and the location of the CDP were evaluated in Fig. 4. As discussed in Section 2.1, the RSSR was evaluated as a function of the depth shift of accommodation from the CDP with eye relief de = 40 mm, pixel size P = 0.12 mm, and MLA pitch Wl = 10P. The CDP was 1 D away from the pupil, and the depth of the target point was shifted by 100 mm from the CDP. Figure 4(a) plots the RSSR as a function of the number of viewpoints (i.e., 1 by 1, 2 by 2, 3 by 3, 4 by 4, and 5 by 5) when the eye accommodation depth was shifted from the CDP within ±200 mm. The blue curve shows the target point rendered by one viewpoint, with a peak at the CDP; this means that there is only one fixed target plane and the system cannot provide focus cues for the target point. However, when the number of viewpoints increases from one to two or more, the NE-LFD has the ability to render nearly correct focus cues. Since the target point was rendered 100 mm away from the CDP, the maximum contrast values were achieved when the eye accommodated at the depth of the target point. From the RSSR variations, it can be seen that: 1) the differences are inconspicuous when the number of viewpoints exceeds 3 by 3, so we focus on two and three viewpoints in the following discussion; 2) the decay of the RSSR between the CDP and the target depth is slightly slower than elsewhere, because the spatial resolution of the target image on the CDP is clearly better than that away from the CDP, as discussed in Section 2.1.
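As a rough cross-check of the trends in Fig. 4(a), the rssr() helper from the sketch in Section 2.1 can be swept over the number of viewpoints and the accommodation depth. It uses the simplified paraxial eye rather than the paper's model, so the absolute values are illustrative only; a single chief ray per view also cannot reproduce the single-viewpoint (blue) curve, so the sweep starts at 2 by 2 views.

# Illustrative sweep analogous to Fig. 4(a): target point 100 mm beyond a 1 D CDP,
# accommodation stepped around the CDP. Requires rssr() from the earlier sketch.
for n_views in (2, 3, 4, 5):                      # viewpoints per dimension
    for acc_shift_mm in (-200, -100, 0, 100, 200):
        spot = rssr(n_views=n_views, delta_u=4.0 / n_views,   # assumed 4 mm pupil
                    z_cdp=1000.0, dz=100.0, z_acc=1000.0 + acc_shift_mm)
        print(f"{n_views}x{n_views} views, accommodation {acc_shift_mm:+4d} mm: "
              f"RSSR = {spot:.4f} mm")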

Fig. 4. Plot of the RSSR as a function of (a) the number of viewpoints, from 1 by 1 to 5 by 5; (b) five reconstructed depths Δz: −100 mm, −50 mm, 0 mm, 50 mm, and 100 mm; (c) the depth shift of the reconstruction from the CDP, for CDP at 4 D, 2 D, 1 D, 0.5 D, and 0 D; and (d) the depth shift of accommodation from the CDP, for CDP at 1 D, 0.5 D, 0.3 D, 0.25 D, and 0.2 D.


Figure 4(b) plots the RSSR for five values of Δz: −100 mm, −50 mm, 0 mm, 50 mm, and 100 mm. The black curve marks the location where the maximum contrast is achieved for each of the target points; image contrast decreases as the number of viewpoints increases, and images rendered with one viewpoint generally have the highest resolution. Figure 4(c) plots the RSSR as a function of the depth shift of the target points from 0 to 5 m for CDP = 4 D, 2 D, 1 D, 0.5 D, and 0 D, respectively. It can be seen that the RSSR falls off much faster in front of the CDPs than behind them, which is similar to the results in [20]. However, in contrast to Ref. [13], we obtain four non-zero peaks among the five curves: 0.0223 mm, 3.4828e-08 mm, 3.8875e-09 mm, 4.5844e-10 mm, and 0 mm, respectively. More obvious results are shown in Fig. 4(d). This is because the expansion angle of the conical ray bundle emitted from the magnified elemental image decreases as the distance of the CDP increases.

According to human visual physiology data [21–24], the retinal fovea is the region responsible for sharp central vision. For clear, acute vision, the stimulation from the target object must be held steadily on this region of the retina through the pupil. Visual acuity along the horizontal direction is highest in the fovea region (around ±5°), up to 1 arcmin, corresponding to an RSSR of 0.005 mm, and it decreases by 50% if the stimulation moves 5 degrees away from the fovea center. Meanwhile, along the visual axis, the depth threshold of acceptable accommodation error is 6.4 mm for binocular vision and 46.14 mm for monocular vision. Therefore, the key system parameters can be optimized based on the resolution limit of the retinal fovea along the horizontal and visual axis directions. As seen in Fig. 4(a), the decay of the RSSR between the CDP and the target depth is slightly slower than elsewhere. The retinal fovea can therefore effectively distinguish any two points along the visual axis if it can distinguish the extreme case of two points displaced 0 mm and 46.14 mm from the CDP, respectively.
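The 0.005 mm figure quoted above follows from the 1 arcmin acuity limit once a retinal distance scale is assumed. A quick check, assuming a reduced-eye nodal-point-to-retina distance of about 17 mm (our assumption, not a value stated in the paper):

import math

nodal_to_retina_mm = 17.0                  # assumed reduced-eye value
one_arcmin_rad = math.radians(1.0 / 60.0)
spot_mm = nodal_to_retina_mm * math.tan(one_arcmin_rad)
print(f"1 arcmin on the retina ~ {spot_mm:.4f} mm")   # ~0.005 mm, the RSSR threshold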

To examine the effects of the spatial loss factor η and the number of viewpoints on the accommodation response, the RSSR was evaluated as a function of the depth shift of accommodation from the CDP with eye relief de = 40 mm, CDP = 1 D, and Δz = 46.14 mm. The spatial loss factors were set to 0.9, 0.8, 0.7, 0.6, and 0.5, respectively. Here we set a threshold of 0.005 mm (the yellow box in Fig. 5) for the resolvable spot size on the retina. The results in Fig. 5 show that the RSSR of the perspective images on the retina is significantly reduced as the spatial loss factor increases. Meanwhile, in an NE-LFD with 2×2 views, by improving the spatial loss factor up to 0.9, the target point displaced 46.14 mm from the CDP can be distinguished from a point at depth zCDP, as shown in Fig. 5(a). For 3×3 views, in Fig. 5(b), these two points can be effectively distinguished with a spatial loss factor as low as 0.6. In this case, a near-eye light field display (NE-LFD) configuration with a spatial loss factor greater than 0.8 for 2 by 2 views, or a minimum spatial loss factor of 0.6 for 3 by 3 views, has the ability to render nearly correct focus cues and accommodation response.
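The outcome of this evaluation can be stated as a simple design rule. The helper below merely encodes the thresholds reported above (spatial loss factor above 0.8 for 2 by 2 views, at least 0.6 for 3 by 3 views); it is a convenience wrapper around the stated result, not an independent model.

def renders_correct_focus_cues(views_per_dim: int, eta: float) -> bool:
    """Check an NE-LFD configuration against the thresholds reported in Fig. 5."""
    if views_per_dim == 3:
        return eta >= 0.6        # 3x3 views: minimum spatial loss factor of 0.6
    if views_per_dim == 2:
        return eta > 0.8         # 2x2 views: spatial loss factor must exceed 0.8
    raise ValueError("Thresholds are only reported for 2x2 and 3x3 views.")

print(renders_correct_focus_cues(3, 0.65))   # True
print(renders_correct_focus_cues(2, 0.70))   # False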

Fig. 5. Plot of RSSR as a function of the depth shift of accommodation from CDP from 0 mm to 50mm, corresponding to (a) 2 by 2 views and (b) 3 by 3 views.


3. Simulation setup and results

To demonstrate the feasibility of the proposed generalized model based on the spatial loss factor in Fig. 2, we simulated the perceived retinal image of an NE-LFD in ZEMAX [25], which is based on ray tracing and capable of modulation transfer function (MTF) analysis, image simulation, and light-source modeling. For simplicity and without loss of generality, we set the CDP at 1 diopter and used a light source with three wavelengths: 470 nm, 555 nm, and 650 nm. The most important component of this generalized model is the human eye model, which must realistically simulate the human eye imaging process and change its optical properties according to the accommodation depth. The Arizona eye model was adopted from among several typical eye models [26,27]; it can simulate the accommodation response by adjusting structural parameters such as the curvature radius, conic constant, and refractive index. To account for the spatial loss factor effect, the overall specifications of the NE-LFD model are listed in Table 1.

Table 1. The structure parameters of this system

As discussed in Section 2.1 and Section 2.2, NE-LFDs with 3 by 3 views (results shown in Fig. 6) and 2 by 2 views (shown in Fig. 7) were simulated. The spatial resolution and the accuracy of the focus cues of an NE-LFD vary not only with the number of viewpoints but also with the chosen spatial loss factor. Furthermore, when the accommodation was nearer or farther than the target object, the perceived retinal image was blurred. When the accommodation was on the target object, there were two situations. In Situation I, the retinal image was in the best condition, with the highest image resolution and the sharpest image boundaries, when the target object was rendered on the CDP. In Situation II, the retinal image was also in good condition, though slightly blurred, when the target object was shifted a distance from the CDP. However, the spatial resolution of the NE-LFD at this position is theoretically worse than that on the CDP, as discussed for Eq. (3), which is why the retinal image quality in Situation II was worse than that in Situation I. This part therefore aims to find and verify the optimal spatial loss factor of NE-LFDs for 3 by 3 and 2 by 2 views, respectively.

Fig. 6. 3 by 3 views with spatial loss factor: 0.9, 0.8, 0.7, 0.6, and 0.5. (Unit is millimeter)


Fig. 7. 2 by 2 views with spatial loss factor: 0.9, and 0.8. (Unit is millimeter)


Based on the generalized model described above, Fig. 6 shows the retinal images of the target point rendered by the NE-LFD with 3×3 views, for spatial loss factors of 0.9, 0.8, 0.7, 0.6, and 0.5, respectively. The target depth was shifted −46.14 mm from the CDP, as shown in columns (a) and (b). In Situation II, when the accommodation was on the target object, as shown in column (a), the perceptual spot size on the retina deteriorated dramatically as the spatial loss factor decreased from 0.9 to 0.5. These perceptual spots were sharper than those in column (b), where the accommodation is on the CDP. However, the perceptual spots were obviously blurred when the spatial loss factor was less than or equal to 0.5. In Situation I, when the target point was rendered on the CDP, the perceptual spots (column (c)) were much sharper than those (column (d)) obtained when the accommodation was shifted −46.14 mm from the CDP.

In comparison, when the target point was rendered by the NE-LFD with 2×2 views, the results showed that the perceptual spots were obviously blurred when the spatial loss factor was less than or equal to 0.8, and the images on the CDP had, in all cases, the highest resolution due to the focused beams.

4. Conclusions

We proposed a generalized framework for the systematic modeling and optimization of near-eye light field displays based on the human visual system and the spatial loss factor. A generalized model consisting of the human visual system and a 4D light field display system is built, which incorporates all key factors, including retinal resolution, the central depth plane (CDP), and the angular and spatial resolution of the light field system. We also quantitatively evaluate the spatial resolution of target images located on and away from the CDP, which shows that the inconsistency of spatial resolution across the depth of field (DOF) leads to an unstable eye accommodation response. We further evaluate and simulate the resolution of the perceived retinal images and the accommodation response with the generalized model, based on the fovea resolution limit. The results show that a near-eye light field display (NE-LFD) configuration with a spatial loss factor greater than 0.8 for 2 by 2 views, or a minimum spatial loss factor of 0.6 for 3 by 3 views, is able to render nearly correct focus cues and accommodation response. Overall, this paper provides a generalized framework that can effectively evaluate the retinal image quality of NE-LFDs for designing and optimizing device parameters.

Funding

National Key R&D Program of China (2017YFB1002900); Fundamental Research Funds for the Central Universities (KYLX15_0212).

Acknowledgments

The authors gratefully acknowledge the participants in the user study and the anonymous reviewers for their constructive comments.

References

1. J. Zhao and J. Xia, “Virtual viewpoints target via Fourier slice transformation,” J. Soc. Inf. Disp. 26(8), 463–469 (2018). [CrossRef]  

2. J. Zhao, Q. Ma, J. Xia, J. Wu, B. Du, and H. Zhang, “Hybrid Computational Near-Eye Light Field Display,” IEEE Photonics J. 11(1), 1–10 (2019). [CrossRef]  

3. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

4. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital target,” Opt. Lett. 26(3), 157 (2001). [CrossRef]  

5. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004). [CrossRef]  

6. H. Huang and H. Hua, “Effects of ray position sampling on the visual responses of 3D light field displays,” Opt. Express 27(7), 9343–9360 (2019). [CrossRef]  

7. J. Iskander, M. Hossny, and S. Nahavandi, “A Review on Ocular Biomechanic Models for Assessing Visual Fatigue in Virtual Reality,” IEEE Access 6, 19345–19361 (2018). [CrossRef]  

8. Y. Takaki, “High-Density Directional Display for Generating Natural Three-Dimensional Images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

9. J. S. Lee, Y. K. Kim, and Y. H. Won, “See-through display combined with holographic display and Maxwellian display using switchable holographic optical element based on liquid lens,” Opt. Express 26(15), 19341–19355 (2018). [CrossRef]  

10. J. S. Lee, Y. K. Kim, and Y. H. Won, “Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display,” Opt. Express 26(2), 2149–2159 (2018). [CrossRef]  

11. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

12. S. Lee, J. Cho, B. Lee, Y. Jo, C. Jang, D. Kim, and B. Lee, “Foveated Retinal Optimization for See-Through Near-Eye Multi-Layer Displays,” IEEE Access 6, 2170–2180 (2018). [CrossRef]  

13. K. J. MacKenzie, D. M. Hoffman, and S. J. Watt, “Accommodation to multiple-focal-plane displays: Implications for improving stereoscopic displays and for accommodation control,” J Vis. 10(8), 22 (2010). [CrossRef]  

14. M. Liu, C. Lu, H. Li, and X. Liu, “Bifocal computational near eye light field displays and Structure parameters determination scheme for bifocal computational display,” Opt. Express 26(4), 4060–4074 (2018). [CrossRef]  

15. S. Liu, H. Hua, and D. Cheng, “A Novel Prototype for an Optical See-Through Head-Mounted Display with Addressable Focus Cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010). [CrossRef]  

16. F. W. Campbell and G. Westheimer, “Dynamics of accommodation responses of the human eye,” J. Physiol. 151(2), 285–295 (1960). [CrossRef]  

17. W. N. Charman and H. Whitefoot, “Pupil Diameter and Depth-of-field of Human Eye as Measured by Laser Speckle,” Opt. Acta 24(12), 1211–1216 (1977). [CrossRef]  

18. C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson, “Human Photoreceptor Topography,” J. Comp. Neurol. 292(4), 497–523 (1990). [CrossRef]  

19. G. Tan, Y. H. Lee, T. Zhan, J. Yang, S. Liu, D. Zhao, and S. T. Wu, “Foveated imaging for near-eye displays,” Opt. Express 26(19), 25076–25085 (2018). [CrossRef]  

20. Z. Qin, Z. Qin, P.-Y. Chou, J.-Y. Wu, Y.-T. Chen, C.-T. Huang, N. Balram, and Y.-P. Huang, “Image formation modeling and analysis of near-eye light field displays,” J. Soc. Inf. Disp. 27(4), 238–250 (2019). [CrossRef]  

21. H. Hua, “Enabling Focus Cues in Head-Mounted Displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

22. A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 1–12 (2016). [CrossRef]  

23. B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3D Graphics,” ACM Trans. Graph. 31(6), 1 (2012). [CrossRef]  

24. H. Strasburger, I. Rentschler, and M. Juttner, “Peripheral vision and pattern recognition: a review,” J Vis. 11(5), 13 (2011). [CrossRef]  

25. ZEMAX. Available: https://www.zemax.com/.

26. I. Escudero-Sanz and R. Navarro, “Off-axis aberrations of a wide-angle schematic eye model,” J. Opt. Soc. Am. A 16(8), 1881 (1999). [CrossRef]  

27. E. Greivenkamp, J. Schwiegerling, J. M. Miller, and M. D. Mellinger, “Visual acuity modeling using optical raytracing of schematic eyes,” Am. J. Ophthalmol. 120(2), 227–240 (1995). [CrossRef]  
