
Super multi-view near-eye display to solve vergence–accommodation conflict


Abstract

A super multi-view (SMV) technique is applied to near-eye displays to solve the vergence–accommodation conflict that causes visual fatigue. The proposed SMV near-eye display employs a high-speed spatial light modulator (SLM), a two-dimensional (2D) light source array, and imaging optics for each eye. The imaging optics produces a virtual image of the SLM and real images of the light sources to generate a 2D array of viewpoints. The SMV images are generated using a time-multiplexing technique: the multiple light sources sequentially emit light while the SLM synchronously displays the corresponding parallax images. A monocular experimental system was constructed using a ferroelectric liquid crystal display and an LED array. Full-parallax SMV image generation with 21 viewpoints was demonstrated, and a comparison of full-parallax and horizontal parallax SMV images is provided.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Virtual reality (VR) and augmented reality (AR) technologies have made significant progress in recent years. Head-mounted displays (HMDs) are visual interface devices used for both technologies and are now commonly called near-eye displays. Although HMDs were invented in the 1960s [1], their recent low-cost commercialization has accelerated their widespread use. However, conventional HMDs suffer from the vergence–accommodation (VA) conflict [2], which causes visual fatigue and discomfort. This study develops a new near-eye display based on the super multi-view (SMV) technique to solve the VA conflict.

The VA conflict has long been known to cause visual fatigue in stereoscopic displays. Vergence and accommodation are physiological cues that humans use to perceive depth: vergence perceives the depth of an object from the rotation angles of both eyes, and accommodation perceives depth from the focusing state of the eyes. When left and right images are displayed to the corresponding eyes, vergence correctly perceives the depth of a three-dimensional (3D) image. However, the eyes do not focus on the 3D image but on the display screen where the two images are displayed; thus, accommodation does not correctly perceive the depth of the 3D image. Because there is a close interaction between vergence and accommodation [3], this conflict leads to visual fatigue. The visual fatigue problem is more significant for near-eye displays [4]: for VR and AR applications, 3D images are often produced near the viewer, whereas the two images are usually projected several meters away from the eyes. Thus, the VA conflict becomes serious for near-eye displays.

Several techniques have been proposed to address the VA conflict in near-eye displays. The light field display technique [5–7] reproduces rays with a density sufficient to enable the eyes to focus on 3D images. Near-eye light field displays are constructed by combining a microlens array with a microdisplay [5,6] or by stacking multiple liquid crystal displays [7]. The multi-focal-plane display technique [8] produces multiple images at different depth positions using a deformable membrane mirror device. The adaptive focus technique [9] employs a focus-tunable lens between the eye and a microdisplay, whereby the focal length of the lens is dynamically changed. The accommodation-invariant display, which evokes disparity-driven accommodation rather than blur-driven accommodation, has also been proposed [10]; the image position is continuously varied using a focus-tunable lens to keep the blur on the retina unchanged even when the focus of the eyes changes. The holographic technique has also been used for constructing near-eye displays [11–13]; because the wavefronts emitted from objects are reconstructed, holographic images naturally support all depth cues of the human visual system, and the VA conflict does not occur.

SMV displays have been developed to solve the VA conflict [14–18]. The interval between viewpoints of the SMV displays is made smaller than the pupil diameter, allowing the eyes to focus on 3D images, thereby preventing the VA conflict. Accommodation responses to SMV displays have been measured, revealing that the SMV display technique can evoke the accommodation of eyes [19,20]. Most of the previously developed SMV displays provide only horizontal parallax because a large number of viewpoints are required to satisfy the SMV condition. For near-eye displays, because the viewpoints are generated only in the eye box areas, the required number of viewpoints is limited. Therefore, the SMV near-eye display developed in this study can produce full-parallax SMV images.

In this study, a time-multiplexing technique for constructing SMV near-eye displays is proposed, and an experimental system is constructed to verify it. Preliminary experimental results were previously reported in a conference paper [21], where only full-parallax SMV images with a viewpoint interval of 2 mm were shown. In this paper, a detailed explanation of the optical system of the proposed SMV near-eye display is added, together with an evaluation of the constructed experimental system, including the intensity distributions of the viewpoints and the image distortion. Full-parallax SMV image generation with a viewpoint interval of 4 mm and a discussion of the effects of the viewpoint interval are also added, as are the generation of SMV images with only horizontal parallax and a discussion of the differences between full-parallax and horizontal parallax SMV images. A near-eye display based on pupil-tracked light field projection, which scans the light convergence point on the pupil, was recently proposed in [22]. Although that technique is based on light field displays and ours is based on SMV displays, they share a common approach to addressing the VA conflict. However, whereas the display system proposed in [22] employed a laser projector and a scanning mirror and provided 3D images with only horizontal parallax, our display system requires no mechanical parts and provides 3D images with full parallax.

2. SMV displays

In this section, before explaining the proposed SMV near-eye display, the SMV display technique [14–18] is briefly explained.

As shown in Fig. 1, an SMV display generates dense viewpoints with an interval smaller than the pupil diameter of the eyes. When this SMV display condition is satisfied, two or more viewpoints exist in the pupil so that two or more rays passing through an identical point on the 3D image enter the pupil simultaneously. When the eye focuses on the display screen, as shown in Fig. 1(a), the rays do not converge on the retina. On the contrary, when the eye focuses on a point on the 3D image, as shown in Fig. 1(b), the rays converge on the retina. Therefore, the eyes can focus on 3D images, thereby preventing the VA conflict from occurring.


Fig. 1 SMV display technique: eyes focused at (a) display screen, and (b) 3D image.


Several types of SMV displays have been developed: the projection type with 64 viewpoints [14], the flat-panel type with 72 viewpoints [15], the multi-projection type with 256 viewpoints employing 16 lenticular displays [16], the flat-panel type with 16 viewpoints using an eye-tracking system [17], and the time-multiplexing type with 64 viewpoints using four high-speed projector units [18].

3. Proposed SMV near-eye display

The display system suitable for constructing an SMV near-eye display is considered. The projection type requires plural projectors for each eye, so the system size is increased. The flat-panel type provides only low-resolution 3D images because the 3D resolution is obtained by dividing the flat-panel resolution by the number of viewpoints. Therefore, this study employs a time-multiplexing technique for constructing the SMV near-eye display, making it possible to construct a compact and high-resolution display system. The previously developed SMV display using the time-multiplexing technique [18] provided SMV images with only horizontal parallax. The SMV near-eye display developed in this study can provide both full-parallax and horizontal parallax SMV images.

Figure 2 illustrates the proposed SMV near-eye display. It consists of a high-speed SLM, a two-dimensional (2D) light source array, and imaging optics for each eye. The imaging optics has two functions: virtual image generation and viewpoint generation. A virtual image of the SLM screen is produced by lens 1. When the focal length of lens 1 is denoted by f1, the SLM screen is placed at a distance less than f1 from it. For the generation of multiple viewpoints, the combination of lens 1 and lens 2 produces an image of the light source array in front of the pupil. When the focal length of lens 2 is denoted by f2, the distance between the two lenses is f1 + f2. The light source array is placed on the focal plane of lens 2, and multiple viewpoints are generated on the focal plane of lens 1. When one of the light sources emits light, the virtual image can be observed through the corresponding viewpoint. The multiple light sources emit light sequentially, and the corresponding parallax images are synchronously displayed on the SLM. By generating viewpoints with an interval smaller than the pupil diameter, SMV images are produced using a time-multiplexing technique. A half mirror is used for each imaging optics to provide the see-through function.


Fig. 2 Schematic diagram of a SMV near-eye display.


When the distance between the SLM screen and lens 1 is denoted by s, the virtual image is produced at s′ = (1/f1 − 1/s)^−1 from lens 1. When the width of the screen of the high-speed SLM is denoted by w, the width of the virtual image is given by W = w f1 / (f1 − s). The field of view of the virtual image is given by 2 tan^−1(w / 2f1). When the interval of the light sources is denoted by p, the interval of the viewpoints produced is given by v = p f1 / f2. When the frame rate of the high-speed SLM is denoted by fSLM and the number of viewpoints is denoted by n, the frame rate of the 3D image generation is given by f3D = fSLM / n.
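These first-order relations can be collected into a short numerical sketch (Python used here for illustration; the function names are ours, not from the paper). Plugging in the experimental values reported in Sec. 4 (f1 = 40 mm, f2 = 80 mm, s = 37.0 mm, w = 16.0 mm, p = 4.0 mm, fSLM = 2,000 Hz, n = 21) reproduces the quoted design numbers to within rounding:

```python
import math

# First-order design relations of the proposed optics.
# All lengths share one unit (mm here).

def virtual_image_distance(f1, s):
    """s' = (1/f1 - 1/s)^-1; negative when the image is virtual (s < f1)."""
    return 1.0 / (1.0 / f1 - 1.0 / s)

def virtual_image_width(w, f1, s):
    """W = w * f1 / (f1 - s)."""
    return w * f1 / (f1 - s)

def field_of_view_deg(w, f1):
    """2 * arctan(w / 2 f1), in degrees."""
    return 2.0 * math.degrees(math.atan(w / (2.0 * f1)))

def viewpoint_interval(p, f1, f2):
    """v = p * f1 / f2."""
    return p * f1 / f2

def frame_rate_3d(f_slm, n):
    """f3D = fSLM / n."""
    return f_slm / n

# Experimental values from Sec. 4:
print(viewpoint_interval(4.0, 40.0, 80.0))      # 2.0 mm
print(round(field_of_view_deg(16.0, 40.0), 1))  # 22.6 degrees
print(round(frame_rate_3d(2000, 21), 1))        # 95.2 Hz
# s' with the rounded s = 37.0 mm; the quoted -500 mm corresponds
# to s of approximately 37.04 mm.
print(round(virtual_image_distance(40.0, 37.0)))  # -493 mm
```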

The viewpoint generation scheme can be flexibly changed by properly programming the light source array. When the light sources emit light one by one and the corresponding parallax images are displayed on the high-speed SLM, SMV images with full-parallax are generated. When light sources in the same column simultaneously emit light and the corresponding parallax images are displayed on the SLM, SMV images with only horizontal parallax are generated. When light sources in the same row emit light simultaneously and the corresponding parallax images are displayed, SMV images with only vertical parallax are generated.
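The three addressing schemes can be sketched as groups of LEDs lit together in each SLM frame (a minimal illustration; the function name and data representation are ours, not from the paper):

```python
# Groups of (row, col) LEDs lit simultaneously in one SLM frame, for an
# n_rows x n_cols light source array. One group per displayed parallax image.

def led_groups(n_rows, n_cols, mode):
    if mode == "full":        # one LED at a time -> full parallax
        return [[(r, c)] for r in range(n_rows) for c in range(n_cols)]
    if mode == "horizontal":  # one column at a time -> horizontal parallax only
        return [[(r, c) for r in range(n_rows)] for c in range(n_cols)]
    if mode == "vertical":    # one row at a time -> vertical parallax only
        return [[(r, c) for c in range(n_cols)] for r in range(n_rows)]
    raise ValueError("unknown mode: " + mode)

# For the 8 x 8 LED array of Sec. 4: 64 SLM frames per 3D frame in the
# full-parallax mode, but only 8 in the horizontal (or vertical) mode.
print(len(led_groups(8, 8, "full")))        # 64
print(len(led_groups(8, 8, "horizontal")))  # 8
```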

4. Experimental system

An experimental system was constructed to verify the proposed technique. Because only one high-speed SLM was available at the time, a monocular experimental system was constructed.

The experimental system is depicted in Fig. 3. The ferroelectric liquid crystal on silicon (FLCOS) SLM shown in Fig. 4 was used as the high-speed SLM. The maximum frame rate was 5,000 Hz, with a resolution of 2,048 × 2,048, a pixel pitch of 7.8 μm, and a screen size of 16.0 × 16.0 mm2. It can display binary images. Since the FLCOS SLM was a reflective SLM, a front illumination system including a polarization beam splitter (PBS) was constructed. A 2D LED array was used as the light source array. The number of LEDs was 8 × 8, the interval of the LEDs was p = 4.0 mm, and the central wavelength was 574 nm. An Arduino microcontroller was used to synchronize the FLCOS SLM and the LED array.


Fig. 3 Experimental system.



Fig. 4 FLCOS SLM used for the experimental system.


The focal lengths of lenses 1 and 2 were 40 and 80 mm, respectively. The size of the virtual image was 216 × 216 mm2 (12.0 in.), and the virtual image was projected at a distance of s′ = −500 mm from lens 1 when s = 37.0 mm. The field of view of the virtual image was 22.6° × 22.6°. The 21 LEDs in the vicinity of the optical axis were used to produce 21 viewpoints (n = 21), avoiding the distorted viewpoint generation that occurs in the peripheral area. The interval of the viewpoints was v = 2.0 mm. The frame rate of the FLCOS SLM was set to fSLM = 2,000 Hz; thus, the frame rate of the 3D image generation was f3D = 95.2 Hz. Figure 5 shows a photograph of the constructed experimental system.


Fig. 5 Constructed experimental system.


The generation of the viewpoints was evaluated. Figure 6 shows the captured intensity distributions of all 21 viewpoints. The image sensor plane of a cooled CCD camera was placed on the plane where the viewpoints were produced. A white image was displayed on the SLM, and all 21 LEDs emitted light to capture the intensity distributions. The intensity distributions of the viewpoints were well separated; thus, the crosstalk among the viewpoints was small. The intervals of the viewpoints were measured; the average interval was 1.99 mm. The distortion of the intensity distributions of the viewpoints was large on the top and bottom rows and on the left and right columns.


Fig. 6 Evaluation of the intensity distributions of all viewpoints.


The distortion of the virtual image was measured. An identical grid pattern was displayed for all viewpoints, and a digital camera was placed on the viewpoint-generation plane. The captured image is shown in Fig. 7. The maximum and minimum distances between the leftmost and rightmost lines were measured, and the horizontal distortion was obtained from the definition of TV distortion: the ratio of half the difference between the maximum and minimum distances to the minimum distance. The same calculation was performed in the vertical direction to obtain the vertical distortion. The measured distortions were 1.16% and 1.11% in the horizontal and vertical directions, respectively.
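The TV-distortion calculation described above amounts to the following one-liner (a sketch; the function name and the example numbers are ours):

```python
def tv_distortion_percent(d_max, d_min):
    """TV distortion (%): half of (max - min), relative to the minimum distance."""
    return 100.0 * 0.5 * (d_max - d_min) / d_min

# Example: a 2% spread between the widest and narrowest line separations
# corresponds to a 1% TV distortion.
print(tv_distortion_percent(102.0, 100.0))  # 1.0
```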


Fig. 7 Evaluation of the image distortion of the experimental system.


5. Experimental results

Full-parallax SMV images were generated with a viewpoint interval of 2 mm in both the horizontal and vertical directions. Figure 8 shows the captured retinal images. The test image consisted of 3 × 3 asterisks displayed at different depth positions, from −150 mm behind the virtual screen to +90 mm in front of it, at an interval of 30 mm. The upper left asterisk was displayed at the farthest position and the lower right asterisk at the nearest position. A digital camera was used to capture the retinal images instead of a real eye. The entrance pupil of the digital camera was placed on the viewpoint-generation plane, and its diameter was set to 5 mm (the average pupil diameter). The focus of the digital camera was set at the depth positions where the asterisks were displayed. The asterisk in focus is indicated by a circle. In all captured images, the focused asterisk looked the sharpest among the nine asterisks; the asterisks looked more blurred the farther they were displayed from the focused depth position.


Fig. 8 Retinal images for a viewpoint interval of 2 mm: focuses are at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.


Next, the interval of the viewpoints was increased to 4 mm in the horizontal and vertical directions. By producing one viewpoint using 2 × 2 LEDs, 3 × 2 viewpoints were generated. The captured retinal images are shown in Fig. 9. As in the experimental results shown in Fig. 8, the asterisks in focus looked sharp. However, the captured images contained larger and more unnatural blurs than those shown in Fig. 8. Figure 9 shows the retinal images when nearly 2 × 2 viewpoints were contained in the entrance pupil of the camera, and Fig. 10 shows the results when the entrance pupil contained almost two vertically adjacent viewpoints. The asterisks blurred more in the vertical direction than in the horizontal direction.


Fig. 9 Retinal images for a viewpoint interval of 4 mm when almost 2 × 2 viewpoints were contained in the camera entrance pupil: focuses are at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.



Fig. 10 Retinal images for a viewpoint interval of 4 mm when almost 1 × 2 viewpoints were contained in the camera entrance pupil: focuses are at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.


Photographs of the 3D images produced by the prototype system are shown in Fig. 11. The interval between the viewpoints was 2 mm. Because the FLCOS SLM has only two gray levels, the error diffusion technique was used to represent grayscale images. A teapot and a dragon were displayed at −120 mm behind the virtual screen and +80 mm in front of it, respectively. The digital camera was focused on each object in turn to capture the images; the focused object looked sharp while the other looked blurred. A movie of the 3D image is also provided to show how the 3D image changes with the camera focus.


Fig. 11 3D images produced by the prototype system: focuses are at (a) −120 mm (teapot), and (b) + 80 mm (dragon) (see Visualization 1).


6. Discussion

The experimental results in Sec. 5 show that the proposed technique can produce SMV images on which the digital camera could focus. Although we confirmed that the SMV images could also be focused on with the eyes, the accommodation responses of the eyes to the SMV images should be measured [19,20]. Auto refractometers are often used to measure accommodation responses. Because the distance between the viewpoints and the imaging lens (lens 1 in Fig. 3) is short, it is difficult to place an auto refractometer between them. Konrad et al. [10] developed a relay optical system to increase the distance between the eye and the display system. We plan to develop an accommodation measurement system for an SMV near-eye display by referring to this work.

The circled asterisks shown in Figs. 9(a) and 9(i) were not as sharp as that shown in Fig. 9(f). Because all parallax images were projected on the virtual image plane, the captured image of each parallax image was blurred except when the camera focused on the virtual image plane. Therefore, the circled asterisk shown in Fig. 9(f) was the sharpest, while the other circled asterisks became more blurred as the camera was focused farther from the virtual image plane. However, in Fig. 8, all the circled asterisks looked sharp: those shown in Figs. 8(a) and 8(i) were approximately as sharp as that shown in Fig. 8(f). This can be explained by considering the depth of field (DOF) of the eyes [23]. The DOF range depends on the pupil diameter of the eyes. Under the SMV condition, the interval of the viewpoints becomes the practical pupil diameter because the distribution of the viewpoints effectively acts as the pupil of the eye lens; here, we consider the width of the viewpoints to be approximately equal to their interval. Assuming that the focal length of the human eye is 17 mm and the allowable blur on the retina is 15 μm, when the interval of the viewpoints was 2 mm, the DOF range extended from −169 mm behind the virtual image plane to +104 mm in front of it. Therefore, all the circled asterisks shown in Fig. 8 were displayed within the DOF range. The experimental results shown in Fig. 9 were captured with a viewpoint interval of 4 mm. Because the pupil diameter of the camera lens was 5 mm and 2 × 2 viewpoints were contained in the entrance pupil of the camera, as shown in Fig. 9, the effective pupil diameter became 2.5 mm. In this case, the DOF range extended from −127 mm to +86 mm. Therefore, the circled asterisks shown in Figs. 9(a) and 9(i) were displayed outside the DOF range.
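The DOF figures quoted above can be reproduced with a simple geometric sketch. This is our own back-of-the-envelope model, not the paper's code, under the stated assumptions: eye focal length 17 mm, allowable retinal blur 15 μm, and an assumed viewing distance of about 540 mm to the virtual image plane (the 500 mm image distance plus the 40 mm eye relief):

```python
def dof_range(pupil_d, l0, f_eye=17e-3, blur=15e-6):
    """Far and near in-focus limits (m) for an eye focused at distance l0 (m),
    with effective pupil diameter pupil_d (set by the viewpoint interval)."""
    dk = blur / (f_eye * pupil_d)  # tolerable defocus, in dioptres
    far = 1.0 / (1.0 / l0 - dk)   # assumes 1/l0 > dk (finite far limit)
    near = 1.0 / (1.0 / l0 + dk)
    return far, near

l0 = 0.54                          # m, assumed viewing distance
far, near = dof_range(0.002, l0)   # 2 mm viewpoint interval
print(round((far - l0) * 1000), round((l0 - near) * 1000))  # 169 104 (mm)
far, near = dof_range(0.0025, l0)  # 2.5 mm effective pupil
print(round((far - l0) * 1000), round((l0 - near) * 1000))  # 127 86 (mm)
```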

As described above, the DOF of the eyes determines the depth range in which 3D images are displayed without blur in this study. In Ref. [5], the DOF of the elemental imaging systems determines the depth range: because an array of elemental imaging systems is used to provide multiple images to an eye, the DOF range of these images is determined by the lens diameter of the elemental imaging systems. The lens diameter is small, and each elemental imaging system has an eye box larger than the eye pupil. In contrast, the technique proposed in this study uses one large lens to provide multiple images through multiple narrow-pitch viewpoints, so the DOF range of the multiple images is determined by the width of the viewpoints. This effect also appears in the technique proposed in [22].

As described above, since the practical pupil diameter decreases under the SMV condition, the resolution of the retinal images decreases. However, even when the interval of the viewpoints was 2 mm, the circled asterisks shown in Fig. 8 appeared sharp. From the Rayleigh criterion, the resolvable angle is given by θ = 1.22λ/d, where λ is the wavelength of light and d is the pupil diameter. When the interval of the viewpoints is 2 mm, the practical pupil diameter is 2 mm; thus, θ = 0.0201° (λ = 574 nm), corresponding to a visual acuity of 0.83. Therefore, no obvious deterioration of the resolution was perceived by the eyes. A similar and more detailed analysis of the DOF and the resolution is provided in [22].
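The diffraction estimate above can be checked directly (a sketch; decimal visual acuity is taken here as the reciprocal of the resolvable angle in arcminutes, and the function names are ours):

```python
import math

def resolvable_angle_deg(wavelength, pupil_d):
    """Rayleigh criterion: theta = 1.22 * lambda / d (radians), in degrees."""
    return math.degrees(1.22 * wavelength / pupil_d)

def decimal_acuity(theta_deg):
    """Decimal visual acuity = 1 / (resolvable angle in arcminutes)."""
    return 1.0 / (theta_deg * 60.0)

theta = resolvable_angle_deg(574e-9, 2e-3)  # 574 nm LED, 2 mm effective pupil
print(round(theta, 4))                  # 0.0201 (degrees)
print(round(decimal_acuity(theta), 2))  # 0.83
```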

This study demonstrated the generation of SMV images using the time-multiplexing technique. No flicker was observed with the experimental system. Although monochromatic images were generated in this study, color images can also be generated using the time-multiplexing technique: R, G, and B LED arrays are used as the light source array while R, G, and B images are sequentially displayed on the FLCOS SLM. To achieve a frame rate of 60 Hz for color image generation with 21 viewpoints, the required frame rate for the SLM is 3,780 Hz. Because the maximum frame rate of the FLCOS SLM is 5,000 Hz, 3D color images can be produced using the present FLCOS SLM. In our experiments, grayscale images were generated using the error diffusion technique. Grayscale images can also be generated by increasing the frame rate of the SLM and modulating the light intensities emitted by the LEDs; for example, SMV images with eight gray levels (3 bits) can be obtained by setting the frame rate of the SLM to 3,780 Hz. The number of gray levels can also be increased by reducing the frame rate of the SMV image generation and reducing the number of viewpoints.
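The frame-rate budget described above follows from a simple product (a sketch; the bit-plane accounting assumes one binary subframe per bit, with gray-level weighting provided by the LED intensities as described, and the function name is ours):

```python
def required_slm_rate(f_3d, n_viewpoints, n_colors=1, bits=1):
    """Binary SLM frame rate (Hz) needed for time-multiplexed 3D display."""
    return f_3d * n_viewpoints * n_colors * bits

print(required_slm_rate(60, 21, n_colors=3))  # 3780 Hz: 60 Hz color, 21 views
print(required_slm_rate(60, 21, bits=3))      # 3780 Hz: 8 gray levels, mono
```

Both cases stay under the 5,000 Hz ceiling of the FLCOS SLM, which is why color or 3-bit grayscale operation is feasible with the present hardware.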

The experimental system can generate not only 3D images with full parallax but also those with horizontal parallax only. In the horizontal parallax mode, the LEDs in the same column emit light simultaneously, and five viewpoints were generated in the horizontal direction with a viewpoint interval of 2 mm. Figure 12 shows the captured retinal images for the horizontal parallax mode. The focused asterisks looked nearly as sharp as those obtained in the full-parallax mode shown in Fig. 8. The captured images in the full-parallax mode were compared with those in the horizontal parallax mode, as shown in Fig. 13, where the focus of the digital camera was at −150 mm. When the lower right asterisks were compared, the full-parallax mode provided a more blurred image than the horizontal parallax mode, because in the horizontal parallax mode the vertical blur remained as small as that of the focused image. Therefore, the horizontal parallax mode provides less blurred unfocused images than the full-parallax mode. The effects of this difference on human accommodation responses should be investigated using the auto refractometer, and the appropriate SMV display conditions should be determined by measuring the accommodation responses of human eyes.


Fig. 12 Retinal images for the horizontal parallax mode with a viewpoint interval of 2 mm: focuses were at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.



Fig. 13 Comparison of retinal images generated by (a) full-parallax mode and (b) horizontal parallax mode at a focus of −150 mm.


We have considered the properties of the proposed SMV near-eye display as they pertain to practical applications. The eye box size of the experimental system was 10 × 10 mm2, limited by the distortion of the viewpoint generation (as shown in Fig. 6). In the experimental system, spherical lenses were used as lenses 1 and 2 for the generation of the viewpoints. With the use of aspherical lenses, the distortion of the viewpoint generation will decrease and the eye box size will increase. The eye relief, i.e., the separation between lens 1 and the viewpoints, was 40 mm, which is appropriate for VR applications. For AR applications, a half mirror should be inserted between lens 1 and the viewpoints, which would reduce the eye relief. The size of the experimental system was dominated by the illumination system (shown in Fig. 5) because the pitch of the LEDs was 4 mm. With the use of a small-pitch LED array, the system size and weight can be reduced. A compact system construction will also be enabled by the use of freeform optics [24].

7. Conclusion

In this study, an SMV near-eye display system that combines a high-speed SLM with a 2D light source array was proposed to solve the VA conflict. The system generates multiple viewpoints using a time-multiplexing technique. A monocular experimental system was constructed using the FLCOS SLM and an LED array, generating 21 viewpoints with an interval of 2.0 mm. The field of view was 22.6° × 22.6°, the 3D resolution was 2,048 × 2,048, and the frame rate was 95.2 Hz.

We achieved the generation of full-parallax SMV images with 2 and 4 mm viewpoint intervals and demonstrated that it was possible to focus on SMV images produced in the depth range from −150 mm behind the virtual screen to +90 mm in front of it. A comparison of the full-parallax and horizontal parallax SMV display modes revealed that the horizontal parallax mode provides less blurred unfocused images than the full-parallax mode. In the future, we will construct a binocular experimental system, measure the accommodation responses to it, and improve the experimental system to generate 3D color images.

Funding

Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number 17K18872.

Acknowledgments

The authors would like to thank Citizen Watch Co., Ltd. for providing the FLCOS SLM.

References

1. I. E. Sutherland, “A head-mounted three dimensional display,” Proc. of Fall Joint Computer Conference, 757–764 (1968).

2. S. Yano, S. Ide, T. Mitsuhashi, and H. Thwaites, “A study of visual fatigue and visual comfort for 3D HDTV/HDTV images,” Displays 23(4), 191–201 (2002). [CrossRef]  

3. C. M. Schor, “A dynamic model of cross-coupling between accommodation and convergence: simulations of step and frequency responses,” Optom. Vis. Sci. 69(4), 258–269 (1992). [CrossRef]   [PubMed]  

4. G. A. Koulieris, B. Bui, M. S. Banks, and G. Drettakis, “Accommodation and comfort in head-mounted displays,” ACM Trans. Graph. 36(4), 1 (2017). [CrossRef]  

5. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 220 (2013). [CrossRef]  

6. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]   [PubMed]  

7. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  

8. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014). [CrossRef]   [PubMed]  

9. N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. U.S.A. 114(9), 2183–2188 (2017). [CrossRef]   [PubMed]  

10. R. Konrad, N. Padmanaban, K. Molner, E. A. Cooper, and G. Wetzstein, “Accommodation-invariant computational near-eye displays,” ACM Trans. Graph. 36(4), 88 (2017). [CrossRef]  

11. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]   [PubMed]  

12. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 85 (2017). [CrossRef]  

13. E. Murakami, Y. Oguro, and Y. Sakamoto, “Study on compact head-mounted display system using electro-holography for augmented reality,” IEICE Trans. Electron. E100C(11), 965–971 (2017).

14. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

15. Y. Takaki, “Thin-type natural three-dimensional display with 72 directional images,” Proc. SPIE 5664, 56–63 (2005). [CrossRef]  

16. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010). [CrossRef]   [PubMed]  

17. Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011). [CrossRef]   [PubMed]  

18. T. Kanebako and Y. Takaki, “Time-multiplexing display module for high-density directional display,” Proc. SPIE 6803, 68030P (2008). [CrossRef]  

19. H. Mizushina, J. Nakamura, Y. Takaki, and H. Ando, “Super multi-view 3D displays reduce conflict between accommodative and vergence responses,” J. Soc. Inf. Disp. 24(12), 747–756 (2016). [CrossRef]  

20. J. Nakamura, K. Tanaka, and Y. Takaki, “Increase in depth of field of eyes using reduced-view super multi-view displays,” Appl. Phys. Express 6(2), 022501 (2013). [CrossRef]  

21. T. Ueno and Y. Takaki, “Super multi-view near-eye display using time-multiplexing technique,” in 3D Image Acquisition and Display: Technology, Perception and Applications, OSA Technical Digest (online) (Optical Society of America, 2018), paper 3Tu2G.4.

22. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017). [CrossRef]  

23. F. W. Campbell, “The depth of field of the human eye,” J. Mod. Opt. 4(4), 157–164 (1957).

24. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]   [PubMed]  

    [Crossref]
  13. E. Murakami, Y. Oguro, and Y. Sakamoto, “Study on compact head-mounted display system using electro-holography for augmented reality,” IEICE Trans. Electron. E100C(11), 965–971 (2017).
  14. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006).
    [Crossref]
  15. Y. Takaki, “Thin-type natural three-dimensional display with 72 directional images,” Proc. SPIE 5664, 56–63 (2005).
    [Crossref]
  16. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010).
    [Crossref] [PubMed]
  17. Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011).
    [Crossref] [PubMed]
  18. T. Kanebako and Y. Takaki, “Time-multiplexing display module for high-density directional display,” Proc. SPIE 6803, 68030P (2008).
    [Crossref]
  19. H. Mizushina, J. Nakamura, Y. Takaki, and H. Ando, “Super multi-view 3D displays reduce conflict between accommodative and vergence responses,” J. Soc. Inf. Disp. 24(12), 747–756 (2016).
    [Crossref]
  20. J. Nakamura, K. Tanaka, and Y. Takaki, “Increase in depth of field of eyes using reduced-view super multi-view displays,” Appl. Phys. Express 6(2), 022501 (2013).
    [Crossref]
  21. T. Ueno and Y. Takaki, “Super multi-view near-eye display using time-multiplexing technique,” in 3D Image Acquisition and Display: Technology, Perception and Applications, OSA Technical Digest (online) (Optical Society of America, 2018), paper 3Tu2G.4.
  22. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 190 (2017).
    [Crossref]
  23. F. W. Campbell, “The depth of field of the human eye,” J. Mod. Opt. 4(4), 157–164 (1957).
  24. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018).
    [Crossref] [PubMed]


Supplementary Material (1)

Visualization 1: 3D images produced by the experimental system.


Figures (13)

Fig. 1. SMV display technique: eyes focused at (a) the display screen and (b) the 3D image.

Fig. 2. Schematic diagram of an SMV near-eye display.

Fig. 3. Experimental system.

Fig. 4. FLCOS SLM used for the experimental system.

Fig. 5. Constructed experimental system.

Fig. 6. Evaluation of the intensity distributions of all viewpoints.

Fig. 7. Evaluation of the image distortion of the experimental system.

Fig. 8. Retinal images for a viewpoint interval of 2 mm: focus at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.

Fig. 9. Retinal images for a viewpoint interval of 4 mm when approximately 2 × 2 viewpoints were contained in the camera entrance pupil: focus at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.

Fig. 10. Retinal images for a viewpoint interval of 4 mm when approximately 1 × 2 viewpoints were contained in the camera entrance pupil: focus at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.

Fig. 11. 3D images produced by the prototype system: focus at (a) −120 mm (teapot) and (b) +80 mm (dragon) (see Visualization 1).

Fig. 12. Retinal images for the horizontal-parallax mode with a viewpoint interval of 2 mm: focus at (a) −150 mm, (b) −120 mm, (c) −90 mm, (d) −60 mm, (e) −30 mm, (f) 0 mm, (g) +30 mm, (h) +60 mm, and (i) +90 mm.

Fig. 13. Comparison of retinal images generated by (a) the full-parallax mode and (b) the horizontal-parallax mode at a focus of −150 mm.
