Optica Publishing Group

Full-color see-through near-eye holographic display with 80° field of view and an expanded eye-box

Open Access

Abstract

A full-color see-through near-eye holographic display with an 80° field of view (FOV) and an expanded eye-box is proposed. The system is based on a holographic optical element (HOE) that achieves a large FOV: the image light is focused at the entrance of the human pupil so that the image of the entire field enters the eye. One of the major limitations of large-FOV holographic display systems is the small eye-box, which needs to be expanded. We design a double-layer diffraction structure for the HOE to realize eye-box expansion. The HOE consists of two non-uniform volume holographic gratings and a transparent substrate. The two fabricated gratings are attached to the front and back surfaces of the substrate to multiplex the image light and expand the eye-box. The HOE is also manufactured for RGB colors to realize full-color display. The experimental results show that the proposed display system achieves an 80° round FOV and an enlarged eye-box of 7.5 mm (H) × 5 mm (V) at the same time. The dynamic display ability is also tested in the experiments. The proposed system provides a new solution for the practical application of augmented reality displays.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, augmented reality (AR) has been widely used as a promising technology in various fields such as industry, military, medicine, consumer electronics, and entertainment [1,2]. AR near-eye displays present virtual signals superimposed on the viewer’s real-world scene. There are many AR display solutions, including free-form prisms [3], deformable mirrors [4], holographic waveguides [5–7], and holographic 3D display [8–10]. Holographic 3D display can provide natural monocular focus cues and high resolution as well as vision-correction ability, making it a strong candidate for future near-eye displays. Holographic 3D display also has some typical issues, such as speckle noise, system complexity, a large amount of computational data, and limited spatial bandwidth; various studies have been performed to overcome these issues [11–17]. For the application of holography in AR display, the limited bandwidth of the spatial light modulator (SLM) imposes an important trade-off between field of view (FOV) and eye-box size: the product of the FOV and the eye-box size must equal the étendue of the display system, which is determined by the size and pixel pitch of the SLM [18]. Achieving a large FOV together with a sufficiently large eye-box is a common goal for near-eye displays to provide better immersion.
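The FOV–eye-box trade-off can be illustrated with a rough one-dimensional estimate. This is our own back-of-envelope sketch, not the paper's derivation; the 4K-panel numbers are those given later for the SLMs used in the system:

```python
import math

# Rough 1-D etendue estimate for a phase-only SLM.
wavelength = 532e-9   # m, green channel
n_pixels   = 3840     # 4K panel, horizontal
pitch      = 3.74e-6  # m, pixel pitch

# The SLM's space-bandwidth product limits
# (eye-box width) x (2 * sin(FOV / 2)) to roughly N * lambda,
# so a large FOV forces a small native eye-box.
fov_deg = 80.0
eyebox = n_pixels * wavelength / (2.0 * math.sin(math.radians(fov_deg / 2.0)))
print(f"native eye-box at {fov_deg:.0f} deg FOV: {eyebox*1e3:.2f} mm")
```

The result is on the order of 1.6 mm, which illustrates why a large-FOV holographic display needs a dedicated eye-box expansion scheme.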

Recently, the holographic optical element (HOE) has been suggested as an observation device for holographic near-eye displays that can provide a large FOV [19–24]. The HOE functions as an eyepiece that transmits the external scene, while the holographic display guarantees that real 3D images can be observed without the vergence-accommodation conflict. Maimone et al. [19] integrated a convex lens recorded in a volume hologram with free-form optics to develop a lightweight AR display prototype with a viewing angle of 80°. Won et al. [20,21] reported a liquid-lens-based optical see-through head-mounted display that can simultaneously display both a Maxwellian view and a hologram. The metasurface technique also has great potential for near-eye display: Lee et al. [22] first demonstrated a metasurface eyepiece realizing a compact AR near-eye display with a FOV of 90° for monochrome imaging and 76° for full-color imaging. However, these approaches still have limitations, such as difficulty of production, high cost, low efficiency, and small size with a short eye-relief distance. The main limitation of large-FOV holographic display is the small eye-box of the system. Jang et al. [23] proposed an eye-box expansion method using a lens array to generate a point-light array, in which the eye-box is shifted with the shift of the point light source. This method needs real-time detection of the eye position, requiring coordination with an eye-tracking system, and the field image also undergoes an overall displacement when the eye-box is switched. Park et al. [24] proposed a near-eye display with an enlarged eye-box that includes a planar waveguide and a convex lens element to achieve a retinal scanning display, but its FOV is only 9.2° (H) × 5.2° (V). Although this method can form multiple eye-box points, the images seen from different eye-box points have a certain angular deflection, which causes image ghosting, crosstalk, or loss.

From previous literature, there is currently no eye-box expansion method for large-FOV display systems other than dynamic tracking of the human eye.

In this paper, we propose a large-FOV display system together with a new eye-box expansion method. The method is easy to implement and introduces little crosstalk and image loss. We use an HOE to observe the holographic reconstructed images projected by the SLM. Double-layer gratings and a transparent substrate constitute the HOE, which functions as a high-numerical-aperture eyepiece. The human eye can observe a uniform-intensity image that fills the entire FOV through the HOE. The system realizes an off-axis see-through near-eye holographic display with a large 80° FOV that satisfies the immersive experience of the human eye. Simultaneously, the eye-box is expanded to 7.5 mm (H) × 5 mm (V) without image crosstalk. The HOE also redirects the light from the SLM, incident approximately 60° off-axis, to the on-axis position of the human eye while keeping the external scene clearly visible, enabling AR display. The holographic gratings are manufactured separately and attached to a substrate to enlarge the eye-box.

2. System and principle

2.1 Designed display system

The display system is shown in Fig. 1 and consists of three RGB (red, green, blue) optical paths. Taking the red laser path as an example, a polarizer is placed after the laser to adjust the polarization direction of the light source. A spatial filter (SF) and a lens generate a collimated illumination source, and a phase-only SLM is used to upload the CGH. The green and blue laser paths have the same optical structure. A 4f filter blocks the noise caused by the dead zone of the SLM. A mirror redirects the light, and the HOE functions as an eyepiece that converges the highly off-axis beam into an on-axis beam in front of the human eye. The collimated illumination is spatially modulated by the SLM to form the desired image light, with three 4K SLMs respectively modulating the red, green, and blue holograms. The modulated parallel light is coupled into the HOE at a 60° angle. We can observe virtual images and the real scene through the HOE. The key to realizing a large FOV in our system is to design an HOE with the optical functions shown in Fig. 1. The HOE is based on a reflective volume holographic grating, which can achieve a large off-axis angle and provide a large processing size compared to a coaxial system. In this way, a large-FOV display can be realized while ensuring a sufficient eye-relief distance. The specific design parameters are described in detail in Section 3.

Fig. 1. Schematic view of the wide field see-through holographic display.

2.2 Eye-box expansion principle

The principle of enlarging the eye-box is shown in Fig. 2. The probe light hits the HOE and is divided into three beams: positive first-order diffraction light, zero-order transmission light, and zero-order reflection light. This is a property of volume holographic gratings [25]. The zero-order reflection light cannot converge to the human eye, so it can be ignored in the analysis. The first-order diffraction light converges in front of the human eye. Figure 2(a) shows the ray path of the target image entering the human eye, and the propagation path of light in the designed HOE is shown in Fig. 2(b). A beam of light is incident at point A with an incident angle θ0 = 60°; the first-order diffraction light at point A is diffracted towards the human eye, and the zero-order transmission light from point A hits point B. Points A and B have the same grating period, so the direction of the first-order diffraction light at point B is the same as that of the probe light at point A. This periodic correspondence ensures that rays ① and ②, emitted from points A and E, are parallel; that is, a beam of light carrying image information is divided into two parallel beams by the designed HOE. The displacement from point A to point E is the horizontal expansion of the eye-box. Figure 2(c) shows the geometric relationship of the vector parameters for the reflection grating, where the z direction is the grating thickness direction and the other parameters correspond to the formula analysis below.

Fig. 2. (a)-(b) Schematic diagram of eye-box expansion, (c) Reflection grating vector diagram.

As shown in Fig. 3(a), each image point has a diffraction angle represented by the dotted line, and the solid red line corresponds to the illumination light in Fig. 2(b). The light modulated by the SLM has various directional components within the diffraction angle of the SLM; we use SLMs to achieve holographic projection, and each point of the projected image has a divergence angle ${\theta _d}$. Thus, Bragg mismatching occurs when the diffracted light from the SLM is used as the probe wave of the HOE in the reconstruction process. Therefore, the angular variation of the reconstructed wave and the diffraction efficiency distribution with the angular deviation of the probe wave need to be analyzed [26]. Since the manufactured gratings are non-uniform-period volume gratings, we only need to analyze the angular bandwidth at the position with the smallest period on the grating and the relationship between the incident angle and the diffraction efficiency, as shown in Fig. 3(b). It is assumed that the lasers used in the experiments are quasi-monochromatic light sources, and the RGB curves in Fig. 3(b) correspond to the RGB gratings. Figure 3(b) shows that when the diffraction efficiency falls to half of its peak, the corresponding angular bandwidth ${\theta _d}$ is about 2.5°, which means that the light transmitted from an image point has a divergence angle of 2.5°, as shown by the dotted line in Fig. 3(a). Based on the Kogelnik coupled wave theory [27], we then analyze the relationship between angular bandwidth and diffraction efficiency.

Fig. 3. (a) Eye-box expansion geometry schematic, (b) Relationship of diffraction angle and angular bandwidth.

The specific relationship between diffraction efficiency and angular bandwidth is analyzed as follows.

Grating 1 and grating 2 operate under Bragg diffraction conditions, which means their diffraction behavior can be described by the Kogelnik coupled wave theory. When the illumination light wave does not satisfy the Bragg condition, the phase mismatch $\delta$ is introduced by

$$\delta = \Delta \theta K\sin (\phi - {\theta _\textrm{1}}),$$
where $\Delta \theta$ is the deviation of the incident angle ${\theta _r}$ to the Bragg incident angle ${\theta _\textrm{1}}$. Grating vector $K$ and grating angle $\phi$ are shown in Fig. 2(c). The coupling strength $\nu$ and Bragg mismatch parameter $\xi$ of the reflective phase grating are expressed as follows:
$$\nu = \frac{{\pi \Delta nd}}{{\lambda {{(\cos {\theta _r}\cos {\theta _s})}^{1/2}}}},$$
$$\xi = \frac{{\delta d}}{{2\cos {\theta _s}}}\textrm{ = }\frac{{\Delta \theta K\sin (\phi - {\theta _\textrm{s}})d}}{{2\cos {\theta _s}}},$$
where $\lambda$ is the wavelength in vacuum, $\Delta n$ is refractive index modulation of material, d is the thickness of the volume holographic grating and ${\theta _\textrm{s}}$ is diffraction angle.

The diffraction efficiency formula of the reflective phase grating is expressed as:

$$\eta = \frac{\sinh^2\sqrt{\nu^2 - \xi^2}}{\sinh^2\sqrt{\nu^2 - \xi^2} + \left[1 - (\xi/\nu)^2\right]}.$$

It can be seen from Eqs. (3) and (4) that the angular bandwidth and the diffraction efficiency have a definite correspondence. For the fabricated HOE, the grating thickness is 13.5 µm, the substrate thickness is 2 mm, and the refractive index modulation $\Delta n$ is 0.032. The refractive index n of the holographic grating is 1.52 and the refractive index $n^{\prime}$ of the glass substrate is 1.5168. The incident angle ${\theta _0}$ of the illumination light wave at point A is 60°. In our system, the three SLMs all have a pixel pitch of 3.74 µm and a resolution of 3840×2160. According to the grating equation [28], the maximum diffraction angle of the SLM is $2{\alpha _{\max }} = {8^\circ}$, which is greater than ${\theta _\textrm{d}}$, so we only need to set a proper diffraction distance to ensure that the diffraction angle β of the projected image is greater than ${\theta _\textrm{d}}$. The geometric relationship between the SLM and the reconstructed image is shown in Fig. 4. The diffraction distance of the hologram must range from 100 mm to 329 mm.
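As a numerical sanity check, the sketch below evaluates the coupling strength of Eq. (2) and the peak efficiency of Eq. (4) for the green channel, plus the SLM diffraction angle from the grating equation. The internal angles (an on-axis diffracted beam, θs ≈ 0) are our own geometric assumptions, since the exact slant of the non-uniform grating is not stated:

```python
import math

def kogelnik_eta(nu, xi):
    """Diffraction efficiency of a reflective phase grating, Eq. (4).
    For |xi| > nu the sinh of an imaginary argument becomes a sine."""
    s2 = nu**2 - xi**2
    sh2 = math.sinh(math.sqrt(s2))**2 if s2 >= 0 else -math.sin(math.sqrt(-s2))**2
    return sh2 / (sh2 + (1.0 - (xi / nu)**2))

# Parameters from the paper (green channel); the angles are assumptions.
lam, d, dn, n = 0.532e-6, 13.5e-6, 0.032, 1.52
theta_r = math.asin(math.sin(math.radians(60.0)) / n)  # Bragg angle inside HOE
theta_s = 0.0                                          # assumed on-axis diffraction
nu = math.pi * dn * d / (lam * math.sqrt(math.cos(theta_r) * math.cos(theta_s)))
print(f"coupling strength nu = {nu:.2f}, peak efficiency = {kogelnik_eta(nu, 0.0):.3f}")

# SLM maximum diffraction half-angle from the grating equation:
# sin(alpha_max) = lambda / (2 p), giving roughly 8 deg full angle.
p = 3.74e-6
two_alpha = 2.0 * math.degrees(math.asin(lam / (2.0 * p)))
print(f"SLM full diffraction angle 2*alpha_max = {two_alpha:.1f} deg")
```

With these values the grating is strongly overmodulated (peak efficiency close to 1), consistent with the narrow angular bandwidth shown in Fig. 3(b).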

Fig. 4. Geometric relationship between SLM and reconstructed image.

The size of eye-box can be calculated by:

$${\textrm{d}_{\textrm{DC}}}\textrm{ = }{\textrm{d}_{\textrm{DA}}} + {\textrm{d}_{\textrm{AC}}}\textrm{ = }{\textrm{d}_{\textrm{OA}}}\cos {\theta _0} \ast \left( {\tan \left( {{\theta_0} + \frac{{{\theta_\textrm{d}}}}{2}} \right) - \tan \left( {{\theta_0} - \frac{{{\theta_\textrm{d}}}}{2}} \right)} \right),$$
$${\textrm{d}_{\textrm{AE}}} = {\textrm{d}_{\textrm{AB}}}\cos (\frac{\pi }{2} - {\theta _\textrm{r}}) + {\textrm{d}_{\textrm{BE}}}\sqrt {1 - {{\left[ {\frac{{\sin ({\theta_s} - \pi )}}{{n^{\prime}}}} \right]}^2}} ,$$

As shown in Fig. 3(a), ${\textrm{d}_{\textrm{DC}}}$ is the distance between points D and C, and ${\textrm{d}_{\textrm{AE}}}$ is the distance between points A and E. ${\theta _\textrm{d}}$ is the diffraction angle of an image point, ${\theta _0}$ is the Bragg incident angle in air, and ${\theta _\textrm{r}}$ is the Bragg incident angle in the HOE. $\textrm{n}^{\prime}$ is the refractive index of the substrate, and ${\theta _\textrm{s}}$ is the diffraction angle at point B inside the HOE. According to the above formulas, the theoretical expansion in the horizontal direction is ${\textrm{d}_{\textrm{AE}}}\textrm{ + }{\textrm{d}_{\textrm{DC}}}$ = 4.5 mm. In the vertical direction, the eye-box size equals the distance between points D and C, i.e., 2 mm. Under normal lighting conditions, the size of the human pupil is about 3 mm. This ensures that all images can enter the human eye, while the horizontal movement range of the eye reaches 7.5 mm and the vertical eye-box equals 5 mm. Movement of the eye within this range causes no obvious attenuation of image brightness or loss of information. The experimental results are shown in Section 4.
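A hedged numerical check of Eq. (5) and the eye-box totals follows. The distance d_OA is not stated explicitly in the text, so the 23 mm used below is a hypothetical value back-solved from the reported 2 mm result:

```python
import math

theta0 = math.radians(60.0)   # Bragg incidence in air
theta_d = math.radians(2.5)   # angular bandwidth of an image point

# Eq. (5): spread of one diffracted beam at the eye plane.
# d_OA (image point O to point A on the HOE) is an assumed value.
d_OA = 23e-3
d_DC = d_OA * math.cos(theta0) * (math.tan(theta0 + theta_d / 2)
                                  - math.tan(theta0 - theta_d / 2))
print(f"d_DC = {d_DC*1e3:.1f} mm")   # about 2 mm per beam

# Total movement range = intrinsic pupil (3 mm) + expansion terms.
pupil = 3e-3
horizontal = pupil + 4.5e-3   # d_AE + d_DC = 4.5 mm (paper's value)
vertical = pupil + d_DC
print(f"eye-box ~ {horizontal*1e3:.1f} mm (H) x {vertical*1e3:.1f} mm (V)")
```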

3. Optical fabrication of the designed system

The manufacturing procedure of the monochrome HOE in the proposed system is shown in Fig. 5(a). We use a spherical signal wave and an obliquely incident plane reference wave to record the diffractive gratings. The recording positions of the two gratings are shown in Fig. 5(a), and the distance between them equals the thickness of the substrate. During display, the volume holographic grating is illuminated with the conjugate of the reference wave, and the diffracted beam is the conjugate of the signal wave. Several substrates can be used to fabricate the HOE; here we use a K9 glass substrate, which is easy to obtain and economical. The grating material is a transparent photopolymer. The thickness of the substrate is 2 mm and its refractive index is 1.5168. The manufactured gratings are shown in Fig. 5(b); their diameters are 50 mm and 51.8 mm, respectively. The recording light source of the gratings shown in Fig. 5(b) is a green laser with a wavelength of 532 nm. The grating layer can be separated from the substrate or pasted onto it. In the fabrication process, we control the diffraction efficiency of the first-layer grating to 40% and that of the second-layer grating to 90% to ensure that the intensities of the diffracted light from the two layers are basically the same.
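A quick arithmetic sketch shows why the 40%/90% split roughly equalizes the two exit beams. This ignores Fresnel reflections and any second pass through the first grating, which in practice bring the second beam still closer to the first:

```python
# First-layer grating: 40% diffracted toward the eye, 60% transmitted on.
eta1, eta2 = 0.40, 0.90

beam1 = eta1                  # diffracted directly at grating 1
beam2 = (1.0 - eta1) * eta2   # transmitted, then diffracted at grating 2

print(f"beam 1: {beam1:.2f}, beam 2: {beam2:.2f}")  # 0.40 vs 0.54 before losses
```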

Fig. 5. (a) Schematic diagram for the manufacture of HOE, (b) Grating samples.

For display systems, the importance of color display is self-evident. Owing to the good wavelength and angular selectivity of HOE devices, we improved the system design to achieve full-color display. As shown in Fig. 6(a), the multicolor HOE consists of a three-layer grating structure with RGB gratings. When designing the color HOE, we superimpose a red grating and a blue grating on the green grating. Owing to the wavelength and angular selectivity of the volume gratings, the different color gratings in the upper and lower layers do not influence the actual display, so no image crosstalk is generated when RGB lasers simultaneously illuminate the designed full-color HOE. In our display, the wavelengths of the light sources used to manufacture the full-color HOE are 632 nm, 532 nm, and 482 nm, respectively. The experimental model is a full-color cube, and the display result in Fig. 6(b) shows a full-color image without crosstalk. Since the powers of the display lasers differ, we adjust the laser powers to obtain the best color result.

Fig. 6. (a) Full-color HOE structure, (b) Display of the full-color HOE.

4. Experimental results

The holographic calculation of our display system is based on an improved Fresnel diffraction algorithm. In a Fresnel holographic system, illumination light propagates from a light source and encounters the hologram plane, where the hologram alters its amplitude and phase. The illumination light is diffracted and focused onto one or more target points with different intensities, and the target points form the desired image. The holograms are generated by the accurate compressed look-up-table (AC-LUT) point-source method, an improved Fresnel holography algorithm that easily reconstructs 3D objects [29]. In the experiment, collimated parallel light illuminates the SLM, each pixel of the SLM panel produces diffracted light, and the diffracted light is imaged at a certain distance from the SLM. A camera replaces the human eye to capture the external scene. The specifications of the system are: working wavelengths of 632 nm, 532 nm, and 473 nm; three 4K SLM panels (Jasper JD8714); a 132° diagonal-FOV camera lens (Thorlabs MVL4WA, f = 3.5 mm); and a 1280×1024 CCD sensor (Thorlabs DCU224C).
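The point-source idea behind the hologram generation can be sketched as a naive Fresnel point-source CGH. This is the unaccelerated baseline, not the AC-LUT implementation of Ref. [29]; the resolution and point coordinates are illustrative:

```python
import numpy as np

def point_source_cgh(points, nx=256, ny=256, pitch=3.74e-6, lam=532e-9):
    """Naive point-source Fresnel CGH: superpose paraxial spherical
    wavefronts from each (x, y, z, amplitude) target point and keep only
    the phase for a phase-only SLM."""
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    for x0, y0, z0, a in points:
        # Fresnel (paraxial) spherical phase contributed by one point.
        r2 = (X - x0) ** 2 + (Y - y0) ** 2
        field += a * np.exp(1j * np.pi * r2 / (lam * z0))
    return np.angle(field)  # phase-only hologram in [-pi, pi]

# Two points at different depths within the 100-329 mm range from the text.
holo = point_source_cgh([(0.0, 0.0, 0.15, 1.0), (1e-4, 0.0, 0.25, 1.0)])
print(holo.shape)
```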

4.1 Field of view of the designed system

We first test the large FOV of the proposed system. A grid image is used as the target image to test the FOV of the display system, and two real objects (Pigsy and Sha Monk) are placed in the real scene. The target image, as the probe wave, is projected onto the HOE plane and enters the human eye. We can observe both the real and virtual scenes through the transparent HOE. The hologram of the grid image is uploaded to the SLM, and the desired image is projected onto the front focal plane of the HOE. We use a camera with a 132° diagonal FOV to capture the target image. Figure 7 shows the FOV testing result; the image information fills the entire field of view. The projection area of the grid image on the HOE needs to be larger than the effective area of the HOE to ensure that the edge of the FOV is completely filled by the test image. The system FOV is calculated to be an 80° round view from the dimensions of the virtual image and the camera sensor. Once the HOE parameters are settled, the FOV of the proposed system is determined. Therefore, the eye-box expansion results are analyzed based on this fixed FOV.

Fig. 7. FOV testing result of system.

In particular, as shown in Fig. 7, the captured virtual grid image has barrel distortion, which is caused by the ultra-wide-angle camera lens. In actual observation, the virtual image does not show this distortion, even at the edge of the field of view of the system.

4.2 Eye-box expansion effect

In this part, we test the eye-box expansion of the proposed system. The experimental results for the green HOE are shown in Figs. 8(a)–8(c), and those for the full-color HOE in Figs. 8(d)–8(f). When the camera is placed at the far left, at a horizontal position of about −4 mm, the results are shown in Figs. 8(a) and 8(d). When the camera moves to the middle of the eye-box, at a horizontal position of about 0, the display results are shown in Figs. 8(b) and 8(e). Figures 8(c) and 8(f) show the results when the camera moves to the far right of the eye-box, at a horizontal position of about 4 mm. If the camera continues to move to the right, it can no longer see the full view of the virtual object. The movement of the camera position is reflected by the relative movement between the edge objects of the actual scene captured by the camera and the entire field of view. In the measurement, the eye-box size of the system is consistent with the theoretical analysis. The two-layer design achieves a twofold eye-box expansion; we will use a multi-layer design to achieve a larger eye-box in the future.

Fig. 8. Observed images at different eye positions.

4.3 Dynamic display

The proposed display system also has dynamic see-through display ability, which we test with optical experiments. The dynamic 3D signal in Fig. 9 is a rotating Pegasus. We set the refresh frame rate to 24 fps, according to the rate of the SLM (35 fps). The dynamic scenes are recorded by the camera, and several frames extracted from the video (Visualization 1) are shown in Figs. 9(a)–9(f). The virtual Pegasus and the external real scene are captured by the camera at the same time, which shows that our proposed system can realize the fusion of virtual and real scenes. When creating the models, we also add flat shading to increase the depth perception.

Fig. 9. Dynamic 3D see-through display (Visualization 1), (a)-(f) are the extracted frames.

We also perform dynamic color display. Six frames extracted from the video (Visualization 2) are shown in Figs. 10(a)–10(f). Here, three SLMs are used to composite the color dynamic display: they respectively modulate the RGB colors to generate three images, which are synthesized into a color image in the air. In fact, the wavelength of the high-power laser used to manufacture the blue grating is 482 nm, while the wavelength of the blue laser in the full-color display is 473 nm. This difference results in aberrations in the blue portion of the color dynamic display, but it does not affect our verification of the color display.

Fig. 10. Dynamic 3D see-through colorful display (Visualization 2), (a)-(f) are the extracted frames.

The detailed parameters of the system are collected and listed in Table 1.

Table 1. Parameters of the system.

5. Discussion

There have been many previous studies on expanding the eye-box or FOV of a holographic display. As shown in Table 2, we compare the specifications of some representative systems. In the FOV column, H and V denote the horizontal and vertical views, while R is the round view and D is the diagonal view. The physical eye-box size varies from 1.5 mm to 8 mm depending on the lighting condition. In our system, the intrinsic eye-box size of the human eye is set to 3 mm, the same as in Ref. [24]; for Ref. [23], the intrinsic eye-box size is set to 2 mm. From the comparison in Table 2, our system has clear advantages in FOV, eye-box, and color display. Therefore, the proposed system has strong practicability.

Table 2. Comparison of the representative systems.

Since the system is designed with an off-axis structure to achieve augmented reality, aberrations appear in the captured image when a standard holographic algorithm is directly applied to generate the CGH [30]. The two-dimensional standard ISO Resolution Chart for Electronic Still Cameras is used to display the aberration-correction results. We adjust the camera to focus on the model “Pigsy” and calibrate the diffraction distance of the reconstructed image to focus at the same position on the left side of the field of view, as shown in Fig. 11(a). Since the projected image is two-dimensional, it can be seen that the off-axis system introduces aberration and causes defocus on the right side of the field of view. The holographic algorithm can independently control the diffraction distance of each target point; therefore, we can disperse the 2D image into a 2D point array and calibrate the diffraction distance of each point so that the projected image focuses on the same plane. With the pre-compensating hologram loaded on the SLM, clear imaging can be observed throughout the field, as shown in Fig. 11(b). Before pre-compensation, the resolution of the defocused image on the right side is 0.8 line/degree; after pre-compensation, the resolution of the displayed image is 2.5 line/degree.
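The per-point pre-compensation can be sketched by giving each image point its own diffraction distance in the point-source hologram. The linear z-calibration below is hypothetical; in the experiment the distances are calibrated by measurement:

```python
import numpy as np

def precompensated_cgh(image, z_map, pitch=3.74e-6, lam=532e-9):
    """Aberration pre-compensation sketch: treat the 2-D image as a point
    array and give every point its own calibrated diffraction distance
    (z_map) instead of one global distance."""
    ny, nx = image.shape
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    for (j, i), a in np.ndenumerate(image):
        if a == 0.0:
            continue  # skip dark pixels
        r2 = (X - xs[i]) ** 2 + (Y - ys[j]) ** 2
        field += a * np.exp(1j * np.pi * r2 / (lam * z_map[j, i]))
    return np.angle(field)

# Sparse test pattern; assumed distances vary 150 -> 250 mm left to right.
img = np.zeros((32, 32)); img[16, 4] = img[16, 28] = 1.0
z = np.repeat(np.linspace(0.15, 0.25, 32)[np.newaxis, :], 32, axis=0)
holo = precompensated_cgh(img, z)
print(holo.shape)
```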

Fig. 11. (a) Observed image before pre-compensation, (b) Observed image after pre-compensation.

It is known that holographic display can present 3D images with depth cues, and that FOV and depth of field are mutually restrictive in a holographic near-eye display. In geometric optics, the depth of field of a lens is tied to its numerical aperture; for our display system, a larger numerical aperture corresponds to a deeper depth of field, so the relative depth information becomes less obvious. In our system, the HOE is equivalent to a lens with a large numerical aperture; as the system FOV increases, the 3D viewing perception is weakened to a certain degree. In Fig. 12, we present a depth-cue test: the camera is focused on the letter “A” and the model “Pigsy” in Fig. 12(a), and on the letter “B” and the model “Sha Monk” in Fig. 12(b). The distance between Pigsy and Sha Monk is 150 mm; the virtual letters still give a 3D viewing perception, although it is not as strong as that of the real Pigsy and Sha Monk objects. Therefore, we are considering adding flat shading to increase the depth perception in future applications.

Fig. 12. (a) Focus on the letter “A” and model “Pigsy”, (b) Focus on the letter “B” and model “Sha Monk”.

The brightness uniformity at different viewing angles was tested using a 10×10 grid image loaded on the SLM; the grid image resolution is 1024×512. Figures 13(a) and 13(b) show the simulated grid and the captured image, respectively. The definition of uniformity is the same as in Refs. [31,32]. We normalized the intensity of the displayed image, setting the brightness at the center of the field of view to 1. The test results are given in Fig. 13, and Table 3 shows that the brightness uniformity is about 77.83% at the edge of the field of view, which meets the observation requirements.
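The center-normalization step can be sketched as follows. The cell intensities here are random placeholders standing in for the measured values; only the normalization to the center of the field mirrors the definition used above:

```python
import numpy as np

# Hypothetical measured mean intensities of the 10x10 grid cells.
grid = np.random.default_rng(0).uniform(0.7, 1.0, (10, 10))

# Normalize so the center of the field of view reads ~1.
normalized = grid / grid[4:6, 4:6].mean()

# Average the outer ring of cells to get the edge-of-field uniformity.
edge = np.concatenate([normalized[0], normalized[-1],
                       normalized[1:-1, 0], normalized[1:-1, -1]])
print(f"edge-of-field uniformity ~ {edge.mean():.2%}")
```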

Fig. 13. Test of brightness uniformity, (a) The simulated grid image, (b) The image captured in optical experiment.

Table 3. Brightness uniformity at different FOV position

6. Summary

We have proposed a see-through near-eye holographic display system with an 80° FOV and an enlarged eye-box without image crosstalk. The system provides a large FOV, a suitable eye-box, and a reasonable eye-relief distance. It can also redirect the highly off-axis light from the SLM to the on-axis position of the human eye while keeping the external scene visible. The HOE has the advantages of being lightweight and flexible, which allows its application in next-generation integrated displays. In the future, we will further enlarge the system eye-box by changing the HOE materials or using more grating layers, and increase the system FOV by improving the light-path design. It is expected that the proposed method and system can contribute to the development of see-through near-eye holographic display systems.

Funding

National Natural Science Foundation of China (61575024, 61975014); National Key Research and Development Program of China (2017YFB1002900).

Acknowledgments

Rubik’s Cube used by permission from Rubik’s Brand Ltd.

Disclosures

The authors declare no conflicts of interest.

References

1. I. Rabbi and S. Ullah, “A survey on augmented reality challenges and tracking,” Acta Graph. 24(1–2), 29–46 (2013).

2. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented reality technologies, system and applications,” Multimed. Tools Appl. 51(1), 341–377 (2011).

3. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018).

4. D. Dunn, C. Tippets, and K. Torell, “Wide field of view varifocal near-eye display using see-through deformable membrane mirrors,” IEEE Trans. Vis. Comput. Graph. 23(4), 1322–1331 (2017).

5. C. M. Bigler, P. A. Blanche, and K. Sarma, “Holographic waveguide heads-up display for longitudinal image magnification and pupil expansion,” Appl. Opt. 57(9), 2007–2013 (2018).

6. J. Yang, P. Twardowski, P. Gérard, and J. Fontaine, “Design of a large field-of-view see-through near to eye display with two geometrical waveguides,” Opt. Lett. 41(23), 5426–5429 (2016).

7. R. Shi, J. Liu, H. Zhao, Z. Wu, Y. Liu, Y. Hu, Y. Chen, J. Xie, and Y. Wang, “Chromatic dispersion correction in planar waveguide using one-layer volume holograms based on three-step exposure,” Appl. Opt. 51(20), 4703–4708 (2012).

8. Q. Gao, J. Liu, J. Han, and X. Li, “Monocular 3D see-through head-mounted display via complex amplitude modulation,” Opt. Express 24(15), 17372–17383 (2016).

9. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25(7), 8412–8424 (2017).

10. H. J. Yeom, H. J. Kim, S. B. Kim, H. J. Zhang, B. N. Li, Y. M. Ji, S. H. Kim, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).

11. Y. Mori, T. Fukuoka, and T. Nomura, “Speckle reduction in holographic projection by random pixel separation with time multiplexing,” Appl. Opt. 53(35), 8182–8188 (2014).

12. Y. Deng and D. Chu, “Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays,” Sci. Rep. 7(1), 5893 (2017).

13. J. Duan, J. Liu, B. Hao, T. Zhao, Q. Gao, and X. Duan, “Formulas of partially spatial coherent light and design algorithm for computer-generated holograms,” Opt. Express 26(17), 22284–22295 (2018).

14. Z. Zhang, J. Liu, Q. Gao, X. Duan, and X. Shi, “A full-color compact 3D see-through near-eye display system based on complex amplitude modulation,” Opt. Express 27(5), 7023–7035 (2019).

15. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25(7), 8412–8424 (2017).

16. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016).

17. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015).

18. G. Brooker, Modern Classical Optics (Oxford University, 2003), Chap. 11.

19. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017).

20. J. S. Lee, Y. K. Kim, and Y. H. Won, “Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display,” Opt. Express 26(2), 2149–2159 (2018).

21. J. S. Lee, Y. K. Kim, and Y. H. Won, “See-through display combined with holographic display and Maxwellian display using switchable holographic optical element based on liquid lens,” Opt. Express 26(15), 19341–19355 (2018).

22. G. Y. Lee, J. Y. Hong, and S. H. Hwang, “Metasurface eyepiece for augmented reality,” Nat. Commun. 9(1), 4562 (2018).

23. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2019).

24. S. B. Kim and J. H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eye box,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

25. J. Jeong, J. Lee, C. Yoo, S. Moon, and B. Lee, “Holographically customized optical combiner for eye-box extended near-eye display,” Opt. Express 27(26), 38006–38018 (2019). [CrossRef]  

26. G. Li, J. Jeong, D. Lee, J. Yeom, C. Jang, S. Lee, and B. Lee, “Space bandwidth product enhancement of holographic display using high-order diffraction guided by holographic optical element,” Opt. Express 23(26), 33170–33183 (2015). [CrossRef]  

27. H. Kogelnik, Coupled Wave Theory for Thick Hologram Gratings (Blackwell Publishing Ltd, 1969).

28. S. Joseph and W. Kelvin, “Generalized Bragg selectivity in volume holography,” Appl. Opt. 41(32), 6773–6785 (2002). [CrossRef]  

29. C. Gao, J. Liu, and X. Li, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015). [CrossRef]  

30. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). [CrossRef]  

31. C. Yu, Y. Peng, Q. Zhao, H. Li, and X. Liu, “Highly efficient waveguide display with space-variant volume holographic gratings,” Appl. Opt. 56(34), 9390–9397 (2017). [CrossRef]  

32. J. Han, J. Liu, X. Yao, and Y. Wang, “Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms,” Opt. Express 23(3), 3534–3549 (2015). [CrossRef]  

Supplementary Material (2)

Visualization 1: Dynamic 3D see-through display
Visualization 2: Dynamic 3D see-through colorful display



Figures (13)

Fig. 1. Schematic view of the wide field see-through holographic display.
Fig. 2. (a)-(b) Schematic diagram of eye-box expansion, (c) Reflection grating vector diagram.
Fig. 3. (a) Eye-box expansion geometry schematic, (b) Relationship of diffraction angle and angular bandwidth.
Fig. 4. Geometric relationship between SLM and reconstructed image.
Fig. 5. (a) Schematic diagram for the manufacture of HOE, (b) Grating samples.
Fig. 6. (a) Full-color HOE structure, (b) Display of the full-color HOE.
Fig. 7. FOV testing result of system.
Fig. 8. Observed images at different eye positions.
Fig. 9. Dynamic 3D see-through display (Visualization 1), (a)-(f) are the extracted frames.
Fig. 10. Dynamic 3D see-through colorful display (Visualization 2), (a)-(f) are the extracted frames.
Fig. 11. (a) Observed image before pre-compensation, (b) Observed image after pre-compensation.
Fig. 12. (a) Focus on the letter “A” and model “Pigsy”, (b) Focus on the letter “B” and model “Sha Monk”.
Fig. 13. Test of brightness uniformity, (a) The simulated grid image, (b) The image captured in optical experiment.

Tables (3)

Table 1. Parameters of the system.

Table 2. Comparison of the representative systems.

Table 3. Brightness uniformity at different FOV positions.

Equations (6)

\begin{align}
\delta &= \Delta\theta \, K \sin(\phi - \theta_1), \\
\nu &= \frac{\pi \, \Delta n \, d}{\lambda \, (\cos\theta_r \cos\theta_s)^{1/2}}, \\
\xi &= \frac{\delta \, d}{2\cos\theta_s} = \frac{\Delta\theta \, K \sin(\phi - \theta_s)\, d}{2\cos\theta_s}, \\
\eta &= \frac{\sinh^2 (\nu^2 - \xi^2)^{1/2}}{\sinh^2 (\nu^2 - \xi^2)^{1/2} + \left[1 - (\xi/\nu)^2\right]}, \\
d_{DC} &= d_{DA} + d_{AC} = d_{OA}\cos\theta_0 \left(\tan\!\left(\theta_0 + \frac{\theta_d}{2}\right) - \tan\!\left(\theta_0 - \frac{\theta_d}{2}\right)\right), \\
d_{AE} &= d_{AB}\cos\!\left(\frac{\pi}{2} - \theta_r\right) + d_{BE}\sqrt{1 - \left[\frac{\sin(\theta_s - \pi)}{n}\right]^2}.
\end{align}
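Taken together, the first four equations (Kogelnik coupled-wave theory for a lossless reflection volume grating) let one evaluate the diffraction efficiency η as a function of angular detuning Δθ, which is how the angular bandwidth of the HOE in Fig. 3(b) is characterized. The following Python sketch implements these formulas; all numerical parameter values (grating thickness, index modulation, period, and recording angles) are illustrative assumptions, not the paper's values:

```python
import numpy as np

def reflection_efficiency(nu, xi):
    """Kogelnik efficiency of a lossless reflection volume grating:
    eta = sinh^2(s) / (sinh^2(s) + 1 - (xi/nu)^2), with s = sqrt(nu^2 - xi^2).
    The complex sqrt handles the far-off-Bragg regime xi^2 > nu^2, where
    sinh^2 analytically continues to -sin^2."""
    s = np.sqrt(complex(nu * nu - xi * xi))
    sh2 = np.sinh(s) ** 2
    return (sh2 / (sh2 + 1.0 - (xi / nu) ** 2)).real

def detuning_xi(delta_theta, K, phi, theta_s, d):
    """Off-Bragg parameter xi from angular detuning delta_theta (rad):
    delta = delta_theta * K * sin(phi - theta_s), xi = delta*d / (2 cos theta_s)."""
    delta = delta_theta * K * np.sin(phi - theta_s)
    return delta * d / (2.0 * np.cos(theta_s))

# Illustrative (assumed) parameters: 15-um-thick grating, Delta_n = 0.02, 532 nm.
wavelength = 532e-9
d, delta_n = 15e-6, 0.02
theta_r, theta_s = np.deg2rad(15.0), np.deg2rad(165.0)  # reference / signal angles
phi = np.deg2rad(90.0)                                  # grating slant angle
K = 2 * np.pi / 0.35e-6                                 # grating vector, period 0.35 um

# Coupling strength nu (Eq. for nu above); abs() because cos(theta_s) < 0
# for a reflection geometry.
nu = np.pi * delta_n * d / (wavelength * np.sqrt(abs(np.cos(theta_r) * np.cos(theta_s))))

eta_bragg = reflection_efficiency(nu, 0.0)  # on-Bragg: reduces to tanh(nu)^2
eta_off = reflection_efficiency(nu, detuning_xi(np.deg2rad(1.0), K, phi, theta_s, d))
print(f"on-Bragg eta = {eta_bragg:.3f}, 1 deg off-Bragg eta = {eta_off:.3f}")
```

Sweeping `delta_theta` over the incident field angles reproduces the efficiency-versus-angle curve used to assess the angular bandwidth of each non-uniform grating.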