
3D displays in augmented and virtual realities with holographic optical elements [Invited]


Abstract

Three-dimensional (3D) displays have been vastly investigated in the past few decades. The recent development of augmented reality (AR) and virtual reality (VR) has further demanded compressing the 3D display system into a compact platform, such as a wearable near-eye display. Holographic optical elements (HOEs) have received widespread attention owing to their light weight, thin formfactor, and low cost, and thus have been widely deployed in various 3D display systems. In this review article, we first describe the working principles of some 3D techniques used in AR and VR headsets, then present 3D display systems employing HOEs, and finally analyze how HOEs influence the system design and performance.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the advancement of electronic devices and computer science, augmented reality (AR) and virtual reality (VR) displays are gaining increasing attention because they offer a deeper level of human-computer interaction. VR provides an immersive virtual digital world with vivid visual and audible experiences, while AR overlays digital information on the physical environment. Real-time information acquisition and interactive features enable AR displays to serve as a more versatile tool for daily use.

The demand for a realistic visual experience promotes the transformation of current two-dimensional (2D) images on flat screens into three-dimensional (3D) scenes. Dating back to 1838, the first 3D technology, named stereoscopy, was proposed by Charles Wheatstone. Binocular depth perception was provided by two mirrors reflecting two offset figures to the observer’s left and right eyes. Later, in the early 1900s, 3D methods accommodating more depth cues were implemented with a parallax barrier [1], integral photography [2], and lenticular lenses [3]. In 1948, Dennis Gabor discovered the principle of holography [4], which was originally intended to improve the resolving power of electron microscopy. It was not until the invention of the laser in 1960 that optical holography truly began; practical optical holograms were subsequently achieved in 1962-1964 [5,6]. In the past 30 years, the growth of flat panel displays, such as liquid crystal displays (LCDs), organic light-emitting diode (OLED) displays, and liquid-crystal-on-silicon (LCOS) displays, has accelerated the development of 3D technologies and gradually shaped the optical architecture into wearable devices.

At present, VR technology keeps evolving rapidly. Some VR headsets, like Oculus Quest and Sony’s PlayStation VR, are commercially available at an affordable price. AR headsets, on the other hand, are mostly limited to prosumers due to their formidable cost. Unlike a VR device, which looks like a black box blocking the environment, AR displays assemble all components into a glasses-like form, such as Microsoft HoloLens 2, Magic Leap 1, DigiLens, Lumus Maximus, etc. Most current AR/VR products adopt stereoscopy to provide 3D graphics, because it is a mature and easily implemented technology. However, it is not an optimal 3D approach because of the notable vergence-accommodation conflict (VAC) issue [7]. Magic Leap 1 alleviates the VAC issue by rendering two distinct focal planes to extend the accommodation distance, which is regarded as a multiplane display. Other 3D displays, such as multi-view displays and holographic displays, are still at the prototype level, hindered by the inadequate resolution of microdisplays, unsatisfactory imaging performance, cumbersome formfactor, large computational burden, etc. Among these factors, formfactor is perhaps the most prominent one. The conservation of etendue in a traditional optical system imposes a tradeoff between formfactor and display performance parameters like field of view (FOV). Even if the display pixel density and computation power can be improved with advanced semiconductor technology, the optical design will ultimately hit the wall of etendue conservation, which prohibits further shrinkage of the formfactor. Fortunately, novel flat optics like holographic optical elements (HOEs) provide possible solutions to this issue.

In the 1960s, HOEs based on intensity holography, with their unique wavefront-regeneration ability, ushered in the era of flat optics. In particular, photopolymer-based HOEs (PPHOEs) have found widespread applications because of their low cost, low scattering, and simplicity of fabrication. Since the 2000s, liquid crystal (LC) Pancharatnam-Berry optical elements (PBOEs) based on polarization holography have also emerged as a novel type of HOE. Both PPHOEs and LC-PBOEs rely on the principle of diffraction, which differs remarkably from refractive and reflective geometric optical elements. HOEs, featuring light weight, easy fabrication, low cost, and high efficiency, have been implemented in 3D displays to substitute for bulky conventional components while simultaneously offering more degrees of freedom in the design [8,9]. In addition, optical metasurfaces, comprising sub-wavelength nanostructures, have drawn great attention in recent years owing to their ability of arbitrary wavefront manipulation [10–12]. Some 3D AR/VR systems employing metalenses or metagratings have been reported [13–16]. However, due to the low optical efficiency and high fabrication cost, it is still challenging to realize large-size metasurfaces under current manufacturing conditions.

This review paper mainly addresses the 3D display methods in AR and VR, with special emphasis on 3D optical architectures using HOEs. The narrative begins with an overview of HOEs, including intensity HOEs and LC-PBOEs. Next, we will introduce the basic principles and challenges of ray-based and wavefront-based 3D displays. Then, we will delve into the 3D architectures implemented by HOEs and elaborate the benefits and methods of further improvement. Finally, we will discuss some practical issues when 3D display technologies meet different architectures in AR and VR devices.

2. Overview of holographic optical elements

HOEs exhibit unique properties in arbitrary wavefront recording and reconstruction [17,18]. Thanks to their various functionalities, e.g., gratings [19,20], lenses [21,22], and diffusers [23], combined with a flat optical formfactor, HOEs have been widely used in 3D display systems. Generally, HOEs can be divided into intensity HOEs and polarization HOEs, depending on how the recording medium responds to light.

2.1 Intensity HOE

In intensity HOEs, the recording materials are sensitive to the light intensity of the interfering field. In terms of modulation type, holograms can be divided into amplitude holograms and phase holograms. In amplitude holograms, a commonly used material is silver halide emulsion [18]. The recording process is analogous to taking a photo with film. The silver halide forms nanoscale silver particles in the medium after absorbing the light energy, which can then be developed into permanent patterns that realize transmittance modulation, as Fig. 1(a) depicts. Phase holograms are based on the modulation of a material’s refractive index [24]. The holographic materials can be silver halide emulsions, dichromated gelatin, photoresist, photopolymer, holographic polymer-dispersed liquid crystal (HPDLC), and photo-thermal-refractive (PTR) glasses. The refractive index modulation of dichromated gelatin can reach 0.08 [25]. HPDLC contains LC that is dynamically switchable, and it has been adopted by DigiLens in waveguides [26]. PTR glasses have the advantage of high tolerance to high-power laser beams, albeit with extremely small index modulation [27]. Photopolymers have found widespread applications benefiting from their high resolution, low cost, low scattering, and simple fabrication. The recording process of photopolymers is based on the light-intensity-dependent polymerization rate and the diffusion of monomers. As shown in Fig. 1(b), in the high-intensity regions, the monomers are polymerized by absorbing photons. The consumption of monomers causes monomers to diffuse from the dark regions to the bright regions, resulting in an increased density and refractive index in the bright areas. The intensity information of the interference field, recorded in the material as refractive index modulation, can then reproduce the original wavefront.
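
This recording dynamic can be made concrete with a toy numerical model. The Python sketch below is our own illustration, not a model from the cited works: it evolves a one-dimensional monomer concentration under a sinusoidal interference pattern, with the polymerization rate proportional to the local intensity and the monomer diffusing down its concentration gradient; all parameters are arbitrary normalized values.

import numpy as np

Lam = 1.0                                       # grating period (normalized)
x = np.linspace(0, 2 * Lam, 200, endpoint=False)
dx = x[1] - x[0]
I = 0.5 * (1 + np.cos(2 * np.pi * x / Lam))     # interference intensity
m = np.ones_like(x)                             # monomer concentration
p = np.zeros_like(x)                            # accumulated polymer density
D, k, dt = 0.02, 1.0, 1e-3                      # diffusivity, rate constant, time step
for _ in range(20000):
    rate = k * I * m                            # intensity-driven polymerization
    lap = (np.roll(m, 1) - 2 * m + np.roll(m, -1)) / dx**2
    m += dt * (D * lap - rate)                  # monomer diffuses and is consumed
    p += dt * rate                              # polymer (hence index) builds up
print("polymer contrast (a.u.):", p.max() - p.min())  # polymer peaks in bright fringes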

Fig. 1. Conceptual diagrams of (a) an amplitude hologram based on silver halide emulsion; (b) a phase hologram based on photopolymers; (c) a photoalignment pattern; (d) an LC-PB grating.

2.2 Polarization HOE

Unlike intensity HOEs, which record the intensity information of the interfering beams, polarization HOEs record the polarization state of the electric field. Currently, the widely used method in polarization holography is photoalignment [28], and the basic principle is shown in Fig. 1(c). When two circularly polarized beams with opposite handedness interfere, the total field on the plane is linearly polarized with an azimuth that rotates periodically along the x-axis. A photoalignment layer exposed to this field records the local polarization direction, and such a patterned photoalignment layer is later used to align the LC material placed on top. The LC in contact with the patterned photoalignment material then replicates the pattern and forms functional LC-PBOEs (Fig. 1(d)). The key parameter distinguishing the transmissive and reflective properties is the grating period in the z direction, which determines the direction of the grating vector. The longitudinal grating period is controlled by the helicity of the employed chiral dopant: a chiral dopant with high chirality produces high helicity, and vice versa. High helicity results in a small longitudinal grating period and therefore forms reflective LC-PBOEs, also known as cholesteric liquid crystal (CLC) optical elements. To establish Bragg reflection for achieving high reflectivity, the CLC layer should have at least ten pitches. Conversely, nematic LC or CLC with low chirality contributes to an infinite or large longitudinal grating period and thus forms transmissive PBOEs. In view of the different patterns and materials, LC-PBOEs include but are not limited to PB gratings/lenses [29–31], polarization volume gratings (PVGs) [32,33], and on- and off-axis CLC lenses [21,22].
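
The geometric phase underlying LC-PBOEs can be verified with a few lines of Jones calculus. The Python sketch below is our own sanity check, assuming one common circular-polarization convention and omitting the global phase of the retarder: an LC half-wave layer whose azimuth follows φ(x) = πx/Λ flips the handedness of circularly polarized light and imparts the phase −2φ(x), i.e., a linear phase ramp that acts as a PB grating.

import numpy as np

def halfwave(phi):
    # Jones matrix of a half-wave retarder with its axis at angle phi
    # (a global factor of -i is omitted; only the relative phase matters)
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[c, s], [s, -c]])

rcp = np.array([1, -1j]) / np.sqrt(2)   # circular states, one convention
lcp = np.array([1,  1j]) / np.sqrt(2)
Lam = 1.0                               # grating period (normalized)
for x in np.linspace(0, Lam, 5):
    phi = np.pi * x / Lam               # in-plane LC azimuth at position x
    out = halfwave(phi) @ rcp           # output is LCP times exp(-i*2*phi)
    phase = np.angle(out[0] / lcp[0])
    print(f"x = {x:.2f}: phase = {phase:+.3f} rad, -2*phi = {-2*phi:+.3f} rad (mod 2*pi)")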

2.3 Similarity and difference

Both PPHOEs and LC-PBOEs can form transmissive and reflective components. For PPHOEs, whether the component is transmissive or reflective depends on whether the reference wave and signal wave are on the same or opposite sides of the recording material. For LC-PBOEs, it is determined by the employed LC material, as mentioned earlier. Both PPHOEs and LC-PBOEs can achieve high efficiency and good see-through quality, which are important in near-eye displays (NEDs).

There are three major disparities between PPHOEs and LC-PBOEs, regarding the spectral/angular bandwidth, the polarization response to the incident light, and the multiplexing ability. Due to the different refractive index modulations (δn) of the materials, LC-PBOEs (δn from ∼0.06 to ∼0.4) can exhibit a larger angular and spectral bandwidth than PPHOEs (δn < 0.07) [34]. To visualize the differences in angular and spectral bandwidths caused by different δn, we numerically simulate the first-order diffraction efficiency of CLC optical elements and reflective PPHOEs based on rigorous coupled-wave analysis [35]. In our simulation model, the grating configuration is the same for the two HOEs. The δn of the CLC and the photopolymer is set to 0.2 and 0.03, respectively, and the film thickness to 3 µm and 20 µm, respectively. Both input and output media are glass substrates (n = 1.58). As shown in Fig. 2, both the angular and spectral bandwidths are proportional to the index modulation. Assuming only two plane waves exist inside and outside the grating region, Kogelnik’s coupled-wave theory can analytically predict the first-order diffraction efficiency of the output light [36].
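
For readers who want to reproduce the trend of Fig. 2(b) without a full RCWA solver, the sketch below evaluates Kogelnik’s analytic result [36] for a lossless, unslanted reflection grating at normal incidence. Conventions differ between references (here the coupling amplitude is assumed to be δn/2), so the absolute numbers are illustrative; the qualitative conclusion, that the larger δn of the CLC yields a much wider spectral bandwidth despite the thinner film, is the point.

import numpy as np

def reflection_eta(lam, lam_b, n_avg, n1, d):
    # Kogelnik first-order efficiency of a lossless reflection grating
    # at normal incidence; lam, lam_b, and d are in microns
    nu = np.pi * n1 * d / lam                               # coupling strength
    xi = 2 * np.pi * n_avg * d * (lam_b - lam) / lam_b**2   # detuning
    s = np.sqrt(nu**2 - xi**2 + 0j)                         # complex-safe off-Bragg
    return (1.0 / (1.0 + (1.0 - xi**2 / nu**2) / np.sinh(s)**2)).real

lam = np.linspace(0.50, 0.60, 501)                # wavelength (um)
for name, dn, d in [("CLC", 0.20, 3.0), ("PPHOE", 0.03, 20.0)]:
    eta = reflection_eta(lam, 0.55, 1.58, dn / 2, d)
    band = lam[eta >= eta.max() / 2]              # crude FWHM of the main lobe
    print(f"{name}: peak = {eta.max():.2f}, FWHM ~ {1e3*(band[-1]-band[0]):.0f} nm")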

Fig. 2. The simulated first-order diffraction efficiency of CLC optical elements and PPHOEs as a function of (a) incident angle and (b) wavelength.

Also, LC-PBOEs are more sensitive to the polarization of the incident light than PPHOEs. Transmissive and reflective LC-PBOEs follow different polarization-selectivity rules. For transmissive ones, right-handed circularly polarized (RCP) light and left-handed circularly polarized (LCP) light experience opposite phase profiles, which translate into opposite optical powers in PB lenses or opposite diffraction directions in PB gratings. As for PVGs and CLC lenses (reflection-type LC-PBOEs), they obey the CLC polarization-selectivity rule: they respond to only one circular polarization state and are transparent to the other. An important feature of PPHOEs is the capability of recording multiple holograms into one film. For LC-PBOEs, multiple LC layers are required to realize the function of multiple holograms, because each LC layer serves as only one hologram.

In most situations, PPHOEs and LC-PBOEs can be used interchangeably. But as mentioned above, there are still some disparities between them. In some cases, their unique properties can lead to totally different system design protocols, as shall be discussed later.

3. Overview of 3D displays

The acquisition of depth perception by human eyes is essential to the reproduction of 3D virtual objects. In general, human beings rely on two types of depth cues, psychological and physiological. The psychological cues, such as occlusion, overlapping, and linear perspective, are dominant in the depth perception of 2D images. The physiological cues include binocular disparity, convergence, accommodation, and motion parallax [37]. When the observer looks at an object in the real world [Fig. 3(a)], both the focus (accommodation) and the convergence are on the real object, which is the natural way of observing the world. A 3D display is defined as one providing at least one physiological depth cue. The conventional technique exploits stereopsis, creating 3D vision for users through binocular parallax. As shown in Fig. 3(b), the accommodation distance is fixed by the position of the focal plane, while the perceived vergence depth varies with the disparity of the two images delivered separately to the left and right eyes of the observer. Consequently, the vergence and accommodation distances fail to match, resulting in visual fatigue, discomfort, and even nausea. To meet the demand for correct and natural focus cues, several 3D systems have been proposed, which can be categorized mainly into plane-based displays, view-based displays, and holographic displays. In this section, we review these 3D technologies and their implementations in AR and VR displays, especially near-eye displays.

Fig. 3. The depth cues in (a) the real world; (b) a stereoscopic display, where VAC occurs.

3.1 Plane-based 3D display

Plane-based 3D display is a general classification referring to systems that utilize single or multiple planes to construct a 3D scene. The stereoscopic display has only one fixed plane, thus suffering from the VAC issue. The varifocal display also generates one focal plane at a time, but the focal depth varies with the vergence distance, which can be detected by an eye tracker [38]. The dynamic adjustment of the focal plane therefore eliminates the VAC issue. As for the multi-plane approach, the 3D graphics can be realized by multiplicatively or additively modulating the information on the multiple planes. A typical implementation of multiplicative modulation uses a backlight and multilayer LCDs [39]. However, it is subject to several limitations in practical applications, especially for AR displays, including poor image resolution, high intensity loss, and severe diffraction effects, all originating from the multilayer structure.

In contrast, the additive modulation methods of multiplane image information are widely applied to AR/VR displays. The specific rendering technique can either decompose the virtual objects onto each plane, by a simple direct blending method or a more complex iterative optimization method, or approximate an additive light field [40–43]. These rendering methods share the same optical structure, where multiple depth scenes are displayed “simultaneously” at a few predetermined spatial focal depths [44]. Taking advantage of the persistence of vision in the human visual system, the multiple focal depths can be shown either physically at the same time or in a time-multiplexing manner. The users see all rendered focal planes and experience a natural focus-and-blur effect when adjusting their eye focus. The occlusion cue should be carefully considered in multiplane approaches, compared to view-based displays and holographic displays, which have better occlusion effects. For a fixed viewing position, it is easy to provide the occlusion effect by optimizing the image contents shown on the focal planes; to render the occlusion cue for multiple viewpoints, the rendering process needs to be well developed [39,45].
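
As one concrete instance, the Python sketch below implements the simple direct blending method in diopter space, under the common assumption (ours, not necessarily that of [40–43]) that a pixel at dioptric depth D is split between the two nearest focal planes with linearly interpolated luminance weights; the plane depths are illustrative.

import numpy as np

planes = np.array([3.0, 2.0, 1.0, 0.0])    # focal-plane depths in diopters

def blend_weights(D):
    # Per-plane luminance weights for a pixel at dioptric depth D
    w = np.zeros_like(planes)
    D = np.clip(D, planes.min(), planes.max())
    i = np.searchsorted(-planes, -D)       # planes are sorted descending
    if planes[i] == D:
        w[i] = 1.0                         # exactly on a focal plane
    else:
        near, far = planes[i - 1], planes[i]
        w[i - 1] = (D - far) / (near - far)  # linear interpolation in diopters
        w[i] = 1.0 - w[i - 1]
    return w

print(blend_weights(1.25))                 # -> [0.   0.25 0.75 0.  ]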

In near-eye displays, the image distance presented to the user can be tuned from ∼30 cm to ∼3 m, which can be achieved either by varying the object distance (distance based) or the system optical power (power based) [46,47]. For the distance-based type, one can passively create multiple object planes or actively move the location of the object plane or the lens. Tunable lenses are the essential components in power-based varifocal/multifocal displays, for which a collection of options is available, such as liquid lenses, Alvarez lenses, LC lenses, and LC PB lenses [47–54]. The detailed implementations will be discussed later.
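
A thin-lens sanity check (our illustration, with an assumed eyepiece power) shows how little optical power a tunable element needs in a power-based design: with the microdisplay at the front focal plane of the eyepiece, the eyepiece alone images it to infinity, and a thin tunable lens of power Pt in contact shifts the output vergence to exactly Pt, so covering ∼30 cm to ∼3 m requires only about a 3-diopter swing.

P_eye = 25.0                       # eyepiece power in diopters (f = 40 mm), assumed
s = -1.0 / P_eye                   # display sits at the front focal plane (m)
for Pt in (-0.33, -1.0, -2.0, -3.33):          # tunable-lens power (D)
    V_out = 1.0 / s + (P_eye + Pt)             # vergence equation: V' = V + P
    print(f"Pt = {Pt:+.2f} D -> virtual image at {abs(1.0 / V_out):.2f} m")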

3.2 View-based 3D display

View-based 3D displays, as the term suggests, provide multiple views to each eye of the observer to deliver a 3D light field with correct focus cues. The generation of multi-view information usually comes at the expense of the spatial resolution or frame rate of the display device. Integral imaging (InI) exploits periodic optical structures, like lens or pinhole arrays, to convert the spatial information of light from 2D elemental images (EIs) into a 3D light field with both spatial and angular information [55–57]. In this process, spatial resolution is traded to provide the multi-view channels (angular information). With advanced electronic devices and flat panels, other approaches have been proposed that adopt a fast scanning mirror [58] or a high-frame-rate spatial light modulator (SLM) [59] to temporally create multiple views, where a high-speed display and viewpoint-shifting equipment are vital to the multi-view construction.

A simple InI system consists of a display panel and a 2D microlens array (MLA). The gap between the panel and the MLA determines the magnification factor, which in most cases is equal to the view number. The display is segmented into multiple EIs, which should be well aligned with the elemental lenses (ELs) of the MLA. Each integrated image point is located at the intersection of multiple light rays from the corresponding pixels of different EIs. The number of rays intersecting at the integrated image point determines the view number. A larger view number contributes to constructing a smoother light field but degrades the imaging resolution. In short, an InI system with a fixed display pixel number and size manifests a tradeoff between viewing resolution and view number. The maximum viewing angle is governed by the size of the EL and the gap between the display panel and the MLA [60]. The overlapping region of the viewing angles of different EIs determines the eye-box size in a near-eye display. In addition, the MLA with a fixed focal length limits the depth of focus (DOF) of the system. The depth range of the reconstructed 3D images is restricted near the central depth plane, namely the image plane of the display panel after the MLA. Although many studies have shown how to enhance the viewing resolution [23,61], enlarge the viewing angle [60,62,63], and extend the DOF [64–67], an approach capable of addressing all these issues simultaneously has not been demonstrated.
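
The resolution/view-number/viewing-angle tradeoff can be put into numbers with a back-of-the-envelope sketch; the panel and MLA parameters below are made up for illustration.

import math

pixel_pitch = 8e-6       # display pixel pitch (m), illustrative
panel_px = 3840          # panel pixels per dimension
el_pitch = 400e-6        # elemental-lens pitch (m)
gap = 1.2e-3             # panel-to-MLA gap (m)

views = round(el_pitch / pixel_pitch)     # pixels behind each EL = views per dim
res_3d = panel_px // views                # 3D spatial resolution per dimension
theta = 2 * math.degrees(math.atan(el_pitch / (2 * gap)))  # viewing angle [60]
print(f"views/dim = {views}, 3D resolution = {res_3d} px/dim, viewing angle = {theta:.1f} deg")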

The other type of view-based display, exchanging view number for the frame rate of the system, is referred to here as a ‘time-multiplexing’ multi-view display. Compared to InI, the ‘time-multiplexing’ multi-view system relies on a high-speed device to form multiple viewpoints within the pupil of the observer, thus preserving the imaging resolution of each viewpoint. Jang et al. [58] proposed a retinal scanning light field display that uses a fast scanning mirror coupled with a laser beam scanning projector to directly project the light field onto the retina. The proposed scanning method only provides horizontal parallax information. Ueno et al. [59] employed a high-speed ferroelectric LCOS coupled with a 2D light source array to realize a full-parallax multi-view display. In short, the idea of sacrificing frame rate in exchange for spatial resolution is feasible. But from a practical point of view, for a high-speed binary device, like a digital micromirror device or a ferroelectric LCOS, achieving multiple views and 8-bit gray levels requires a driving frequency as fast as 20 kHz. A detailed power consumption comparison between different light engines has been reported in [9].

3.3 Holographic display

Unlike the abovementioned approaches, holographic display is characterized as a true 3D technique because it reproduces both the amplitude and phase information of the light emanating from 3D objects [68–71]. Traditional optical holography utilizes light interference to record the wavefront from real objects on a photographic film and later projects the reference beam to reproduce the 3D virtual objects. In contrast to optical holography, computer-generated holography possesses flexible wavefront control and real-time image updating, and has thus received extensive attention in AR/VR displays [72]. Abandoning the real object and photosensitive material in the recording process, computer-generated holography digitally computes the object wavefronts and encodes them into a computer-generated hologram (CGH), which is then loaded onto an SLM illuminated by a (partially) coherent light. The digital computation also enables the reconstruction of non-existent objects.
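
A minimal point-cloud CGH illustrates the computation: spherical wavefronts from each object point are superposed at the SLM plane, and only the phase is kept for a phase-only SLM. This is the textbook superposition approach, not the specific algorithm of any cited work, and the parameters are illustrative.

import numpy as np

lam = 532e-9                         # wavelength (m)
k = 2 * np.pi / lam
pitch = 3.74e-6                      # SLM pixel pitch (m)
N = 1024                             # SLM pixels per dimension (cropped)
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
# object points: (x, y, z, amplitude), z measured from the SLM plane
points = [(0.0, 0.0, 0.10, 1.0), (0.5e-3, -0.3e-3, 0.12, 0.8)]
field = np.zeros((N, N), dtype=complex)
for (px, py, pz, a) in points:
    r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
    field += a * np.exp(1j * k * r) / r      # spherical wave at the SLM
cgh = np.mod(np.angle(field), 2 * np.pi)     # phase-only hologram in [0, 2*pi)
print(cgh.shape)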

Although holography is crowned with the name of the ultimate 3D display, there is still a long way to go toward actual applications in commercial AR/VR products. The main obstacles include the degraded image resolution caused by laser speckle, the tradeoff between FOV and eye-box size posed by the limited spatial bandwidth product (SBP) of an SLM [73,74], and the high demand for fast, real-time calculation of CGHs [75]. There exist effective solutions to suppress the speckle artifacts [76,77]. Additionally, the laser can be substituted by a partially coherent light source, such as an LED, a superluminescent LED, or a microLED [78]; the coherence properties of the illumination will affect the reconstructed image quality [79]. Before going deeper into the solutions for enhancing the SBP, we briefly discuss how the SBP dominates the FOV and eye-box size. The SBP is determined by the refresh rate and pixel number of an SLM, where the number of pixels is the SLM panel size divided by the pixel pitch. The theoretical viewing area is related to the overlapping region of the principal diffraction order. The maximum diffraction angle is determined by the pixel pitch of the SLM, which is typically a few microns. The product of the diffraction angle and the SLM panel size determines the system etendue, which in turn poses an inherent tradeoff between eye-box size ($E$) and FOV ($\theta_{FOV}$) [80]. According to the Lagrange invariant, the relation can be expressed as:

$$\theta_{FOV} = \frac{2\theta_d L}{E},$$
$$\theta_d = \frac{\lambda}{2p},$$
where $L$ is the SLM panel size, $\theta_d$ is the maximum diffraction angle, $\lambda$ is the wavelength of the illuminating light, and $p$ is the pixel pitch of the SLM. For a commercially available 4K SLM ($p = 3.74\,\mu m$), ideally, it can provide a horizontal FOV of ∼40° with a ∼3 mm eye-box. The 40° FOV is merely acceptable in AR displays, but the static eye-box is too small to accommodate eye movement and eyeball rotation. Extensive efforts are being devoted to developing high-frame-rate SLMs with sub-micron pixel pitches, and to leveraging market-available SLMs to expand the FOV or eye-box size through new optical designs.
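
Plugging the paper’s numbers into these equations reproduces the quoted figures; λ = 550 nm is our assumed green wavelength.

import math

lam = 550e-9                 # wavelength (m), assumed
p = 3.74e-6                  # SLM pixel pitch (m)
n_px = 3840                  # horizontal pixel count of a 4K SLM
E = 3e-3                     # eye-box size (m)
theta_d = lam / (2 * p)      # maximum diffraction half-angle (rad)
L = n_px * p                 # SLM panel width (m)
fov = 2 * theta_d * L / E    # small-angle form of the first equation
print(f"theta_d = {math.degrees(theta_d):.2f} deg, L = {L*1e3:.1f} mm, FOV = {math.degrees(fov):.1f} deg")
# -> theta_d = 4.21 deg, L = 14.4 mm, FOV = 40.3 deg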

4. Optical architectures of 3D display enabled by HOE

In the early stage of 3D system designs, the imaging or guiding optical elements (e.g., lenses, lens arrays, mirrors, beam splitters (BS)) mainly behave in a refractive or reflective manner, hence exhibiting limited functions and resulting in a cumbersome system. On the contrary, HOEs are not only flat and lightweight, but can also be integrated with multiple optical functions, such as imaging, guiding, see-through ability, etc. Both PPHOEs and LC-PBOEs have been widely investigated and utilized as imaging and guiding combiners for realizing lightweight, compact, and high-quality 3D displays. In this section, we mainly discuss the 3D systems using HOEs. As mentioned above, each 3D technology has distinct principles and challenges, leading to different requirements of HOE functionalities. We will first introduce the working principle of each system and discuss the pros and cons. Then the potential challenges will be pointed out to serve as future perspectives.

4.1 Multiplane displays with HOE

Rolland et al. [44] first proposed a multifocal display by stacking 14 transparent displays with even interplane spacing to create virtual focal planes ranging from 0 to 2 diopters. Although this design is hard to use practically in NEDs, it establishes the design guidelines for general multifocal displays, such as the number of focal planes, the focal plane separation, and the resolution requirement. Later, several adapted distance-based multifocal designs were proposed, utilizing the diffusive property of optical devices to create focal planes. Diffusive optical devices made with polymer-dispersed liquid crystals (PDLC) or reversed-mode polymer-stabilized liquid crystal (PSLC) can be electrically switched between a scattering state and a transparent state, which enables multifocal display designs by spatially stacking these electrically controllable diffusive elements [81–83]. Similarly, a dual-focal display was proposed by Lee et al. [43], where two angularly selective PPHOE diffusers serve as two transparent projection screens, as shown in Fig. 4(a). The HOE diffuses the probe wave under the Bragg-matched condition, while transmitting light that does not satisfy the Bragg condition. Also, Chen et al. [84] placed two CLC films, which reflect circular polarizations of opposite handedness, with a certain separation after the beam splitter in a birdbath setup; the incident polarization is modulated by a polarization rotator (PR, e.g., a combination of a quarter-wave plate and a twisted-nematic LC cell). In Fig. 4(b), the RCP and LCP light experience different optical paths, which in turn generates two depths.

Fig. 4. Schematic diagrams of multi-plane displays. The distance-based method enabled by (a) diffusive PPHOEs, and (b) CLC films. The power-based method implemented by (c) switchable PB lenses, and (d) passive CLC lenses. EYEP: eyepiece lens; R-CLC: right-handed CLC; L-CLC: left-handed CLC.

However, the distance-based design occupies too much space and is therefore not a favorable choice for NEDs. To pursue a more compact formfactor, LC-PBOEs, such as PB lenses and CLC lenses, can be used in power-based multifocal displays. PB lenses can be actively switched by an applied voltage [53,85] or passively modulated by altering the incident light polarization [52,54]. Figure 4(c) shows a four-focal-plane display generated by assigning display images to the different focal planes provided by two stacked, actively switchable PB lenses [85]. Each switchable lens, with sub-millisecond response time, provides two opposite diopters, so two PB lenses can form four focal planes. CLC lenses can simultaneously perform imaging and optical see-through functions and are hence primarily used in AR displays. Li et al. [54] proposed to stack two CLC lenses with different optical powers, where each lens responds to a different circular polarization state. In Fig. 4(d), a PR is synchronized with the display panel, and the display content at each depth is updated in each frame.

The main concern for diffractive lenses like PB lenses or CLC lenses in full-color plane-based 3D displays is their chromatic aberration, which originates from their diffractive nature. In practical plane-based near-eye 3D displays, the diffractive lenses are always paired with a refractive eyepiece lens, which is the major imaging element for generating magnified images. Therefore, by controlling the optical powers of the refractive lens and the diffractive lenses, the chromatic aberration can be compensated to some degree thanks to their opposite dispersive behaviors [86,87]. However, this approach only works for a single focal depth. To achieve correction for multiple depths, a possible solution is to digitally pre-compensate the rendered images.
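
A hedged worked example makes the compensation quantitative. For two thin lenses in contact, the achromatic condition is P1/V1 + P2/V2 = 0 with total power P = P1 + P2; taking a typical crown-glass Abbe number V1 ≈ 64 (our assumption) and Vd ≈ -3.45 for the diffractive element:

V_refr, V_diff = 64.0, -3.45                     # Abbe numbers (refractive one assumed)
P_total = 20.0                                   # total power in diopters, illustrative
P_refr = P_total * V_refr / (V_refr - V_diff)    # solves the two conditions above
P_diff = P_total * V_diff / (V_diff - V_refr)
print(f"refractive: {P_refr:.2f} D, diffractive: {P_diff:.2f} D")
# -> the diffractive lens carries only ~5% of the total power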

4.2 View-based 3D display with HOE

4.2.1 See-through InI display

Transforming a conventional InI system into a see-through display encounters two obstacles: the opaque display panel and the real-scene aberrations caused by the lens array. In a simple case, a beam splitter or half mirror can be used as the see-through combiner. Nevertheless, it increases the device volume and sacrifices a portion of the efficiency. In recent years, a vast number of InI systems based on HOEs have been reported. HOEs, featuring optical transparency, enable see-through capability while simultaneously displaying 3D images.

Many efforts have been made to improve the viewing resolution [57] and enlarge the viewing angle [60,62], but the designs are either complex or cumbersome. HOEs manipulated by spatial or angular multiplexing methods can provide a relatively simple solution [8]. However, in either case we need to point out that the tradeoff between resolution and view number still exists, as described in Sec. 3.2. The super-resolution technique can be applied in InI to boost the rendering process when dealing with multiple view numbers [88]. The limited expressible depth range is another drawback of InI. One solution is to build up multiple central depth planes [66,67,89,90], similar to the approaches in multiplane displays. For example, Park et al. [67] reported a bi-focal integral floating display, where the conventional floating lens is substituted by a PB lens, as shown in Fig. 5(a). With an active PR, the PB lens can be switched between two opposite powers. Combined with the refractive MLA, the magnified images can be displayed at both depths. In [67], a half mirror is utilized to demonstrate the AR prototype. The see-through function can also be performed by a BS combined with two stacked CLC lenses, as depicted in Fig. 5(b). Each lens, with a different optical power, projects the image at a different depth by leveraging the polarization selectivity.

Fig. 5. DOF extension methods enabled by (a) a PB lens and (b) two CLC lenses. (c) A simplified InI system. The holographic MLA can be a reflective photopolymer MLA or CLC MLA. (d) A simplified retinal scanning light field display. The holographic lens can be a reflective PP lens or CLC lens.

An HOE can be implemented in InI displays as a see-through combiner with the function of an MLA. If the reference beam is set as a plane wave, then in the displaying stage the probe wave should also be collimated by telecentric optics, which is a bulky module and limits the magnification factor of the display size. On the contrary, a spherical-wave reconstruction can be used as a simplified solution. The system consists of an image projector and a holographic MLA, as shown in Fig. 5(c). The holographic MLA can be either a reflective photopolymer-based MLA or a CLC-based MLA. Good alignment between the computed EIs and the exposure regions of the HOE should be ensured by projecting a divergent wave that satisfies the wavefront-matched condition and by carefully pre-compensating the EIs, which would otherwise be unevenly stretched as a result of the spherical wave propagation. The image projector can be a laser beam scanning projector or a simple imaging projector, which have different DOFs. This concept has been successfully implemented in an AR 3D display by a PPHOE patterned with a lenticular lens array [61].

4.2.2 Retinal scanning light field display

The AR multi-view system demonstrated by Jang et al. [58] employs a holographic lens as the see-through combiner, as shown in Fig. 5(d). The fast scanning mirror steers each collimated probe wave emitted from the laser beam scanning projector to illuminate the holographic lens at a different incident angle (inset of Fig. 5(d)). The holographic lens can be either a reflective photopolymer-based lens (PP lens) or a CLC lens. Although the lens can respond to a certain range of incident angles, only the beam that matches the reference wave is aberration-free, while the rest exhibit different degrees of aberration. To correct the aberrations, one approach is to pre-compensate the images displayed on the laser beam scanning projector. However, the digital compensation is only effective over a small area within the pupil size. Jang et al. proposed to achieve a dynamic eye-box by continuously adjusting the incident angles on the holographic lens with the aid of an eye tracker. This is a feasible concept, but a more sensible solution is to replace the holographic lens with a multiplexed holographic lens, which functions as multiple concave mirrors. Since the aberrations caused by the wavefront mismatch (between the reference and probe waves) of a single holographic lens are too large to be digitally compensated, a multiplexed holographic lens with multiple focal spots greatly reduces the burden of digital correction. This idea is the same as that used in holographic displays for eye-box expansion, which will be discussed in Sec. 4.3.

4.3 See-through holographic displays with HOEs

Speckle, limited spatial bandwidth product (SBP), and a large computational load are the three major obstacles in holographic displays. Compressing a holographic display into a compact platform raises even more challenges, because we also need to consider the formfactor, weight, and wearing comfort of the system. The tradeoff between FOV and eye-box size is vital due to the insufficient SBP of currently available SLMs. The primitive method is to tile numerous SLMs to spatially enhance the SBP [91,92], but this results in a costly and cumbersome system. Another approach is to utilize the high diffraction orders of one SLM to virtually represent three SLMs with a time-multiplexing technique [93]. However, the three orders are separately guided by three sets of optical elements, again leading to a bulky system. HOEs can offer an effective solution to break this tradeoff by duplicating the eye-box. Herein, we introduce the benefits of HOEs used in holographic displays and analyze the accompanying problems.

The HOE has become a strong candidate for see-through combiners in holographic displays [71,73,74,77,78,94–97]. Initially, it was designed as a reflective lens to guide the diffracted waves from the SLM to the observer’s eye, while transmitting the ambient light without any imaging effect, as shown in Fig. 6(a). However, the eye-box size is limited at a favorable FOV, owing to the conserved etendue and the insufficient SBP of the SLM. To create a large eye-box, several methods have been explored, including spatial/temporal pupil duplication [98,99], pupil steering [100], and the combination of both [73]. Some of them are used in Maxwellian displays, which extend the DOF but do not render 3D images. When incorporating these designs into a 3D technique like holographic display, some additional difficulties need to be analyzed and overcome.

Fig. 6. Schematic diagrams of (a) an AR holographic display with a see-through combiner; (b) an illustration of the image-overlapping and blind-area issues; (c) the structure with varying incident angles, where the inset shows several implementations of changing the angle of incidence; (d) the structure of multiple focal spots with a probe wave under the wavefront-matched condition.

Some holographic displays with an expanded eye-box have been reported [73,74,78,101–104]. One approach utilizes a PPHOE acting as multiple concave mirrors to achieve spatial pupil duplication in see-through holographic displays, as shown in Fig. 6(b). Usually, the separation between the focal spots should be designed to be slightly larger than the eye pupil size to avoid the image-overlapping issue, but this may also result in a blind area, especially because the pupil size varies across different users and under different illumination conditions. This is a common problem in pupil duplication methods because the separation of the viewpoints is fixed. On the positive side, the holographic display can provide more freedom to design an adaptive eye-box size, or to slightly steer the eye-box, by loading appropriate CGHs on the SLM to shape the desired wavefront according to the detected eye pupil position and size [73,103].

To analyze the optical aberrations in the various pupil expansion methods, we can divide them into two categories, based on whether or not the approach relies on different incident angles to form the multiple focal spots. Figure 6(c) shows the structure with a changing incident angle, and Fig. 6(d) shows the aberration-free case. As shown in Fig. 6(c), the aberration comes from the fact that the incident angle changes for every pupil spot. In other words, the probe wave on the see-through HOE fails to match the wavefront of the reference wave. As the inset of Fig. 6(c) shows, there are several ways to change the incident angle, including mechanically rotating the image source, exploiting the high diffraction orders of gratings [98,99], and using a photopolymer-based MLA as the point source array [96].

On the contrary, the holographic lens in Fig. 6(d) can be customized for each focal spot, thus being diffraction-limited [100]. Both the reflective PP lens and the CLC lens can perform this function. The major difference is that the PP lens can achieve this in one optical film owing to its multiplexing capability, while CLC lenses must be separately fabricated and coupled with half-wave plates (HWPs), as shown in the inset of Fig. 6(d). To the best of our knowledge, no one has yet reported using a CLC lens as the see-through combiner in a holographic display. But it can accommodate a relatively large range of incident angles (∼20°) and maintain a high diffraction efficiency within this range. The “aberration-free” property holds only for the specific probe wavefront that matches the reference beam used in the recording process. When practically applied in holographic displays, the system still suffers from aberrations, because the probe waves on the reflective PP lens or CLC lenses have a cone angle, which depends on the real-time reconstructed holograms. But this aberration is much smaller than that of the former method based on incident angle modulation. To correct it, we need to pre-compensate the hologram patterns on the SLM. Real-time correction of the full image is hard to achieve due to the massive computational load. A more realistic solution is to compensate the aberrations only in the specific region where the user is gazing, with the aid of an eye tracker [71].

5. Discussion

We have introduced several 3D technologies and many lab-level system designs using HOEs. The emerging concepts will continue to inspire elegant solutions for enhancing 3D systems. But when translating these lab-level blueprints into products, some practical issues should be further considered. In this section, we focus on three issues that are sometimes not addressed in lab-level designs.

5.1 Full-color 3D display

The display of full-color images is important to the viewing experience in AR and VR headsets. Generally, optical architectures with refractive imaging optics do not introduce severe chromatic aberration, owing to the low dispersion of refractive materials. On the contrary, diffractive optical elements have a small Abbe number (Vd ≈ -3.5), indicating large dispersion. Thus, it is necessary to study the imaging performance of 3D displays using diffractive optics. The chromatic aberration is highly dependent on the spectral bandwidth of the image source, so the coherence of the incident light is important. In addition, the crosstalk that occurs between the red, green, and blue (RGB) optical layers should be analyzed.

The first case is when the 3D display uses incoherent microdisplays as the image source. If the 3D system is implemented in free space, where the HOE serves as the imaging combiner, the color dispersion cannot be avoided. Such architectures include some multifocal displays using PB lenses or CLC lenses and InI displays with holographic MLAs. To alleviate the color dispersion, one approach is to combine these diffractive elements with refractive lenses, as explained in Sec. 4.1. Another method is to employ three spectrally selective HOEs, where each layer (R, G, or B) transmits or reflects only a narrow linewidth of the incident light. As for the color crosstalk, which originates from light leakage and overlapping spectra between the stacked optical couplers or combiners of the RGB colors, the solutions depend on the specific optical architecture. In lightguide-type AR devices, designing three lightguides with separate couplers for each color helps to eliminate the crosstalk. In free-space AR systems, to produce a full-color image, we usually fabricate three optical layers for the RGB colors, and the crosstalk mainly occurs between adjacent layers with overlapping spectral bands. A possible solution is to narrow the spectral bandwidth of the RGB layers; the emission spectra of the image sources should then match the HOE bandwidths, otherwise intensity loss will occur. For PPHOEs, this is easily achievable due to the inherently small index modulation. For LC-PBOEs, an LC material with proper birefringence should be adopted.

The second case is the 3D displays using a laser projector or SLM illuminated by a laser beam. Since the laser light has an extremely narrow linewidth, the dispersive phenomenon in each color channel can be ignored. As for crosstalk between color channels, the spectral bands of RGB HOEs should also be separated.

5.2 Free-space- or lightguide-type 3D AR displays

In AR displays, the optical frameworks can generally be divided into free-space and lightguide types. Each architecture has its own pros and cons, reflected in light efficiency, light uniformity, FOV, eye-box size, and formfactor. Free-space AR has a reasonable light efficiency and uniformity, but usually occupies a large volume and is bound by etendue conservation, thus retaining the tradeoff between FOV and eye-box size. On the contrary, lightguide AR has a slim formfactor, and the system etendue can be enlarged via the exit pupil expansion (EPE) process [105,106]. However, it usually suffers from extremely low light efficiency (<1%) and nonuniform output light, especially in the case of 2D EPE. The latter can be improved by optimizing the local grating parameters of the out-couplers. Aside from the optical performance offered by both architectures, their adaptability to different 3D displays should be well investigated.

Free-space AR is compatible with various 3D displays, including varifocal/multiplane displays, multi-view displays, and holographic displays. For lightguide AR, the common choices are stereoscopic displays or varifocal/multiplane displays. However, for InI and holographic displays, we must point out a non-negligible vulnerability when they are implemented in the lightguide structure. Take the structure proposed in [107] as an example. In that design, the traditional in-coupler, a surface relief grating (SRG) or holographic volume grating (HVG), is replaced by a photopolymer MLA, which guides the information of the EIs on the microdisplay into the lightguide. The beams undergoing multiple total internal reflections (TIRs) are delivered to the out-coupler (HVG) and then out-coupled. The problem is that the incident light carries both angle and depth information before undergoing the TIRs. Since the lightguide preserves only the angle information during the TIRs, the depth information will be disrupted as the rays experience different optical path lengths in the lightguide. This issue can be alleviated, though not completely eliminated, by pre-compensating the input EIs, because the image distortion depends on the viewing position and the viewing angle. For the same reason, the use of holographic displays in a lightguide is also challenging. Although several approaches have been developed for pre-distortion by superposing the relevant information onto the CGHs [94,108], the correction is hard to realize completely as the viewing angle and position of the observer change. As a result, it adds a heavy burden to the real-time rendering.
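
A quick ray calculation (ours, with assumed guide parameters) shows the scale of the problem: guided rays at different TIR angles accumulate different optical path lengths per unit of lateral travel, so a converging or diverging wavefront, i.e., the depth information, is not preserved across the exit pupil.

import math

n, t = 1.7, 1e-3                   # guide index and thickness (m), assumed
for theta in (45.0, 55.0, 65.0):   # TIR angle from the surface normal (deg)
    th = math.radians(theta)
    advance = 2 * t * math.tan(th)            # lateral advance per bounce
    opl = n * 2 * t / math.cos(th)            # optical path length per bounce
    print(f"{theta:.0f} deg: OPL per mm of lateral travel = {opl/advance:.3f} mm")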

6. Conclusion

We have reviewed several 3D technologies for near-eye AR/VR optics, including plane-based displays, view-based displays, and holographic displays, discussing the basic principles and the challenges that need to be addressed for future applications. For systems employing HOEs, we discussed the resulting benefits, the differences in optical design, the potential issues, and the ways for improvement. There is undoubtedly a long way to go toward the ultimate 3D display with an experience indistinguishable from the real world. Still, we believe that this goal can be progressively achieved with continuous innovations in display devices, novel optical components, and the accompanying new design strategies.

Funding

GoerTek Electronics.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. E. Ives, “Parallax Stereogram and process of making same,” U.S. Patent 725,567 (1903).

2. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).

3. W. Hess, “Stereoscopic picture,” U.S. Patent 1,128,979 (1915).

4. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

5. E. N. Leith and J. Upatnieks, “Reconstructed Wavefronts and Communication Theory,” J. Opt. Soc. Am. 52(10), 1123–1130 (1962). [CrossRef]  

6. E. N. Leith and J. Upatnieks, “Wavefront Reconstruction with Diffused Illumination and Three-Dimensional Objects,” J. Opt. Soc. Am. 54(11), 1295–1301 (1964). [CrossRef]  

7. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008). [CrossRef]  

8. C. Jang, C. K. Lee, J. Jeong, G. Li, S. Lee, J. Yeom, K. Hong, and B. Lee, “Recent progress in see-through three-dimensional displays using holographic optical elements [Invited],” Appl. Opt. 55(3), A71–A85 (2016). [CrossRef]  

9. J. Xiong, E. L. Hsiang, Z. He, T. Zhan, and S. T. Wu, “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light: Sci. Appl. 10(1), 216 (2021). [CrossRef]  

10. X. Ni, A. V. Kildishev, and V. M. Shalaev, “Metasurface holograms for visible light,” Nat. Commun. 4(1), 2807 (2013). [CrossRef]  

11. S. Wang, P. C. Wu, V. C. Su, Y. C. Lai, C. Hung Chu, J. W. Chen, S. H. Lu, J. Chen, B. Xu, C. H. Kuan, T. Li, S. Zhu, and D. P. Tsai, “Broadband achromatic optical metasurface devices,” Nat. Commun. 8(1), 187 (2017). [CrossRef]  

12. W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13(3), 220–226 (2018). [CrossRef]  

13. G. Y. Lee, J. Y. Hong, S. H. Hwang, S. Moon, H. Kang, S. Jeon, H. Kim, J. H. Jeong, and B. Lee, “Metasurface eyepiece for augmented reality,” Nat. Commun. 9(1), 4562 (2018). [CrossRef]  

14. Z. B. Fan, H. Y. Qiu, H. Le Zhang, X. N. Pang, L. D. Zhou, L. Liu, H. Ren, Q. H. Wang, and J. W. Dong, “A broadband achromatic metalens array for integral imaging in the visible,” Light: Sci. Appl. 8(1), 67 (2019). [CrossRef]  

15. Z. Liu, C. Zhang, W. Zhu, Z. Huang, H. J. Lezec, A. Agrawal, and L. J. Guo, “Compact Stereo Waveguide Display Based on a Unidirectional Polarization-Multiplexed Metagrating In-Coupler,” ACS Photonics 8(4), 1112–1119 (2021). [CrossRef]  

16. C. Wang, Z. Yu, Q. Zhang, Y. Sun, C. Tao, F. Wu, and Z. Zheng, “Metalens eyepiece for 3d holographic near-eye display,” Nanomaterials 11(8), 1920 (2021). [CrossRef]  

17. D. H. Close, “Holographic optical elements,” Opt. Eng. 14(5), 408–419 (1975). [CrossRef]  

18. J. Xiong, K. Yin, K. Li, and S. T. Wu, “Holographic Optical Elements for Augmented Reality: Principles, Present Status, and Future Perspectives,” Adv. Photonics Res. 2(1), 2000049 (2021). [CrossRef]  

19. T. Rasmussen, “Overview of high-efficiency transmission gratings for molecular spectroscopy,” Spectrosc. 29(4), 32–39 (2014).

20. Y. H. Lee, K. Yin, and S. T. Wu, “Reflective polarization volume gratings for high efficiency waveguide-coupling augmented reality displays,” Opt. Express 25(22), 27008–27014 (2017). [CrossRef]  

21. K. Yin, Z. He, and S. T. Wu, “Reflective Polarization Volume Lens with Small f-Number and Large Diffraction Angle,” Adv. Opt. Mater. 8(11), 2000170 (2020). [CrossRef]  

22. Y. Li, T. Zhan, and S. T. Wu, “Flat cholesteric liquid crystal polymeric lens with low f-number,” Opt. Express 28(4), 5875–5882 (2020). [CrossRef]  

23. Z. Yan, X. Yan, X. Jiang, H. Gao, and J. Wen, “Integral imaging based light field display with enhanced viewing resolution using holographic diffuser,” Opt. Commun. 402, 437–441 (2017). [CrossRef]  

24. M. A. Ferrara, V. Striano, and G. Coppola, “Volume holographic optical elements as solar concentrators: An overview,” Appl. Sci. 9(1), 193 (2019). [CrossRef]  

25. B. J. Chang, “Dichromated Gelatin Holograms And Their Applications,” Opt. Eng. 19(5), 642–648 (1980). [CrossRef]  

26. J. D. Waldern, A. J. Grant, and M. M. Popovich, “17-4: DigiLens AR HUD Waveguide Technology,” SID Int. Symp. Dig. Tech. Pap. 49(1), 204–207 (2018). [CrossRef]  

27. L. B. Glebov, “Volume holographic elements in a photo-thermo-refractive glass,” J. Hologr. Speckle 5(1), 77–84 (2009). [CrossRef]  

28. K. Yin, J. Xiong, Z. He, and S. T. Wu, “Patterning Liquid-Crystal Alignment for Ultrathin Flat Optics,” ACS Omega 5(49), 31485–31489 (2020). [CrossRef]  

29. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2(11), 958–964 (2015). [CrossRef]  

30. K. Gao, H. H. Cheng, A. K. Bhowmik, and P. J. Bos, “Thin-film Pancharatnam lens with low f-number and high quality,” Opt. Express 23(20), 26086–26094 (2015). [CrossRef]  

31. Y. H. Lee, G. Tan, T. Zhan, Y. Weng, G. Liu, F. Gou, F. Peng, N. V. Tabiryan, S. Gauza, and S. T. Wu, “Recent progress in Pancharatnam–Berry phase optical elements and the applications for virtual/augmented realities,” Opt. Data Process. Storage 3(1), 79–88 (2017). [CrossRef]  

32. J. Kobashi, H. Yoshida, and M. Ozaki, “Planar optics with patterned chiral liquid crystals,” Nat. Photonics 10(6), 389–392 (2016). [CrossRef]  

33. Y. Weng, D. Xu, Y. Zhang, X. Li, and S. T. Wu, “Polarization volume grating with high efficiency and large diffraction angle,” Opt. Express 24(16), 17746–17759 (2016). [CrossRef]  

34. F. K. Bruder, J. Frank, S. Hansen, A. Lorenz, C. Manecke, R. Meisenheimer, J. Mills, L. Pitzer, I. Pochorovski, and T. Rölle, “Expanding the property profile of Bayfol HX® film towards NIR recording and ultra high index modulation,” Proc. SPIE 11765, 117650J (2021). [CrossRef]  

35. M. G. Moharam and T. K. Gaylord, “Rigorous coupled-wave analysis of planar-grating diffraction,” J. Opt. Soc. Am. 71(7), 811–818 (1981). [CrossRef]  

36. H. Kogelnik, “Coupled wave theory for thick hologram gratings,” Bell Syst. Tech. J. 48(9), 2909–2947 (1969). [CrossRef]  

37. B. Lee, S. Park, K. Hong, and J. Hong, “Design and Implementation of Autostereoscopic Displays,” (SPIE, 2016).

38. K. Aksit, W. Lopes, J. Kim, P. Shirley, and D. Luebke, “Near-eye varifocal augmented reality display using see-through screens,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

39. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

40. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby, and M. S. Banks, “High-speed switchable lens enables the development of a volumetric stereoscopic display,” Opt. Express 17(18), 15716–15725 (2009). [CrossRef]  

41. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]  

42. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]  

43. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays: Realization of augmented reality with holographic optical elements,” ACM Trans. Graph. 35(4), 1–13 (2016). [CrossRef]  

44. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000). [CrossRef]  

45. H. Hua, “Enabling Focus Cues in Head-Mounted Displays,” Proc. IEEE 105(5), 805–824 (2017). [CrossRef]  

46. T. Zhan, J. Xiong, J. Zou, and S. T. Wu, “Multifocal displays: review and prospect,” PhotoniX 1(1), 10 (2020). [CrossRef]  

47. Y. J. Wang and Y. H. Lin, “Liquid crystal technology for vergence-accommodation conflicts in augmented reality and virtual reality systems: a review,” Liq. Cryst. Rev. 9(1), 35–64 (2021). [CrossRef]  

48. S. Liu and H. Hua, “Time-multiplexed dual-focal plane head-mounted display with a liquid lens,” Opt. Lett. 34(11), 1642–1644 (2009). [CrossRef]  

49. S. C. McQuaide, E. J. Seibel, J. P. Kelly, B. T. Schowengerdt, and T. A. Furness, “A retinal scanning display system that produces multiple focal planes with a deformable membrane mirror,” Displays 24(2), 65–72 (2003). [CrossRef]  

50. R. E. Stevens, D. Rhodes, P. Y. Laffont, and A. Hasnain, “Varifocal technologies providing prescription and VAC mitigation in HMDs using Alvarez lenses,” Proc. SPIE 10676, 106760J (2018). [CrossRef]  

51. G. Tan, T. Zhan, Y. H. Lee, J. Xiong, and S. T. Wu, “Polarization-multiplexed multiplane display,” Opt. Lett. 43(22), 5651–5654 (2018). [CrossRef]  

52. Y. H. Lee, G. Tan, K. Yin, T. Zhan, and S. T. Wu, “Compact see-through near-eye display with depth adaption,” J. Soc. Inf. Disp. 26(2), 64–70 (2018). [CrossRef]  

53. C. Yoo, K. Bang, C. Jang, D. Kim, C. K. Lee, G. Sung, H. S. Lee, and B. Lee, “Dual-focal waveguide see-through near-eye display with polarization-dependent lenses,” Opt. Lett. 44(8), 1920–1923 (2019). [CrossRef]  

54. Y. Li, Q. Yang, J. Xiong, S. T. Wu, and K. Li, “Dual-depth augmented reality display with reflective polarization-dependent lenses,” Opt. Express 29(20), 31478–31487 (2021). [CrossRef]  

55. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: Sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]  

56. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

57. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]  

58. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: Augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

59. T. Ueno and Y. Takaki, “Super multi-view near-eye display to solve vergence–accommodation conflict,” Opt. Express 26(23), 30703–30715 (2018). [CrossRef]  

60. G. Baasantseren, J.-H. Park, K. C. Kwon, and N. Kim, “Viewing angle enhanced integral imaging display using two elemental image masks,” Opt. Express 17(16), 14405–14417 (2009). [CrossRef]  

61. H. Deng, C. Chen, M. Y. He, J. J. Li, H. L. Zhang, and Q. H. Wang, “High-resolution augmented reality 3D display with use of a lenticular lens array holographic optical element,” J. Opt. Soc. Am. A 36(4), 588–593 (2019). [CrossRef]  

62. X. Yu, X. Sang, X. Gao, Z. Chen, D. Chen, W. Duan, B. Yan, C. Yu, and D. Xu, “Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues,” Opt. Express 23(20), 25950–25958 (2015). [CrossRef]  

63. S. Lee, C. Jang, J. Cho, J. Yeom, J. Jeong, and B. Lee, “Viewing angle enhancement of an integral imaging display using Bragg mismatched reconstruction of holographic optical elements,” Appl. Opt. 55(3), A95–A103 (2016). [CrossRef]  

64. D. H. Shin, B. Lee, and E. S. Kim, “Effect of illumination in an integral imaging system with large depth of focus,” Appl. Opt. 44(36), 7749–7753 (2005). [CrossRef]  

65. X. Shen, Y. J. Wang, H. S. Chen, X. Xiao, Y. H. Lin, and B. Javidi, “Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens,” Opt. Lett. 40(4), 538–541 (2015). [CrossRef]  

66. K. C. Kwon, Y. T. Lim, C. W. Shin, M. U. Erdenebat, J. M. Hwang, and N. Kim, “Enhanced depth-of-field of an integral imaging microscope using a bifocal holographic optical element-micro lens array,” Opt. Lett. 42(16), 3209–3212 (2017). [CrossRef]  

67. M. Park, K. I. Joo, H. R. Kim, and H. J. Choi, “An augmented-reality device with switchable integrated spaces using a bi-focal integral floating display,” IEEE Photonics J. 11(4), 1–8 (2019). [CrossRef]

68. Y. Frauel, T. J. Naughton, O. Matoba, E. Tajahuerce, and B. Javidi, “Three-dimensional imaging and processing using computational holographic imaging,” Proc. IEEE 94(3), 636–653 (2006). [CrossRef]  

69. S. Tay, P. A. Blanche, R. Voorakaranam, A. V. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G. Li, P. St Hilaire, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “An updatable holographic three-dimensional display,” Nature 451(7179), 694–698 (2008). [CrossRef]  

70. P. A. Blanche, A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W. Y. Hsieh, M. Kathaperumal, B. Rachwal, O. Siddiqui, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature 468(7320), 80–83 (2010). [CrossRef]  

71. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

72. C. Chang, K. Bang, G. Wetzstein, B. Lee, and L. Gao, “Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective,” Optica 7(11), 1563–1578 (2020). [CrossRef]  

73. J. H. Park and S. B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]  

74. X. Duan, J. Liu, X. Shi, Z. Zhang, and J. Xiao, “Full-color see-through near-eye holographic display with 80° field of view and an expanded eye-box,” Opt. Express 28(21), 31316–31329 (2020). [CrossRef]  

75. L. Shi, B. Li, C. Kim, P. Kellnhofer, and W. Matusik, “Towards real-time photorealistic 3D holography with deep neural networks,” Nature 591(7849), 234–239 (2021). [CrossRef]  

76. S. Yang and H. Takajo, “Speckle reduction of kinoform reconstruction utilizing the 2π ambiguity of image phase differences,” Opt. Rev. 12(2), 93–96 (2005). [CrossRef]  

77. M. Makowski, “Minimized speckle noise in lens-less holographic projection by pixel separation,” Opt. Express 21(24), 29205–29216 (2013). [CrossRef]  

78. T. Kozacki and M. Chlipala, “Color holographic display with white light LED source and single phase only SLM,” Opt. Express 24(3), 2189–2199 (2016). [CrossRef]  

79. Y. Deng and D. Chu, “Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays,” Sci. Rep. 7(1), 5893 (2017). [CrossRef]  

80. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (Wiley-Interscience, 2007).

81. H. Ren and S. T. Wu, “Reflective reversed-mode polymer stabilized cholesteric texture light switches,” J. Appl. Phys. 92(2), 797–800 (2002). [CrossRef]  

82. S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, “Reverse-mode PSLC multi-plane optical see-through display for AR applications,” Opt. Express 26(3), 3394–3403 (2018). [CrossRef]  

83. Z. He, K. Yin, and S. T. Wu, “Passive polymer-dispersed liquid crystal enabled multi-focal plane displays,” Opt. Express 28(10), 15294–15299 (2020). [CrossRef]  

84. Q. Chen, Z. Peng, Y. Li, S. Liu, P. Zhou, J. Gu, J. Lu, L. Yao, M. Wang, and Y. Su, “Multi-plane augmented reality display based on cholesteric liquid crystal reflective films,” Opt. Express 27(9), 12039–12047 (2019). [CrossRef]  

85. T. Zhan, Y. H. Lee, and S. T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018). [CrossRef]  

86. T. Zhan, J. Zou, J. Xiong, X. Liu, H. Chen, J. Yang, S. Liu, Y. Dong, and S. T. Wu, “Practical chromatic aberration correction in virtual reality displays enabled by cost-effective ultra-broadband liquid crystal polymer lenses,” Adv. Opt. Mater. 8(2), 1901360 (2020). [CrossRef]

87. Y. Li, T. Zhan, Z. Yang, C. Xu, P. L. LiKamWa, K. Li, and S. T. Wu, “Broadband cholesteric liquid crystal lens for chromatic aberration correction in catadioptric virtual reality optics,” Opt. Express 29(4), 6011–6020 (2021). [CrossRef]  

88. H. Ren, Q.-H. Wang, Y. Xing, M. Zhao, L. Luo, and H. Deng, “Super-multiview integral imaging scheme based on sparse camera array and CNN super-resolution,” Appl. Opt. 58(5), A190–A196 (2019). [CrossRef]  

89. Y. Kim, J. H. Park, H. Choi, J. Kim, S. W. Cho, and B. Lee, “Depth-enhanced three-dimensional integral imaging by use of multilayered display devices,” Appl. Opt. 45(18), 4334–4343 (2006). [CrossRef]  

90. D. Shin, C. Kim, G. Koo, and Y. H. Won, “Depth plane adaptive integral imaging system using a vari-focal liquid lens array for realizing augmented reality,” Opt. Express 28(4), 5602–5616 (2020). [CrossRef]

91. N. Fukaya, K. Maeno, O. Nishikawa, K. Matsumoto, K. Sato, and T. Honda, “Expansion of the image size and viewing zone in holographic display using liquid crystal devices,” Proc. SPIE 2406, 283–289 (1995). [CrossRef]  

92. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008). [CrossRef]  

93. G. Li, J. Jeong, D. Lee, J. Yeom, C. Jang, S. Lee, and B. Lee, “Space bandwidth product enhancement of holographic display using high-order diffraction guided by holographic optical element,” Opt. Express 23(26), 33170–33183 (2015). [CrossRef]  

94. B. Li, H. J. Yeom, H. J. Kim, H. Zhang, J. H. Park, S. H. Kim, S. B. Kim, and Y. M. Ji, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

95. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016). [CrossRef]  

96. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2018). [CrossRef]

97. J. Xiao, J. Liu, Z. Lv, X. Shi, and J. Han, “On-axis near-eye display system based on directional scattering holographic waveguide and curved goggle,” Opt. Express 27(2), 1683–1692 (2019). [CrossRef]  

98. T. Lin, T. Zhan, J. Zou, F. Fan, and S. T. Wu, “Maxwellian near-eye display with an expanded eyebox,” Opt. Express 28(26), 38616–38625 (2020). [CrossRef]  

99. Z. He, K. Yin, K. H. Fan-Chiang, and S. T. Wu, “Enlarging the eyebox of Maxwellian displays with a customized liquid crystal Dammann grating,” Crystals 11(2), 195 (2021). [CrossRef]

100. J. Xiong, Y. Li, K. Li, and S. T. Wu, “Aberration-free pupil steerable Maxwellian display for augmented reality with cholesteric liquid crystal holographic lenses,” Opt. Lett. 46(7), 1760–1763 (2021). [CrossRef]  

101. M. H. Choi, Y. G. Ju, and J. H. Park, “Holographic near-eye display with continuously expanded eyebox using two-dimensional replication and angular spectrum wrapping,” Opt. Express 28(1), 533–547 (2020). [CrossRef]  

102. C. Chang, W. Cui, J. Park, and L. Gao, “Computational holographic Maxwellian near-eye display with an expanded eyebox,” Sci. Rep. 9(1), 18749 (2019). [CrossRef]  

103. S. B. Kim and J. H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

104. M. Kim, S. Lim, G. Choi, Y. Kim, H. Kim, and J. Hahn, “Expanded exit-pupil holographic head-mounted display with high-speed digital micromirror device,” ETRI J. 40(3), 366–375 (2018). [CrossRef]

105. D. Grey and S. Talukdar, “Exit pupil expanding diffractive optical waveguiding device,” US Patent 10,073,267 (2019).

106. O. Cakmakci, Y. Qin, P. Bosel, and G. Wetzstein, “Holographic pancake optics for thin and lightweight optical see-through augmented reality,” Opt. Express 29(22), 35206–35215 (2021). [CrossRef]  

107. N. Darkhanbaatar, M. U. Erdenebat, C. W. Shin, K. C. Kwon, K. Y. Lee, G. Baasantseren, and N. Kim, “Three-dimensional see-through augmented-reality display system using a holographic micromirror array,” Appl. Opt. 60(25), 7545–7551 (2021). [CrossRef]  

108. W. K. Lin, O. Matoba, B. S. Lin, and W. C. Su, “Astigmatism correction and quality optimization of computer-generated holograms for holographic waveguide displays,” Opt. Express 28(4), 5519–5527 (2020). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (6)

Fig. 1. Conceptual diagrams of (a) an amplitude hologram based on a silver halide emulsion; (b) a phase hologram based on photopolymers; (c) a photoalignment pattern; (d) an LC-PB grating.
Fig. 2. Simulated first-order diffraction efficiency of CLC optical elements and PPHOEs as a function of (a) incident angle and (b) wavelength.
Fig. 3. Depth cues in (a) the real world and (b) a stereoscopic display, where the VAC occurs.
Fig. 4. Schematic diagrams of multi-plane displays. The distance-based method is enabled by (a) diffusive PPHOEs and (b) CLC films; the power-based method is implemented by (c) switchable PB lenses and (d) passive CLC lenses. EYEP: eyepiece lens; R-CLC: right-handed CLC; L-CLC: left-handed CLC.
Fig. 5. DOF extension methods enabled by (a) a PB lens and (b) two CLC lenses. (c) A simplified InI system; the holographic MLA can be a reflective photopolymer MLA or a CLC MLA. (d) A simplified retinal scanning light field display; the holographic lens can be a reflective PP lens or a CLC lens.
Fig. 6. Schematic diagrams of (a) an AR holographic display with a see-through combiner; (b) illustration of the overlapping and blind-zone issues; (c) the structure with varying incident angles, where the inset shows several implementations for changing the angle of incidence; (d) the structure with multiple focal spots and a probe wave under the wavefront-matching condition.
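Angular- and spectral-selectivity curves like those in Fig. 2 are commonly computed with Kogelnik's coupled-wave theory. The following is a minimal Python sketch, not the simulation used in this article, for a lossless, unslanted reflection volume grating probed at normal incidence; the average index n, index modulation dn, and thickness d are illustrative assumptions.

```python
import cmath
import math

def reflection_efficiency(wl, wl0, n=1.5, dn=0.04, d=8e-6):
    """Kogelnik first-order diffraction efficiency of a lossless, unslanted
    reflection volume grating at normal incidence, vs. probe wavelength wl.
    wl0 is the Bragg-matched wavelength; n, dn, d are assumed material values."""
    nu = math.pi * dn * d / wl0                       # coupling strength; eta peaks at tanh(nu)**2
    xi = 2.0 * math.pi * n * d * (wl - wl0) / wl0**2  # wavelength dephasing parameter
    s = cmath.sqrt(nu**2 - xi**2)                     # complex sqrt covers the far-off-Bragg regime
    eta = 1.0 / (1.0 + (1.0 - (xi / nu) ** 2) / cmath.sinh(s) ** 2)
    return eta.real                                   # imaginary part cancels to ~0

wl0 = 532e-9  # assumed Bragg wavelength (green)
for dl_nm in (0, 5, 10, 20):
    wl = wl0 + dl_nm * 1e-9
    print(f"detuning {dl_nm:2d} nm -> eta = {reflection_efficiency(wl, wl0):.3f}")
```

With these assumed parameters the efficiency drops from roughly 0.91 at the Bragg wavelength to about 0.12 at 20 nm detuning, reproducing the narrow spectral selectivity that reflection-type HOEs exhibit in Fig. 2(b).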

Equations (2)

$$\theta_{FOV} = 2\,\theta_d\,L_E,$$
$$\theta_d = \frac{\lambda}{2p},$$
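The second relation, $\theta_d = \lambda/(2p)$, is the standard estimate of an SLM's maximum first-order diffraction half-angle given its pixel pitch p. As a quick numeric illustration (a minimal sketch; the 8 µm and 3.74 µm pitches are assumed values, not taken from this article), the full diffraction cone $2\theta_d$ stays below 10° for typical pitches, which is why holographic near-eye displays struggle to reach a wide FOV without additional optics:

```python
import math

def diffraction_half_angle(wavelength, pitch):
    """Maximum first-order diffraction half-angle of an SLM: theta_d = lambda / (2p)."""
    return wavelength / (2.0 * pitch)

wavelength = 532e-9               # green laser line
for pitch in (8.0e-6, 3.74e-6):   # assumed LCOS pixel pitches
    theta_d = diffraction_half_angle(wavelength, pitch)
    cone = 2.0 * theta_d          # full diffraction cone, the upper bound on the FOV
    print(f"p = {pitch * 1e6:.2f} um: theta_d = {math.degrees(theta_d):.2f} deg, "
          f"2*theta_d = {math.degrees(cone):.2f} deg")
```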