
Retinal projection type lightguide-based near-eye display with switchable viewpoints

Open Access

Abstract

We present a retinal-projection-based near-eye display with switchable multiple viewpoints enabled by polarization-multiplexing. Active switching of viewpoints is provided by a polarization grating, multiplexed holographic optical elements, and a polarization-dependent eyepiece lens, which together generate one of two focal-spot groups according to the pupil position. The lightguide-combined optical devices have the potential to enable a wide field of view (FOV) and short eye relief with a compact form factor. Our proposed system supports pupil movement with an extended eyebox and mitigates the image problems caused by duplicated viewpoints. We discuss the optical design of the guiding system and demonstrate that the proof-of-concept system provides all-in-focus images with a 37° FOV and a 16 mm eyebox in the horizontal direction.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Augmented reality (AR) is a promising technology that superimposes additional virtual images on real objects, and it is expected to make everyday life more convenient and productive. Near-eye displays (NEDs) are essential devices for implementing AR techniques in various applications. Except for video see-through AR using camera images, typical AR NEDs are optically see-through, providing a real scene without distortion. They require a core optical component, called an image combiner, that can deliver virtual images while maintaining high optical transparency for the real scene. Various image combiners have been proposed to offer immersive AR experiences, and each method has its own strengths and weaknesses in terms of visual performance and form factor [1,2]. For all-day wear in daily life, AR NED systems should be designed with a compact, lightweight form factor and moderate image quality.

In addition to the factors mentioned above, focus cues are one of the important factors for accurately providing depth perception of three-dimensional images. In display systems that do not support focus cues, the vergence distance due to binocular disparity does not coincide with the accommodation distance at which the fixed focal plane of virtual images is physically located. This physiological discrepancy between vergence and accommodation is called the vergence-accommodation conflict (VAC), which causes visual fatigue or nausea in stereoscopic viewing environments [3–5]. Moreover, the VAC can result in blurred virtual images depending on the distance of the real object being watched, which may hinder vivid AR experiences. To alleviate discomfort from the VAC, various solutions enabling focus cues have been proposed and are summarized in [6–11]. Most of them fall into two representative categories: stereoscopic displays with additional focus-cue optics and holographic displays.

First, stereoscopic displays with focus cues are enabled by focus-tunable optical systems, multiple displays, or freeform optics [12–24]. They provide multiple or changeable focal planes of virtual images with time- or space-multiplexing. Although these techniques can mitigate VAC problems by visualizing 3D images over a wide zone of comfort [3,25–27], additional optical elements and an ultra-fast display module are required for real-time operation, which inevitably makes the optical system bulky. Second, holographic displays provide true focus cues by reproducing the same wavefront as real objects [28–32]. However, holographic techniques still suffer from low image quality with speckle issues and a heavy computational load for calculating a precise wavefront [33].

Retinal projection displays (RPDs), also known as Maxwellian-view displays, are based on a focus-free scheme [34–36]. RPDs provide a wide range of acceptably sharp focus regardless of the accommodation response of the human eye, and they can be implemented with a relatively simple configuration and a small form factor. As shown in Fig. 1(a), by focusing the light of display images on the pupil of the eye, a sharp image can be directly projected onto the retina irrespective of the focal power of the crystalline lens. Such focus-invariant systems can alleviate VAC problems compared to displays with a single focal plane [37]. Despite these advantages, RPDs have an intrinsic limitation: they provide only a small eyebox around the focal spot. If the entrance pupil of the eye moves out of the eyebox due to rotation of the eyeball or misalignment of the eye position, virtual images are blocked partially or entirely. This limited eyebox is a major obstacle to the commercialization of retinal-projection-type NEDs. Several solutions have been proposed to enlarge the eyebox, as follows.

Fig. 1. (a) Simplified schematic diagram of the retinal projection display, (b) illustration of the double image problem with a narrow viewpoint spacing, and (c) the blank image problem with a wide viewpoint spacing in the RPD with multiple viewpoints.

Jang et al. proposed a pupil-tracked RPD system using a holographic optical element (HOE) and a steering mirror [38]. This system provides a dynamic eyebox, realized by laterally shifting the focal point of the HOE combiner to track the movement of the pupil. Kim et al. suggested dynamically moving the focal point by using a mechanically moving HOE module [39]. This approach is quite similar to Jang's method but provides a wider eyebox with a shorter eye relief. Both of the above methods use mechanical movement of an optical element, which increases the form factor or weight due to the additional moving parts. Furthermore, if the moving speed is not fast enough, image sticking or lag may occur during real-time operation. Hedili et al. realized a steerable eyebox by using an array of light-emitting diodes (LEDs) synchronized with a pupil tracker [40]. This method can switch between fixed focal spots with low motion-to-photon latency. Kim et al. showed a lightguide-type RPD with an eyebox enlarged by a multiplexed HOE [41]. Jeong et al. suggested a customized HOE combiner fabricated with a holographic printer [42]. These methods have the advantage that mechanical control is not required because the focal spots are multiple and concurrent. However, in an enlarged eyebox with always-on focal spots, double images or a blank screen can appear depending on the spacing between viewpoints, as shown in Figs. 1(b) and 1(c).

We propose a lightguide-type RPD enabling switchable viewpoints with polarization-multiplexing. This method is implemented with a polarization grating and multiplexed HOEs, which selectively change the diffraction angle of the output beam depending on the polarization state of the input beam. The proposed configuration divides the multiple focal spots into two groups so that the system can switch between them. By actively controlling the viewpoints, the image problems of multi-viewpoint RPD systems can be mitigated. In the following sections, the detailed operating principle and system structure are discussed, and a proof-of-concept system is implemented to verify the feasibility of the proposed scheme.

2. Principle of the switchable viewpoints

The goal of the proposed method is to provide an extended eyebox that resolves the blank and double image problems in the RPD system. We design the system to switch between two viewpoint groups, whose focal spots are alternately located in the sagittal plane, according to the pupil position. Polarization-dependent devices and multiplexed HOEs are employed to control the viewpoint switching depending on the polarization state.

2.1 Polarization grating

The key element for implementing the switchable viewpoints is a polarization grating that controls the incidence angle on the in-coupler depending on the polarization state of the input beam. The transmission-type polarization grating diffracts an incident beam into two transmitted beams with opposite circular polarization states [43]. The diffraction angle depends on design parameters such as the grating period and axis orientation, and it is symmetrical about the axis of the incident beam. In conventional polarization gratings, only the +1st and −1st diffraction orders are highly efficient; the zeroth and higher orders have very low efficiency. These polarization gratings are fabricated from an anisotropic medium (polymerized liquid crystal) or a meta-structure, both of which can be implemented with ultra-thin and lightweight characteristics [44–47]. Owing to its high efficiency and thin form factor, the polarization grating can be used as a beam-steering optical element without mechanical movement [48]. In addition, it is easy to attach to a planar lightguide because it is a flat film. The polarization grating used in our experiments diffracts an input beam with right-handed circular polarization (RCP) in a clockwise tilted direction, whereas it diffracts an input beam with left-handed circular polarization (LCP) in a counterclockwise tilted direction, as shown in Fig. 2.
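
For a quick sanity check, the first-order deflection follows the ordinary grating equation. The sketch below back-calculates a grating period from the 5° (in air) diffraction angle quoted in Section 3.2; the period value is therefore an illustrative assumption, not a datasheet figure.

```python
import numpy as np

# Hedged sketch: first-order deflection of a polarization grating from the
# grating equation sin(theta) = lambda / period. The period is back-calculated
# so the first order lands near the 5 deg (in air) quoted in Section 3.2;
# it is an illustrative assumption, not a measured property of the device.
wavelength = 532e-9                     # display wavelength [m]
period = 6.1e-6                         # assumed grating period [m]

theta = np.degrees(np.arcsin(wavelength / period))
print(f"first-order deflection: +/-{theta:.1f} deg")
# RCP input is steered toward +theta and LCP toward -theta, and the
# handedness of each diffracted beam is flipped on output.
```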

Fig. 2. Schematic diagram of the transmissive polarization grating with the first-order diffraction angle (θ). The polarization state of the diffracted beam is orthogonal to that of the input beam.

2.2 Lightguide and HOE image combiner

In this paper, a lightguide is used to deliver the virtual image from the display source to the user's eye with a small form factor. The guiding structure is widely used in NED devices because of its compactness [49,50]. Moreover, the design flexibility of the lightsource module is improved compared to free-space propagation (FSP) methods, in which the input beam from the display module propagates through the air and directly enters the image combiner. In a glasses-type RPD with FSP, the incidence angle on the image combiner is limited by the bone structure of the skull. Moreover, the angular resolution and its spatial uniformity depend on the inclination angle of the input beam, as shown in Fig. 3 [39]. In the lightguide structure, by contrast, the propagation angle of the input beam is determined by the diffractive optical elements (DOEs), which mitigates these issues by allowing the lightsource module to be designed without constraints on the incidence angle.

Fig. 3. Spatial distribution of the angular resolution depending on the incidence angle of the input beam (θ). In the free-space propagation method, the incidence angle toward the center of the HOE (C) is changed to provide the dynamic eyebox. The simulation is conducted under the following conditions: incidence angle of the input beam: 40°∼90°, HOE size: 190 mm, eye relief: 20 mm, and divergence angle of the lightsource: 30°.

In the proposed configuration, we utilize HOEs as the in-coupler and out-coupler, which deliver the virtual images. HOEs have been widely used as transparent image combiners and have high diffraction efficiency and high angular selectivity compared to surface-relief DOEs [51]. The in-coupler HOE is recorded using the angular-multiplexing technique [52], so that the two beams entering the lightguide at different angles set by the polarization grating are diffracted at different propagation angles. Similarly, the out-coupler is multiplexed to form multiple viewpoints. In the recording process of the out-coupler, the recording angle of the reference beam is the same as the propagation angle within the lightguide, and the recording angle of the signal beam is predetermined in consideration of the positions of the focal spots, which will be described in Section 3.3. In Fig. 4, we assume that the multiplexed diffraction angles of the in-coupler are θi1 and θi2, and the out-coupler angles are correspondingly defined as θo1 and θo2. The out-coupler is recorded so that θo1 is paired with incidence angle θi1 and θo2 with θi2. This approach forms viewpoints at different positions depending on the guiding angle.

Fig. 4. Schematic diagram of the angular-multiplexed out-coupler; the output beam is selectively diffracted depending on the incidence angle, as presented in different colors.

In previous related works, the out-coupler HOE is fabricated to act as a concave half mirror that forms the focal spots [38,39,41,42]. This approach simplifies the system configuration because no additional eyepiece lens is required, but the recording setup for a multiplexed HOE with a short focal length and wide field of view (FOV) is difficult to implement. To demonstrate the proposed scheme, we instead adopt HOEs recorded as flat mirrors together with an additional eyepiece lens, presented in the next section. The output beams diffracted by the out-coupler are focused at each viewpoint as they pass through the eyepiece lens. By synchronizing with the pupil movement extracted by an eye tracker, only the appropriate viewpoint group is activated.

2.3 Eyepiece lens

To obtain a wide FOV with a short eye relief, geometric phase lenses (GPLs), also called Pancharatnam-Berry phase lenses, are utilized as the eyepiece lens for the AR condition. Recently, several related studies using GPLs have been reported to enhance system performance [53–56]. GPLs are typically fabricated as flat plates or films with thicknesses of a few millimeters or less. This thinness and flatness are suitable for compact NEDs. The GPL used in the proposed configuration operates as either a concave or a convex lens depending on the polarization state of the input beam. Figure 5 shows the operating modes of the polarization-dependent eyepiece combiner (PDEC), which is composed of a quarter wave plate (QWP), two GPLs, and a right-handed circular polarizer (RHCP). By inserting an RHCP between two stacked GPLs with the same focal length, this eyepiece combiner operates as a transparent optical window or as a convex lens depending on the polarization of the input beam. The front-most QWP converts the linearly polarized real scene and virtual image into circularly polarized light, which is essential in the lightguide system because only guided beams with linear polarization retain their polarization state, as discussed in detail in Section 3.3.
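
In the thin-lens limit the two operating modes reduce to simple power arithmetic: the GPL powers either cancel (window mode) or add (lens mode). A minimal sketch, using only the GPL focal length quoted in Section 3.2:

```python
# Minimal sketch of the PDEC power arithmetic in the thin-lens limit,
# neglecting the ~1 mm element separations inside the stack.
f_gpl = 45.0                     # focal length of each GPL [mm] (Section 3.2)

# Lens mode: both GPLs contribute focusing power with the same sign for the
# virtual image, so the powers add and the effective focal length is halved.
f_lens_mode = 1.0 / (1.0 / f_gpl + 1.0 / f_gpl)
print(f"lens mode:   f_eff = {f_lens_mode} mm")    # 22.5 mm

# Window mode: the two powers have opposite signs and cancel, so the real
# scene passes through the stack as through a flat transparent window.
net_power = 1.0 / f_gpl - 1.0 / f_gpl
print(f"window mode: net power = {net_power}")     # 0.0
```

The halved focal length is consistent with the 22.5 mm eye relief reported for the prototype in Section 4.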

Fig. 5. Illustration of the composition of the PDEC and its operating modes depending on the polarization state: (a) lens mode for virtual images and (b) optical window mode for the real scene. All optical elements are so thin (less than 1 mm) that the focusing powers of the PDEC approximately cancel each other or add two-fold in the stacked structure.

2.4 Overall configuration

The two-dimensional schematic diagram of the proposed lightguide-type RPD with switchable multi-viewpoints is shown in Fig. 6. A laser scanning projector (LSP) and a collimating lens are used to realize the retinal projection scheme with a narrow beam width. The polarization of the input beam is converted to RCP or LCP by a polarization rotator. A vertical linear polarizer is placed behind the polarization grating to convert the circularly polarized beam into linearly polarized light before it enters the lightguide. Similarly, a horizontal linear polarizer is located in front of the out-coupler so that the PDEC acts as an optical window.

Fig. 6. Schematic diagram of the proposed configuration. Each red line shows a beam path with LCP and each blue line a beam path with RCP. The LCP and RCP states are controlled by the polarization controller.

As described above, after passing through the polarization grating, the input beam is incident on the in-coupler HOE at one of two angles according to its polarization state and propagates at the corresponding guiding angle set by the multiplexed in-coupler. The multiplexed out-coupler diffracts the guided beams at different angles in different directions toward the incidence plane of the PDEC. The output beams, split into two groups, are focused by the PDEC for retinal projection. Consequently, the viewpoint group closest to the pupil position detected by the eye tracker is activated by controlling the polarization of the input beam.

The proposed configuration solves the image problems caused by equally duplicated viewpoints by switching between the two multi-viewpoint groups according to the pupil location. The additional flat, thin optical devices used for viewpoint control enable a compact and lightweight RPD system with an extended eyebox.

3. Optical design of a lightguide and image combiner

3.1 Requirements for lightguide system

We adopted a lightguide structure using HOE combiners to provide the AR scene with high efficiency and a versatile design. In conventional lightguide or waveguide-type NED systems such as Microsoft HoloLens and Magic Leap One, the eyebox size is extended by exit-pupil expansion (EPE) techniques [57]. However, these methods cannot be directly applied to the lightguide-type RPD system, in which all beams corresponding to individual pixel images are diffracted by the HOE combiners and travel through the guide slab at the same angle. In the optical configuration of the RPD, the HOEs act as the projection screen for the images. Consequently, the size and location of the HOE combiners must match the parallel input beams. If these design requirements are not met, unwanted images may be displayed. Figure 7 shows illustrative cases under mismatched conditions. Because of this limitation imposed by the image combiners, simply steering the input source offers little room for expanding the eyebox in the lightguide structure. Figure 8 presents the FOV as a function of the change in incidence angle. The result reveals that the FOV narrows sharply as the input beam is rotated, and the eyebox expansion (∼3 mm) by focus shift is insufficient to cover pupil movement.

Fig. 7. Classification of optical misdesigns. The simulation results are obtained by changing the size and position of the HOE combiners or the diffraction angle using commercial ray-tracing software (LightTools).

Fig. 8. Eyebox expansion by steering of the input lightsource in a wedge-shaped lightguide. (Left) The focal spot formed by the HOE combiner is shifted by inducing a Bragg-mismatched condition. (Right) The simulation results show that the focal spot shifts laterally by 1.2 mm leftward and 1.2 mm rightward at the full-width-at-half-maximum (FWHM) FOV. The simulation conditions are as follows: lightguide thickness 7 mm, wedge slant angle 60°, and eye relief 20 mm.

In order to guide the two split input beams from the in-coupler to the out-coupler without distortion of the original image, the guiding angle and the size and position of the image couplers must be optimized. We adopted a slab-type lightguide with both in- and out-couplers. The optical design of a planar lightguide is less constrained than that of a wedge-type lightguide, in which the incidence angle of the input beam is limited by the wedge angle [58]. In addition, in the wedge lightguide the image aspect ratio is modulated by the incidence angle, which requires additional image calibration to provide the same image regardless of viewpoint shifting.

The overall system is determined by several key parameters: (1) the thickness of the lightguide d, (2) the length of the lightguide L, (3) the guiding angle θg set by the in-coupler, (4) the width of the projection surface on the HOEs wHOE, (5) the diffraction angle of the polarization grating θp, and (6) the focal length of the GPL fGP. We can assume that the distance between the polarization grating and the lightguide and the thickness of the polarization-dependent optical elements are negligible because of their thinness and flatness. For optimizing the lightguide system parameters, the following requirements must be considered.

First, the maximum allowable FOV (MAFOV) Θmax of the entire system is defined by:

$${\Theta _{\max }} = 2{\tan ^{ - 1}}\left( {\frac{{{w_{HOE}}/2}}{{{d_{eye}}}}} \right),$$
where deye is the eye relief, which is equal to half the focal length of a GPL. The HOE width must increase as the targeted FOV increases, as determined by Eq. (1). To avoid the image problems shown in Fig. 7, the following condition must be satisfied:
$${w_{HOE}} < 2d\tan {\theta _g}.$$
In Eq. (2), wHOE is limited by the guide thickness and the guiding angle. Consequently, increasing the guide thickness allows a wider FOV but results in a bulky form factor. Figure 9 shows the relationship between the FOV and the other parameters. A short numerical check of both constraints is sketched below.
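
The following sketch evaluates Eqs. (1) and (2) with the prototype values quoted elsewhere in the paper (wHOE = 17 mm, deye = 22.5 mm, d = 7 mm, guiding angles of 51° and 62°); grouping these numbers into one check is our own illustration.

```python
import numpy as np

# Sanity check of Eqs. (1) and (2) using prototype values quoted in
# Sections 3.2 and 4 (w_HOE = 17 mm, d_eye = 22.5 mm, d = 7 mm,
# guiding angles 51 and 62 deg).
w_hoe, d_eye, d = 17.0, 22.5, 7.0                      # all in mm

mafov = 2 * np.degrees(np.arctan((w_hoe / 2) / d_eye))        # Eq. (1)
print(f"maximum allowable FOV: {mafov:.1f} deg")              # ~41.4 deg

for theta_g in (51.0, 62.0):                                  # Eq. (2)
    limit = 2 * d * np.tan(np.radians(theta_g))
    print(f"theta_g = {theta_g:.0f} deg: w_HOE = {w_hoe} mm < {limit:.1f} mm "
          f"-> {w_hoe < limit}")
```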

Fig. 9. Guiding parameters limited by the target FOV.

Second, the difference between two Bragg angles (θB) of the multiplexed HOE should be wider than the angular diffraction range of in- and out-coupler HOEs for the two beams. This requirement can be simply written as:

$$|{{\theta_{B1}} - {\theta_{B2}}} |> \Delta {\theta _1} + \Delta {\theta _2},$$
where Δθi is the angular deviation from the Bragg angle (θ0) at which the diffraction efficiency first drops to near zero within the angular-selectivity range, as shown in Fig. 10. If the condition in Eq. (3) is not met, the multiplexed HOE generates two diffracted beams from a single incident beam, and independent control of the viewpoints is no longer possible.

Fig. 10. Diffraction efficiency as a function of the incidence angle (angular selectivity).

Third, the two split beams diffracted by the in-coupler should be projected onto the same location on the out-coupler. Any difference between the two projected positions (Δp) reduces the effective FOV, since only the overlapping region can be used without the mislocation problem of Fig. 7. Given a first guiding angle θg1, the second guiding angle θg2 is determined by the following optimization:

$$\begin{array}{l} \mathop {\min \textrm{ }}\limits_{{\theta _{g2}}} ({m_1} \cdot 2d\tan {\theta _{g1}} + d\tan ({\theta _p}/{n_g})) - ({m_2} \cdot 2d\tan {\theta _{g2}} - d\tan ({\theta _p}/{n_g})),\\ \textrm{subject to }{\theta _{g1}},{\theta _{g2}} \ge {\theta _c}\textrm{, } \end{array}$$
where ${m_i} \in {\mathbb N}$ is the number of total internal reflections (TIRs) through the waveguide, ng is the refractive index of the lightguide, and θc is the critical angle. Considering the inter-pupillary distance and eye position, the longitudinal length of the lightguide is limited, and m1 and m2 are accordingly restricted to specific values. A brute-force sketch of this search is given below.
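
As a minimal illustration, the search implied by Eq. (4) can be run by brute force over a discretized angle grid. The grid, TIR-count bounds, and tolerance follow Section 3.2, but the script is our reconstruction, not the authors' actual code.

```python
import numpy as np

# Brute-force sketch of Eq. (4): for each candidate pair of guiding angles,
# compute the mismatch dp between the two projected positions on the
# out-coupler and keep the pairs with dp below the 1 mm tolerance of
# Section 3.2. Illustrative reconstruction only.
d, n_g = 7.0, 1.52                       # guide thickness [mm], index (assumed)
theta_p = np.radians(5.0)                # PG diffraction angle in air
angles = np.radians(np.arange(48, 66))   # guiding-angle grid, 1 deg steps

candidates = []
for tg1 in angles:
    for tg2 in angles:
        for m1 in range(1, 6):
            for m2 in range(1, 6):
                p1 = m1 * 2 * d * np.tan(tg1) + d * np.tan(theta_p / n_g)
                p2 = m2 * 2 * d * np.tan(tg2) - d * np.tan(theta_p / n_g)
                dp = abs(p1 - p2)
                if dp < 1.0:
                    candidates.append(
                        (np.degrees(tg1), np.degrees(tg2), m1, m2, dp))

for tg1, tg2, m1, m2, dp in sorted(candidates, key=lambda c: c[-1])[:5]:
    print(f"theta_g1={tg1:.0f}  theta_g2={tg2:.0f}  "
          f"m1={m1} m2={m2}  dp={dp:.2f} mm")
```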

3.2 Determination and analysis of system parameters

Optimal system parameters can be acquired by satisfying the design constraints above. Several key parameters are predetermined by the commercial products employed. In our experiments, the focal length of the GPL is 45 mm (Edmund Optics S/N: 34-463) and the diffraction angle of the polarization grating is 5 degrees in air (Edmund Optics S/N: 12-677). The holographic medium is Covestro's Bayfol HX photopolymer film with 16 µm thickness, which is thick enough to operate as a volume grating with a narrow bandwidth. Based on Kogelnik's coupled-wave theory (CWT), simulation shows that the value of Δθi is within approximately 2 degrees, which is smaller than the 3.3° diffraction angle of the polarization grating inside the medium; thus, the Bragg-angle constraint of Eq. (3) is satisfied.

Taking into account the system performance and experimental setup, the other parameters are restricted as follows. In consideration of the precision and tolerances of the HOE recording setup, the guiding angle is limited to between 48° and 65° with a step size of 1°. This step-size limitation prevents exact solutions of Eq. (4); to obtain nearest-neighbor solutions of the guiding angle, Δp is allowed up to 1 mm. The thickness of the lightguide is set between 5 and 8 mm, which is thicker than the typical thickness range of eyeglasses (2∼4 mm) but is chosen in accordance with the range of allowed guiding angles; the thickness could be reduced to that of normal spectacles by increasing the guiding angle. The length of the lightguide is limited to between 50 and 80 mm. The minimum value of the MAFOV is set to 40 degrees.

Figure 11 presents the optimal solutions calculated by sweeping the two guiding angles. In the right graph, the effective FOV means the modulated MAFOV calculated by considering Δp. The search results finally determine the specification of the guiding system and the recording conditions of the HOE combiners. In the recording process of the in-coupler HOE, the incidence angle of the reference beam is equal to θp in the HOE medium and the incidence angles of the signal beams are set to a pair of guiding angles among the search results. In the out-coupler recording, the reference beam must be matched to the signal beam of the in-coupler, and the incidence angle of the signal beam, defined as the output angle θo, is determined by the positions of the multiple viewpoints.

Fig. 11. Optimized results for allowable guiding configurations. (a) The positional difference between the two projected beams on the out-coupler (Δp). (b) The longitudinal length of the lightguide including the HOE couplers. (c) The effective FOV changed by misalignment of the two diffracted beams. The blue dotted lines indicate the values used in our experiments.

The quasi-parallel output beam is incident on the PDEC, and focal spots are then formed at the focal length of the PDEC in lens mode, ignoring aberrations. Under the paraxial condition, the laterally shifted position of the focal spots (Pi) in the horizontal direction is given by:

$${P_i} = d\tan {\theta _o}^i + {f_{GP}}\tan ({n_g}{\theta _o}^i)/2,$$
where the plus sign of θo is defined to denote a clockwise rotation with respect to the direction normal to the lightguide surface. If viewpoints are located symmetrically about the normal axis, the spacing between the viewpoints dv and the total eyebox width S can be calculated as:
$${d_v}^{i\sim i + 1} = |{{P_i} - {P_{i + 1}}} |, $$
$$S = \left\{ {\begin{array}{ll} \sum\limits_{i = 1}^{n - 1} {{d_v}^{i\sim i + 1} + } {D_p} &{D_p} > {d_v}\\ n \cdot {D_p} &{D_p} \le {d_v} \end{array}} \right.,$$
where Dp is the pupil diameter. The pupil size depends sensitively on environmental conditions such as ambient light and typically varies from 2 mm to 8 mm. If the pupil size is larger than the spacing of the focal spots, a continuous eyebox is provided; however, if it is larger than twice the spacing, a double image is projected onto the retina. In the opposite situation, a discrete eyebox is generated, which can cause a blank screen or vignetting [59]. These relations are illustrated numerically below.
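
A minimal numerical sketch of Eqs. (5)–(7), assuming ng = 1.52 for N-BK7 at 532 nm and the prototype output angles of ±2.8° and ±8.3° selected below:

```python
import numpy as np

# Sketch of Eqs. (5)-(7): focal-spot positions, viewpoint spacing, and total
# eyebox width for the prototype output angles (+/-2.8 and +/-8.3 deg).
# n_g = 1.52 is an assumed index for N-BK7 at 532 nm.
d, f_gp, n_g = 7.0, 45.0, 1.52               # mm, mm, dimensionless
theta_o = np.radians([-8.3, -2.8, 2.8, 8.3])

P = d * np.tan(theta_o) + f_gp * np.tan(n_g * theta_o) / 2    # Eq. (5)
d_v = np.abs(np.diff(P))                                      # Eq. (6)
D_p = 4.0                                                     # pupil [mm]
S = d_v.sum() + D_p if (D_p > d_v).all() else len(P) * D_p    # Eq. (7)

print("viewpoints [mm]:", np.round(P, 2))    # ~[-6.1 -2.0  2.0  6.1]
print("spacings   [mm]:", np.round(d_v, 2))  # ~4 mm each
print(f"eyebox width S = {S:.1f} mm")        # ~16 mm, as reported in Sec. 4
```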

Figure 12(a) shows the relationship between the output angles and the viewpoint spacing. In our experiments, dv is set to 4 mm in consideration of the average pupil size of young-adult and middle-aged people [60]. Under this condition, a double image is not observed even when the pupil dilates up to 8 mm. The inward output angle is 2.8° and the outward angle is 8.3°, as indicated by the black dotted lines in Fig. 12(a). The modulated FOV for each viewpoint is given as

$${\Theta _i} = {\tan ^{ - 1}}\left[ {\frac{{{w_{HOE}}}}{{{f_{GP}}}} + \tan ({n_g}{\theta_o}^i)} \right] + {\tan ^{ - 1}}\left[ {\frac{{{w_{HOE}}}}{{{f_{GP}}}} - \tan ({n_g}{\theta_o}^i)} \right].$$
Figure 12(b) shows that the FOV is reduced as the output angle increases. However, the FOV reduction corresponding to the target viewpoint spacing is within 2°, as the short check below confirms.
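
Evaluating Eq. (8) with the same assumed parameters (wHOE = 17 mm, fGP = 45 mm, ng = 1.52):

```python
import numpy as np

# Sketch of Eq. (8): per-viewpoint FOV for the selected output angles,
# using the same assumed parameters as the previous snippet.
w_hoe, f_gp, n_g = 17.0, 45.0, 1.52
for theta_o_deg in (0.0, 2.8, 8.3):
    t = np.tan(n_g * np.radians(theta_o_deg))
    fov = np.degrees(np.arctan(w_hoe / f_gp + t) + np.arctan(w_hoe / f_gp - t))
    print(f"theta_o = {theta_o_deg:>3} deg -> FOV = {fov:.1f} deg")
# ~41.4, ~41.2, and ~39.8 deg: the reduction stays within 2 deg of the MAFOV.
```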

Fig. 12. Simulation results. (a) The spacing between two viewpoints according to the output angle. The red line indicates the pair of output angles with 4 mm spacing, which are the target conditions in our experiment. (b) The FOV decrement according to the output angle. The red dotted lines indicate the reduced FOV at the conditions selected in our experiments.

Based on the simulation results and the deployability of the system, the finally selected system parameters are listed in Table 1. Figure 13 shows non-sequential ray-tracing results in LightTools, excluding the PDEC device. Because the ray-tracing software does not support nanostructured devices such as the polarization grating, the simulation model is built separately for each polarization state of the input beam. With the parameters listed in Table 1, an input beam with a width of 17 mm, equal to wHOE, propagates to the eyepiece part without distortion.

Fig. 13. 2-D layout of the designed lightguide system in LightTools. Each solid line is a ray path. Simulation results for the input beam under the RCP condition (upper) and the LCP condition (lower).

Table 1. Specifications of the fabricated guiding system

3.3 Polarization in lightguide system

In this section, we discuss the preservation of the polarization state and the optical efficiency within the lightguide. In the proposed configuration, circularly polarized light is incident on the polarization grating for optical path control. After passing through the polarization grating, the polarization state is converted to the orthogonal state, but the transmitted beam is still circularly polarized. When an input beam with circular polarization enters the lightguide system, a major problem is that the polarization state is changed by the HOEs and by TIR within the lightguide. First, in a holographic volume grating, the polarization of a circularly or elliptically polarized input field can be modified by the phase difference between the two orthogonal components [61]. Second, in the lightguide, the phase shift on TIR differs according to the polarization eigenmode and the guiding angle. When circularly polarized light enters the lightguide, the polarization state of the output beam is therefore no longer perfectly circular, as shown in Fig. 14. In this case, the GPLs operate simultaneously as convex and concave lenses, resulting in unwanted image noise.
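
The TIR contribution can be quantified with the standard Fresnel phase shifts; the sketch below evaluates the s–p retardance per bounce at the two guiding angles, assuming a glass-air interface with ng = 1.52.

```python
import numpy as np

# Illustrative sketch of why TIR degrades circular polarization: the Fresnel
# phase shifts for the s (TE) and p (TM) components differ at each bounce.
# A glass-air interface with n_g = 1.52 is assumed; the guiding angles are
# those of Section 3.2.
n = 1.0 / 1.52                                   # relative index, air/glass
for theta_g_deg in (51.0, 62.0):
    th = np.radians(theta_g_deg)
    root = np.sqrt(np.sin(th)**2 - n**2)         # TIR requires sin(th) > n
    delta_s = 2 * np.arctan(root / np.cos(th))
    delta_p = 2 * np.arctan(root / (n**2 * np.cos(th)))
    retardance = np.degrees(delta_p - delta_s)
    print(f"theta_g = {theta_g_deg:.0f} deg: "
          f"s-p retardance per bounce = {retardance:.1f} deg")
# A nonzero retardance per bounce turns a circular input elliptical, which is
# why the guided beam is kept linearly (TE) polarized in the final design.
```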

Fig. 14. Simulation results of the polarization state along the optical path: (a) input beam with circular polarization, (b) beam diffracted by the in-coupler, (c), (d), (e) guided beam after successive TIRs, and (f) output beam extracted by the out-coupler, where φi is the inclination angle and ei is the ellipticity. The simulation of the polarization change by the HOEs is performed using commercial software (RSoft) based on rigorous coupled-wave theory.

In addition, under the isotropic-medium condition, the diffraction efficiency for circular polarization is the average of the values for vertical and horizontal linear polarization. Based on the CWT, the diffraction efficiency of a reflection volume grating for the TE mode is given as follows [62]:

$$\eta = \left[ 1 + \frac{1 - \xi^2/\nu^2}{\sinh^2\sqrt{\nu^2 - \xi^2}} \right]^{-1},$$
where ν and ξ are given by
$$\nu = \frac{j\pi n_H d_H}{\lambda\sqrt{\cos\theta_R \cdot \cos\theta_o}},$$
$$\xi = \frac{\Delta\theta \cdot Kd \cdot \sin(\theta_B - \phi)}{2\cos\theta_0},$$
where nH and dH are the refractive index modulation and the thickness of the HOE medium, λ is the wavelength of the recording beam, θR is the incidence angle of the reference beam, K is the magnitude of the grating vector, and φ is the slant angle of the grating. For the TM mode, ν is redefined as follows:
$$\nu ={-} \frac{{j\pi {n_H}d}}{{\lambda \sqrt {\cos {\theta _p} \cdot \cos {\theta _o}} }} \cdot \cos 2({\theta _B} - \phi ).$$
Because of this additional multiplicative term, the diffraction efficiency for the TM mode is lower than that for the TE mode, as shown in Fig. 15. Therefore, a vertical polarizer selecting the TE mode should be placed between the polarization grating and the lightguide for good image quality and high efficiency. A short numerical sketch follows.
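
Evaluated at the Bragg condition (ξ = 0), Eqs. (9)–(12) reduce to η = tanh²|ν|, which makes the TE/TM gap easy to see. In the sketch below, only dH = 16 µm and λ = 532 nm come from the paper; the index modulation and the angle pair are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of Eqs. (9)-(12) at the Bragg condition (xi = 0), where the
# reflection-grating efficiency reduces to eta = tanh^2(|nu|). Only
# d_H = 16 um and lambda = 532 nm are taken from the paper; the index
# modulation n_H and the angle pair below are illustrative assumptions.
lam, d_h, n_h = 532e-9, 16e-6, 0.015
theta_r, theta_o = np.radians(62.0), np.radians(8.0)   # assumed angle pair
phi = (theta_r + theta_o) / 2                          # assumed slant angle

nu = np.pi * n_h * d_h / (lam * np.sqrt(np.cos(theta_r) * np.cos(theta_o)))
eta_te = np.tanh(nu)**2                                     # Eq. (9), xi = 0
eta_tm = np.tanh(nu * abs(np.cos(2 * (theta_r - phi))))**2  # Eq. (12) factor,
                                                            # theta_r ~ theta_B
print(f"eta_TE = {eta_te:.2f}, eta_TM = {eta_tm:.2f}")      # TM clearly lower
```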

Fig. 15. Diffraction efficiency of the HOE for the TE and TM modes.

4. Experimental setup and analysis of display system

To validate the design of the lightguide system, an experimental proof-of-concept is demonstrated as shown in Fig. 16. The lightguide slab is fabricated from polished SCHOTT N-BK7 glass with a thickness of 7 mm. The polarization grating and the GPLs are each 0.45 mm thick with an aperture diameter of 25 mm. The PDEC device is fabricated by stacking the elements with optical adhesive, giving a total thickness of 1.3 mm. To avoid frustrated TIR, all polarization-dependent optical elements are attached to the lightguide with a thin air gap of less than 0.1 mm. The HOE combiners are recorded at a wavelength of 532 nm and are sequentially multiplexed by controlling the recording time and beam intensities. The recording angles are listed in Table 1. For the quadruply multiplexed out-coupler, the pairs of recording angles (reference and signal beams) are (62°, 8°) and (62°, −3°), called group 1 for the LCP mode, and (51°, 3°) and (51°, −8°), called group 2 for the RCP mode. The diffraction efficiencies of the dual-multiplexed in-coupler are 87% and 85%, and the out-coupler efficiencies are 47%, 44%, 18%, and 47%, respectively. The fabricated HOEs are cut to the width corresponding to wHOE. The experimental results show that the distances between focal spots are 4 mm or 5 mm, slightly deviating from the target value (4 mm); this deviation may arise from fabrication errors in the HOEs. The outward focal spots are more blurred than the inward ones, as observed in Fig. 16. This sharpness degradation is caused by the off-axis aberration of the GPL [63] and can be mitigated by using aberration-corrected GPLs [64].

Fig. 16. Experimental results according to the polarization of the input beam (Visualization 1). The grid paper is placed at the eye relief distance (22.5 mm).

The experimental setup for the entire system is constructed on an optical bench, as shown in Fig. 17. To provide sharp ray bundles in a compact system, we use a commercial LSP (Celluon PicoPro) as the display, with a resolution of 1280×720 pixels. The focal length of the collimating lens is 75 mm, chosen by considering the effective resolution on the intersection with the HOEs and the divergence of the projector beam for a clear image. To further enhance image sharpness, an additional beam-shaping lens is employed: a focus-tunable lens with an optical power between −2 and 3 diopters is placed between the LSP and the collimating lens. The polarization rotator (Thorlabs LCC1221-A) can actively switch the polarization state of the input beam at high speed (10 ms) for real-time operation. The clear aperture of the polarization rotator is 20 mm, which is the critical aperture stop limiting the FOV. A smartphone with a wide-field camera (f/1.8, 16 mm focal length) is used to capture the resulting images because a CCD camera is not suitable for the short eye relief condition.

Fig. 17. Prototype of the proposed configuration.

Figures 18(a) and 18(b) show the results of the FOV measurement with a target paper located 10 cm from the camera. The measured FOV at the inward viewpoints is approximately 37 degrees in the horizontal direction and 45 degrees in the diagonal direction. The reduction relative to the simulated value (42°) may be caused by misalignment of the optical components. Figure 18(e) presents the AR scene in which the virtual image of Fig. 18(d) is superimposed on the printout image of Fig. 18(c) located behind the lightguide.

Fig. 18. Photographs of experimental results: (a) AR scene of the rendered test-circle image with a real FOV target, (b) virtual image with maximum FOV, (c) printed driving-screen image (real object), (d) virtual information images, and (e) AR scene imitating driving.

To validate the large depth of focus (DOF) of the retinal projection, display results are acquired while changing the focal plane of the camera, as shown in Fig. 19(a). All captured results have the same quality regardless of the camera focus, demonstrating the all-in-focus property of the proposed configuration. Figure 19(b) shows the images observed at each focal spot, confirming that virtual images can be rendered at the separated viewpoints. Several image problems are observed in the experimental results. The keystone distortion due to oblique incidence at the viewpoints can be resolved by additional image calibration processing [65]. In the see-through window mode, slight chromatic distortion of the real scene occurs, which may be caused by misalignment and wavelength dispersion of the components in the PDEC. The low contrast may result from HOE manufacturing and can be alleviated by optimizing the fabrication conditions.

Fig. 19. Experimental results: (a) with different focal lengths of the camera, (b) at different viewpoints (Visualization 2).

To analyze the image quality of the implemented setup, the angular resolution is measured at the inward viewpoint, which has low aberration. The maximum spatial frequency of our prototype system is 6.6 cycles per degree (cpd), at which the effective pixel size is about 30 µm in the display projected on the out-coupler. The modulation transfer function (MTF) is acquired by calculating the contrast of captured fringe patterns and is shown in Fig. 20. Adopting an MTF criterion of 35% [66], the cut-off frequency is about 3.4 cpd. When the input beam passing through the polarization rotator is directly focused by a single GPL without the lightguide module, the MTF results are at an equivalent level, which means that the resolution deterioration caused by the lightguide system is negligible. The remaining resolution degradation may be caused by the Gaussian blur of the laser beam, and precise beam shaping is needed for sharper images.
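
The MTF estimate described above amounts to measuring the Michelson contrast of the captured fringes at each spatial frequency; a minimal sketch with placeholder pixel values:

```python
import numpy as np

# Minimal sketch of the MTF estimate described above: Michelson contrast of
# the captured fringe pattern at each spatial frequency, normalized by the
# contrast of the displayed pattern. The arrays are placeholders standing in
# for the actual camera captures.
def michelson_contrast(img: np.ndarray) -> float:
    """Fringe contrast from the extreme intensities of the image."""
    return float((img.max() - img.min()) / (img.max() + img.min()))

captured = {                      # spatial frequency [cpd] -> fringe capture
    1.1: np.array([10.0, 250.0]), # placeholder min/max pixel intensities
    3.4: np.array([80.0, 170.0]),
}
displayed_contrast = 1.0          # ideal contrast of the input pattern

for freq, img in sorted(captured.items()):
    mtf = michelson_contrast(img) / displayed_contrast
    print(f"{freq} cpd: MTF = {mtf:.2f}")
# With the 35% criterion, the cut-off is where the MTF curve drops below 0.35.
```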

Fig. 20. MTF results of the prototype system. The red circles indicate measured values and the solid line indicates the MTF curve interpolated by curve fitting.

For dynamic switching, we synchronized the eye tracker, the polarization rotator, and the display. An artificial eye printed on paper is attached to the smartphone camera, and the lateral shift of the pupil is simulated by moving the phone on a linear stage. The polarization state of the input beam is automatically changed to activate the other viewpoint group when the shift of the imitated eye exceeds half the viewpoint spacing (2 mm). The associated videos show the dynamic switching of the focal spots and virtual images depending on the pupil position, and a conceptual sketch of the control loop is given below.
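
As a conceptual illustration only: the spot-to-group assignment and all hardware stubs below are hypothetical placeholders, not the authors' software or any real device API.

```python
import time

# Conceptual sketch of the eye-tracker-driven switching loop. The spot layout
# and the hardware stubs are hypothetical placeholders; they mimic no real API.
GROUP_SPOTS_MM = {"LCP": (-6.1, 2.0), "RCP": (-2.0, 6.1)}  # assumed layout

def choose_group(pupil_x_mm: float) -> str:
    """Pick the group whose nearest focal spot is closest to the pupil."""
    return min(GROUP_SPOTS_MM,
               key=lambda g: min(abs(pupil_x_mm - s) for s in GROUP_SPOTS_MM[g]))

def set_polarization(group: str) -> None:   # stub for the polarization rotator
    print(f"rotator -> {group} mode")

def render_for_group(group: str) -> None:   # stub for the display driver
    print(f"display -> image for {group} viewpoints")

def control_loop(pupil_positions_mm):
    current = None
    for x in pupil_positions_mm:            # simulated lateral pupil shifts
        group = choose_group(x)
        if group != current:                # switch only when the group changes
            set_polarization(group)
            render_for_group(group)
            current = group
        time.sleep(0.01)                    # ~10 ms, matching rotator speed

control_loop([0.0, 1.0, 2.5, 4.0, 5.5])     # example pupil trajectory
```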

5. Conclusions

In this paper, we proposed and demonstrated a novel retinal projection NED that can independently activate two separate groups of viewpoints using a polarization-multiplexing technique. The proposed system is constructed from polarization-dependent elements and multiplexed HOE combiners to provide an extended eyebox with a compact form factor. We analyzed and optically designed the lightguide system in detail. The fabricated guiding system, using the optimal design parameters, provides quadruple focal spots for retinal projection with a short eye relief of 22.5 mm. We built a proof-of-concept system in which a wide eyebox of 16 mm and a moderate FOV of 37° are achieved for a 4 mm pupil. Previous studies with a dynamic eyebox have several problems, such as a relatively bulky form factor and limited operating speed; the proposed configuration alleviates these problems with lightweight, thin optical devices. In addition, the double or blank images arising from always-on focal spots are mitigated. The proposed concept can also be applied to optical path control in other optical systems. We expect that it provides an efficient way to resolve the obstacles that hinder the proliferation of AR NEDs in various applications.

Funding

Samsung Electronics.

Disclosures

The authors declare no conflicts of interest.

References

1. H. Hua, D. Cheng, Y. Wang, and S. Liu, “Near-eye displays: state-of-the-art and emerging technologies,” Proc. SPIE 7690, 769009 (2010). [CrossRef]  

2. Y. Lee, T. Zhan, and S. Wu, “Prospects and challenges in augmented reality displays,” VRIH 1(1), 10–20 (2019). [CrossRef]  

3. S. Yano, M. Emoto, T. Mitsuhashi, and H. Thwaites, “A study of visual fatigue and visual comfort for 3D HDTV/HDTV images,” Displays 23(4), 191–201 (2002). [CrossRef]  

4. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008). [CrossRef]  

5. T. Shibata, J. Kim, D. M. Hoffman, and M. S. Banks, “The zone of comfort: predicting visual discomfort with stereo displays,” J. Vis. 11(8), 11 (2011). [CrossRef]  

6. G. Kramida and A. Varshney, “Resolving the vergence-accommodation conflict in head-mounted displays,” IEEE Trans. Visual. Comput. Graphics 22(7), 1912–1931 (2016). [CrossRef]  

7. K. Terzić and M. Hansard, “Methods for reducing visual discomfort in stereoscopic 3D: a review,” Signal Process Image. 47, 402–416 (2016). [CrossRef]  

8. N. Matsuda, A. Fix, and D. Lanman, “Focal surface displays,” ACM Trans. Graph. 36(4), 1–14 (2017). [CrossRef]  

9. H. J. Jang, J. Y. Lee, J. Kwak, D. Lee, J.-H. Park, B. Lee, and Y. Y. Noh, “Progress of display performances: AR, VR, QLED, OLED, and TFT,” J. Inf. Disp. 20(1), 1–8 (2019). [CrossRef]  

10. R. Zabels, K. Osmanis, M. Narels, U. Gertners, A. Ozols, K. Rutenbergs, and I. Osmanis, “AR Displays: next-generation technologies to solve the vergence–accommodation conflict,” Appl. Sci. 9(15), 3147 (2019). [CrossRef]  

11. G. A. Koulieris, K. Aksit, M. Stengel, R. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” Comput Graph Forum. 38(2), 493–519 (2019). [CrossRef]  

12. S. Liu and H. Hua, “Time-multiplexed dual-focal plane head-mounted display with a liquid lens,” Opt. Lett. 34(11), 1642–1644 (2009). [CrossRef]  

13. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]  

14. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014). [CrossRef]  

15. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  

16. D. Dunn and K. Myszkowski, “Wide field of view varifocal near-eye display using see-through deformable membrane mirrors,” IEEE Trans. Visual. Comput. Graphics 23(4), 1322–1331 (2017). [CrossRef]  

17. K. Akşit, W. Lopes, J. Kim, P. Shirley, and D. Luebke, “Near-eye varifocal augmented reality display using see-through screens,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]

18. R. E. Stevens, D. P. Rhodes, A. Hasnainb, and P. Y. Laffontb, “Varifocal technologies providing prescription and VAC mitigation in HMDs using Alvarez lenses,” Proc. SPIE 10676, 18 (2018). [CrossRef]  

19. D. Kim, S. Lee, S. Moon, J. Cho, Y. Jo, and B. Lee, “Hybrid multi-layer displays providing accommodation cues,” Opt. Express 26(13), 17170–17184 (2018). [CrossRef]  

20. J.-H. R. Chang, B. V. K. V. Kumar, and A. C. Sankaranarayanan, “Towards multifocal displays with dense focal stacks,” ACM Trans. Graph. 37(6), 1–13 (2018). [CrossRef]  

21. K. Rathinavel, H. Wang, A. Blate, and H. Fuchs, “An extended depth-at-field volumetric near-eye augmented reality display,” IEEE Trans. Visual. Comput. Graphics 24(11), 2857–2866 (2018). [CrossRef]  

22. S. Lee, Y. Jo, D. Yoo, J. Cho, D. Lee, and B. Lee, “Tomographic near-eye displays,” Nat. Commun. 10(1), 2497 (2019). [CrossRef]  

23. Y. Takaki, “Development of super multi-view displays,” MTA 2(1), 8–14 (2014). [CrossRef]  

24. K. Aksit, P. Chakravarthula, K. Rathinavel, Y. Jeong, R. Albert, H. Fuchs, and D. Luebke, “Manufacturing application-driven foveated near-eye displays,” IEEE Trans. Visual. Comput. Graphics 25(5), 1928–1939 (2019). [CrossRef]  

25. K. J. MacKenzie, D. M. Hoffman, and S. J. Watt, “Accommodation to multiple-focal-plane displays: implications for improving stereoscopic displays and for accommodation control,” J. Vis. 10(8), 22 (2010). [CrossRef]  

26. P. V. Johnson, J. A. Parnell, J. Kim, C. D. Saunter, G. D. Love, and M. S. Banks, “Dynamic lens and monovision 3D displays to improve viewer comfort,” Opt. Express 24(11), 11808–11827 (2016). [CrossRef]  

27. N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, and G. Wetzstein, “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays,” Proc. Natl. Acad. Sci. U. S. A. 114(9), 2183–2188 (2017). [CrossRef]  

28. Q. Y. J. Smithwick, J. Barabas, D. Smalley, and V. M. Bove Jr, “Interactive holographic stereograms with accommodation cues,” Proc. SPIE 7619, 761903 (2010). [CrossRef]  

29. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer generated holograms with accurate depth cues,” Opt. Express 23(4), 3901–3913 (2015). [CrossRef]  

30. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

31. J.-H. Park, “Recent progresses in computer generated holography for three-dimensional scene,” J. Inf. Disp. 18(1), 1–12 (2017). [CrossRef]  

32. J. Mäkinen, E. Sahin, and A. Gotchev, “Analysis of accommodation cues in holographic stereograms,” in IEEE 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) (2018).

33. P. St.-Hilaire, S. A. Benton, M. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa, and J. Underkoffler, “Electronic display system for computational holography,” Proc. SPIE 1212, 174–182 (1990). [CrossRef]

34. G. Westheimer, “The Maxwellian view,” Vision Res. 6(11-12), 669–682 (1966). [CrossRef]  

35. M. Sugawara, M. Suzuki, and N. Miyauchi, “Retinal imaging laser eyewear with focus-free and augmented reality,” in SID Symposium Digest of Technical Papers, 164–167 (2016).

36. P. K. Shrestha, M. J. Pryn, J. Jia, J. Chen, H. Fructuoso, A. Boev, Q. Zhang, and D. Chu, “Accommodation-free head mounted display with comfortable 3D perception and an enlarged eye-box,” AAAS Research (2019).

37. R. Konrad, N. Padmanaban, K. Molner, E. A. Cooper, and G. Wetzstein, “Accommodation-invariant computational near-eye displays,” ACM Trans. Graph. 36(4), 1–12 (2017). [CrossRef]  

38. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

39. J. Kim, Y. Jeong, M. Stengel, K. Akşit, R. Albert, B. Boudaoud, T. Greer, W. Lopes, Z. Majercik, P. Shirley, J. Spjut, M. McGuire, and D. Luebke, “Foveated AR: dynamically-foveated augmented reality display,” ACM Trans. Graph 38(4), 1–15 (2019). [CrossRef]  

40. M. Hedili, B. Soner, E. Ulusoy, and H. Urey, “Light-efficient augmented reality display with steerable eyebox,” Opt. Express 27(9), 12572–12581 (2019). [CrossRef]

41. S.-B. Kim and J.-H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

42. J. Jeong, J. Lee, C. Yoo, S. Moon, B. Lee, and B. Lee, “Holographically customized optical combiner for eye-box extended near-eye display,” Opt. Express 27(26), 38006–38018 (2019). [CrossRef]  

43. S. R. Nersisyan, N. V. Tabiryan, D. M. Steeves, and B. R. Kimball, “The promise of diffractive waveplates,” Opt. Photonics News 21(3), 40–45 (2010). [CrossRef]  

44. Y. Weng, D. Xu, Y. Zhang, X. Li, and S. T. Wu, “Polarization volume grating with high efficiency and large diffraction angle,” Opt. Express 24(16), 17746–17759 (2016). [CrossRef]  

45. X. Xiang, J. Kim, R. Komanduri, and M. J. Escuti, “Nanoscale liquid crystal polymer Bragg polarization gratings,” Opt. Express 25(16), 19298–19308 (2017). [CrossRef]  

46. M. Khorasaninejad and F. Capasso, “Broadband multifunctional efficient meta-gratings based on dielectric waveguide phase shifters,” Nano Lett. 15(10), 6709–6715 (2015). [CrossRef]  

47. S. Kim, S. Choi, C. Cho, Y. Lee, J. Sung, H. Yun, J. Jeong, S.-E. Mun, Y. Lee, and B. Lee, “Broadband efficient modulation of light transmission with high contrast using reconfigurable VO2 diffraction grating,” Opt. Express 26(26), 34641–34654 (2018). [CrossRef]  

48. H. Chen, Y. Weng, D. Xu, N. V. Tabiryan, and S.-T. Wu, “Beam steering for virtual/augmented reality displays with a cycloidal diffractive waveplate,” Opt. Express 24(7), 7287–7298 (2016). [CrossRef]  

49. K. Sarayeddine and K. Mirza, “Key challenges to affordable see-through wearable displays: the missing link for mobile AR mass deployment,” Proc. SPIE 8720, 87200D (2013). [CrossRef]  

50. M. U. Erdenebat, Y. T. Lim, K. C. Kwon, N. Darkhanbaatar, and N. Kim, “Waveguide-type head-mounted display system for AR application,” (2018).

51. J. A. Arns, W. S. Colburn, and S. C. Barden, “Volume phase gratings and their potentials for astronomical applications,” Proc. SPIE 3355, 866–876 (1998). [CrossRef]  

52. A. Pu, K. Curtis, and D. Psaltis, “Exposure schedule for multiplexing holograms in photopolymer films,” Opt. Eng. 35(10), 2824–2829 (1996). [CrossRef]

53. T. Zhan, Y. H. Lee, and S. T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam-Berry phase lenses,” Opt. Express 26(4), 4863–4872 (2018). [CrossRef]  

54. C. Yoo, K. Bang, C. Jang, D. Kim, C.-K. Lee, G. Sung, H.-S. Lee, and B. Lee, “Dual-focus waveguide see-through near-eye display with polarization dependent lenses,” Opt. Lett. 44, 1920–1923 (2019). [CrossRef]  

55. S. Moon, C.-K. Lee, S.-W. Nam, C. Jang, G.-Y. Lee, W. Seo, G. Sung, H.-S. Lee, and B. Lee, “Augmented reality near-eye display using Pancharatnam-Berry phase lenses,” Sci. Rep. 9(1), 6616 (2019). [CrossRef]  

56. T. Zhan, Y. H. Lee, G. Tan, J. Xiong, K. Yin, F. Gou, J. Zou, N. Zhang, D. Zhao, J. Yang, S. Liu, and S. T. Wu, “Pancharatnam-Berry optical elements for head-up and near-eye displays,” J. Opt. Soc. Am. B 36(5), D52–D65 (2019). [CrossRef]

57. P. Äyräs, P. Saarikko, and T. Levola, “Exit pupil expander with a large field of view based on diffractive optics,” J. Soc. Inf. Disp. 17(8), 659–664 (2009). [CrossRef]  

58. D. Cheng, Y. Wang, C. Xu, W. Song, and G. Jin, “Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics,” Opt. Express 22(17), 20705–20719 (2014). [CrossRef]  

59. K. Ratnam, R. Konrad, D. Lanman, and M. Zannoli, “Retinal image quality in near-eye pupil-steered systems,” Opt. Express 27(26), 38289–38311 (2019). [CrossRef]  

60. I. M. Borish, Clinical refraction, 3rd ed. (Professional press, Chicago, 1970).

61. A. M. López, M. P. Arroyo, and M. Quintanilla, “Some polarization effects in holographic volume gratings,” J. Opt. A: Pure Appl. Opt. 1(3), 378–385 (1999). [CrossRef]  

62. H. Kogelnik, “Coupled wave theory for thick hologram gratings,” Bell Syst. Tech. J. 48(9), 2909–2947 (1969). [CrossRef]  

63. M. Khorasaninejad, W. T. Chen, J. Oh, and F. Capasso, “Super-dispersive off-axis meta-lenses for compact high resolution spectroscopy,” Nano Lett. 16(6), 3732–3737 (2016). [CrossRef]  

64. K. J. Hornburg, X. Xiang, J. Kim, M. Kudenov, and M. Escuti, “Design and fabrication of an aspheric geometric-phase lens doublet,” Proc. SPIE 10735, 37 (2018). [CrossRef]  

65. A. Bauer, S. Vo, K. Parkins, F. Rodriguez, O. Cakmakci, and J. P. Rolland, “Computational optical distortion correction using a radial basis function-based mapping method,” Opt. Express 20(14), 14906–14920 (2012). [CrossRef]  

66. E. A. Villegas, C. González, B. Bourdoncle, T. Bonin, and P. Artal, “Correlation between optical and psychophysical parameters as function of defocus,” Optom. Vis. Sci. 79(1), 60–67 (2002). [CrossRef]  

Supplementary Material (2)

Visualization 1: The video shows the position of the focal spot being switched according to the pupil position by synchronizing the eye tracker and polarization controller in the proposed system.
Visualization 2: The video shows the virtual images being changed according to the pupil position by synchronizing the eye tracker, polarization controller, and display in the proposed system.
