Holographic multiplane near-eye display based on amplitude-only wavefront modulation

Open Access

Abstract

We present a holographic multiplane near-eye display method based on Fresnel holography and amplitude-only wavefront modulation. Our method creates multiple focal images across a wide depth range while maintaining a high resolution (1080p) and refresh rate (60 Hz). To suppress the DC and conjugation signals inherent in amplitude-only wavefront modulation, we develop an optimization algorithm that completely separates the primary diffracted light from the DC and conjugation terms at a pre-defined intermediate plane. Spatial filtering at this plane leads to a dramatic increase in the image contrast. The experimental results demonstrate that our approach can create continuous focus cues in complex 3D scenes.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Regarded as the next-generation computing and communication platforms, virtual reality (VR) and augmented reality (AR) have seen rapid growth, with a variety of technologies emerging in the past decade. To reproduce 3D visual effects, conventional VR/AR displays rely on binocular disparity by presenting a pair of stereoscopic images [1]. However, such devices suffer from the vergence–accommodation conflict (VAC), which frequently causes visual fatigue and discomfort [2]. To alleviate this problem, the key is to provide focus cues, which requires precise control of the light wavefront.

There are three strategies to create focus cues in near-eye displays. The first strategy, referred to as the light field display, manipulates the wavefront as individually addressable ray bundles. State-of-the-art light field displays are constructed by either mounting a microlens array on a microdisplay [3,4] or stacking multiple liquid crystal display panels [5]. Despite being compact, light field displays are inefficient in modulating the wavefront because the ray bundles allow only a coarse approximation of the wavefront, resulting in a severe trade-off among the spatial resolution, depth of field, depth resolution, accuracy of focus cues, and accommodative response errors [6].

The second strategy, referred to as the multiplane display, creates multiple focal planes at varied accommodation distances. Representative implementations encompass spatial multiplexing, which optically combines multiple depth images and projects them onto different focal planes [7–12], and temporal multiplexing, which rapidly sweeps a focal plane across depths using a high-speed optical device with a tunable optical power (e.g., an acousto-optic lens or a deformable mirror) [13–16]. Nonetheless, the spatial-multiplexing methods face a major challenge in stacking focal planes within a compact form factor while maintaining the resolution and contrast. On the other hand, to scan all depths within the flicker fusion threshold time, the temporal-multiplexing approaches require a high-speed projector and fast tunable optics, complicating the system setup and inevitably introducing a trade-off between the image refresh rate and the number of depth planes.

The last strategy, referred to as the holographic display, encodes and reproduces the wavefront using a computer-generated hologram (CGH) [17–21]. Because holographic displays can precisely control the wavefront, they allow high-resolution 3D display with a compact form factor [20,21]. Currently, most holographic near-eye displays use phase-only spatial light modulators (SLMs) [22–24]. Despite being efficient in diffraction, phase-only SLMs are costly and therefore unsuitable for consumer applications. By contrast, amplitude-only SLMs such as a digital micromirror device (DMD) or a liquid-crystal device (LCD) have been widely adopted in consumer electronics owing to their low manufacturing cost. However, encoding a complex wavefront into an amplitude pattern introduces additional zero-order (also known as DC) and conjugation diffraction beams in the holographic reconstruction [25]. Although combining a 4-f filtering system with off-axis pattern encoding (i.e., blazed grating encoding) is an effective way to isolate the DC and conjugation from the desired signal [25], it has two considerable drawbacks for compact, wide-viewing-angle displays. First, 4-f optical filtering increases the complexity of the system through the presence of bulky lenses, which require substantial space owing to their finite focal lengths and apertures. Second, when the image size is larger than the CGH (i.e., when a random phase is imposed on the target image), both the desired information and its conjugation component have wide spectral bandwidths and are therefore difficult to separate in the frequency domain (filter plane) of a 4-f optical filtering system, reducing the display bandwidth as well as the image contrast owing to spatial aliasing with the redundant information.

To overcome the limitations of existing techniques, herein we present a hybrid near-eye 3D display method that shares roots with both the multiplane and holographic displays: we use a DMD to display an amplitude-only CGH (AO-CGH), modulating the wavefront in such a way that it generates multiple focal plane images. Moreover, to suppress the DC and conjugation signals, we develop an optimization algorithm that separates the primary diffracted light from the DC and conjugation signals at a pre-defined filtered plane, yielding high-contrast images and maximizing the utilization of the diffraction bandwidth. The resultant device can provide continuous focus cues across a wide depth range while maintaining a high resolution (1080p) and refresh rate (60 Hz) in a compact enclosure.

2. Principle and method

We illustrate the principle for calculating the AO-CGH in Fig. 1. The target 3D scene is first rendered into multiple depth layers based on a depth-weighted blending (DWB) algorithm [26–28]. This algorithm creates continuous focus cues, mimicking the perception of a real 3D scene. To calculate the hologram, we use a double-step Fresnel diffraction (DSFD) algorithm. In the DSFD algorithm, we create an intermediate plane as shown in Fig. 1(a). This intermediate plane serves as the filtered plane where we separate the primary diffracted light from the DC and conjugation. For the i-th layer image Ii(x, y), the first step in DSFD is the forward propagation of the wavefront from that layer to the filtered plane using the Fresnel diffraction approximation [29] as

$${M_i}({x_m},{y_m}) = \int\!\!\!\int {{I_i}({x,y} )\cdot \exp [{i{\varphi_{i - init}}({x,y} )} ]\cdot \exp \left\{ {\frac{{ik}}{{2({{z_i} + d} )}}[{{{({{x_m} - x} )}^2} + {{({{y_m} - y} )}^2}} ]} \right\}dxdy} ,$$
where φi-init(x, y) is the initial random phase assigned to constitute the layer wavefront, and zi and d are the distances from the layer to the display and from the display to the filtered plane, respectively. In holographic displays, φi-init(x, y) is normally initialized with a totally random phase to enable a wide viewing angle [30]. However, a totally random phase would make the diffraction pattern Mi(xm, ym) occupy the entire diffraction band window at the filtered plane, making it impossible to separate the primary diffracted light from the DC and conjugation.
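For readers who wish to reproduce the calculation numerically, the forward step of Eq. (1) can be approximated with a standard Fresnel propagator. The minimal NumPy sketch below uses the fixed-pitch transfer-function (convolution) method purely for illustration; the paper's DSFD implementation uses scaled single-FFT Fresnel steps with plane-dependent sampling [34], so the sampling details differ. The function name and interface are our own.

```python
import numpy as np

def fresnel_tf_propagate(u0, wavelength, pitch, z):
    """Fresnel propagation of the sampled field u0 (N x N, pixel pitch `pitch`)
    over a distance z, using the transfer-function (convolution) method.
    The pixel pitch is preserved, so back-propagation is simply z -> -z."""
    N = u0.shape[0]
    fx = np.fft.fftfreq(N, d=pitch)          # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    H = (np.exp(1j * 2 * np.pi * z / wavelength)
         * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2)))
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

For the i-th layer, the propagation distance is zi + d and u0 is the layer image Ii(x, y) multiplied by exp[iφi-init(x, y)].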


Fig. 1. Operating principle. (a) Computation model from a 3D scene to the hologram. (b) Procedure for generating an optimized band-limited random phase. (c) Complete procedure for generating AO-CGH.


To overcome this problem, we developed a method that optimizes the initial random phase so as to spatially confine the primary diffracted light at the filtered plane without information loss. This optimization procedure is based on a modified version of the Gerchberg-Saxton (GS) iteration algorithm [31], which propagates the wavefront back and forth between the depth layer and the filtered plane while enforcing the amplitude constraint. The flow chart is shown in Fig. 1(b). The optimization algorithm starts by assigning a totally random phase to the layer image. At the filtered plane, the diffracted wavefront is modified by multiplying it with a band-limited amplitude mask while preserving the phase term. This updated wavefront is then back-propagated to the layer plane, where it replaces the phase term of the original complex amplitude. We repeat this process for a fixed number of iterations. Upon completion, the final phase at that depth layer is our optimized band-limited random phase. It is worth noting that this optimized phase is not totally random but rather a pseudo-random phase.
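A minimal sketch of this modified GS loop, reusing the fresnel_tf_propagate helper from the previous sketch (the names, defaults, and fixed-pitch propagator are our own simplifications, not the authors' implementation):

```python
import numpy as np

def optimize_band_limited_phase(layer_img, mask, wavelength, pitch, z_total,
                                n_iter=16, seed=0):
    """Modified Gerchberg-Saxton loop (Fig. 1(b)). Starting from a totally random
    phase, propagate to the filtered plane (z_total = z_i + d), band-limit the
    amplitude there with `mask` while keeping the phase, back-propagate, and
    retain only the phase at the layer. Returns the band-limited pseudo-random phase."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, layer_img.shape)
    for _ in range(n_iter):
        field = layer_img * np.exp(1j * phase)                       # amplitude constraint at the layer
        M = fresnel_tf_propagate(field, wavelength, pitch, z_total)  # forward to the filtered plane
        M = np.abs(M) * mask * np.exp(1j * np.angle(M))              # band-limit amplitude, keep phase
        back = fresnel_tf_propagate(M, wavelength, pitch, -z_total)  # back to the layer plane
        phase = np.angle(back)                                       # keep only the phase
    return phase
```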

For each layer, we generate and assign a separate band-limited random phase, optimized using the same band-limited amplitude mask in the iteration. Therefore, the primary diffracted light associated with each depth layer occupies the same band-limited area at the filtered plane. We calculate the additive diffraction pattern contributed by all layers at the filtered plane as

$$M({x_m},{y_m}) = \sum\limits_{i = 1}^N {{M_i}({x_m},{y_m})} ,$$
where N is the total number of depth layers. Finally, we perform the second step of backward diffraction calculation corresponding to a length of d to obtain the complex hologram H(xh, yh) at the display plane (xh, yh) under the Fresnel approximation:
$$H({x_h},{y_h}) = \int\!\!\!\int {M({{x_m},{y_m}} )\cdot \exp \left\{ {\frac{{ - ik}}{{2d}}[{{{({{x_h} - {x_m}} )}^2} + {{({{y_h} - {y_m}} )}^2}} ]} \right\}d{x_m}d{y_m}} .$$
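Continuing the same sketch, Eqs. (2) and (3) amount to summing the filtered-plane fields of all depth layers and back-propagating the sum by d to the display plane (again using the simplified fixed-pitch propagator and illustrative names):

```python
import numpy as np

def complex_hologram(layers, phases, wavelength, pitch, d):
    """Eqs. (2)-(3): sum the filtered-plane fields of all depth layers, then
    back-propagate by d to the display plane. `layers` is a list of
    (layer_image, z_i) pairs and `phases` their optimized band-limited phases;
    fresnel_tf_propagate is the helper sketched earlier."""
    M_total = None
    for (img, z_i), phi in zip(layers, phases):
        M_i = fresnel_tf_propagate(img * np.exp(1j * phi), wavelength, pitch, z_i + d)
        M_total = M_i if M_total is None else M_total + M_i      # Eq. (2)
    return fresnel_tf_propagate(M_total, wavelength, pitch, -d)  # Eq. (3)
```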
In reconstruction, we employ a convergent spherical wavefront to illuminate the hologram. The use of convergent illumination has two advantages. First, it physically increases the diffraction angle and thereby the field of view (FOV) compared with plane wave illumination [32]. Second, the DC component introduced in the AO-CGH encoding is focused to a spot due to a positive optical power, and therefore, it can be easily removed through filtering. Accordingly, the filtered plane must co-locate with the focal plane of the convergent illumination wavefront. To compensate for this additional optical power imposed by illumination, we multiply the complex hologram H(xh, yh) with the conjugate of the illumination wavefront and update the hologram as:
$${H_c}({x_h},{y_h}) = H({x_h},{y_h}) \cdot \exp \left[ {\frac{{ik}}{{2d}}({x_h^2 + y_h^2} )} \right].$$
Substituting H(xh, yh) from Eq. (3) gives:
$${H_c}({x_h},{y_h}) = \int\!\!\!\int {M({{x_m},{y_m}} )\cdot \exp \left[ {\frac{{ - ik}}{{2d}}({x_m^2 + y_m^2} )} \right]\exp \left[ {\frac{{ik}}{d}({{x_h}{x_m} + {y_h}{y_m}} )} \right]d{x_m}d{y_m}} .$$
Equation (5) implies that the amplitude |M(xm, ym)| of M(xm, ym) at the filtered plane is the Fourier transform of the hologram Hc(xh, yh). Therefore, the diffraction from the hologram to the filtered plane is Fraunhofer diffraction under the convergent illumination condition (with a focal length of d).

To display the complex hologram on an amplitude display, we encode the complex amplitude calculated by Eq. (5) into an AO-CGH. Given a complex amplitude Hc(xh, yh) = a(xh, yh)·exp[iφ(xh, yh)], where the amplitude a(xh, yh) is a positive normalized function and the phase φ(xh, yh) takes values in the domain [-π, π], we can encode it into an interferometric AO-CGH with a normalized transmittance function [25]

$$A({x_h},{y_h}) = {c_0}\{{b({x_h},{y_h}) + a({x_h},{y_h}) \cdot \cos [{\varphi ({x_h},{y_h}) - 2\pi ({u_0}{x_h} + {v_0}{y_h})} ]} \},$$
where c0≅1/2 is a normalization constant, (u0, v0) are the spatial frequencies of the linear phase carrier of the hologram, and b(xh, yh) is the bias function, generally defined as b(xh, yh) = [1 + a2(xh, yh)]/2. Generating the hologram transmittance in Eq. (6) is optically equivalent to recording the interference pattern formed by the complex hologram Hc(xh, yh) and an off-axis plane reference wave exp[i2π(u0xh + v0yh)]. The procedure for generating the AO-CGH for a 3D scene is illustrated in Fig. 1(c). When the encoded AO-CGH is displayed under convergent illumination, the DC term is focused to the center of the filtered plane. The off-axis band-limited signal M(xm, ym) diffracted from Hc(xh, yh) is shifted into one half of the filtered-plane window, while its complex conjugate occupies the other half owing to the conjugate symmetry of the Fraunhofer diffraction (Fourier transform). As a result, the desired diffraction signal is isolated from the DC and conjugation at the filtered plane, enabling a high-contrast, wide-viewing-angle display. More importantly, this filtering operation is implemented through “lensless” diffraction from the AO-CGH, and the position of the filter plane can be adjusted digitally according to the illumination condition, providing additional design freedom for a compact form factor compared with a conventional 4-f filtering system involving lenses.
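The illumination compensation of Eq. (4) and the interferometric encoding of Eq. (6) can be sketched as follows; the carrier frequencies u0 and v0 and the normalization are illustrative choices, not values taken from the paper.

```python
import numpy as np

def encode_ao_cgh(H, wavelength, pitch, d, u0, v0):
    """Multiply the complex hologram by the conjugate of the convergent
    illumination (Eq. (4)) and encode the result as an amplitude-only
    pattern following Eq. (6)."""
    N = H.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(x, x)
    Hc = H * np.exp(1j * k * (X**2 + Y**2) / (2 * d))                 # Eq. (4)
    a = np.abs(Hc) / np.abs(Hc).max()                                 # normalized amplitude
    phi = np.angle(Hc)
    b = (1 + a**2) / 2                                                # bias function
    A = 0.5 * (b + a * np.cos(phi - 2 * np.pi * (u0 * X + v0 * Y)))   # Eq. (6), c0 = 1/2
    return A / A.max()                                                # gray-level pattern for the display
```

A typical illustrative carrier choice is u0 = v0 = 1/(4·pitch), i.e., one quarter of the display's sampling frequency, so that the encoded signal is shifted into one half of the filtered-plane window.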

It should be mentioned that the size of the band-limited mask at the filter plane is subject to three criteria: (1) the off-axis diffraction signal M(xm, ym) and its conjugation occupy equal areas in a symmetric geometry; (2) a focused DC spot exists at the center; and (3) possible aliasing from other diffraction-order windows caused by the pixelated display must be avoided. Therefore, an aperture slightly smaller than half of the diffraction window is preferred in the optimization algorithm to ensure ideal isolation from the other noise terms.
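One possible construction of such a mask on an N×N filtered-plane grid, following the three criteria above (the parameterization and the margin value are our own illustrative choices):

```python
import numpy as np

def band_limited_mask(N, margin=0.02):
    """Rectangular aperture occupying slightly less than one half of the
    filtered-plane window, placed off axis. `margin` shrinks the aperture to
    keep clear of the focused DC spot at the window center and of neighbouring
    diffraction orders."""
    mask = np.zeros((N, N))
    half = N // 2
    pad = int(margin * N)
    # use the upper half of the window, leaving `pad` pixels around its border
    mask[pad : half - pad, pad : N - pad] = 1.0
    return mask
```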

We validated our method by calculating two AO-CGHs: one with the conventional method, which employs a totally random phase, and one with our method. The target gray-level image is placed at a distance z = 400 mm from the display plane. Figures 2(a) and 2(b) are the simulated reconstructions at the filtered plane and the image plane, respectively, from the AO-CGH generated using the conventional method, while Figs. 2(d) and 2(e) are the simulated reconstructions from the AO-CGH generated using our band-limited random phase method. To quantitatively evaluate the results, we calculated both the peak signal-to-noise ratio (PSNR) and the root mean square error (RMSE) between each simulated reconstruction and the original target image; the values are marked in Figs. 2(b) and 2(e). Figures 2(c) and 2(f) are the experimental results corresponding to Figs. 2(b) and 2(e). The results indicate that with the conventional totally random phase, the primary diffraction signals overlap with the DC and conjugation terms across the full diffraction window in Fig. 2(a); consequently, the reconstructed image has low contrast and quality. By contrast, with the band-limited random phase, the signals are completely separated at the filtered plane. After removal of the DC and conjugated signals, both the simulated and experimental images exhibit a remarkable improvement in reconstruction fidelity and contrast compared with the results obtained with the conventional method.
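For completeness, the two metrics can be computed as follows (a minimal sketch assuming both images are normalized to [0, 1]):

```python
import numpy as np

def psnr_rmse(recon, target):
    """PSNR (dB) and RMSE between a reconstruction and the target image,
    both assumed to be normalized to [0, 1]."""
    mse = np.mean((recon - target) ** 2)
    rmse = np.sqrt(mse)
    psnr = 10 * np.log10(1.0 / mse)
    return psnr, rmse
```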


Fig. 2. Comparison of image quality without and with filtering. (a)-(c) Results without filtering. (d)-(f) Results from our method with filtering. (a), (d) Simulated reconstructions at the filtered plane. (b), (e) Simulated reconstructions at the image plane. (c), (f) Experimental reconstructions at the image plane.


3. Experiment and results

To demonstrate our method, we built a prototype using only off-the-shelf optics. The system schematic is shown in Fig. 3. We employ a DMD (Texas Instruments, DLP LightCrafter 4710; pixel count 1920×1080; pixel pitch 5.4 µm; 60 Hz frame rate for gray-level patterns) as our amplitude-modulation display module. The 532 nm laser beam is focused by a positive lens (focal length, 100 mm), and the resultant convergent spherical wavefront illuminates the DMD. The filtered plane, where the convergent beam (DC component) focuses, is located at a distance of d = 80 mm from the DMD surface. We load a 256 gray-level AO-CGH with 1920×1080 resolution onto the DMD and obliquely illuminate the DMD at an angle of 34° (the micromirror tilt angle in the "on" state is 17°). After being modulated by the displayed AO-CGH, the beam passes through a band-pass filter placed at the filtered plane, which eliminates the DC and conjugation components. The reconstructed virtual scene is combined with the real-world objects through a beam splitter. To capture the images, we used a digital camera consisting of a CMOS sensor (Sony Alpha a7s) and a varifocal lens (focusing distance: 400 mm to infinity).


Fig. 3. Optical setup. BS, beam splitter; DMD, digital micromirror device.


We first displayed a multi-plane object using our device. The AO-CGH was calculated to display three images, "UIUC", "iOptics", and "holo", placed at depths z1 = 400 mm, z2 = 700 mm, and z3 = 1000 mm, respectively. We used the time-averaging method [30] to suppress the speckle noise caused by the high coherence of the laser source. For each 3D scene, we generated 20 AO-CGHs using different optimized band-limited random phases and displayed them in sequence. Figures 4(a)–4(c) show the captured images when the camera focused on the three real-world objects, respectively. The reconstructed image appears sharp only at the designated depth plane and blurred elsewhere.


Fig. 4. Experimental results of displaying a multi-plane object. (a) Camera focuses at 400mm. (b) Camera focuses at 700mm. (c) Camera focuses at 1000mm.


Next, we displayed two 3D scenes (a tilted grid and dice) with continuous depths. For each scene, the 3D data was first rendered into a 2D image and a depth map. The content of the 2D image was assigned to each layer image with a weighted proportion of intensity according to the depth map. In Fig. 5(a), we show the 2D image and depth map of a grid model with depths ranging from 400 mm to 1000 mm, as well as the four rendered depth-fused layer images. Figure 5(b) shows the corresponding images when the camera focused at five different depths. Figure 5(c) shows the results when displaying the dice, where the 3D scene was rendered into four depth-fused layers in the 400 mm–700 mm range. We also show the dynamic focusing for these two objects in the supplementary videos (see Visualization 1 and Visualization 2). The visible flicker in the videos is due to the frame-rate mismatch between the camera and the DMD. The results demonstrate that our system delivered natural accommodation cues without apparent discontinuity.
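A minimal sketch of the linear depth-weighted blending step [26–28] used to split the rendered 2D image into layer images; the function and the exact blending weights are illustrative and may differ from the authors' implementation:

```python
import numpy as np

def depth_weighted_blending(image, depth, layer_depths):
    """Split a 2D image into layer images by linear depth-weighted blending:
    each pixel's intensity is shared between its two nearest layer depths in
    proportion to its distance from them. `depth` is a per-pixel depth map in
    the same units as `layer_depths`, assumed to lie within their range."""
    layers = [np.zeros_like(image, dtype=float) for _ in layer_depths]
    d = np.asarray(layer_depths, dtype=float)
    for i in range(len(d) - 1):
        upper = (depth <= d[i + 1]) if i == len(d) - 2 else (depth < d[i + 1])
        in_slab = (depth >= d[i]) & upper          # pixels between layers i and i+1
        w = (depth - d[i]) / (d[i + 1] - d[i])     # weight toward the farther layer
        layers[i]     += image * in_slab * (1 - w)
        layers[i + 1] += image * in_slab * w
    return layers
```

For the grid scene, for example, layer_depths would be four values spanning 400–1000 mm.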


Fig. 5. Experimental results of displaying continuous 3D scenes. (a) Multiple-layer rendering for a continuous 3D scene. (b) Images captured at five different depths when displaying a 3D tilted grid (Visualization 1). (c) Images captured at three different depths when displaying 3D dice (Visualization 2).


4. Discussions

4.1 Computation time

Extensive computation is a common problem in computer-holography-based displays. By contrast, our layer-based holographic data modeling is more efficient in computation than conventional holographic rendering methods based on point clouds or polygons. Moreover, the depth-weighted blending (DWB) algorithm allows continuous depths to be rendered with a limited number of layers, further reducing the calculation cost and improving speed. Despite the efficiency of the AO-CGH calculation itself, the runtime is still dominated by the iterative optimization of the band-limited random phase. In our experiment, all AO-CGH calculations were implemented in MATLAB R2018a on an Intel Xeon E5-1650 CPU (3.50 GHz). The total calculation times (including 16 iterations of band-limited initial random phase generation for each layer image) of the AO-CGHs for the "multi-plane letters", "grid", and "dice" scenes were 23 s, 31 s, and 31 s, respectively.

We can potentially accelerate this process in two ways. First, porting the AO-CGH calculation to high-speed GPUs will enable parallel processing of all depth layers. Second, because optimizing the band-limited random phase accounts for most of the calculation time, we can eliminate this step by calculating a universal band-limited phase that is independent of the image amplitude for a given projection depth and storing it in the DMD's memory [33]. The holograms associated with all layers can then be produced quickly with minimal computational cost.

4.2 Field of view

The FOV of the reconstructed virtual image can be calculated from the image size (L) and the accommodation distance, as illustrated in Fig. 6(a). The accommodation distance (from the virtual image to the eye pupil) is the sum of the distance (z + d) and the eye relief distance zb between the band-pass filter and the eye pupil. The angle θ spanned by the two chief rays (black solid lines) emitted from the edges of the FOV is therefore θ ≈ L/(z + d + zb). On the other hand, the FOV is also limited by the diffraction angle of the display, and it reaches its maximum when the target image fills the full diffraction bandwidth. Given the convergent illumination, the maximum image size can be calculated from the Nyquist sampling condition in the FFT-based numerical diffraction calculations of the DSFD [34]. The pixel pitch at the filtered plane is Δm = λd/(ΔNd), where Δ and Nd are the pixel pitch and resolution of the display, respectively. From the filtered plane to the image plane, the maximum image size is Lmax = λ(z + d)/Δm = (z + d)ΔNd/d. Therefore, the FOV is

$$\theta \approx \frac{{{L_{\max }}}}{{({z + d + {z_b}} )}} = \frac{{{N_d}\Delta }}{{d + \left( {\frac{{{z_b}}}{{1 + z/d}}} \right)}} \approx \frac{{{N_d}\Delta }}{d} = \frac{{{L_d}}}{d},$$
under the condition z >> zb, where Ld = NdΔ is the size of the display panel in one dimension. In our experiment, the eye relief is zb ≈ 50 mm. For an image at distance z = 1000 mm, the horizontal FOV is calculated as θ ≈ 7°. Equation (7) implies that we can either decrease the focal distance d or increase the display resolution to achieve a larger FOV. Therefore, a tightly focused illumination beam and a high space-bandwidth product of the display panel are the two key factors for obtaining a larger FOV.
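A quick numerical check of Eq. (7) with the prototype parameters from Section 3:

```python
import numpy as np

wavelength = 532e-9   # m
pitch      = 5.4e-6   # DMD pixel pitch, m
N_d        = 1920     # horizontal pixel count
d          = 80e-3    # display-to-filtered-plane distance, m

L_d   = N_d * pitch             # panel width, ~10.4 mm
theta = np.degrees(L_d / d)     # small-angle FOV, Eq. (7) with z >> z_b
print(f"horizontal FOV ~ {theta:.1f} deg")   # ~7.4 deg, consistent with the ~7 deg quoted above
```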


Fig. 6. Illustration of the system parameters. (a) Parameters for the FOV and eyebox. (b) Configuration for transmissive display.


In our proof-of-concept system, the reflection geometry prevents the use of a more tightly focused illumination. To obtain a larger FOV, a possible solution is to replace the DMD with a transmissive liquid crystal display (LCD) panel as the amplitude spatial light modulator. This transmission geometry allows the use of a more tightly focused illumination (i.e., a smaller d), as shown in Fig. 6(b). For compact packaging, flat optics such as a metalens [35] or a geometric phase lens [36] can be employed to focus the illumination. Given the numerical aperture NA = sinα (n = 1), the expression θ ≈ Ld/d can be further developed into

$$\theta \approx \frac{{2NA}}{{\sqrt {1 - N{A^2}} }}.$$
Therefore, the practical limitation for the FOV in this geometry is the maximum achievable NA of the illumination lens. Given a modest NA of 0.3, the FOV can reach 40 degrees.

On the other hand, the eyebox within which the whole virtual image can be seen is determined by the size of the filtered window (band-limited mask), since it serves as the exit pupil of our holographic near-eye display system, as marked in Fig. 6(a). The entire diffraction bandwidth at the filtered plane from the display panel is Lfull = λd/Δ, indicating that the eyebox is restricted by the focal distance d and the display pixel pitch Δ. This relation also implies that decreasing the focal distance d (i.e., increasing the FOV) reduces the diffraction bandwidth and therefore the eyebox.
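The corresponding numerical check of the full diffraction window at the filtered plane with the same parameters (the usable eyebox, set by the band-limited mask, is a sub-region of this window):

```python
wavelength = 532e-9   # m
pitch      = 5.4e-6   # m
d          = 80e-3    # m

L_full = wavelength * d / pitch   # entire diffraction bandwidth at the filtered plane
print(f"full diffraction window ~ {L_full * 1e3:.1f} mm")   # ~7.9 mm
```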

4.3 Speckle noise

The speckle noise in the reconstructions of the AO-CGH arises from two main sources. One is the highly coherent laser beam, which is the primary light source used in holographic displays. The other is the initial random phase imposed on the layer images in the diffraction calculation. We used the time-averaging method (superposition at the 60 Hz frame rate in our experiment) to smooth and suppress the speckle noise. Figure 7 compares the experimental results when the camera focuses on the letters "UIUC" at 400 mm. The image intensity obtained with the time-averaging method in Fig. 7(a) is more uniform and smoother than that obtained without it in Fig. 7(b). Despite its effectiveness, this method occupies a large temporal bandwidth per displayed frame and therefore requires very high-frame-rate devices to display videos or color scenes. This problem can be alleviated by replacing the laser with other light sources. For example, it has been reported [37] that a light source with high spatial coherence and low temporal coherence is ideal for holographic displays, producing high-quality images with minimal speckle: the high spatial coherence preserves the desired diffraction pattern, while the low temporal coherence efficiently reduces the speckle contrast owing to the smoothing effect of the broader spectrum. In this way, we can achieve dynamic 3D holographic display at a 60 Hz frame rate.
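A minimal sketch of the time-averaging step (averaging the intensities of reconstructions produced from several AO-CGHs with independent band-limited random phases, 20 in the experiment), together with the usual speckle-contrast metric; the names are illustrative:

```python
import numpy as np

def time_averaged_intensity(recon_fields):
    """Average the intensities (not the fields) of reconstructions from several
    AO-CGHs that use independent band-limited random phases; this mimics the
    temporal integration of the eye or camera and lowers the speckle contrast."""
    stack = np.stack([np.abs(f) ** 2 for f in recon_fields])
    return stack.mean(axis=0)

def speckle_contrast(intensity):
    """Speckle contrast C = sigma / mean over the image; lower is smoother."""
    return intensity.std() / intensity.mean()
```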


Fig. 7. Results obtained with and without the speckle reduction method. (a) With the time-averaging method. (b) Without the time-averaging method.


5. Conclusions

In summary, we developed a holographic multiplane near-eye display method. By using a DMD to modulate the wavefront, we simultaneously create multiple focal planes across a wide depth range while maintaining a high resolution and refresh rate. The optimized band-limited random phase enables complete separation of the primary diffracted signals from the DC and conjugation terms, thereby significantly increasing the image contrast. Featuring low cost and a compact form factor, our method is expected to have a unique edge in various VR and AR applications.

Funding

National Science Foundation (CAREER Award 1652150); Futurewei Technologies, Inc.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]  

2. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008). [CrossRef]

3. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

4. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

5. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  

6. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]  

7. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A Stereo Display Prototype with Multiple Focal Distances,” ACM Trans. Graph. 23(3), 804–813 (2004). [CrossRef]  

8. D. Cheng, Y. Wang, H. Hua, and J. Sasian, “Design of a wide-angle, lightweight head-mounted display using free-form optics tiling,” Opt. Lett. 36(11), 2098–2100 (2011). [CrossRef]  

9. X. Hu and H. Hua, “Design and Assessment of a Depth-Fused Multi-Focal-Plane Display Prototype,” J. Disp. Technol. 10(4), 308–316 (2014). [CrossRef]  

10. G. Tan, T. Zhan, Y. H. Lee, J. Xiong, and S. T. Wu, “Polarization-multiplexed multiplane display,” Opt. Lett. 43(22), 5651–5654 (2018). [CrossRef]  

11. W. Cui and L. Gao, “Optical mapping near-eye three-dimensional display with correct focus cues,” Opt. Lett. 42(13), 2475–2478 (2017). [CrossRef]  

12. W. Cui and L. Gao, “All-passive transformable optical mapping near-eye display,” Sci. Rep. 9(1), 6064 (2019). [CrossRef]  

13. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]  

14. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby, and M. S. Banks, “High-speed switchable lens enables the development of a volumetric stereoscopic display,” Opt. Express 17(18), 15716–15725 (2009). [CrossRef]  

15. R. Narain, R. A. Albert, A. Bulbul, G. J. Ward, M. S. Banks, and J. F. O’Brien, “Optimal presentation of imagery with focus cues on multiplane displays,” ACM Trans. Graph. 34(4), 1–12 (2015). [CrossRef]  

16. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014). [CrossRef]  

17. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016). [CrossRef]  

18. P. Zhou, Y. Li, S. Liu, and Y. Su, “Compact design for optical-see-through holographic displays employing holographic optical elements,” Opt. Express 26(18), 22866–22876 (2018). [CrossRef]  

19. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]  

20. A. Maimone, A. Georgiou, and J. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]

21. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

22. Q. Gao, J. Liu, J. Han, and X. Li, “Monocular 3D see-through head-mounted display via complex amplitude modulation,” Opt. Express 24(15), 17372–17383 (2016). [CrossRef]  

23. J. H. Park and S. B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]  

24. S. Kazempourradi, E. Ulusoy, and H. Urey, “Full-color computational holographic near-eye display,” J. Inf. Disp. 20(2), 45–59 (2019). [CrossRef]

25. V. Arrizón, G. Méndez, and D. Sánchez-de-La-Llave, “Accurate encoding of arbitrary complex fields with amplitude-only liquid crystal spatial light modulators,” Opt. Express 13(20), 7913–7927 (2005). [CrossRef]

26. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

27. S. Suyama, S. Ohtsuka, H. Takada, K. Uehira, and S. Sakai, “Apparent 3-D image perceived from luminance-modulated two 2-D images displayed at different depths,” Vision Res. 44(8), 785–793 (2004). [CrossRef]  

28. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]  

29. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005).

30. T. Shimobaba, M. Makowski, T. Kakue, M. Oikawa, N. Okada, Y. Endo, R. Hirayama, and T. Ito, “Lensless zoomable holographic projection using scaled Fresnel diffraction,” Opt. Express 21(21), 25285–25290 (2013). [CrossRef]  

31. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

32. C. Chang, Y. Qi, J. Wu, J. Xia, and S. Nie, “Image magnified lensless holographic projection by convergent spherical beam illumination,” Chin. Opt. Lett. 16(10), 100901 (2018). [CrossRef]  

33. A. V. Zea, J. F. B. Ramirez, and R. Torroba, “Optimized random phase only holograms,” Opt. Lett. 43(4), 731–734 (2018). [CrossRef]  

34. W. Qu, H. Gu, H. Zhang, and Q. Tan, “Image magnification in lensless holographic projection using double-sampling Fresnel diffraction,” Appl. Opt. 54(34), 10018–10021 (2015). [CrossRef]  

35. W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13(3), 220–226 (2018). [CrossRef]  

36. Y. H. Lee, G. Tan, T. Zhan, Y. Weng, G. Liu, F. Gou, F. Peng, N. V. Tabiryan, S. Gauza, and S. T. Wu, “Recent progress in Pancharatnam-Berry phase optical elements and the applications for virtual/augmented realities,” Opt. Data Process. Storage 3(1), 79–88 (2017). [CrossRef]  

37. Y. Deng and D. Chu, “Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays,” Sci. Rep. 7(1), 5893 (2017). [CrossRef]  

Supplementary Material (2)

Visualization 1: Dynamic focusing for the 3D tilted grid.
Visualization 2: Dynamic focusing for the 3D dice.
