
Foveated holographic near-eye 3D display

Open Access

Abstract

We present a foveated rendering method to accelerate the amplitude-only computer-generated hologram (AO-CGH) calculation in a holographic near-eye 3D display. For a given target image, we compute a high-resolution foveal region and a low-resolution peripheral region with dramatically reduced pixel numbers. Our technique significantly improves the computation speed of the AO-CGH while maintaining the perceived image quality in the fovea. Moreover, to accommodate the eye gaze angle change, we develop an algorithm to laterally shift the foveal image with negligible extra computational cost. Our technique holds great promise for advancing holographic 3D displays toward real-time use.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

16 January 2020: A typographical correction was made to Ref. 14.

1. Introduction

Near-eye displays (NEDs) for virtual reality (VR) and augmented reality (AR) have attracted considerable interest because of their ability to provide immersive and interactive experiences. While most commercial NEDs are based on binocular disparities, presenting a pair of stereoscopic images, holographic NEDs based on wavefront modulation reconstruct three-dimensional (3D) images with full depth cues [1–5], solving the vergence-accommodation conflict (VAC), i.e., the mismatch between the vergence and focal distances of the eye [6]. Compared with other VAC-free and accommodation-supporting techniques, e.g., light field displays [7–10] or multifocal/varifocal displays [11–14], holographic displays provide more natural monocular focus cues, a vision-correction ability, a higher resolution, and a more compact form factor, making them ideal candidates for NED-based applications [15–17].

Despite being a promising technique, the holographic display faces a significant challenge in calculating computer-generated holograms (CGHs) in real time [18]. For example, point-cloud-based wavefront propagation is one of the most popular methods for computing a CGH, but the massive data processing typically takes tens of minutes on a personal computer. Although many computational methods have been deployed to simplify the 3D data rendering by using models such as a wavefront recording plane [19], ray tracing [20], a polygon mesh [21], or multi-plane layers [22], the CGH calculation still incurs a much higher computational cost than standard stereoscopic image rendering.

By contrast, foveated rendering accelerates the CGH calculation by reassigning the pixels in a way that matches the photoreceptor density distribution at the retina. A human eye has a high density of photoreceptors only in the central area of the retina, referred to as the fovea region (∼5°), while in the peripheral area the visual acuity is low [23–25]. Accordingly, foveated rendering produces a high-resolution image only in the center of the field of view (FOV) while creating a low-resolution representation for the peripheral vision, thereby significantly reducing the computational load without degrading the perceived image quality.

The application of foveated rendering in holographic NEDs is emerging [26–28]. For example, J. Hong et al. developed a foveated rendering technique for point-cloud-based CGH generation by lowering the point sampling density in the peripheral regions [26]. However, the down-sampling creates vacant spaces between points, requiring an additional lateral shifting operation of the hologram to compensate for this visual defect. Following the same conceptual thread, L. Wei et al. and Y.-G. Ju et al. applied foveated rendering to a ray-tracing model [27] and a triangular-mesh model [28], respectively, reducing the computational cost and latency. They also demonstrated progressive updates from low-resolution to high-resolution images by overlaying the high-resolution patch on the low-resolution scene with an occlusion handling technique [28]. Nonetheless, the foveated rendering demonstrated so far (on 3D points, rays, and meshes) relies on complicated geometry and computer-graphics processing, which constrains its real-world application.

As an alternative approach, the layer-based CGH model is much simpler and more efficient: the 3D scene is rendered as multiple planar 2D images, and the corresponding CGH can be calculated using fast-Fourier-transform (FFT) based diffraction algorithms. We previously demonstrated that such a system can provide a high resolution and a wide FOV [29]. In this work, we develop a foveated rendering method for this system to accelerate the amplitude-only CGH (AO-CGH) calculation. By varying the sampling density at each depth layer, we greatly reduce the computation time.

2. Principle and method

We illustrate the basic principle for calculating the CGH in a holographic multiplane display system [29] in Fig. 1(a). For simplicity, we consider only one layer image at the virtual image plane. After creating an intermediate filtered plane, we compute the CGH from the layer image using double-step Fresnel diffraction (DSFD), propagating from the image plane to the filtered plane and then to the hologram plane. To completely separate the DC and conjugation noise from the amplitude-only modulation, in the first step we use an iterative algorithm to optimize the initial random phase of the image, limiting the bandwidth of its Fourier transform at the filtered plane [29]. For a high-resolution (e.g., 1080p) CGH, the computational cost of this iterative optimization is prohibitive.

Fig. 1. Principle of foveated rendering in a holographic multiplane display. (a) Computation model for CGH generation. (b) Rendering of foveal and peripheral sub-images. (c) Calculation of the foveal image. (d) Calculation of the peripheral image.
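
For readers who wish to experiment with the propagation steps, the following is a minimal numerical sketch in Python/NumPy (function and variable names are ours, not the authors') of the single-FFT Fresnel diffraction used in both steps of the DSFD pipeline.

import numpy as np

def fresnel_single_fft(u1, wavelength, z, dx1):
    # Single-FFT (Fourier-form) Fresnel propagation of an N x N field u1
    # sampled with pitch dx1 over a distance z (all lengths in the same unit).
    # Returns the propagated field and its output pitch dx2 = wavelength*z/(N*dx1).
    N = u1.shape[0]
    k = 2 * np.pi / wavelength
    x1 = (np.arange(N) - N // 2) * dx1
    X1, Y1 = np.meshgrid(x1, x1)
    dx2 = wavelength * z / (N * dx1)          # output sampling pitch set by the FFT
    x2 = (np.arange(N) - N // 2) * dx2
    X2, Y2 = np.meshgrid(x2, x2)
    inner = u1 * np.exp(1j * k / (2 * z) * (X1**2 + Y1**2))       # pre-chirp
    ft = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(inner)))    # single FFT
    u2 = (np.exp(1j * k * z) / (1j * wavelength * z)
          * np.exp(1j * k / (2 * z) * (X2**2 + Y2**2)) * ft * dx1**2)
    return u2, dx2

The output pitch λz/(NΔ) follows directly from the discrete Fourier transform, which is why the foveal and peripheral wavefronts described below can be placed on a common grid at the filtered plane.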

2.1. Foveated image rendering

To account for the variation in the photoreceptor density at the retina, we separate the original image into high- and low-resolution regions for fast calculation of the hologram. We illustrate the underlying principle in Fig. 1(b). Both the original high-resolution image and the SLM possess the same number of pixels, N×N. For a given pixel pitch Δ, the dimension of the virtual image plane is H(height)=W(width)=NΔ. For foveated rendering, we divide the original image into two sub-images. The first image, referred to as the “foveal image”, is extracted from the central part of the original image. We define the parameter Rh (Rh < 1) as the ratio of the pixel count along one dimension of this foveal image to that of the entire image. The total number of pixels in the foveal image is thus RhN×RhN, with the original pixel pitch Δh=Δ. The second image is composed from the peripheral area of the original image with the central area set to zero, followed by down-sampling. The resultant low-resolution image, referred to as the “peripheral image”, has RLN×RLN pixels, where RL is a scaling factor and RL < 1. To maintain the original image size, we accordingly increase the pixel pitch to ΔL=Δ/RL. The foveal and peripheral images so obtained serve as the bases for the subsequent hologram calculations.
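
The rendering itself reduces to array slicing and down-sampling. The sketch below (NumPy; the names and the block-averaging down-sampler are our choices, and it assumes N is divisible by RLN) produces the two sub-images defined above.

import numpy as np

def render_foveated(image, R_h=0.5, R_L=0.25):
    # Split an N x N image into a high-resolution foveal crop (pitch unchanged)
    # and a low-resolution peripheral image: central region zeroed, then
    # block-averaged down to (R_L*N) x (R_L*N), i.e. pitch enlarged to dx/R_L.
    N = image.shape[0]
    n_h = int(R_h * N)
    n_L = int(R_L * N)
    c0 = (N - n_h) // 2
    foveal = image[c0:c0 + n_h, c0:c0 + n_h].copy()
    periph_full = image.astype(float).copy()
    periph_full[c0:c0 + n_h, c0:c0 + n_h] = 0.0
    block = N // n_L
    peripheral = periph_full.reshape(n_L, block, n_L, block).mean(axis=(1, 3))
    return foveal, peripheral

# example: a 1024 x 1024 test image gives a 512 x 512 foveal image
# and a 256 x 256 peripheral image, as in Fig. 2
img = np.random.rand(1024, 1024)
fov, per = render_foveated(img, R_h=0.5, R_L=0.25)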

2.2. Calculation of the foveal image

We describe the calculation of the foveal image in Fig. 1(c). The foveal image Ih(x, y) is forward-propagated to the filtered plane using the Fresnel diffraction approximation [30]:

$${M_h}({x_{hm}},{y_{hm}}) = \int\!\!\!\int {{I_h}({x,y} )\cdot \exp [{i{\varphi_{h - init}}({x,y} )} ]\cdot \exp \left\{ {\frac{{ik}}{{2({z + d} )}}[{{{({{x_{hm}} - x} )}^2} + {{({{y_{hm}} - y} )}^2}} ]} \right\}dxdy} ,$$
where φh-init(x, y) is the initial random phase. Equation (1) is numerically calculated using the single-FFT-based Fresnel diffraction algorithm (also known as Fresnel diffraction in the Fourier form) [31,32]. According to the Nyquist sampling criterion, the resultant diffraction pattern Mh(xhm, yhm) occupies the whole frequency spectrum bandwidth (λ(z + d)/Δh=λ(z + d)/Δ) at the filtered plane with a pixel pitch λ(z + d)/(RhNΔ). To obtain a band-limited diffraction pattern, we optimize the initial random phase φh-init(x, y) of the foveal image using an iterative algorithm [29], which spatially confines the primary diffracted light at the filtered plane without information loss. This iterative optimization is fast because of the reduced pixel number (RhN×RhN) of the foveal image. After obtaining the updated band-limited wavefront Mh(xhm, yhm), we interpolate it into Mh(xm, ym), restoring the resolution and pixel pitch to N×N and λ(z + d)/(NΔ), respectively, for the second-step calculation.
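
The sketch below illustrates one possible form of this band-limiting iteration in a Gerchberg-Saxton style; it reuses the fresnel_single_fft helper from the sketch in Section 2, and the exact update rule and band-limit mask of Ref. 29 may differ from this assumed version.

import numpy as np

def optimize_foveal_phase(I_h, band_mask, wavelength, z_plus_d, dx, n_iter=10, seed=0):
    # Iteratively optimize the initial random phase of the foveal image so that
    # its Fresnel diffraction pattern at the filtered plane stays inside the
    # region selected by the boolean array band_mask (same shape as I_h).
    rng = np.random.default_rng(seed)
    n = I_h.shape[0]
    dx_m = wavelength * z_plus_d / (n * dx)          # filtered-plane pixel pitch
    field = I_h * np.exp(1j * 2 * np.pi * rng.random(I_h.shape))
    for _ in range(n_iter):
        M_h, _ = fresnel_single_fft(field, wavelength, z_plus_d, dx)
        M_h *= band_mask                              # enforce the band limit
        # backward propagation via the conjugation trick (constant factor ignored)
        back, _ = fresnel_single_fft(np.conj(M_h), wavelength, z_plus_d, dx_m)
        back = np.conj(back)
        field = I_h * np.exp(1j * np.angle(back))     # keep phase, restore amplitude
    M_h, _ = fresnel_single_fft(field, wavelength, z_plus_d, dx)
    return M_h * band_mask

After convergence, the (RhN×RhN) wavefront Mh would be interpolated (e.g., on its real and imaginary parts) to the full N×N grid for the second DSFD step; that resizing step is omitted in this sketch.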

2.3. Calculation of the peripheral image

The calculation of the peripheral image is illustrated in Fig. 1(d). Similarly, the peripheral image IL(x, y) that is forward-propagated to the filtered plane can be expressed as

$${M_L}({x_{Lm}},{y_{Lm}}) = \int\!\!\!\int {{I_L}({x,y} )\cdot \exp [{i{\varphi_{L - init}}({x,y} )} ]\cdot \exp \left\{ {\frac{{ik}}{{2({z + d} )}}[{{{({{x_{Lm}} - x} )}^2} + {{({{y_{Lm}} - y} )}^2}} ]} \right\}dxdy} ,$$
where φL-init(x, y) is the initial random phase. Because the peripheral image IL(x, y) is down-sampled, with its pixel pitch increased from Δ to ΔL=Δ/RL, its Fresnel diffraction wavefront at the filtered plane, with RLN×RLN pixels and a pixel pitch λ(z + d)/(NΔ), is confined to a physical size of λ(z + d)/ΔL=λ(z + d)RL/Δ. This is smaller than the whole diffraction bandwidth λ(z + d)/Δ of the filtered plane, eliminating the need for optimizing the initial random phase and thereby reducing the computational cost. To restore the resolution and pixel pitch, we zero pad the diffraction wavefront ML(xLm, yLm) and construct a new complex amplitude ML(xm, ym) of dimensions N×N.
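
Because both grids share the pixel pitch λ(z + d)/(NΔ), this step is simply a centered zero-padding of the small wavefront onto the full N×N grid, as sketched below (our own naming; the padding convention at odd sizes may differ from the original implementation).

import numpy as np

def embed_peripheral(M_L, N):
    # Centered zero-padding of the (R_L*N) x (R_L*N) filtered-plane wavefront of
    # the peripheral image into the full N x N grid (same pixel pitch on both grids).
    out = np.zeros((N, N), dtype=complex)
    n = M_L.shape[0]
    c0 = (N - n) // 2
    out[c0:c0 + n, c0:c0 + n] = M_L
    return out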

2.4. Generation of AO-CGH

Because the two Fresnel diffraction patterns Mh(xm, ym) and ML(xm, ym) at the filtered plane have the same resolution N×N and pixel sampling λ(z + d)/(NΔ), we sum them to compute an overall diffraction pattern:

$$M({x_m},{y_m}) = {M_h}({x_m},{y_m}) + \alpha {M_L}({x_m},{y_m}),$$
where α serves as a weighting factor to compensate for the intensity variation between the foveal and peripheral images caused by image rescaling. Finally, to obtain the complex hologram H(xh, yh) at the display plane (xh, yh), we perform the second-step backward diffraction calculation using the Fresnel approximation:
$$H({x_h},{y_h}) = \int\!\!\!\int {M({{x_m},{y_m}} )\cdot \exp \left\{ {\frac{{ - ik}}{{2d}}[{{{({{x_h} - {x_m}} )}^2} + {{({{y_h} - {y_m}} )}^2}} ]} \right\}d{x_m}d{y_m}} .$$
To compute the hologram for a 3D scene modeled by multiple layers, we apply foveated rendering to each layer image and calculate the corresponding sub-hologram Hi(xh, yh), where i is the layer index. We obtain the final complex hologram by summing all the sub-holograms as $\sum\limits_{i = 1}^{{N_{Layers}}} {{H_i}({x_h},{y_h})}$, followed by encoding it into an AO-CGH with positive real values [29].
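
The sketch below shows this second DSFD step and a simple amplitude-only encoding. It reuses fresnel_single_fft from Section 2; the real-part-plus-bias encoding is only an illustration under our assumptions, and the encoding actually used in Ref. 29 may differ.

import numpy as np

def hologram_from_filtered_plane(M_h_full, M_L_full, wavelength, d, dx_m, alpha=1.0):
    # Eq. (3): weighted sum of the two filtered-plane wavefronts.
    M = M_h_full + alpha * M_L_full
    # Eq. (4): backward Fresnel propagation (kernel -ik/2d) over the distance d
    # to the hologram plane, done with the conjugation trick around the forward
    # single-FFT helper (constant prefactor ignored).
    H, dx_h = fresnel_single_fft(np.conj(M), wavelength, d, dx_m)
    return np.conj(H), dx_h

def encode_amplitude_only(H_total):
    # Illustrative amplitude-only encoding: take the real part, add a bias so
    # all values are non-negative, and normalize for the display.
    A = np.real(H_total)
    A = A - A.min()
    return A / A.max()

For a multi-layer scene, the first DSFD step would be evaluated per layer with its own distance z, the resulting complex sub-holograms Hi summed at the hologram plane, and the sum encoded as above.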

We validated our method by calculating the AO-CGHs from two test images of 1024×1024 resolution with the foveated rendering technique, and then numerically reconstructed the AO-CGHs to the virtual image plane via intermediate filtering. The reconstruction results are shown in Fig. 2. In Figs. 2(a) and 2(c) we set Rh=0.5 to render a foveal image of resolution 512×512, while in Figs. 2(b) and 2(d) we set Rh=0.25, leading to a foveal image of resolution 256×256. To render the peripheral image of resolution 256×256, we set RL=0.25 in both cases. As expected, the simulation results show that a high-resolution image is restored in the foveal region while the images in the peripheral area exhibit a low resolution.

Fig. 2. Simulation reconstructions of the foveated display. The foveal images were rendered with Rh=0.5 in (a), (c) and Rh=0.25 in (b), (d).

2.5. Shifted foveated rendering

The foveated rendering displays a high-resolution image only in the central region of the FOV. To accommodate the gaze angle change of the eye, we developed a method to laterally shift the foveal image (Fig. 3(a)). Because the numerical Fresnel diffraction algorithm is shift-invariant, to update the wavefront we simply multiply it by a linear phase factor (a blazed tilting phase):

$$M_h^{\prime}({x_m},{y_m}) = {M_h}({x_m},{y_m}) \cdot \exp \left[ {\frac{{i2\pi ({{x_m}{\theta_x} + {y_m}{\theta_y}} )}}{{\lambda ({z + d} )}}} \right], $$
where (θx, θy) are the eye gaze angles, calculated as:
$${\theta _x} = \arctan \left[ {\frac{{{x_{shift}}}}{{({z + d} )}}} \right],{\theta _y} = \arctan \left[ {\frac{{{y_{shift}}}}{{({z + d} )}}} \right].$$

Fig. 3. Shifted foveated rendering. (a) Shifting of the high-resolution foveal image to accommodate the eye gaze angle change. (b) Calculation of the shifted foveal image using a linear additive phase.

Here (xshift, yshift) are the central coordinates of the foveal image at the virtual image plane. Figure 3(b) shows the reconstruction of the foveal image with and without applying the shifting phase.
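
In code, this shift amounts to one elementwise multiplication of the N×N foveal wavefront at the filtered plane. The sketch below (our own naming) directly transcribes Eqs. (5) and (6).

import numpy as np

def shift_foveal_wavefront(M_h_full, wavelength, z_plus_d, dx_m, x_shift, y_shift):
    # Apply the linear (blazed tilting) phase of Eq. (5) with the gaze angles of
    # Eq. (6) so the foveal content reconstructs centered at (x_shift, y_shift).
    N = M_h_full.shape[0]
    xm = (np.arange(N) - N // 2) * dx_m
    Xm, Ym = np.meshgrid(xm, xm)
    theta_x = np.arctan(x_shift / z_plus_d)
    theta_y = np.arctan(y_shift / z_plus_d)
    tilt = np.exp(1j * 2 * np.pi * (Xm * theta_x + Ym * theta_y)
                  / (wavelength * z_plus_d))
    return M_h_full * tilt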

3. Performance of CGH computation speed

To evaluate the computation time, we tested our algorithm using an image of 1024×1024 pixels. Because the iterative optimization for calculating the band-limited initial random phase accounts for most of the computational cost, we set RL=0.25 for rendering the peripheral image while varying Rh over 0.5, 0.25, and 0.125 for rendering the foveal image. The resultant foveal images were of 512×512, 256×256, and 128×128 pixels, respectively, while the peripheral image maintained the same resolution (256×256 pixels). For comparison, we also assessed the computational cost of the conventional method, in which the AO-CGH was calculated from the full-resolution image (1024×1024 pixels).

We first show the convergence of the iterative algorithm for band-limited initial random phase optimization in Fig. 4(a). In each iteration, we calculated the root mean square error (RMSE) between the numerical reconstruction and the target (or foveal) image. The RMSE value descends quickly within the first several iterations and becomes nearly stable after ten iterations in all cases. In Fig. 4(b), we compare the total computation time of the AO-CGHs for the foveal images of different resolutions at varied iteration numbers (3 to 16). All the calculations were implemented in Matlab R2018a on an Intel Core i5-4260U CPU (2.00 GHz) with 8.00 GB RAM. The calculation time for the conventional method (marked by the red dots in Fig. 4(b)) increases significantly with the iteration number, exceeding 10 seconds after 16 iterations. By contrast, the foveated rendering shows a much faster computation speed due to the greatly reduced pixel numbers (512×512, 256×256, and 128×128). For example, the calculation time of the conventional method is 10.3 s for 16 iterations on the 1024×1024 image, whereas the proposed method (including one iteration for the 256×256 peripheral image) takes 4.0 s, 1.9 s, and 1.6 s for foveal images with 512×512, 256×256, and 128×128 pixels, respectively. The computation could potentially be accelerated to real time on a PC with a fast GPU.

Fig. 4. Performance of CGH computation. (a) Convergence of the iteration algorithm for band-limited initial random phase optimization. (b) Comparison of computation time.

4. Experimental results

We demonstrated our method in experiments. Figure 5 shows the optical setup. We used a low-cost transmissive liquid crystal micro-display (EPSON L3C07U-85G13, 1920×1080, pixel pitch 8.5 µm) as the amplitude modulation display module. To remove the focused DC component at the filtered plane and to increase the diffraction angle of the pixelated display, we used a lens with a short focal length (f = 60 mm) to converge the beam [29]. Compared with our previous implementation, in which a reflective SLM modulated the wavefront [29], the transmissive configuration described here enables a wider field of view (FOV) and a more compact system. At the filtered plane (d = 55 mm from the LCD panel), we employed a band-pass filter to eliminate the DC and conjugation noises. The holographic reconstructions are combined with the real-world objects by a prism for the see-through demonstration. The images were recorded by a digital camera consisting of a CMOS sensor (Sony Alpha a7s) and a varifocal lens (focusing distance: 400 mm to infinity).

Fig. 5. Optical setup. BS, beam splitter.

We used the same test images (1024×1024 pixels; Fig. 2) in the experiments as in the simulations. The images were placed at z = 400 mm. Figures 6(a) and 6(b) show the display results when using the conventional method to generate the AO-CGHs. Figures 6(c)–6(f) show the foveated rendering results. Here we set RL=0.25 for the peripheral image, with Rh=0.5 (512×512 foveal image) in Figs. 6(c) and 6(d) and Rh=0.25 (256×256 foveal image) in Figs. 6(e) and 6(f). The zoom-in views show the expected resolution difference between the foveal and peripheral regions.

Fig. 6. Experimental results of displaying 2D test images. (a) (b) Reconstructions of AO-CGHs without using foveated rendering. (c) (d) Reconstructions of AO-CGHs using foveated rendering with Rh=0.5. (e) (f) Reconstructions of AO-CGHs using foveated rendering with Rh=0.25.

We further demonstrate the foveated holographic near-eye display of a multi-plane object in Fig. 7. Three binary images of 1024×1024 pixels were placed at distances of z1=400 mm, z2=700 mm, and z3=1000 mm in the calculation. In the AO-CGH calculation, each image was rendered into a 512×512 foveal image and a 256×256 peripheral image. Figure 7 shows the foveated see-through AR display results when the camera is focused at the various focal planes along with the respective real-world objects. The accommodation cues at each focal plane are clearly presented within the foveal region. Figure 8 shows another example of the foveated holographic near-eye display of a 3D scene with continuous depth cues between 400 mm (2.5 D) and 1000 mm (1 D). This 3D model was sliced into four layer images using a depth-weighted blending (DWB) algorithm [22,29,33], which assigns the contents of each layer according to the 2D projection image and the depth map shown in Figs. 8(a) and 8(b), respectively. The foveated rendering parameters of each layer image are the same as in Fig. 7. Figures 8(c)–8(e) are the reconstructions with different camera focuses. The zoom-in views show that the central foveal contents exhibit sharp focus cues.

Fig. 7. Experimental results of displaying a multiplane object. The images were captured at a nominal focus of 400 mm, 700 mm and 1000 mm, respectively.

Fig. 8. Experimental results of displaying a continuous 3D scene. (a) 2D projection image. (b) Depth map. (c)-(e) Captured images at a nominal focus of 400 mm, 700 mm, and 1000 mm.

In the above foveated holographic display experiments, the central point of the rendered foveal images was fixed at the coordinate origin (0, 0). To demonstrate the ability to accommodate the eye gaze angle change, we shifted the foveal region to several test locations. In this case, the central coordinate (eye-gaze point) of the rendered foveal image is xshift=NxΔ, yshift=NyΔ, where Nx and Ny are the horizontal and vertical translation pixel numbers from the origin. The AO-CGH was generated for the foveated rendered sub-images by adapting Eqs. (5) and (6) for the off-axis calculation. Figure 9 shows the display results of a 2D text image with shifted foveal contents (foveal image: 512×512, peripheral image: 256×256) when the eye gaze point moves to the upper-left (−200Δ, 200Δ) in Fig. 9(a), the center (0, 0) in Fig. 9(b), and the lower-right (200Δ, −200Δ) in Fig. 9(c). The images in Fig. 10 show the results of a continuous 3D scene, where the center of the foveal contents moves to (−150Δ, −150Δ) in the first row, (0, 0) in the second row, and (150Δ, 150Δ) in the third row. The collective results indicate that our method can accommodate the eye gaze angle change with the aid of an eye-tracking device.

Fig. 9. Experimental results of 2D test images with a shifted foveal region. (a) Reconstructions with an eye-gazing coordinate (−200Δ, 200Δ). (b) Reconstructions with an eye-gazing coordinate (0, 0). (c) Reconstructions with an eye-gazing coordinate (200Δ, −200Δ).

Fig. 10. Experimental results of displaying a continuous 3D scene with a shifted foveal region. The eye gazes at (−150Δ, −150Δ), (0, 0) and (150Δ, 150Δ) for the results shown in the first, second, and third row, respectively. The camera focuses at 400 mm, 700 mm, and 1000 mm for the results shown in the first, second, and third column, respectively.

5. Discussions

5.1. Removal of high-order aliasing in the foveal image calculation

In the calculation of the band-limited diffraction wavefront at the filtered plane for the foveal image, we apply an up-sampling operation to rescale the (RhN×RhN) wavefront Mh(xhm, yhm) to the (N×N) wavefront Mh(xm, ym), altering the sampling pitch from λ(z + d)/(RhNΔ) to λ(z + d)/(NΔ). The Fresnel diffraction wavefront is discretized into matrix grids by the sampling of the FFT-based numerical calculation. As a result, the reconstruction from the pixelated wavefront produces multiple replicated images of high diffraction orders at the reconstruction plane, as shown in Fig. 11(a). The reconstruction content in the foveal region outlined by the red lines is the desired image from the zero-order diffraction beam, while high-diffraction-order replicas are visible in the surrounding area. To avoid aliasing between the high-order replicas and the peripheral contents, we remove the high-order noise by multiplying the image with a binary mask of the same size as the rendered foveal image, followed by recalculating the Fresnel diffraction to the filtered plane, as shown in Fig. 11(a). This correction, which updates Mh(xm, ym) through an additional computation loop containing forward and backward Fresnel diffraction, is critical for removing the aliasing noise. The calculation is fast, imposing little burden on the whole computation. Notably, this additional update loop has been included in the results in Section 3.
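
A minimal sketch of one such correction loop is given below (our naming, reusing the fresnel_single_fft helper from Section 2); fovea_mask is assumed to be a binary N×N window marking the rendered foveal region at the virtual image plane.

import numpy as np

def remove_high_order_aliasing(M_h_full, fovea_mask, wavelength, z_plus_d, dx_m):
    # Propagate the up-sampled foveal wavefront back to the virtual image plane,
    # keep only the content inside the binary foveal window (suppressing the
    # high-order replicas), and propagate forward again to the filtered plane.
    # Backward propagation uses the conjugation trick around the forward helper.
    img, dx_img = fresnel_single_fft(np.conj(M_h_full), wavelength, z_plus_d, dx_m)
    img = np.conj(img) * fovea_mask          # mask out the replicas
    M_corr, _ = fresnel_single_fft(img, wavelength, z_plus_d, dx_img)
    return M_corr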

Fig. 11. Analysis of the grating diffraction effect in calculation and reconstruction. (a) Removal of high order diffraction aliasing by using an additional correction loop for updating the wavefront at the filtered plane. (b) Example of uneven intensity distribution due to the diffraction effect.

5.2. Uneven intensity distribution of the foveal image

The pixelated wavefront modulation in Fresnel holography produces periodic high-order replicated images in the reconstruction, and this grating diffraction effect creates an “uneven” (but symmetric) intensity distribution within each diffraction order [34] due to the finite pixel aperture. As a result, there is an intensity difference between the center and the edge of the light distribution area of each diffraction order, following a sinc-function profile across the reconstruction plane. Meanwhile, the zero-order diffraction areas differ for the foveal and peripheral images because of their different physical sizes at the virtual image plane. The foveal image is smaller and therefore has a smaller zero-order diffraction area. This mismatch of the zero-order diffraction sizes produces the uneven intensity distribution between the foveal image and the whole image content. Figure 11(b) shows two representative examples of this uneven intensity distribution obtained with a white test image (rendered at 512×512 and 256×256, respectively). We also plot the intensity profiles of the lines across the images. The brightness degrades toward the edge of the foveal content in the reconstruction, resulting in a non-uniform intensity transition from the foveal region to the peripheral region, as observed in our experimental results. A potential solution to this problem is to pre-compensate for this uneven distribution in the rendered foveal image based on the measured diffraction profile.
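
One way such a pre-compensation could be implemented (our suggestion in line with the text above, not the authors' implementation) is to divide the rendered foveal image by an estimate of the sinc-squared intensity envelope of the zero diffraction order, as sketched below.

import numpy as np

def precompensate_sinc(foveal_image, fill_factor=1.0):
    # Divide the foveal image by the sinc^2 intensity roll-off of the zero
    # diffraction order (approximated from the pixel fill factor) so that the
    # displayed brightness is more uniform across the foveal region.
    n = foveal_image.shape[0]
    u = (np.arange(n) - n // 2) / n * fill_factor   # normalized coordinates
    U, V = np.meshgrid(u, u)
    envelope = (np.sinc(U) * np.sinc(V)) ** 2
    return foveal_image / np.clip(envelope, 1e-3, None)

In practice the envelope would be calibrated from the measured diffraction profile rather than assumed, as the text suggests.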

5.3. Field of view

The maximum FOV for the reconstructed image is calculated by θ ≈ Ld/d = NmaxΔd/d [29], where Ld is the size of the display panel, Δd is its pixel pitch, and Nmax is the maximum one-dimensional resolution of the display. In our experiment, the liquid crystal micro-display has a resolution of 1920×1080, resulting in a maximum FOV of θ ≈ 1920×8.5 µm/55 mm ≈ 17°. Similarly, because a smaller spatial bandwidth product (SBP) RhN is used in calculating the foveal image, the corresponding one-dimensional FOV is θ ≈ RhNΔd/d. When using 512×512 pixels in rendering the foveal image, the system provides a FOV of 4.5°×4.5° for the fovea region, matching that of the human eye.
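
As a quick numerical check of these expressions (parameter values taken from the text):

import numpy as np

pixel_pitch = 8.5e-6          # display pixel pitch, m
d = 55e-3                     # LCD-to-filter distance, m
for n_pixels, label in [(1920, "full display"), (512, "foveal region")]:
    fov_deg = np.degrees(n_pixels * pixel_pitch / d)
    print(f"{label}: {fov_deg:.1f} deg")   # ~17.0 deg and ~4.5 deg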

6. Conclusions

In summary, we have developed a foveated holographic near-eye display system that enables fast generation of AO-CGHs through a foveated rendering technique. Our method renders the original target image into a high-resolution foveal region and a low-resolution peripheral region, thereby effectively reducing the computational cost of the iteration-based double-step Fresnel diffraction algorithm. In addition, to accommodate the eye gaze angle change, we developed an off-axis phase deflection method to laterally shift the foveal region in the reconstruction. We expect our method to have a significant impact on near-eye holographic displays, advancing their practical use in real-time VR/AR applications.

Funding

National Science Foundation (1652150); Futurewei Technologies, Inc.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]  

2. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

3. J.-H. Park and S.-B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]  

4. S. Kazempourradi, E. Ulusoy, and H. Urey, “Full-color computational holographic near-eye display,” J. Inf. Disp. 20(2), 45–59 (2019). [CrossRef]  

5. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41(11), 2486–2489 (2016). [CrossRef]  

6. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence - accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008). [CrossRef]  

7. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

8. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

9. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 34(4), 1–12 (2015). [CrossRef]  

10. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018). [CrossRef]  

11. X. Hu and H. Hua, “High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics,” Opt. Express 22(11), 13896–13903 (2014). [CrossRef]  

12. W. Cui and L. Gao, “Optical mapping near-eye three-dimensional display with correct focus cues,” Opt. Lett. 42(13), 2475–2478 (2017). [CrossRef]  

13. W. Cui and L. Gao, “All-passive transformable optical mapping near-eye display,” Sci. Rep. 9(1), 6064 (2019). [CrossRef]  

14. K. Akşit, W. Lopes, J. Kim, P. Shirley, and D. Luebke, “Near-eye varifocal augmented reality display using see-through screens,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

15. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015). [CrossRef]  

16. P. Zhou, Y. Li, S. Liu, and Y. Su, “Compact design for optical-see-through holographic displays employing holographic optical elements,” Opt. Express 26(18), 22866–22876 (2018). [CrossRef]  

17. Q. Gao, J. Liu, J. Han, and X. Li, “Monocular 3D see-through head-mounted display via complex amplitude modulation,” Opt. Express 24(15), 17372–17383 (2016). [CrossRef]  

18. T. Shimobaba, T. Kakue, and T. Ito, “Review of fast algorithms and hardware implementations on computer holography,” IEEE Trans. Ind. Inf. 12(4), 1611–1622 (2016). [CrossRef]  

19. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]  

20. T. Ichikawa, T. Yoneyama, and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express 21(26), 32019–32031 (2013). [CrossRef]  

21. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]  

22. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

23. B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, “Foveated 3D graphics,” ACM Trans. Graph. 31(6), 1 (2012). [CrossRef]  

24. G. Tan, Y.-H. Lee, T. Zhan, J. Yang, S. Liu, D. Zhao, and S.-T. Wu, “Foveated imaging for near-eye displays,” Opt. Express 26(19), 25076–25085 (2018). [CrossRef]  

25. J. Kim, Y. Jeong, M. Stengel, K. Akşit, R. Albert, B. Boudaoud, T. Greer, W. Lopes, Z. Majercik, P. Shirley, J. Spjut, M. McGuire, and D. Luebke, “Foveated AR: Dynamically-foveated augmented reality display,” ACM Trans. Graph. 38(4), 1–15 (2019). [CrossRef]  

26. J. S. Hong, Y. M. Kim, S. H. Hong, C. S. Shin, and H. J. Kang, “Gaze contingent hologram synthesis for holographic head-mounted-display,” Proc. SPIE 9771, 97710K (2016). [CrossRef]  

27. L. Wei and Y. Sakamoto, “Fast calculation method with foveated rendering for computer-generated holograms using an angle-changeable ray-tracing method,” Appl. Opt. 58(5), A258–A266 (2019). [CrossRef]  

28. Y.-G. Ju and J.-H. Park, “Foveated computer-generated hologram and its progressive update using triangular mesh scene model for near-eye displays,” Opt. Express 27(17), 23725–23738 (2019). [CrossRef]  

29. C. Chang, W. Cui, and L. Gao, “Holographic multiplane near-eye display based on amplitude-only wavefront modulation,” Opt. Express 27(21), 30960–30970 (2019). [CrossRef]  

30. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005).

31. D. Mas, J. Garcia, C. Ferreira, L. M. Bernardo, and F. Marinho, “Fast algorithms for free-space diffraction patterns calculation,” Opt. Commun. 164(4-6), 233–245 (1999). [CrossRef]  

32. T. Shimobaba, J. Weng, T. Sakurai, N. Okada, T. Nishitsuji, N. Takada, A. Shiraki, N. Masuda, and T. Ito, “Computational wave optics library for C++: CWO++ library,” Comput. Phys. Commun. 183(5), 1124–1138 (2012). [CrossRef]  

33. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]  

34. J.-S. Chen, J. Jia, and D. Chu, “Minimizing the effects of unmodulated light and uneven intensity profile on the holographic images reconstructed by pixelated spatial light modulators,” Chin. Opt. Lett. 15(10), 100901 (2017). [CrossRef]  
