Abstract

In this paper, a method is proposed to implement a noise-reduced three-dimensional (3D) holographic near-eye display with a phase-only computer-generated hologram (CGH). The CGH is calculated by a double-convergence-light Gerchberg-Saxton (GS) algorithm, in which the phases of two virtual convergence lights are introduced into the GS algorithm simultaneously. The phase of the first convergence light replaces the random phase as the iterative initial value, and the phase of the second convergence light modulates the phase distribution calculated by the GS algorithm. Both simulations and experiments are carried out to verify the feasibility of the proposed method. The results indicate that this method can effectively reduce the noises in the reconstruction. The field of view (FOV) of the reconstructed image reaches 40 degrees, and the light path of the 4-f system in the experiment is shortened. As for the 3D experiments, the results demonstrate that the proposed algorithm can present 3D images with a 180cm zooming range and continuous depth cues. This method may provide a promising solution for future 3D augmented reality (AR) realization.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

AR technology combines virtual information with the real world in a common environment. This characteristic allows AR to be applied in various areas, such as large-scale manufacturing, national defense, healthcare, entertainment and mass media [1,2]. Among existing AR systems, the near-eye heads-up system is most widely adopted for its compact and portable design. In recent years, various optical schemes have been adopted in near-eye display realization, such as coaxial prisms [3], planar displays based on a transparent film array [4], freeform surfaces [5], multilayer displays with directional backlighting [6], stereoscopic see-through retinal projection [7] and so on.

Holography, as an effective method to realize 3D display, has attracted more and more attention in near-eye display systems. Several holography-based methods have been introduced into near-eye system design, such as holographic waveguides [8–10], computer-generated holography (CGH) [11–16] and other methods using different holographic optical elements (HOEs) [17,18]. Zeng et al. analyzed the key enabling technologies of waveguide holographic display in terms of technology bottlenecks and development trends [8]. Yeom et al. developed a 3D near-eye display with two HOEs serving as input and output couplers [9]. Liu et al. designed a compact color waveguide screen using a lens array, in which the zero-order light was eliminated [10]. However, the ghost image in waveguide systems is still hard to eliminate, and the fabrication of high-quality HOEs remains a big challenge. Since CGH can present images in a more natural viewing way, it is a promising technique for near-eye system design. The spatial light modulator (SLM) is the core component of a CGH system, just as the traditional micro-display is in a waveguide system. Moon et al. utilized an RGB light-emitting diode (LED) light source and CGH to fulfill a colorful 3D system design [11]. Chang et al. proposed methods based on Fresnel diffraction that suppress the speckle noise [12,14]. Qi et al. designed a speckleless 3D holographic display with a digital blazed grating [13]. The approach of Maimone et al. builds on the principles of Fresnel holography and double-phase amplitude encoding, with additional hardware, phase correction factors and SLM encodings, to achieve full-color, high-contrast and low-noise holograms with high resolution and true per-pixel focal control [15]. Gao et al. fabricated a see-through 3D head-mounted display based on wavefront modulation with a holographic grating filter [16]. Hong et al. proposed an integral floating system with a concave half mirror to obtain AR performance [17]. Lee et al. proposed a compact 3D system that uses the birefringence property of a Savart plate to produce double projection planes [18]. Although these existing methods already perform well in image reconstruction, some problems remain to be overcome: (1) phase-only SLM modulation degrades the quality of the reconstructed images because of the noises introduced in the hologram encoding and reconstructing process [16]; (2) a wide-FOV display is hard to achieve due to the limitations of the algorithm and experimental system; (3) the calculation of the CGH is time-consuming, which hinders dynamic display for AR performance.

In order to improve the quality of the reconstructed image and realize 3D display, we utilize a phase-only liquid crystal on silicon (LCoS) device to achieve noise-reduced holographic near-eye display based on a double-convergence-light Gerchberg-Saxton algorithm [19], called the DCL-GS algorithm. The computing kernel used in this paper is Fraunhofer diffraction instead of the Fresnel diffraction used in most holographic 3D display systems [12–14]. There are two reasons for using Fraunhofer diffraction: (1) the imaging distance of Fraunhofer diffraction is much longer compared with Fresnel diffraction and the angular spectrum method, so the zooming range of the 3D display can be made longer; (2) the reconstructed images can be magnified with simple lenses, which means a wider FOV can be acquired easily. The first virtual convergence light replaces the random phase as the initial value in the GS algorithm, which suppresses the speckle noise to some extent [20,21]. In Refs [20,21], the low-frequency parts of the object spread light widely over the CGH just like the high-frequency parts by multiplying the complex distribution of the object with the phase of a virtual convergence light, and the phase distribution on the CGH is obtained by one diffraction calculation. The second virtual convergence light has three functions: (1) the length of the 4-f system can be shortened; (2) the image reconstructed by the SLM is accompanied by a conjugate image, and with the help of the convergence light, the primary image and conjugate image can be separated in the axial direction; (3) 3D display can be realized with second convergence lights of different curvature radii. In addition, a digital blazed grating is used to separate the reconstructed image from the zero-order light [13,22]. With the help of the double-convergence light, high-quality reconstructed images are obtained with suppressed noise, and owing to the magnifying lenses, the FOV of the reconstructed image reaches 40 degrees in AR realization. What's more, 3D display is also achieved with a longer zooming range and continuous depth cues while guaranteeing the wide FOV. The feasibility of this method is verified by both numerical simulations and optical experiments.

2. Method and simulation

2.1 Method

The GS algorithm was proposed to solve the phase retrieval problem of a field at two different planes when only the field amplitudes at those planes are known and the fields are related by a Fourier transform. The algorithm is also often used to calculate the complex amplitude that light in one plane must have in order to form a desired complex amplitude in a second plane, where the light distribution in the second plane is related to that in the first by a propagating function such as the Fourier transform. Fast Fourier transforms are used to iteratively propagate the complex amplitude backward and forward between the Fourier (or SLM) plane and the image plane, replacing the amplitude at the SLM plane with the illuminating laser beam profile and the amplitude at the image plane with the target intensity.
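The iterative loop described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the propagation is a plain Fourier transform, matching the Fraunhofer kernel used in this paper, and the function name is our own:

```python
import numpy as np

def gs_phase_retrieval(target_amp, init_phase, n_iter=15):
    """Iterative GS loop between the SLM (hologram) plane and the image plane.

    target_amp : desired amplitude on the image plane
    init_phase : initial phase guess on the SLM plane (random phase, or a
                 convergence-light phase as in the DCL-GS variant)
    Returns the phase-only hologram on the SLM plane.
    """
    slm_amp = np.ones_like(target_amp)          # uniform illumination amplitude
    field = slm_amp * np.exp(1j * init_phase)
    for _ in range(n_iter):
        # forward propagation: SLM plane -> image plane (Fourier transform)
        img = np.fft.fftshift(np.fft.fft2(field))
        # keep the phase, enforce the target amplitude
        img = target_amp * np.exp(1j * np.angle(img))
        # backward propagation: image plane -> SLM plane
        field = np.fft.ifft2(np.fft.ifftshift(img))
        # keep the phase, enforce the illumination amplitude
        field = slm_amp * np.exp(1j * np.angle(field))
    return np.angle(field)
```

The returned phase map is what would be loaded onto a phase-only SLM.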

The GS algorithm seeks an extreme value by tracking the negative gradient direction of a multivariable function, so it is in essence a steepest descent algorithm. However, when dealing with a complex model, the steepest descent algorithm does not perform well, because the monotonicity of the model is poor and the error surface rises and falls frequently. Furthermore, the minimum obtained by the GS algorithm in a search process is usually related to the phase selected at the beginning of the search. Thus, the choice of the iterative initial value has a decisive influence on the results calculated by the GS algorithm.

The proposed method, called the DCL-GS algorithm, is a modification of the GS algorithm. A virtual convergence light is used as the iterative initial value in place of the traditional random phase [20]. Figure 1 shows the calculating relations of the CGH using the convergence light. u1(x1, y1) and u2(x2, y2) are the complex amplitudes on the CGH plane and image plane respectively. For the first iteration, u1(x1, y1) = a1(x1, y1)w1(x1, y1), where a1(x1, y1) is the amplitude of the incident light and w1(x1, y1) is the phase of the virtual convergence light. Then, we calculate the complex amplitude u2(x2, y2) by Fraunhofer diffraction of u1(x1, y1). There are some geometrical relations in Fig. 1. The distance from the focal point of the convergence light to the image plane is z1, and the distance from the CGH plane to the image plane is z2. Suppose that the horizontal and vertical sizes of the CGH and image are the same; then the length and width of the CGH plane are L1, and the length and width of the image plane are L2. Here z1, z2, L1 and L2 must fit the cone of the convergence light. Thus, the virtual convergence light can be expressed as

w1(x1, y1) = exp[iπ(x1² + y1²)/(λr1)]
where λ is the wavelength and r1 = z1 + z2 is the curvature radius of the virtual convergence light on the CGH plane. According to similar-triangle geometry, L1/2 : L2/2 = r1 : z1, hence r1 = z2/(1 − L2/L1). It is noteworthy that the maximum angle θ of the convergence light must satisfy θ ≤ sin⁻¹[λ/(2dL1)] to avoid aliasing error, where dL1 is the sampling pitch on the CGH plane.
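As an illustration, the convergence-light phase can be sampled on the CGH grid as follows. This is a hedged sketch: the function name is our own, and the image-to-CGH size ratio used in the example is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def convergence_phase(n, pitch, wavelength, r):
    """Quadratic phase of a virtual convergence light,
    w(x, y) = exp[i*pi*(x^2 + y^2)/(wavelength*r)], sampled on an n x n grid."""
    half = n * pitch / 2
    coords = np.linspace(-half, half, n, endpoint=False)
    x, y = np.meshgrid(coords, coords)
    return np.exp(1j * np.pi * (x**2 + y**2) / (wavelength * r))

# Example with the paper's sampling (6.4 um pitch, 532 nm) and an assumed
# image-to-CGH size ratio L2/L1 = 0.5, using r1 = z2 / (1 - L2/L1):
z2 = 0.700                       # CGH-to-image distance in meters
r1 = z2 / (1 - 0.5)              # similar-triangle relation (assumed ratio)
w1 = convergence_phase(1080, 6.4e-6, 532e-9, r1)
```

The phase of `w1` would replace the random initial phase in the GS iteration.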

Fig. 1 CGH with the first virtual convergence light.

When the horizontal and vertical sizes of the CGH and image are not the same, w1 should be expressed as w1(x1, y1) = exp[iπ(x1²/(λrx) + y1²/(λry))], where rx = z2/(1 − L2x/L1x) and ry = z2/(1 − L2y/L1y). Here L1x × L1y is the size of the CGH and L2x × L2y is the size of the image.

The next step is to put the iterative initial value into the algorithm; through iterations of Fourier transform and inverse Fourier transform, a suitable complex amplitude u3(x1, y1) on the CGH plane can be obtained. In order to avoid the zero-order light caused by the pixel structure of the LCoS, a digital blazed grating is used to shift the reconstructed image, a technique widely used for image reconstruction [13–15,22]. The expression of a two-dimensional digital blazed grating is

φbg(x1, y1) = (2π/T) mod(bx1 + cy1, T)
where mod is the modulo operation, T is the period of the digital blazed grating, and b and c are the offsets in the x1 and y1 directions respectively. The new phase distribution may exceed 2π, so we take the remainder φ = mod(φbg + φ3, 2π), where φ3 is the phase distribution of u3. With φ, we obtain the complex amplitude u4(x1, y1) with the digital blazed grating.
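The grating superposition above can be sketched directly in pixel units (a minimal illustration with our own function names; the period and offsets in the test values are arbitrary):

```python
import numpy as np

def blazed_grating_phase(shape, period, b, c):
    """2D digital blazed grating phi_bg = (2*pi/T) * mod(b*x + c*y, T),
    with x, y in pixel units and T the grating period in pixels."""
    y, x = np.indices(shape)
    return (2 * np.pi / period) * np.mod(b * x + c * y, period)

def add_grating(phase, period=8, b=1, c=0):
    """Superpose the grating on a CGH phase and wrap back into [0, 2*pi)."""
    return np.mod(phase + blazed_grating_phase(phase.shape, period, b, c),
                  2 * np.pi)
```

Loading the wrapped phase shifts the reconstructed image away from the zero-order spot by an angle set by the grating period and offsets.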

The complex amplitude u4(x1, y1) is multiplied by the second virtual convergence light w2(x1, y1). Figure 2 shows the light path behind the LCoS. The distance between the LCoS and Lens 1 is z, and the focal lengths of Lens 1 and Lens 2 are f1 and f2 respectively, so Plane 2 is the focal plane of Lens 1. Point A is the zero-order light caused by the LCoS, and it is separated from the reconstructed image formed by the green light. Incidentally, point A lies on the optical axis and the green line actually has an offset from the axis; we draw the green line and black line on opposite sides for the sake of clear size marking and convenient observation. Thus the zero-order light can be eliminated by a filter in Plane 2.

Fig. 2 Optical design for the second convergence light.

The conjugate image arises in phase-modulation-type reconstruction. The conjugate image and primary image are centrosymmetric with each other on the same focal plane, and the conjugate image can sometimes badly degrade the image quality. Thus the second convergence light is introduced. The expression of the second virtual convergence light takes the same form as the first one, w2(x1, y1) = exp[iπ(x1² + y1²)/(λr2)], so the complex amplitude loaded on the LCoS is u5(x1, y1) = u4(x1, y1)w2(x1, y1). After this modulation, the focal plane of the primary image moves forward to Plane 1 while the focal plane of the conjugate image moves backward to Plane 3. Plane 1 is located Δz in front of Plane 2, and Δz can be calculated as [23–25]:

Δz = f1²/(r2 − z + f1).
When the focal plane of Lens 2 is set at Plane 1, we can capture the reconstructed image with a camera behind Lens 2. Δz here is the distance by which the optical system can be shortened, and the distance between Lens 1 and Lens 2 becomes f1 + f2 − Δz.
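A tiny helper makes the focal-plane shift concrete. Note that the equation above was reconstructed from a garbled rendering as Δz = f1²/(r2 − z + f1); the helper assumes that form, and the example numbers are purely illustrative, not from the paper:

```python
def focal_shift(f1, r2, z):
    """Forward shift of the primary image plane produced by the second
    convergence light (curvature radius r2), with Lens 1 (focal length f1)
    a distance z behind the LCoS: dz = f1**2 / (r2 - z + f1).
    All lengths in the same unit (e.g. mm)."""
    return f1**2 / (r2 - z + f1)

# illustrative numbers (not from the paper): f1 = 400 mm, r2 = 1000 mm, z = 100 mm
dz = focal_shift(400.0, 1000.0, 100.0)   # about 123 mm of shortening
```

As expected from the formula, a smaller r2 (stronger convergence) gives a larger forward shift and a shorter 4-f system.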

For 3D realization, the CGH loaded on the LCoS is the superposition of several CGHs, and the focal planes of these CGHs are controlled by the convergence lights multiplied onto them. With different curvature radii r2 of the convergence light, the target CGHs are reconstructed at different depths.
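The multi-plane superposition can be sketched as below: each per-depth phase-only CGH is multiplied by a second-convergence-light phase with its own radius, the complex fields are summed, and the phase of the sum is loaded on the SLM. This is our own minimal sketch of the idea, not the authors' code; encoding the summed field as phase-only in this way is a lossy simplification:

```python
import numpy as np

def multiplane_cgh(plane_phases, r2_list, pitch, wavelength):
    """Superpose per-depth CGHs, each multiplied by a convergence-light
    phase of its own curvature radius r2, into one phase-only hologram.

    plane_phases : list of phase-only CGHs (one square array per depth plane)
    r2_list      : curvature radius of the second convergence light per plane
    """
    n = plane_phases[0].shape[0]
    half = n * pitch / 2
    coords = np.linspace(-half, half, n, endpoint=False)
    x, y = np.meshgrid(coords, coords)
    total = np.zeros_like(plane_phases[0], dtype=complex)
    for phi, r2 in zip(plane_phases, r2_list):
        # quadratic phase that shifts this plane's focus to its own depth
        w2 = np.exp(1j * np.pi * (x**2 + y**2) / (wavelength * r2))
        total += np.exp(1j * phi) * w2
    return np.mod(np.angle(total), 2 * np.pi)   # phase-only encoding
```

Changing an entry of `r2_list` moves the corresponding plane's reconstruction depth, which is how the zooming range is controlled.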

2.2 Simulation comparison and results

Computer simulations are conducted to demonstrate the performance of the DCL-GS algorithm in improving image quality. Figure 3(a) is the original image of 1080 × 1080 pixels, and the pixel pitch of the CGH is set as dL1 = 6.4μm to keep consistent with the LCoS used in the experiment. The wavelength of the laser is 532nm and the iteration number is 15. Under these conditions, Fig. 3(b) is the phase-only CGH calculated by the GS algorithm with a traditional random phase. Figure 3(c) is the image reconstructed from the CGH in Fig. 3(b). Intuitively, both the luminance and contrast of the image are reduced in varying degrees compared with the original image. In consideration of the aliasing error mentioned in Section 2.1, we set z2 = 700mm, and r1 can be calculated as 794.4mm. Figure 3(d) is the phase-only CGH calculated by the DCL-GS algorithm with the phase of the virtual convergence light. The red box indicates the influence of the second virtual convergence light on the CGH. Figure 3(e) is the image reconstructed from the CGH in Fig. 3(d). The image quality of Fig. 3(e) is intuitively better than that of Fig. 3(c).

Fig. 3 Computer simulation results. (a) The original image. (b) CGH calculated by GS algorithm with traditional random phase. (c) Image reconstructed from Fig. 3(b). (d) CGH calculated by DCL-GS algorithm with phase of virtual convergence light. (e) Image reconstructed from Fig. 3(d).

Some evaluation functions are used to compare the simulation results [26]. The first one is peak signal-to-noise ratio (PSNR). The equation of PSNR for 8-bit gray-level image is defined as

PSNR(x, y) = 10 log10[255²/MSE(x, y)]
MSE(x, y) = (1/MN) Σi=1..M Σj=1..N e(i, j)²
where M and N are the horizontal and vertical numbers of pixels and e(i, j) is the error difference between the original and the reconstructed images. The MSE, namely the mean square error, is calculated by averaging the squared difference between the original image and the reconstructed image. The higher the value of PSNR, the better the image quality. Another well-known function is the structural similarity index measure (SSIM) between the original image and the reconstructed image, given by
SSIM(x, y) = [(2μxμy + c1)(2σxy + c2)] / [(μx² + μy² + c1)(σx² + σy² + c2)]
where μx and μy are the mean values of the original and reconstructed images. σx and σy are the standard deviation of the original and reconstructed images. σxy is the covariance of both images. c1 and c2 are positive constants used to avoid the null denominator. The value of SSIM is between 0 and 1 and larger SSIM indicates better structural similarity.
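Both metrics are easy to compute directly from their definitions. The sketch below implements the PSNR formula above and a global (single-window) form of SSIM; note that SSIM is more commonly evaluated over local windows, so this global version is a simplification for illustration:

```python
import numpy as np

def psnr(ref, test):
    """PSNR for 8-bit images: 10*log10(255^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(255.0**2 / mse)

def ssim_global(x, y, c1=(0.01 * 255)**2, c2=(0.03 * 255)**2):
    """Global SSIM over the whole image (no sliding window), using the
    standard stabilizing constants c1, c2 for 8-bit dynamic range."""
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()          # sigma_x^2, sigma_y^2
    cov = ((x - mx) * (y - my)).mean() # sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

An identical pair of images gives SSIM = 1, and the PSNR grows as the reconstruction error shrinks, matching the comparison criteria used in this section.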

The comparison between the DCL-GS algorithm and the GS algorithm is shown in Fig. 4. Figures 4(a) and 4(b) show the results of PSNR and SSIM respectively. To be specific, the PSNR of the DCL-GS algorithm grows faster in the first 4 iterations and has the same growth rate as the GS algorithm in the following iterations. The final PSNR of the DCL-GS algorithm is 23.24, higher than the 18.93 of the GS algorithm. The SSIM of the DCL-GS algorithm also approaches 1 at a faster rate in the first 4 iterations. The final SSIM values of the DCL-GS algorithm and the GS algorithm are 0.91 and 0.80 respectively. The simulation results show that a faster convergence rate is obtained by the DCL-GS algorithm in the first 4 iterations, which means the reconstructed image can be obtained in fewer iterations and less time than with the GS algorithm. Generally speaking, the DCL-GS algorithm with the phase of a virtual convergence light performs better than the GS algorithm with a random phase.

Fig. 4 Comparison of image quality between GS algorithm and DCL-GS algorithm. (a) Results of PSNR over 15 iterations. (b) Results of SSIM over 15 iterations.

3. Experimental verification and discussions

Optical experiments are carried out to illustrate the ability of the DCL-GS algorithm to realize near-eye display with a wide FOV. The experimental setup is shown in Fig. 5. The phase-only SLM used here is a Holoeye LETO LCoS with a pixel pitch of 6.4μm and a resolution of 1920 × 1080. The CGH loaded on the LCoS is illuminated by a green laser source (532nm) after a polarization beam splitter (PBS), which ensures that the polarization state of the beam is aligned with that of the LCoS. The beam projected from the LCoS is transmitted through a 4-f system and spatial filter to suppress different noises. Finally the beam carrying the phase information is captured by a camera (Nikon D7100), and the camera can also capture the real scene through the beam splitter. Three original images (“cock”, “horse” and “bull”) from the Chinese Zodiac are used in the CGH calculation with 1920 × 1080 pixels. For convenient observation of the real scene, these images leave large blank margins.

Fig. 5 Schematic of experimental setup.

3.1 Elimination of conjugate images

The purpose of the first experiment is the elimination of conjugate images. In this experiment, the filter in Fig. 5 is replaced by a white board to receive the reconstructed image. Figures 6(a)-6(c) are the original images of “cock”, “horse” and “bull”. Figures 6(d)-6(l) are experimental results taken from the white board by the camera. The focal length f1 of lens 1 is 400mm, that is to say, the distance between lens 1 and the white board is 400mm. The image size of Figs. 6(d)-6(f) and Figs. 6(j)-6(l) is 8cm × 4.5cm, while in Figs. 6(g)-6(i) the image size is 8cm × 5.2cm.

Fig. 6 (a)-(c) The original images (“cock”, “horse” and “bull”). (d)-(l) Optical reconstructions of original images taken from white board by camera (image size is 8cm × 4.5cm). (d)-(f) images reconstructed from CGH calculated by GS algorithm with the first convergence light. (g)-(i) images reconstructed from CGH calculated by DCL-GS algorithm, and received 500mm, 400mm, 350mm behind lens 1 respectively. (j)-(l) images reconstructed from CGH calculated by DCL-GS algorithm with a filter.

Figures 6(d)-6(f) are reconstructed from the CGH calculated by the GS algorithm with only the first convergence light. The image quality is degraded by the zero-order light in the center and the conjugate image in the red box. The conjugate image is centrosymmetric with the primary image about the zero-order spot, but its intensity is much weaker than that of the primary image. Figures 6(g)-6(i) are reconstructed from the CGH calculated by the DCL-GS algorithm, and the zero-order light is shifted to the top of the image for reference. Figure 6(h) is received 400mm behind lens 1, like Figs. 6(d)-6(f). As we can see in the yellow box, the whole image is blurry except the zero-order light, which means both the primary and conjugate images are out of focus. Figure 6(g) is received about 500mm behind lens 1. When the blazed grating is introduced, the conjugate image is separated from the primary image in the same order, but the primary image is usually overlapped with conjugate images of other high orders. Though the intensity of a high order is weaker than that of the first order, its conjugate image still degrades the image quality and has to be eliminated. As we can see in the red box, the conjugate image is clear while the zero-order light and primary image are blurry. Figure 6(i) is received about 350mm behind lens 1. The primary image is clear enough for human eyes and the conjugate image is unnoticeable in the background light. Figures 6(g)-6(i) show that the primary image is successfully separated from the conjugate image with good visual quality, and other noises are also well suppressed. Figures 6(j)-6(l) are reconstructed from the CGH calculated by the DCL-GS algorithm with a filter. Compared with Figs. 6(d)-6(f), all the reconstructed results exhibit acceptable quality with the noises eliminated.

3.2 Near-eye display with wide FOV

A second experiment is carried out to prove the feasibility of near-eye display with different FOVs by the proposed method. The phase-only CGH is calculated by the DCL-GS algorithm and the experimental setup is shown in Fig. 5. In order to widen the FOV, lens 1 and lens 2 in the 4-f system are changeable here. The beam splitter is set close to lens 2 and the camera is set at the output plane, namely the focal plane of lens 2, to acquire the maximum FOV. The experimental results captured by the camera are presented in Fig. 7. In Fig. 7(a), the focal lengths f1, f2 of lens 1 and lens 2 are 250mm and 200mm respectively. The white board is placed in the plane where it also appears sharp in the camera, so this plane is the focal plane of the virtual image. Then the FOV can be calculated as:

θ = 2 tan⁻¹[L/(2d)]
where L is the maximum image size and d is the distance between the white board and the camera. In Fig. 7(a), the “bull” is 5.2cm and the distance is 62.3cm, so the FOV of Fig. 7(a) works out to 4.8 degrees. In Fig. 7(b), the focal lengths f1, f2 are 250mm and 100mm, and L, d are measured as 4.9cm and 36.5cm, so the FOV is 7.7 degrees. In Fig. 7(c), the focal lengths f1, f2 are 400mm and 100mm, and L, d are measured as 10.2cm and 33.5cm, so the FOV is 17.3 degrees. In Fig. 7(d), the focal lengths f1, f2 are 400mm and 45mm, and L, d are measured as 24.7cm and 33.5cm, so the FOV is 40.5 degrees. Note that the camera is set 45mm behind lens 2 without the beam splitter, because 45mm is too short for a camera, but the distance is just right for the human eye with the beam splitter. The reconstructed results demonstrate the near-eye display nature of the proposed system. With a 40-degree FOV, more details of the image can be observed and a better 3D viewing perception is acquired.
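The FOV arithmetic above is a one-liner; the sketch below reproduces the paper's measured values (the function name is our own):

```python
import math

def fov_deg(L, d):
    """Field of view theta = 2*arctan(L / (2*d)), returned in degrees.
    L is the maximum image size and d the board-to-camera distance,
    both in the same unit (cm here)."""
    return math.degrees(2 * math.atan(L / (2 * d)))

# reproducing the paper's cases:
fov_a = fov_deg(5.2, 62.3)    # about 4.8 degrees
fov_d = fov_deg(24.7, 33.5)   # about 40.5 degrees
```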

Fig. 7 Optical reconstructions of different FOV in near-eye display. (a) The focal length f1, f2 of lens 1 and lens 2 are 250mm and 200mm respectively, the maximum image size L is 5.2cm, and the distance d between white board and camera is 62.3cm. (b) f1 = 250mm, f2 = 100mm, L = 4.9cm, d = 36.5cm. (c) f1 = 400mm, f2 = 100mm, L = 10.2cm, d = 33.5cm. (d) Beam splitter is removed and f1 = 400mm, f2 = 45mm, L = 24.7cm, d = 33.5cm.

3.3 3D display by DCL-GS algorithm

In order to evaluate the performance in reconstructing 3D objects, another optical experiment for multi-plane holographic display is conducted under unchanged conditions. The phase-only CGH is calculated by the DCL-GS algorithm, and the focal lengths of lens 1 and lens 2 used here are 250mm and 100mm respectively, so the system FOV is 7.7 degrees according to the last section. As shown in Fig. 8, four Chinese characters are reconstructed at four different planes from the composite CGH after the output plane of the 4-f system. The results are captured by the camera. The optical multi-plane reconstructions at various distances are shown in Figs. 8(b)-8(e). We have also recorded this reconstruction by adjusting the focus of the camera back and forth between the first focused plane and the fourth focused plane. The whole process is vividly shown in Visualization 1.

Fig. 8 Experimental results of 3D holographic display at different depths (Visualization 1). (a) The original 3D image with four characters at different depths. (b)-(e) are the focused images at 24.5cm, 43.8cm, 84.7cm, 195.1cm respectively.

In Fig. 8(a), four characters are used as the target 3D images at different depths. Their reconstructed distances to the camera are d1 = 24.5cm, d2 = 43.8cm, d3 = 84.7cm and d4 = 195.1cm respectively, and the corresponding curvature radii r2 of the second convergence light are 100cm, 150cm, 200cm and 300cm respectively. Figures 8(b)-8(e) show the reconstructed images of the target CGH at different focused depths. The four characters are focused and displayed in succession at the depths of 24.5cm, 43.8cm, 84.7cm and 195.1cm respectively. Meanwhile, four real objects (clip, jujube, orange and doorknob) are also placed at the same depths as the four characters respectively. From Figs. 8(b)-8(e), we can see that the reconstructed images are focused and blurred in the same way as the real objects at different depths. The experimental results demonstrate that the proposed algorithm and system can present continuous depth cues. The adjustable depths cover from 24.5cm to 195.1cm, matching the camera zooming range, which is quite sufficient for human vision.

3.4 Discussions

Conventional near-eye displays only deliver two-dimensional (2D) images to the human eyes, and the stereoscopic 3D vision is mainly based on binocular parallax [27–30], which may produce the accommodation-vergence conflict. The experimental results show that our method can present 3D images with continuous depth cues and a nearly 180cm zooming range, which is the essential condition for eliminating the accommodation-vergence conflict. In this paper, the application of Fraunhofer diffraction combined with the second convergence light realizes a longer zooming range while guaranteeing a wide FOV. The zooming range is inversely proportional to the system FOV, and even when the FOV is 40.5 degrees (the maximum FOV in our experiment), the range still spans 34cm. Therefore, our method can realize 3D display with more details and depth cues.

However, the clarity of the 3D images still needs to be improved. Several possible reasons may lead to this problem: firstly, though there is a filter in the system, other unwanted diffraction orders still influence the image quality; secondly, though the random phase is replaced by the first convergence light, some speckle noise remains; thirdly, the superposition of several 2D CGHs for 3D display degrades the imaging quality, and the more CGHs there are, the worse the imaging quality becomes.

Considering potential AR applications, interaction is one of the most important aspects, for which the signal generation rate and display frame rate should be as fast as possible. In the DCL-GS algorithm, the computing kernel is the GS algorithm with 15 iterations, which means a high generation rate can hardly be achieved. We have computed holograms with 512 × 512, 1080 × 1080 and 1920 × 1080 resolutions respectively (computing platform: Intel i5-6500 CPU, 3.20GHz, 16GB RAM), and the computing times are 0.586s, 2.211s and 3.874s respectively. The data show that the computing time is far greater than 41.7ms (namely 24fps), which means the GS algorithm is not quite suitable for dynamic and interactive display.

CGHs are sometimes troubled by conjugate images, but the employment of the second convergence light solves this problem successfully. Besides, the second convergence light can also shorten the length of the 4-f system, which is quite helpful for a compact heads-up system. It is noted that the combination of Fraunhofer diffraction and virtual convergence light is not confined to the GS algorithm; it is also applicable to other algorithms. In our future work, we will focus on improving the algorithm and miniaturizing the experimental system for a compact heads-up device.

4. Conclusion

In this paper, we utilize a single LCoS to achieve noise-reduced 2D and 3D holographic display based on the DCL-GS algorithm. The target image can be reconstructed successfully with reduced noise at the focal plane of the first lens. We also use different focusing lenses in the 4-f system to obtain a suitable image magnification, and the FOV can reach 40 degrees. Experimental results also demonstrate that true 3D images can be reconstructed with continuous depth cues and a sufficient zooming range. Our study provides strong potential for designing and realizing true 3D display. It is expected that the DCL-GS algorithm and our system may provide a better solution in future 3D heads-up display research.

Funding

Program 863 (2015AA016301); National Natural Science Foundation of China (NSFC) (61327902).

References and links

1. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented reality technologies, system and applications,” Multimedia Tools Appl. 51(1), 341–377 (2011).

2. I. Rabbi and S. Ullah, “A survey on augmented reality challenges and tracking,” Acta Graph. 24(1–2), 29–46 (2013).

3. M. B. Spitzer, X. Miao, and B. Amirparviz, “Method and apparatus for a near-to-eye display,” U.S. Patent No. 8,767,305. 1 Jul. (2014).

4. H. Liu, Z. Zheng, H. Li, and X. Liu, “Design of Planar Display Based on Transparent Film Array,” Guangdian Gongcheng 39(5), 145–150 (2012).

5. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

6. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012).

7. H. Takahashi and S. Hideya, “Stereoscopic see-through retinal projection head-mounted display,” Proc. SPIE 6803, 68031N (2008).

8. F. Zeng and X. Zhang, “Waveguide holographic head-mounted display technology,” Chinese Optics 7(5), 731–738 (2014).

9. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).

10. S. Liu, P. Sun, C. Wang, and Z. Zheng, “Color waveguide transparent screen using lens array holographic optical element,” Opt. Commun. 403, 376–380 (2017).

11. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).

12. C. Chang, J. Xia, L. Yang, W. Lei, Z. Yang, and J. Chen, “Speckle-suppressed phase-only holographic three-dimensional display based on double-constraint Gerchberg-Saxton algorithm,” Appl. Opt. 54(23), 6994–7001 (2015).

13. Y. Qi, C. Chang, and J. Xia, “Speckleless holographic display by complex modulation based on double-phase method,” Opt. Express 24(26), 30368–30378 (2016).

14. C. Chang, Y. Qi, J. Wu, J. Xia, and S. Nie, “Speckle reduced lensless holographic projection from phase-only computer-generated hologram,” Opt. Express 25(6), 6568–6580 (2017).

15. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic Near-Eye Displays for Virtual and Augmented Reality,” ACM Trans. Graph. 36(1), 1–16 (2017).

16. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25(7), 8412–8424 (2017).

17. J. Hong, S. W. Min, and B. Lee, “Integral floating display systems for augmented reality,” Appl. Opt. 51(18), 4201–4209 (2012).

18. C. K. Lee, S. Moon, S. Lee, D. Yoo, J. Y. Hong, and B. Lee, “Compact three-dimensional head-mounted display system with Savart plate,” Opt. Express 24(17), 19531–19544 (2016).

19. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237–246 (1972).

20. T. Shimobaba and T. Ito, “Random phase-free computer-generated hologram,” Opt. Express 23(7), 9549–9554 (2015).

21. M. Makowski, T. Shimobaba, and T. Ito, “Increased depth of focus in random-phase-free holographic projection,” Chin. Opt. Lett. 14(12), 120901 (2016).

22. Y. Jie, “Optimization of optoelectronic reconstruction of phase hologram by use of digital blazed grating,” Wuli Xuebao 58(5), 22409–22417 (2009).

23. T. Nobukawa and T. Nomura, “Multilayer recording holographic data storage using a varifocal lens generated with a kinoform,” Opt. Lett. 40(23), 5419–5422 (2015).

24. G. Pesce, G. Volpe, O. M. Marago, P. H. Jones, S. Gigain, A. Sasso, and G. Volpe, “A step-by-step guide to the realisation of advanced optical tweezers,” J. Opt. Soc. Am. B, in press.

25. H. Wang, N. Chen, S. Zheng, J. Liu, and G. Situ, “Fast and high-resolution light field acquisition using defocus modulation,” Appl. Opt. 57(1), A250–A256 (2018).

26. Y. Al-Najjar, “Comparison of image quality assessment: PSNR, HVS, SSIM, UIQI,” Int. J. Sci. Eng. Res. 3, 1–5 (2012).

27. S. J. Watt, K. Akeley, M. O. Ernst, and M. S. Banks, “Focus cues affect perceived depth,” J. Vis. 5(10), 834–862 (2005).

28. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008).

29. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

30. P. V. Johnson, J. A. Parnell, J. Kim, C. D. Saunter, G. D. Love, and M. S. Banks, “Dynamic lens and monovision 3D displays to improve viewer comfort,” Opt. Express 24(11), 11808–11827 (2016). [CrossRef]   [PubMed]  

Supplementary Material (1)

Visualization 1: Experimental results of 3D holographic display at different depths.



Figures (8)

Fig. 1. CGH with the first virtual convergence light.

Fig. 2. Optical design for the second convergence light.

Fig. 3. Computer simulation results. (a) The original image. (b) CGH calculated by the GS algorithm with a traditional random phase. (c) Image reconstructed from Fig. 3(b). (d) CGH calculated by the DCL-GS algorithm with the phase of a virtual convergence light. (e) Image reconstructed from Fig. 3(d).

Fig. 4. Comparison of image quality between the GS algorithm and the DCL-GS algorithm. (a) PSNR over 15 iterations. (b) SSIM over 15 iterations.

Fig. 5. Schematic of the experimental setup.

Fig. 6. (a)-(c) The original images (“cock”, “horse”, and “bull”). (d)-(l) Optical reconstructions of the original images, captured from a white board by a camera (image size 8 cm × 4.5 cm). (d)-(f) Images reconstructed from CGHs calculated by the GS algorithm with the first convergence light. (g)-(i) Images reconstructed from CGHs calculated by the DCL-GS algorithm, received 500 mm, 400 mm, and 350 mm behind lens 1, respectively. (j)-(l) Images reconstructed from CGHs calculated by the DCL-GS algorithm with a filter.

Fig. 7. Optical reconstructions with different FOVs in the near-eye display. (a) The focal lengths f1 and f2 of lens 1 and lens 2 are 250 mm and 200 mm, respectively; the maximum image size L is 5.2 cm; the distance d between the white board and the camera is 62.3 cm. (b) f1 = 250 mm, f2 = 100 mm, L = 4.9 cm, d = 36.5 cm. (c) f1 = 400 mm, f2 = 100 mm, L = 10.2 cm, d = 33.5 cm. (d) The beam splitter is removed; f1 = 400 mm, f2 = 45 mm, L = 24.7 cm, d = 33.5 cm.

Fig. 8. Experimental results of the 3D holographic display at different depths (Visualization 1). (a) The original 3D image with four characters at different depths. (b)-(e) Focused images at 24.5 cm, 43.8 cm, 84.7 cm, and 195.1 cm, respectively.

Equations (7)


$$w_1(x_1, y_1) = \exp\!\left[\frac{i\pi\,(x_1^2 + y_1^2)}{\lambda r_1}\right] \tag{1}$$
$$\varphi_{bg}(x_1, y_1) = \frac{2\pi}{T}\,\operatorname{mod}(b x_1 + c y_1,\; T) \tag{2}$$
$$\Delta z = \frac{f_1^2}{r_2 - z} + f_1 \tag{3}$$
$$\mathrm{PSNR}(x, y) = 10 \log_{10} \frac{255^2}{\mathrm{MSE}(x, y)} \tag{4}$$
$$\mathrm{MSE}(x, y) = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} e(i, j)^2 \tag{5}$$
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \tag{6}$$
$$\theta = 2 \arctan\!\left(\frac{L}{2d}\right) \tag{7}$$
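The first two equations above (the convergence-light phase w1 and the blazed-grating phase φ_bg) can be sketched numerically on a sampled SLM grid. In this minimal illustration the wavelength, pixel pitch, grid size, convergence radius r1, and grating parameters b, c, T are hypothetical values chosen only for demonstration, not the parameters used in the paper.

```python
import numpy as np

# Illustrative parameters (not the paper's values)
wavelength = 532e-9   # m
pitch = 8e-6          # SLM pixel pitch, m
N = 512               # grid size in pixels
r1 = 0.25             # convergence radius of the first virtual light, m

coords = (np.arange(N) - N // 2) * pitch
x1, y1 = np.meshgrid(coords, coords)

# Quadratic phase of the first virtual convergence light (first equation)
w1 = np.exp(1j * np.pi * (x1**2 + y1**2) / (wavelength * r1))

# Blazed-grating phase (second equation); mod wraps the ramp to one period,
# so the phase stays in [0, 2*pi)
b, c, T = 1.0, 0.5, 64 * pitch   # carrier slopes and grating period
phi_bg = (2 * np.pi / T) * np.mod(b * x1 + c * y1, T)
```

Replacing the random initial phase of the GS iteration with a smooth quadratic phase like `w1` is what removes most of the speckle-producing randomness.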

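The PSNR, MSE, and SSIM definitions above can be checked with a short script. This is a plain global SSIM (not the windowed variant) on 8-bit grayscale arrays; the constants c1 and c2 follow the common convention c1 = (0.01·255)², c2 = (0.03·255)², which the paper does not specify.

```python
import numpy as np

def mse(x, y):
    # Mean squared error over an M x N image pair
    e = x.astype(np.float64) - y.astype(np.float64)
    return np.mean(e**2)

def psnr(x, y):
    # Peak signal-to-noise ratio for 8-bit images, in dB
    return 10 * np.log10(255**2 / mse(x, y))

def ssim_global(x, y, c1=(0.01 * 255)**2, c2=(0.03 * 255)**2):
    # Global (single-window) SSIM between images x and y
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den
```

For identical inputs the SSIM is exactly 1, and a constant intensity offset of 10 gray levels gives an MSE of 100 and a PSNR of about 28.1 dB.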
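The field-of-view equation above, evaluated with the values reported for Fig. 7(d) (maximum image size L = 24.7 cm at distance d = 33.5 cm), reproduces the roughly 40-degree FOV quoted in the abstract:

```python
import math

def fov_degrees(L, d):
    # FOV from maximum image size L and viewing distance d: theta = 2*arctan(L/(2d))
    return math.degrees(2 * math.atan(L / (2 * d)))

# Values from Fig. 7(d): L = 24.7 cm, d = 33.5 cm
print(round(fov_degrees(24.7, 33.5), 1))  # prints 40.5
```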