Optica Publishing Group

Monocular 3D see-through head-mounted display via complex amplitude modulation

Open Access

Abstract

The complex amplitude modulation (CAM) technique is applied to the design of a monocular three-dimensional see-through head-mounted display (3D-STHMD) for the first time. Two amplitude holograms are obtained by analytically dividing the wavefront of the 3D object into its real and imaginary distributions, and double amplitude-only spatial light modulators (A-SLMs) are then employed to reconstruct the 3D images in real time. Since the CAM technique inherently presents true 3D images to the human eye, the designed CAM-STHMD system avoids the accommodation-convergence conflict of conventional stereoscopic see-through displays. The optical experiments further demonstrate that the proposed system provides continuous and wide depth cues, which frees the observer from eye fatigue. The dynamic display ability is also tested in the experiments, and the results show the possibility of true 3D interactive display.

© 2016 Optical Society of America

1. Introduction

In the past few decades, the technique of augmented reality (AR) has attracted increasing attention and developed rapidly. Because it offers humans a new observation mode, the AR technique has found many new applications in different fields, such as navigation, biomedicine, entertainment and military operations [1–8]. In an AR system, micro-displays superimpose digital information on the real-world scenes perceived by the human eye. This kind of immersive sense helps wearers to better interact with their surroundings [9–14]. Among the existing AR systems, the see-through head-mounted display (STHMD) plays an essential role, because it may offer the best immersion performance [15,16]. However, in traditional STHMDs the micro-displays only present 2D images, and the stereoscopic vision relies on binocular parallax. The accommodation-convergence conflict is then usually inevitable, and the lack of depth cues generates visual fatigue and causes discomfort to the wearer [17–20]. Therefore, true 3D display with sufficient depth cues is needed for the next generation of STHMD systems.

Recently, some 3D methods beyond binocular parallax have been introduced to the STHMD. Hua et al. combined microscopic integral imaging and a freeform optical lens [21,22]. Their experiments showed that the designed prototype can offer a field of view (FOV) of 40 degrees and a depth range of over 4 meters. However, the depth is restricted by the lateral resolution because of the limitation of integral imaging theory, and the fabrication of the aspherical freeform lens is a big challenge as well. Later, holographic display techniques began to draw much interest from researchers, because they can reconstruct real objects in a natural 3D way and offer the depth cues needed to prevent eye fatigue. Thus, some 3D-STHMD systems based on computer-generated holography (CGH) have been proposed and investigated successively. In such systems, the traditional micro-display is replaced by a spatial light modulator (SLM) with holograms uploaded on it, and the reconstructed 3D images are coupled into the observer's vision by prisms or waveguides. Moon et al. used an RGB light-emitting diode (LED) light source and computer-generated holograms to accomplish full-color holographic imaging [23]. They intended to utilize light-weight LEDs to fulfil a compact design, but the partial coherence of the LED source reduces the clarity of the display, and the modulation of depth cues is affected as well. Then Chen et al. further improved the layer-based CGH method and applied it to an HMD design [24]. Yeom et al. produced a 3D-STHMD system with two holographic optical elements (HOEs) as input and output couplers [25]. They also studied the astigmatism aberration of the waveguide propagation in the glass substrate. Although the existing holographic 3D-STHMD systems have achieved much improvement, some problems remain to be overcome. For one thing, the holograms used are all phase-only distributions. As a result, the encoding process loses some information of the target wavefront and the reconstruction quality is degraded by speckle noise [26–30]. For another, iterative operations are often applied to suppress the noise in the hologram calculation, so it is difficult to realize dynamic display. Besides, the SLMs used are all of the phase-only modulation type, which is expensive; this is also a disadvantage for the wearable design goal.

In this paper, we propose and investigate a practical 3D-STHMD based on a simple complex amplitude modulation technique (CAM-STHMD). In the proposed CAM-STHMD, two amplitude holograms are generated by separating the object wavefront into its real and imaginary distributions according to the Euler formula. Then double amplitude-only SLMs reconstruct the 3D images at the observation window. To the best of our knowledge, this is the first time that the CAM technique has been introduced to 3D-STHMD design. This combination brings several advantages: (1) the reconstructed object wavefront contains all the 3D depth information, so it can be flexibly processed for a comfortable observation experience; (2) without iterations, the holograms can be calculated in real time, providing the possibility of true 3D interactive display; (3) the image quality is improved by simultaneously modulating the amplitude and phase of the object wavefront; (4) the fabrication cost can be kept lower by using amplitude-only SLMs.

2. The designed 3D see-through head-mounted display

CAM is an attractive technique for manipulating the full field (i.e. amplitude and phase) and generating different light distributions. So far, several methods have been reported in the literature, such as interference of two phase-only holograms [31–35], superpixel methods [36–38], cascaded holograms [39–44] and wavefront decomposition [45–47]. Among them, wavefront decomposition has proved to be a promising way because its hologram-generation algorithm is simple and the operation is also easy to implement. In wavefront decomposition, the object wavefront is divided into its real and imaginary distributions. Therefore, in the implementation, the key factor is to create the needed phase difference (π/2) between the two distributions as different modulation terms. Usually, the imaginary term is produced by introducing an optical flat or grating into the light path [47]. Here, however, we use waveplates (WPs) and polarization to fulfil the real and imaginary modulation.

The schematic of the designed CAM-STHMD is shown in Fig. 1(a). The monochromatic laser source is expanded by a collimating and beam expanding (CBE) system and passes through a polarizer to obtain a specific linearly polarized light beam. Then a quarter-waveplate (QWP) and a polarized beam splitter (PBS) are used to produce two divided beams with the needed π/2 phase difference for the real and imaginary modulation. Double amplitude-only SLMs (A-SLMs) are put into the light path and carefully aligned at the pixel-to-pixel accuracy level. A half-waveplate (HWP) is added before one of the A-SLMs, so that the polarizations of the two beams are kept consistent to ensure the best interference results. Subsequently, the calculated holograms are uploaded on the A-SLMs and the modulated beams are combined by a normal beam splitter (BS). The target 3D signals are reconstructed and focused at the different distances d1 and d2 in the downstream light beam. Another BS is used to deliver the reconstructed 3D images to the human eye and overlap them on the real-world scenes.

Fig. 1 (a) Schematic of the 3D see-through head-mounted display based on complex amplitude modulation, (b) an assembled example and the wearing effect, (c) inner optical system of the assembled example.

The optically implemented structure can be further integrated and assembled in a compact form. Figure 1(b) gives a design example and Fig. 1(c) shows its inner optical system. In the design example, the A-SLM is set to 1024 × 1024 pixels with a pixel pitch of 8 μm, so the size of the A-SLM is 0.82 cm × 0.82 cm. Based on these fundamental data, the other optical elements are selected to match the A-SLM well, and the size of the compact assembled example is just 9.53 cm × 8.05 cm × 2.02 cm, with an 8.02 cm long accessory to embed the observation window. Figure 1(b) also presents the viewing effect for navigation on the campus of Beijing Institute of Technology (BIT). When wearing the designed CAM-STHMD, the wearer can see additional 3D instruction signals for different buildings on the campus. As Fig. 1(b) shows, as the wearer's focus shifts from the main building to the center building, the relevant instructions 'Main Building' and 'Center Building' become clear in turn. Because the modulated signals are true 3D and offer sufficient depth cues, the wearer is free of the eye fatigue problem and has a much better experience than with traditional 3D-STHMDs.

The two amplitude holograms are produced by decomposing the target object wavefront into real and imaginary distributions. Firstly, we assume the target object wavefront is Aexp(iθ), with A and θ representing the amplitude and phase, respectively. It is then separated according to the Euler formula as below:

$$A\exp(i\theta) = A\cos\theta + iA\sin\theta = A_r + iA_i \tag{1}$$

where $i=\sqrt{-1}$ is the imaginary unit and $A_r = A\cos\theta$ and $A_i = A\sin\theta$ are the real and imaginary terms, respectively. This separation is further illustrated in Fig. 2. The x axis stands for the real part $A_r$, while the y axis stands for the imaginary part $A_i$. Since θ lies in [0, 2π], the complex wavefront Aexp(iθ) is distributed on a circle of radius A (the blue solid circle in Fig. 2). It should be noted that $A_r$ and $A_i$ take both positive and negative values according to Eq. (1). They cannot be uploaded to the A-SLMs directly, because the modulation range of an A-SLM is normally [0, 1]. A simple solution is to add a shift vector to Aexp(iθ) and move it to the first quadrant, which ensures that the two separated terms are both non-negative distributions. Similar to [47], such a shift operation can be expressed as the equation below:

$$A_s\exp(i\theta_s) = A\exp(i\theta) + \sqrt{2}\,T\exp\!\left(i\frac{\pi}{4}\right) = A_r + iA_i + T(1+i) = (A_r + T) + i(A_i + T) = A_{rs} + iA_{is} \tag{2}$$

where $T(1+i)$ is the shift vector, and $A_{rs}$ and $A_{is}$ are the shifted real and imaginary terms (i.e. the two amplitude holograms), respectively. As Fig. 2 shows, the value T must be no less than A in order to ensure that the object wavefront Aexp(iθ) is shifted into the first quadrant (the purple dashed circle). After this shift operation, the two amplitude holograms $A_{rs}$ and $A_{is}$ can be extracted from $A_s\exp(i\theta_s)$.
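The decomposition of Eqs. (1) and (2) is simple enough to sketch directly in NumPy. The following is a minimal illustration (the function name `decompose_wavefront` is ours, not from the paper); choosing T as the maximum modulus of the field satisfies the condition T ≥ A everywhere:

```python
import numpy as np

def decompose_wavefront(field, T=None):
    """Split a complex wavefront A*exp(i*theta) into two non-negative
    amplitude holograms A_rs and A_is following Eqs. (1)-(2)."""
    Ar, Ai = field.real, field.imag
    if T is None:
        T = np.abs(field).max()      # T >= A guarantees non-negativity
    Ars, Ais = Ar + T, Ai + T        # shift by the vector T*(1+i)
    return Ars, Ais, T

# Round-trip check on a random complex field:
rng = np.random.default_rng(0)
A = rng.random((4, 4))
theta = 2 * np.pi * rng.random((4, 4))
field = A * np.exp(1j * theta)
Ars, Ais, T = decompose_wavefront(field)
assert (Ars >= 0).all() and (Ais >= 0).all()
# Removing the shift recovers the original wavefront exactly:
assert np.allclose((Ars - T) + 1j * (Ais - T), field)
```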

Fig. 2 Illustration of the shift operation.

A problem should be noticed in the shift operation. Although the target object wavefront is successfully divided into two non-negative holograms, the shift also introduces some disturbance into the reconstruction. The shift vector T(1 + i) produces a background of varying contrast in the display. This process can be expressed as below:

$$\mathrm{Frt}\{A_{rs} + iA_{is}\}_{\lambda,d} = \mathrm{Frt}\{A_r + iA_i + T(1+i)\}_{\lambda,d} = \mathrm{Frt}\{A_r + iA_i\}_{\lambda,d} + \mathrm{Frt}\{T(1+i)\}_{\lambda,d} = A\exp(i\theta) + \sqrt{2}\,T\,\mathrm{Frt}\!\left\{\exp\!\left(i\frac{\pi}{4}\right)\right\}_{\lambda,d} \tag{3}$$

where λ is the wavelength, d is the propagation distance, and Frt stands for the Fresnel diffraction. Assuming the diffraction field and the original field are U(x, y) and U(x0, y0) respectively, the Fresnel diffraction is expressed as below [48]:

$$U(x,y) = \frac{1}{i\lambda d}\exp(ikd)\iint U(x_0,y_0)\exp\!\left\{\frac{ik}{2d}\left[(x-x_0)^2 + (y-y_0)^2\right]\right\}dx_0\,dy_0 \tag{4}$$

where k = 2π/λ is the wave number. From Eq. (3), we can see that the final display is the interference of two terms. Although the second term has only a uniform amplitude, it disturbs the contrast of the output images. As the propagation distance changes, the displayed signal suffers from a different background contrast. This problem influences the viewing impression. A simple way to balance this disturbance is to add a random phase to the target wavefront. Since the random phase smooths the frequency spectrum, the background contrast stays at a fixed level and the observation quality is improved. Besides, from Eqs. (1) and (2), we can see that the two holograms are obtained without iterations, so this method is quite suitable for dynamic interactive display.
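The Fresnel propagation above can be sketched numerically. The snippet below uses the transfer-function (angular-spectrum) form of Fresnel diffraction, which is equivalent to the direct integral for sampled fields and is the usual FFT implementation; the function name and sampling values are our assumptions, not the paper's code. It also shows the random-phase diffuser mentioned above:

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, pitch):
    """Fresnel diffraction via the transfer-function method:
    multiply the angular spectrum by exp(ikz - i*pi*lambda*z*(fx^2 + fy^2))."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * 2 * np.pi / wavelength * z) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# A random phase smooths the target's spectrum before back-propagation:
target = np.ones((256, 256))
diffuser = np.exp(1j * 2 * np.pi * np.random.default_rng(1).random(target.shape))
hologram_field = fresnel_propagate(target * diffuser, 532.8e-9, -0.325, 26e-6)

# Propagating forward by the same distance reconstructs the target intensity:
recon = fresnel_propagate(hologram_field, 532.8e-9, 0.325, 26e-6)
assert np.allclose(np.abs(recon), 1.0)
```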

It is also worth noticing that the length difference (LD) of the two split light beams in the optical set-up may affect the recombined complex modulation. Thus, in order to compensate for this disturbance, we observed the interference patterns before and after adding the QWP to the optical set-up. If the phase delay between the two beams (caused by the QWP) is π/2, the interference process can be expressed as below:

$$\begin{cases}\left|\exp(i\theta_1) + \exp(i\theta_2)\right|^2 = 2 + 2\cos(\theta_1 - \theta_2)\\[4pt]\left|\exp\!\left[i\!\left(\theta_1 + \dfrac{\pi}{2}\right)\right] + \exp(i\theta_2)\right|^2 = 2 - 2\sin(\theta_1 - \theta_2)\end{cases} \tag{5}$$
where exp(iθ1) and exp(iθ2) represent the two beams. From the above formula, we can see that the intensity of the interference patterns will reverse when the phase delay of the two beams is exactly π/2. This intensity change can help us to judge whether the modulation is correct.
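The intensity reversal can be verified numerically for arbitrary phase values (the phases below are illustrative choices):

```python
import numpy as np

theta1, theta2 = 1.2, 0.5   # arbitrary phases of the two beams

# Without the extra pi/2 delay: I = 2 + 2*cos(theta1 - theta2)
I_cos = abs(np.exp(1j * theta1) + np.exp(1j * theta2)) ** 2
# With the QWP delay of pi/2 on one beam: I = 2 - 2*sin(theta1 - theta2)
I_sin = abs(np.exp(1j * (theta1 + np.pi / 2)) + np.exp(1j * theta2)) ** 2

assert np.isclose(I_cos, 2 + 2 * np.cos(theta1 - theta2))
assert np.isclose(I_sin, 2 - 2 * np.sin(theta1 - theta2))
```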

Another problem is the alignment of the two SLMs. Because the reconstructed 3D signal is achieved by overlapping the modulated fields, the SLMs should be carefully aligned at the pixel-to-pixel scale. In general, the pixel pitch of an A-SLM is tens of micrometers or even a few micrometers. Figure 3 illustrates our designed auxiliary device for the alignment. As Fig. 3(a) depicts, an imaging lens is put downstream of the BS and its position is adjusted to satisfy the magnification imaging condition. A camera is then located at the corresponding imaging plane to capture the magnified photos. According to the lens imaging formula, the magnification factor β can be written as below:

$$\beta = -\frac{f}{d_x} \quad (0 < d_x < f) \tag{6}$$
where f is the focal length of the imaging lens, dx is the distance between the A-SLM and the front focus of the imaging lens, and the minus sign represents the image inversion. If dx is well controlled, a suitable magnification factor β is obtained. Figure 3(b) shows the effect of the magnified pixel arrays, which we use for alignment checking. Some digital image processing techniques (edge detection and feature matching) can also be applied to further improve the alignment. Besides, the other necessary operation is mounting the SLMs on precision translation stages with sufficient adjustable degrees of freedom.
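As a small worked example of Eq. (6) (the focal length and spacing below are hypothetical values, not from the experiment):

```python
def magnification(f, dx):
    """Magnification of the alignment imager, beta = -f/dx, valid for 0 < dx < f."""
    assert 0 < dx < f, "the A-SLM must sit inside the front focal distance"
    return -f / dx

# A 100 mm lens with the A-SLM 5 mm from the front focus magnifies 20x:
beta = magnification(0.100, 0.005)   # -> -20.0 (minus: inverted image)
```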

Fig. 3 (a) Optical auxiliary device for aligning the two SLMs, (b) a magnified image of the aligned A-SLM pixel arrays.

3. Optical experiment results and discussions

Optical experiments are performed to test the designed CAM-STHMD system. The experimental facility is shown in Fig. 4. In the experiments, the A-SLMs used are DHC GCI-770102 devices with 1024 × 768 pixels and a pixel pitch of 26 μm. They are mounted on platforms with 5 degrees of freedom (x, y, z and two angular adjustments), so the position and the angle of the A-SLMs can be fully manipulated to achieve pixel-to-pixel scale alignment. The light source is a green laser of 532.8 nm wavelength. It should be noted that the modulation window of the SLM is not fully filled by the active pixel areas due to the limitation of the manufacturing technology, as Fig. 3(b) shows. This filling defect causes a multiple-order diffraction effect like that of a grating and influences the observation quality [26,29]. To solve this problem, a 4-f lens system and a band-pass filter (BPF) are used to eliminate the unwanted orders. As Fig. 4 shows, the 4-f lens system is composed of two Fourier lenses placed in sequence with their focal positions coinciding. The BPF is put at the back focus of the first lens and only allows the wanted order to pass through. A Canon D5 camera with a focus zoom range from 25 cm to 75 cm is put at the observation window to record the modulated 3D signals. As the observation BS cube used is 2.5 cm × 2.5 cm × 2.5 cm, the system eye box is mainly determined by the uploaded hologram. Here, the holograms used are all 768 × 768 pixels, so the eye box is 2.0 cm × 2.0 cm.
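The eye-box figure follows directly from the hologram size on the A-SLM, as a quick back-of-envelope check shows:

```python
# Eye box = hologram side length on the A-SLM: pixel count times pixel pitch.
pixels, pitch = 768, 26e-6           # 768 px, 26 um pitch (experimental values)
eye_box = pixels * pitch             # 0.019968 m, i.e. about 2.0 cm per side
assert abs(eye_box - 0.02) < 1e-3
```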

Fig. 4 The experimental facility for testing the proposed CAM-STHMD system.

Firstly, we tested the performance of the proposed CAM-STHMD on 3D signal reconstruction. Two characters, 'G' and 'F', are used as the 3D images to be displayed at different places. The distance d1 from the character 'F' to the SLM plane is 52.5 cm, and the distance d2 from the character 'G' to the SLM plane is 32.5 cm. As shown in Fig. 5(a), the Fresnel diffraction fields UG and UF of 'G' and 'F' are calculated respectively according to the formula below:

$$U_G = \mathrm{Frt}\{f(G)\}_{\lambda,d_2},\qquad U_F = \mathrm{Frt}\{f(F)\}_{\lambda,d_1} \tag{7}$$
where f(G) and f(F) are the expressions of the planes G and F, respectively, and the operator Frt is defined as in Eq. (4). The target complex wavefront is obtained by adding the two diffraction distributions together (UG + UF). Then the two relevant amplitude holograms Ars and Ais can be extracted from the overlapped field according to Eq. (2). They are given in Figs. 5(b) and 5(c), respectively. As discussed in Section 2, the shift vector T(1 + i) is also applied to make the amplitude holograms Ars and Ais both non-negative. Because the shift operation introduces some disturbance into the final display, random phases are added in the diffraction propagation to smooth the frequency spectrum and suppress the disturbance. This process is illustrated in Fig. 5(a). The two amplitude holograms (768 × 768 pixels) are further normalized to [0, 1] to match the modulation range of the A-SLMs.
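The full pipeline of Fig. 5(a) can be sketched end to end. The snippet below is a simplified stand-in: the two character planes are replaced by rectangles, the grid is small, and the transfer-function form of Fresnel propagation replaces the direct integral; only the distances, wavelength and pitch come from the experiment:

```python
import numpy as np

lam, pitch = 532.8e-9, 26e-6
d1, d2 = 0.525, 0.325          # distances of 'F' and 'G' from the SLM plane

def fresnel(u0, z):
    # transfer-function Fresnel propagation over a signed distance z
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

rng = np.random.default_rng(2)
N = 256
# Stand-ins for the binary character planes f(G) and f(F):
planeG = np.zeros((N, N)); planeG[96:160, 64:128] = 1.0
planeF = np.zeros((N, N)); planeF[96:160, 128:192] = 1.0
# Random phases smooth the spectra and temper the shift-vector background:
phiG = np.exp(1j * 2 * np.pi * rng.random((N, N)))
phiF = np.exp(1j * 2 * np.pi * rng.random((N, N)))

# Back-propagate each plane to the SLM and superpose: U = U_G + U_F
U = fresnel(planeG * phiG, -d2) + fresnel(planeF * phiF, -d1)

# Shift into the first quadrant (Eq. (2)) and normalize to [0, 1]:
T = np.abs(U).max()
Ars = (U.real + T) / (2 * T)
Ais = (U.imag + T) / (2 * T)
assert 0 <= Ars.min() and Ars.max() <= 1
assert 0 <= Ais.min() and Ais.max() <= 1
```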

Fig. 5 (a) Illustration of the production of the target complex wavefront; (b) and (c) the two amplitude holograms Ars (real) and Ais (imaginary), respectively.

Figure 6 gives the reconstructed images of the target complex wavefront at different distances. The characters 'G' and 'F' are focused and displayed in succession at the distances of 32.5 cm and 52.5 cm, respectively. Meanwhile, two toys (white and green soldiers) are placed at the same distances as the displayed characters 'G' and 'F', respectively. From Figs. 6(a) and 6(b), we can see that the reconstructed images are focused and blurred in the same way as the real objects at different distances. The reconstruction successfully demonstrates the 3D display property of the proposed CAM-STHMD. We also calculated the contrast values of the experimental images when the 3D signal focuses at the positions of the letters 'G' and 'F', respectively. The two values are CG = 9.18 and CF = 10.47 (the contrast C = Imax/Imin, where Imax and Imin are the maximum and minimum intensity values of the focused image, respectively). It is worth mentioning that the recorded 3D images look smaller as the focus distance increases. This is mainly caused by the zooming nature of the camera (or human eye): a far object corresponds to a long focal length and a small image size.

Fig. 6 Reconstructed 3D images when focusing at (a) 'G', 32.5 cm, and (b) 'F', 52.5 cm.

We then tested more depth signals for the CAM-STHMD system. Five characters, 'B', 'I', 'T', '3' and 'D', are chosen as the displayed images, placed successively at distances of 66.5 cm, 74.5 cm, 82.5 cm, 90.5 cm and 98.5 cm, respectively. The synthetic complex wavefront is obtained in the same way as in Fig. 5(a), and the amplitude holograms are again calculated in the computer according to Eq. (2). In order to show the 3D reconstructed scenes better, the toy (white soldier) is put in the same plane as the character 'B', and an iron ruler is also added to the camera's field of view. It should be noticed that the ruler is placed slantwise to the focus line, so that the scale marks can be captured by the observation camera. As the camera focus is zoomed, the reconstructed complex wavefront is focused at the characters 'B' to 'D' sequentially from 66.5 cm to 98.5 cm. Meanwhile, from Figs. 7(a) to 7(e), it can be seen that the real object becomes more and more blurred while different scale marks on the ruler come into focus in turn. These results demonstrate that the proposed CAM-STHMD system can present continuous see-through 3D scenes. The experiments show that the adjustable range of the system focal length covers from 32.5 cm to 98.5 cm, matching the zooming ability of the camera. Actually, with a better zooming camera, the depth range can be enlarged from tens of centimeters to several meters. Strictly speaking, the modulation depth range is related to the coherence length of the laser source used: the better the monochromaticity of the source, the wider the achievable depth range. A narrow-linewidth laser source can have a coherence length of tens of meters, which is quite sufficient for human vision.

Fig. 7 Focused images at different depths (Visualization 1): (a) 66.5 cm, (b) 74.5 cm, (c) 82.5 cm, (d) 90.5 cm and (e) 98.5 cm.

The zooming process has been checked in real time and recorded over multiple frames. The focusing interval is about 1.74 cm, so 23 frames are obtained over the 40 cm zooming range. These frames are synthesized into a video, attached as Visualization 1. From the video, it can be seen that the reconstructed signals have a continuous depth range.

The ability of dynamic 3D see-through display is also tested in the experiments. The 3D signal is a rotating cube with the five characters 'B', 'I', 'T', '3' and 'D' decorating its surfaces. As the cube rotates, the corresponding holograms are calculated and uploaded to the double A-SLMs. The reconstruction distance is 61.5 cm. To better present the see-through effect, the toy (green soldier) is placed at the same focus distance as the 3D signal. The reconstructed images are recorded by the camera, and some extracted frames of the video (Visualization 2) are shown in Figs. 8(a)-8(i). From the experimental results, we can see that the rotating cube is reconstructed with good quality in spite of some background noise, which is mainly produced by the shift vector T(1 + i). Since no iterations are needed in the calculation of the holograms, the CAM-STHMD proves quite suitable for the design of interactive display devices.

Fig. 8 Dynamic 3D see-through display (Visualization 2); (a)-(i) are extracted frames.

Although we have successfully reconstructed the 3D images for display, the clarity of the experimental results still needs to be improved. Several possible reasons may lead to this problem: first, some mismatch may exist in the alignment of the two A-SLMs; second, the random phase suppresses the disturbance of the shift vector, but it also brings some speckle noise; third, the filter in the 4-f lens system causes additional noise when it eliminates the unwanted-frequency information; finally, surrounding dust is inevitable during the recording of the 3D signals. Since the system can be assembled in a compact structure with high stability, the clarity of the output 3D images can be improved. Besides, due to the big pixel pitch (26 μm) of the A-SLMs used, the FOV of the system is still small (FOV = ± arcsin(λ/2p) = ± 0.59°, where p is the pixel pitch). This problem impacts the 3D viewing perception to a certain degree. Higher-precision instruments can reduce the alignment difficulty, and A-SLMs with a smaller pixel pitch can be applied to enlarge the system FOV.
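The FOV limit above follows directly from the diffraction angle of the pixel grating; a short check (the helper name is ours):

```python
import math

def half_fov_deg(wavelength, pitch):
    # Diffraction-limited half field of view: arcsin(lambda / (2 * p))
    return math.degrees(math.asin(wavelength / (2 * pitch)))

print(round(half_fov_deg(532.8e-9, 26e-6), 2))   # -> 0.59 (degrees)
# An 8 um pitch, as in the Fig. 1 design example, would give about +/-1.91 degrees.
```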

4. Summary

We have proposed and developed a low-cost true 3D-STHMD system based on the CAM technique for the first time. Two amplitude holograms are obtained by dividing the target object wavefront into its real and imaginary distributions, which guarantees real-time calculation. Optical experiments are performed, and the results demonstrate that the designed CAM-STHMD system can reconstruct true 3D images in real time with continuous and wide depth cues. The proposed CAM-STHMD is free of the accommodation-convergence conflict and the eye fatigue problem. The dynamic display ability is also tested in the experiments, which provides the possibility of true 3D interactive display. It is expected that the proposed method and system will advance 3D-STHMD research and open up new applications.

This preliminary study encourages us to continue exploring improvements to the optical implementation and optimizations of the calculation algorithm. The issues of better resolution and rendering effect will be taken into account as well. We will focus on the engineering issues to refine the assembly design and finally achieve the goal of real-time full-color interactive display. Some other CAM techniques will also be developed in our future work.

Funding

National Natural Science Foundation of China (NSFC) (61575024, 61235002, 61420106014); Program 863 (2015AA015905); Program 973 (2013CB328801, 2013CB328806).

Acknowledgments

The authors thank the editor and reviewers for giving the pertinent comments, valuable questions and constructive suggestions on this work.

References and Links

1. R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, “Recent advances in augmented reality,” IEEE Comput. Graph. Appl. 21(6), 34–47 (2001).

2. M. Billinghurst and H. Kato, “Collaborative augmented reality,” Commun. ACM 45(7), 64–70 (2002).

3. O. Cakmakci and J. Rolland, “Head-worn displays: A review,” J. Disp. Technol. 2(3), 199–216 (2006).

4. F. Zhou, H. B.-L. Duh, and M. Billinghurst, “Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR,” in Proc. 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, 193–202 (2008).

5. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented reality technologies, systems and applications,” Multimedia Tools Appl. 51(1), 341–377 (2011).

6. J. Rolland and K. Thompson, “See-through head worn display for mobile augmented reality,” Commun. China Comp. Fed. 7(8), 28–37 (2011).

7. I. Rabbi and S. Ullah, “A survey on augmented reality challenges and tracking,” Acta Graph. 24(1–2), 29–46 (2013).

8. H. Li, X. Zhang, G. Shi, H. Qu, Y. Wu, and J. Zhang, “Review and analysis of avionic helmet-mounted displays,” Opt. Eng. 52(11), 110901 (2013).

9. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655–2668 (2009).

10. S. Liu, H. Hua, and D. Cheng, “A novel prototype for an optical see-through head-mounted display with addressable focus cues,” IEEE Trans. Vis. Comput. Graph. 16(3), 381–393 (2010).

11. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010).

12. O. Hilliges, D. Kim, S. Izadi, M. Weiss, and A. Wilson, “Holodesk: direct 3D interactions with a situated see-through display,” in Proc. SIGCHI Conference on Human Factors in Computing Systems, 2421–2430 (2012).

13. J. Lee, A. Olwal, H. Ishii, and C. Boulanger, “Spacetop: Integrating 2D and spatial 3D interactions in a see-through desktop environment,” in Proc. SIGCHI Conference on Human Factors in Computing Systems, 189–192 (2013).

14. H. Hua, X. Hu, and C. Gao, “A high-resolution optical see-through head-mounted display with eyetracking capability,” Opt. Express 21(25), 30993–30998 (2013).

15. R. Shi, J. Liu, H. Zhao, Z. Wu, Y. Liu, Y. Hu, Y. Chen, J. Xie, and Y. Wang, “Chromatic dispersion correction in planar waveguide using one-layer volume holograms based on three-step exposure,” Appl. Opt. 51(20), 4703–4708 (2012).

16. J. Han, J. Liu, X. Yao, and Y. Wang, “Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms,” Opt. Express 23(3), 3534–3549 (2015).

17. S. Yano, S. Ide, T. Mitsuhashi, and H. Thwaites, “A study of visual fatigue and visual comfort for 3D HDTV/HDTV images,” Displays 23(4), 191–201 (2002).

18. S. J. Watt, K. Akeley, M. O. Ernst, and M. S. Banks, “Focus cues affect perceived depth,” J. Vis. 5(10), 834–862 (2005).

19. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008).

20. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011).

21. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

22. H. Hua, “Past and future of wearable augmented reality display and their applications,” Proc. SPIE 9186, 91860O (2014).

23. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).

24. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015).

25. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).

26. X. Li, J. Liu, J. Jia, Y. Pan, and Y. Wang, “3D dynamic holographic display by modulating complex amplitude experimentally,” Opt. Express 21(18), 20577–20587 (2013).

27. X. Li, J. Liu, Z. Zhang, J. Jia, Y. Pan, and Y. Wang, “Precise intensity modulation in dynamic holographic 3D display,” in Proceedings of Digital Holography and Three-Dimensional Imaging 2014, Imaging and Applied Optics, OSA Technical Digest, JTh1C.1 (2014).

28. J. Liu and X. Li, “Complex amplitude modulation in real time holographic computation,” in Proceedings of Digital Holography and Three-Dimensional Imaging 2014, Imaging and Applied Optics, OSA Technical Digest, SM4F.1 (2014).

29. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473–18482 (2014).

30. C. Gao, J. Liu, X. Li, G. Xue, J. Jia, and Y. Wang, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015).

31. H. O. Bartelt, “Computer-generated holographic component with optimum light efficiency,” Appl. Opt. 23(10), 1499–1502 (1984).

32. J. Amako, H. Miura, and T. Sonehara, “Wave-front control using liquid-crystal devices,” Appl. Opt. 32(23), 4323–4329 (1993).

33. L. G. Neto, D. Roberge, and Y. Sheng, “Full-range, continuous, complex modulation by the use of two coupled-mode liquid-crystal televisions,” Appl. Opt. 35(23), 4567–4576 (1996).

34. R. Shi, J. Liu, J. Xu, D. Liu, Y. Pan, J. Xie, and Y. Wang, “Designing and fabricating diffractive optical elements with a complex profile by interference,” Opt. Lett. 36(20), 4053–4055 (2011).

35. H. Zhao, J. Liu, R. Xiao, X. Li, R. Shi, P. Liu, H. Zhong, B. Zou, and Y. Wang, “Modulation of optical intensity on curved surfaces and its application to fabricate DOEs with arbitrary profile by interference,” Opt. Express 21(4), 5140–5148 (2013).

36. J. M. Florence and R. D. Juday, “Full-complex spatial filtering with a phase mostly DMD,” Proc. SPIE 1558, 487–498 (1991).

37. P. M. Birch, R. Young, D. Budgett, and C. Chatwin, “Two-pixel computer-generated hologram with a zero-twist nematic liquid-crystal spatial light modulator,” Opt. Lett. 25(14), 1013–1015 (2000).

38. V. Bagnoud and J. D. Zuegel, “Independent phase and amplitude control of a laser beam by use of a single-phase-only spatial light modulator,” Opt. Lett. 29(3), 295–297 (2004).

39. A. Jesacher, C. Maurer, A. Schwaighofer, S. Bernet, and M. Ritsch-Marte, “Near-perfect hologram reconstruction with a spatial light modulator,” Opt. Express 16(4), 2597–2603 (2008).

40. A. Jesacher, C. Maurer, A. Schwaighofer, S. Bernet, and M. Ritsch-Marte, “Full phase and amplitude control of holographic optical tweezers with high efficiency,” Opt. Express 16(7), 4479–4486 (2008).

41. M. Makowski, A. Siemion, I. Ducin, K. Kakarenko, M. Sypek, A. M. Siemion, J. Suszek, D. Wojnowski, Z. Jaroszewicz, and A. Kolodziejczyk, “Complex light modulation for lensless image projection,” Chin. Opt. Lett. 9(12), 12008 (2011).

42. A. Siemion, M. Sypek, J. Suszek, M. Makowski, A. Siemion, A. Kolodziejczyk, and Z. Jaroszewicz, “Diffuserless holographic projection working on twin spatial light modulators,” Opt. Lett. 37(24), 5064–5066 (2012).

43. E. Bolduc, N. Bent, E. Santamato, E. Karimi, and R. W. Boyd, “Exact solution to simultaneous intensity and phase encryption with a single phase-only hologram,” Opt. Lett. 38(18), 3546–3549 (2013). [CrossRef]   [PubMed]  

44. L. Zhu and J. Wang, “Arbitrary manipulation of spatial amplitude and phase using phase-only spatial light modulators,” Sci. Rep. 4, 7441 (2014). [CrossRef]   [PubMed]  

45. R. Tudela, E. M. Badosa, I. Labastida, S. Vallmitjana, I. Juvells, and A. Carnicer, “Full complex Fresnel holograms displayed on liquid crystal devices,” J. Opt. A, Pure Appl. Opt. 5(5), S189–S194 (2003). [CrossRef]  

46. R. Tudela, E. M. Badosa, I. Labastida, S. Vallmitjana, and A. Carnicer, “Wavefront reconstruction by adding modulation capabilities of two liquid crystal devices,” Opt. Eng. 43(11), 2650–2657 (2004). [CrossRef]  

47. J. P. Liu, W. Y. Hsieh, T. C. Poon, and P. Tsang, “Complex Fresnel hologram display using a single SLM,” Appl. Opt. 50(34), H128–H135 (2011). [CrossRef]   [PubMed]  

48. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005).

Supplementary Material (2)

Visualization 1 (MOV, 826 KB): Focused images with different depths
Visualization 2 (MOV, 1225 KB): Dynamic 3D see-through display

Figures (8)

Fig. 1. (a) Schematic of the 3D see-through head-mounted display based on complex amplitude modulation; (b) an assembled example and the wearing effect; (c) inner optical system of the assembled example.
Fig. 2. Illustration of the shift operation.
Fig. 3. (a) Optical auxiliary device for aligning the two SLMs; (b) a magnified image of the aligned A-SLM pixel arrays.
Fig. 4. The experimental facility for testing the proposed CAM-STHMD system.
Fig. 5. (a) Illustration of the production of the target complex wavefront; (b) and (c) are the two amplitude holograms A_rs (real) and A_is (imaginary), respectively.
Fig. 6. Reconstructed 3D images when focusing at (a) G, 32.5 cm, and (b) F, 52.5 cm.
Fig. 7. Focused images at different depths (Visualization 1): (a) 66.5 cm, (b) 74.5 cm, (c) 82.5 cm, (d) 90.5 cm and (e) 98.5 cm.
Fig. 8. Dynamic 3D see-through display (Visualization 2); (a)-(i) are extracted frames.

Equations (7)

(1) $A\exp(i\theta) = A\cos\theta + iA\sin\theta = A_r + iA_i$

(2) $A_s\exp(i\theta_s) = A\exp(i\theta) + \sqrt{2}\,T\exp\!\left(i\frac{\pi}{4}\right) = A_r + iA_i + T(1+i) = A_r + T + i(A_i + T) = A_{rs} + iA_{is}$

(3) $\mathrm{Frt}\{A_{rs} + iA_{is}\}_{\lambda d} = \mathrm{Frt}\{A_r + iA_i + T(1+i)\}_{\lambda d} = \mathrm{Frt}\{A_r + iA_i\}_{\lambda d} + \mathrm{Frt}\{T(1+i)\}_{\lambda d} = A\exp(i\theta) + \sqrt{2}\,T\,\mathrm{Frt}\!\left\{\exp\!\left(i\frac{\pi}{4}\right)\right\}_{\lambda d}$

(4) $U(x,y) = \dfrac{\exp(ikd)}{i\lambda d}\displaystyle\iint U(x_0,y_0)\exp\!\left\{\dfrac{ik}{2d}\left[(x - x_0)^2 + (y - y_0)^2\right]\right\}\mathrm{d}x_0\,\mathrm{d}y_0$

(5) $\begin{cases} \left|\exp(i\theta_1) + \exp(i\theta_2)\right|^2 = 2 + 2\cos(\theta_1 - \theta_2) \\ \left|\exp\!\left[i\left(\theta_1 + \frac{\pi}{2}\right)\right] + \exp(i\theta_2)\right|^2 = 2 - 2\sin(\theta_1 - \theta_2) \end{cases}$

(6) $\beta = \dfrac{f}{d_x} \quad (0 < d_x < f)$

(7) $U_G = \mathrm{Frt}\{f(G)\}_{\lambda d_1}, \qquad U_F = \mathrm{Frt}\{f(F)\}_{\lambda d_2}$
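The bias decomposition behind the two amplitude holograms can be checked numerically. The NumPy sketch below is illustrative only: the array size, the random test wavefront, and the choice T = max|U| are assumptions for the demonstration, not values taken from the paper. It verifies that adding the bias term T(1+i) makes both components non-negative (so they can drive amplitude-only SLMs) and that recombining them reproduces the biased wavefront of Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical target complex wavefront A*exp(i*theta) on a 64x64 grid.
A = rng.random((64, 64))                     # amplitude in [0, 1)
theta = rng.uniform(-np.pi, np.pi, (64, 64))
U = A * np.exp(1j * theta)

# Eq. (1): split the wavefront into real and imaginary distributions.
Ar, Ai = U.real, U.imag

# Eq. (2): add the bias sqrt(2)*T*exp(i*pi/4) = T*(1 + i). Any constant
# T >= max(|Ar|, |Ai|) makes both holograms non-negative; here we take
# T = max|U| as one convenient (assumed) choice.
T = np.max(np.abs(U))
Ars = Ar + T   # amplitude hologram for the real part
Ais = Ai + T   # amplitude hologram for the imaginary part

# Non-negativity: both holograms are displayable on amplitude-only SLMs.
assert (Ars >= 0).all() and (Ais >= 0).all()

# Recombining (the second arm carrying a pi/2 phase shift contributes the
# factor i) recovers the biased wavefront of Eq. (2) exactly.
Us = Ars + 1j * Ais
assert np.allclose(Us, U + (1 + 1j) * T)
print("decomposition verified")
```

By linearity of the Fresnel transform, the unwanted bias term propagates separately (the last term of Eq. (3)), which is why it can be steered away from the reconstructed image.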