Optica Publishing Group

A full-color compact 3D see-through near-eye display system based on complex amplitude modulation

Open Access

Abstract

For complex amplitude modulation (CAM)-based three-dimensional (3D) near-eye systems, realizing full-color 3D display with a spatial light modulator (SLM) and a grating is a challenge. Here, a full-color compact 3D see-through near-eye display (NED) system based on CAM is proposed. Computer-generated holograms (CGHs) for the three wavelengths are calculated separately. Each CGH contains two position-shifted sub-holograms, and their separation distance is carefully calibrated to eliminate chromatic aberration. Colorful 3D images are synthesized through time-multiplexing. Color management is performed, and the chromatic aberration of the system is analyzed to achieve better color reproduction. The system is integrated into a compact structure and a prototype is implemented. Pre-compensation is applied to the CGHs to offset the system's assembling errors. Optical experiments show that the proposed system provides good 3D full-color see-through performance without vergence-accommodation conflict (VAC). Dynamic colorful display is also demonstrated, which shows good potential for interactive NEDs in the future.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Since its appearance, the augmented reality (AR) technique has continuously attracted more and more attention. Among existing AR devices, near-eye display (NED) systems are the most popular research topic as they are portable for personal use. NED devices can be used for social communication, navigation, education and entertainment, and are considered the next-generation personal device after the personal computer and the smartphone. Current see-through NED devices provide stereoscopic 3D perception basically through binocular parallax, which brings the vergence-accommodation conflict (VAC) and causes visual fatigue to the wearer. Therefore, most recently, several methods beyond binocular parallax have been proposed and investigated, including light field display [1,2], integral imaging (II) [3–6], multi-focal-plane (MFP) display [7–9], Maxwellian display [10,11], and holographic 3D display [12–16]. For light field display, Lee et al. utilized projectors and holographic optical element (HOE) layers to realize light field AR display [1]. Each layer diffuses a projection beam from a projector to form an additive light field, and real-world scenes can be seen through the HOEs. However, optical blur and ghosting degrade the image quality to some extent. Integral imaging can be regarded as one way to realize light field display. An II-based NED system can provide full-parallax 3D images with good occlusion effect. Its compact structure allows it to cooperate with extra elements such as a liquid lens [3] or HOEs [4] to improve the performance. Nevertheless, for II-based NED systems, the trade-off among viewing angle, lateral resolution and depth range still exists. For MFP display, Hu et al. applied free-form optics to the MFP system to present see-through AR performance [7]. They further analyzed the engineering challenges in the integration of the MFP system [8].
The variation of the image planes, however, depends on an electro-optical modulator, which makes the system structure quite bulky and complex. Maxwellian display is another approach to alleviating the VAC. Instead of reconstructing 3D images with true depth cues, Maxwellian display presents images that are always in focus regardless of the focal state of the human eye. Kim et al. proposed a see-through Maxwellian NED with the aid of a holographic waveguide [10]. However, as Maxwellian display systems guide light directly through the pupil into the eye, wearers may feel discomfort or irritation after long periods of use. Holographic 3D display is regarded as a promising technology since it provides full depth cues in a natural viewing manner. In such NED systems, computer-generated holograms (CGHs) are uploaded to a spatial light modulator (SLM) to reconstruct 3D signals, and an optical combiner superimposes the virtual signals on the real-world scene. Li et al. proposed a holographic 3D NED system employing a multi-functional HOE that performs the optical functions of a mirror and a lens simultaneously [12]. The system is simplified by the proposed HOE and presents a good see-through AR effect. Chen et al. improved the layer-based CGH algorithm and built a holographic NED test system [13]. Despite these advances, limitations remain to be overcome. Phase-only SLMs are commonly used in holographic display because of their high efficiency, so there is an information loss during CGH encoding as the amplitude distribution is discarded. Another issue is time consumption: iterative optimization is often performed during CGH calculation to improve the reconstruction quality, which makes it difficult to realize dynamic display.

The complex amplitude modulation (CAM) technique is one way to realize holographic 3D display, with the advantage of reconstructing high-quality 3D images without information loss or iterations. In recent years, CAM-based NED systems have developed rapidly, and a number of relevant studies have been carried out by our research group [17–19]. In our previous work, the CAM technique was applied to the design of optical see-through 3D-NED systems for the first time [17]. The proposed system presents high-quality 3D images with sufficient depth cues to eliminate the VAC and visual fatigue. We further improved the structure to be more wearable, and a compact prototype was implemented [18]. However, those NED systems present only single-color images, which makes the viewing effect less vivid and degrades the viewing experience. This is mainly caused by the chromatic dispersion of the SLM. Besides, in existing CAM-based 3D-NED systems, the grating plays an essential role in complex amplitude reconstruction; as another chromatically dispersive component, the grating makes it more difficult to realize full-color 3D display.

In this paper, a full-color compact 3D see-through NED system based on CAM is proposed. The system consists of a standard coherent image processing system with a holographic grating at the Fourier plane. Two position-shifted phase holograms are extracted from the object wavefront and uploaded to a single phase-only SLM. The holograms are coupled via the grating and overlap automatically at the output plane of the system to synthesize a complex hologram. Full-color display is realized mainly by time-multiplexing: CGHs for the three wavelengths are calculated separately and carefully calibrated. Color management is performed to characterize the color space, and the chromatic aberration of the system is analyzed. Pre-compensation is applied before the optical experiments to improve the image quality. The dynamic color display capability of the proposed 3D NED system is also tested.

2. Principle and system

2.1 Principle of complex amplitude modulation method for colorful display

Figure 1 depicts the principle of the proposed CAM method for colorful display. Two Fourier lenses (L1 and L2) constitute a 4-f system with focal lengths f1 and f2 respectively. The grating is located at the back focal plane of L1 and acts as a frequency filter. First, the target color image is decomposed into three color components, and CGHs for the different wavelengths are calculated separately. For each wavelength, two position-shifted phase holograms are uploaded to the input plane. If the distance dh between the two holograms and the grating period Δ satisfy a specific condition, the holograms are coupled via the grating and overlap automatically at the output plane to synthesize a complex hologram. After propagating a distance z0, the complex wavefront for the single color is reconstructed [20]. Finally, through time-multiplexing, the target color image can be observed with high image quality.


Fig. 1 Schematic of CAM method for colorful display.


The process can be explained mathematically as follows. For a single-wavelength component, since the image reconstruction is usually implemented by Fresnel diffraction, we represent a complex Fresnel hologram as:

H = A exp(iθ)   (1)

where A and θ represent the amplitude and phase distributions respectively and i denotes √(−1). We extract two phase holograms from the complex hologram because a phase-only SLM has higher diffraction efficiency and is commonly used for holographic display. The decomposition can be expressed as follows:
exp(iθ1) + exp(iθ2) = A exp(iθ)   (2)

where θ1 and θ2 refer to the two phase holograms. After some mathematical deduction, θ1 and θ2 can be solved as:

θ1 = θ + arccos(A/2),   θ2 = θ − arccos(A/2)   (3)
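As a quick numerical check (ours, not part of the original paper), the decomposition above can be verified with NumPy: the coherent sum of the two phase-only holograms exactly restores the complex hologram, provided the amplitude is normalized into [0, 2] so that arccos(A/2) is defined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target complex hologram H = A*exp(i*theta); amplitude normalized to [0, 2].
A = 2.0 * rng.random((64, 64))
theta = 2 * np.pi * rng.random((64, 64))
H = A * np.exp(1j * theta)

# Split into two phase-only holograms: theta_{1,2} = theta +- arccos(A/2).
delta = np.arccos(A / 2.0)
theta1 = theta + delta
theta2 = theta - delta

# Their coherent sum restores the complex hologram exactly:
# exp(i*theta1) + exp(i*theta2) = 2*cos(delta)*exp(i*theta) = A*exp(i*theta).
H_sum = np.exp(1j * theta1) + np.exp(1j * theta2)
print(np.allclose(H_sum, H))  # -> True
```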

The two solved phase holograms are uploaded to the input plane of the 4-f system. They propagate to the back focal plane of L1 and are multiplied by the grating. The complex amplitude at the Fourier plane is given by:

u(x1, y1) = (1/(iλf1)) F{ t1(x0, y0 − dh/2) + t2(x0, y0 + dh/2) } · G(x1, y1) = (1/(iλf1)) T(x1/(λf1), y1/(λf1)) [1/2 + (m/2) cos(2π y1/Δ)]   (4)

where (x1, y1) and (x0, y0) denote the coordinates of the Fourier plane and the input plane, respectively. F{·} stands for the Fourier transform operator and T denotes the frequency spectrum of the hologram pair. λ is the wavelength of the laser. t1 and t2 represent the two phase holograms with a separation distance dh. G represents the superposed grating, assumed to be an amplitude cosine grating for brevity, where Δ is the period of the grating and m (0 < m ≤ 1) is its modulation depth. After some mathematical manipulation, the complex amplitude at the output plane can be represented as:
u(x2, y2) = (1/(iλf2)) F{ (1/(iλf1)) T(x1/(λf1), y1/(λf1)) [1/2 + (m/2) cos(2π y1/Δ)] }
= (1/M) { (1/2)[h1(x2/M, y2/M − dh/2) + h2(x2/M, y2/M + dh/2)]
+ (m/4)[h1(x2/M, y2/M − dh/2 + λf1/Δ) + h2(x2/M, y2/M + dh/2 + λf1/Δ)]
+ (m/4)[h1(x2/M, y2/M − dh/2 − λf1/Δ) + h2(x2/M, y2/M + dh/2 − λf1/Δ)] }   (5)

where h1 and h2 denote the images of the two phase holograms t1 and t2, (x2, y2) denotes the coordinates of the output plane, and M = f2/f1 is the magnification of the system. If the separation distance dh and the grating period Δ satisfy the following condition:
dh = 2λf1/Δ   (6)
Equation (5) can be rewritten as:
u(x2, y2) = (1/M){ … + (m/4)[h1(x2/M, y2/M) + h2(x2/M, y2/M)] + … } = (1/M){ … + (m/4)[exp(iθ1) + exp(iθ2)] + … } = (1/M){ … + (m/4) A exp(iθ) + … }   (7)

where the second term is the magnified complex hologram, by recognizing the definition in Eq. (1). The other terms are higher diffraction orders and are omitted for simplicity. Thus the complex hologram is synthesized at the output plane. The wavefront then propagates a distance z0 to the reconstruction plane, where the desired real image is observed via Fresnel diffraction:
O(x, y) = (1/(iλz0)) exp(ikz0) ∬ u(x2, y2) exp{ (ik/(2z0)) [(x − x2)² + (y − y2)²] }dx2 dy2   (8)

where k = 2π/λ is the wave number.
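For illustration (not from the paper), the Fresnel diffraction integral above can be evaluated numerically with a transfer-function approach; the function name and sampling values below are our own choices. Because the transfer function has unit magnitude, propagating forward and then backward recovers the input field, which gives a simple self-test.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Fresnel propagation via the transfer function
    exp(i*k*z) * exp(-i*pi*lambda*z*(fx^2 + fy^2))."""
    N, M = u0.shape
    fy = np.fft.fftfreq(N, d=dx)[:, None]   # spatial frequencies (1/m)
    fx = np.fft.fftfreq(M, d=dx)[None, :]
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Round trip: forward by z0 = 90 mm, then back, on an 8 um grid.
rng = np.random.default_rng(1)
u0 = np.exp(1j * 2 * np.pi * rng.random((128, 128)))   # phase-only field
u1 = fresnel_propagate(u0, 532e-9, 0.09, 8e-6)
u2 = fresnel_propagate(u1, 532e-9, -0.09, 8e-6)
print(np.allclose(u0, u2))  # -> True
```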

In fact, the principle can be easily understood from Eqs. (5) and (7). The grating at the Fourier plane produces three duplicates of each phase hologram at the output plane. The first order duplicate of h1 and the minus first order duplicate of h2 overlap at the center of the output plane to synthesize a complex hologram when Eq. (6) is satisfied. Consequently, the designed image is obtained with high quality on the reconstruction plane via Fresnel diffraction.
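The coupling can be sketched in a one-dimensional NumPy simulation (an illustration of the principle with arbitrary sizes, not the actual system parameters). Multiplying the spectrum by a cosine produces copies of the input shifted by ±s samples, so with the two sub-holograms placed at ∓s the ±1 orders recombine at the centre of the output plane into (m/4)·A·exp(iθ), matching Eq. (7).

```python
import numpy as np

N, s, hw, m = 2048, 200, 100, 1.0   # samples, shift (pixels), half-width, depth
rng = np.random.default_rng(2)

# Two phase-only sub-holograms, theta_{1,2} = theta +- arccos(A/2).
A = 2.0 * rng.random(2 * hw)
theta = 2 * np.pi * rng.random(2 * hw)
t1 = np.exp(1j * (theta + np.arccos(A / 2)))
t2 = np.exp(1j * (theta - np.arccos(A / 2)))

c = N // 2
u_in = np.zeros(N, complex)
u_in[c - s - hw: c - s + hw] = t1      # t1 centred at y = -s (= -dh/2)
u_in[c + s - hw: c + s + hw] = t2      # t2 centred at y = +s (= +dh/2)

# The cosine grating acts in the Fourier domain; multiplying the spectrum by
# 1/2 + (m/2)cos(2*pi*s*k/N) yields the input plus copies shifted by +-s.
k = np.arange(N)
G = 0.5 + (m / 2) * np.cos(2 * np.pi * s * k / N)
u_out = np.fft.ifft(np.fft.fft(u_in) * G)

# The +1 order of t1 and the -1 order of t2 overlap at the centre:
center = u_out[c - hw: c + hw]
expected = (m / 4) * A * np.exp(1j * theta)
print(np.allclose(center, expected))  # -> True
```

Note that if the hologram half-width exceeded the shift s, the zeroth-order copies would spill into the central region, which is the crosstalk limit on hologram width discussed in section 3.3.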

The relationship between the grating and the CGHs of the three color components is further illustrated in Fig. 2. The separation distances for the three wavelengths are written as dr, dg and db respectively. From Eq. (6), once the grating period is fixed, the separation between the two holograms is proportional to the wavelength, so the hologram pair for the red component has the largest separation while the blue component has the smallest. This is because the grating is chromatically dispersive, and hence the CGHs must be designed and calculated separately for each wavelength. The function of the grating is to synthesize the two sub-holograms into a complex hologram at exactly the same position for each wavelength. During display, the CGHs are uploaded to the SLM in time sequence and synchronized with the three color lasers. By time-multiplexing, colorful 3D images are perceived through persistence of vision.


Fig. 2 Relationship between the grating and CGHs of different components.


2.2 Schematic of 3D full-color near-eye display system

Based on the principle above, a 3D full-color NED system is designed; its schematic is illustrated in Fig. 3(a). Three color lasers with wavelengths λ1, λ2 and λ3 are used as the light source. After collimation and beam expansion, the combined beam illuminates the SLM through a beam splitter (BS). A 4-f system composed of two doublet lenses (DL) follows the SLM; the doublet lenses are used to correct chromatic aberration. A sinusoidal grating (G) is placed exactly at the back focal plane of the first doublet lens to realize complex amplitude modulation. The mirror (M) reflects light to another BS, which superimposes the virtual signals on the real-world scene to realize the see-through effect. A compact and lightweight prototype is designed to hold the optical elements. The prototype has a simple structure and is thus well suited to portable and wearable applications.


Fig. 3 (a) Schematic of the proposed 3D full-color see-through NED system. BS means beam splitter, DL means doublet lens, M is mirror and G refers to grating, (b) viewing effect of the proposed system.


Figure 3(b) shows the color viewing effect of the system. The real-world scene is seen through the viewing window of the proposed system. At the same time, full-color 3D virtual images with correct depth cues are presented in front of the eyes and superposed on the real-world scene. The virtual signals are reconstructed in full color at continuous depths alongside real objects, so the system is free of the VAC problem.

2.3 Color management analysis

For colorful display, one issue to be considered is color management. According to the International Commission on Illumination (CIE) color system, the tristimulus values are defined as:

X = k Σλ φ(λ) x̄(λ) Δλ,   Y = k Σλ φ(λ) ȳ(λ) Δλ,   Z = k Σλ φ(λ) z̄(λ) Δλ   (9)
where k is a normalizing coefficient, φ(λ) is the color stimulus function of the light seen by the observer, and x̄, ȳ, z̄ are the color matching functions of the CIE 1931 standard observer. The wavelengths of the laser sources used in the proposed system are 639 nm, 532 nm and 473 nm respectively. Based on these wavelengths, the color gamut of the system is shown in the CIE x-y color space in Fig. 4. Compared with the standard Red Green Blue (sRGB) gamut, the color gamut of the proposed NED system is considerably larger, which means the proposed system can present a richer range of colors.


Fig. 4 Color gamut of the proposed system.


Taking the optical elements used in the system into consideration, the chromatic aberration of the system needs to be discussed. Two lenses constitute the 4-f system. Singlet lenses suffer from severe chromatic aberration: in the paraxial regime, different colors are imaged at different axial positions, which is called longitudinal chromatic aberration. Therefore, doublet lenses are employed in the proposed system to correct the chromatic aberration. Besides, all the mirrors and beam splitters used in the system are carefully selected to work over a wide spectrum that covers the spectral range of the lasers. The system is therefore able to reconstruct color images with high color fidelity. Still, some cumulative chromatic aberration remains, so pre-compensation and fine adjustment of the holograms are necessary, as discussed in section 3.2.
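The longitudinal chromatic aberration a singlet would introduce can be illustrated with a thin-lens sketch. The Cauchy dispersion coefficients below are assumed, roughly BK7-like values used purely for illustration; they are not parameters of the actual system.

```python
# Thin-lens sketch of singlet longitudinal chromatic aberration.
# Assumed Cauchy model n(lam) = A_c + B_c/lam^2 for a BK7-like crown glass.
A_c, B_c = 1.5046, 4.20e-15   # B_c in m^2 (illustrative values)

def n(lam):
    return A_c + B_c / lam**2

def singlet_focal(lam, f_design=0.060, lam_design=532e-9):
    # Thin lens: f ~ 1/(n - 1), referenced to the design wavelength (f1 = 60 mm).
    return f_design * (n(lam_design) - 1) / (n(lam) - 1)

for lam in (639e-9, 532e-9, 473e-9):
    print(f"{lam*1e9:.0f} nm -> f = {singlet_focal(lam)*1e3:.2f} mm")
# The red focus lies ~1 mm behind the blue one, which is why the 4-f
# system uses achromatic doublets instead of singlets.
```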

3. Optical experiments and discussions

3.1 Optical setup and specifications

The prototype and optical setup of the proposed 3D full-color see-through near-eye system are illustrated in Fig. 5(a). The shell of the prototype is fabricated by 3D printing from polylactic acid (PLA), which is cheap and sufficiently rigid. The size of the structure is 176.4 mm × 34.8 mm × 35.4 mm with a 65.2 mm observation accessory. Two doublet lenses constitute the 4-f system, which also filters the background noise of the SLM panel. The focal lengths of the two lenses are f1 = 60 mm and f2 = 75 mm respectively; the longer focal length of the second lens magnifies the images. The grating is placed exactly at the back focal plane of the first lens, and the grating period is chosen to be 100 μm according to Eq. (6). The grating is fabricated by holographic interference for its simple process. The fabrication parameters are as follows: working wavelength λ = 532 nm and angle between the two interference beams β = 0.305°, so the period of the holographic grating is given by Δ = λ / (2 sin(β/2)). Figure 5(b) shows a fabricated example of the holographic grating. A band-pass filter is introduced behind the grating to obtain a better display effect. For the best viewing experience, the eye relief of the system is designed to be 10 mm with an eyebox of 21.7 mm (H) × 16.9 mm (V); the calculation of the eyebox is detailed in the discussion section. The weight of the module with its inner optics is 220.0 g.
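The fabrication parameters above can be checked numerically (our sketch, using only values stated in the text): the two-beam interference geometry indeed yields a period of about 100 μm.

```python
import numpy as np

# Period of the holographic grating, Delta = lambda / (2*sin(beta/2)),
# with the fabrication parameters from the text.
lam = 532e-9                 # recording wavelength (m)
beta = np.deg2rad(0.305)     # full angle between the interference beams
delta = lam / (2 * np.sin(beta / 2))
print(f"grating period = {delta*1e6:.1f} um")   # ~100 um
```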


Fig. 5 The developed 3D full-color NED prototype, (a) optical setup with inner assembling details, BPF means band-pass filter, (b) a fabricated example of grating.


Optical experiments are performed to test the proposed 3D full-color see-through NED system. The illumination consists of three color lasers with wavelengths of 639 nm, 532 nm and 473 nm respectively. The SLM is a Holoeye Pluto with 8 μm pixel pitch, 1920 × 1080 resolution, and a phase modulation range of [0, 2π]. For a better display effect, the SLM panel is rotated 90° to fit the viewing habit of human eyes. The resolution of the complex field is the same as that of each sub-hologram, which is set to 400 × 1000 here. The three color lasers and the SLM are synchronized by synchronization controllers, and the working frequency is set to 60 fps. Cards with colorful pictures are placed at different distances as real objects (O1 and O2). A charge-coupled device (CCD) camera (Lumenera Infinity 4) with a camera lens (Nikon AF-S 105 mm) is placed at the exit pupil to test the performance of the system.

The detailed parameters of the system are listed in Table 1.


Table 1. Parameters of the Proposed Prototype

3.2 Experimental results

First, pre-compensation is performed for the proposed 3D full-color NED system, as mentioned in section 2.3. According to Eq. (6), the separation distance of the CGHs for each wavelength can be calculated from the given parameters: dr = 958 pixels, dg = 798 pixels and db = 710 pixels. Although the proposed CAM method is designed to be free of chromatic aberration, some cumulative chromatic aberration remains because of assembling errors in the system, which degrades the color performance. From the principle of the CAM method, when the grating is placed exactly at the Fourier plane of the 4-f system and the separation distance satisfies the condition of Eq. (6), the complex amplitude of objects is reconstructed via superposition on the image plane. In an actual experiment, however, it is hard to position the grating at exactly the right place, and a tiny displacement of the grating causes a position shift of the reconstructed images. Pre-compensation is therefore necessary for a colorful display system. In the proposed system, the separation distances of the two holograms are finely adjusted to offset the chromatic aberration. The calibrated distances are: dr = 976 pixels, dg = 816 pixels and db = 728 pixels. Figure 6 shows an example of the calculated CGHs for the different wavelengths after calibration.
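The calibration can be summarized numerically (our sketch). The calculated distances scale with wavelength as Eq. (6) requires, and the calibrated values quoted in the text all differ from the calculated ones by the same offset; the uniform 18-pixel offset below is inferred from those quoted numbers, not stated explicitly in the text.

```python
# Separation distances (in SLM pixels) for the three wavelengths.
calculated = {"red": 958, "green": 798, "blue": 710}   # from Eq. (6)
offset = 18                                            # inferred calibration shift
calibrated = {c: d + offset for c, d in calculated.items()}
print(calibrated)   # {'red': 976, 'green': 816, 'blue': 728}

# Sanity check: the calculated distances are proportional to the wavelength.
wavelengths = {"red": 639, "green": 532, "blue": 473}  # nm
for c in calculated:
    print(c, round(calculated[c] / wavelengths[c], 3))  # ~1.50 for all three
```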


Fig. 6 Example of CGHs after calibration, (a) CGH for red component (λ=639nm), (b) CGH for green component (λ=532nm), (c) CGH for blue component (λ=473nm).


Figure 7 depicts the effect of the pre-compensation. The word 'colour', with six characters, is chosen as the target color image. Each character is set to a different color: 'c', 'l' and 'u' are red, green and blue respectively, while the first 'o', the second 'o' and 'r' are cyan, magenta and yellow respectively. The reconstruction distance is set to 90 mm. A card with a school badge is placed at the same distance as the reconstructed image. Before pre-compensation, the diffracted images of the two sub-holograms fail to overlap at the same position, so the reconstructed color images are indistinct, as shown in Fig. 7(a). After pre-compensation, the calibrated CGHs are uploaded to the SLM; the reconstructed characters are clear and well shaped, as shown in Fig. 7(b). Colorful CAM is realized and the colors of the characters are all displayed correctly.


Fig. 7 Effect of the pre-compensation, (a) before pre-compensation, (b) after pre-compensation.


The system is then tested with 3D full-color reconstruction. The six characters are divided into 'col' and 'our', reconstructed at 90 mm and 120 mm from the SLM plane respectively. Theoretically, the maximum reconstruction distance is mainly limited by the coherence length of the laser source: the better the monochromaticity of the lasers, the larger the achievable depth range. Meanwhile, two cards with colorful images (a school badge and a tree) are placed at the same depths as the two groups of characters. The 3D color display effect is shown in Fig. 8. From Figs. 8(a) and 8(b), the characters are all reconstructed faithfully with the colors previously designed. Moreover, the two groups of characters are focused and blurred in the same way as the real objects. When the camera lens focuses on the school badge, the characters 'col' are clear and well shaped. As the focus moves further away, the characters 'our' and the tree become clear. The process is recorded in a video attached as Visualization 1. Figure 8(c) illustrates the spatial distribution of the two groups of characters. The experimental results show that the system provides good 3D full-color see-through performance with correct depth cues and without VAC.


Fig. 8 Reconstructed 3D colorful images at different depths (Visualization 1), (a) focus at ‘col’, d1=90mm, (b) focus at ‘our’, d2=120mm, (c) spatial distribution of reconstructed colorful images.


As no iterative operations are applied in the CGH calculation process, the system is well suited to dynamic display, so a dynamic color display test is performed. A car navigation display is simulated and used as the colorful dynamic signal. The signal contains the real-time speed of the car and a direction-instruction arrow. The numbers are designed to be cyan, the speed unit 'km/h' yellow, and the directional arrow magenta. The content of the dynamic signal is a car passing a right-hand corner. During display, multiple CGHs are calculated and uploaded in real time. The virtual signal is reconstructed 100 mm from the SLM. A smartphone displaying a real-scene video from a driving cab is placed at the same distance as the dynamic images. The virtual instruction signals correspond to the content of the real-scene video, such as slowing down, accelerating and turning right. The dynamic color display effect is also recorded in a video (Visualization 2). Figures 9(a) and 9(b) are two frames extracted from the video. The colors of the different parts of the images are correctly reconstructed. The results show the good dynamic color display capability of the system, which demonstrates its potential for interactive NEDs.


Fig. 9 Dynamic colorful display (Visualization 2), (a) and (b) are two frames extracted from the video.


3.3 Discussions

The experimental results above prove that the proposed 3D full-color see-through NED system can display colorful 3D images with good quality. However, the results also show that the image resolution is not high. There are several reasons for this. First and mainly, the pixels of the SLM are not fully used. In the experiment, the resolution of the SLM is 1920 × 1080 pixels, but the resolution of the complex hologram is only 400 × 1000 pixels, which means that in each frame only part of the pixels are used for display. This is a restriction of the CAM method we use: according to section 2.1, the two separated holograms are coupled via the grating at the output plane, and once the separation distance is fixed, a larger hologram width would cause the two holograms to overlap and interfere. Second, a random phase is introduced in the hologram calculation to diffuse the wavefront information across the SLM plane, but it also brings speckle noise that degrades the image quality. Besides, assembly errors of the optical elements and dust on the lenses also degrade the images.

For a see-through NED system, the field of view (FOV) can be described in two ways: the FOV for the real scene (θr) and the FOV for the 3D virtual images (θv), as illustrated in Fig. 10. θr is decided by the eye relief and the structure of the viewing window. Since the eye relief is 10 mm, simple geometry in Fig. 10 gives θr = 2 × arctan[(24 mm / 2) / (eye relief + BS size)] = 37.5°. From this equation, once the optical structure is fixed, θr is decided by the eye relief: the larger the eye relief, the smaller θr becomes. θv is mainly decided by the structure of the SLM and is calculated as θv = ±arcsin(λ / 2p), where p is the pixel size. For the proposed 3D full-color NED system, given λblue = 473 nm and p = 8 μm, θv = ±1.7°. The pupil size is another key parameter for a near-eye AR system; it is the effective range from which the human eye can see the reconstructed images. According to Fig. 10, it is calculated as pupil size = R × p + 2 × L × tan(θv), where R denotes the image resolution, p the pixel size, R × p the image size, and L the distance between the image plane and the eye. Given p = 8 μm, R = 1000 (H) × 400 (V) and a measured L of 230 mm, the pupil size is calculated to be 21.7 mm (H) × 16.9 mm (V). Overall, the real-scene FOV θr and the pupil size of our system are acceptable for a near-eye AR design, but the virtual-image FOV θv and the image resolution still need to be improved in future work. If an SLM with a smaller pixel size and higher resolution is applied, the FOV for virtual images and the resolution will be enlarged effectively and the optical performance will be much better.
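The FOV and eyebox figures above can be reproduced numerically (our sketch, using the formulas and values from the text). The beam-splitter size is not stated explicitly; the 25.4 mm (1-inch optic) used below is an assumption inferred from the quoted θr = 37.5°.

```python
import numpy as np

p = 8e-6            # SLM pixel pitch (m)
lam_b = 473e-9      # blue wavelength, which sets the smallest diffraction angle
L = 0.230           # image plane to eye distance (m)
R_h, R_v = 1000, 400
eye_relief = 0.010
bs_size = 0.0254    # ASSUMED 1-inch beam splitter (not stated in the text)

theta_v = np.arcsin(lam_b / (2 * p))                      # half FOV, virtual image
theta_r = 2 * np.arctan(0.012 / (eye_relief + bs_size))   # FOV, real scene

pupil_h = R_h * p + 2 * L * np.tan(theta_v)
pupil_v = R_v * p + 2 * L * np.tan(theta_v)

print(f"theta_v = +-{np.degrees(theta_v):.1f} deg")            # ~ +-1.7 deg
print(f"theta_r = {np.degrees(theta_r):.1f} deg")              # ~ 37.5 deg
print(f"eyebox  = {pupil_h*1e3:.1f} x {pupil_v*1e3:.1f} mm")   # ~ 21.7 x 16.9 mm
```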


Fig. 10 Illustration of the system parameters, θv indicates FOV for virtual image, θr indicates FOV for real scene, L means distance between image plane and human eye.


A final point concerns the refresh rate. The refresh rate of the SLM used is 60 fps, so the holograms are uploaded at 60 fps. Because time-multiplexing is applied, each laser works at 20 fps, which is somewhat below the minimum frequency at which human eyes perceive a continuous video (about 24 Hz). As a result, some flicker is perceived during dynamic color display. With an SLM of higher refresh rate, this problem would be eliminated.

4. Summary

A full-color compact 3D see-through near-eye display system based on CAM is proposed. A CAM method for colorful display is developed based on a grating. Full-color display is realized by time-multiplexing, and the CGHs for each wavelength are properly managed and finely calibrated. A prototype is fabricated; its structure is compact and lightweight, making it well suited to portable and wearable applications. Optical experiments show that the proposed system can reconstruct high-quality 3D full-color images without VAC or eye fatigue. Dynamic color display is also demonstrated, showing good potential for interactive near-eye display. It is expected that the proposed method and system will promote the development of 3D see-through NED systems. Several aspects, such as the image resolution and the FOV, remain to be improved. In future work, methods such as HOEs, curved reflective optics or freeform optics could be applied to improve the performance, and we will further optimize the system structure and the modulation algorithm for practical applications.

Funding

National Natural Science Foundation of China (NSFC) (61575024, 61420106014); National Key R&D Program of China (2017YFB1002900); Newton Fund.

References

1. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays,” ACM Trans. Graph. 35(4), 1 (2016). [CrossRef]  

2. S. Xie, P. Wang, X. Sang, and C. Li, “Augmented reality three-dimensional display with light field fusion,” Opt. Express 24(11), 11483–11494 (2016). [CrossRef]   [PubMed]  

3. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018). [CrossRef]   [PubMed]  

4. H. Zhang, H. Deng, W. Yu, M. He, D. Li, and Q. Wang, "Tabletop augmented reality 3D display system based on integral imaging," J. Opt. Soc. Am. B 34(5), B16–B21 (2017).

5. H. Huang and H. Hua, "High-performance integral-imaging-based light field augmented reality display using freeform optics," Opt. Express 26(13), 17578–17590 (2018).

6. Y. Yamaguchi and Y. Takaki, "Asymmetric integral imaging system for a see-through three-dimensional display with background imaging function," Opt. Express 25(17), 20369–20380 (2017).

7. X. Hu and H. Hua, "High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics," Opt. Express 22(11), 13896–13903 (2014).

8. X. Hu and H. Hua, "Design and tolerance of a free-form optical system for an optical see-through multi-focal-plane display," Appl. Opt. 54(33), 9990–9999 (2015).

9. S. Liu, Y. Li, P. Zhou, Q. Chen, and Y. Su, "Reverse-mode PSLC multi-plane optical see-through display for AR applications," Opt. Express 26(3), 3394–3403 (2018).

10. S. B. Kim and J. H. Park, "Optical see-through Maxwellian near-to-eye display with an enlarged eyebox," Opt. Lett. 43(4), 767–770 (2018).

11. N. Fujimoto and Y. Takaki, "Holographic Maxwellian-view Display System," in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2017), paper Th3A.4.

12. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, "Holographic display for see-through augmented reality using mirror-lens holographic optical element," Opt. Lett. 41(11), 2486–2489 (2016).

13. J. S. Chen and D. P. Chu, "Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications," Opt. Express 23(14), 18143–18155 (2015).

14. P. Zhou, Y. Li, S. Liu, and Y. Su, "Compact design for optical-see-through holographic displays employing holographic optical elements," Opt. Express 26(18), 22866–22876 (2018).

15. Y. Su, Z. Cai, Q. Liu, L. Shi, F. Zhou, and J. Wu, "Binocular holographic three-dimensional display using a single spatial light modulator and a grating," J. Opt. Soc. Am. A 35(8), 1477–1486 (2018).

16. J. S. Lee, Y. K. Kim, and Y. H. Won, "Time multiplexing technique of holographic view and Maxwellian view using a liquid lens in the optical see-through head mounted display," Opt. Express 26(2), 2149–2159 (2018).

17. Q. Gao, J. Liu, J. Han, and X. Li, "Monocular 3D see-through head-mounted display via complex amplitude modulation," Opt. Express 24(15), 17372–17383 (2016).

18. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, "Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter," Opt. Express 25(7), 8412–8424 (2017).

19. X. Li, Y. Wang, J. Liu, J. Jia, Y. Pan, and J. Xie, "Color holographic display using a phase-only spatial light modulator," in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2013), paper DTh2A.3.

20. J. P. Liu, W. Y. Hsieh, T. C. Poon, and P. Tsang, "Complex Fresnel hologram display using a single SLM," Appl. Opt. 50(34), H128–H135 (2011).

Supplementary Material (2)

Visualization 1: Performance of 3D colorful images at different depths
Visualization 2: Dynamic colorful display performance



Figures (10)

Fig. 1. Schematic of the CAM method for colorful display.
Fig. 2. Relationship between the grating and the CGHs of different color components.
Fig. 3. (a) Schematic of the proposed 3D full-color see-through NED system: BS, beam splitter; DL, doublet lens; M, mirror; G, grating. (b) Viewing effect of the proposed system.
Fig. 4. Color gamut of the proposed system.
Fig. 5. The developed 3D full-color NED prototype: (a) optical setup with inner assembling details (BPF, band-pass filter); (b) a fabricated example of the grating.
Fig. 6. Examples of CGHs after calibration: (a) CGH for the red component (λ = 639 nm); (b) CGH for the green component (λ = 532 nm); (c) CGH for the blue component (λ = 473 nm).
Fig. 7. Effect of the pre-compensation: (a) before pre-compensation; (b) after pre-compensation.
Fig. 8. Reconstructed 3D colorful images at different depths (Visualization 1): (a) focus at 'col', d1 = 90 mm; (b) focus at 'our', d2 = 120 mm; (c) spatial distribution of the reconstructed colorful images.
Fig. 9. Dynamic colorful display (Visualization 2): (a) and (b) are two frames extracted from the video.
Fig. 10. Illustration of the system parameters: θv, FOV for the virtual image; θr, FOV for the real scene; L, distance between the image plane and the human eye.

Tables (1)

Table 1. Parameters of the Proposed Prototype

Equations (9)


$$H = A\exp(i\theta) \tag{1}$$

$$\exp(i\theta_1) + \exp(i\theta_2) = A\exp(i\theta) \tag{2}$$

$$\begin{cases} \theta_1 = \theta + \arccos\left(\dfrac{A}{2}\right) \\ \theta_2 = \theta - \arccos\left(\dfrac{A}{2}\right) \end{cases} \tag{3}$$
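The double-phase decomposition of Eqs. (1)–(3) is easy to verify numerically: any complex value whose amplitude is normalized into [0, 2] splits into two pure-phase terms whose sum restores it exactly, since exp(iθ+iδ) + exp(iθ−iδ) = 2cos(δ)exp(iθ). A minimal sketch with an illustrative random field (not the paper's CGH pipeline):

```python
import numpy as np

# Target complex field H = A*exp(i*theta); A must lie in [0, 2] so that
# arccos(A/2) is real -- in practice the amplitude is normalized first.
rng = np.random.default_rng(0)
A = 2.0 * rng.random((4, 4))                # amplitudes in [0, 2)
theta = 2.0 * np.pi * rng.random((4, 4)) - np.pi

# Double-phase decomposition, Eq. (3)
delta = np.arccos(A / 2.0)
theta1 = theta + delta
theta2 = theta - delta

# Sum of the two pure-phase terms reproduces the complex field, Eq. (2)
H_target = A * np.exp(1j * theta)
H_sum = np.exp(1j * theta1) + np.exp(1j * theta2)
print(np.allclose(H_sum, H_target))         # True
```

In the actual system the two phase patterns are interleaved on a single phase-only SLM and the grating selects the sum term, as Eqs. (4)–(7) show.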
$$u(x_1, y_1) = \frac{1}{i\lambda f_1}\,\mathcal{F}\!\left\{ t_1\!\left(x_0, y_0 - \frac{d_h}{2}\right) + t_2\!\left(x_0, y_0 + \frac{d_h}{2}\right)\right\} G(x_1, y_1) = \frac{1}{i\lambda f_1}\, T\!\left(\frac{x_1}{\lambda f_1}, \frac{y_1}{\lambda f_1}\right)\left[\frac{1}{2} + \frac{m}{2}\cos\!\left(\frac{2\pi y_1}{\Delta}\right)\right] \tag{4}$$

$$\begin{aligned} u(x_2, y_2) &= \frac{1}{i\lambda f_2}\,\mathcal{F}\!\left\{ \frac{1}{i\lambda f_1}\, T\!\left(\frac{x_1}{\lambda f_1}, \frac{y_1}{\lambda f_1}\right)\left[\frac{1}{2} + \frac{m}{2}\cos\!\left(\frac{2\pi y_1}{\Delta}\right)\right]\right\} \\ &= \frac{1}{M}\left\{ \frac{1}{2}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{d_h}{2}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{d_h}{2}\right)\right] \right. \\ &\quad + \frac{m}{4}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{d_h}{2} + \frac{\lambda f_1}{\Delta}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{d_h}{2} + \frac{\lambda f_1}{\Delta}\right)\right] \\ &\quad \left. + \frac{m}{4}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M} - \frac{d_h}{2} - \frac{\lambda f_1}{\Delta}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M} + \frac{d_h}{2} - \frac{\lambda f_1}{\Delta}\right)\right]\right\} \end{aligned} \tag{5}$$
$$d_h = \frac{2\lambda f_1}{\Delta} \tag{6}$$
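Because the sub-hologram separation d_h scales linearly with wavelength, each color channel needs its own calibrated shift, which is the per-wavelength calibration mentioned in the abstract. A quick sketch with the paper's three laser wavelengths and hypothetical values for f1 and the grating period Δ (the prototype's actual parameters are listed in Table 1):

```python
# Sub-hologram separation d_h = 2*lambda*f1/Delta per laser wavelength.
# f1 and Delta below are assumed example values, not the prototype's.
f1 = 0.10       # focal length of the first lens, meters (assumed)
Delta = 20e-6   # grating period, meters (assumed)

for name, wl in [("red", 639e-9), ("green", 532e-9), ("blue", 473e-9)]:
    d_h = 2.0 * wl * f1 / Delta
    print(f"{name}: d_h = {d_h * 1e3:.3f} mm")
# red: 6.390 mm, green: 5.320 mm, blue: 4.730 mm
```

The three separations differ by roughly a millimeter under these assumed optics, which is why a single uncalibrated shift would produce visible chromatic aberration.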
$$u(x_2, y_2) = \frac{1}{M}\left\{\cdots + \frac{m}{4}\left[h_1\!\left(\frac{x_2}{M}, \frac{y_2}{M}\right) + h_2\!\left(\frac{x_2}{M}, \frac{y_2}{M}\right)\right] + \cdots\right\} = \frac{1}{M}\left\{\cdots + \frac{m}{4}\left[\exp(i\theta_1) + \exp(i\theta_2)\right] + \cdots\right\} = \frac{1}{M}\left\{\cdots + \frac{m}{4} A\exp(i\theta) + \cdots\right\} \tag{7}$$
$$O(x, y) = \frac{1}{i\lambda z_0}\exp(ikz_0)\iint u(x_2, y_2)\exp\!\left\{\frac{ik}{2z_0}\left[(x - x_2)^2 + (y - y_2)^2\right]\right\} dx_2\, dy_2 \tag{8}$$
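The Fresnel diffraction integral above is commonly evaluated numerically with a single-FFT Fresnel transform: the quadratic phase inside the integral multiplies the field, an FFT replaces the kernel, and the leading factors are applied on the rescaled output grid. A sketch under assumed grid parameters (sample count, pixel pitch, and the Gaussian test field are illustrative, not the prototype's values):

```python
import numpy as np

# Single-FFT Fresnel propagation of a field u over distance z0.
N, pitch = 256, 8e-6              # samples and pixel pitch (assumed)
wl, z0 = 532e-9, 0.12             # wavelength and propagation distance (assumed)
k = 2 * np.pi / wl

x = (np.arange(N) - N // 2) * pitch
X2, Y2 = np.meshgrid(x, x)
u = np.exp(-(X2**2 + Y2**2) / (2 * (0.2e-3) ** 2)).astype(complex)  # Gaussian test field

# Quadratic phase inside the integral, then an FFT plays the role of the kernel
pre = np.exp(1j * k * (X2**2 + Y2**2) / (2 * z0))
U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u * pre)))

# Output-plane coordinates x = lambda*z0*fx, plus the leading factors of the integral
fx = (np.arange(N) - N // 2) / (N * pitch)
Xo, Yo = np.meshgrid(fx * wl * z0, fx * wl * z0)
O = (np.exp(1j * k * z0) / (1j * wl * z0)) \
    * np.exp(1j * k * (Xo**2 + Yo**2) / (2 * z0)) * U * pitch**2
print(O.shape)                    # (256, 256)
```

Note the output sampling interval becomes λz0/(N·pitch), so the reconstruction plane scale changes with both distance and wavelength.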
$$\begin{cases} X = k\displaystyle\sum_{\lambda} \varphi(\lambda)\,\bar{x}(\lambda)\,\Delta\lambda \\ Y = k\displaystyle\sum_{\lambda} \varphi(\lambda)\,\bar{y}(\lambda)\,\Delta\lambda \\ Z = k\displaystyle\sum_{\lambda} \varphi(\lambda)\,\bar{z}(\lambda)\,\Delta\lambda \end{cases} \tag{9}$$
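Equation (9) sums the spectral power distribution φ(λ) against the color-matching functions to obtain the tristimulus values used for the color management and gamut analysis. A sketch with made-up coarse samples (the values below are illustrative, not real CIE data):

```python
import numpy as np

# Tristimulus values from a sampled spectrum, Eq. (9):
#   X = k * sum(phi * xbar * dlam), and similarly for Y and Z.
lam = np.array([450.0, 550.0, 650.0])   # nm, coarse sampling (example)
phi = np.array([0.8, 1.0, 0.6])         # spectral power distribution (example)
xbar = np.array([0.34, 0.43, 0.28])     # example CMF samples, not CIE data
ybar = np.array([0.04, 0.99, 0.11])
zbar = np.array([1.77, 0.00, 0.00])
dlam = 100.0                            # sampling interval, nm

k = 100.0 / np.sum(phi * ybar * dlam)   # normalize so that Y = 100
X = k * np.sum(phi * xbar * dlam)
Y = k * np.sum(phi * ybar * dlam)
Z = k * np.sum(phi * zbar * dlam)
print(round(Y, 1))                      # 100.0
```

With X, Y, Z in hand, the chromaticity coordinates x = X/(X+Y+Z) and y = Y/(X+Y+Z) locate each primary on the diagram, which is how a gamut like the one in Fig. 4 is plotted.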