
Zoomable head-up display with the integration of holographic and geometrical imaging

Open Access

Abstract

Head-up displays (HUDs) have already penetrated vehicle applications, and demand keeps growing. Existing HUDs fix their image at a certain distance in front of the windshield. Newer developments can display two images at two different yet fixed distances, either simultaneously or switchable on request. The physical distance of the HUD image is associated with accommodation delay, a safety issue in driving, and can also be a critical parameter for augmented reality (AR) functions. In this paper, a novel HUD architecture is proposed that makes the image distance continuously tunable by exploiting the merits of both holographic and geometrical imaging. Holographic imaging can change the image position by varying the modulation on a spatial light modulator (SLM) without any mechanical movement. Geometrical imaging can readily magnify the longitudinal image position while keeping a short depth of focus by using large-aperture components. A prototype based on a liquid crystal on silicon (LCoS) SLM has demonstrated image positions tunable from 3 m to 30 m, verified with the parallax method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Head-up displays (HUDs) have long been used in aircraft as auxiliary devices for visual guidance [1-3]. Given the speed of aircraft, the pilot normally focuses at a far distance, and therefore the default image distance of an aircraft HUD is set at infinity. HUD technology started to be introduced into commercial vehicles about two decades ago [4-7], and its penetration has grown quickly in recent years. For commercial vehicles, the focus of the driver changes significantly even within a single journey; the two extreme scenarios are driving on a highway and in a crowded city. The position of the HUD image thus becomes a major safety concern, because there is a time delay for accommodation whenever the HUD image is not located where the driver focuses on the surrounding scene or objects [8-10]. Many embedded HUDs currently on the road set a fixed image distance of about 2 m in front of the windshield [11,12], which is satisfactory to some extent, partly because the human eye has a certain depth of focus. An image distance switchable between two driving-speed modes has also been proposed to address this issue [13,14]. On the other hand, the next generation of HUDs is targeting AR functions. Graphical techniques exploiting psychological depth cues can extend monocular depth perception to long distances [15]. Most recent solutions generate two images at different distances simultaneously; together with such graphical techniques, a perceptually immersive image can be generated [16,17]. Whether for resolving accommodation delay or for providing AR functions, the image distance remains a major concern in the development of HUDs for commercial vehicles. A HUD with a physically tunable image distance, combined with other already developed 3D technologies, would meet all these demands.

The prevailing optical architecture of current HUDs uses a micro-display as the image source and exploits geometrical imaging to magnify the source image with traditional curved mirrors or lenses [18,19]. The magnification has both lateral and longitudinal components, the latter being closely related to the image distance. If the image distance needs to be changed, the available variables are the object distance and the power of the mirror or lens. Changing either is technically feasible but unfavorable given the limitations of current technologies [20,21]. On the other hand, holographic imaging has also been introduced into HUDs to generate images at variable distances, or multiple images at different distances [22]. The key component for dynamic holographic imaging is the spatial light modulator, which can shift the reconstructed image or generate an image with true depth by varying the modulation, whether amplitude or phase, on the device [23-25]. However, owing to the difficulty of simultaneously increasing the overall size and reducing the pixel size of SLMs with current technology [26], the flexibility of changing the image distance with holographic imaging alone is also quite limited. In this paper, an architecture combining holographic and geometrical imaging is proposed to overcome the limitations of each imaging method individually, so as to make the tunable range of the HUD image distance as large as possible with currently viable technologies. Section 2 describes the basic architecture and design method. Section 3 demonstrates a prototype based on a liquid crystal on silicon SLM and evaluates its performance. Finally, Section 4 gives the conclusions.

2. Architecture of HUD with tunable image distance and design methodology

In geometrical imaging, the image position can be evaluated with Eq. (1):

$$\frac{1}{S_o} + \frac{1}{S_i} = \frac{1}{f}, $$
where $S_o$ and $S_i$ are the object and image distances, respectively, and f is the focal length of the mirror or lens. If $S_i$ needs to be changed, either $S_o$ or f has to change. The former requires mechanical movement if the object is a physical planar image source, and the latter requires special components with tunable power. If an aspheric or free-form surface is required for the tunable component to maintain image quality, it is difficult to keep the ray bending accurate while the basic power is changed, especially when the aperture is large. Therefore, geometrical imaging alone may not be adequate for tuning the HUD image position with current technologies. On the other hand, holographic imaging is known to provide images with real depth and therefore can potentially tune the longitudinal image position through the modulation pattern on the holographic device. In scalar diffraction theory, the image-forming behavior can be categorized into Fresnel and Fraunhofer diffraction [27,28], also referred to as near-field and far-field diffraction, respectively. An important feature of far-field diffraction is that the form of the diffraction pattern is fixed and only scales with the propagation distance, which is equivalent to having a long depth of focus. This feature is advantageous for real-image projection displays but makes it difficult to identify a longitudinal image position when that is required. There is no sharp boundary between near-field and far-field diffraction; the transition depends strongly on the aperture size of the SLM [29]. The smaller the aperture, the shorter the distance at which the far-field pattern develops. Currently available SLMs, such as phase-type liquid crystal on silicon devices, are still smaller than 1” in diagonal, so the far-field pattern can be quite well developed only a few meters away.
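To make the divergence near the focal point concrete, the following is a minimal numerical sketch of Eq. (1) (our illustration, not code from the paper), using the f = 30 cm concave mirror adopted in Section 3:

```python
def image_distance(s_o, f):
    """Solve the mirror equation 1/s_o + 1/s_i = 1/f for s_i.

    All distances share one unit (cm here); a negative result means
    a virtual image behind the mirror.
    """
    return 1.0 / (1.0 / f - 1.0 / s_o)

# Object creeping toward the focal point of an f = 30 cm mirror:
for s_o in (27.0, 29.0, 29.7, 29.97):
    print(f"s_o = {s_o:5.2f} cm  ->  s_i = {image_distance(s_o, 30.0):9.1f} cm")
# A 2.97 cm object shift moves the virtual image from 2.7 m to ~300 m:
# the longitudinal tunable range is magnified enormously near the focus.
```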

Following the argument above, a combination of holographic and geometrical imaging is proposed that exploits the merits of both methods. First, holographic imaging generates images whose position is tunable in the near field over a short longitudinal range. This short range is then magnified by geometrical imaging to the desired tunable range. By the paraxial theory of geometrical imaging, the longitudinal magnification is the square of the lateral magnification, and the image distance shifts to infinity quickly as the object approaches the focal point, as can be seen from Eq. (1). The longitudinal image position can also be difficult to identify if the f-number of the imaging component is large, but making a large, accurate mirror or lens is still far easier than making a large SLM with current technologies. The architecture of the proposed HUD is illustrated in Fig. 1. The holographic image generated by the SLM serves as the object for geometrical imaging, and its two extreme positions are shown with a red arrow and a green arrow in the figure. A concave mirror magnifies both the image size and the longitudinal tunable range; the object and image distances are denoted $S_o$ and $S_i$, respectively. Both extreme positions of the holographic image lie within the focal length of the concave mirror, so the magnified image is virtual and located behind the mirror. If the windshield is treated as a flat mirror, ignoring its weak curvature, the final HUD image distance in front of the windshield, $\overline {BC}$, equals $\overline {AB} + {S_i}$, where $\overline {AB}$ is the distance between the concave mirror and the windshield.
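The square relation can be made explicit by differentiating Eq. (1) at fixed focal length, which gives
$$\frac{dS_i}{dS_o} = -\frac{S_i^2}{S_o^2} = -M^2, $$
where $M = -S_i/S_o$ is the lateral magnification; a short longitudinal shift of the holographic image near the focal point is therefore stretched by a factor of $M^2$.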

Fig. 1. Architecture of HUD with tunable image distance.

To generate the holographic image that serves as the object for the concave mirror, the near-field diffraction formula shown in Eq. (2) is used as the propagator in the iterative optimization process [30]. Equation (2) relates the complex disturbance $U({\xi ,\eta })$ at the exit of the SLM to the complex disturbance $U({x,y})$ at the target image plane, where z is the distance between the two planes. The relationship, including the coordinate notation, is illustrated in Fig. 2. Equation (2) can be interpreted as $U({x,y})$ being the Fourier transform of $U({\xi ,\eta })$ multiplied by an extra phase term $\exp\left[ {j\frac{k}{{2z}}({{\xi^2} + {\eta^2}})} \right]$. This extra phase term depends on the distance z, which manifests that the near-field pattern changes significantly with propagation distance. In other words, the desired image generated by the modulation on the SLM can be clearly identified as being located at the corresponding position z.

$$U(x,y) = \frac{\exp({jkz})}{j\lambda z}\exp\!\left[ {j\frac{k}{2z}\left( {x^2 + y^2} \right)} \right]\iint_{\Sigma}\left\{ {U(\xi,\eta)\exp\!\left[ {j\frac{k}{2z}\left( {\xi^2 + \eta^2} \right)} \right]} \right\}\exp\!\left[ { - j\frac{2\pi}{\lambda z}\left( {x\xi + y\eta} \right)} \right]d\xi\, d\eta. $$

Fig. 2. Schematic diagram of the complex disturbance at the exit of SLM and the target image plane.
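As a concrete illustration of how Eq. (2) can be evaluated numerically, a minimal single-FFT implementation might look like the following sketch. This is our own illustration, not the authors' code; the function name, the square N × N sampling, and the pixel-pitch handling are assumptions.

```python
import numpy as np

def fresnel_propagate(u_slm, wavelength, z, dx):
    """Single-FFT discrete form of the Eq. (2) propagator.

    u_slm      : complex field U(xi, eta) at the SLM plane, N x N array
    wavelength : wavelength in metres
    z          : propagation distance in metres
    dx         : sample (pixel) pitch at the SLM plane in metres
    Returns U(x, y) at the image plane; note the output sample pitch
    becomes wavelength * z / (N * dx).
    """
    n = u_slm.shape[0]
    k = 2.0 * np.pi / wavelength
    coords = (np.arange(n) - n // 2) * dx
    xi, eta = np.meshgrid(coords, coords)
    # inner quadratic phase exp[jk/(2z)(xi^2 + eta^2)] at the SLM plane
    u = u_slm * np.exp(1j * k / (2.0 * z) * (xi**2 + eta**2))
    # the Fourier transform realizes the (x*xi + y*eta) kernel of Eq. (2)
    u = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u))) * dx**2
    # outer quadratic phase and prefactor at the image plane
    dxo = wavelength * z / (n * dx)
    out = (np.arange(n) - n // 2) * dxo
    xo, yo = np.meshgrid(out, out)
    pre = np.exp(1j * k * z) / (1j * wavelength * z)
    return pre * np.exp(1j * k / (2.0 * z) * (xo**2 + yo**2)) * u
```

Because z appears inside the inner chirp, a field propagated to one distance decorrelates quickly at neighboring distances, which is exactly the short depth of focus exploited in this work.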

An iterative Fourier transform algorithm (IFTA) is used to optimize the modulation pattern on the SLM. The Gerchberg–Saxton algorithm is adopted because only phase modulation is available on the SLM while the desired target pattern is specified as an intensity distribution [31,32]. The cost function in the iteration process is the root mean square error of the intensity distribution.
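A compact sketch of that loop is shown below. It is our illustration under stated assumptions: the `propagate`/`back_propagate` callables stand in for the Eq. (2) propagator and its inverse, the random initial phase and 50 iterations are arbitrary choices, and the RMSE cost function the paper monitors is reduced to a comment.

```python
import numpy as np

def gerchberg_saxton(target_intensity, propagate, back_propagate, n_iter=50):
    """Phase-only hologram optimization in the spirit of Refs. [31,32].

    target_intensity : desired intensity at the image plane (fixed z)
    propagate        : SLM-plane field -> image-plane field (Eq. (2))
    back_propagate   : the inverse propagation, image plane -> SLM plane
    Returns the phase pattern (radians) to display on the SLM.
    """
    target_amp = np.sqrt(target_intensity)
    rng = np.random.default_rng(0)
    # start from uniform amplitude with a random phase guess
    field = np.exp(1j * 2 * np.pi * rng.random(target_intensity.shape))
    for _ in range(n_iter):
        img = propagate(field)
        # the cost function would be monitored here: RMSE between
        # |img|**2 and target_intensity
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        field = back_propagate(img)
        field = np.exp(1j * np.angle(field))           # phase-only constraint
    return np.angle(field)
```

Running this loop once per desired distance z yields one phase pattern per image position; switching between the stored patterns on the SLM then moves the holographic image without any mechanical motion.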

3. Prototype and experiment verification

The SLM used is a phase-type LCoS device with an overall size of 0.7” in diagonal and a pixel pitch of 3.74 μm. The pixel count is 4096 × 2400, and 256 phase levels are available. To avoid the safety issues of directly viewing a laser, the reconstruction light source is a green LED with a central wavelength of 532 nm. A pinhole of 0.3 mm in diameter is attached to increase the spatial coherence, and the resulting quasi-point source is expanded and collimated to cover the whole active area of the LCoS SLM. The holographic image is generated as a real image between 18.3 cm and 21 cm from the LCoS SLM, corresponding to object distances between 29.7 cm and 27 cm from the concave mirror for geometrical imaging. The concave mirror has a focal length of 30 cm. The magnified image formed by the concave mirror, calculated with Eq. (1), is located between 270 cm and 2970 cm behind the mirror. After reflection from the windshield toward the driver, and including the 30 cm distance between the mirror and the windshield, the final image lies between 3 m and 30 m from the windshield.
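The quoted end points can be checked with a few lines of arithmetic; the sketch below (ours, not the authors' design code) reproduces the 3 m and 30 m limits from the numbers above.

```python
F_MIRROR = 30.0              # focal length of the concave mirror, cm
MIRROR_TO_WINDSHIELD = 30.0  # distance AB in Fig. 1, cm

def hud_distance_m(s_o_cm):
    """Final HUD image distance in front of the windshield, in metres."""
    s_i = 1.0 / (1.0 / F_MIRROR - 1.0 / s_o_cm)  # negative: virtual image
    return (-s_i + MIRROR_TO_WINDSHIELD) / 100.0

print(hud_distance_m(27.0))   # 3.0  (holographic image 21 cm from the SLM)
print(hud_distance_m(29.7))   # 30.0 (holographic image 18.3 cm from the SLM)
```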

The prototype was built in two steps, first the holographic imaging part and then the geometrical imaging part, to ensure that the image is located at the designed position, especially for the holographic images. Figure 3 shows the optical setup for holographic imaging with a designed image position 18.5 cm from the LCoS SLM. In the left picture, the white card is exactly at the designed image position; in the right picture, it is shifted away by only 1 mm. The holographic image evidently becomes blurry very quickly as the light propagates away from the designed image position, which is consistent with the argument in Section 2 based on Eq. (2). The image positions for all 7 designed values have been verified, and pictures of the images with the corresponding distances from the LCoS SLM are shown in Fig. 4.

Fig. 3. Experiment for verification of holographic image position.

Fig. 4. Holographic images at various distances from LCoS SLM.

The concave mirror and the windshield were mechanically aligned with the holographic imaging optics. The whole prototype is shown in Fig. 5, with the light path highlighted by green arrows; an ordinary car window film is attached to the windshield to eliminate the ghost image from the windshield. The HUD image, a letter "T", at various distances between 3 m and 30 m is shown in Fig. 6. The images look similar, but they are captured with the camera focused at different distances; the sharpness of the background, such as the fire extinguisher hanging on the left wall of the corridor, can be used as a reference to check the difference. The final image in Fig. 6 is not sharp even at the camera's best focus. Two mechanisms mainly deteriorate the image quality. One is the low spatial coherence of the LED light source, which prevents the holographic image from being sharp, as can be seen from Fig. 4. The other is the aberration caused by the highly off-axis geometrical imaging of the concave mirror, especially since the mirror is spherical; the windshield, despite its weak curvature, can also introduce off-axis aberration. Nevertheless, the quality is sufficient for verifying the image position, especially by visual comparison with the defocused cases. A video is attached to demonstrate that the camera needs to refocus when the holographic image position is changed by feeding another modulation pattern to the LCoS SLM; even when the camera is well refocused, the image still looks somewhat blurry, but the best focus at that position can be judged.

Fig. 5. HUD prototype.

Fig. 6. HUD images at various distances between 3 m and 30 m captured with a camera.

The physical image position has also been verified with the parallax method, a psychophysical test illustrated in Fig. 7. The green T represents the virtual HUD image, and the yellow bar represents a real stick used as the reference in the measurement. If the stick is not placed at the position of the HUD image, as shown in Fig. 7(a), the viewer sees a relative lateral shift between the stick and the image, in opposite directions for the two viewpoints, when moving the head laterally. If the stick is located exactly at the position of the HUD image, as shown in Fig. 7(b), the two stay aligned no matter how the viewer moves.

Fig. 7. Parallax method for image distance verification.
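The sensitivity of this test can be estimated from simple geometry. The sketch below is our illustration; the head shift and distances in the usage lines are assumed numbers, not measurements from the experiment.

```python
import math

def parallax_deg(head_shift_m, d_stick_m, d_image_m):
    """Relative angular parallax (degrees) between the reference stick
    and the virtual image for a lateral head movement.

    Exactly zero when the stick sits at the image distance, which is
    the alignment condition of Fig. 7(b).
    """
    return math.degrees(head_shift_m * (1.0 / d_stick_m - 1.0 / d_image_m))

# Assumed numbers: a 6 cm head shift, image at 3 m, stick off by 10 cm
print(parallax_deg(0.06, 2.9, 3.0))  # ~0.04 deg (about 2.4 arcmin), visible
print(parallax_deg(0.06, 3.0, 3.0))  # 0.0 -> stick is at the image distance
```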

The measured image distances are compared with the calculated values in Table 1. The errors for all distances are below 2%, which is not perceptually noticeable in practice.

Table 1. Calculated and psychophysically evaluated image distance

4. Conclusions

A HUD capable of tuning its image distance from the windshield between 3 m and 30 m without any mechanical movement has been achieved by combining holographic and geometrical imaging. The tunable range can easily be modified to any desired value by varying the modulation on the SLM, changing the power of the concave mirror, or changing the distance between the SLM and the concave mirror. Furthermore, the proposed architecture and design methodology can be generalized and applied to other 3D or AR displays.

Funding

Ministry of Science and Technology, Taiwan (MOST 106-2622-E-009-023-CC2).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. L. Gu, D. Cheng, Q. Wang, Q. Hou, and Y. Wang, “Design of a two-dimensional stray-light-free geometrical waveguide head-up display,” Appl. Opt. 57(31), 9246–9256 (2018). [CrossRef]  

2. R. L. Newman, “Operational problems associated with head-up displays during instrument flight,” Tech. Report AFAMRL-TR-80-116, Armstrong Aerospace Medical Research Lab, Wright-Patterson Air Force Base, OH (1980).

3. A. Ingman, “The head up display concept,” Lund University School of Aviation, Maret (2005).

4. M. O. Freeman, “MEMS scanned laser head-up display,” Proc. SPIE 7930, 79300G (2011). [CrossRef]  

5. J. A. Betancur, G. Osorio, and A. Mejía, “Integration of Head-Up Display system in automotive industry: a generalized application,” Proc. SPIE 8736, 87360F (2013). [CrossRef]  

6. S. Wei, Z. Fan, Z. Zhu, and D. Ma, “Design of a head-up display based on freeform reflective systems for automotive applications,” Appl. Opt. 58(7), 1675–1681 (2019). [CrossRef]  

7. M. K. Hedili, M. O. Freeman, and H. Urey, “Microlens array-based high-gain screen design for direct projection head-up displays,” Appl. Opt. 52(6), 1351–1357 (2013). [CrossRef]  

8. S. R. Ellis, U. J. Bucher, and B. M. Menges, “The relationship of binocular convergence and errors in judged distance to virtual objects,” IFAC Proc. 28(15), 253–257 (1995). [CrossRef]  

9. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]  

10. Y. Gao, E. Peillard, J. M. Normand, G. Moreau, Y. Liu, and Y. Wang, “Influence of virtual objects’ shadows and lighting coherence on distance perception in optical see-through augmented reality,” J. Soc. Inf. Disp. 28(2), 117–135 (2020). [CrossRef]  

11. Y. Takaki, Y. Urano, S. Kashiwada, H. Ando, and K. Nakamura, “Super multi-view windshield display for long-distance image information presentation,” Opt. Express 19(2), 704–716 (2011). [CrossRef]  

12. K. H. Kim and S. C. Park, “Design of confocal off-axis two-mirror system for head-up display,” Appl. Opt. 58(3), 677–683 (2019). [CrossRef]  

13. T. Zhan, Y. H. Lee, J. Xiong, G. Tan, K. Yin, J. Yang, S. Liu, and S. T. Wu, “High-efficiency switchable optical elements for advanced head-up displays,” J. Soc. Inf. Disp. 27(4), 223–231 (2019). [CrossRef]  

14. K. Li, Y. Geng, A. Ö. Yöntem, D. Chu, V. Meijering, E. Dias, and L. Skrypchuk, “Head-up display with dynamic depth-variable viewing effect,” Optik 221, 165319 (2020). [CrossRef]  

15. T. Sasaki, A. Hotta, A. Moriya, T. Murata, H. Okumura, K. Horiuchi, N. Okada, K. Takagi, Y. Nozawa, and O. Nagahara, “Novel depth perception controllable method of WARP under real space condition,” Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 42(1), 244–247 (2011). [CrossRef]  

16. I. H. Shao, W. W. Yang, C. H. Chen, and K. T. Luo, “40.3: High Efficiency Dual Mode Head Up Display System for Vehicle Application,” Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 44(1), 559–562 (2013). [CrossRef]  

17. Z. Qin, S. M. Lin, K. T. Luo, C. H. Chen, and Y. P. Huang, “Dual-focal-plane augmented reality head-up display using a single picture generation unit and a single freeform mirror,” Appl. Opt. 58(20), 5366–5374 (2019). [CrossRef]  

18. J. Nakagawa, H. Yamaguchi, and T. Yasuda, “Head up display with laser scanning unit,” Proc. SPIE 11125, 12 (2019). [CrossRef]  

19. J. Tauscher, W. O. Davis, D. Brown, M. Ellis, Y. Ma, M. E. Sherwood, D. Bowman, M. P. Helsel, S. Lee, and J. Wyatt Coy, “Evolution of MEMS scanning mirrors for laser projection in compact consumer electronics,” Proc. SPIE 7594, 75940A (2010). [CrossRef]  

20. H. S. Chen, Y. J. Wang, P. J. Chen, and Y. H. Lin, “Electrically adjustable location of a projected image in augmented reality via a liquid-crystal lens,” Opt. Express 23(22), 28154–28162 (2015). [CrossRef]  

21. Y. J. Wang and Y. H. Lin, “An optical system for augmented reality with electrically tunable optical zoom function and image registration exploiting liquid crystal lenses,” Opt. Express 27(15), 21163–21172 (2019). [CrossRef]  

22. B. Mullins, P. Greenhalgh, and J. Christmas, “The holographic future of head up displays,” Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 48(1), 886–889 (2017). [CrossRef]  

23. J. P. Huignard, “Spatial light modulators and their applications,” J. Opt. 18(4), 181–185 (1987). [CrossRef]  

24. D. Casasent, “Spatial light modulators,” Proc. IEEE 65(1), 143–157 (1977). [CrossRef]  

25. N. Savage, “Digital spatial light modulators,” Nat. Photonics 3(3), 170–172 (2009). [CrossRef]  

26. S. Q. Li, X. Xu, R. M. Veetil, V. Valuckas, R. Paniagua-Domínguez, and A. I. Kuznetsov, “Phase-only transmissive spatial light modulator based on tunable dielectric surface,” Science 364(6445), 1087–1090 (2019). [CrossRef]  

27. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

28. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE, 2011).

29. J. W. Goodman, Introduction to Fourier Optics, Chap. 4, pp. 63–95 (McGraw-Hill, 1996).

30. F. Wyrowski and O. Bryngdahl, “Iterative Fourier-transform algorithm applied to computer holography,” J. Opt. Soc. Am. A 5(7), 1058–1065 (1988). [CrossRef]  

31. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

32. J. S. Chen and D. P. Chu, “Fast calculation of wave front amplitude propagation: a tool to analyze the 3D image on a hologram (Invited Paper),” Chin. Opt. Lett. 12(6), 060021 (2014). [CrossRef]  
