
LiDAR-derived digital holograms for automotive head-up displays

Open Access

Abstract

A holographic automotive head-up display was developed to project 2D and 3D ultra-high definition (UHD) images using LiDAR data in the driver’s field of view. The LiDAR data was collected with a 3D terrestrial laser scanner and converted to computer-generated holograms (CGHs). The reconstructions were obtained with a HeNe laser and a UHD spatial light modulator with a panel resolution of 3840×2160 px for replay field projections. By decreasing the focal distance of the CGHs, the zero-order spot was diffused into the holographic replay field image. 3D holograms were observed floating as a ghost image at a variable focal distance by incorporating a digital Fresnel lens into the CGH and adding a concave lens.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

In 2017, 1.25 million fatal car accidents occurred on roads worldwide [1]. Human error was a major contributing factor in 94% of the crashes [2–4]. Automotive HUDs have been developed as a safer alternative to touch-control infotainment systems to reduce the sight shift from the road [5]. HUDs were first utilized in fighter aircraft after World War II. The original HUDs emerged as an advancement of the reflector sight and were capable of projecting a reticle at infinity [6]. The major characteristics of aircraft HUDs were derived from this reflector sight predecessor. A semi-transparent window allows the transmitted light from the real scene to be combined with a projected image that appears as a ghost image. The reticle is projected at infinity, where it remains fixed on distant targets regardless of the position of the viewer. Further advances in HUDs included a variable focal distance of the projected reticle and a gyroscope [7]. However, current high-tech aircraft HUDs are unable to produce a simultaneous multifocal projection, and mostly they are focused at a long distance. The first in-vehicle HUD was developed for the Oldsmobile Cutlass Supreme by General Motors in 1988. This HUD consisted of a monochromatic segment display reflected on the windshield. New advances have been made in HUDs, including colored images, matrix displays and retractable reflectors [8,9]. In contrast to aircraft HUDs, automotive HUDs show the ghost image focused at a short distance from the windshield. The mismatch between the focal distance of the displayed image and the real objects has adverse consequences [10,11].

The human eye requires a change in accommodation between the displayed image and the road even if the displayed image is in the line of sight [12,13]. The displayed virtual objects or informative signs cannot be fixated in place of the real objects [14]. These challenges limit HUDs in producing an augmented reality experience. The most important challenges in making HUDs truly applicable to augmented reality are multifocal display, a large viewing area that does not compromise the field of view, optimal placement on the windscreen, and minimal intrusion into driving behavior while accurately pointing out hazards on the road. HUDs can serve as a defensive technology to promote driver attention. HUDs should project information within the driver’s eye box (15×15 cm); confining the projection to the eye box minimizes the interference of driver assistance systems with driving behaviour [15]. A collimated optical system needs a predefined exit pupil [16]. As conventional HUDs are fixed, retaining a small exit pupil is challenging [17]. A panoramic display can enable projection directly into the eye box of the driver.

Light Detection and Ranging (LiDAR) systems complement camera- or radar-based perception to increase accuracy and safety in autonomous cars [18]. LiDAR instruments are active sensors that illuminate the surroundings by emitting either pulsed or phase-modulated light; the range is then measured precisely by processing the backscattered laser waveform. LiDAR sensors are utilized for object detection, classification, tracking, intention prediction and depth layer analyses [19]. Mounted on a moving platform, LiDAR has emerged as an indispensable technology to generate a detailed 3D representation of a locality. The benefits of using LiDAR sensing include not only leveraging both image and 3D point cloud information, but also accurate moving object detection and grid detection for localization and mapping [20]. The incorporation of LiDAR data in holographic HUDs is highly desirable for projecting images that create an augmented reality experience. Compared to conventional HUDs, holographic HUDs require fewer opto-mechanical components and a more compact setup, consume less power, and allow longer virtual image distances [21–23].

Here, CGHs of 2D and 3D LiDAR projections were developed based on a modified Gerchberg-Saxton algorithm for phase retrieval. The LiDAR data was collected from a public road in London using a 3D terrestrial laser scanner. The acquired waveform data was processed to digitize the echo signals in the form of CGHs. Holography setups were developed to generate UHD images of the LiDAR data with a panel resolution of 3840×2160 px: (i) 2D projections with zero order, (ii) 2D projections with diffused zero order, and (iii) 3D projections with zero order in the far field. The multi-level phase modulators used in this work have a greater efficiency due to the absence of conjugate orders and residual zero orders. The ability to display holographic LiDAR images can have safety and security applications in the transportation sector. The present work provides an enlarged eye box for the driver, alignment of the holographic objects in size and distance with real-life objects in the 3D floating AR view, and contrasted, accurate replay field results without zero order. The eye box developed in this work accommodates holographic objects of the same size as the real-life objects, which act as an addition to the driver’s field of view to alert about road obstacles. The LiDAR point cloud data has the advantage of real-life scanned data from public roads, with the potential to integrate the data into the urban environment and project 3D road obstacles into the driver’s field of view in real time.

2. Experimental setup

2.1 Equipment

A HeNe laser (random polarization, λ=632.8 nm, 5 mW), an aspheric lens (f=3.30 mm, NA=0.47), a plano-convex lens (f=75 mm, Ø1”, N-BK7, ARC: 350-700 nm), an achromatic doublet lens (f=100 mm, Ø1”), a linear polarizer (Ø1”, N-BK7, 38% transmission), a polymer zero-order half-wave plate (Ø1”, 633 nm), a non-polarizing beamsplitter (50:50 split, 30 mm), an optical power meter interface with USB operation (PM100USB), and an UHD SLM (3840×2160 px, EXULUS-4K1) were purchased from Thorlabs. The UHD SLM was manufactured by Jasper Display Corporation and had an operating wavelength of 400-850 nm, phase/retardance range of 2π at 633 nm, frame rate of 30 Hz, and fluctuation/flickering (RMS) <5%. A digital camera (α7 II E-mount, full frame sensor (35.8 mm×23.9 mm), 24.3 MP) and a camera lens (FE 28-70 mm, F3.5-5.6 OSS) were purchased from Sony. A Range Rover Velar model (l=110 mm, w=37 mm, h=53 mm) was used.

2.2 CGH image generation

CGH data was created by CAD modeling in SolidWorks 2020 (SP2.0, Dassault Systèmes) and by importing LiDAR data into MATLAB (R2020a, MathWorks). The CGH data was communicated to the UHD SLM via HDMI. The processing time to generate a CGH via the MATLAB code on a Lenovo ThinkPad laptop (i9-9880H, 2.30 GHz, 64 GB RAM) with an NVIDIA GeForce GTX 1650 Max-Q 4GB GDDR5 graphics card was 1.5 s.
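As an illustrative sketch only (the authors used MATLAB; the function name, file name, and column layout below are hypothetical), the imported LiDAR point cloud can be rasterized into a UHD intensity map that serves as the target image for CGH generation:

```python
import numpy as np

def rasterize_point_cloud(points, intensities, width=3840, height=2160):
    """Project LiDAR points (x, y used as image axes) onto a UHD intensity map."""
    img = np.zeros((height, width), dtype=np.float64)
    x, y = points[:, 0], points[:, 1]
    # Normalize x and y to pixel coordinates of the 3840x2160 target image.
    col = ((x - x.min()) / (x.max() - x.min() + 1e-12) * (width - 1)).astype(int)
    row = ((y.max() - y) / (y.max() - y.min() + 1e-12) * (height - 1)).astype(int)
    np.maximum.at(img, (row, col), intensities)   # keep the strongest return per pixel
    return img / (img.max() + 1e-12)              # normalized target intensity map

# Hypothetical usage with an exported scan (x, y, z, reflectance columns):
# data = np.loadtxt("malet_street.xyz")
# target = rasterize_point_cloud(data[:, :3], data[:, 3])
```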

2.3 LiDAR data acquisition

A RIEGL VZ-400 (RIEGL Laser Measurement Systems GmbH) was utilized for LiDAR data collection. The scanner has a wavelength of 1,550 nm, a beam divergence of 0.35 mrad and a measuring range of around 600 m. The LiDAR data was obtained by scanning Malet Street and Hampstead Heath in London. Data was post-processed in RiSCAN Pro (RIEGL Laser Measurement Systems GmbH) to produce a co-registered point cloud in an arbitrary coordinate system. The objects on the sidewalks of the scanned street were processed using separation algorithms containing four different filters.

2.4 Holography setup to display 2D/3D images

The criteria for an ideal SLM are as follows. It should be a phase modulation device rather than an intensity modulation device; the efficiency of the replay field projection increases when the SLM modulates the phase instead of the intensity. The next criterion is the modulation bandwidth of the SLM: a modulation range of at least 2π is required for maximum efficiency. Finally, the SLM should have adequate resolution to reproduce an accurate replay field. The UHD SLM had a phase range of 2π and was operated at a wavelength of 633 nm with an unpolarized HeNe laser (5.0 mW).

2.5 Modulation of linear polarizers

In the UHD SLM setup, linear polarizers (Ø1”, N-BK7, 38% transmission) were placed to control the polarization of the light source, as the laser light before the polarizers was not polarized to any axis. The polarizers were crucial during all experiments, as a grayscale map was developed after finding the so-called liquid crystal switching angle. Two polarizers were used before the beam splitter, and the replay field image was observed while the polarizers were rotated one at a time. Generally, light polarized at 90° would be parallel to the axis of the liquid crystal and the SLM. The least zero order and the clearest image appeared when both polarizers were set at 45°. Hence, the liquid crystal switching angle of the SLM was 45°. The zero-order region at the origin of the replay field was due to undiffracted light.

3. Experimental results

A 3D terrestrial laser scanner was utilized to collect LiDAR data (Fig. 1(a)). The scanner had a reference beam wavelength of 1,550 nm, a beam divergence of 0.35 mrad, and a measurement range of 600 m. The scanner captures data at 122 kHz and has a range accuracy of 5 mm with a repeatability of 3 mm (Fig. 1(b)). Figure 1(c) shows the experimental setup for LiDAR data collection of an object (e.g. a tree). The LiDAR produces echo signals (waveform data) from the emitted pulses reflected by the objects (Fig. 1(d)). Malet Street (51.5214° N, 0.1302° W) and Hampstead Heath (51.5608° N, 0.1629° W) in London were scanned. For example, Malet Street was scanned at 11 positions along the street; at each position an upright and a tilted scan were acquired (Fig. 1(e)). Data was post-processed to produce a co-registered point cloud in an arbitrary coordinate system. A previously developed open-source Python library called TLSeparation was used during this study to perform separation and classification of the terrestrial laser scanning (TLS) data [20]. This package, which includes filters, separation algorithms and classification of the collected data, uses geometric features and structural analysis to classify individual point clouds into different materials. In total, four algorithms were used: two based on path detection and two based on pointwise geometric features. The pointwise algorithms were based on classification and class labeling [24]. The path detection approach treats the point arrangements of trees as connected topological networks [25]. The separation algorithm was validated by direct comparison with manually classified, randomly sampled points from point clouds measured from real objects. The objects in the scanned street were processed using separation algorithms containing multiple filters. The separation algorithm was applied to 10 point clouds. The separation algorithm correctly identified 90% of objects, and the path detection showed an accuracy of >80%. Figure 1(f) illustrates the reflection points of a street location (Fig. 1(e) inset), showing the ability to differentiate between objects, including buildings, vehicles and trees.
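To illustrate the pointwise geometric features mentioned above, the following minimal sketch computes standard eigenvalue-based descriptors (linearity, planarity, scattering) over local neighbourhoods; it is not the TLSeparation implementation, and the neighbourhood size and labeling rule are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def pointwise_geometric_features(points, k=20):
    """Eigenvalue-based features of each point's k-nearest-neighbour neighbourhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                  # k nearest neighbours per point
    features = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                  # 3x3 covariance of the neighbourhood
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        l1, l2, l3 = np.maximum(evals, 1e-12)         # l1 >= l2 >= l3
        features[i] = [(l1 - l2) / l1,                # linearity  (branches, poles)
                       (l2 - l3) / l1,                # planarity  (walls, ground)
                       l3 / l1]                       # scattering (foliage)
    return features

# Hypothetical usage: label each point by its dominant feature.
# feats = pointwise_geometric_features(points)
# labels = feats.argmax(axis=1)   # 0: linear, 1: planar, 2: scattered
```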


Fig. 1. LiDAR data collection. (a) Features of the 3D terrestrial laser scanner featuring a pulsed laser scanner (λ=1,550 nm), photodiode, timer circuit, and automated control of the field of view (100° vertical, 360° horizontal). (b) The LiDAR scanner utilizes a reference beam to scan the objects in the environment with a scan data acquisition of 5 mm accuracy and 3 mm repeatability, and a measurement range of 600 m, enabling a measurement rate of 122,000 points s−1. (c) The experimental setup for LiDAR data collection of an object (e.g. a tree). (d) The echo signals (waveform data), based on the emitted pulses of the laser and the returned waveform, were analyzed. (e) Bird’s-eye view of Malet Street in London, showing the direction of LiDAR scanning. Scale bar=30 m. The inset shows the scanned street position. Scale bar=10 m. (f) A LiDAR reflectance image of a scanned street position (inset in (e)), demonstrating the ability to distinguish objects such as vehicles and trees.


To connect the post-processed LiDAR data with the holographic technology, the contrasted objects obtained at different focal distances were converted into an object plane. A Fourier transform was then applied to the object plane to holographically project the information within a single layer at the desired distance from the driver. Hence, this technology presents a personalized approach to project 3D floating holograms at a desired distance in the driver’s personal field of view, the eye box. Studies have demonstrated the importance of displaying data in specific areas of the field of view for the driver’s safety without presenting hazards of distraction to the driver [26]. The ability of the holographic post-processed LiDAR data to be displayed at different distances within a single layer adds depth, recreating the augmented reality experience. As examples of information arranged in the driver’s field of view, objects such as trees and cars, in addition to humans, were tested at various focal distances in this study.

A holography setup was developed to project the 2D UHD images on a viewing screen (Fig. 2(a)). The holographic projection setups consisted of a HeNe laser (λ=632.8 nm, 5 mW), an aspheric lens (f=3.30 mm, NA=0.47), an achromatic doublet lens (f=100 mm, Ø1”), a linear polarizer (Ø1”, N-BK7, 38% transmission), a polymer zero-order half-wave plate (Ø1”), a non-polarizing beamsplitter (50:50 split, 30 mm), and an UHD SLM. The SLM employs a Liquid Crystal on Silicon (LCoS) microdisplay to create high-resolution (3840×2160 px) reflective phase modulation with individually controlled pixels. The LCoS microdisplay panel consists of a liquid crystal layer sandwiched between a reflective electrode (bottom layer) and a transparent, conductive indium tin oxide (ITO) electrode (top layer). The reflective layer produces individually controlled pixels. When a voltage is applied, an electric field is formed between the reflective electrode and the ITO electrode. The strength and direction of the electric field control the alignment of the birefringent liquid crystal molecules to modulate the phase shift and retardance of each pixel. When the LCoS microdisplay is illuminated with the incident HeNe laser wavefront, the panel reflects the beam and shifts the phase of the wavefront according to the individually controlled pixels. Figure 2(b) shows the experimental projection setup to display 2D holographic images. An unpolarized HeNe laser (λ=632.8 nm, 5 mW) beam was expanded using an aspheric lens (f=3.30 mm) and collimated with another lens (f=100 mm) to obtain a beam of Ø30 mm. A linear polarizer (Ø1”, N-BK7, 38% transmission) and a polymer half-wave plate (Ø1”) were utilized. An image (to be projected) was converted to a CGH pattern and communicated to the SLM through a High-Definition Multimedia Interface (HDMI). The computational focal length was set with a digital Fresnel lens to 100 m at a beam wavelength of 633 nm. To display the far-field diffraction, a plano-convex lens (f=75 mm) was used. To calibrate the holography setup, a slanted line (45°), a grid, and fine slanted lines (0°, 45° and 90°) were displayed (Fig. 2(c)). Figure 2(d) shows that the holography setup displayed the far-field diffraction patterns correctly. Figure 2(e) shows the optical power measurements of the diffracted patterns, where the zero order and the diffraction spots were identified in the far field. Computer-generated holograms (CGHs) were displayed on the SLM, and the beam was projected onto the screen through another lens to obtain a reconstructed image, the so-called “replay field” of the hologram. Due to the reflective nature of the LCoS SLM, a beam splitter was necessary to divide the input and output beams. Building on Maxwell’s electromagnetic theory, the Huygens-Fresnel diffraction principle describes the propagation of an electromagnetic field U from a point $\vec{r}_0$ to an arbitrary point $\vec{r}$ as the surface integral:

$$U(\vec{r}) \propto \iint U(\vec{r}_0)\,\frac{e^{jk|\vec{r}-\vec{r}_0|}}{|\vec{r}-\vec{r}_0|}\,ds$$
where k=2π/λ is the wave number. Huygens’ wavelet theory is based on the assumption that each point of light becomes a secondary emitter of spherical wavelets [16]. This principle leads to the observation that straight wavefronts cannot be preserved when the waves pass through an aperture; the edges of any particular aperture cause a lack of emitters, leading to an overall distortion of the diffracted wavefront [27]. In this work, two approximations were used for the non-trivial integral of Eq. (1). The first approximation, referred to as the Fraunhofer field, occurs when the variable:
$$|\vec{r} - \vec{r}_0| = \sqrt{(x-x')^2 + (y-y')^2 + z^2}$$
is evaluated to a point approaching infinity. With a Taylor expansion, it can be demonstrated that for a large distance z:
$$U(x,y) \propto \iint U(x',y')\,e^{j\frac{k}{z}(xx'+yy')}\,dx'\,dy'$$

This represents the Fourier transform of the field function at the diffraction plane. However, the SLM can only define a retardation phase of the propagated field. This implies that a given phase at the diffractive plane should be capable of creating an intensity profile in the far field. In display applications, a CGH can be formed through phase retrieval [28,29], wavefront recording plane [30–32], multi-view [33] and polygon-based [34,35] algorithms. This study focused on the phase-only CGH, as it has a higher diffraction efficiency for enhanced reconstruction in the replay field (projected area). Hence, this approach was phase-only: the phase component of the hologram was kept while it was sampled with a uniform grid lattice. The most important parameter, which can be varied in both the SLM plane and the replay field plane, is the phase. The phase at the SLM was controlled pixel by pixel, in a similar fashion to a display [36,37]. This parameter is paramount when creating the CGH, as it can be adjusted to optimize the replay field projections. When the CGH was modelled with a fast Fourier transform (FFT), a limiting factor was the bandwidth of the system, which limits the number of gratings it can contain [38]. Hence, a phase between 0 and 2π was applied to keep the number of gratings within the bandwidth limits of the system. Herein, the CGH was generated and optimized through the FFT in the Gerchberg-Saxton method. This optimization method uses the square root of the intensity map of pixels from the original image as the modulus of the target field to find an optimized field in an iterative approach [39,40]. The original object (LiDAR data) was represented as an intensity map of pixels. There are two unknowns in the equation to obtain the optimal replay field result: the phase at the SLM and the phase at the replay field. The intensity at the SLM was set to 1 as an optimal result parameter; however, the crucial parameter determining the replay field result was the unknown phase of both the original object image and the replay field projection. Hence, the Gerchberg-Saxton algorithm allocated a random phase to each pixel of the original image and ran the FFT on it, generating a result for the phase parameter in the replay field. This iteration process was repeated multiple times, where each time the retrieved phase was allocated by the algorithm to each pixel of the original object image and a different parameter for the phase of the replay field was obtained. An FFT accelerated the calculation of the discrete Fourier transform from an input sequence. During the hologram generation process, the output image could be varied in amplitude and phase.
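A minimal NumPy sketch of the Gerchberg-Saxton iteration described above is given below; it is illustrative rather than the authors’ MATLAB implementation, and the iteration count, random seed, and 8-bit grayscale encoding step are assumptions:

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50, seed=0):
    """Retrieve a phase-only hologram whose far-field (FFT) replay field
    approximates the target intensity map."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)               # modulus of the target field
    # Start from the target amplitude with a random phase allocated to each pixel.
    field = target_amp * np.exp(2j * np.pi * rng.random(target_amp.shape))
    for _ in range(iterations):
        slm_field = np.fft.ifft2(field)                   # back-propagate to the SLM plane
        slm_phase = np.angle(slm_field)
        replay = np.fft.fft2(np.exp(1j * slm_phase))      # phase-only SLM, propagate forward
        field = target_amp * np.exp(1j * np.angle(replay))  # keep phase, impose target amplitude
    return slm_phase

# Encode the retrieved phase as an 8-bit grayscale map for a 2*pi-range SLM:
# phase = gerchberg_saxton(target)                        # target: rasterized LiDAR image
# gray = np.uint8(np.round((phase + np.pi) / (2 * np.pi) * 255))
```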


Fig. 2. Optical setup to generate 2D images with zero order. (a) Schematic of the holographic projection setups showing a HeNe laser, a focusing lens, a polarizer, a half-wave plate, a beam splitter and an UHD SLM (3840×2160 px). (b) The experimental setup that allows for projecting 2D holographic images. The inset shows the LCoS microdisplay panel. (c) Patterns (slanted line, grid, fine lines at 0°, 45°, 90°) that are projected to a screen. (d) Projected patterns in the far field. (e) Optical power measurement of the projected patterns.


To validate the ability to project 2D images, solid modeling computer-aided design (CAD) models were converted to holograms and displayed in the far field (Fig. 3(a)). To determine the optimal phase angle, the polarizers were rotated to minimize the zero order in the replay field. The smallest zero order was obtained at 45° for both polarizers, corresponding to a phase angle of π/4. Figure 3(b) illustrates a hologram of the 1951 USAF resolution test chart. Figure 3(c) shows a LiDAR image of a tree based on the collected reflectance waveforms. This LiDAR image was converted to a CGH, and the collimated laser beam (λ=633 nm) illuminated the LCoS microdisplay panel to project the 2D LiDAR data onto a planar opaque viewing screen through a convex lens (f=75 mm) (Fig. 3(d)). This UHD 2D image displayed the fine features of the tree (branches, leaves) in high resolution (3840×2160 px). Figure 3(e) illustrates the LiDAR reflectance images of two individuals, showing clear facial features. The LiDAR data of these individuals were holographically projected (Fig. 3(f)).


Fig. 3. 2D holographic projection of LiDAR images with zero order. (a) Solid modeling computer-aided design (CAD) holograms displayed in the far field. Scale bar=5 mm. (b) 1951 USAF resolution test chart. Scale bar=5 mm. (c) LiDAR data of a tree. Scale bar=3 m. (d) Holographic image of a tree. Scale bar=5 mm. (e) LiDAR data of two individuals. Scale bar=20 cm. (f) Holographic 2D images of two individuals. Scale bar=5 mm.


Although the zero-order spot can be greatly reduced with the aid of polarizers and a half-wave plate, it is an inherent feature of any reflective SLM. This high-intensity spot interferes with the displayed holographic projection on the same image plane. Hence, it is highly desirable to diffuse the zero-order spot in display applications. The focusing effect of the CGH was controlled by designing a virtual lens, without the requirement of a physical lens. The diffraction pattern was focused at a closer distance rather than in the far field. This reduction of the field to a closer distance is commonly called the Fresnel field. In contrast to the Fraunhofer field, the Fresnel field is an alternative Taylor approximation of Eq. (2). In this approximation, the Fourier transform represented in Eq. (3) includes a convolutional Fresnel term:

$$U(x,y) \propto \iint U(x',y')\,e^{\frac{jk}{2z}\left(x'^2 + y'^2\right)}\,e^{j\frac{k}{z}(xx'+yy')}\,dx'\,dy'$$

The Fresnel term is multiplied by the field function at the diffraction plane. It therefore represents a focusing lens and is referred to as a Fresnel zone plate. In our experiment, the far field was focused at a distance near the SLM with a Fresnel zone plate, and then projected to a screen with the aid of a lens. This methodology diffused the zero order because it had a different focal point from the replay field. The optimal scenario occurs at the minimum focal point achievable by the SLM. This minimum reconstructs an image of a size similar to that of the SLM panel. The focal point can be obtained from trigonometric relations. The maximum diffraction angle θ according to the grating formula is:

$$\theta = \arcsin\left(\frac{\lambda}{4p}\right)$$
where p represents the pixel pitch. A factor four was used instead of two because two pixels were required to form a complete oscillation. Based on the trigonometric approach, the minimum focusable distance f is related to the SLM size d:
$$f = \frac{d}{2\tan(\theta)} = \frac{d}{2\tan\left(\arcsin\left(\frac{\lambda}{4p}\right)\right)} = \frac{d}{2}\sqrt{\left(\frac{4p}{\lambda}\right)^2 - 1}$$

With an SLM pixel size of 3.74 µm and a resolution of 2160 px, an optimum focal distance of 47.68 mm was obtained. This value was utilized in the calculation of the Fresnel zone plate. Although this technique allowed for diffusing the zero order, it was necessary to utilize a lens to project the hologram over a larger area. Following this scheme, an optical setup was developed to diffuse the zero order (Fig. 4(a)). A focusing lens (f=75 mm) was used after the beam splitter to project the hologram, and the Fresnel diffraction method was utilized. The 75 mm focal length lens was determined to be optimal for the zero-order diffusion in the replay field. The zero order in this setup spreads out over the projected holographic pattern, so that the observer can focus on the object instead of having a highly concentrated spot in the center of the image. Figure 4(b) shows the zero-order-diffused 2D holographic projections of the CAD models in the far field. Figure 4(c) demonstrates the 1951 USAF resolution test chart with zero-order diffusion. Figure 4(d) illustrates the ability to project 2D LiDAR holographic images of a tree, a man and a woman in the far field without zero order.
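A minimal sketch of the digital Fresnel (zone plate) lens phase used to pull the replay field to a finite focal distance is shown below; the thin-lens quadratic phase is standard, while the function name and the use of the reported 47.68 mm focal distance are illustrative assumptions:

```python
import numpy as np

def fresnel_lens_phase(shape, pixel_pitch, focal_length, wavelength):
    """Quadratic Fresnel-lens phase, wrapped to [0, 2*pi), to be added to the
    hologram phase so the reconstruction focuses at 'focal_length'."""
    ny, nx = shape
    y = (np.arange(ny) - ny / 2) * pixel_pitch
    x = (np.arange(nx) - nx / 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    phase = -np.pi * (X**2 + Y**2) / (wavelength * focal_length)  # thin-lens phase profile
    return np.mod(phase, 2 * np.pi)

# Parameters as reported in the text (panel 3840x2160 px, pitch 3.74 um, f = 47.68 mm):
# lens = fresnel_lens_phase((2160, 3840), 3.74e-6, 47.68e-3, 632.8e-9)
# cgh_with_lens = np.mod(slm_phase + lens, 2 * np.pi)   # combine with the retrieved GS phase
```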


Fig. 4. 2D holographic projection of LiDAR images with diffused zero order. (a) Optical setup to diffuse the zero order of the holographic image. (b) Solid modeling computer-aided design (CAD) 2D holograms displayed in the far field. Scale bar=5 mm. (c) 1951 USAF resolution test chart. Scale bar=5 mm. (d) 2D Holographic LiDAR images of a tree, a man and a woman. Scale bar=5 mm.


An optical setup was developed to project 3D in-eye holograms (Fig. 5(a)). The beam was collimated and then linearly polarized before entering the SLM. By excluding the convex lens shown in Fig. 4(a), the beam splitter directed the replay field directly into the observer’s eyes, so that the eye lens could focus at infinity when a Fraunhofer hologram was displayed on the SLM. Figure 5(b) shows the setup for 3D in-eye holographic projections focused at infinity in direct view mode, without any lens at the hologram output. The inset in Fig. 5(b) illustrates the 3D in-eye holographic projection of a checkerboard. Furthermore, a concave lens (f= −100 mm) was utilized at the hologram output to increase the focusing range and field of view of the SLM. The concave lens projected a virtual image behind the lens when a Fraunhofer hologram was focused at infinity. In contrast, Fig. 4(a) follows the approach where the image was displayed in front of the convex lens. The position of the visualized object followed the lens maker’s formula [41]:

$$\frac{1}{f} = \frac{1}{z_1} + \frac{1}{z_2}$$
where f is the focal distance of the lens, and $z_1$ and $z_2$ are the positions of the SLM image and the projected image, respectively. The positions were calculated with the lens plane as the reference, as opposed to the SLM plane. The effective optical path from the SLM to the lens was 50 mm. Based on the lens maker’s formula, a hologram focused at 100 mm in front of the concave lens resulted in a collimated image, which represented an object at infinity. Thus, any distance between infinity and 50 mm behind the lens could be effectively calculated with the lens maker’s formula.
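The following minimal helper evaluates Eq. (7) numerically; the sign convention (distances measured from the lens plane, with the hologram focus plane 100 mm in front of the concave lens entered as z1 = −100 mm) is an assumption chosen to reproduce the collimated-output example in the text:

```python
def image_distance(f_mm, z1_mm):
    """Solve 1/f = 1/z1 + 1/z2 (Eq. (7)) for the image position z2."""
    inv_z2 = 1.0 / f_mm - 1.0 / z1_mm
    return float("inf") if abs(inv_z2) < 1e-12 else 1.0 / inv_z2

# Concave lens f = -100 mm, hologram focus plane taken as z1 = -100 mm:
# image_distance(-100.0, -100.0)  -> inf   (collimated image, object at infinity)
# image_distance(-100.0, -50.0)   -> 100.0 (virtual image at a finite distance)
```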


Fig. 5. 3D in-eye holographic projection of LiDAR images. (a) Optical setup to create 3D holographic projections focused at infinity, directed toward the viewer. (b) Optical setup showing a HUD to display 3D holographic images. The inset shows a holographic checkerboard at infinity in direct view. (c) 3D holographic LiDAR images of a tree and a woman focused at infinity without zero order. (d) 3D holographic LiDAR image projections with a semireflective window and real-world car model in the background to simulate a car HUD with a scenery, showing 4 types of CAD car models aligned with the real-world model of a Land Rover Range Rover Velar at (i-ii) 300 mm and (iii-iv) 600 mm.


LiDAR data was projected at different distances in front of the viewer. This effective image location was achieved by multiplying the CGH with a virtual Fresnel lens. Figure 5(c) illustrates the 3D in-eye holographic images of LiDAR data focused at infinity with dissipated zero order. The dissipation of the zero order was achieved through the polarizers. The images do not have scale bars, as the holograms are displayed as AR in 3D at infinity (Fig. 5(c)) or at different distances aligned with the real-life objects (Fig. 5(d)). Additionally, Fig. 5(d) depicts the AR applications of the presented technology, as the CGH can be projected accurately and aligned with the real-life object up to a 600 mm distance from the viewer. The developed algorithm allows for greater distances and accounts for the fact that the object size decreases with increasing distance. The Range Rover Velar model measures 110 mm in length, 37 mm in width and 53 mm in height; this limited the full alignment distance to 300 mm, as shown in Fig. 5(d(i))-(d(ii)). The in-eye holographic projections provided image depth focus matching the position of the real-life objects, which were at 300 mm and 600 mm distances from the viewer. This setup provides the driver the opportunity to see the holographic projections at the same distance as the real objects. Hence, the driver will not need to shift the eye focus (field of view) onto the car windscreen, as the holographic objects will appear in the far field outside of the car even though the projection is directed straight into the driver’s eyes. As observed in Fig. 5(d(iii))-(d(iv)), the holograms reproduced at a greater distance (600 mm from the viewer) appear sharper, more contrasted, and better aligned with the real-life objects than the CGH reproduced at a closer distance, as is the case with the CGH car and the real-life Range Rover Velar model.

4. Discussion

Comparisons of the AR replay field projections were performed with ray tracing simulations of a HUD design based on a freeform reflective system, generating an eye box of 80 mm×40 mm [42]. A method was developed to align CGHs with real-life objects at near and far distances with an eye box size of 3 mm×3 mm [43]. For holographic near-eye display systems, an eye box expansion method was developed, allowing an eye box size of 7 mm×7 mm [44]. For applications in HUDs, an area of 150×150 mm² is available. In the present work, the eye box of the driver was 25 mm×25 mm in the 2D holographic projections on the windshield, and the eye box was the size of the beamsplitter (25 mm×36 mm) in the 3D floating holographic projections. The field of view of the achieved holographic projections trades off against the UHD resolution of the SLM, with the maximum diffraction angle restricted by the SLM pixel pitch of 3.74 µm. The calculated field of view with the laser wavelength of 632.8 nm is:

$$FOV_1 = \arcsin\left(\frac{0.6328}{4 \times 3.74}\right) = \arcsin(0.0423)$$
$$FOV_{diagonal} = \arcsin(0.0423) \times \sqrt{2} \times 2 = 6.856^\circ$$
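As a quick numerical check of Eqs. (8)-(9) (a sketch for verification only, not part of the original workflow):

```python
import numpy as np

wavelength_um = 0.6328    # He-Ne laser wavelength
pixel_pitch_um = 3.74     # SLM pixel pitch

# Maximum diffraction half-angle from the grating formula with the factor of four used in the text.
fov_1 = np.degrees(np.arcsin(wavelength_um / (4 * pixel_pitch_um)))   # ~2.42 degrees
fov_diagonal = fov_1 * np.sqrt(2) * 2                                  # ~6.86 degrees
print(round(fov_1, 3), round(fov_diagonal, 3))
```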

The wavelength of the He-Ne laser and the pixel pitch of the SLM determine the maximum FOV of 6.856°. The smaller the pixel pitch of an SLM, the greater the FOV. Other investigations with an identical SLM pixel pitch and a laser wavelength of 550 nm achieved a FOV of 8.4° [45]. For HUD applications, the FOV achieved in this work is acceptable considering the driver’s limited position inside the vehicle. Near-eye holographic display technologies achieved a FOV of 45°, but with a limited eye box size of 7 mm×7 mm and a low-contrast replay field. Others developed asymmetric-FOV holographic HUDs, which achieved a FOV of 30° (horizontal) and 24° (vertical) with a resolution of 1280×1024 px [46]. This technology is useful in multi-viewer applications. For the automotive HUD, strong contrast in the replay field results and an enlarged eye box are of greater importance. The focus of the present work was the enlarged eye box with a resolution of 3840×2160 px and the dissipation of the zero order to enable an accurate replay field filling the eye box of the driver with holographic objects. The impact of the SLM on the FOV and the eye box has been studied for the limited driver position, but for an observer with lateral and rotational motion the achieved FOV and eye box are not large enough, which could be investigated in future studies [45,47,48]. However, an increase in the eye box decreases the resolution of the reproduced hologram. In the present work, three different setup architectures were developed for 2D with zero order, 2D without zero order and 3D floating modes. Future directions for increased efficiency within these setup architectures could focus on algorithm optimisation in terms of time-efficiency and accuracy, and on using different light sources.

The methods used in this work demonstrated the basic principles of aligning 3D holographic projections with real-life objects. This process could be optimised in both the computational and the experimental approaches. In future work, more objects aligned with real-life objects could be added to recreate the AR experience. This could be achieved with a time-efficient algorithm that spatially slices the hologram into holographic elements (hologels) by using the multiple-viewpoint rendering technique and providing motion parallax with an occlusion effect [49,50]. The slices could generate accurate depth cues, and the advantage of the algorithm is that it is compatible with computer graphics rendering techniques for producing quality 3D images at a decreased computing load. A need for optimisation of the computational algorithm was identified in this work when the CGH was adjusted with distance. With increasing distance, the CGH decreased in size. This automation could be further optimised for better alignment with the real-life model. To achieve better AR results, the aligned replay field images could be integrated with dynamic 3D holography at high frame rates to produce video projections in the visible range [51,52]. Currently, a frame rate of 9523 frames per second and 228 different holographic frames are achieved through high-speed dynamic laser beam modulation and space-channel metasurfaces [48,53]. Another area for future research is real-time data acquisition and generation of holograms, which could be employed to navigate and orient the driver in real time through traffic [54,55]. This could lead to mobile holographic AR driver assistance systems on demand [56].

5. Conclusions

A holographic UHD HUD was developed for projecting LiDAR images for automotive applications. The capabilities of a 4K SLM to project holographic images in HUDs were demonstrated. These capabilities were demonstrated by projecting 2D images and in-eye 3D images aligned with real-life objects at different distances. The panoramic holographic projection in the far field was achieved through projection with the eye’s lens. The eye’s lens focuses the images modulated by the SLM at arbitrary distances on the road, where the driver’s focus lies, and projects the images onto the retina. This allowed aligning holographic objects with real-life objects in the replay field at different depths to create AR. By aligning the holographic objects with real-life objects, the targeted eye box size for the HUD application was achieved. With variable distances, the holographic AR objects can act as an alert mechanism within the driver’s field of view without distractions. Panoramic holographic projections can display several objects at different depths, focused up to infinity, to recreate the way human eyes see objects at different distances within the driving experience. The digital holograms offer the driver multi-dimensional objects, as opposed to the conventional HUD projection on the windscreen. The integration of LiDAR point cloud data presents an approach to enhance current safety and security levels in the transportation sector by projecting road obstacles in real time. The LiDAR point clouds may allow for identifying road obstacles that are hidden (behind other objects) from the driver’s field of view. The integration of the scanned LiDAR data into the holographic HUD in AR mode can enhance obstacle identification by alerting the driver in real time. Driver identification with optical scanners and human recognition technologies can be combined with machine learning approaches to create security applications in the automotive sector.

Funding

Engineering and Physical Sciences Research Council (EP/S022139/1); Stiftung der Deutschen Wirtschaft; H2020 Marie Skłodowska-Curie Actions (896410); Consejo Nacional de Ciencia y Tecnología (FORDECYT-PRONACES 1327713).

Disclosures

The authors declare no conflicts of interest.

References

1. F. Kuhnert, C. Stürmer, and A. Koster, “Five trends transforming the Automotive Industry,” PricewaterhouseCoopers GmbH, Berlin, Germany, 1, 1–48 (2018).

2. J. Kocić, N. Jovičić, and V. Drndarević, “Sensors and sensor fusion in autonomous vehicles,” in 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, (IEEE, 2018), 420–425.

3. Z. Wang, Y. Wu, and Q. Niu, “Multi-sensor Fusion in Automated Driving: A Survey,” IEEE Access (2019).

4. I. Politis, P. Langdon, D. Adebayo, M. Bradley, P. J. Clarkson, L. Skrypchuk, A. Mouzakitis, A. Eriksson, J. W. Brown, and K. Revell, “An evaluation of inclusive dialogue-based interfaces for the takeover of control in autonomous cars,” in 23rd International Conference on Intelligent User Interfaces, (Tokyo, Japan, 2018), 601–606.

5. S. Okabayashi, M. Furukawa, M. Sakata, and T. Hatada, “Driver’s Ability to Recognize the Forward View and Head-Up Display Images in Automobiles,” J. Light Visual Environ. 16(2), 13–21 (1992). [CrossRef]  

6. R. Fisher, Aircraft head-up displays from refractors to holograms, OE/LASE ‘92, Los Angeles, CA (SPIE, 1992), Vol. 10263.

7. F. Nobis, M. Geisslinger, M. Weber, J. Betz, and M. Lienkamp, “A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection,” in Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn, Germany, (IEEE, 2019), 1–7.

8. V. Charissis and M. Naef, “Evaluation of prototype automotive head-up display interface: testing driver's focusing ability through a VR simulation,” in IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, (IEEE, 2007), 560–565.

9. S. Sechrist, “The Expanding Vision of Head-Up Displays: HUDs for Cars at Display Week 2017,” Inf. Disp. 33(5), 18–23 (2017). [CrossRef]  

10. E. Štrumbelj and I. Kononenko, “Explaining prediction models and individual predictions with feature contributions,” Knowl. Inf. Syst. 41(3), 647–665 (2014). [CrossRef]  

11. H. Bast, D. Delling, A. Goldberg, M. Müller-Hannemann, T. Pajor, P. Sanders, D. Wagner, and R. F. Werneck, “Route planning in transportation networks,” in Algorithm engineering (Springer, 2016), pp. 19–80.

12. S. Keates, P. J. Clarkson, L.-A. Harrison, and P. Robinson, “Towards a practical inclusive design approach,” in Proceedings on the 2000 Conference on Universal Usability, (Arlington, VA, 2000), 45–52.

13. L. Skrypchuk, A. Mouzakitis, P. Langdon, and P. Clarkson, “The Effect of Age and Gender on Task Performance in the Automobile,” in Cambridge Workshop on Universal Access and Assistive Technology, Cambridge, UK, (Springer, 2018), 17–27.

14. M. Tonnis, C. Sandor, G. Klinker, C. Lange, and H. Bubb, “Experimental evaluation of an augmented reality visualization for directing a car driver's attention,” in Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), Vienna, Austria, (IEEE, 2005), 56–59.

15. H. Okumura, T. Sasaki, A. Hotta, A. Moriya, N. Okada, and O. Nagahara, “Hyperrealistic head-up-display for automotive application,” in 2011 IEEE International Conference on Consumer Electronics ICCE, (Las Vegas, NV, 2011), 875–876.

16. J. W. Goodman, Introduction to Fourier Optics, 3 Edition, Roberts & Company, Englewood, CO (2005).

17. P.-A. Blanche, C. Bigler, C. Draper, J. McDonald, and K. Sarma, Holography for automotive applications: from HUD to LIDAR, SPIE Optical Engineering + Applications, San Diego, CA (SPIE, 2018), Vol. 10757.

18. T. Ogawa, H. Sakai, Y. Suzuki, K. Takagi, and K. Morikawa, “Pedestrian detection and tracking using in-vehicle lidar for automotive application,” in IEEE Intelligent Vehicles Symposium (IV), (Baden-Baden, Germany, 2011), 734–739.

19. M. Kutila, P. Pyykönen, H. Holzhüter, M. Colomb, and P. Duthon, “Automotive LiDAR performance verification in fog and rain,” in 21st International Conference on Intelligent Transportation Systems (ITSC), (Maui, HI, 2018), 1695–1701.

20. P. Wilkes, M. Disney, M. B. Vicari, K. Calders, and A. Burt, “Estimating urban above ground biomass with multi-scale LiDAR,” Carbon Balance Manage. 13(1), 10 (2018). [CrossRef]  

21. W. Wang, X. Zhu, K. Chan, and P. Tsang, “Digital holographic system for automotive augmented reality head-up-display,” in IEEE 27th International Symposium on Industrial Electronics (ISIE), Cairns, Australia, (IEEE, 2018), 1327–1330.

22. B. Mullins, P. Greenhalgh, and J. Christmas, “59-5: Invited Paper: The Holographic Future of Head Up Displays,” SID Symp. Dig. Tech. Pap. 48(1), 886–889 (2017). [CrossRef]  

23. J. Christmas and N. Collings, “75-2: Invited Paper: Realizing Automotive Holographic Head Up Displays,” SID Symp. Dig. Tech. Pap. 47(1), 1017–1020 (2016). [CrossRef]  

24. L. Ma, G. Zheng, J. U. Eitel, L. M. Moskal, W. He, and H. Huang, “Improved salient feature-based approach for automatically separating photosynthetic and nonphotosynthetic components within terrestrial lidar point cloud data of forest canopies,” IEEE Trans. Geosci. Remote Sensing 54(2), 679–696 (2016). [CrossRef]  

25. M. B. Vicari, M. Disney, P. Wilkes, A. Burt, K. Calders, and W. Woodgate, “Leaf and wood classification framework for terrestrial LiDAR point clouds,” Methods Ecol. Evol. 10(5), 680–694 (2019). [CrossRef]  

26. R. Häuslschmid, D. Ren, F. Alt, A. Butz, and T. Höllerer, “Personalizing content presentation on large 3d head-up displays,” PRESENCE: Virtual and Augmented Reality 27(1), 80–106 (2019). [CrossRef]  

27. A. J. Macfaden, G. S. Gordon, and T. D. Wilkinson, “An optical Fourier transform coprocessor with direct phase determination,” Sci. Rep. 7(1), 13667 (2017). [CrossRef]  

28. J. Jia, J. Liu, G. Jin, and Y. Wang, “Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling,” Appl. Opt. 53(27), 6287–6293 (2014). [CrossRef]  

29. C.-Y. Chen, W.-C. Li, H.-T. Chang, C.-H. Chuang, and T.-J. Chang, “3-D modified Gerchberg–Saxton algorithm developed for panoramic computer-generated phase-only holographic display,” J. Opt. Soc. Am. B 34(5), B42–B48 (2017). [CrossRef]  

30. H. Nishi, K. Matsushima, and S. Nakahara, “Rendering of specular surfaces in polygon-based computer-generated holograms,” Appl. Opt. 50(34), H245–H252 (2011). [CrossRef]  

31. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012). [CrossRef]  

32. D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer-generated holograms using tilted wavefront recording plane method,” Opt. Express 23(2), 1740–1747 (2015). [CrossRef]  

33. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef]  

34. B. Katz, N. T. Shaked, and J. Rosen, “Synthesizing computer generated holograms with reduced number of perspective projections,” Opt. Express 15(20), 13250–13255 (2007). [CrossRef]  

35. Y. Takaki and K. Ikeda, “Simplified calculation method for computer-generated holographic stereograms from multi-view images,” Opt. Express 21(8), 9652–9663 (2013). [CrossRef]  

36. T. Haist, M. Schönleber, and H. Tiziani, “Computer-generated holograms from 3D-objects written on twisted-nematic liquid crystal displays,” Opt. Commun. 140(4-6), 299–308 (1997). [CrossRef]  

37. D. G. Grier, “A revolution in optical manipulation,” Nature 424(6950), 810–816 (2003). [CrossRef]  

38. I. Reutsky-Gefen, L. Golan, N. Farah, A. Schejter, L. Tsur, I. Brosh, and S. Shoham, “Holographic optogenetic stimulation of patterned neuronal activity for vision restoration,” Nat. Commun. 4(1), 1509 (2013). [CrossRef]  

39. Z. Zalevsky, D. Mendlovic, and R. G. Dorsch, “Gerchberg–Saxton algorithm applied in the fractional Fourier or the Fresnel domain,” Opt. Lett. 21(12), 842–844 (1996). [CrossRef]  

40. C. Chang, J. Xia, J. Wu, W. Lei, Y. Xie, M. Kang, and Q. Zhang, “Scaled diffraction calculation between tilted planes using nonuniform fast Fourier transform,” Opt. Express 22(14), 17331–17340 (2014). [CrossRef]  

41. J. E. Greivenkamp, Field guide to geometrical optics (SPIE Press, Bellingham, WA, 2004), Vol. 1.

42. S. Wei, Z. Fan, Z. Zhu, and D. Ma, “Design of a head-up display based on freeform reflective systems for automotive applications,” Appl. Opt. 58(7), 1675–1681 (2019). [CrossRef]  

43. C. Chang, W. Cui, J. Park, and L. Gao, “Computational holographic Maxwellian near-eye display with an expanded eyebox,” Sci. Rep. 9(1), 18749 (2019). [CrossRef]  

44. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2019). [CrossRef]  

45. Y. Isomae, T. Ishinabe, Y. Shibata, and H. Fujikake, “Alignment Control of Liquid Crystals in a 1.0-µm-pitch Spatial Light Modulator by Lattice-shaped Dielectric Wall Structure,” in SID Symposium Digest of Technical Papers, (2019), 251–258.

46. H. Peng, D. Cheng, J. Han, C. Xu, W. Song, L. Ha, J. Yang, Q. Hu, and Y. Wang, “Design and fabrication of a holographic head-up display with asymmetric field of view,” Appl. Opt. 53(29), H177–H185 (2014). [CrossRef]  

47. L. Onural, F. Yaras, and H. Kang, “Digital holographic three-dimensional video displays,” Proc. IEEE 99(4), 576–589 (2011). [CrossRef]  

48. X. Gao, J. Werner, M. Necker, and W. Stork, A calibration method for automotive augmented reality head-up displays using a chessboard and warping maps, Twelfth International Conference on Machine Vision, Amsterdam, Netherlands (SPIE, 2020), Vol. 11433.

49. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Layered holographic stereogram based on inverse Fresnel diffraction,” Appl. Opt. 55(3), A154–A159 (2016). [CrossRef]  

50. A. Gilles, P. Gioia, R. Cozot, and L. Morin, “Computer generated hologram from Multiview-plus-Depth data considering specular reflections,” in IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Seattle, WA, (2016), 1–6.

51. D. K. Shah, J. Ascenso, C. Brites, and F. Pereira, “Evaluating multi-view plus depth coding solutions for 3D video scenarios,” in 2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Zurich, Switzerland, (2012), 1–4.

52. H. Gao, Y. Wang, X. Fan, B. Jiao, T. Li, C. Shang, C. Zeng, L. Deng, W. Xiong, J. Xia, and M. Hong, “Dynamic 3D meta-holography in visible range with large frame number and high frame rate,” Sci. Adv. 6(28), eaba8595 (2020). [CrossRef]  

53. L. Shi, F.-C. Huang, W. Lopes, W. Matusik, and D. Luebke, “Near-eye light field holographic rendering with spherical waves for wide field of view interactive 3D computer graphics,” ACM Trans. Graph. 36(6), 1–17 (2017). [CrossRef]  

54. T. Höllerer, S. Feiner, D. Hallaway, B. Bell, M. Lanzagorta, D. Brown, S. Julier, Y. Baillot, and L. Rosenblum, “User interface management techniques for collaborative mobile augmented reality,” Computers & Graph. 25(5), 799–810 (2001). [CrossRef]  

55. T. Höllerer, S. Feiner, T. Terauchi, G. Rashid, and D. Hallaway, “Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system,” Computers & Graph. 23(6), 779–785 (1999). [CrossRef]  

56. W. Narzt, G. Pomberger, A. Ferscha, D. Kolb, R. Müller, J. Wieghardt, H. Hörtner, and C. Lindinger, “Augmented reality navigation systems,” Univ. Access Inf. Soc. 4(3), 177–187 (2006). [CrossRef]  
