
Large real-time holographic 3D displays: enabling components and results

Open Access

Abstract

A holographic 3D display with 300 mm × 200 mm active area was built. The display includes a spatial light modulator that modulates amplitude and phase of light and thus enables holographic reconstruction with high efficiency. Furthermore, holographic optical elements in photopolymer films and laser light sources are used. The requirements on these optical components are discussed. Photographs taken at the display demonstrate that a 3D scene is reconstructed in depth, thus enabling selective accommodation of the observer's eye lenses and natural depth perception. The results demonstrate the advantages of SeeReal's holographic 3D display solution.

© 2017 Optical Society of America

1. INTRODUCTION

Holographic 3D displays are an advanced alternative to stereoscopic 3D displays. Several approaches to real-time holographic 3D displays have been demonstrated in the past, as described in Refs. [1–3]. SeeReal Technologies presented its unique and proprietary approach to real-time holography several years ago [4].

The promising feature of holographic 3D displays in comparison with stereoscopic 3D displays is connected with the accommodation–convergence (AC) conflict [5]. In brief, the AC conflict arises when the depth cues from accommodation (of the eye lens) and convergence (of the eyes) generate inconsistent information. This can occur, e.g., when viewing a 3D scene via a stereoscopic display, which necessitates that the eye focus remains in the plane of the display surface, regardless of stereoscopic convergence cue. The AC conflict may lead to visual discomfort and fatigue. Holographic 3D technology is considered a solution to overcome the AC conflict and to enable other depth cues, e.g., motion parallax.

The term “holographic” in the context of 3D displays is often used with different meanings. In its genuine form, a holographic display uses a spatial light modulator (SLM) with an encoded hologram in order to reconstruct a 3D scene by interference of light. This type of holographic display reconstructs the 3D scene in depth and thus facilitates accommodation and convergence of the eyes.

In a wider context, the term “holographic display” is sometimes used to describe a 2D or stereoscopic display that uses holographic or diffractive optical elements (HOEs, DOEs) as light shaping or light deflecting elements. Such a display is not holographic in its genuine sense and does not lead to proper accommodation and convergence of the eyes.

Our holographic displays are holographic in this genuine sense. A computer-generated hologram (CGH) is calculated and encoded on the SLM in real time. The SLM modulates the amplitude and phase of light and thus reconstructs the 3D scene in three dimensions, including depth.

Additionally, HOEs are used in our holographic displays for beam shaping, e.g., for light collimation, focusing, and deflection. The employed HOEs replace bulky refractive optical elements and enable large holographic displays while maintaining a compact form factor.

After a brief recapitulation of our approach, we discuss requirements and solutions for components that enable an improved holographic display. Further, some of the progress in our technology is demonstrated by photographs of holographic reconstructions.

2. RECAPITULATION OF CONCEPT

Our concept of real-time holography is based on sub-hologram (SH) encoding and tracked viewing windows (VWs). The concept is described in detail in [4]; it is briefly recapitulated here and illustrated in Fig. 1.

Fig. 1. Schematic principle of SeeReal's holographic 3D display. Light source, FL, SLM, SHs, and VW are shown. The 3D scene can be located anywhere in the yellow area and is visible from the VW that is positioned at the eye pupil.

The light source illuminates the SLM and is focused into the VW by the field lens (FL). Alternatively, the light is collimated by a first lens before the SLM and focused into the VW by a second lens after the SLM. The VW is positioned in the observer plane. The distance between SLM and VW is the viewing distance d. The SLM has a pixel structure with pitch δy.

The VW with size wy is located within one diffraction order in the observer plane. The size of a diffraction order is given by λd/δy in linear approximation, with wavelength λ. As an example, the VW size is 13 mm for λ = 465 nm, d = 1 m, and δy = 35 μm. That size is larger than the pupil of the eye at which the VW is positioned. Therefore, the 3D scene is visible from the VW. The VW is tracked to the new eye position when the observer moves.
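As a quick cross-check of these numbers, the following minimal Python sketch evaluates λd/δy (function and variable names are ours, chosen for illustration):

```python
def viewing_window_size(wavelength_m: float, distance_m: float, pitch_m: float) -> float:
    """Size of one diffraction order in the observer plane (linear approximation lambda*d/pitch)."""
    return wavelength_m * distance_m / pitch_m

# Values from the text: lambda = 465 nm, d = 1 m, vertical pixel pitch = 35 um
wy = viewing_window_size(465e-9, 1.0, 35e-6)
print(f"VW size: {wy * 1e3:.1f} mm")  # ~13.3 mm, comfortably larger than the eye pupil
```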

Each object point of the 3D scene is encoded in a SH on the SLM. The SH size and position are given by projecting the VW through the object point onto the SLM, as indicated by the green lines in Fig. 1. The SH can be considered a diffractive lens that generates a focus point at the position of the object point. Positive and negative lens powers are possible for generating real and virtual object points. In our concept, highest priority is to reconstruct the wavefront in the VW that would be generated by a real 3D scene, and not the 3D scene itself.
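The projection of the VW through an object point onto the SLM follows from similar triangles. The sketch below is our own illustration of that geometry (sign convention and names are assumptions, not taken from [4]): the VW plane sits at z = 0, the SLM at z = d, and object points lie at a distance z from the VW.

```python
def sub_hologram_1d(vw_center_y, vw_size_y, obj_y, obj_z, slm_z):
    """Project the VW through an object point onto the SLM (1D, vertical direction).

    obj_z: distance of the object point from the VW plane; 0 < obj_z < slm_z for
    points in front of the SLM, obj_z > slm_z for points behind it.
    Returns (center, size) of the sub-hologram on the SLM.
    """
    scale = slm_z / obj_z                                  # similar triangles: VW point -> SLM point
    edges = [vw_center_y - vw_size_y / 2, vw_center_y + vw_size_y / 2]
    hits = [y + (obj_y - y) * scale for y in edges]        # where the projected VW edges hit the SLM
    return 0.5 * sum(hits), abs(hits[1] - hits[0])

# Object point on axis, 0.35 m in front of the SLM (0.65 m from the VW), VW size 13 mm:
print(sub_hologram_1d(0.0, 13e-3, 0.0, 0.65, 1.0))         # SH ~7 mm wide, centered on axis
```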

The size of the 3D scene is limited only by the size of the SLM and not by the pixel pitch. The 3D scene can be reconstructed anywhere in the frustum that is defined by the VW and the SLM, i.e., the yellow area in Fig. 1. Therefore, our concept of SH encoding and tracked VWs enables large 3D scenes with feasible pitch and number of SLM pixels.

We already presented holographic displays several years ago [4]. These displays used mainly off-the-shelf components. Meanwhile, improved holographic displays have been built with dedicated components. The main components and results are described in the following sections.

3. ENABLING COMPONENTS

The main components of our holographic display are the illumination, the SLM, and the lenses. The backlight unit (BLU) expands the light from the light source and provides collimated coherent light for illuminating the SLM. The amplitude and phase of the light are modulated in the SLM. The modulated light is focused into the observer plane by a FL.

These components are now described in detail.

A. Volume Gratings

Volume gratings (VGs) are diffractive elements consisting of a periodic refractive index (RI) or absorption modulation throughout the entire volume of the element. Here we consider only gratings with RI modulation, also called volume phase gratings. They possess a number of unique properties that make them very attractive for display technologies. A theoretical description of diffraction in a VG was given by Kogelnik within his coupled-wave theory [6].

An incident beam passing through a VG is diffracted, forming two beams (transmitted and diffracted). In contrast to thin diffractive gratings, only one diffracted beam exists in a VG. One of the main parameters of a VG is the diffraction efficiency (DE), the ratio of the diffracted light power to the incident light power. This value characterizes the total efficiency of the VG and is often called the "absolute DE." To characterize the diffraction properties of the VG itself, the so-called "Kogelnik DE" is used, i.e., the ratio of the diffracted beam power to the sum of the powers of the diffracted and transmitted beams. The Kogelnik DE can reach 100%, i.e., all the incident light can be deflected. The absolute DE depends on the Kogelnik DE and on optical losses (Fresnel reflections, absorption, scattering, etc.).
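In code, the two figures of merit are simply two ratios of measured beam powers (a trivial helper with our own naming):

```python
def absolute_de(p_diffracted_w: float, p_incident_w: float) -> float:
    """Absolute DE: diffracted power relative to the power incident on the grating."""
    return p_diffracted_w / p_incident_w

def kogelnik_de(p_diffracted_w: float, p_transmitted_w: float) -> float:
    """Kogelnik DE: diffracted power relative to diffracted + transmitted power,
    i.e., excluding Fresnel reflection, absorption, and scattering losses."""
    return p_diffracted_w / (p_diffracted_w + p_transmitted_w)
```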

Angular and spectral selectivity are very important properties of VGs. Ideally, only light at the design wavelength that is incident at the design angle (also called the "Bragg angle") onto the VG is diffracted. Deviation of the incidence angle or wavelength from the design values leads to a decreasing or vanishing DE. This property is extremely important for RGB display applications where each HOE consists of three (or more) independent gratings designed for three different wavelengths. The selectivity of VGs can reduce crosstalk, i.e., unwanted diffraction at the grating designed for another wavelength. Angular and spectral selectivity depend on the thickness and period of the VG and can usually be described by the square of the sinc function, sinc(x) = sin(x)/x [6].
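A rough sketch of this selectivity behaviour, assuming the standard coupled-wave result for a lossless, unslanted transmission grating (this is a textbook form of Kogelnik's solution, not a formula quoted from this paper):

```python
import numpy as np

def transmission_vg_de(nu: float, xi: np.ndarray) -> np.ndarray:
    """Kogelnik DE of a lossless transmission volume grating.

    nu : coupling strength ~ pi * delta_n * thickness / (lambda * cos(theta))
    xi : dephasing parameter, proportional to thickness and to the deviation of
         incidence angle or wavelength from the Bragg condition.
    """
    s = np.sqrt(nu**2 + xi**2)
    return np.sin(s)**2 / (1.0 + (xi / nu)**2)

xi = np.linspace(-10.0, 10.0, 401)
de = transmission_vg_de(np.pi / 2, xi)   # 100% DE at the Bragg condition (xi = 0)
# The curve falls off with sinc^2-like side lobes; a thicker grating maps the same
# angular or spectral detuning to a larger xi, i.e., to a narrower selectivity curve,
# which is the basis of the crosstalk reduction discussed below.
```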

The material for VGs is very important. New photopolymer (PP) films for volume holography were developed, e.g., Bayfol from Covestro [7]. PPs are at the moment the most appropriate materials for display applications. The best PPs provide close to 100% DE at layer thicknesses of the order of 10 μm. They have almost no absorption in the whole visible range and low scattering of less than a few percent. A PP film with a recorded VG can be laminated onto another VG to form an extremely thin stack for RGB applications.

B. BLU with VG

The BLU creates three collimated coherent beams of different wavelengths (R, G, and B). The size of the beams is equal to the display size. The size available in our laboratory at that time was 400 mm diagonal.

Creating a compact BLU is a challenging task. One concept is shown in Fig. 2 (left). The beam is incident on the VG at a large angle α and deflected by the VG to normal output. Thus, the cross section is expanded by a factor of 1/cos(α). Currently we use α = 84.3 deg, leading to a 10-fold beam expansion. Larger incidence angles leading to larger beam expansions are possible.
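The quoted expansion factor follows directly from the cross-section geometry:

```python
import math

def expansion_factor(incidence_deg: float) -> float:
    """Cross-section expansion for deflection from incidence angle alpha to normal output."""
    return 1.0 / math.cos(math.radians(incidence_deg))

print(f"{expansion_factor(84.3):.1f}x")   # ~10.1x, i.e., the 10-fold expansion quoted above
```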

Fig. 2. Concept of beam expansion in VG (left) and concept of cFL (right).

Figures 3 and 4 show a drawing and the experimental setup of the BLU, respectively. The divergent light at the fiber output is collimated by a lens with small diameter, reflected twice at mirrors, and incident on the first VG stack. The first VG stack expands the beam in the horizontal direction and directs the beam to the second stack that expands the beam in vertical direction.

Fig. 3. Drawing of the BLU. The light after the fiber output is collimated by a collimating lens and twice expanded in VG stacks.

Fig. 4. Experimental setup of BLU with area 300 mm × 220 mm. The light after the fiber output is collimated by a collimating lens and twice expanded in VG stacks.

Each stack comprises one VG for each RGB wavelength, i.e., 3 VGs per stack. The stack is coated with an antireflection (AR) coating to prevent high reflection losses at the air–glass interface with 84.3 deg incidence angle. The AR coating is a multi-layer stack of dielectric layers that is specially designed for the incidence angle and the RGB laser wavelengths.

The BLU for the described display generates a collimated RGB beam with an area of 300 mm × 220 mm at a compact size. Taking into account that the final VG stack of the BLU can be attached directly to the SLM, this geometry allows a thin BLU with a thickness of approximately 25 mm.

To create VGs for the BLU, we use a standard two-beam holographic exposure setup. A Verdi V10 laser from Coherent Inc. operating in single-frequency mode at 532 nm is used. Two collimated beams of 400 mm diameter are generated by collimating lenses. These lenses have a reflective coating on the rear surface and are used in reflection. The beams interfere in the sample plane and provide the holographic exposure of the VG. Bayfol PPs from Covestro were used with thicknesses of 16, 15, or 12 μm. We recorded VGs for the three RGB wavelengths using the same recording wavelength by adjusting the exposure angles.

In addition to the direct exposure of the VGs, we also used a copy process to produce the VGs. In the copy process, a master VG was placed onto a PP film and illuminated with one beam. The master VG has the same geometry as the desired copy and a reduced DE of approximately 50%. The zeroth- and first-order beams of approximately equal intensity interfere after the master and provide holographic exposure in the PP film as a copy. The copy process is less sensitive to vibrations and more convenient for mass production.

We have reached 99% Kogelnik DE and 89% absolute DE for a single VG. The homogeneity of the VGs is sufficiently good: in the best case, the Kogelnik DE is higher than 90% over the whole area of the VG. For the stack of three BLU VGs, an absolute DE of at least 60% was reached for all three wavelengths (see Fig. 5). In addition to the above-mentioned losses in a single VG (reflection, absorption, and stray light), there are additional losses in the stack of VGs, namely, the above-mentioned crosstalk.

Fig. 5. Absolute DE for a stack of RGB VGs for the BLU. The DE versus incidence angle is shown for wavelengths 640, 532, and 473 nm. The VGs were measured in the reverse direction, i.e., at normal input and slanted output.

The value of crosstalk can be reduced by adjusting the thickness of the VG. The angular and spectral selectivity of the VG depends on its thickness. Larger thickness leads to narrower angular and spectral selectivity and vice versa. The thickness of the VG also determines the position of the first minimum of DE in angular and spectral selectivity. We have used a 16 μm VG thickness for the BLU. This thickness provides sufficient angular acceptance and small crosstalk. The peak DE for one wavelength is close to the first minimum for the other wavelengths (e.g., blue and red wavelengths on the VG designed for green wavelength).

Figure 5 also shows the narrow angular selectivity of the VG, which reduces crosstalk. Please note that the thickness of this VG is 12 μm instead of 16 μm. The reason is that a PP with especially low absorption and higher transmission was used that was not yet available with 16 μm thickness. The DE in Fig. 5 was measured in the reverse direction, i.e., at around 0 deg incidence angle and 84.3 deg output angle. The reverse beam direction makes the measurements easier and has only a small influence on the measured DE. Furthermore, the blue VG was exposed for 457 nm, whereas the wavelength in the measurement setup was 473 nm. This difference leads to a 1.2 deg offset in the peak position but has only a small influence on the peak DE.

C. FL with VGs

The FL focuses the light after the SLM into the observer plane. The lens consists of two VGs (Fig. 2, right); therefore, we use the term "compound field lens" (cFL). The first, plane-to-plane VG is called the "pre-deflection" (PD) and creates a slanted plane wavefront. This wavefront is diffracted in the second VG, which is called the FL and forms a convergent wavefront. The PD is used to provide suitable grating periods in the FL. Without the PD, the periods in the center of the FL would be very large and could not be reproduced by standard holographic materials. Furthermore, the DE for large periods is usually smaller for two reasons, namely, the small incidence angle and the transition of the grating from a volume to a "thin" grating. Another advantage of the PD is an additional degree of freedom in the period of the grating, which can be helpful in crosstalk optimization.
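The period argument can be made concrete with the grating equation used later in Section 3.D, p = λ/(sin α_in − sin α_out) for the first order. The angles below are our own illustrative values, not design data of the display:

```python
import math

def grating_period(wavelength_m: float, alpha_in_deg: float, alpha_out_deg: float) -> float:
    """Local period of a first-order deflecting VG: p = lambda / (sin a_in - sin a_out)."""
    return wavelength_m / (math.sin(math.radians(alpha_in_deg)) - math.sin(math.radians(alpha_out_deg)))

lam = 532e-9
# Near the center of the FL without a PD, the input is normal and the required output
# deflection toward the VW is only a fraction of a degree -> very large period:
print(grating_period(lam, 0.0, -0.5))    # ~61 um, hard to record and weakly diffracting
# With a ~45 deg pre-deflection, the FL works between large input and small output angles:
print(grating_period(lam, 45.0, -0.5))   # ~0.74 um, a comfortable volume-grating period
```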

An additional advantage of the PD is the simple realization of the exposure setup. For the PD exposure we use a setup similar to the setup for the BLU. The pre-deflection angle is in the range of 40–50 deg. For the FL exposure, the normally incident beam is divergent. It is provided by a lens placed at a certain distance from the sample. To record the FL for an operating wavelength different from the recording wavelength, not only the angle between the recording beams but also the distance of the lens has to be optimized. A pair of PD and FL is designed for each wavelength, so the total number of gratings in the cFL stack is six.

Reduction of crosstalk in the case of the cFL stack is even more important than for BLU. Angular selectivity of each FL VG depends on the position within the VG since period and orientation of the grating fringes are not constant over the VG. Hence, crosstalk in the cFL would be visible as inhomogeneous brightness, whereas crosstalk in the BLU would homogeneously reduce the brightness. Therefore, crosstalk in the cFL is more critical than in the BLU. Nevertheless, fine optimization of the PD angles made it possible to reduce crosstalk down to less than 10% over the whole area of the cFL stack. A VG thickness of 15 μm was used. The remaining crosstalk is not directed into the VW and, hence, does not reduce visible contrast and image quality.

D. Light Source

Holography is based on diffraction and interference and, therefore, has requirements on the coherence of light that illuminates the SLM. Reduced coherence will lead to smearing in the reconstructed 3D scene and, hence, to loss of visible resolution. The required coherence can be calculated from the desired resolution of the 3D scene. Spatial and temporal coherence have to be considered. Spatial coherence is related to the incidence angle on the grating and temporal coherence to the spectral linewidth of the light source. The dispersion of the grating in the BLU is dominant as it has the largest diffraction angle.

The general diffraction equation of a grating is given by

\[ \sin\alpha_0 - \sin\alpha_m = \frac{m\lambda}{p}. \]
The variables are incidence angle α0, diffracted angle αm in diffraction order m, wavelength λ, and grating pitch p.

By differentiation we get

\[ \cos\alpha_0 \cdot d\alpha_0 - \cos\alpha_m \cdot d\alpha_m = \frac{m}{p} \cdot d\lambda. \]

This equation describes the relation among variations or spreads dα0, dαm, and dλ of incidence angle, diffracted angle, and wavelength. Assuming a laser source that emits in lateral single mode and neglecting wavefront aberrations in the optical system between laser and grating, we set dα0=0. As the VG in the BLU is used in first order and at normal output, we set m=1 and α1=0. Using the above equations, we get

\[ d\alpha_1 = \sin\alpha_0 \cdot \frac{d\lambda}{\lambda}. \]

A SH can be considered a small lens that focuses the light from the BLU into a focus at the desired object point position. Hence, an angular spread dα1 at the output of the BLU leads to lateral smearing of the reconstructed object points. The lateral smearing can be expressed as an angular resolution αres that is seen by the observer.

The holographic display can reconstruct objects in front of the SLM (i.e., between SLM and observer) and behind the SLM. For object points far behind the SLM, the distance between observer and SLM is negligible compared with the object distance. Therefore, for large object distances, the angular resolution αres that is seen by the observer is equal to the spread dα1 of the diffracted light at the output of the BLU.

In general, the resolution limit of the human eye is of the order of 1 arcmin = (1/60) deg. Using the above equations with the BLU parameters α0 = 84.3 deg, central wavelength λ = 532 nm, and dα1 = αres = (1/60) deg, we get dλ = 0.16 nm. Therefore, the spectral linewidth of the laser has to be of the order of 0.1 nm in order to achieve a 3D scene resolution that is at the resolution limit of the human eye.
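The 0.16 nm figure can be reproduced directly from the last equation:

```python
import math

alpha0_deg = 84.3                  # incidence angle on the BLU volume grating
lam = 532e-9                       # central wavelength in meters
d_alpha1 = math.radians(1 / 60)    # tolerated angular spread: 1 arcmin, the eye's resolution limit

# d(alpha_1) = sin(alpha_0) * d(lambda) / lambda   =>   d(lambda) = d(alpha_1) * lambda / sin(alpha_0)
d_lambda = d_alpha1 * lam / math.sin(math.radians(alpha0_deg))
print(f"{d_lambda * 1e9:.2f} nm")  # ~0.16 nm, hence a laser linewidth of the order of 0.1 nm
```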

Laser diodes that emit directly at the desired RGB wavelengths are desired due to their compact size, low cost, and good modulation characteristics. However, the spectral linewidth of typical laser diodes is of the order of 1 nm, i.e., too broad by 1 order of magnitude. Laser diodes with attached Bragg gratings for wavelength stabilization are a solution as the Bragg gratings also reduce the spectral linewidth.

Inserting a laser diode in an external cavity might be another option to reduce spectral linewidth. However, the required alignment precision and stability of an external laser cavity might have negative impact on the economic feasibility.

The small spectral linewidth of lasers also has a positive influence on color gamut. The corners of the color triangle are on the perimeter of the chromaticity diagram if lasers are used, whereas they are inside for LEDs [8]. Therefore, the color gamut using lasers is larger than using LEDs as light sources for a display.

E. SLM

Hologram calculation is based on Fourier transformations. Therefore, the CGH contains complex values consisting of amplitude and phase. Several solutions to represent complex values on a SLM consisting of a single liquid crystal (LC) display were discussed earlier [9]. Iterative phase-only encoding is one of these solutions. Its CGH calculation requires an iterative algorithm that is not suitable for real-time calculation of large CGHs. Furthermore, iterative phase-only encoding for our application differs from standard projection on a flat screen: the phase distribution in the VW carries the depth information and, hence, cannot be used as a degree of freedom in the iteration process. As a consequence, only one half of a diffraction order can be used as a VW, which would reduce the size of the VW.

Another solution is light modulation in a SLM consisting of a “sandwich” of two LC displays. The first display modulates the phase of light, the second display the amplitude of light. Both displays are aligned pixel to pixel. CGH calculation does not involve iterations and is thus usable for real-time applications. Furthermore, the VW can extend over the whole size of one diffraction order.

As an example of an alternative method using two LC displays, complex light modulation can also be achieved by adding two wavefronts of light [10]. Light is modulated in two LC displays, and beam splitter cubes are used to combine the light thereafter. This method is best suited to small displays.

We use the "sandwich" method for our SLM with 300 mm × 200 mm active area. Both displays have the same layout and use the ECB mode (ECB = electrically controlled birefringence, more specifically called PA = parallel alignment). In the ECB mode, the LC molecules are aligned parallel to the display substrate in the absence of an electric field [11]. Application of an electric field leads to a tilt of the LC molecules with respect to the display substrate and, hence, to modulation of phase and/or polarization state.

The displays differ in the orientation of the LC molecules when an electric field is applied. The LC molecules of the first display are aligned in a plane that is parallel to the polarization of the incoming light, thus modulating only the phase. The LC molecules of the second display are aligned in a plane that is at 45 deg relative to the polarization of light, thus modulating mostly the amplitude. An inherent small phase modulation of the second display is easily compensated in the first display. Hence, phase and amplitude can be modulated independently.
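A minimal sketch of how a complex CGH value maps onto the two panels; this is a simplified model of our own (no calibration, gamma, or the phase compensation mentioned above):

```python
import numpy as np

def split_complex_cgh(cgh: np.ndarray) -> tuple:
    """Map complex hologram values onto a phase panel and an amplitude panel.

    The phase panel receives arg(c) wrapped to [0, 2*pi); the amplitude panel
    receives |c| normalized to the brightest pixel. Pixel-to-pixel alignment of
    the two panels is assumed, as in the sandwich SLM described in the text.
    """
    phase = np.mod(np.angle(cgh), 2.0 * np.pi)
    amplitude = np.abs(cgh)
    amplitude = amplitude / amplitude.max()
    return phase, amplitude

# Example: a single complex value with amplitude 0.8 and phase 1.3 rad
phase, amp = split_complex_cgh(np.array([[0.8 * np.exp(1j * 1.3)]]))
```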

Both displays have a 350 mm diagonal and a pixel pitch of 135 μm horizontally and 35 μm vertically. The cover glasses of both displays are put together with a thin layer of UV curing glue in between. Alignment is done by precision mechanics that allows lateral positioning and rotation. The alignment is controlled by observation through a microscope objective. The two displays are permanently fixed by curing the UV curing glue after successful alignment.

Figure 6 shows the result of an alignment procedure. An alignment precision of better than 5 μm across the whole SLM surface was repeatedly and reliably achieved.

Fig. 6. Photographs of pixels of the SLM during alignment. The left photograph shows vertical misalignment of the order of 10 μm, visible as a black shadow at the bottom of the pixel apertures. The right photograph shows perfect alignment.

The SLM does not reach full 2π phase modulation for every color. Larger phase modulation would require a thicker LC layer or an LC material with higher birefringence. Neither was available in the used production process. Therefore, 2π phase modulation was reached for the shortest wavelength only. We achieved 2π, 1.9π, and 1.7π phase modulation at 465, 532, and 638 nm, respectively.

The SLM “sandwich” consisting of a phase- and an amplitude-modulating display was the selected preferred option for the described configuration at the time when the holographic display was built.

4. HOLOGRAPHIC DISPLAY AND RESULTS

A. Display Configuration Used for High-Resolution Photographs

The BLU is a compact solution for illuminating the SLM with coherent collimated light. However, as discussed before, it makes demands on the laser sources with respect to their spectral linewidth. The spectral linewidth has to be of the order of 0.1 nm to achieve the highest resolution that can be resolved by the human eye. Laser sources in the visible wavelength range and with such a spectral linewidth are commercially available only with optical output powers of less than 100 mW.

The cFL contains two VGs in sequence. The incident beam is first deflected in the PD and then deflected into the opposite direction in the FL (note that Fig. 2, right, is not to scale and shows an exaggerated aperture angle after the FL). Therefore, the dispersion effects in the two VGs nearly cancel, and the contribution of the cFL to dispersion is negligible. Thus, the constraints on the spectral linewidth are determined by the BLU.

The efficiencies of the current optical components are not yet fully optimized. As an example, the individual components are not assembled into a stack without air gaps, in order to keep the display configuration flexible. Therefore, approximately 4% of the light is lost at each air–glass interface. Furthermore, the SLM does not reach full 2π phase modulation for each color, leading to reduced hologram efficiency. As a further example, the lasers have to be operated at a low duty cycle (see below). These and other compromises lead to an overall display efficiency that is lower than what will be achievable for a fully developed display. Hence, optical output powers of the order of 1 W are required to achieve a luminance of the order of 100 cd/m2 with the currently available components.

Because of the mismatch between presently available and required laser power, the current holographic display does not use a BLU. Instead, a VG lens is used to collimate the diverging laser light into a collimated beam, as shown in the temporary configuration in Fig. 7. The VG lens has smaller diffraction angles than the VG in the BLU and is therefore less susceptible to the spectral linewidth. Therefore, commercially available laser sources with spectral linewidths of the order of 1 nm can be used. The current holographic display uses the blue and red lasers of the Matrix-700 module and a separate green 532 nm laser, both from Necsel IP, Inc. The optical peak powers are 3, 3, and 1.8 W at 638, 532, and 465 nm, respectively. Together, these lasers provide 6.4 W of white-balanced optical power with a flux of 1600 lm at 100% duty cycle.

Fig. 7. Schematic drawing of the holographic display. The target configuration (left) includes the BLU, SLM, and cFL. The temporary configuration (right) uses a point light source (LS) and a collimating lens (CL) to generate collimated light at the SLM.

The LC molecules of the present LC displays have finite response time: it takes approximately 10 ms for amplitude and phase to settle to their target values. As a well-defined amplitude and phase are required for interference-based holography, the lasers are used in pulsed operation. They are switched on when amplitude and phase have settled to their target values. The duty cycle is approximately 10%. Faster LC modes exist but were not available for the required SLM configuration at the time the display was built. With faster LC displays, the duty cycle can be increased and the required laser power reduced.

The pixel pitch of the SLM is 135 μm horizontally and 35 μm vertically. A vertical-parallax-only CGH is used with holographic object point reconstruction in the vertical direction. The vertical VW size wy = 13 mm is sufficient for an eye pupil and facilitates eye focusing on a reconstructed object point.

The VWs are located in the observer plane at 1 m distance from the SLM. To enable full-color holographic 3D viewing, six CGHs are generated for each frame—they represent two views with three colors each. Using a beam steering element in combination with our fast and precise eye tracking system, VW locations can be adapted rapidly to moving eye positions [12].

Because of the limited frame rate of the currently available active optical components (SLM, beam steering), two full-color modes were set up: a fixed VW position for a single eye view with 40 Hz refresh rate and tracked VWs for both eyes with a 10 Hz refresh rate. A monochrome mode with tracked VWs for both eyes works at refresh rates of 30 Hz.

A power meter was positioned at the VW. A display luminance of approximately 70 cd/m2 was calculated from the measured power in the VW.

Figure 8 shows a photograph of the holographic display. The size of the display prototype was determined by the available components and the desired flexibility of the setup, not by our concept. A holographic display product can have the same compact geometry as a common 2D display.

Fig. 8. Photograph of the holographic display. The aperture in the front surface is the active area with a diagonal of 350 mm.

The CGHs are calculated in real time on a standard computer graphics card (Nvidia GTX 980). The graphics card is used for convenience, but a standalone encoding system based on a field programmable gate array (FPGA) was also developed, enabling the option of porting to a custom ASIC. Details of the calculation are described in [13]. The useful frame rate of the whole display system is limited by the response time of the LC molecules of the LC displays. Therefore, a frame rate of 40 Hz is used, although the CGH calculation would allow a higher frame rate. The achievable CGH computation frame rate depends on the arrangement of 3D scene points. As a comparison, the scene shown in Fig. 9 is computed at a frame rate of 46.5 Hz. By shifting the front airplane 0.1 m toward the display (0.75 m distance), the frame rate rises to 59.5 Hz. The 3D scene for calculation of the CGH has a typical resolution of 720 (horizontal) × 960 (vertical) RGB scene points.
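The cited calculation [13] is not reproduced here, but the basic idea of sub-hologram accumulation can be sketched: each object point contributes a quadratic lens phase over its SH region, and overlapping SHs are summed as complex values. The following is a strongly simplified 1D illustration of our own (vertical parallax only, single wavelength, no amplitude weighting or apodization), not the production algorithm:

```python
import numpy as np

def encode_point_1d(cgh, y_slm, obj_y, obj_z, slm_z, vw_size, lam):
    """Accumulate one object point into a 1D complex CGH (simplified illustration).

    The SH region is the projection of the VW through the point onto the SLM;
    inside it, a quadratic lens phase focusing at the point is added.
    obj_z is the distance of the point from the VW plane (the SLM sits at slm_z).
    """
    scale = slm_z / obj_z                          # similar-triangle projection of the VW
    center = obj_y * scale
    half = 0.5 * vw_size * abs(1.0 - scale)        # half the SH size on the SLM
    mask = np.abs(y_slm - center) <= half
    f = slm_z - obj_z                              # signed SLM-to-point distance = SH focal length
    phase = -np.pi * (y_slm[mask] - center) ** 2 / (lam * f)
    cgh[mask] += np.exp(1j * phase)

y = np.linspace(-0.1, 0.1, 5714)                   # 200 mm SLM height, 35 um vertical pitch
cgh = np.zeros_like(y, dtype=complex)
encode_point_1d(cgh, y, 0.0, 0.65, 1.0, 13e-3, 465e-9)   # point 0.35 m in front of the SLM
```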

Fig. 9. Photographs of reconstructed holographic 3D scenes. The CGH is the same for all photographs. The camera lens focus was set on distances 0.65 m (top), 1.2 m (center), and 1.7 m (bottom) from the VW.

Due to real-time computation, the observer can interact with the displayed 3D scenes using human input devices like a hand gesture tracker or 3D navigation device.

B. Photographs of 3D Scenes

A camera was used to take photographs of the reconstructed holographic 3D scenes. The camera lens was placed in the VW plane. An aperture of size (5 mm)² at the VW position transmits only the light that would be seen by the eye. Therefore, the apertures of the camera lens and the eye are comparable, and the resolution of the photographs corresponds to the visible resolution of the eye.

Figure 9 shows a series of three photographs. The airplanes are at distances of 0.65, 1.2, and 1.7 m from the VW. For all photographs, the same CGH was displayed on the SLM, and only the focus of the camera lens was changed.

Figure 9 shows that the 3D scene is reconstructed in depth. For the top photograph, the camera lens was focused at a distance of 0.65 m from the VW. Only the front airplane is in focus, whereas the middle and the rear airplane are out of focus. The middle airplane comes into focus when the camera lens is focused at 1.2 m, while the other airplanes are out of focus (center photograph). The same applies to the bottom photograph, where the camera is focused at 1.7 m.

The photographs in Fig. 9 demonstrate that our holographic display supports selective eye accommodation to different parts of the 3D scene that are located at different depths. Therefore, the holographic display is a solution to overcome the accommodation–convergence conflict.

5. SUMMARY AND CONCLUSION

Our holographic 3D display uses two types of holographic elements:

  • (1) HOEs are used to collimate and focus the light in the BLU and the FL. They replace bulky refractive optical components and thus enable a compact holographic display. The VGs of the HOEs were optimized with respect to high efficiency and low crosstalk. Absolute diffraction efficiencies of at least 60% were achieved for the whole RGB stack of VGs. The VGs set additional requirements on the laser sources of the holographic display with respect to the spectral linewidth.
  • (2) CGHs are displayed on the SLM. The CGHs are calculated in real time on a computer graphics card. The SLM is a sandwich of a phase-modulating LC display and an amplitude-modulating LC display, which are aligned pixel to pixel. The SLM enables amplitude and phase modulation of light and leads to higher efficiencies in holographic 3D scene reconstruction.

Our concept of real-time holography is based on SH encoding and tracked VW technology. The concept was already demonstrated in earlier display prototypes. The holographic display is enabled by our proprietary software and hardware solutions.

Our current holographic display was built with dedicated components and thus has several improvements with respect to the previous prototypes that were built with off-the-shelf components:

  • (a) The SLM modulates the amplitude and phase of light. The efficiency of holographic 3D scene reconstruction is approximately 20%. It is significantly higher than that of the amplitude-modulating SLMs of the previous prototypes, which was only approximately 1%. The higher efficiency contributes to higher brightness of the display.
  • (b) Laser sources are used as light sources. They lead to higher brightness and larger color gamut than the LEDs of the previous prototypes.
  • (c) Thin HOEs are used instead of bulky refractive optical components. They lead to a compact display size. The display can be made even more compact in its final development, when the BLU, SLM, and FL will be assembled into a thin and robust stack.

Photographs of reconstructed holographic 3D scenes were made with a camera. The camera lens was located at the VW with an appropriate aperture. The photographs show that the 3D scene is reconstructed in depth. Selective focus is possible while the same CGH is displayed. The selected part of the 3D scene is in focus, whereas the other parts are out of focus. The photographs demonstrate that our holographic display supports accommodation of the eye lenses.

In conclusion, our holographic display is a solution to overcome the accommodation–convergence conflict that is inherent in stereoscopic 3D displays. Accommodation and convergence will lead to the same depth information, thus avoiding visual discomfort and fatigue. Other depth cues (e.g., motion parallax) are also supported and contribute to natural depth perception with the same set of cues as natural viewing.

The current prototype with 350 mm diagonal uses dedicated components that lead to significant improvements compared to previous prototypes. Further development is in progress that will lead to further improvements in image quality, efficiency, brightness, etc.

The results demonstrate the advantages of our holographic 3D display solution and the feasibility of further development toward a commercial product.

REFERENCES

1. P. St.-Hilaire, S. A. Benton, M. Lucente, J. D. Sutter, and W. J. Plesniak, "Advances in holographic video," Proc. SPIE 1914, 188–196 (1993).

2. C. Slinger, C. Cameron, S. Coomber, R. Miller, D. Payne, A. Smith, M. Smith, M. Stanley, and P. Watson, "Recent developments in computer-generated holography: toward a practical electroholography system for interactive 3D visualization," Proc. SPIE 5290, 27–41 (2004).

3. Y. Takaki and M. Nakaoka, "Scalable screen-size enlargement by multi-channel viewing-zone scanning holography," Opt. Express 24, 18772–18781 (2016).

4. R. Häussler, S. Reichelt, N. Leister, E. Zschau, R. Missbach, and A. Schwerdtner, "Large real-time holographic displays: from prototypes to a consumer product," Proc. SPIE 7237, 72370S (2009).

5. S. Reichelt, R. Häussler, G. Fütterer, and N. Leister, "Depth cues in human visual perception and their realization in 3D displays," Proc. SPIE 7690, 76900B (2010).

6. H. Kogelnik, "Coupled wave theory for thick hologram gratings," Bell Syst. Tech. J. 48, 2909–2947 (1969).

7. D. Jurbergs, F.-K. Bruder, F. Deuber, T. Fäcke, R. Hagen, D. Hönel, T. Rölle, M.-S. Weiser, and A. Volkov, "New recording materials for the holographic industry," Proc. SPIE 7233, 72330K (2009).

8. J. Someya, Y. Inoue, H. Yoshii, M. Kuwata, S. Kagawa, T. Sasagawa, A. Michimori, H. Kaneko, and H. Sugiura, "Laser TV: ultra-wide gamut for a new extended color-space standard, xvYCC," in SID 06 Digest (2006), pp. 1134–1137.

9. S. Reichelt and N. Leister, "Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging," J. Phys. Conf. Ser. 415, 012038 (2013).

10. R. Tudela, E. Martin-Badosa, I. Labastida, S. Vallmitjana, and A. Carnicer, "Wavefront reconstruction by adding modulation capabilities of two liquid crystal devices," Opt. Eng. 43, 2650–2657 (2004).

11. P. Yeh and C. Gu, Optics of Liquid Crystal Displays (Wiley, 2010).

12. E. Zschau and S. Reichelt, "Head- and eye-tracking solutions for autostereoscopic and holographic 3D displays," in Handbook of Visual Display Technology, J. Chen, W. Cranton, and M. Fihn, eds. (Springer, 2012), pp. 1875–1897.

13. E. Zschau, R. Missbach, A. Schwerdtner, and H. Stolle, "Generation, encoding and presentation of content on holographic displays in real time," Proc. SPIE 7690, 76900E (2010).
