Optica Publishing Group

A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction

Open Access

Abstract

A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction is demonstrated, which realizes high view density in the horizontal direction with low crosstalk between micro-pitch viewing zones. A micro-pinhole unit array and a vertically-collimated backlight are specially developed and used, instead of refraction-based optical components such as lenticular lenses, to avoid aberrations and suppress crosstalk, so that multiple view perspectives are accurately projected into each eye pupil of the viewer. Additionally, the spatial information entropy is defined and investigated to improve 3D image perception by balancing resolution, an approach that is generally applicable to reconstructing better 3D images with a limited number of resolution pixels. To enlarge the viewing angle of 3D images with smooth motion parallax, a highly efficient light-field pickup and reconstruction method based on the real-time position of the viewer’s pupils is implemented with an eye tracker, scanning 750 view perspectives with correct geometric occlusion in real time at a frame rate of 40 fps. In the experiment, a floating horizontal-parallax 3D light-field image with a view density of 0.75 mm−1 and micro-pitch crosstalk of less than 7% is perceived, with a clear floating focus depth of 10 cm and a high resolution of 1920 × 1080 within a viewing angle of 70°.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) display techniques are regarded as an ideal way to visualize accurate medical data and geographic information three-dimensionally. Various efforts have been devoted to obtaining high-quality 3D display performance. Autostereoscopic 3D displays based on stereoscopic vision have been commercialized for years [1], and can provide the viewer’s eyes with two parallax images without special eyewear. However, the commercial application of autostereoscopic 3D displays is limited by many inherent drawbacks, including unsmooth motion parallax, a small viewing angle, an inadequate amount of 3D spatial information and the convergence-accommodation conflict (CAC) [2]. The holographic display [3] is considered a natural stereoscopic display, but large-size, dynamic, real-time 3D images remain hard to realize. Light-field displays are proposed to attain natural 3D image presentation; they rely on multiple light-ray sampling of the propagating light rays with specific intensities and directions projected onto the viewer’s retina from each point on the objects. Hence, the light-field display can be regarded as the reconstruction of the vector light-field. The spatial representation of the reconstructed 3D light-field can overcome CAC by rendering natural depth cues with dense view perspectives. In recent years, several kinds of light-field displays have been realized, which can be classified into near-eye (head-mounted) light-field displays [4,5] and glasses-free light-field displays [6–12]. Compared with glasses-free light-field displays, near-eye light-field displays contribute to accurate presentation of light-field information with lower crosstalk, because the retina-based light-ray reconstruction is achieved with optoelectronic micro-components.

Several kinds of advanced glasses-free light-field displays have pursued retina-based light-ray reconstruction by addressing multiple view perspectives into each of the viewer’s two eye pupils. Y. Takaki, K. Tanaka and J. Nakamura developed a super multi-view display [13] that can provide two or more views for each eye of the viewer with a low resolution requirement. The 3D images displayed by their prototype with a view density of 0.38 mm−1 were demonstrated with clear depth cues of 80 mm. However, the lenticular lens used in that work suffers from aberrations and causes high crosstalk that deteriorates the display quality. Moreover, the narrower the pitch of the viewing zones, the higher the crosstalk caused by the aberrations of refraction-based optical components such as lenticular lenses. H. Kakeya recently proposed an advanced full-HD super multi-view display with 18 views based on a time-division multiplexing parallax barrier [14], which can realize a view distribution around the viewer’s eye pupils with a view density of 0.71 mm−1 and can increase the viewing angle with a head tracking system. However, this method requires a pair of LCD panels with a 180 Hz refresh rate to scan 18 views at a frame rate of 30 fps. Most importantly, suppressing crosstalk between different view perspectives is not considered in those super multi-view displays, although it is very important for realizing high 3D perceptive quality and clear depth cues, especially for the display of complex 3D scenes.

Here, a 15.6-inch crosstalk-suppressed dense multi-view horizontal-parallax only light-field display prototype is demonstrated with the clear floating focus depth of 10 cm, the 3D image resolution of 1920 × 1080 and the view density of 0.75 mm−1 at the micro-pitch crosstalk of less than 7%. The 750 view perspectives of relative parts of the displayed 3D images can be continuously produced with the eye tracker in the viewing angle of 70° based on real-time light-field pickup and reconstruction at the frame rate of 40 fps.

One main contribution of our proposed light-field display compared to previous eye-tracking based super multi-view displays is the low crosstalk between micro-pitch viewing zones of different view perspectives. The micro-pinhole unit array and the vertically-collimated backlight are employed, instead of lenticular lenses, to avoid refractive aberrations and significantly suppress micro-pitch crosstalk; this work is discussed in detail in Subsection 2.2. Additionally, in order to improve 3D image perception with a limited number of resolution pixels, balancing the resolution of the elemental images along the vertical and horizontal directions is investigated in Subsection 2.3 to optimize the wavefront recomposing of the light-field with the holographic functional screen. To enlarge the viewing angle with smooth motion parallax, a real-time light-field pickup and reconstruction method is proposed in Section 3, based on which the eye tracker can scan a large number of view perspectives with correct geometric occlusion in real time at a high frame rate.

2. Principle

2.1 System configuration

The proposed light-field display is an efficient retina-based representation of the reconstructed light-field information: the generated light ray bundles addressed into the pupils of the viewer are traced backward to converge at points that reconstruct the volumetric pixels of the original light-field of the target displayed objects or scene.

Principal components of the proposed light-field display are a vertically-collimated backlight (VC-BL), a liquid crystal display (LCD) panel, a micro-pinhole unit array (MPUA), a holographic functional screen (HFS) [15,16] and an eye tracker, as shown in Fig. 1(a). Similar to integral imaging (InI) [9–11], our light-field display configuration uses the LCD panel to load elemental images (EIs) that provide multiple view perspectives, and the different perspectives are integrated as a 3D image. Different from InI, in the demonstrated light-field display the light ray bundles with specific directions and intensities, generated by the pixels of EIs scanned in real time at a high frame rate, are projected into the viewer’s eye pupils over a large viewing angle with the use of the MPUA and the HFS. The real-time viewing positions of the viewer are precisely monitored and obtained by the eye tracker. The MPUA is an occlusion-based light-controlling optical component with excellent light-controlling ability and no aberrations in the horizontal direction [17]. The HFS is holographically printed with speckle patterns exposed on suitable sensitive material, and its diffusion angle is determined by the shape and size of the speckles. The HFS is used to recompose the wavefront and re-modulate the light-field distribution from the light-controlling optical components [11].


Fig. 1 The schematic diagram of the proposed light-field display prototype. (a) The system configuration of the display prototype. (b) The schematic of the MPUA assisted with the LCD panel under the illumination of the VC-BL. The simulations of the radiation energy pattern of (c) the VC-BL and of (d) one period of viewing zones of 6 view perspectives produced by the MPUA and the VC-BL at viewing distance.


To realize high view density with low crosstalk between micro-pitch viewing zones, the MPUA and the VC-BL are designed and fabricated. The VC-BL consists of a point light source, an aspheric lens group, a mirror and an aspheric Linear Fresnel Lens (LFL). Light rays from the VC-BL passing through the pixels of the EIs are accurately addressed into different propagating angles by the MPUA, which ensures that multiple micro-pitch viewing zones with low crosstalk are formed around each eye pupil of the viewer. Assuming that the pupil diameter and interpupillary distance of the viewer are 4 mm and 64 mm respectively, as shown in Fig. 1(a), 6 view perspectives are realized at the viewing distance, and the left eye pupil (LEP) and the right eye pupil (REP) of the viewer each receive a specific 3 of them (the LEP receives views 4-6 and the REP receives views 1-3). The view density can therefore be calculated as 3 views / 4 mm, namely 0.75 mm−1, which means that the lateral width of a viewing zone in the proposed light-field display is 1.333 mm. It should be noted that, according to the coding table of the EIs illustrated in Fig. 1(b) and the display principle that each eye pupil of the viewer is provided with 3 different parallax images at the same time, 1280 × 1080 pixels of the odd (or even) rows and 2560 × 1080 pixels of the even (or odd) rows of the LCD panel are perceived by the left (or right) eye pupil of the viewer at the same time. Therefore, the 3D image resolution can be considered to be (1280 + 2560) / 2 × 1080 pixels, namely 1920 × 1080 pixels. As a result, CAC is obviously alleviated or even eliminated, and a natural 3D scene with clear continuous focus cues is perceived, in agreement with the literature [18,19]. The structure of the MPUA, as shown in Fig. 1(b), is a combination of a lightproof part and a number of micro-pinholes arranged at regular intervals. The micro-pinholes of the MPUA are elliptical, with the major axis parallel to the vertical direction. In the display prototype, the VC-BL has a vertical divergence angle of 0.05° and a horizontal divergence angle of 90°, and the simulation of its radiation energy pattern is illustrated in Fig. 1(c). With the VC-BL and the MPUA with specifically designed parameters, the simulated radiation pattern of one period of viewing zones of 6 view perspectives at the viewing distance is shown in Fig. 1(d).
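As a quick sanity check, the view-density and resolution bookkeeping above can be reproduced with a few lines of Python (all numbers are taken from the text):

```python
# View-density and resolution arithmetic of the prototype, as described above:
# a 4 mm pupil receives 3 distinct views, and the 3840 x 2160 panel's odd and
# even rows are split between the two eyes.

PUPIL_DIAMETER_MM = 4.0
VIEWS_PER_PUPIL = 3

# View density: views that fit across one pupil, per millimetre.
view_density = VIEWS_PER_PUPIL / PUPIL_DIAMETER_MM   # 0.75 mm^-1
viewing_zone_width_mm = 1.0 / view_density           # ~1.333 mm

# Per-eye pixel budget: 1280 x 1080 from odd (or even) rows plus 2560 x 1080
# from even (or odd) rows, so the perceived width is their average.
width = (1280 + 2560) // 2                           # 1920
height = 1080
```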

The eye tracker, composed of a UHD camera and a pupil tracker, is developed and used in our display prototype, driven by a novel pupil-tracking algorithm running on a PC (CPU: Intel Core i7-7700K, GPU: NVIDIA GeForce GTX 970). The UHD camera locates the real-time 3D position of the viewer’s face, while the pupil tracker detects the positions of the viewer’s left and right eye pupils in real time with high accuracy. A tracking detection rate of 90% and a tracking accuracy of 0.5 mm for the eye pupil are achieved with a processing time of 9 ms in tracking mode, at the 40 fps frame rate of the real-time light-field pickup and reconstruction.

2.2 Utilization of the MPUA and the VC-BL for micro-pitch crosstalk suppression

Large view-perspective crosstalk will occur even with the MPUA, despite its excellent light-controlling ability in the horizontal direction, if stray backlight with a large divergence angle is used. To simplify the analysis of the crosstalk caused by the MPUA and stray backlight, the operating principle of the MPUA is illustrated with one MPU, as shown in Fig. 2. Light ray bundles generated by the pixels of the EI panel improperly emit from micro-pinholes in adjacent rows with incorrect propagating directions, and these bundles are projected onto the HFS together with the bundles that pass through their corresponding micro-pinholes with correct propagating directions. View spots formed on the HFS by the bundles with correct propagating directions are marked with dark and light blue oval balls in Fig. 2, while crosstalk view spots formed by the bundles with incorrect propagating directions are marked with red oval balls. View spots are denoted Vn (n = 1, 2, 3, …, 6) to represent the n-th view perspective projected onto the HFS. Both the view spots and the crosstalk view spots are stretched to a large height along the vertical direction by the HFS at the viewing window. Overlap between the vertically stretched crosstalk view spots and the view spots results in high crosstalk between viewing zones of different view perspectives.


Fig. 2 The schematic diagram of the occurrence of high crosstalk between viewing zones of different view perspectives with the use of the MPUA and stray backlight.


In order to markedly suppress the crosstalk between micro-pitch viewing zones, the VC-BL is used, as illustrated in Fig. 1(a) and Fig. 3. The VC-BL ensures that light ray bundles from the pixels of the EIs are addressed by the corresponding micro-pinholes in the same row with correct propagating directions, so the crosstalk view spots are eliminated. The view overlap areas between view spots in the horizontal direction can be constrained to a narrow size by using the MPUA and an HFS with an appropriate horizontal diffusion angle. The threshold of the vertical divergence angle γ of the VC-BL can be obtained by

$$\gamma = \arctan\!\left(\frac{W_p - H_p}{2g}\right) \tag{1}$$
where Wp is the pixel size of the LCD panel, Hp is the height of the micro-pinhole of the MPUA, and g is the gap between the LCD panel and the MPUA. Considering the excellent occlusion-based light-controlling ability of the MPUA in the horizontal direction, the horizontal divergence angle of the VC-BL should be designed to be as large as possible to meet the expectation of a large viewing angle. In the demonstrated display prototype, the micro-pitch crosstalk is alleviated to less than 7%, which reaches the typical level of an eyewear-assisted 3D display for commercialization [20,21].
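As a numerical illustration of the threshold, a minimal Python sketch is given below; it mirrors the expression for γ above, and the pixel size, pinhole height and LCD-to-MPUA gap used here are assumed placeholder values, not the prototype's actual parameters:

```python
import math

def vertical_divergence_threshold_deg(w_p_mm, h_p_mm, g_mm):
    """Threshold vertical divergence angle of the VC-BL, following
    gamma = arctan((W_p - H_p) / (2 g)) as given above.
    All lengths in millimetres; returns degrees."""
    return math.degrees(math.atan((w_p_mm - h_p_mm) / (2.0 * g_mm)))

# Placeholder values (NOT from the paper): 0.09 mm pixel, 0.08 mm pinhole
# height, 3 mm gap -> a sub-0.1 degree threshold, the same order as the
# 0.05 degree vertical divergence quoted for the prototype.
gamma_deg = vertical_divergence_threshold_deg(0.09, 0.08, 3.0)
```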


Fig. 3 The schematic diagram of low crosstalk between viewing zones of different view perspectives with the use of the MPUA and the VC-BL.


The lens design is optimized to realize the VC-BL with the designed light-ray divergence angles. Lens I, lens II and the LFL are designed to be aspheric, and these three aspheric lenses are jointly optimized as a combination. A backward ray-path combinational optimization with the damped least-squares method is implemented to suppress the primary aberrations and higher-order optical aberrations of the lenses. In this backward ray-path optimization, directionally collimated rays serve as the incident rays of the optical system, and the image plane is placed at the position of the point light source. The modulation transfer function of the lens combination serves as the figure of merit for the optical optimization. After the aberrations are suppressed, the optically optimized parameters of the lenses and an easily manufactured size of the point light source are obtained. As shown in Fig. 4, the modulation transfer function of the optimized aspheric lens combination is depicted together with that of a standard lens combination with the same focal lengths and diameters, which shows that the aberrations of the optimized aspheric lens combination are well suppressed within the designed horizontal incident ray angle of 90° and vertical incident ray angle of 0.05°, marked as (90°, 0.05°) in Fig. 4. Through this optimization of the lens design, the VC-BL with the designed light-ray divergence angles can be achieved.


Fig. 4 Comparison of modulation transfer function for the optimized aspheric lens combination and the standard lens combination.


2.3 Optimization of wavefront recomposing with balancing resolution

The HFS is employed to recompose the wavefront and re-modulate the light-field distribution from the MPUA, and multiple light beams with various directions and intensities emitted from the HFS reconstruct the volumetric pixels of the original light-field of the target 3D objects, precisely recovering the 3D spatial information. In this subsection, the spatial information entropy of the sampled view perspective is mathematically defined and experimentally investigated to optimize the wavefront recomposing by balancing the resolution of the EIs. This part of the work is generally applicable to improving 3D image perception with a limited number of resolution pixels.

According to information theory, information entropy characterizes the uncertainty of information. Lower information entropy means higher recovery fidelity of the compressed information under the same decompression coding algorithm. For the recovery of a 3D scene from sampled view perspectives, the spatial information entropy is taken into consideration in the optimization of wavefront recomposing. We use the standard mathematical definition to quantitatively evaluate the spatial information entropy of the sampled view perspective. The spatial information entropy Hi of the i-th sampled view perspective can be obtained with

$$H_i = -\sum_{k=0}^{255} P_k \log_2 P_k \tag{2}$$
where Pk is the probability of pixels with gray value k in the sampled parallax image of the i-th view perspective, taken over all resolution pixels of the EIs. The spatial information entropy changes with different coding tables used to render the EIs. At the viewing distance of the proposed display, the simulation results of recovering one view perspective (view 1) of a reconstructed 3D image are measured with PSNR within a viewing zone with horizontal and vertical fields of view of 0.152°, as shown in Fig. 5(a). The PSNR curve over different average spatial information entropy values within the viewing zone is illustrated in Fig. 5(b). The simulation results demonstrate the trend that the display quality of the recovered view perspective based on wavefront recomposing improves as the spatial information entropy of the sampled view perspective decreases. Because 3D images are integrated from a number of recovered view perspectives, improving each recovered view perspective enhances 3D image perception in the light-field display.
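The entropy definition above translates directly into code; `spatial_information_entropy` below is an illustrative helper (not the authors' code) that evaluates Eq. (2) over a flat list of 8-bit gray values:

```python
import math
from collections import Counter

def spatial_information_entropy(pixels):
    """H = -sum_k P_k log2(P_k) over the 8-bit gray-level histogram of a
    sampled view perspective, following the definition above."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A constant image carries no spatial information entropy, while an image
# split evenly between two gray levels carries exactly 1 bit.
flat_entropy = spatial_information_entropy([128] * 1024)
binary_entropy = spatial_information_entropy([0, 255] * 512)
```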


Fig. 5 (a) The simulation results of the recovered view 1 of a 3D image by using the sampled view 1 with different spatial information entropy values, which are measured with PSNR within the viewing zone. (b) The PSNR curve of the recovered view 1 with different average spatial information entropy values.


In the digital coding process of the EIs, the spatial information entropy of each sampled view perspective can be decreased by balancing the vertical and horizontal resolution of the EIs. Therefore, the optimization of wavefront recomposing for recovering view perspectives can be realized by balancing the resolution of the EIs. Experimental results of this optimization are measured with the PSNR of the recovered view perspective. Figure 6 shows the results for one view perspective (view 1) of a 3D image of a city model, with and without balancing resolution and with and without the HFS. The original view perspective is shown in Fig. 6(a). As illustrated in Figs. 6(b) and 6(c), although the balanced resolution is achieved with the MPUA, the displayed view perspective is not improved effectively without the HFS. Conversely, Fig. 6(d) shows that the recovered view perspective is deformed and distorted when the HFS is used without balancing resolution. The reconstruction of the original view perspective with the HFS is obviously improved by balancing resolution, as illustrated in Fig. 6(e). The experimental results show that better recovered view perception can be obtained by optimizing wavefront recomposing with balancing resolution.


Fig. 6 Comparison of presentation results of the recovered view 1 of the 3D image of a city model, with and without balancing resolution and the HFS. (a) The original view 1. (b) The result with neither. (c) The result with balancing resolution and without the HFS. (d) The result with the HFS and without balancing resolution. (e) The result with both.


3. Real-time light-field pickup and reconstruction

3.1 The retina-based light-field pickup method

In order to recover the target light-field information of the 3D scene, the relative direction and intensity of the light-field from the target 3D scene should be recorded. Here, a retina-based light-field pickup method is proposed based on the human visual system. For the proposed light-field display prototype, an off-axis virtual camera array (CA) consisting of 6 horizontally arranged virtual digital cameras is employed to digitally sample the relative direction and intensity of the target virtual 3D scene, and the cameras of the CA are divided into 2 sub-CAs of 3 cameras that respectively record view perspectives for the observer’s left and right pupils, as shown in Fig. 7. The retina-based light-field pickup method can be mathematically derived from the positional relation between the spatially reconstructed point and the image sensor array of the CA. (x, y, zp) represents the position coordinate of an arbitrary spatial point on an object in the 3D scene, and (ui, vi, zs) (zs = 0) denotes the position coordinate of the corresponding image point on the image sensor of the i-th camera in the CA. The retina-based light-field pickup method can be formulated as Eq. (3).

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{f}{2f - z_p} & 0 \\ 0 & \dfrac{f}{z_p - f} \end{bmatrix} \begin{bmatrix} y \\ x \end{bmatrix} + \begin{bmatrix} \dfrac{f - z_p}{2f - z_p} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} d_i \\ 1 \end{bmatrix} \tag{3}$$
with
$$d_i = \begin{cases} d_1 + \dfrac{W_p}{2}(i-1), & i = 1, 2, 3 \\ d_1 + D_p + \dfrac{W_p}{2}(i-4), & i = 4, 5, 6 \end{cases} \tag{4}$$
where di represents the horizontal distance between the i-th image and the origin O (x = 0, y = 0, z = 0), and f, Dp and Wp are the imaging distance of the virtual camera, the pupillary distance and the pupil size of the observer, respectively (note that Wp here denotes the pupil size, distinct from the pixel size in Subsection 2.2). It is worth mentioning that the direction information of the light ray samples emitted from an arbitrary spatial point on the object toward the observer’s pupils can be accurately recorded with each corresponding image point. The direction information of the light ray samples can be expressed as θi = arctan(f / (di − ui)).
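The camera placement of Eq. (4) can be sketched as follows; `camera_offset` is an illustrative helper, and d1 = 0 is an arbitrary choice for illustration:

```python
def camera_offset(i, d1, pupil_size, pupil_distance):
    """Horizontal offset d_i of the i-th virtual camera following Eq. (4):
    cameras 1-3 record views for one pupil and cameras 4-6 for the other,
    spaced at half the pupil size W_p / 2."""
    if 1 <= i <= 3:
        return d1 + (pupil_size / 2.0) * (i - 1)
    if 4 <= i <= 6:
        return d1 + pupil_distance + (pupil_size / 2.0) * (i - 4)
    raise ValueError("camera index i must be in 1..6")

# With the 4 mm pupil and 64 mm interpupillary distance quoted in Section 2:
# cameras 1-3 cluster around one pupil and cameras 4-6 around the other,
# 64 mm away, each cluster spaced at 2 mm.
offsets = [camera_offset(i, 0.0, 4.0, 64.0) for i in range(1, 7)]
```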


Fig. 7 Ray model for the proposed retina-based light-field pickup method.


In addition, the retina-based light-field pickup method can be implemented to record the light-field in real time based on the position of the observer’s eye pupils at the viewing distance. The real-time position of the pupils is obtained with the eye tracker, and the overall tracking accuracy equals the micro-area size of the view perspective at the viewing distance. Δm = (mx, 0) denotes the real-time horizontal displacement of the observer’s pupil from the original position to the current position. Taking Δm into account in Eq. (3), the mathematical expression of the proposed method can be deduced as Eq. (5),

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{f}{2f - z_p} & 0 \\ 0 & \dfrac{f}{z_p - f} \end{bmatrix} \begin{bmatrix} y \\ x \end{bmatrix} + \begin{bmatrix} \dfrac{f - z_p}{2f - z_p} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} d_i^I \\ 1 \end{bmatrix} + \begin{bmatrix} m_x \\ 0 \end{bmatrix} \tag{5}$$
where diI represents the horizontal distance between the i-th image and the origin O (x = 0, y = 0, z = 0) when the observer is at the initial position, and it can be obtained by Eq. (4).

3.2 The anti-pseudoscopic image mapping method based on backward ray-tracing for light-field reconstruction

The anti-pseudoscopic image mapping method based on the backward ray-tracing technique, used for efficiently and concisely rendering EIs, ensures correct 3D perception and occlusion along the horizontal direction. It geometrically maps the pixels of a set of multiple off-axis recorded parallax images (PIs), generated with the method demonstrated in Subsection 3.1, to the pixels of EIs as shown in Fig. 8. In effect, this mapping method is the inversion of the retina-based light-field pickup, i.e., a backward ray trace, which avoids the pseudoscopic image, one of the most common inherent issues in 3D displays. (s, t) is the coordinate of the pixel of the EIs that maps the image point (ui, vi, zs) in the CA, and (k, m) represents the k-th row and m-th column pixel of the LCD panel in the index coordinate with the origin Op (k = 0, m = 0). It should be pointed out that, owing to the periodic distribution of the generated view perspective micro-areas along the horizontal direction at the viewing distance, the relative view perspective micro-areas with correct direction and intensity light-field information projected onto the observer’s left and right eye pupils (LEP and REP) can be formed across several micro-pitch viewing zone periods, so the cameras of the CA can be uniformly rearranged precisely at a Wp / 2 axial interval in the image mapping process for mathematical simplification. The mathematical description of the proposed image mapping is formulated as Eq. (6).

$$\begin{bmatrix} s \\ t \end{bmatrix} = \begin{bmatrix} \dfrac{g+D}{f} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_i \\ v_i \end{bmatrix} + \begin{bmatrix} \dfrac{g+D}{f} & 0 \\ 0 & p R_v \end{bmatrix} \begin{bmatrix} d_i \\ 1 \end{bmatrix} + \begin{bmatrix} (m-1) \bmod 2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} b \\ 1 \end{bmatrix} \tag{6}$$
with
$$\begin{cases} k = \left\lfloor \dfrac{s}{p} \right\rfloor + k_0 \\ m = \left\lfloor \dfrac{t}{p} \right\rfloor + m_0 \end{cases} \tag{7}$$
where D is the distance between the MPUA and the CA plane, b is the horizontal distance between adjacent pinholes in the MPU, p is the pixel pitch of the LCD panel, and (k0, m0) is the index of the origin O (x = 0, y = 0, z = 0).
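The floor expressions of Eq. (7) translate directly into code; `pixel_index` below is an illustrative helper assuming a single scalar pixel pitch p:

```python
import math

def pixel_index(s, t, p, k0, m0):
    """Map continuous coordinates (s, t) on the EI plane to the LCD pixel
    index (k, m) via the floor expressions of Eq. (7), with pixel pitch p
    and origin index (k0, m0)."""
    return math.floor(s / p) + k0, math.floor(t / p) + m0
```

Note that `math.floor` (rather than integer truncation) matches the floor operator for coordinates on either side of the origin: `pixel_index(1.2, 2.6, 0.5, 0, 0)` gives (2, 5), while a negative coordinate such as s = −0.2 rounds down to the pixel below the origin.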


Fig. 8 The principle of reconstruction with the anti-pseudoscopic image mapping method based on backward ray-trace.


3.3 The real-time implementation of the light-field pickup and reconstruction

The real-time pickup and reconstruction of the light-field expands the viewing angle of the 3D image by producing a large number of view perspectives with correct geometric occlusion and smooth motion parallax based on the real-time position of the viewer’s pupils, which is feasible thanks to the efficiency and conciseness of the proposed retina-based light-field pickup and anti-pseudoscopic image mapping methods. Different components of the light-field of the target 3D scene observed from different angles are recorded in real time, and sets of PIs are generated according to the pupils’ position by the method in Subsection 3.1. When the observer moves away from the initial position, the different visible parts of the displayed 3D scene at different viewing positions are exhibited in real time by scanning EIs at a high frame rate. The scanned EIs are rendered from the sets of PIs by carrying out the anti-pseudoscopic image mapping method. The real-time optical reconstruction of the light-field is thus realized, and a 3D image with higher spatial information capacity and a larger viewing angle is perceived. In addition, the observer obtains smooth motion parallax of the perceived 3D image. The process of the real-time light-field pickup and reconstruction is shown in Fig. 9.


Fig. 9 The process of the real-time light-field pickup and reconstruction.


3.4 Pupil-tracking algorithm

In the proposed light-field display, a novel pupil-tracking algorithm is developed to drive the eye tracker and obtain the real-time positions of the viewer’s eye pupils with high accuracy and low latency. High accuracy and low latency of pupil tracking are essential for realistic presentation of 3D images with smooth motion parallax and correct geometric occlusion over a larger viewing angle. The flow chart of the proposed pupil-tracking algorithm is illustrated in Fig. 10. The RGB image captured with the UHD camera is used to roughly locate the position of the viewer’s face with the Adaboost algorithm and a mono-vision algorithm based on OpenCV. Then, accurate positioning of the viewer’s eye pupils is implemented with our originally proposed differential stereo-vision algorithm using three infrared (IR) parallax images obtained with the pupil tracker. Through the rough positioning of the face and the accurate positioning of the eye pupils, real-time non-discrete position data of the viewer’s eye pupils is obtained. However, an inherent positioning error exists due to the limited precision of the eye tracker, which would cause image flipping and deteriorate the smooth motion parallax within the tracking area if the non-discrete position data of the eye pupils were used directly. To address this issue, the real-time position data of the eye pupils is discretized to mitigate the positioning error, and 125 viewing windows with different corresponding view perspectives are predefined in the light-field display prototype. The real-time non-discrete position data is discretized into 125 equidistant position data intervals along the horizontal direction at the viewing distance within the 70° viewing angle, according to the positions of the 125 predefined viewing windows.
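The discretization step can be sketched as follows; `viewing_window_index` is an illustrative helper, and the total tracked width at the viewing distance is an assumed parameter, since the text does not state its value:

```python
def viewing_window_index(x_mm, tracked_width_mm, num_windows=125):
    """Quantize a continuous horizontal pupil position into one of the
    num_windows predefined, equidistant viewing windows (index 0..124),
    clamping positions that fall outside the tracked range."""
    step = tracked_width_mm / num_windows
    idx = int(x_mm // step)
    return min(max(idx, 0), num_windows - 1)  # clamp to the valid range
```

Snapping the tracked pupil position to a window index in this way prevents sub-window jitter in the raw tracker output from flipping the displayed view perspectives.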
As noted in Subsection 2.1, a tracking detection rate of 90% and a tracking accuracy of 0.5 mm for the eye pupil are achieved with a processing time of 9 ms in tracking mode, at the 40 fps frame rate of the real-time light-field pickup and reconstruction.


Fig. 10 The flow chart of the proposed pupil-tracking algorithm.


4. Experimental results

In the light-field display prototype illustrated in Fig. 11, an eye tracker, a 15.6-inch LCD panel with a resolution of 3840 × 2160, an HFS and an MPUA are employed. The configuration of the display prototype is listed in Table 1. The prototype, with a clear floating focus depth of 10 cm, is set up with a view density of 0.75 mm−1 and 750 view perspectives within the 70° viewing angle. The resolution of the displayed 3D scene reaches 1920 × 1080 pixels.


Fig. 11 (a) The setup of the demonstrated light-field display prototype. (b) The MPUA and its microscopic image.



Table 1. Configuration of experiments

To present the floating 3D perspective with natural light-field depth cues experimentally, a 3D scene consisting of letters A and B is displayed. The designed arrangement of the experimental target scene is illustrated in Fig. 12(a). By using a calibration board placed 0.1 m away from the LCD panel, the focus cues can be clearly confirmed. The corresponding experimental results for the floating focus cues of the 3D letters A and B are shown in Fig. 12(b) and Visualization 1. As shown in Fig. 12(b), letter A is in focus when the camera focuses on the calibration board, whereas the profile of letter B is blurred; when the camera focuses on letter B, the profiles of letter A and the calibration board are both blurred.


Fig. 12 The displayed 3D scene consisting of letter A and letter B. (a) The arrangement of the experimental target scene. (b) The presentation of clear focus cues (see Visualization 1).

Download Full Size | PDF

The luminance distribution of each view perspective at the heights y1 (91 mm) and y2 (−30 mm) is measured with the setup in Fig. 13(a) to evaluate the crosstalk of the display prototype. As depicted in Fig. 13(b), the crosstalk values of the six view perspectives measured at y1 are 6.11%, 3.35%, 2.03%, 4.09%, 4.69% and 2.86%, while those at y2 are 5.74%, 6.10%, 2.03%, 3.02%, 5.22% and 2.50%. All values remain below 7%, the crosstalk level of commercialized glasses-type 3D displays.
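Crosstalk values of this kind are derived from the measured luminance distributions. A minimal sketch of one common multi-view crosstalk definition (the function name and the sample luminances are hypothetical, not the paper's exact measurement procedure):

```python
# One common crosstalk metric for multi-view displays: at the optimal position
# of a view, crosstalk = (luminance leaking in from all other views) /
# (luminance of the wanted view). Sample values are made up for illustration.
def crosstalk_percent(luminances):
    """luminances[0] is the wanted view's luminance at its optimal position;
    the rest are the other views' luminances measured at the same position."""
    wanted, *leakage = luminances
    return 100.0 * sum(leakage) / wanted

print(f"{crosstalk_percent([100.0, 3.0, 1.5, 0.6]):.2f}%")  # → 5.10%
```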

Fig. 13 Illustration of the systematic crosstalk measurement. (a) The crosstalk measurement setup. (b) The luminance and crosstalk distributions for 6 view perspectives with a longitudinal range of 0-9 mm at the heights of y1 and y2 at the viewing plane.

One of the prominent applications of the proposed light-field display is medical analysis and diagnosis. Medical data of a human heart and sternum is presented as a 3D image in Fig. 14, with the corresponding light-field display results captured from multiple angles in Fig. 14 and Visualization 2. Figure 15 and Visualization 3 show the floating light-field display of a 3D city model, in which the structures and the relative occlusions of different buildings can be clearly perceived.

Fig. 14 3D light-field display of human heart and sternum (see Visualization 2).

Fig. 15 3D light-field display of city model (see Visualization 3).

5. Conclusion

In summary, a crosstalk-suppressed dense multi-view light-field display with horizontal parallax based on real-time light-field pickup and reconstruction is demonstrated, which exhibits 3D images with a view density of 0.75 mm−1 and a micro-pitch crosstalk of less than 7% within a viewing angle of 70°. Compared with previous light-field displays, the proposed display takes crosstalk suppression into account to render clear and larger depth cues of complex 3D scenes. The MPUA assisted by the VC-BL suppresses crosstalk between micro-pitch viewing zones and accurately reconstructs the light field of the target 3D scene without distortion. Furthermore, wavefront recomposing with the HFS can be optimized by balancing the resolution of the EIs to improve the perception of the recovered 3D images. Based on the real-time light-field pickup and reconstruction method, an eye tracker is used to produce a large number of view perspectives with correct geometric occlusion, which markedly enlarges the viewing angle of the reconstructed 3D light-field images while maintaining smooth motion parallax. In the demonstrated experiments, high-quality 3D medical and geographic light-field images with natural depth cues are perceived. We believe the demonstrated crosstalk-suppressed dense multi-view light-field display system is promising for medical and navigation applications.

Funding

National Natural Science Foundation of China (NSFC) (61575025, 61705014); Fund of the State Key Laboratory of Information Photonics and Optical Communications (IPOC2017ZZ02); Fundamental Research Funds for the Central Universities (2018PTB-00-01, 2016ZX01).

References

1. N. A. Dodgson, “Autostereoscopic 3D displays,” IEEE CS 38(8), 31–36 (2005).

2. T. Bando, A. Iijima, and S. Yano, “Visual fatigue caused by stereoscopic images and the search for the requirement to prevent them: a review,” Displays 33(2), 76–83 (2012). [CrossRef]  

3. J. Y. Son, C. H. Lee, O. O. Chernyshov, B. R. Lee, and S. K. Kim, “A floating type holographic display,” Opt. Express 21(17), 20441–20451 (2013). [CrossRef]   [PubMed]  

4. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014). [CrossRef]   [PubMed]  

5. F.-C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope,” ACM Trans. Graph. 34(4), 60 (2015). [CrossRef]  

6. E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane, “A three-color, solid-state, three-dimensional display,” Science 273(5279), 1185–1189 (1996). [CrossRef]  

7. X. Liu and H. Li, “The progress of light field 3-D displays,” Inf. Disp. 30(6), 6–14 (2014).

8. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

9. G. Lippmann, “Épreuves réversibles: photographies intégrals,” C. R. Acad. Sci. 146, 446–451 (1908).

10. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]   [PubMed]  

11. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018). [CrossRef]   [PubMed]  

12. J.-H. Lee, J. Park, D. Nam, S. Y. Choi, D.-S. Park, and C. Y. Kim, “Optimal projector configuration design for 300-Mpixel multi-projection 3D display,” Opt. Express 21(22), 26820–26835 (2013). [CrossRef]   [PubMed]  

13. Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011). [CrossRef]   [PubMed]  

14. H. Kakeya, “A Full-HD Super-Multiview Display with Time-Division Multiplexing Parallax Barrier,” SID Symposium Digest of Technical Papers 49, 259–262 (2018). [CrossRef]

15. X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. 34(24), 3803–3805 (2009). [CrossRef]   [PubMed]  

16. C. Yu, J. Yuan, F. C. Fan, C. C. Jiang, S. Choi, X. Sang, C. Lin, and D. Xu, “The modulation function and realizing method of holographic functional screen,” Opt. Express 18(26), 27820–27826 (2010). [CrossRef]   [PubMed]  

17. L. Yang, X. Sang, X. Yu, B. Liu, L. Liu, S. Yang, B. Yan, J. Du, and C. Gao, “Demonstration of a large-size horizontal light-field display based on the LED panel and the micro-pinhole unit array,” Opt. Commun. 414, 140–145 (2018). [CrossRef]  

18. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

19. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]   [PubMed]  

20. H. Kang, S.-D. Roh, I.-S. Baik, H.-J. Jung, W.-N. Jeong, J.-K. Shin, and I.-J. Chung, “A novel polarizer glasses-type 3D displays with a patterned retarder,” SID Symposium Digest of Technical Papers 10, 1–4 (2010). [CrossRef]

21. Y.-C. Chang, C.-Y. Ma, and Y.-P. Huang, “Crosstalk suppression by image processing in 3D display,” SID Symposium Digest of Technical Papers 10, 124–127 (2010). [CrossRef]

Supplementary Material (3)

Visualization 1: The floating 3D perspective of letters A and B is displayed to present clear floating light-field depth cues of 10 cm.
Visualization 2: The 3D image of the medical data of a human heart and sternum displayed with the prototype.
Visualization 3: The 3D image of the geographic data of a city model displayed with the prototype.



Figures (15)

Fig. 1
Fig. 1 The schematic diagram of the proposed light-field display prototype. (a) The system configuration of the display prototype. (b) The schematic of the MPUA assisted with the LCD panel under the illumination of the VC-BL. The simulations of the radiation energy pattern of (c) the VC-BL and of (d) one period of viewing zones of 6 view perspectives produced by the MPUA and the VC-BL at viewing distance.
Fig. 2
Fig. 2 The schematic diagram of the occurrence of high crosstalk between viewing zones of different view perspectives with the use of the MPUA and stray backlight.
Fig. 3
Fig. 3 The schematic diagram of low crosstalk between viewing zones of different view perspectives with the use of the MPUA and the VC-BL.
Fig. 4
Fig. 4 Comparison of modulation transfer function for the optimized aspheric lens combination and the standard lens combination.
Fig. 5
Fig. 5 (a) The simulation results of the recovered view 1 of a 3D image by using the sampled view 1 with different spatial information entropy values, which are measured with PSNR within the viewing zone. (b) The PSNR curve of the recovered view 1 with different average spatial information entropy values.
Fig. 6
Fig. 6 Comparison of presentation results of the recovered view 1 of the 3D image of the city model with and without balancing resolution and the HFS. (a) The original view 1. (b) The result without both. (c) The result with balancing resolution and without the HFS. (d) The result with the HFS and without balancing resolution. (e) The result with both.
Fig. 7
Fig. 7 Ray model for the proposed retina-based light-field pickup method.
Fig. 8
Fig. 8 The principle of reconstruction with the anti-pseudoscopic image mapping method based on backward ray-trace.
Fig. 9
Fig. 9 The process of the real-time light-field pickup and reconstruction.
Fig. 10
Fig. 10 The flow chart of the proposed pupil-tracking algorithm
Fig. 11
Fig. 11 (a) The setup of the demonstrated light-field display prototype (b) The MPUA and its microscopic image.
Fig. 12
Fig. 12 The displayed 3D scene consisting of letter A and letter B. (a) The arrangement of the experimental target scene. (b) The presentation of clear focus cues (see Visualization 1).
Fig. 13
Fig. 13 Illustration of the systematic crosstalk measurement. (a) The crosstalk measurement setup. (b) The luminance and crosstalk distributions for 6 view perspectives with a longitudinal range of 0-9 mm at the heights of y1 and y2 at viewing plane.
Fig. 14
Fig. 14 3D light-field display of human heart and sternum (see Visualization 2).
Fig. 15
Fig. 15 3D light-field display of city model (see Visualization 3).

Tables (1)

Table 1 Configuration of experiments

Equations (7)

$$\gamma = \arctan\!\left(\frac{W_p H_p}{2g}\right)$$

$$H_i = -\sum_{k=0}^{255} P_k \log_2 P_k$$

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{f}{2f - z_p} & 0 \\ 0 & \dfrac{f}{z_p - f} \end{bmatrix} \begin{bmatrix} y \\ x \end{bmatrix} + \begin{bmatrix} \dfrac{f z_p}{2f - z_p} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} d_i \\ 1 \end{bmatrix}$$

$$d_i = \begin{cases} d_1 + \dfrac{W_p}{2}(i-1), & i = 1, 2, 3 \\ d_1 + D_p + \dfrac{W_p}{2}(i-4), & i = 4, 5, 6 \end{cases}$$

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{f}{2f - z_p} & 0 \\ 0 & \dfrac{f}{z_p - f} \end{bmatrix} \begin{bmatrix} y \\ x \end{bmatrix} + \begin{bmatrix} \dfrac{f z_p}{2f - z_p} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} d_i^{I} \\ 1 \end{bmatrix} + \begin{pmatrix} m x & 0 \end{pmatrix}^{T}$$

$$\begin{bmatrix} s \\ t \end{bmatrix} = \begin{bmatrix} \dfrac{g + D}{f} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_i \\ v_i \end{bmatrix} + \begin{bmatrix} \dfrac{g + D}{f} & 0 \\ 0 & \dfrac{p}{R_v} \end{bmatrix} \begin{bmatrix} d_i \\ 1 \end{bmatrix} + \begin{bmatrix} (m-1) \bmod 2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} b \\ 1 \end{bmatrix}$$

$$\begin{cases} k = \left\lfloor \dfrac{s}{p} \right\rfloor + k_0 \\ m = \left\lfloor \dfrac{t}{p} \right\rfloor + m_0 \end{cases}$$
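The spatial information entropy H_i = −Σ_k P_k log₂ P_k is computed from the grey-level histogram of an image region. A minimal Python sketch (the function name and the flat pixel-list input are illustrative choices, not the paper's code; assumes 8-bit grey levels):

```python
import math

# Spatial information entropy over the 256 grey-level probabilities
# of an 8-bit image region: H = -sum_k P_k * log2(P_k).
def spatial_entropy(pixels):
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    # Skip empty bins: lim_{p->0} p*log2(p) = 0.
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# A patch using four grey levels uniformly has entropy log2(4) = 2 bits.
print(spatial_entropy([0, 64, 128, 192] * 16))  # → 2.0
```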