
Time-multiplexed light field display with 120-degree wide viewing angle


Abstract

The light field display can provide vivid and natural 3D performance, which makes it attractive for many applications, such as relics research and exhibition. However, current light field displays are constrained by a limited viewing angle. With three groups of directional backlights and a fast-switching LCD panel, a time-multiplexed light field display with a 120-degree wide viewing angle is demonstrated. Up to 192 views are constructed within the viewing range to ensure correct geometric occlusion and smooth motion parallax. Clear 3D images can be perceived across the entire viewing range. Additionally, a specially designed holographic functional screen is used to recompose the light distribution, and the compound aspheric lens array is optimized to balance the aberrations and improve the 3D display quality. Experimental results verify that the proposed light field display can present realistic 3D images of historical relics within a 120-degree wide viewing angle.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field display is a promising three-dimensional display technology that requires no additional equipment, and it has attracted considerable attention in recent years [1–12]. Unlike the traditional auto-stereoscopic display, which simply directs parallax images to the viewer's eyes to form a 3D depth impression, the light field display optically redistributes the 3D spatial information precisely, providing vivid and natural 3D images similar to how humans observe a real 3D scene. It has many potential applications, especially relics research and exhibition. Most historical relics are constrained by conservation requirements and geography, which makes them inconvenient to exhibit and study. Applying the light field display to showcase realistic reconstructed images of precious relics is an effective way to enable exhibition and scientific research while preserving the originals. In order to supply as much spatial information as possible about the relics and to accommodate multiple viewers at the same time, a wide viewing angle is particularly necessary for light field displays. Multi-projection is a common technique for realizing a wide viewing angle (even 360 degrees) [13–15]; moreover, smooth motion parallax and high definition can be achieved with densely arranged projectors. However, the large number of projectors makes the prototype complex, so considerable calibration effort as well as massive data transmission is required. The scanning light field display can render a seamless 3D scene without a complex mechanism or calibration, usually consisting only of a high-speed projector and a rotating screen [16–19]. But the high-frame-rate operation of the projector usually reduces the color depth or gray levels of the generated 3D images. In addition, it is difficult to realize stable high-speed rotation of a large screen, which limits the scale of the 3D images. In recent studies [20,21], the combination of pyramidal mirrors and the integral imaging (II) technique provides a light field display with a wide viewing angle, full color and a stable structure. However, the clarity of the generated 3D images is not satisfactory due to lens aberrations and the brightness loss caused by the semitransparent mirrors. Clearly, the above light field techniques are not feasible solutions for the virtual reappearance of relics.

In contrast, flat-panel (LCD or LED) based light field displays have the advantages of low cost, full color, a simple and stable structure and low brightness loss. Our group has devoted considerable effort to flat-panel based light field displays in recent years, and significant breakthroughs have been made in super-dense viewpoints [8] and large-scale display [9,10]. However, the number of pixels available to construct spatial information is limited: displaying more spatial information means that fewer pixels are perceived at each viewing position, which results in a loss of resolution. An eye-tracking method was integrated into the light field display to solve this problem [11,12]. All pixels in the panel were devoted to producing the spatial information corresponding to the currently captured position of the viewer's eyes in real time. As a result, the utilization rate of the resolution is improved, and the viewing angle is widened to about 70 degrees. However, such a system cannot provide multiple viewers with the corresponding light field information at the same time, so it is only suitable for a single viewer.

Here, a novel LCD flat-panel based light field display with a 120-degree viewing angle is presented. A clear 3D image can be perceived across the entire viewing range. Up to 192 dense viewpoints are constructed, which ensures correct geometric occlusion and smooth motion parallax [8]. The key of our design is to introduce the idea of time multiplexing into the light field display to resolve the contradiction between viewing angle and resolution, which is realized by the alternate illumination of three groups of directional backlights and the synchronous refreshing of coded images. In addition, a holographic functional screen with a special modulation function is adopted to recompose the light distribution, and a compound aspheric lens array is optimally designed to balance the aberrations and improve the imaging quality. To the best of our knowledge, this is the widest viewing angle realized for a flat-panel based light field display, which makes it well suited to showcasing precious historical relics.

2. Principle

2.1 System configuration

The proposed light field display can construct a 3D image of cultural relics within 120 degrees in space (here an ancient Buddha statue is taken as an example). The configuration of the system is schematically shown in Fig. 1(a). The light source consists of several multi-directional backlight units (MDBUs) with the same structure arranged side by side, which are responsible for the illumination of the system. Each MDBU comprises three linear LED light bars and a linear Fresnel lens (LFL), as illustrated in Fig. 1(b). The three light bars are symmetrically distributed on a circular arc whose center is the midpoint of the LFL. The angle between the midlines of the side and middle LED light bars is set to 40 degrees. The midlines of all light bars point toward the center of the LFL, and the radius of the arc is set to the focal length of the LFL (${f_L}$ in Fig. 1(b)), which ensures that the light ray bundles emitted from each LED are collimated into parallel beams with minor field curvature when passing through the LFL. Furthermore, to prevent light ray bundles from overflowing into adjacent units and causing needless interference, thin light barriers are installed on both sides of each MDBU. An FPGA-based control module (CM) is utilized for synchronization between the MDBUs and the LCD panel. The LCD panel adopted in our prototype is a 32-inch panel with a resolution of 3840×2160 and a refresh rate of 120 Hz. A lenticular lens array (LLA), which controls light in the horizontal direction, is placed in front of the LCD. To avoid the moiré pattern, the LLA is slanted relative to the LCD panel. A holographic functional screen (HFS) is placed at the focal plane of the LLA (${f_A}$ is the focal length of the LLA) and is used to re-modulate the light-field distribution from the LLA.
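As a concreteness check on this geometry, the following minimal Python sketch computes where the three LED bars of one MDBU sit relative to the LFL. It assumes the prototype value ${f_L} = 30\,mm$ given in Section 3; the variable names are ours, not from the paper.

```python
import math

# Geometry sketch of one MDBU (assumed values; f_L = 30 mm is taken
# from the prototype parameters listed in Section 3).
f_L = 30.0                        # focal length of the linear Fresnel lens, mm
bar_angles = (-40.0, 0.0, 40.0)   # midline angles of BL, BC, BR, degrees

# Each LED bar sits on a circular arc of radius f_L centered at the
# midpoint of the LFL, so its light emerges from the LFL collimated.
for name, ang in zip(("BL", "BC", "BR"), bar_angles):
    rad = math.radians(ang)
    x = f_L * math.sin(rad)       # lateral offset from the LFL axis, mm
    z = f_L * math.cos(rad)       # distance behind the LFL, mm
    print(f"{name}: angle={ang:+.0f} deg, offset x={x:+.2f} mm, depth z={z:.2f} mm")
```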

Fig. 1. (a) The schematic diagram of the proposed light-field display. (b) The structure of a MDBU.

Figure 2 shows the simulated wavelength dispersion during the collimation of light rays in a MDBU. The method of inverse ray tracing is used in the simulation. Three sets of collimated beams with directions of +40°, 0° and −40° are used as the incident light, and the spot diagrams at the convergence points (i.e., the positions of the three LEDs) are obtained. Each set of collimated beams contains three typical wavelengths in the visible range (${\lambda _1} = 0.486\,\mu m$, ${\lambda _2} = 0.587\,\mu m$ and ${\lambda _3} = 0.656\,\mu m$). In the three spot diagrams, the blue, green and red dots, which respectively represent the three wavelengths, almost coincide. This indicates that the wavelength dispersion in the MDBU is slight enough to be ignored.
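The paper's simulation relies on inverse ray tracing; as a rough independent sanity check, one can also estimate the longitudinal chromatic focal shift of a thin Fresnel lens from a dispersion model. The sketch below is a back-of-envelope approximation assuming PMMA-like Cauchy coefficients (our assumption, not a parameter from the paper):

```python
# Rough estimate of the chromatic focal shift of a thin Fresnel lens via
# the lensmaker relation f(lambda) ~ f_d * (n_d - 1) / (n(lambda) - 1).
# The Cauchy coefficients below approximate PMMA and are an assumption;
# the paper's actual simulation uses inverse ray tracing instead.
def n_cauchy(lam_um, A=1.4855, B=0.00354):
    return A + B / lam_um**2

f_d = 30.0                            # design focal length at 0.587 um, mm
n_d = n_cauchy(0.587)
for lam in (0.486, 0.587, 0.656):     # F, d, C wavelengths, micrometers
    f = f_d * (n_d - 1.0) / (n_cauchy(lam) - 1.0)
    print(f"lambda={lam} um: n={n_cauchy(lam):.4f}, f={f:.2f} mm, shift={f - f_d:+.3f} mm")
```

Under these assumed coefficients the focal shift stays within about one percent of ${f_L}$, consistent with the paper's conclusion that dispersion in the MDBU is negligible.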

Fig. 2. The simulation result of the wavelength dispersion in the MDBU.

2.2 Time-multiplexed light field reconstruction

As shown in Figs. 3(a), 3(b) and 3(c), during one working cycle of a MDBU, the central, right and left backlight bars (denoted BC, BR and BL, respectively) are lit up in turn to provide illumination for the system. During the gating time of each light bar, a corresponding encoded image is synchronously loaded onto the LCD panel. The three encoded images (Image-C, Image-R and Image-L) are synthesized from parallax images captured from the 40-degree central, 40-degree right and 40-degree left views of the original 3D object, respectively. The divergent light ray bundles from BC, BR and BL are collimated by the LFL before arriving at the LCD. When passing through the sub-pixels of the LCD panel, the collimated light ray bundles are separated into many thin light beams, which carry the encoded color and intensity information. As the field of view (FOV) of the LLA is set to 40°, each lens of the LLA refracts the set of light beams from the covered sub-pixels to a focus point at a convergence angle of 40 degrees (P is one of the focus points, as shown in Fig. 3). As shown in Fig. 3(d), the focal length of the lens (${f_A}$), the pitch of the lens ($p$) and the incident angle of the light beams emitted from BR and BL ($\theta$) satisfy the imaging relationship ${f_A} = p/\tan \theta$, which ensures that the focus point formed by each lens during the gating time of BC coincides with the focus points formed by its right and left adjacent lenses during the gating times of BR and BL, respectively. Thus, the three sets of light beams emitted from a focus point during the three gating periods are aligned without overlapping. In addition, the lens elements are optically optimized to suppress the aberrations, which ensures a high convergence accuracy of the light beams (the specific optimization process is discussed in detail in Section 2.4).
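For illustration, the relationship ${f_A} = p/\tan\theta$ can be evaluated numerically. The sketch below assumes a nominal 708.5 mm active width for a 32-inch 16:9 panel; the resulting numbers are illustrative and need not match the fabricated prototype exactly:

```python
import math

# Numeric check of the imaging relation f_A = p / tan(theta) that aligns
# the focus points of adjacent lenses across the three gating periods.
# The panel width is an assumed nominal value for a 32-inch 16:9 LCD.
panel_width_mm = 708.5
subpixel_mm = panel_width_mm / (3840 * 3)   # 3 sub-pixels (R, G, B) per pixel
p = 5.333 * subpixel_mm                     # lens pitch: 5.333 sub-pixels per lens
theta = math.radians(40.0)                  # incidence angle of the BR/BL beams
f_A = p / math.tan(theta)
print(f"lens pitch p = {p:.3f} mm, required f_A = {f_A:.3f} mm")
```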

Fig. 3. The viewpoints construction process of (a) the central viewing zone, (b) the right viewing zone and (c) the left viewing zone. (d) The construction process of the DC.

Here an HFS is placed on the focal plane, which can be considered a derived 2D display plane with all the focal points of the LLA as its display cells (DCs). With the modulation of the HFS, a display cell emits multiple continuously distributed light beams of various intensities and colors at a divergence angle of 40 degrees in a controlled way (the specific modulation process of the HFS is discussed in detail in Section 2.3). During the gating time of each directional light bar, the re-modulated light beams emitted from all display cells intersect in the 40-degree central, 40-degree right or 40-degree left spatial zone, and thereby dense viewpoints are constructed. The perspective information of the captured parallax images is recovered with these constructed viewpoints; in consequence, the 40-degree central, 40-degree right and 40-degree left light field information of the 3D object is reconstructed under the working states of the corresponding light bars.

The number of constructed viewpoints depends on the lens pitch of the LLA. Here, each lens of the LLA covers 5.333 sub-pixels in the horizontal direction, and the slant angle between the LCD and the LLA is set to 14.04° (arctan(1/4)). Under such a structure, the LCD panel can be divided into many identical image units, each of which consists of 64 sub-pixels, as indicated in Fig. 4. In an image unit, the relative positions between the sub-pixels and the lenses differ from each other, which realizes a 64-viewpoint display [22] under the working state of the corresponding light bar. The mapping that determines the viewpoint number N corresponding to the $k\textrm{th}$ sub-pixel in the $l\textrm{th}$ row of an image unit is given in Eq. (1):

$$N = \left\lceil {\frac{{[(l - 1) - 3(l - 1)\tan \theta + (k - 1)]\bmod w}}{{w/{N_t}}}} \right\rceil$$
where $\theta = {14.04^\circ}$ is the slant angle of the LLA, $w = 5.333$ is the number of covered sub-pixels and ${N_t} = 64$ denotes the total number of viewpoints per viewing zone. The symbol "mod" is the modulo operator, and "$\lceil{} \rceil$" denotes the smallest integer not less than the enclosed variable. The numbers marked in Fig. 4 show the mapping result for each sub-pixel, which corresponds to the order numbers of the parallax images in the process of image synthesis. The dense viewpoints provide viewers with smooth motion parallax and correct geometric occlusion without the convergence–accommodation conflict. Because the horizontal and vertical intervals of the sub-pixels distributed in different image units for constructing the same viewpoint are 16 sub-pixels and 4 rows respectively, the horizontal resolution of the 3D image perceived by the eyes is 3840÷(16÷3) = 720 and the vertical resolution is 2160÷4 = 540 in each viewing zone. Such a resolution ensures a clear 3D image for the viewers.
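A minimal Python sketch of the mapping in Eq. (1), under our reading of the indices (1-based $k$ and $l$; the handling of the $\lceil 0 \rceil = 0$ edge case is our assumption):

```python
import math

# Viewpoint mapping of Eq. (1): which of the 64 viewpoints each
# sub-pixel of a 64-sub-pixel image unit belongs to.
TAN_THETA = 0.25       # tan(14.04 deg) = 1/4, slant angle of the LLA
W = 16.0 / 3.0         # sub-pixels covered per lens (5.333)
N_T = 64               # viewpoints per viewing zone

def viewpoint(k, l):
    """Viewpoint number for the k-th sub-pixel in the l-th row (1-indexed)."""
    phase = ((l - 1) - 3 * (l - 1) * TAN_THETA + (k - 1)) % W
    n = math.ceil(phase / (W / N_T))
    return n if n > 0 else N_T   # map ceil(0) = 0 onto viewpoint 64 (our assumption)

# Print the mapping for the first few rows of one image unit, cf. Fig. 4.
for l in range(1, 5):
    print([viewpoint(k, l) for k in range(1, 17)])
```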

Fig. 4. The sub-pixels’ arrangement in an image unit.

By switching the working states of the light bars of the MDBUs at high speed and synchronously refreshing the images loaded on the LCD, the light distributions from a display cell separately provided by the three light bars are spliced together owing to the persistence of vision. As a result, the separately reconstructed central, right and left light field information of the 3D object is also spliced into a continuous whole visually. Thus, the viewing angle is tripled to 120 degrees, and the total number of viewpoints is increased to 192. Moreover, at each moment all pixels on the panel contribute to one viewing zone rather than to the whole 120-degree range, so the resolution of the perceived 3D image is not affected by the increase of the viewing angle and remains at 720×540. This indicates that the contradiction between viewing angle and resolution is resolved and a clear 3D image can be perceived within the whole viewing range.

To implement the above time-multiplexing scheme, an FPGA-based control module is utilized to accurately determine the gating time of each LED bar and the synchronous refreshing of the images on the LCD. The time sequence diagram of the CM is depicted in Fig. 5. Triggered by the synchronizing signal, the LED bars are lit up for $T = 4\,ms$ in turn after the LCD finishes refreshing in each time sequence cycle. ${T_D} = 8.33\,ms$ is the interval between triggering signals, which is exactly one cycle of the 120 Hz display, and ${T_W} = 4\,ms$ is the waiting time before the corresponding LED bar is lit.
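The gating schedule can be summarized programmatically. The following sketch reproduces the sequence of Fig. 5 under our reading of the timing (trigger, wait ${T_W}$ for the LCD refresh, then gate the bar for $T$); it is illustrative pseudotiming, not the FPGA implementation:

```python
# Sketch of one CM time-sequence cycle (Fig. 5), assuming the stated
# timings: T_D = 8.33 ms per 120 Hz frame, T_W = 4 ms wait, T = 4 ms gate.
T_D, T_W, T = 8.33, 4.0, 4.0   # milliseconds

def sequence(cycles=1):
    """Yield (time_ms, event) pairs for the bar/image gating schedule."""
    t = 0.0
    for _ in range(cycles):
        for bar, image in (("BC", "Image-C"), ("BR", "Image-R"), ("BL", "Image-L")):
            yield t, f"trigger: LCD refreshes {image}"
            yield t + T_W, f"{bar} on for {T} ms"
            t += T_D

for time_ms, event in sequence():
    print(f"{time_ms:6.2f} ms  {event}")
```

One full pass over the three bars takes 3 × 8.33 ≈ 25 ms, which implies that the complete 120-degree light field is updated at 40 Hz.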

Fig. 5. Time sequence diagram of the CM.

The process of light field pickup for the original 3D object corresponds to the process of viewpoint construction, as shown in Fig. 6. 192 off-axis cameras are radially arranged with a step of 120°÷192 = 0.625° to capture the light field information. Such dense pickup of the original light field of the 3D object can usually be realized by guideway camera scanning or by virtual pickup of a numerical computer 3D model [23]. According to the sub-pixel arrangement of the image units illustrated in Fig. 4, the captured left 64 parallax images, central 64 parallax images and right 64 parallax images are synthesized as Image-L, Image-C and Image-R respectively, which are loaded on the LCD during the corresponding gating times of the directional backlights.
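As an illustration of the synthesis step, the sketch below interleaves 64 stand-in parallax images into one coded image using the mapping of Eq. (1). Array sizes are kept tiny and the image data is random, purely for demonstration:

```python
import numpy as np

# Interleave 64 parallax views into one coded image per Eq. (1)/Fig. 4.
# Dimensions and data are stand-ins; the real panel has 2160 rows and
# 3840*3 sub-pixel columns.
ROWS, SUBCOLS = 8, 48
views = np.random.rand(64, ROWS, SUBCOLS).astype(np.float32)

TAN_THETA, W_LENS, N_T = 0.25, 16.0 / 3.0, 64
l = np.arange(1, ROWS + 1)[:, None]        # 1-based row index
k = np.arange(1, SUBCOLS + 1)[None, :]     # 1-based sub-pixel column index
phase = ((l - 1) - 3 * (l - 1) * TAN_THETA + (k - 1)) % W_LENS
n = np.ceil(phase / (W_LENS / N_T)).astype(int)
n[n == 0] = N_T                            # edge-case handling (our assumption)

rows = np.arange(ROWS)[:, None]
cols = np.arange(SUBCOLS)[None, :]
coded = views[n - 1, rows, cols]           # pick each sub-pixel from its view
print(coded.shape, n[0, :12])
```

The same mapping applied to the left, central and right sets of 64 views yields Image-L, Image-C and Image-R.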

Fig. 6. The process of light field pickup and the formation of synthesized images.

2.3 The modulation of holographic functional screen

In order to provide viewers with a realistic and natural three-dimensional vision, the reconstructed light field information should be as close as possible to the original light field distribution of the 3D object. As mentioned above, the light field distribution is discretely picked up by 192 cameras with different viewing angles in the horizontal direction. Based on the intersection of the thin light beams directionally refracted by the LLA, the picked-up light distribution is conversely reproduced with 192 constructed viewpoints. Obviously, the reproduced light field is also discrete, which is unable to provide a natural 3D performance. Here, to recover the 3D information correctly, the HFS performs the necessary optical transformation during the process of viewpoint construction: it has a specific spreading function that distributes the thin light beams in a specifically arranged geometry. As shown in Fig. 7(a), the thin light beams incident on a DC are provided by the three sets of directional backlights, generated separately during the gating time of each of the three light bars. The HFS expands each light beam over an appropriate spatial angle ${\omega _n}$ in the horizontal direction so that the light field distributions from each DC are combined into a continuous distribution, which approximates the original light field distribution of the displayed 3D object. ${\Omega _C}$, ${\Omega _L}$ and ${\Omega _R}$ respectively represent the combined visual angles during the gating times of the directional backlights. Under the time-multiplexing scheme, they are integrated into a larger continuous visual angle $\Omega$, which is given in Eq. (2):

$$\Omega = {\Omega _C} + {\Omega _L} + {\Omega _R} = \sum\limits_{n = 1}^N {\omega _n^C} + \sum\limits_{n = 1}^N {\omega _n^L} + \sum\limits_{n = 1}^N {\omega _n^R} $$
All re-modulated light beams from all DCs on the HFS contribute to the viewpoint construction, and a continuous visual performance with smooth light field transitions across horizontal directions is finally obtained. In addition, the HFS realizes a large vertical diffusion angle of $\theta = {150^\circ}$, which guarantees an adequate visual range in the vertical direction. Figure 7(b) shows the light intensity distribution of the viewpoints before and after the modulation. The transition of the curves becomes smoother and more continuous after modulation, which is closer to the target light field distribution. It can also be seen that some crosstalk occurs between adjacent viewpoints after the modulation. However, since the viewpoints in the system are dense enough (192÷120° = 1.6 viewpoints per degree), the contents of adjacent viewpoints are very close, which means that observers will not perceive aliasing images. Figure 7(c) shows the observed 3D image at the same position before and after the modulation. The contrast clearly illustrates that the images perceived by the human eyes become more uniform and natural after modulation.
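The effect in Fig. 7(b) can be mimicked numerically: discrete viewpoint peaks spaced 0.625° apart are broadened by a spreading kernel and summed. The Gaussian kernel width below is an assumed stand-in for the actual speckle-defined spread function of the HFS:

```python
import numpy as np

# Discrete viewpoint beams before the HFS: 192 peaks, 0.625 deg apart.
angles = np.arange(-60.0, 60.0, 0.05)             # angular grid, degrees
view_dirs = -60.0 + 0.625 * (np.arange(192) + 0.5)
before = np.zeros_like(angles)
for d in view_dirs:
    before[np.argmin(np.abs(angles - d))] = 1.0

# HFS modulation modeled as convolution with an assumed Gaussian spread.
sigma = 0.35                                      # assumed spread width, degrees
kx = np.arange(-2.0, 2.0001, 0.05)
kernel = np.exp(-kx**2 / (2.0 * sigma**2))
after = np.convolve(before, kernel, mode="same")

# Peak-to-valley variation in the central zone, before vs after modulation.
mid = (angles > -50) & (angles < 50)
for name, y in (("before", before[mid]), ("after", after[mid])):
    print(f"{name}: min/max = {y.min():.3f}/{y.max():.3f}")
```

A wider kernel smooths the distribution further but increases the crosstalk between adjacent viewpoints, which mirrors the trade-off discussed above.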

Fig. 7. Illustration of the modulation function of HFS. (a) Schematic diagram of the HFS modulation of thin light beams under the time-multiplexing scheme. (b) Comparison of the light intensity distribution before and after being modulated by HFS. (c) Comparison of the observed 3D image in the same position without and with HFS. (d) The scanning electron microscope (SEM) image of the HFS surface.

The directional speckle method [24] is used to realize the HFS. In order to ensure that the diffusion angles of light rays incident from different directions (in the range of −60° to +60°) are as close as possible, several sets of collimated beams of different directions (0°, ±30°, ±60°) are adopted as the reference beams for multiple exposures to record the speckles on a holographic plate. With the ultraviolet curing and splicing method, the repeated speckle patterns constitute the HFS. The specific diffusion angle is determined by the shape and size of the speckles. A scanning electron microscope (SEM) image of the HFS surface is presented in Fig. 7(d).

2.4 The optical optimization

The aberrations of the LLA reduce the convergence accuracy of the light beams in the generation of the DCs, which further affects the splicing accuracy of the viewing zones and degrades the 3D imaging quality. Here, to suppress the aberrations, the LLA is designed as a compound structure consisting of two aspheric lenses. The aspheric model uses the base radius of curvature and the conic coefficient; its surface sag is given in Eq. (3):

$$z = \frac{{c{r^2}}}{{1 + \sqrt {1 - (1 + k){c^2}{r^2}} }} + {a_2}{r^2} + {a_4}{r^4} + {a_6}{r^6} + \ldots $$
where c is the vertex curvature, r is the radial coordinate, k is the conic coefficient and ${a_2}$, ${a_4}$, ${a_6}$ are the aspheric coefficients. The 0° and ±40° directionally collimated rays are used as the incident rays of the optical system, and the damped least-squares method is used to optimize the primary aberrations and other higher-order aberrations. The optimized structure and the corresponding parameters are shown in Fig. 8(a). As shown in Fig. 8(b), compared with a standard lens of the same diameter and focal length, the modulation transfer function of the compound lens is improved significantly, which shows that the aberrations of the compound aspheric lens are well suppressed within the designed horizontal viewing angle of 120°.
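For reference, Eq. (3) translates directly into code. The sketch below evaluates the even-asphere sag; the curvature, conic and polynomial coefficients shown are placeholders, the optimized values being those of Fig. 8(a):

```python
import numpy as np

# Even-asphere surface sag of Eq. (3):
#   z(r) = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2)) + a2 r^2 + a4 r^4 + a6 r^6
def asphere_sag(r, c, k, a2=0.0, a4=0.0, a6=0.0):
    conic = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    return conic + a2 * r**2 + a4 * r**4 + a6 * r**6

# Placeholder coefficients (the optimized values are given in Fig. 8(a)).
r = np.linspace(0.0, 0.15, 7)   # radial coordinate across a lenslet, mm
print(asphere_sag(r, c=1.0 / 0.5, k=-1.2, a4=1e-2))
```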

Fig. 8. (a) The optimized structure and corresponding parameters for the compound aspheric lens. (b) Comparison of modulation transfer function for the optimized compound aspheric lens and the standard lens.

Figure 9 shows the reproduced images of the two optical structures with the same 3D content. The 3D image produced with the standard lens array is blurred. By introducing the compound aspheric lens array, the image quality is significantly improved and the details of the Buddha statue’s face are much clearer.

Fig. 9. (a) The reproduced 3D image with the standard lens array. (b) The reproduced 3D image with the optimized compound aspheric lens array.

3. Experimental results

A 32-inch prototype based on the proposed time-multiplexed light field display is demonstrated. 23 MDBUs provide the illumination for the right, central and left 40-degree viewing zones. For a compact form factor, the thickness of each MDBU is designed to be 4 mm, where the thickness of the LFL is 2 mm and the focal length ${f_L}$ is 30 mm. The LLA is attached to the LCD without a gap, and its focal length ${f_A}$ is 0.44 mm. The detailed parameters are listed in Table 1.

Table 1. Configuration of experiments

The 3D display of the reconstructed image (the ancient Buddha statue) observed from different directions is shown in Fig. 10, and continuous motion parallax can be seen in Visualization 1. The results show that a natural and realistic 3D image can be perceived within a wide viewing angle of 120°. The details of the 3D image and the relative 3D positions of its different parts are clearly visible, and a displayed depth of more than 30 cm can be perceived. Figure 11 and Visualization 2 show a 3D display of a group of Tang Dynasty tri-colored glazed potteries. The colors of the potteries are faithfully reproduced, and the relative occlusions between the potteries can be clearly perceived. Besides, interactive operations such as rotation, zoom and translation can be integrated into the light field display for a comprehensive showcase of the 3D image. Figure 12 and Visualization 3 show an interactive dynamic 3D display of an ancient bronze drinking vessel, in which rotation and zoom are controlled through mouse drag and wheel, respectively.

Fig. 10. 3D light field display result of the ancient Buddha statue (see Visualization 1).

Fig. 11. 3D light field display of a group of Tri-colored glazed potteries in the Tang Dynasty (see Visualization 2).

Fig. 12. Interactive dynamic 3D display of an ancient Bronze Drinking Vessel (see Visualization 3).

4. Conclusion

In summary, a novel time-multiplexed light-field display system with a 120-degree wide viewing angle is demonstrated. A clear 3D image can be perceived across the entire viewing range. 192 dense viewpoints are constructed in the viewing range, which ensures smooth parallax and correct geometric occlusion. The time multiplexing is implemented with the alternate illumination of three groups of directional backlights and the synchronous refreshing of coded images. The optimally designed holographic functional screen is used to modulate the light beams, which helps recover the light field information accurately. To further improve the image quality, a compound lenticular lens array is designed and fabricated to suppress the aberrations. The combination of a wide viewing angle, clear 3D performance and smooth motion parallax makes the proposed light field display very promising for relics research and exhibition, as well as other industrial applications.

Funding

National Key Research and Development Program (2017YFB1002900); National Aerospace Science Foundation of China (61575025); State Key Laboratory of Information Photonics and Optical Communications; Fundamental Research Funds for the Central Universities (2019PTB-018); Fundamental Research Funds for the Central Universities (2019RC13).

Disclosures

The authors declare no conflicts of interest. This work is original and has not been published elsewhere.

References

1. N. Balram and I. Tosic, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–13 (2016). [CrossRef]  

2. X. Liu and H. Li, “The progress of light field 3-D displays,” Inf. Disp. 30(6), 6–14 (2014). [CrossRef]  

3. W. Song, Q. Cheng, P. Surman, Y. Liu, Y. Zheng, Z. Liu, and Y. Wang, “Design of a light-field near-eye display using random pinholes,” Opt. Express 27(17), 23763–23774 (2019). [CrossRef]  

4. B. J. Jackin, L. Jorissen, R. Oi, J. Y. Wu, K. Wakunami, M. Okui, Y. Ichihashi, P. Bekaert, Y. P. Huang, and K. Yamamoto, “Digitally designed holographic optical element for light field displays,” Opt. Lett. 43(15), 3738–3741 (2018). [CrossRef]  

5. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

6. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26(14), 18292–18301 (2018). [CrossRef]  

7. D. Chen, X. Sang, X. Yu, X. Zeng, S. Xie, and N. Guo, “Performance improvement of compressive light field display with the viewing-position-dependent weight distribution,” Opt. Express 24(26), 29781–29793 (2016). [CrossRef]  

8. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018). [CrossRef]  

9. S. Yang, X. Sang, X. Yu, X. Gao, L. Liu, B. Liu, and L. Yang, “162-inch 3D light field display based on aspheric lens array and holographic functional screen,” Opt. Express 26(25), 33013–33021 (2018). [CrossRef]  

10. L. Yang, X. Sang, X. Yu, B. Liu, L. Liu, S. Yang, B. Yan, J. Du, and C. Gao, “Demonstration of a large-size horizontal light-field display based on the LED panel and the micro-pinhole unit array,” Opt. Commun. 414, 140–145 (2018). [CrossRef]  

11. Y. Zhu, X. Sang, X. Yu, P. Wang, S. Xing, D. Chen, B. Yan, K. Wang, and C. Yu, “Wide field of view tabletop light field display based on piece-wise tracking and off-axis pickup,” Opt. Commun. 402, 41–46 (2017). [CrossRef]  

12. L. Yang, X. Sang, X. Yu, B. Yang, B. Yan, K. Wang, and C. Yu, “A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction,” Opt. Express 26(26), 34412–34427 (2018). [CrossRef]  

13. Q. Zhong, Y. Peng, H. Li, C. Su, W. Shen, and X. Liu, “Multiview and light-field reconstruction algorithms for 360° multiple-projector-type 3D display,” Appl. Opt. 52(19), 4419–4425 (2013). [CrossRef]  

14. S. Yoshida, “fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays,” Opt. Express 24(12), 13194–13203 (2016). [CrossRef]  

15. L. Ni, Z. Li, H. Li, and X. Liu, “360-degree large-scale multiprojection light-field 3D display system,” Appl. Opt. 57(8), 1817–1823 (2018). [CrossRef]  

16. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Trans. Graph. 26(3), 40 (2007). [CrossRef]  

17. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20(8), 8848–8861 (2012). [CrossRef]  

18. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, “A 360-degree floating 3D display based on light field regeneration,” Opt. Express 21(9), 11237–11247 (2013). [CrossRef]  

19. C. Su, X. Zhou, H. Li, Q. Yang, Z. Wang, and X. Liu, “360 deg full-parallax light-field display using panoramic camera,” Appl. Opt. 55(17), 4729–4735 (2016). [CrossRef]  

20. D. Zhao, B. Su, G. Chen, and H. Liao, “360 degree viewable floating autostereoscopic display using integral photography and multiple semitransparent mirrors,” Opt. Express 23(8), 9812–9823 (2015). [CrossRef]  

21. G. Chen, C. Ma, D. Zhao, Z. Fan, and H. Liao, “360 degree crosstalk-free viewable 3D display based on multiplexed light field: theory and experiments,” J. Disp. Technol. 12(11), 1309–1318 (2016). [CrossRef]  

22. X. Yu, X. Sang, D. Chen, P. Wang, X. Gao, T. Zhao, B. Yan, C. Yu, D. Xu, and W. Dou, “Autostereoscopic three-dimensional display with high dense views and the narrow structure pitch,” Chin. Opt. Lett. 12(6), 060008 (2014). [CrossRef]  

23. X. Yu, X. Sang, S. Xing, T. Zhao, D. Chen, Y. Cai, B. Yan, K. Wang, C. Yu, and W. Dou, “Natural three-dimensional display with smooth motion parallax using active partially pixelated masks,” Opt. Commun. 313, 146–151 (2014). [CrossRef]  

24. C. Yu, J. Yuan, F. C. Fan, C. C. Jiang, S. Choi, X. Sang, C. Lin, and D. Xu, “The modulation function and realizing method of holographic functional screen,” Opt. Express 18(26), 27820–27826 (2010). [CrossRef]  

Supplementary Material (3)

Visualization 1: This video shows the 3D display result of the ancient Buddha statue. A natural and realistic 3D image can be perceived in a wide viewing angle of 120°, and the motion parallax is continuous and smooth.
Visualization 2: This video shows a 3D display of a group of Tang Dynasty tri-colored glazed potteries. The colors of the potteries are faithfully reproduced and the relative occlusions between the potteries can be clearly perceived.
Visualization 3: This video shows an interactive dynamic 3D display of an ancient bronze drinking vessel; the rotation and zoom operations are controlled through mouse drag and wheel, respectively.
