
Vertically spliced tabletop light field cave display with extended depth content and separately optimized compound lens array

Open Access

Abstract

Tabletop three-dimensional (3D) light field display is a compelling display technology that can simultaneously provide stereoscopic vision to multiple viewers surrounding the lateral side of the device. However, if a flat-panel light field display device is simply placed horizontally and displays directly above itself, the visual frustum is tilted and the 3D content outside the display panel becomes invisible; the large oblique viewing angle also leads to serious aberrations. In this paper, we demonstrate what we believe to be a new vertically spliced light field cave display system with extended depth content. Separate optimization of the two compound lens arrays attenuates the aberrations at their different oblique viewing angles, and a local heating fitting method ensures the accuracy of the fabrication process. The image coding method and the correction of the multiple viewpoints realize the correct construction of spliced voxels. In the experiment, a high-definition, precisely spliced 3D city terrain scene is demonstrated on the prototype with a correct oblique perspective over a 100-degree horizontal viewing range. We envision that our research will provide inspiration for future immersive large-scale glasses-free virtual reality display technologies.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Light field display is a three-dimensional (3D) display method that can restore the distribution of the direction and intensity of light rays emitted from every point on a real object's surface, which provides the viewer with stereoscopic vision [1–5]. Tabletop 3D light field display is a unique 3D display form in which viewers can surround the display device and observe three-dimensional scenes, such as hill terrain and urban layouts, from an oblique side [6–10]. In science fiction movies, an ideal tabletop 3D light field display generates 3D images with a large area and depth right above the 2D display panel, without the assistance of a display medium. It images directly in the air and provides an immersive multi-angle surround viewing experience for multiple viewers. To achieve a perfect tabletop light field display, the following requirements need to be met: real-time dynamic display content, medium-free imaging and interaction in the air, a large vertical depth of field, correct lateral viewing perspective for multiple viewers, and no distortion at large vertical viewing angles [11–13]. At present, there are several ways to realize tabletop 3D display, such as projection display, volumetric display, and holographic display. Projection-based tabletop light field displays can construct true-color, dynamic 3D scenes by controlling one or multiple high-refresh-rate projectors and diffusion screens to produce stereoscopic visions [14–17]. Volumetric display produces real voxels in a spatial medium to offer 3D images to multiple viewers in different positions [18–22]. It does not cause visual fatigue for viewers because it avoids the vergence-accommodation conflict of the human eyes.
Holographic technology records the phase and amplitude of a real object on photosensitive material with a coherent light source, then irradiates the recording medium with the same reference light to diffract the three-dimensional information of the object light wave and realize the three-dimensional reconstruction of the object [23–27]. Recently, the optical tweezers technology proposed by Arthur Ashkin (a 2018 Nobel Prize in Physics laureate) and other scholars, as well as acoustic tweezers technology that can control particle luminescence using ultrasonic beams, have both achieved great development [28–32]. These kinds of display technologies, which capture luminescent particles in space to generate images, are very promising routes toward a true three-dimensional holographic image, but there are still obstacles to applying them to the reconstruction of large-area, dynamic, real-time continuous 3D images.

Flat-panel light field displays based on liquid crystal panels or LED display panels, such as integral imaging, can realize 3D display with a large display area and stable assembly [33–37]. However, as shown in Fig. 1, when a tabletop display device is viewed from the side, the visual frustum tilts along with the vertical or horizontal viewing angle. This leads to two phenomena. First, the FOV shrinks and part of the original light field content with a large display depth is lost. In the complex urban terrain shown in Fig. 1, the red part of the high building represents the invisible area at the distal edge of the visual frustum: the farther the light field content is from the viewer, the smaller the maximum display depth becomes. Second, in a tabletop light field display system based on a lenticular lens array, the light control unit has no periodic structure in the vertical direction, unlike in the horizontal direction. The light emitted by pixels on the LCD cannot be focused vertically and accumulates an optical path difference. As the vertical viewing angle increases, the optical path difference grows and results in more serious aberration, which seriously affects the viewing experience.
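The geometric origin of the invisible region can be illustrated with a minimal 2D side-view model (our own toy sketch; the dimensions are illustrative and not taken from the paper): a voxel floating above the horizontal panel is visible only if the viewer-to-voxel ray, extended downward, still lands on the panel, so that real pixels behind it can emit that ray.

```python
# Toy 2D side-view visibility model (lengths in cm, illustrative values only).
# A voxel above a horizontal light-field panel is visible only when the
# viewer->voxel line, extended to the panel plane, hits the panel surface.

def is_visible(viewer, voxel, panel_len):
    """viewer=(x, z), voxel=(x, h); the panel lies on z = 0 over [0, panel_len]."""
    vx, vz = viewer
    ox, oh = voxel
    if oh >= vz:          # voxel at or above eye level: the ray never reaches the panel
        return False
    # Intersection of the viewer->voxel ray with the panel plane z = 0.
    x_hit = vx + (ox - vx) * vz / (vz - oh)
    return 0.0 <= x_hit <= panel_len

viewer = (-50.0, 80.0)                            # eye beside the table, 80 cm up
print(is_visible(viewer, (20.0, 25.0), 130.0))    # near edge, 25 cm tall -> True
print(is_visible(viewer, (125.0, 25.0), 130.0))   # distal edge, same height -> False
```

The second call fails because the required emitting pixel would lie beyond the distal edge of the panel, which is exactly the red "invisible" region of Fig. 1; a vertical panel placed behind that edge can supply the missing rays.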

Fig. 1. Schematic of the invisible part with limited DOF caused by the incline of the visual frustum.

Our previous research has explored the aforementioned issues; many efforts have been devoted to flat-panel-based light field displays in recent years, and significant breakthroughs have been made in display depth extension and optical optimization [38–42]. However, simply extending the display depth of a front-view flat-panel display still has significant limitations when applied to tabletop light field displays whose display direction is directly above but whose viewing angle is oblique. In this paper, inspired by immersive virtual reality (VR) display devices built from surrounding spliced screens, we propose a light field cave display with an extended depth of field and separately optimized compound lens arrays. By placing an additional light field display panel vertically behind the original horizontal tabletop display device, a light field cave is constructed that supplements the display range through viewpoint splicing. The depth of the light field content is extended to compensate for the content loss caused by the tilt of the visual frustum. The expansion of the display area not only brings an immersive viewing experience to viewers but also opens more possibilities for innovative application scenarios of 3D light field display. According to the viewing angles and viewing distances of the two display panels, two different light field control units are designed, and the diffuse spots over the corresponding horizontal and vertical viewing angles and viewing distances are optimized separately, reducing the aberration within the designed viewing range. During a local heating and laser calibration process, the compound lenses and apertures are accurately aligned and bonded. In addition, because of assembly and processing errors at the seam position and the structural differences between the two light control units, the viewpoint distributions of the two screens are uneven and cannot be aligned exactly.
A precise viewpoint correction step is therefore implemented to eliminate content misalignment in the spliced parts. In the optical experiments, after the image coding process, the elemental images are loaded on the LCD panels, and an accurate, high-definition spliced light field cave image is well reconstructed within the 100° horizontal viewing angle.

2. Design and methods

2.1 Design and fabrication of the vertically spliced light field cave display system

The proposed tabletop light field cave 3D display system for producing spliced 3D images is schematically shown in Fig. 2(a). Both display panels are 65-inch 8K liquid crystal display (LCD) panels. The light control components of the light field display system are two vertically spliced compound lenticular lens arrays (LLAs) with aberration optimized over the designed viewing range. As shown in Fig. 2(b) and Fig. 2(c), every light control unit (a single compound lens unit) covers a certain number of pixels. By loading the encoded elemental images on the two LCD panels, voxels are generated within the display range of the light field cave. The light emitted by these pixels is modulated by the light control unit to generate a viewpoint at the viewing position, thereby providing motion parallax for the viewers. The sizes of the diffuse spots at different viewing angles are all controlled within the size of a single pixel, which ensures that the displayed 3D content does not suffer from severe aberrations at large viewing angles. Based on the designed placement height of the light field display device and the most comfortable viewing position for viewers, the horizontal viewing angle of the vertically placed display screen is -50° to 50°, and its best vertical viewing angle is 35° to 55°. The horizontal viewing angle of the horizontally placed screen is -50° to 50°, and its best vertical viewing angle is 15° to 35°. Viewers within these viewing ranges observe vertically spliced tabletop light field content with an extended depth of field and corrected aberrations.
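As a small illustration of the quoted viewing zones, the sketch below (a hypothetical helper of our own, assuming the angle ranges stated for the two screens above) tests which panel's optimized zone contains a given viewer direction.

```python
# Hedged sketch: the designed viewing zones quoted in the text, expressed
# as a simple membership test. Angles are in degrees; the zone values come
# from the paper, the helper itself is hypothetical.

ZONES = {
    "vertical_panel":   {"h": (-50, 50), "v": (35, 55)},
    "horizontal_panel": {"h": (-50, 50), "v": (15, 35)},
}

def panels_seen(theta_h, theta_v):
    """Return which panels are within their optimized range for a viewer at
    horizontal angle theta_h and vertical (elevation) angle theta_v."""
    return [name for name, z in ZONES.items()
            if z["h"][0] <= theta_h <= z["h"][1]
            and z["v"][0] <= theta_v <= z["v"][1]]

print(panels_seen(25, 40))   # -> ['vertical_panel']
print(panels_seen(-30, 20))  # -> ['horizontal_panel']
```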

Fig. 2. Schematic of the proposed tabletop light field 3D display based on vertically spliced light field cave. (a) Structure of the display system. (b) Principle of the modulation of the vertical compound lens array.

The core of achieving the proposed vertically spliced tabletop light field display system is to design a light control unit that meets the requirements of horizontal parallax generation and vertical viewing angles. The proposed light control unit is composed of two different sets of compound LLAs that, after optical optimization for their specific viewing ranges, each deliver their best display quality within their designed horizontal viewing angles and vertical viewing ranges. Figure 3(a) shows the viewing ranges of the two vertically spliced compound lenses after aberration optimization. The horizontal viewing angle (θH) of the vertically placed compound lens array is -50° to 50°, and its best vertical viewing angle (θV) is 15° to 35°. The horizontal viewing angle of the horizontally placed compound lens array is -50° to 50°, and its best vertical viewing angle is 35° to 55°. Figure 3(d) and Fig. 3(e) show the original structure and spot diagram of a standard lenticular lens array used for comparison, which cannot effectively control the size of the diffuse spot over such large vertical and horizontal display angles. Figure 3(b) shows the optimized structure and corresponding parameters of the lens unit: two identical, symmetrically placed plano-convex lenses effectively eliminate aberrations and suppress the size of the diffuse spots, and the aperture precisely controls the angle and position of the emitted light. Figure 3(c) shows the spot diagrams of the two designed compound lenses. The root mean square (RMS) spot radii of all sampled fields (θH = 0°, 25°, 50°; θV = 15°, 25°, 35°, 45°, 55°) stay within the 120 × 240 µm footprint of a single pixel. Within the preset viewing range, the diffuse spots at the various horizontal and vertical viewing angles are well controlled, and the influence of aberration on the display effect is very small. Compared with a traditional lenticular lens array with its simple structure, the optical performance is significantly enhanced and the aberration is well controlled.
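The RMS spot radius used as the aberration metric above is simply the root-mean-square distance of the ray intersection points from their centroid on the image plane. The sketch below shows that computation on a hypothetical spot (the coordinates are made up for illustration, not the authors' ray-trace data).

```python
import math

# RMS spot radius: root-mean-square distance of ray intersection points
# from their centroid on the image plane. Coordinates in micrometres;
# the sample spot is hypothetical, not from the paper's ray trace.

def rms_spot_radius(points):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in points) / len(points))

spot = [(0, 0), (40, 20), (-40, -20), (20, -40), (-20, 40)]
r = rms_spot_radius(spot)
print(round(r, 1))   # -> 40.0 (um)
print(r < 120)       # inside the 120 um pixel half-width -> True
```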

Fig. 3. Designed compound lens array. (a) Schematic of the optimized viewing range of the compound lens. (b) Schematic of the compound lens array. (c) Spot diagram of the compound lens unit. (d) Schematic of a traditional lenticular lens array used as comparison. (e) Spot diagram of the traditional lenticular lens array used as comparison.

To ensure that the designed compound lens array can be precisely fabricated and the light control method correctly implemented, a fabrication process that aligns and bonds the compound lens array by local heating and laser calibration is adopted, in a series of three steps, as shown in Fig. 4. Firstly, each part of the compound lens array is roughly aligned and placed between the transparent adsorption platform and the heating adsorption platform, as shown in Fig. 4(a). Secondly, a scanning camera scans every detection point of the heating platform to check whether each double-layer compound lens unit is correctly aligned with its aperture diaphragm. Finally, any misaligned compound lens detected is heated by the local heater below the adsorption platform so that it expands into complete alignment, as shown in Fig. 4(b).
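The scan-and-heat procedure amounts to a feedback loop over the lens units. The sketch below is a hypothetical control-loop skeleton (the scanner, tolerance value, and heater interface are stand-ins of our own, since the paper specifies the three steps but not an API).

```python
# Sketch of the scan-and-heat alignment loop described above. The offset
# measurement, tolerance, and heater hooks are hypothetical stand-ins.

ALIGN_TOL_UM = 5.0   # assumed tolerance between lens axis and aperture centre

def align_lens_array(units, measure_offset, heat_unit, max_passes=10):
    """Repeatedly scan every lens unit; locally heat any unit whose
    lens-to-aperture offset exceeds the tolerance until all are aligned."""
    for _ in range(max_passes):
        misaligned = [u for u in units if measure_offset(u) > ALIGN_TOL_UM]
        if not misaligned:
            return True          # every compound lens sits over its aperture
        for u in misaligned:
            heat_unit(u)         # local heating lets the unit expand into place
    return False

# Toy stand-ins: each heating pass halves a unit's residual offset (in um).
offsets = {0: 12.0, 1: 2.0, 2: 30.0}
ok = align_lens_array(list(offsets),
                      measure_offset=lambda u: offsets[u],
                      heat_unit=lambda u: offsets.__setitem__(u, offsets[u] / 2))
print(ok)  # -> True
```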

Fig. 4. Fabrication and assembly process of the compound lens array. (a) Malposed compound lens before local heating process. (b) Aligned compound lens after local heating process.

2.2 Image coding method of the vertically spliced light field cave

In the vertically spliced light field cave display system, both the horizontally and vertically placed screens generate dense viewpoint distributions (96 viewpoints each, 192 viewpoints in total). Because the two light field display screens are spliced vertically at a right angle, the horizontal viewing angles of the two screens are the same for a viewer at any position. Figure 5(a) shows a schematic diagram of four example spliced viewpoints of the light field cave; the two display screens need to have the same viewpoint position distribution to ensure that the viewer sees correctly spliced 3D content at any viewing position within the viewing range. Figure 5(b) shows the light emitted from pixels on the two LCD screens passing through the light control units to form voxels, which then converge and splice in space to form viewpoints. The set of rays emitted from each screen to form voxels and generate the spliced viewpoints V3D_scene is given in Eq. (1):

$$\begin{aligned}{V_{3D\_Scene}} = \sum {{V_{Voxel(vertical)}} + } \sum {{V_{Voxel(horizontal)}}} \\ = \sum {\sum {Ra{y_{vertical}}({x_{voxel}},{y_{voxel}},{z_{voxel}},\theta ,\varphi )} } \\ + \sum {\sum {Ra{y_{horizontal}}({x_{voxel}},{y_{voxel}},{z_{voxel}},\theta ,\varphi )} } \end{aligned}$$
where Vvoxel(vertical) and Vvoxel(horizontal) are the voxels generated by the vertical and horizontal light field screens, ΣRayvertical(xvoxel, yvoxel, zvoxel, θ, φ) and ΣRayhorizontal(xvoxel, yvoxel, zvoxel, θ, φ) are the sets of rays emitted from the two light control units, θ is the vertical angle of the light rays, and φ is the horizontal rotation angle of the light rays. The relationships between the coordinates of the emitted light rays and the coordinates of the sub-pixels are shown in Eq. (2) and Eq. (3):
$$\frac{{{x_1} - {{x^{\prime}}_1}}}{{x - {x_1}}} = \frac{{{y_1} - {{y^{\prime}}_1}}}{{y - {y_1}}} = \frac{{{z_1} - {{z^{\prime}}_1}}}{{z - {z_1}}}$$
$$\frac{{{x_2} - {{x^{\prime}}_2}}}{{x - {x_2}}} = \frac{{{y_2} - {{y^{\prime}}_2}}}{{y - {y_2}}} = \frac{{{z_2} - {{z^{\prime}}_2}}}{{z - {z_2}}}$$
where (x, y, z) are the coordinates of the spliced viewpoint position, (x1, y1, z1) and (x2, y2, z2) are the coordinates of Voxel1 and Voxel2 in Fig. 5(b), and (x1′, y1′, z1′) and (x2′, y2′, z2′) are the coordinates at which the rays leave the light control units of the two compound lens arrays. The relationship between the position of each voxel and the ith sub-pixel in the jth row it covers is given in Eq. (4) to Eq. (7):
$${x_{c1}} = \left( {\left\lceil {\frac{{{{x^{\prime}}_1} - {{z^{\prime}}_1}\tan \theta }}{w}} \right\rceil + \frac{1}{2}} \right) \cdot w + {z^{\prime}_1}\tan \theta $$
$${x_{c2}} = \left( {\left\lceil {\frac{{{{x^{\prime}}_2} - {{y^{\prime}}_2}\tan \theta }}{w}} \right\rceil + \frac{1}{2}} \right) \cdot w + {y^{\prime}_2}\tan \theta $$
$$\frac{{{f_1}}}{{{y_1} - {f_1}}} = \frac{{{x_{c1}} - {i_1} \cdot p}}{{{x_1} - {x_{c1}}}} = \frac{{{{z^{\prime}}_1} - {j_1} \cdot 3p}}{{{z_1} - {{z^{\prime}}_1}}}$$
$$\frac{{{f_2}}}{{{z_2} - {f_2}}} = \frac{{{x_{c2}} - {i_2} \cdot p}}{{{x_2} - {x_{c2}}}} = \frac{{{{y^{\prime}}_2} - {j_2} \cdot 3p}}{{{y_2} - {{y^{\prime}}_2}}}$$
where f1 and f2 are the distances between the light control units and the sub-pixels on the two LCD panels, and (xc1, f1, z1′) and (xc2, y2′, f2) are the coordinates of the optical axes of the light control units. The image coding process that determines the viewpoint number N corresponding to the coordinates of sub-pixel (i, j) in an elemental image unit is given in Eq. (8):
$$N = \left\lceil {\frac{{[{(i - 1) - 3(j - 1)\tan \gamma } ]\bmod w}}{{w/{N_{\max }}}}} \right\rceil $$
where γ = 12.13° is the slant angle of the compound lens array, w = 28.03 is the number of covered sub-pixels, and Nmax = 96 is the total number of viewpoints. The symbol “mod” is the modulo operator, and “⌈ ⌉” denotes the smallest integer not less than the enclosed variable. After the image coding is computed, two elemental images are rendered and loaded on the vertical and horizontal LCD panels. Each light field display screen produces 96 densely distributed viewpoints, and once these are spliced together, a clear and continuous 3D image can be perceived within the whole 100-degree horizontal viewing range.
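Eq. (8) transcribes directly into code. The sketch below uses the parameters quoted in the text (γ = 12.13°, w = 28.03, Nmax = 96) and assumes, as the equation's (i − 1) and (j − 1) terms suggest, that i and j are 1-based sub-pixel indices.

```python
import math

# Direct transcription of Eq. (8): viewpoint number N for sub-pixel (i, j)
# in an elemental image unit, using the parameters quoted in the text.

GAMMA = math.radians(12.13)   # slant angle of the compound lens array
W = 28.03                     # sub-pixels covered per lens unit
N_MAX = 96                    # total viewpoints per screen

def viewpoint_number(i, j):
    """1-based sub-pixel column i and row j -> viewpoint index (Eq. 8)."""
    shifted = ((i - 1) - 3 * (j - 1) * math.tan(GAMMA)) % W
    return math.ceil(shifted / (W / N_MAX))

print(viewpoint_number(15, 4))   # -> 42
print(viewpoint_number(1, 2))    # -> 94
```

Note that Python's `%` operator already returns a non-negative remainder for a positive modulus, which matches the "mod w" wrapping of the slanted sub-pixel offset across lens-unit boundaries.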

Fig. 5. (a) Schematic diagram of four splicing viewpoints of the light field cave. (b) Image coding process of the viewpoint construction.

3. Experimental results and discussion

In the optical verification experiment, a prototype of the demonstrated vertically spliced tabletop light field 3D display is built, and Fig. 6 shows the appearance of the prototype. Two 65-inch light field display panels are spliced at 90 degrees, with the long side of the screens as the axis, to form a right angle, and the resolution of both LCD panels is 7680 × 4320. Two elemental images with a resolution of 7680 × 4320 pixels are rendered by two NVIDIA GeForce RTX 3080Ti GPUs, and the display contents of the two screens are synchronized through timing control. The image coding method mentioned above realizes 96 spliced horizontal viewpoints for each light field screen. However, owing to processing inaccuracies and assembly errors, each light control unit may not correspond exactly to the sub-pixels it covers, so the viewpoints of the two light field display screens sometimes do not share the same distribution. The misaligned viewpoint distribution leads to dislocation of the display content at the splicing position. Therefore, it is necessary to correct the misaligned viewpoints so that the view zones generated by the two light field screens, and each viewpoint within them, are distributed at the same horizontal positions. Figure 6 shows a comparison of the distribution of four viewpoints before and after correction. In Fig. 6(a), the viewpoint distributions of the two light field display screens are not completely aligned, and the four preset white viewpoints overlap with adjacent black viewpoints. The correction of the spliced viewpoints proceeds in three steps. Firstly, the 1st (viewpoint 1), 32nd (viewpoint 2), 64th (viewpoint 3), and 96th (viewpoint 4) viewpoints are lit in pure white, and all other viewpoints in pure black. Secondly, by adjusting the arrangement of the sub-pixels generated for each viewpoint one by one, the construction position of each viewpoint is moved point by point.
Thirdly, by gradually modifying the pitch, tilt angle, and other parameters of each light control unit in the image coding method mentioned above, the black crosstalk from adjacent viewpoints is corrected until each calibration viewpoint is pure white and the two screens simultaneously display the content of the same preset viewpoint. After the viewpoints are corrected through the image coding method of the spliced light field, the viewpoints constructed by the two screens are completely aligned, improving the clarity and content construction accuracy of the 3D light field display. As shown in Fig. 6(b), four pure white viewpoints without interference from adjacent black viewpoints are observed at the four preset viewpoint positions, which proves the correctness of the viewpoint correction method.
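The three-step correction is essentially a feedback loop over the four calibration viewpoints. The sketch below is a hypothetical skeleton of that loop (the mismatch measurement and parameter nudge are stand-ins for the camera observation and the coding-parameter adjustments described above).

```python
# Sketch of the viewpoint-correction loop. The mismatch measurement and the
# coding-parameter nudge are hypothetical stand-ins for the camera
# observation and pitch/tilt adjustments described in the text.

CAL_VIEWPOINTS = [1, 32, 64, 96]          # lit pure white, all others black

def correct_viewpoints(measure_mismatch, nudge, tol=0.05, max_iter=200):
    """measure_mismatch(v) -> residual offset of viewpoint v between the two
    screens; nudge(v, r) updates the coding parameters to reduce it."""
    for _ in range(max_iter):
        residuals = {v: measure_mismatch(v) for v in CAL_VIEWPOINTS}
        if all(abs(r) < tol for r in residuals.values()):
            return True                   # both screens show each viewpoint aligned
        for v, r in residuals.items():
            if abs(r) >= tol:
                nudge(v, r)
    return False

# Toy stand-in: each nudge removes 30% of the remaining offset.
offset = {1: 0.8, 32: -0.4, 64: 0.6, 96: -0.2}
ok = correct_viewpoints(lambda v: offset[v],
                        lambda v, r: offset.__setitem__(v, r * 0.7))
print(ok)  # -> True
```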

Fig. 6. (a) Viewpoints distribution of four viewpoints before correction. (b) Correction results of four viewpoints in the vertically spliced light field cave display system.

Figure 7 and Visualization 1 show the experimental result of a continuously spliced tabletop light field scene. A continuous urban sandbox scene consisting of buildings of different heights is demonstrated on the proposed light field cave, which simultaneously offers 96 viewpoints to multiple viewers within the 100-degree horizontal viewing angle.

Fig. 7. Urban terrain 3D images in five different viewing angles (-50°, -25°, 0°, 25°, 50°) along the horizontal direction (see Visualization 1).

In our proposed tabletop light field cave display system, panel 1 is vertically spliced at the distal edge of panel 2 to supplement the missing light field content caused by the inclination of the visual frustum. The proposed prototype verifies the design feasibility of the spliced light field cave display system. To ensure the clarity of the light field display, the display depth of the designed compound lens array is set to 20 cm to 30 cm. Compared with the size of the LCD panel (65 inches), this is relatively small, which limits the integrity and display range of the spliced light field cave. If high-rise buildings were placed on the bottom panel, the required display depth would be too large and cause serious blur; hence the tallest buildings can only be placed on the background panel. Nevertheless, the prototype still proves the applicability of the spliced light field cave. We will focus on realizing spliced light fields with large depth and full display range in further studies. We hope that our research can provide inspiration for glasses-free light field displays with a more immersive viewing experience. In the future, with the rapid development of display technology, advanced flexible screen technology will further assist the seamless splicing of large-scale tabletop light field displays. At the same time, display screens with higher resolution and smaller pixel size will further support the realism and clarity of virtual reality 3D display. We believe that the high-tech scenes in science fiction movies, with full space coverage, a highly immersive experience, and a 360-degree surrounding stereoscopic light field display without view flipping, will come true one day.

4. Conclusion

In summary, a vertically spliced tabletop light field cave display with extended depth content and separately optimized viewing angles was demonstrated. By vertically splicing two light field display screens at a right angle to build a light field cave, the vertical off-screen depth of field was expanded and the 3D content loss caused by the tilt of the visual frustum was compensated. By designing and optimizing the vertically and horizontally placed compound lens arrays separately, the aberrations of the light control units were reduced when the two light field screens are viewed simultaneously from different viewing angles. A local heating method was implemented to correct the bonding error of the compound lens array, which improves the fabrication accuracy. At the same time, the image coding method and the correction of the spliced viewpoints ensured the correct construction of voxel positions and the elimination of seam dislocation. In the experiment, a clear, precisely spliced 3D city terrain scene composed of buildings of different heights was displayed on the prototype, which proved the utility of the display depth extension and the aberration correction.

Funding

National Key Research and Development Program of China (2023YFB3611500); National Natural Science Foundation of China (62075016, 62175015).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photon. 5(4), 456–535 (2013). [CrossRef]  

2. D. Fattal, Z. Peng, T. Tran, et al., “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013). [CrossRef]  

3. N. Balram and I. Tosic, “Light-field imaging and display systems,” Inf. Disp. 32(4), 6–9 (2016). [CrossRef]  

4. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photon. 10(3), 512–566 (2018). [CrossRef]  

5. Q. Ma, L. Cao, Z. He, et al., “Progress of three-dimensional light-field display [Invited],” Chin. Opt. Lett. 17(11), 111001 (2019). [CrossRef]  

6. S. Yoshida, “fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector,” Opt. Express 24(12), 13194–13203 (2016). [CrossRef]  

7. M. He, H. Zhang, H. Deng, et al., “Dual-view-zone tabletop 3D display system based on integral imaging,” Appl. Opt. 57(4), 952–958 (2018). [CrossRef]  

8. H. Ren, L. Ni, H. Li, et al., “Review on tabletop true 3D display,” J. Soc. Inf. Disp. 28(1), 75–91 (2020). [CrossRef]  

9. S. Yoshida, “Virtual multiplication of light sources for a 360°-viewable tabletop 3D display,” Opt. Express 28(22), 32517–32528 (2020). [CrossRef]  

10. F. Zhou, F. Zhou, Y. Chen, et al., “Vector light field display based on an intertwined flat lens with large depth of focus,” Optica 9(3), 288–294 (2022). [CrossRef]  

11. S. Pang, T. Wang, F. Zhong, et al., “Tabletop integral imaging 3D display system based on annular point light source,” Displays 69, 102029 (2021). [CrossRef]  

12. Y. Xing, Y. Xia, S. Li, et al., “Annular sector elemental image array generation method for tabletop integral imaging 3D display with smooth motion parallax,” Opt. Express 28(23), 34706–34716 (2020). [CrossRef]  

13. X. Pei, X. Yu, X. Gao, et al., “End-to-end optimization of a diffractive optical element and aberration correction for integral imaging,” Chin. Opt. Lett. 20(12), 121101 (2022). [CrossRef]  

14. A. Jones, K. Nagano, J. Liu, et al., “Interpolating vertical parallax for an autostereoscopic three-dimensional projector array,” J. Electron. Imaging 23(1), 011005 (2014). [CrossRef]  

15. Y. Takaki and J. Nakamura, “Generation of 360-degree color three-dimensional images using a small array of high-speed projectors to provide multiple vertical viewpoints,” Opt. Express 22(7), 8779–8789 (2014). [CrossRef]  

16. C. Su, X. Zhou, H. Li, et al., “360 deg full-parallax light-field display using panoramic camera,” Appl. Opt. 55(17), 4729–4735 (2016). [CrossRef]  

17. B. Chen, L. Ruan, and M. Lam, “Light field display with ellipsoidal mirror arrray and single projector,” Opt. Express 27(15), 21999–22016 (2019). [CrossRef]  

18. D. MacFarlane, “Volumetric three-dimensional display,” Appl. Opt. 33(31), 7453–7457 (1994). [CrossRef]  

19. S. Patel, J. Cao, and A. Lippert, “A volumetric three-dimensional digital light photoactivatable dye display,” Nat. Commun. 8(1), 15239 (2017). [CrossRef]  

20. K. Kumagai, S. Hasegawa, and Y. Hayasaki, “Volumetric bubble display,” Optica 4(3), 298–302 (2017). [CrossRef]  

21. C. Martinez, Y. Lee, N. Clement, et al., “Multi-user volumetric 360° display based on retro-reflective transparent surfaces,” Opt. Express 28(26), 39524–39543 (2020). [CrossRef]  

22. Y. Gu, S. Wan, Q. Liu, et al., “Luminescent materials for volumetric three-dimensional displays based on photoactivated phosphorescence,” Polymers 15(9), 2004 (2023). [CrossRef]  

23. Y. Sando, D. Barada, and T. Yatagal, “Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone,” Appl. Opt. 55(30), 8589–8595 (2016). [CrossRef]  

24. C. Zhong, X. Sang, B. Yan, et al., “Real-time realistic computer-generated hologram with accurate depth precision and a large depth range,” Opt. Express 30(22), 40087–40100 (2022). [CrossRef]  

25. J. Li, Q. Smithwick, and D. Chu, “Holobricks: modular coarse integral holographic displays,” Light: Sci. Appl. 11(1), 82 (2022). [CrossRef]  

26. Z. Yu, Q. Zhang, X. Tao, et al., “High-performance full-color imaging system based on end-to-end joint optimization of computer-generated holography and metalens,” Opt. Express 30(22), 40871–40883 (2022). [CrossRef]  

27. A. Brkić, V. Cviljušac, H. Skenderović, et al., “Unifying fast computer-generated hologram calculation and prepress for new and existing production techniques,” Appl. Opt. 62(10), D119–D124 (2023). [CrossRef]  

28. A. Ashkin and J. Dziedzic, “Optical trapping and manipulation of viruses and bacteria,” Science 235(4795), 1517–1520 (1987). [CrossRef]  

29. D. Smalley, E. Nygaard, K. Squire, et al., “A photophoretic-trap volumetric display,” Nature 553(7689), 486–490 (2018). [CrossRef]  

30. L. Lin, M. Wang, X. Peng, et al., “Opto-thermoelectric nanotweezers,” Nat. Photonics 12(4), 195–201 (2018). [CrossRef]  

31. R. Hirayama, D. Martinez Plasencia, N. Masuda, et al., “A volumetric display for visual, tactile and audio presentation using acoustic trapping,” Nature 575(7782), 320–323 (2019). [CrossRef]  

32. C. Hong, S. Yang, and J. Ndukaife, “Stand-off trapping and manipulation of sub-10 nm objects and biomolecules using opto-thermo-electrohydrodynamic tweezers,” Nat. Nanotechnol. 15(11), 962 (2020). [CrossRef]  

33. J. Hua, E. Hua, F. Zhou, et al., “Foveated glasses-free 3D display with ultrawide field of view via a large-scale 2D-metagrating complex,” Light: Sci. Appl. 10(1), 213 (2021). [CrossRef]  

34. J. Hua, F. Zhou, Z. Xia, et al., “Large-scale metagrating complex-based light field 3D display with space-variant resolution for non-uniform distribution of information and energy,” Nanophotonics 12(2), 285–295 (2023). [CrossRef]  

35. X. Yu, H. Dong, X. Gao, et al., “360-degree directional micro prism array for tabletop flat-panel light field displays,” Opt. Express 31(20), 32273–32286 (2023). [CrossRef]  

36. X. Yan, X. Lin, L. Zhang, et al., “Integral imaging-based tabletop light field 3D display with large viewing angle,” Opto-Electron. Adv. 6(6), 220178 (2023). [CrossRef]  

37. R. Zhou, C. Wei, H. Ma, et al., “Depth of field expansion method for integral imaging based on diffractive optical element and CNN,” Opt. Express 31(23), 38146–38164 (2023). [CrossRef]  

38. X. Yu, H. Li, X. Sang, et al., “Aberration correction based on a pre-correction convolutional neural network for light-field displays,” Opt. Express 29(7), 11009–11020 (2021). [CrossRef]  

39. X. Pei, S. Xing, X. Yu, et al., “Three-dimensional light field fusion display system and coding scheme for extending depth of field,” Opt. Laser Eng. 169, 107716 (2023). [CrossRef]  

40. X. Yu, H. Li, X. Su, et al., “Image edge smoothing method for light-field displays based on joint design of optical structure and elemental images,” Opt. Express 31(11), 18017–18025 (2023). [CrossRef]  

41. B. Fu, X. Yu, X. Gao, et al., “Analysis of the relationship between display depth and 3D image definition in light-field display from visual perspective,” Displays 80, 102514 (2023). [CrossRef]  

42. X. Yu, Z. Zhang, B. Liu, et al., “True-color light-field display system with large depth-of-field based on joint modulation for size and arrangement of halftone dots,” Opt. Express 31(12), 20505–20517 (2023). [CrossRef]  

Supplementary Material (1)

Visualization 1: Urban terrain 3D scene of a vertically spliced tabletop light field cave display with extended depth content and separately optimized compound lens array.
