Optica Publishing Group

Annular sector elemental image array generation method for tabletop integral imaging 3D display with smooth motion parallax

Open Access

Abstract

One of the important features of tabletop 3D displays is the annular viewing area above the display system. In this paper, we propose an annular sector elemental image array (ASEIA) generation method for the tabletop integral imaging 3D display to form the annular viewing zone with smooth motion parallax. The effective pixels of the elemental images are distributed as annular sectors, and they are mapped from the perspective images captured by a ring-shaped camera array. Correspondingly, the viewing sub-zones are formed with an annular sector configuration and can be seamlessly stitched by using the time division scheme. Compared with the previous approach with a rectangular elemental image array (EIA) distribution, the number of viewing sub-zones is decreased from 360 to 10 for the same smooth motion parallax, and the rendering efficiency is improved. The experimental results show that the proposed method produces 360-degree continuous viewpoints in an annular viewing zone.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Tabletop three-dimensional (3D) display enables the reproduction of natural and vivid 3D scenes for multiple people standing or sitting around a table [1]. Since many collaborative tasks are performed on the table, the tabletop 3D display has a wide range of applications in tabletop games, interactive teaching, military maps, industrial design, and so on. In recent years, several interesting tabletop 3D displays have been developed, and most of them are based on glasses-free 3D display technologies [2], including holographic displays [3–6], light field displays [7–10], volumetric displays [11–14], and integral imaging (II) displays [15,16]. Some head-tracking and eye-tracking technologies have also been adopted to provide vertical motion parallax and other functions for tabletop 3D displays [17–19]. Among these technologies, II-based tabletop 3D displays can reproduce full-parallax, full-color 3D images under incoherent or ambient light, and they allow a fairly low-cost and simple system configuration compared with the other glasses-free tabletop 3D displays [20,21]. Moreover, tracking technology for providing vertical motion parallax is not required, thanks to the full-parallax property of the II display.

Ideally, the tabletop 3D display should provide 3D images within a 360-degree viewing area for multiple people to share, and the viewing zone should be annular. The conventional schemes for expanding the viewing area to 360 degrees in the tabletop II 3D display are roughly divided into space division schemes, such as an aspheric conical lens array, and time division schemes with a rotating mechanism [22–26]. The space division scheme that uses an aspheric conical lens array enables the annular viewing zone, since the aspheric conical lenses can direct the light rays emitted by elemental images (EIs) into a ring-shaped range [23]. However, manufacturing the aspheric conical lens array is complicated and expensive. Besides, the resolution of the 3D images deteriorates because all the spatial-angular information within 360 degrees originates from a single elemental image array (EIA). In the time division scheme [24–26], a rotating barrier array or mirror scanner can be combined with a high-refresh-rate liquid crystal display (LCD) or a projector to dynamically present 3D images in different viewing sub-zones around the table. Taking advantage of the human eye's persistence of vision, multiple viewing sub-zones can be stitched to compose the 360-degree viewing zone. However, the viewing zone is not strictly annular, because each viewing sub-zone is generally rectangular [27]. Stitching rectangular viewing sub-zones results in unsmooth motion parallax and an unnatural 3D visual experience, which do not satisfy the annular viewing characteristics of the tabletop 3D display. An effective solution is to increase the number of viewing sub-zones, but more viewing sub-zones require a higher display refresh rate, which aggravates the complexity of the system and the rendering burden of the 3D scenes. In addition, the viewpoints distributed outside the annular viewing area are redundant, which also aggravates the rendering burden.

In this paper, an annular sector EIA (ASEIA) generation method is proposed to achieve the annular viewing zone with smooth motion parallax in the time division tabletop II 3D display. Each viewing sub-zone is given an annular sector configuration by synthesizing the ASEIA composed of multiple annular sector elemental images (ASEIs). Using the time division scheme, the viewing sub-zones can be seamlessly stitched to form the annular viewing zone with smooth motion parallax. Compared with the previous approach introduced by Luo et al. [25], the number of viewing sub-zones is decreased by 350 for the same smooth motion parallax, which simplifies the hardware and decreases the rendering burden of the 3D scenes. In addition, the rendering efficiency is improved. The experimental results demonstrate the feasibility of our method.

This paper is organized as follows. Section 2 introduces the basic principle of the time division tabletop II 3D display based on the tilted barrier array [25]. Section 3 presents the proposed ASEIA generation method. The experimental results are shown in Section 4. Finally, Section 5 concludes this work.

2. Principle of the time division tabletop II 3D display

Figure 1(a) shows the schematic diagram of the tabletop II 3D display using a time division method. It consists of a 2D display panel with a high refresh rate, a tilted barrier array, a large-pitch lens array, a light shaping diffuser screen, and a rotating mechanism [25]. The key idea of this system is to synchronously rotate the tilted barrier array, the lens array, and the 2D display panel so that K EIAs are integrated into K directional viewing sub-zones. These viewing sub-zones are stitched to compose the complete 360-degree viewing zone.


Fig. 1. (a) Schematic diagram of the time division tabletop II 3D display. (b) Perception of 3D scene in a directional viewing sub-zone.


In the tabletop II 3D display, the 2D display panel is located at the bottom to show the switched EIAs. The light rays emitted by the 2D display panel are partially blocked by the tilted barrier array to eliminate the unwanted viewpoints directly above the display system, and the direction of the viewing sub-zone is shifted to an oriented angle φ. As shown in Fig. 1(b), the perception of the 3D scene can be provided in one directional viewing sub-zone. By rotating the lens array, the 2D display panel, and the tilted barrier array with the rotating mechanism while switching the EIAs, the perception of the 3D scene can be produced within a 360-degree viewing area. A large-pitch lens array is used to enlarge the viewing angle of each viewing sub-zone, and the light shaping diffuser screen diffuses the rays emitted by the reconstructed 3D images to ensure that viewers observe consecutive 3D images.

In order to form the annular viewing zone with smooth motion parallax and without flipping, each viewing sub-zone, as a portion of the annular viewing zone, should be an annular sector, as illustrated in Fig. 2(a). Compared with the conventional rectangular viewing sub-zones illustrated in Fig. 2(b), the annular sector viewing sub-zones can be seamlessly stitched, and the viewpoints outside the annular area are eliminated, so no viewpoint is wasted. The viewing angles of the viewing sub-zone are identical in the circular and radial directions and are denoted as θ. Adjacent viewing sub-zones have an overlapping angle Δθ, and these quantities satisfy

$$(\theta - \Delta\theta)K = 360.$$
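The relation above ties the sub-zone viewing angle θ, the overlap Δθ, and the number of sub-zones K. A minimal Python sketch (function name is ours, parameter values taken from the experimental system in Section 4):

```python
# Sketch: relation (theta - delta_theta) * K = 360 between the sub-zone
# viewing angle, the overlap of adjacent sub-zones, and the sub-zone count K.
def num_subzones(theta_deg: float, delta_theta_deg: float) -> int:
    """Number of annular sector viewing sub-zones covering 360 degrees."""
    span = theta_deg - delta_theta_deg  # unique angular coverage per sub-zone
    k = 360 / span
    if abs(k - round(k)) > 1e-9:
        raise ValueError("theta and delta_theta must tile 360 degrees exactly")
    return round(k)

# Experimental parameters: theta = 58 deg, overlap = 22 deg.
print(num_subzones(58, 22))  # -> 10
```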


Fig. 2. Top view of the distribution of the (a) annular sector and (b) rectangular viewing sub-zones.


3. Proposed method

Owing to the time division scheme of the tabletop II 3D display, multiple EIAs should be generated as the 3D content, with each EIA corresponding to one viewing sub-zone. A viewing sub-zone can be regarded as a combination of multiple viewpoints, and each human eye receives a 3D image from more than one viewpoint. At a specific viewpoint position, the human eye receives multiple rays originating from 3D points, and each ray is launched from one pixel of an EI through the optical center of the corresponding lens. The number of rays equals the number of lenses in the lens array, and the number of viewpoints equals the number of pixels in each EI. In the ideal case where the viewing sub-zone is an annular sector, the EIs of the EIA should also be annular sectors, as depicted in Fig. 3. Such an EI is termed an ASEI, and the EIA is correspondingly called an ASEIA. Note that the convergent arrangement is adopted in the display, where the dimension of the EI is slightly larger than the pitch of the lens array [28]. Therefore, at the optimal viewing height H above the screen, the viewing angles of all lenses converge to produce a common annular sector viewing sub-zone.


Fig. 3. ASEI distribution corresponding to the annular sector viewing sub-zone.


In the annular viewing zone, the viewpoints should also be arranged in an annular way. Therefore, in the pickup process, we place the cameras on multiple concentric circles at the same height to ensure that the captured 3D information is consistent with that received by the annular viewpoints, as illustrated in Fig. 4. The camera array consists of M × N cameras, where M and N denote the numbers of cameras in the circular and radial directions, respectively, and m and n are the indices of the cameras. Adjacent cameras have an angle spacing α0 in the circular direction and a gap g in the radial direction. Since the viewing sub-zone is shifted to an oriented direction, the angle between the optical axis of the cameras and the vertical axis is large, so calibration of the camera array is necessary [29]. Alternatively, off-axis cameras can be used without calibration. Note that the perspective images mentioned below refer to the aligned perspective images.


Fig. 4. Arrangement of the camera array in the pickup process.


After obtaining the 360-degree perspective images, the image indices corresponding to each viewing sub-zone should first be selected. For each viewing sub-zone, the number of perspective images is Ms × N. Assuming that mk and nk represent the circular and radial indices of the perspective images for the kth viewing sub-zone, respectively, the relationships between (mk, nk) and (m, n) are given as

$$m = \bmod ({m_k} + (k - 1)R - {M_s}/2 - 1,M),$$
$$n = {n_k},$$
where Ms is the number of perspective images in the circular direction, and R is the offset of the perspective images between two adjacent viewing sub-zones. Ms and R can be calculated as
$${M_s} = \frac{{M\theta }}{{360}},$$
$$R = \frac{{M({\theta - \Delta \theta } )}}{{360}}.$$
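Equations (2)–(5) can be sketched in a few lines of Python. The function name is ours; the modulo keeps the circular index m in [0, M), matching Python's convention, while the paper's index arithmetic is otherwise kept verbatim:

```python
# Sketch of Eqs. (2)-(5): select the perspective-image indices (m, n) for
# the k-th viewing sub-zone. M is the number of cameras in the circular
# direction; theta and delta_theta are in degrees and chosen so that the
# divisions below are exact integers.
def subzone_indices(M, theta, delta_theta, k, m_k, n_k):
    Ms = M * theta // 360                 # images per sub-zone (circular), Eq. (4)
    R = M * (theta - delta_theta) // 360  # offset between adjacent sub-zones, Eq. (5)
    m = (m_k + (k - 1) * R - Ms // 2 - 1) % M  # circular index, Eq. (2)
    n = n_k                                    # radial index unchanged, Eq. (3)
    return m, n

# Illustrative values: M = 360 cameras, theta = 58 deg, overlap = 22 deg.
print(subzone_indices(360, 58, 22, k=1, m_k=30, n_k=5))  # -> (0, 5)
```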

As mentioned above, the shape of the ASEIs should be an annular sector. As shown in the display system of Fig. 1(b), the viewing sub-zones are shifted from the area directly above the screen to an oriented direction. Thus, each ASEI is shifted by s0 pixels in the vertical direction, where the pixel offset s0 is determined by the oriented angle φ. Despite the annular sector shape of the ASEI, its center pixel O coincides with that of the conventional rectangular EI. The coordinates (xo, yo) of the pixel O in the (i, j)th ASEI are calculated as

$$\left\{ {\begin{array}{l} {{x_o} = i{W_{EI}} + \lfloor{({W_{EIA}} + 1)/2} \rfloor }\\ {{y_o} = j{H_{EI}} + \lfloor{({H_{EIA}} + 1)/2} \rfloor } \end{array}} \right.,$$
where WEI × HEI is the resolution of the rectangular region around the ASEI, and WEIA × HEIA is the resolution of ASEIA. By setting the center pixel O to be the pole and the horizontal direction to be the polar axis, a polar coordinate system of the (i, j)th ASEI is created, as depicted in Fig. 5. The radius r and the angular coordinate β of the neighboring pixel P are given as
$$\left\{ {\begin{array}{l} {r = \sqrt {{{({x - {x_o}} )}^2} + {{({y - {y_o}} )}^2}} }\\ {\beta = 2\pi - \bmod \left( {{{\tan }^{ - 1}}\left( {\frac{{y - {y_o}}}{{x - {x_o}}}} \right),2\pi } \right)} \end{array}} \right.,$$
where x and y are the original coordinates of the neighboring pixel P. To guarantee smooth motion parallax without flipping, the redundant pixels outside the annular sector region are filtered out by testing the radius r and the angular coordinate β. The Na effective pixels inside the annular sector region are then mapped from the corresponding pixels of the perspective images. The relationship between the ASEIA Ek(x, y) and the perspective image Imk, nk(x, y) is formulated as
$${E_k}({x,y} )= {I_{{m_k},{n_k}}}({x,y} ),$$
$$\left\{ {\begin{array}{l} {{m_k} = \left\lceil {\frac{{2\beta - \pi + \theta }}{{2\theta }}{M_s}} \right\rceil }\\ {{n_k} = \lceil{\bmod ({r - {s_o},N} )+ 1} \rceil } \end{array}} \right.,\,where\left\{ {\begin{array}{l} {r \in \left[ {{s_o} - \left\lfloor {\frac{{N + 1}}{2}} \right\rfloor + 1,{s_o} - \left\lfloor {\frac{{N + 1}}{2}} \right\rfloor + 1 + N} \right)}\\ {\beta \in \left[ {\frac{\pi }{2} - \frac{\theta }{2},\frac{\pi }{2} + \frac{\theta }{2}} \right)} \end{array}} \right.,$$
where the resolution of Ek(x, y) is equal to that of Imk, nk(x, y). Since the redundant pixels do not need to be mapped from perspective images, and those perspective images do not need to be rendered, the rendering time is decreased. Suppose the total rendering time of one rectangular EIA in the conventional method is tc; it is WEI × HEI times the rendering time of a single viewpoint. The total rendering time of an ASEIA is then Natc / (WEIHEI). Since Na < WEIHEI, the rendering time of the ASEIA is lower than that of a conventional EIA, which means that the rendering efficiency is improved.
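The effective-pixel test behind this saving can be sketched directly from Eqs. (6) and (7) together with the domain constraints of Eq. (9). The following Python sketch builds the boolean mask of effective pixels for one ASEI; all parameter values (ASEI size 128×128, s0 = 40, N = 20) are illustrative, not taken from the experimental system:

```python
import math

# Sketch: boolean mask of effective (annular sector) pixels of one ASEI.
# Center pixel O follows Eq. (6); (r, beta) follow Eq. (7); the radial and
# angular windows follow the constraints in Eq. (9).
def asei_mask(W_EI, H_EI, s_o, N, theta_deg):
    theta = math.radians(theta_deg)
    x_o = (W_EI + 1) // 2                 # center pixel O, Eq. (6)
    y_o = (H_EI + 1) // 2
    r_min = s_o - (N + 1) // 2 + 1        # radial window, Eq. (9)
    r_max = r_min + N
    mask = [[False] * W_EI for _ in range(H_EI)]
    for y in range(H_EI):
        for x in range(W_EI):
            r = math.hypot(x - x_o, y - y_o)                       # radius, Eq. (7)
            beta = (2 * math.pi - math.atan2(y - y_o, x - x_o)) % (2 * math.pi)
            in_radius = r_min <= r < r_max
            in_angle = math.pi / 2 - theta / 2 <= beta < math.pi / 2 + theta / 2
            mask[y][x] = in_radius and in_angle                    # effective pixel
    return mask

mask = asei_mask(W_EI=128, H_EI=128, s_o=40, N=20, theta_deg=58)
N_a = sum(map(sum, mask))   # number of effective pixels
print(f"N_a = {N_a}, fraction N_a/(W_EI*H_EI) = {N_a / (128 * 128):.3f}")
```

The fraction printed at the end is exactly the factor Na / (WEIHEI) by which the rendering time shrinks relative to a rectangular EIA.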


Fig. 5. Polar coordinate system of the ASEI.


4. Experiment and discussion

In order to evaluate the feasibility and the major properties of the proposed method, we implement experiments to compare the performance of the proposed and conventional methods. In the experiments, a time division tabletop II 3D display system is developed to show the 360-degree continuous annular viewpoints. Figure 6 shows the experimental setup, including a 15.6-inch LCD with a resolution of 3840×2160 pixels and a pixel density of 282 pixels per inch, a tilted barrier array with a height of 5 mm, a compound lens array with a pitch of 13 mm, a light shaping diffuser screen, and a rotation stage (GCD-011200, from Daheng Optics). The LCD is placed on the rotation stage and is driven by it, and their centers are strictly aligned.


Fig. 6. Experimental setup of the time division tabletop II 3D display system.


Table 1 lists the parameters of the tabletop II 3D display system. The compound lens array consists of 10×10 compound lenses with an aperture diameter of 12 mm, and each compound lens consists of three spherical lenses for suppressing aberrations and improving the image quality. The barrier array is made of black rigid resin, and the tilt angle of each barrier is 40 degrees. Thus, the oriented angle of the viewing sub-zone is 40 degrees, which is suitable for observers standing around the table. The light shaping diffuser screen is placed 137 mm from the compound lens array, and the diffusion angle is 6.5 degrees. The preset viewing height H is 697 mm, which is also the optimal 3D display height. Following the imaging law, the viewing angle of each viewing sub-zone is 58 degrees in both the circular and radial directions. The overlapping angle of adjacent viewing sub-zones is 22 degrees in the circular direction, so the number of viewing sub-zones is 10. In the conventional method introduced by Luo et al. [25], the overlapping angle is 1 degree and the number of viewing sub-zones is 360. Therefore, the number of viewing sub-zones is reduced by 350 in the proposed method.


Table 1. Configuration Parameters of the Tabletop II 3D Display System

In order to reduce the flicker of the display, the refresh frequency of the tabletop 3D display should be more than 30 Hz. As a result, the rotation speed of the entire display assembly should be at least 30 revolutions per second, and the 2D display device should present at least 300 ASEIA frames per second. In the experiment, a commercial LCD with a refresh rate of 60 Hz and a rotation stage with a rotation speed of 1 r.p.m. are used, because this study focuses on verifying the principle of the proposed ASEIA generation method. Once the principle is verified, one can deduce that multiple viewers around the table will be able to observe the 3D images simultaneously when the refresh rate and rotation speed are fast enough.
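The frame-rate budget above follows directly from the sub-zone count. A minimal sketch (the 360-sub-zone figure in the last comment is our extrapolation for the conventional scheme, not a measurement from the paper):

```python
# Sketch: frame-rate budget of the time division scheme. A flicker-free
# 30 Hz refresh per sub-zone with K sub-zones requires K * 30 ASEIA frames
# per second from the 2D display panel.
K = 10                       # viewing sub-zones in the proposed method
refresh_per_subzone = 30     # Hz, flicker-free target
frames_per_second = K * refresh_per_subzone
print(frames_per_second)     # -> 300
# For comparison, a 360-sub-zone scheme at the same 30 Hz target would
# scale to 360 * 30 = 10800 frames per second (our extrapolation).
```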

A 3D fruit plate model is used for the tabletop 3D display, as shown in Fig. 7(a). Figures 7(b) and 7(c) compare the 0th ASEIA of the proposed method with the 0th EIA of the conventional method. Clearly, the EIs of the proposed method are annular sectors, as depicted in Fig. 7(b); the pixels outside the annular sector area are filtered out. In the conventional method, the EIs are rectangular due to the rectangular grid configuration of the camera array and the conventional EIA generation scheme, as depicted in Fig. 7(c). A computer with an Intel Core i7-8550U CPU @ 1.80 GHz, 8 GB of RAM, and an NVIDIA GeForce 1050 graphics card is used to measure the rendering times of the 0th ASEIA and the 0th conventional EIA, which are 250 seconds and 450 seconds, respectively. Hence the rendering time of the proposed method is reduced by 200 seconds, about 44%, compared with the conventional method.
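The arithmetic behind the reported saving, from the two measured rendering times:

```python
# Sketch: rendering-time comparison for one 0th EIA, using the measured
# values reported above (conventional rectangular EIA vs proposed ASEIA).
t_conventional = 450.0   # seconds, 0th rectangular EIA
t_proposed = 250.0       # seconds, 0th ASEIA
saving = t_conventional - t_proposed
print(saving)                            # -> 200.0 seconds saved
print(f"{saving / t_conventional:.0%}")  # -> 44% less rendering time
```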


Fig. 7. 3D fruit plate model and EIAs of the proposed and conventional methods. (a) A 3D fruit plate model. (b) 0th ASEIA of the proposed method. (c) 0th rectangular EIA of the conventional method.


To demonstrate the capability of producing the annular viewing zone, different perspectives are photographed. Figure 8 illustrates 3D images from different positions along a circular trajectory; the oriented angle is 30 degrees. Clearly, our approach is capable of providing 3D images with 360-degree circular parallax. Because only 10×10 compound lenses are used, the resolution of the 3D images is limited, so the images shown in Fig. 8 are slightly blurred; the 3D images would become clearer if the number of lenses were increased. Figure 9 shows 3D images from different positions along the radial direction. The 3D images can be seen from both sides of the display system rather than from the middle, which verifies that the viewpoints are shifted to the side and that the 3D images have radial parallax. Note that the 3D image quality degrades slightly in Figs. 9(a) and 9(e), which is caused by the severe off-axis aberrations and distortion at the large oriented viewing angles of -45° and 45°, as well as by the small number of lenses used in the experiments. The degraded 3D image quality can be improved by optimizing the lenses to reduce distortion, control off-axis aberrations, and decrease the RMS spot radius in the large field of view.


Fig. 8. Photos taken from different positions of the 3D fruit plate model along with a circular trajectory.



Fig. 9. Photos taken from different positions of the 3D fruit plate model along with the radial direction. (a) -45°, (b) -20°, (c) 0°, (d) 20°, and (e) 45°.


Figure 10 compares the 3D images from redundant viewpoints between the proposed and conventional methods. In the proposed method, when the viewing positions are located outside the annular viewing area but still inside the conventional rectangular viewing area, correct 3D images corresponding to those positions cannot be observed, as shown in Fig. 10(a). In the conventional method, illustrated in Fig. 10(b), 3D images can still be seen from these positions. The results confirm that the redundant 3D images are eliminated in the proposed method.


Fig. 10. Photos taken from redundant viewing positions of (a) the proposed and (b) the conventional methods.


To demonstrate the smooth motion parallax, 3D images from different positions in the overlapping area of the 9th and 0th viewing sub-zones are captured, as shown in Fig. 11. The relative positions of the different fruits change slightly as the viewing position moves, which indicates that the motion parallax is smooth. A visualization (Visualization 1) showing smooth motion parallax within the 0th viewing sub-zone is also provided. From Fig. 11 and Visualization 1, one can deduce that the 9th and 0th viewing sub-zones are seamlessly stitched. Although only two adjacent viewing sub-zones are presented in these experimental results, theoretically all viewing sub-zones can be seamlessly stitched with smooth motion parallax.


Fig. 11. Photos taken from different positions in the overlapping area of the 9th and 0th viewing sub-zones. (a) -27°, (b) -24°, (c) -21°, and (d) -19°.


5. Conclusion

This paper presents an ASEIA generation method for the tabletop II 3D display. The experimental results indicate that the ASEIAs can produce an annular viewing zone with smooth motion parallax for the tabletop II 3D display. Compared with the conventional method, the number of viewing sub-zones is decreased by 350, and the rendering time is reduced by about 44%. In future work, we will focus on increasing the resolution of the 3D images and further improving the efficiency of the 3D scene rendering by adopting ray-tracing techniques.

Funding

National Key Research and Development Program of China (2017YFB1002900); National Natural Science Foundation of China (61927809).

Disclosures

The authors declare no conflicts of interest.

References

1. H. Ren, L. Ni, H. Li, X. Sang, X. Gao, and Q. H. Wang, “Review on tabletop true 3D display,” J. Soc. Inf. Disp. 28(1), 75–91 (2020). [CrossRef]  

2. H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011). [CrossRef]  

3. T. Inoue and Y. Takaki, “Table screen 360-degree holographic display using circular viewing-zone scanning,” Opt. Express 23(5), 6533–6542 (2015). [CrossRef]  

4. Y. Lim, K. Hong, H. Kim, H. E. Kim, E. Y. Chang, S. Lee, T. Kim, J. Nam, H. G. Choo, and J. Kim, “360-degree tabletop electronic holographic display,” Opt. Express 24(22), 24999–25009 (2016). [CrossRef]  

5. S. Tay, P. A. Blanche, R. Voorakaranam, A. V. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, and G. Li, “An updatable holographic three-dimensional display,” Nature 451(7179), 694–698 (2008). [CrossRef]  

6. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. H. Wang, “Holographic capture and projection system of real object based on tunable zoom lens,” PhotoniX 1(1), 6 (2020). [CrossRef]  

7. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, “A 360-degree floating 3D display based on light field regeneration,” Opt. Express 21(9), 11237–11247 (2013). [CrossRef]  

8. S. Yoshida, “fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays,” Opt. Express 24(12), 13194–13203 (2016). [CrossRef]  

9. Y. Takaki and J. Nakamura, “Generation of 360-degree color three-dimensional images using a small array of high-speed projectors to provide multiple vertical viewpoints,” Opt. Express 22(7), 8779–8789 (2014). [CrossRef]  

10. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20(8), 8848–8861 (2012). [CrossRef]  

11. S. K. Patel, J. Cao, and A. R. Lippert, “A volumetric three-dimensional digital light photoactivatable dye display,” Nat. Commun. 8(1), 15239 (2017). [CrossRef]  

12. T. Yendo, T. Fujii, M. Tanimoto, and M. P. Tehrani, “The seelinder: cylindrical 3D display viewable from 360 degrees,” J. Vis. Commun. Image Represent. 21(5-6), 586–594 (2010). [CrossRef]  

13. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38(8), 37–44 (2005). [CrossRef]  

14. R. Hirayama, D. M. Plasencia, N. Masuda, and S. Subramanian, “A volumetric display for visual, tactile and audio presentation using acoustic trapping,” Nature 575(7782), 320–323 (2019). [CrossRef]  

15. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7(1), 821–825 (1908). [CrossRef]  

16. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

17. M. Makiguchi, D. Sakamoto, H. Takada, K. Honda, and T. Ono, “Interactive 360-degree glasses-free tabletop 3D display,” in Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (Association for Computing Machinery, 2019), pp. 625–637.

18. O. Eldes, K. Akşit, and H. Urey, “Multi-view autostereoscopic projection display using rotating screen,” Opt. Express 21(23), 29043–29054 (2013). [CrossRef]  

19. K. Akşit, H. Baghsiahi, P. Surman, S. Ölçer, E. Willman, D. R. Selviah, S. Day, and H. Urey, “Dynamic exit pupil trackers for autostereoscopic displays,” Opt. Express 21(12), 14331–14341 (2013). [CrossRef]  

20. M. Martinez-Corral, A. Dorado, J. C. Barreiro, G. Saavedra, and B. Javidi, “Recent Advances in the Capture and Display of Macroscopic and Microscopic 3-D Scenes by Integral Imaging,” Proc. IEEE 105(5), 825–836 (2017). [CrossRef]  

21. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]  

22. D. Zhao, B. Su, G. Chen, and H. Liao, “360 Degree Viewable Floating Autostereoscopic Display Using Integral Photography and Multiple Semitransparent Mirrors,” Opt. Express 23(8), 9812–9823 (2015). [CrossRef]  

23. X. Yu, X. Sang, X. Gao, Y. Bin, D. Chen, B. Liu, L. Liu, C. Gao, and P. Wang, “360-degree tabletop 3D light-field display with ring-shaped viewing range based on aspheric conical lens array,” Opt. Express 27(19), 26738–26748 (2019). [CrossRef]  

24. M. U. Erdenebat, K. C. Kwon, E. Dashdavaa, Y. L. Piao, K. H. Yoo, G. Baasantseren, Y. Kim, and N. Kim, “Advanced 360-degree integral-floating display using a hidden point removal operator and a hexagonal lens array,” J. Opt. Soc. Korea 18(6), 706–713 (2014). [CrossRef]  

25. L. Luo, Q. H. Wang, Y. Xing, H. Deng, H. Ren, and S. Li, “360-degree viewable tabletop 3D display system based on integral imaging by using perspective-oriented layer,” Opt. Commun. 438, 54–60 (2019). [CrossRef]  

26. D. Miyazaki, N. Akasaka, K. Okoda, Y. Maeda, and T. Mukai, “Floating three-dimensional display viewable from 360 degrees,” Proc. SPIE 8288, 82881H (2012). [CrossRef]  

27. Z. L. Xiong, Q. H. Wang, S. L. Li, H. Deng, and C. C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014). [CrossRef]  

28. H. Deng, Q. H. Wang, L. Li, and D. H. Li, “An integral-imaging three-dimensional display with wide viewing angle,” J. Soc. Inf. Disp. 19(10), 679–684 (2011). [CrossRef]  

29. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

Supplementary Material (1)

Visualization 1: Smooth motion parallax can be observed.
