fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays

Abstract

A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in the circumferential direction without any high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any direction over 360 degrees, each with the appropriate perspective, as if the animated figures were present.

© 2016 Optical Society of America

1. Introduction

The surface of a table, the tabletop, is a useful space for a variety of tasks. Documents, materials, objects, and information can be shared and exchanged on tabletops by multiple people seated around the table. Our target scenario is to support such collaborative tabletop work using virtual 3D media. To realize natural tabletop communication, we consider that the following requirements must be satisfied: (1) 360° 3D images should be observed by each viewer from the correct angle; (2) ordinary tabletop activities should not be inhibited; (3) the number of viewers should not be limited; and (4) no special 3D glasses or wearable tracking system should be required.

Full-parallax auto-stereoscopic displays such as those based on integral imaging can be placed on a table [1, 2] to satisfy conditions (1), (2), and (4). However, such displays generally have a narrow viewing angle: the best observation direction is almost directly in front of the display, which for tabletop use means directly above. Viewing from an oblique angle, the likely angle for people seated around the table, is unsuitable.

Volumetric (swept-volume) displays and recently expanded approaches based on light-field reproduction are other candidates [3–5]. They generate glasses-free 360° 3D images with binocular and motion parallax along the circular direction in the horizontal plane, the so-called horizontal-parallax-only (HPO) 3D display, thereby satisfying conditions (1), (3), and (4). However, they place showcase-like mechanical components on the table, which invade the tabletop space; condition (2) is therefore not satisfied.

Several techniques have been proposed to float a virtual 3D image on a flat tabletop surface [6–10]. IllusionHole [6] provides shareable 3D visual information on a table, satisfying conditions (1) and (2), but it requires viewers to wear special 3D glasses and head trackers. Therefore, it does not simultaneously satisfy conditions (3) and (4).

Some glasses-free tabletop HPO 3D displays [7–10] fully satisfy the conditions explained above. These methods generally employ a horizontally rotating high-speed disk on the table and one or a few high-refresh-rate projectors. However, the high-speed component forces tradeoffs among color depth, refresh rate, and the number of directions displayed per second. Therefore, these systems generally entail important shortcomings when displaying full-color animation and interactive content. Moreover, the moving component is usually bulky, and controlling its momentum becomes difficult when the display is enlarged.

In this paper, we propose a novel method to float 3D images on an empty, flat tabletop surface using only static components. Our tabletop HPO 3D display enables multiple viewers to observe hovering 3D images on the tabletop from any direction over 360°, as if the objects were actually there, while requiring no special 3D glasses. Because it obviates high-speed components around the table, full-color animation and interactive content can be realized easily on our glasses-free tabletop 3D display.

This paper describes our light-field generation principle, whose viewing style is specifically designed for tabletop tasks by assuming seated viewers, along with several practical techniques, based on the proposed principle, used to realize the tabletop 3D display, named fVisiOn.

2. Light-field reproduction by a conical screen and multiple projectors

In the real world, objects are illuminated by light sources. Every surface point of an object reflects light in all directions, with varying luminosity and wavelength. In geometric optics, this radiation is represented by light rays. It is therefore generally accepted that a virtual point light source exists at every point on the surface, radiating innumerable rays in all directions. Each eye gathers a different bundle of rays, and the images formed by these separated directional rays include binocular disparity; the viewer thus perceives the scene as 3D. Our idea simulates such light rays of an assumed 3D scene using an optical device and many projectors, as portrayed in Fig. 1. The combination of these static components provides horizontal parallax in a circular direction, as described below.

Fig. 1 Generating tabletop 3D images using a conical screen and projector array.

The optical device, a surface of revolution (in this case a cone), is fixed underneath the table. Its curved surface has an anisotropic diffusion characteristic for incoming rays: it diffuses widely along the cone’s edge line and only slightly along the circumference, so incoming light spreads into a vertical fan. It works as a kind of rear-projection screen; therefore, we simply call it a “screen.”

As sources of the numerous directional light rays, many tiny projectors are arranged in a circle. Each pixel cast by the projectors corresponds to a particular ray. Our viewing area, defined as a continuous set of viewing points, is designed around and above the table, forming an annular viewing area. In a vertical plane, each ray enters the screen and spreads at an angle. Because we assume seated viewers for our tabletop 3D display, part of each ray’s fan-like diffused light is caught by the viewer’s eye, as shown in Fig. 1 (left).

In contrast, in a horizontal plane, the orientation of the rays produced from the series of projection centers is preserved after they pass through the screen, as shown in Fig. 1 (right). For example, point Ps on the screen simultaneously receives different colors from several separate projectors. Our screen’s relative lack of diffusion in the horizontal plane then makes each projected color visible only from the particular corresponding direction. Therefore, at any eye position on the annular viewing area, the eye observes only slit-like parts of each projection image. For example, eye Ea collects the particular light rays depicted as red solid lines in Fig. 1 (right), but another eye Eb at a separate position sees a different grouping of rays, even from the same projectors: the rays along the green dotted lines.

Here, each ray must convey a particular light property so that virtual light sources form virtual object surfaces. For example, to generate virtual light source Pa, each pixel whose ray leaves a projector and passes through Pa is in charge of the light property that radiates from Pa in that ray’s direction. By applying proper light properties to every pixel of the projection images, virtual point light sources can be generated to form the shape of an assumed 3D scene.

At each viewpoint on the viewing area, the retina collects fractional slit-like images from different projectors, forming an image appropriate for that perspective. By virtue of this principle, and without the need for 3D glasses, appropriate individual-perspective images can be provided to all viewers around the table. This configuration generates binocular and motion parallax: the viewers perceive 3D images.

3. Practical implementation of the proposed light-field reproduction principle

3.1 Configuration design for interactive tabletop 3D display

We designed the exterior of our latest prototype of the tabletop 3D display to look like an ordinary round table, as shown in Fig. 2. The fabricated table is 70 cm tall and 90 cm in diameter. The generated 3D images are around 10 cm wide and 5 cm tall, floating in the center of the table.

Fig. 2 Exterior of the latest prototype, named fVisiOn (left). The tabletop area can be used as an ordinary round table of 90 cm diameter and 70 cm height. It provides virtual 3D images in the center of the table (right).

The principle of our tabletop 3D display is optimized for seated viewing. We set the radius of the annular viewing area to 50 cm and its height to 35 cm above the tabletop level; these parameters were inferred from a general seated posture. This configuration enables users to reach the center of the tabletop with their hands and touch the floating virtual 3D characters. The seated posture also gently guides viewers’ eyes to the designated annular viewing area. Their viewpoints occasionally deviate from the area because of body movement, but the vertical diffusion of our principle guarantees that viewers still see 3D images even when their eyes stray slightly from the sweet spots.

Figure 2 also presents an example of usage in which two persons individually watch the generated 3D images from opposite sides. This photograph was taken from a third viewpoint; even so, the 3D images appear in the correct perspective because the display provides horizontal parallax throughout 360°. The photograph demonstrates interactive live content, a 3D card game: creatures printed on physical cards are summoned onto the table as virtual 3D characters by placing the cards on sensors embedded in the table. An exciting battle unfolds in the center of the table in real time.

3.2 Fabrication of conical optical device

The principle requires a surface of revolution. We chose a concave cone for the screen: viewers seated around the table effectively look down at the side surface of the cone. The core of the screen is made from acrylic resin with a refractive index of 1.49 and forms a 12-cm-deep hollow cone of 20 cm diameter. This dimension determines the size of the 3D images that the display can produce. A bigger cone could display larger 3D images, but our prototype was restricted by the 20 cm size of the procured resin block.

Our method requires an anisotropic scattering characteristic for rays incoming on the side surface of the cone. To realize this anisotropic optical function, we studied several approaches [11, 12]. For preliminary examinations to verify the feasibility of the proposed principle, we produced a handmade screen by winding a filament-like lens around the concave cone [11]; the lens was a 0.4-mm-diameter nylon fishing line, wound regularly and bonded to the cone’s surface with an ultraviolet-curing adhesive. A nano-machining process for optics was then used to fabricate the latest prototype for this study. It engraves wave shapes, similar to those of the handmade method, directly and precisely onto the cone. The profile of the machined lens is a cyclic sequence of semicircles engraved on the outer surface (3 mm thick) of the cone: circles of 0.07 mm radius arranged at a 0.135 mm pitch shape the waves.
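As an illustration only, the following Python sketch models the machined profile under our own assumption that the surface is the upper envelope of overlapping semicircular arcs (radius 0.07 mm, pitch 0.135 mm); it is not the authors' machining code.

```python
import numpy as np

R = 0.07       # arc radius (mm)
PITCH = 0.135  # center-to-center pitch (mm); arcs overlap slightly since 2R > PITCH

def lens_profile(x_mm):
    """Height of the wave-shaped surface at lateral position x (mm).

    Arcs are centered at integer multiples of PITCH; the surface is taken
    as the upper envelope of the overlapping semicircles, so each point
    takes the height of the nearest arc.
    """
    x = np.asarray(x_mm, dtype=float)
    d = np.abs((x + PITCH / 2) % PITCH - PITCH / 2)  # distance to nearest arc center
    return np.sqrt(np.maximum(R**2 - np.minimum(d, R)**2, 0.0))

xs = np.linspace(0.0, 3 * PITCH, 301)
h = lens_profile(xs)
print(f"peak-to-valley depth: {h.max() - h.min():.4f} mm")  # ~0.05 mm
```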

This idea resembles wrapping a lenticular lens, an array of half-cylindrical lenses, onto the cone’s surface. In a cut plane that crosses the series of lenses, where the profile is wave-shaped, a transmitted ray spreads out; in the orthogonal plane, where the lens thickness is constant, the ray passes straight through.

Figure 3 shows the horizontal diffusion characteristics of the handmade lens and the machined one. A laser beam (532 nm wavelength, 1.5 mm diameter) was cast on the side surface of the screen, and the diffused light was measured using a 2D luminance meter. Here, the intensity is averaged at each angle and normalized by its maximum value.

Fig. 3 Characteristics of horizontal angular diffusion of prototype screens.

Although the handmade lens allowed us to examine several parameters quickly, it was naturally difficult to fabricate precisely. It exhibited several recognizable streaky defects on the cone, degrading the quality of the observed final 3D images; these defects caused a side lobe around −0.15°, as shown in the profile of the handmade lens. The result also shows that the more accurate machined lens was brighter than the handmade one.

Figure 4 presents the anisotropic optical function of the screen. Because of the limited horizontal diffusion, two beams projected on the same point pass through in different directions. The vertical diffusion is considerably wider than the horizontal diffusion. The diffusion characteristic of the outgoing light is specified as the full angle of radiation at half maximum (FAHM) of intensity. The measured FAHM of the machined lens was approximately 60° in the vertical direction and 0.1° in the horizontal direction.
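For reference, the FAHM of a measured angular profile can be extracted as in the sketch below. The Gaussian profile used here is synthetic test data, not the measurement of Fig. 3; only the half-maximum bookkeeping is of interest.

```python
import numpy as np

def fahm(angles_deg, intensity):
    """Full angle of radiation at half maximum of a normalized profile."""
    half = intensity.max() / 2.0
    above = np.where(intensity >= half)[0]
    i_lo, i_hi = above[0], above[-1]

    def crossing(i_out, i_in):
        # linear interpolation between the sample below and above half maximum
        a0, a1 = angles_deg[i_out], angles_deg[i_in]
        v0, v1 = intensity[i_out], intensity[i_in]
        return a0 + (half - v0) * (a1 - a0) / (v1 - v0)

    left = crossing(i_lo - 1, i_lo) if i_lo > 0 else angles_deg[0]
    right = crossing(i_hi + 1, i_hi) if i_hi < len(angles_deg) - 1 else angles_deg[-1]
    return right - left

# synthetic Gaussian profile with a known 0.1 degree full width at half maximum
angles = np.linspace(-0.5, 0.5, 2001)
sigma = 0.1 / (2 * np.sqrt(2 * np.log(2)))
profile = np.exp(-angles**2 / (2 * sigma**2))
print(f"FAHM ~ {fahm(angles, profile):.3f} deg")  # ~0.100
```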

Fig. 4 Anisotropic diffusion of screen.

3.3 Circular arrangement of projectors and modularization

The latest prototype employs tiny, 7-mm-wide qHD LCoS projectors. These are arranged on a circle of 34 cm radius at a constant pitch: 288 projectors at a 1.25° (7.4 mm) pitch. The screen is installed immediately beneath the tabletop surface and is covered by a thick black transparent acrylic plate. The projectors are arranged circularly underneath the 28-cm-deep table and tilted upward by 39° to aim at the screen, as shown in Fig. 5. The width of the slit-like fractional images observed on the conical screen is related to the screen’s horizontal diffusion characteristics, and the projectors’ angular pitch, as perceived from the viewpoints, should match that diffusion angle. In this prototype configuration, the distance between an eye on the annular viewing area and a projector on the circle is approximately 1 m; the perceived angular pitch of the projectors at the viewpoints is therefore around 0.4° (7.4 mm / 1 m). To compensate for the mismatch, a weak isotropic diffuser is applied to the screen in this implementation, increasing the horizontal angular diffusion from 0.1° to around 0.4°.
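These figures follow directly from the stated geometry; the short script below merely re-derives them.

```python
import math

N = 288                   # projectors on the circle
ring_radius_m = 0.34      # radius of the projector circle
eye_to_projector_m = 1.0  # approximate distance from a viewpoint to a projector

angular_pitch_deg = 360 / N                                  # 1.25 degrees
linear_pitch_mm = 2 * math.pi * ring_radius_m * 1000 / N     # ~7.4 mm
perceived_pitch_deg = math.degrees(
    math.atan(linear_pitch_mm / 1000 / eye_to_projector_m))  # ~0.42 degrees

print(f"angular pitch:   {angular_pitch_deg:.2f} deg")
print(f"linear pitch:    {linear_pitch_mm:.1f} mm")
print(f"perceived pitch: {perceived_pitch_deg:.2f} deg")
```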

Fig. 5 Circularly arranged projectors and screen (left). The screen is covered by a plate, and a 3D image appears above it (right).

For ease of use and portability, we fabricated modular projector arrays, as shown in Fig. 6. Each module holds 24 projectors and a video splitter: it takes a 2400 × 1600 pixel video signal at 60 Hz and splits it into 24 projection images of 400 × 400 pixels. One module covers 30° of the viewing area, so 12 modules together produce the 360°-view 3D images. Each module also works independently; a few modules suffice for a brief demonstration, as shown in Fig. 6 (right).
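A minimal sketch of the tiling implied by this format follows. The 6 × 4 grid layout is our assumption; the text does not specify how the 24 sub-images are packed into the 2400 × 1600 frame.

```python
import numpy as np

TILE = 400  # each projection image is 400 x 400 pixels

def split_frame(frame):
    """Yield the 24 per-projector sub-images of a 1600 x 2400 frame in row-major order."""
    rows, cols = frame.shape[0] // TILE, frame.shape[1] // TILE  # 4 rows x 6 columns
    for r in range(rows):
        for c in range(cols):
            yield frame[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]

frame = np.zeros((1600, 2400, 3), dtype=np.uint8)  # H x W x RGB, one module's video input
tiles = list(split_frame(frame))
print(len(tiles), tiles[0].shape)  # 24 (400, 400, 3)
```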

Fig. 6 Overview of prototype projector module (left). Single-module setting to produce a 30° viewing area (middle) and side view of two-module setting for a portable demonstration (right).

3.4 Generation of multi-perspective images

As depicted in Fig. 1, each projector does not cast a single-perspective image corresponding to a particular viewpoint such as Ea or Eb, as if photographed from there. Each 400 × 400 pixel projection image must include multiple perspectives; in fact, each pixel of a projection image corresponds to a different viewpoint such as Ea or Eb. However, common computer graphics rendering methods generate images based on a pinhole camera model. Therefore, we needed to prepare a special rendering algorithm for our purpose.

In our principle, the pixels of the projection images form numerous directional rays, and each ray reaches a different viewpoint. Figure 7 depicts the relation between a point Pa on the surface of an assumed 3D object in world coordinates and the corresponding pixel Pi in a projection image. To reproduce a virtual point light source at Pa, the pixel at Pi must convey the light property that is radiated from Pa and marches along vector v.

Fig. 7 Geometrical path of each ray for reproducing 3D images.

The first step is to find the corresponding viewpoint for each pixel Pi. The shape of the conical screen S, the projector position Pp, and the annular viewing area C are all known from the designed system configuration. Because the screen diffuses anisotropically, a ray projected from Pp that passes through Pi and arrives at Ps on the screen is ideally spread only within the vertical plane; in the horizontal plane, its outgoing light travels straight to C, as shown in Fig. 7 (left). Therefore, the appropriate viewpoint Pe on C for each Pi can be computed in this horizontal 2D plane, and its 3D position follows directly from the known height of C.
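This first step can be sketched as follows. The formulation is our own simplification (dimensions follow the prototype described above; degenerate rays are ignored), not the authors' code: intersect the pixel ray with the cone to obtain Ps, then extend the ray's horizontal bearing from Ps to the viewing circle to obtain Pe.

```python
import numpy as np

CONE_TOP_R = 0.10  # cone rim radius (m) at tabletop level, z = 0
CONE_DEPTH = 0.12  # cone depth (m); the apex sits at z = -0.12
VIEW_R = 0.50      # radius of the annular viewing area C (m)
VIEW_H = 0.35      # height of C above the tabletop (m)

def ray_cone_hit(origin, direction):
    """First hit of the ray origin + t*direction (t > 0) with the cone
    sqrt(x^2 + y^2) = k * (z + CONE_DEPTH), restricted to -CONE_DEPTH <= z <= 0."""
    k = CONE_TOP_R / CONE_DEPTH
    ox, oy, oz = origin
    dx, dy, dz = direction
    zo = oz + CONE_DEPTH
    a = dx * dx + dy * dy - (k * dz) ** 2
    b = 2.0 * (ox * dx + oy * dy - k * k * zo * dz)
    c = ox * ox + oy * oy - (k * zo) ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    for t in sorted(t for t in [(-b - np.sqrt(disc)) / (2 * a),
                                (-b + np.sqrt(disc)) / (2 * a)] if t > 0):
        p = np.asarray(origin) + t * np.asarray(direction)
        if -CONE_DEPTH <= p[2] <= 0:
            return p
    return None

def viewpoint_for_pixel_ray(origin, direction):
    """Step 1: screen point Ps and viewpoint Pe on C for one pixel ray.

    The ray's horizontal bearing is preserved through the screen, so Pe is
    where that bearing, extended from Ps, meets the circle of radius VIEW_R.
    """
    ps = ray_cone_hit(origin, direction)
    if ps is None:
        return None, None
    h = np.asarray(direction[:2], dtype=float)
    h /= np.linalg.norm(h)
    b = ps[:2] @ h
    s = -b + np.sqrt(b * b - (ps[:2] @ ps[:2] - VIEW_R ** 2))  # positive root
    return ps, np.array([ps[0] + s * h[0], ps[1] + s * h[1], VIEW_H])

# example: a projector on the 34 cm circle, 28 cm below the tabletop,
# aimed up toward the cone (this particular direction is illustrative)
pp = np.array([0.34, 0.0, -0.28])
d = np.array([-0.34, 0.02, 0.22])
ps, pe = viewpoint_for_pixel_ray(pp, d / np.linalg.norm(d))
print("Ps:", ps, " Pe:", pe)
```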

The second step is to compute the vector v. Because Ps is obtained as the intersection of the known shape S with the ray from Pp along (Pi − Pp), and Pe is also known, v is defined as (Pe − Ps). Finally, Pa is derived as the intersection of the line along v with the assumed 3D objects’ surfaces in world coordinates.

The color and luminosity traveling from Pa along v are computed with a rendering algorithm, such as a classical ray-tracing method cast from Pe in the direction −v, or its GPU-based real-time counterpart [13].
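Continuing the same sketch, the per-pixel color then comes from a conventional ray cast from Pe toward the screen point; `trace` is a hypothetical stand-in for whatever renderer (classical ray tracer or GPU equivalent) draws the assumed scene.

```python
def pixel_color(pp, d, trace):
    """Color of the pixel whose ray leaves projector position pp along d."""
    ps, pe = viewpoint_for_pixel_ray(pp, d)
    if ps is None:
        return (0.0, 0.0, 0.0)  # ray misses the screen: black pixel
    v = pe - ps                 # v in the paper's notation (Ps toward Pe)
    w = -v / np.linalg.norm(v)  # cast from Pe in the direction -v
    return trace(origin=pe, direction=w)
```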

Figure 8 portrays rendering results obtained using our method. In these examples, the assumed 3D objects were a checkered cube with differently colored faces and a scene of three objects placed on a round floor. The left and middle images are drawn in ordinary perspective; naturally, the far side of the cube cannot be seen in the single-perspective image (top middle). Our method, however, renders a multi-perspective image that includes different directional views, so the cube’s red face and the opposite blue face are visible simultaneously in the top-right image. Additionally, the projection image is flipped horizontally because of the geometric configuration, and the shapes are deformed as a computed consequence of the vertical diffusion on the conical screen.

Fig. 8 Example of computed projection image: assumed 3D scene (left), ordinary single-perspective image photographed at Pe (middle), and multi-perspective image to be projected from Pp (right).

3.5 Control of multiple modules

Each fabricated projector-array module requires a dual-link DVI video signal of 2400 × 1600 pixels, supplied by a corresponding rendering PC. All PCs are joined in the same network. A master PC distributes control messages, such as updates of the virtual 3D scene and synchronization signals, over the network; each node PC individually renders the multi-perspective images for its assigned area and synchronizes the 3D scene using the distributed messages.
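The paper specifies only that scene updates and synchronization signals are distributed over the network; the transport and message format in the following sketch (UDP broadcast of JSON) are therefore our assumptions, not the authors' protocol.

```python
import json
import socket
import time

BCAST = ("255.255.255.255", 50000)  # assumed broadcast address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

# master side: push a few frames of scene state and sync pacing at 60 Hz
for frame in range(3):
    msg = {"type": "sync", "frame": frame, "t": time.time(),
           "scene": {"bunny": {"yaw_deg": 1.5 * frame}}}  # hypothetical payload
    sock.sendto(json.dumps(msg).encode(), BCAST)
    time.sleep(1 / 60)
```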

Dual-link DVI output is available even on compact PCs such as the Apple Mac mini and Intel NUC; the Mac mini, for example, has two dual-link video outputs. In the minimum setting, one PC generates the 48 multi-perspective images for two modules, so six PCs serve as render nodes for the 12 modules covering the 360° viewing area.

3.6 Calibration of rays

All projectors mounted on the modules must be arranged precisely: their optical axes should cross at a common point (the table’s center). However, accurate physical adjustment of hundreds of projectors is troublesome. Moreover, small differences between the ideal and real parameters cause errors, such as those involving the projection angle. This challenge is common to all multi-projector displays [14], and an appropriate approach must be applied in each case.

As a practical approach for our system, we first tweak the pose and position of each projector manually; the errors remaining from misalignment are then corrected by a software technique. A homographic transformation matrix is employed to correct the final projection image coordinates. According to the designed system parameters, any pixel Pi should be projected to an ideal 3D point Ps on the screen, but because of inaccurate hardware alignment, a different pixel Pi' actually corresponds to Ps. The homographic transformation matrix can be computed from several pairs of Pi and Pi'. As a simple means of obtaining the pairs, we laid a semi-transparent sheet horizontally on the tabletop surface in place of the curved screen, projected several patterns that identify the path of each Pi, and captured the projected images on the sheet with a camera.
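A plausible realization of this correction is sketched below with OpenCV; the paper names the technique but not a library, and the point pairs shown are fabricated placeholders for pairs recovered from the calibration patterns.

```python
import numpy as np
import cv2

# four or more (Pi, Pi') pairs for one projector, in projection-image pixels
pi_ideal = np.array([[50, 50], [350, 50], [350, 350], [50, 350]], np.float32)
pi_real = np.array([[53, 48], [352, 55], [348, 347], [47, 353]], np.float32)

# homography mapping ideal pixel coordinates to the ones that actually reach Ps
H, _ = cv2.findHomography(pi_ideal, pi_real)

# pre-warp each 400 x 400 projection image with H before it is sent out
image = np.zeros((400, 400, 3), np.uint8)
corrected = cv2.warpPerspective(image, H, (400, 400))
```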

Figure 9 depicts the calibration results. The left panel shows the difficulty of hardware-only alignment. The projected crosshair lines indicate the direction of each projector’s optical axis; ideally, all crosshairs should cross at a single point on the sheet, but many errors are apparent, and adjusting the pose of each projector manually often takes an entire day. On the right, software calibration eliminated the errors: the projected lines cross at the same point and at uniform angles.

Fig. 9 Calibration of rays (24 projectors): hardware-only alignment (left) and after software calibration (right).

Uniformity of color and brightness across the projection images must also be considered because a viewer’s eye collects rays from different projectors, as depicted in Fig. 1. The employed LCoS projector has three RGB LEDs that produce 24-bit color, and our module can adjust the lighting-duration ratio and maximum luminosity of those LEDs. To correct the uniformity, we measured the color temperature and illuminance of each projector using a chroma meter, then computed appropriate parameters to balance them at the prescribed color temperature and illuminance.

Table 1 shows the measured uniformity of the 24 projectors of one module. The factory tolerance for color uniformity of the procured LCoS projectors was not sufficient: their shipping test allows color temperatures of roughly 6500 ± 1000 K. In the non-calibrated condition, the measured color temperatures accordingly showed a wide standard deviation. Applying the computed LED-adjustment parameters reduced the standard deviation of the color temperature to one-fifth, and the illuminance was likewise controlled around the target value of 37 lx.
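As a simplified illustration of the illuminance half of this balancing (the chromaticity side, rebalancing the R/G/B lighting durations toward 6500 K, is omitted), per-projector dimming gains toward the common target might be derived as follows; the measured values are invented samples.

```python
import numpy as np

TARGET_LX = 37.0
measured_lx = np.array([41.2, 36.1, 39.8, 35.0])  # hypothetical chroma-meter readings

# LEDs can only be dimmed, so a unit already below the target keeps full drive;
# in practice the target would be chosen at or below the dimmest projector
gains = np.minimum(TARGET_LX / measured_lx, 1.0)
print(np.round(gains, 3))  # duty-cycle scale factor per projector
```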

Table 1. Average and standard deviation of color temperature and illuminance before and after calibration. Target values after calibration were 6500 K and 37 lx.

4. Discussion

Figure 10 presents a generated virtual 3D scene photographed from three separate viewpoints on the annular viewing area of our prototype glasses-free tabletop 3D display. The scene includes several virtual 3D objects, as seen in Fig. 8: the Utah teapot, the Stanford bunny, a textured cube, and a blue round floor. The 3D objects can be displayed inside a displayable volume, a sphere of roughly 5 cm radius determined by the hardware configuration of the latest prototype.

Fig. 10 Reproduced virtual 3D scene on the tabletop surface.

From the photographs, one can confirm that the reproduced 3D images provide parallax without special glasses. In this configuration, the distance between the viewpoint and the center of the table is around 70 cm. Thus, a 10-cm-wide virtual 3D image has a visual angle of around 8.2°. Because the perceived projector angular pitch is around 0.4°, the observed 3D image at each viewpoint is an integration of fractional slit-like images coming from approximately 21 projectors.
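These viewing figures again follow from simple geometry; the script below re-derives them from the stated distances.

```python
import math

viewing_distance_m = 0.70  # viewpoint to the table center
image_width_m = 0.10       # width of the virtual 3D image
perceived_pitch_deg = 0.4  # projector pitch as seen from the viewpoint

visual_angle_deg = 2 * math.degrees(
    math.atan(image_width_m / 2 / viewing_distance_m))
print(f"visual angle: {visual_angle_deg:.1f} deg")  # ~8.2
print(f"projectors per view: {visual_angle_deg / perceived_pitch_deg:.1f}")
# ~20.4; rounding the angle to 8.2 deg gives the text's "approximately 21"
```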

The reflection and the shadows cast on the imaginary blue floor on the tabletop surface are also reproduced appropriately. We used multi-pass rendering to apply the reflection effect on the floor and to cast shadows on the objects. In the first pass, a shadow map was generated for the scene in each light’s coordinate system; this map is referenced when shading the object surfaces in later passes. Next, a mirrored scene was rendered on the tabletop surface using the proposed algorithm, with the frame buffer storing each pixel’s color and the depth of the mirrored scene for the corresponding viewpoint. Then the blue floor was drawn with a certain transparency, applying alpha blending and a depth test to each pixel. In the last pass, the original objects were overwritten into the frame buffer in the same manner. These rendering passes are implemented in the vertex and fragment shaders of an ordinary GPU.

For each module, the 24 multi-perspective images must be rendered simultaneously in real time, so a high-end PC provides better performance; however, the proposed rendering algorithm functions even on a compact PC cluster. As an experimental configuration, Apple Mac minis (2.6 GHz Intel Core i5 with a built-in Intel Iris Graphics 5100 GPU) were employed. The rendering speed depends on the number of polygons and the complexity of the 3D content, but such a lower-range PC was sufficient for our demonstrations. For example, a simply shaded scene of 7,200 polygons, including the round floor and the Stanford bunny, rendered at around 25 frames per second when a single Mac mini generated the 48 multi-perspective images of 400 × 400 pixels for two modules. When additional multi-pass rendering effects such as cast shadows and a reflective floor are applied, the frame rate drops to half or less.

Figure 11 portrays another illustrative result achieved by fVisiOn. In this photograph, a real mirror and a toy duck are placed beside the generated 3D objects. The glasses-free 3D objects and the duck clearly coexist naturally on the table. Additionally, the mirror reflects both the virtual and the real objects because horizontal parallax exists all around the table.

Fig. 11 Mixed-reality environment consisting of virtual and real objects.

5. Conclusion

This report described a novel glasses-free 3D image reproduction principle for a tabletop 3D display, together with its practical implementation. Our method combines two static components: a conical anisotropic rear-projection screen and a circularly arranged projector array. Because it has no high-speed or moving components, it is well suited to displaying full-color, real-time animation.

The developed prototype appears to be an ordinary round table. The conical screen and portable projector array modules are inside. It is able to display full-color, 5-cm-tall 3D images that are animated on a flat tabletop surface in real time. Several people around the table can share and enjoy the virtual 3D characters simultaneously from appropriate perspectives for each position. Moreover, people standing at a distance around the table can enjoy the 3D animation dancing on the tabletop because our method is designed to be viewed from above at an angle.

An additional benefit of our method is the blank space around the 3D images. The tabletop is covered only by a semi-transparent acrylic plate, so sensors, including cameras and magnetic devices, can be placed underneath the table.

fVisiOn has high affinity with ordinary collaborative activities on tabletops. It provides natural mixed-reality environments. It would be appropriate for group discussions and teleconferences using digital documents and virtual mock-ups around the table.

The reproduced 3D images of the prototype are somewhat blurred and unfocused because of the weak isotropic diffuser applied to the screen. The angular pitch of the projectors must be reduced to generate sharper results. Refining the 3D image quality by improving the hardware equipment and configuration, as well as the software algorithms, remains a subject for future work.

Acknowledgments

Part of this research was supported by the Japan Science and Technology Agency (JST), Core Research for Evolutional Science and Technology (CREST).

References and links

1. J. Arai, H. Kawai, and F. Okano, “Microlens arrays for integral imaging system,” Appl. Opt. 45(36), 9066–9078 (2006). [CrossRef]   [PubMed]  

2. M. Yamasaki, H. Sakai, T. Koike, and M. Oikawa, “Full-parallax autostereoscopic display with scalable lateral resolution using overlaid multiple projection,” J. Soc. Inf. Disp. 18(7), 494–500 (2010). [CrossRef]  

3. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38(8), 37–44 (2005). [CrossRef]

4. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “An interactive 360° light field display,” in Proceedings of ACM SIGGRAPH 2007 Emerging Technologies (2007), p. 13.

5. T. Yendo, T. Fujii, M. Tanimoto, and M. P. Tehrani, “The Seelinder: Cylindrical 3D display viewable from 360 degrees,” J. Vis. Commun. Image Represent. 21(5–6), 586–594 (2010). [CrossRef]

6. Y. Kitamura, T. Konishi, S. Yamamoto, and F. Kishino, “Interactive stereoscopic display for three or more users,” in Proceedings of ACM SIGGRAPH 2001 (2001), pp. 231–240.

7. G. E. Favalora and O. S. Cossairt, “Theta-parallax-only (TPO) displays,” United States patent 7,364,300 (2008).

8. H. Horimai, D. Horimai, T. Kouketsu, P. B. Lim, and M. Inoue, “Full-color 3D display system with 360 degree horizontal viewing angle,” J. Three Dimensional Images 24(2), 7–10 (2010).

9. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, “A 360-degree floating 3D display based on light field regeneration,” Opt. Express 21(9), 11237–11247 (2013). [CrossRef]   [PubMed]  

10. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20(8), 8848–8861 (2012). [CrossRef]   [PubMed]  

11. S. Yoshida, S. Yano, and H. Ando, “Prototyping of glasses-free table style 3D display for tabletop tasks,” in Proceedings of SID 2010 Digest (2010), pp. 211–214.

12. S. Yoshida, M. Kawakita, and H. Ando, “Light-field generation by several screen types for glasses-free tabletop 3D display,” in Proceedings of 3DTV-CON 2011 (2011), pp. 1–4.

13. S. Yoshida, “Real-time rendering of multi-perspective images for a glasses-free tabletop 3D display,” in Proceedings of 3DTV-CON 2013 (2013), pp. 1–4.

14. N. S. Holliman, N. A. Dodgson, G. E. Favalora, and L. Pockett, “Three-dimensional displays: a review and applications analysis,” IEEE Trans. Broadcast 57(2), 362–371 (2011). [CrossRef]  
