
Multiple ray cluster rendering for interactive integral imaging system


Abstract

In this paper, we present an efficient computer-generated integral imaging (CGII) method, called multiple ray cluster rendering (MRCR). Based on the MRCR, an interactive integral imaging system is realized, which provides accurate 3D images that adapt to changing observer positions in real time. The MRCR method generates all the elemental image pixels within only one rendering pass by ray reorganization of multiple ray clusters and 3D content duplication. It is compatible with various graphic contents, including mesh, point cloud, and medical data. Moreover, a multi-sampling method is embedded in the MRCR method to produce anti-aliased 3D images. To the best of our knowledge, the MRCR method outperforms existing CGII methods in both speed and display quality. Experimental results show that the proposed CGII method achieves real-time computational speed for large-scale 3D data with about 50,000 points.

©2013 Optical Society of America

1. Introduction

Integral imaging technology [1] is one of the most promising methods for displaying full-color, full-parallax, auto-stereoscopic 3D images. The technique comprises a capture part and a display part. In the capture part, 3D information is captured through a lens array and recorded as an elemental image array (EIA). In the display part, the 3D images are integrated from the elemental images through the lens array [2]. The capture part can be replaced with computer-generated integral imaging (CGII) technology. Currently, CGII is important and widely used in integral imaging systems; it obtains the EIA by using computer graphics techniques with a virtual lens array whose parameters are determined from the real lens array of the integral imaging display.

Early CGII methods, such as point retracing rendering (PRR) [3], render the EIA point by point by retracing the displayed 3D object. Such methods are simple and widely used, but their rendering speed is very low. Several more efficient CGII methods have been proposed, such as multiple viewpoint rendering (MVR) [4], parallel group rendering (PGR) [5, 6], and viewpoint vector rendering (VVR) [7–9]. MVR generates each elemental image sequentially by rendering the perspective image captured by the corresponding virtual lens, so the computational time increases linearly with the number of micro lenses; with a large number of lens elements, MVR is only suitable for off-line processing. PGR is a more efficient algorithm in which the EIA is obtained from directional scenes, which are imaginary scenes observed from certain directions. PGR reduces the number of scene rendering passes to the number of displayed pixels in one elemental image. The method is fast, but it is limited to the focused mode. VVR also generates directional scenes like PGR; the difference is that VVR generates more directional scenes, which correspond to larger elemental images. Hence VVR is suitable for all display modes, including the real, virtual, and focused modes. Both VVR and PGR need multiple rendering passes, which deteriorates the speed performance. Therefore, they are limited to small-scale 3D scenes and small EIAs.

A fast CGII algorithm is essential because the ability to visualize and manipulate 3D data interactively is of great importance in the analysis and interpretation of the data [10]. Inadequate computational speed will greatly impede the applications of integral imaging technology; for example, low speed may lead to cumbersome manipulation or even no feedback in interactive settings.

An effective means of improving the computational speed of CGII is to exploit hardware with high computational power, such as graphics processing units (GPUs). A good example is the image space parallel computing method [11], which calculates the pixel values of the EIA on the GPU, where multiple threads run in parallel. In each thread, the method calculates the intersection point between the corresponding ray and the 3D volume data. This solution significantly decreases the computation time, but it is not suitable for the widely used polygon-based graphics contents, for which ray intersection with polygons requires complex tests and the ray rendering cannot be efficiently parallelized.

In this paper, we propose a novel real-time CGII method, called the multiple ray cluster rendering (MRCR) method, to realize an interactive integral imaging system supporting almost all types of 3D graphics data. The proposed CGII method exploits the programmability of the GPU, on which multiple clusters of perspective rays are rendered in parallel. The MRCR method can generate an EIA of 1000 × 1000 pixels at about 50 frames per second (fps) from large-scale graphic data. Rather than grouping parallel rays as in PGR, the MRCR method clusters and manipulates perspective rays to achieve an optimized viewing zone, which allows users to perceive the 3D image over a maximal viewing range with the same integral imaging system [12]. It is worth noting that the multiple ray clusters are adaptively generated according to the view distance determined by the observers. Moreover, an anti-aliasing method is included in our rendering method to improve the display quality.

Based on the MRCR method, the integral imaging system achieves two additional features that improve the 3D visual experience. First, inspired by previous tracking integral imaging systems [13, 14], the integral image in our interactive system is rendered in adaptation to the observers' positions, but at much higher speed. This enables the creation of correct 3D images for users within a much larger field of view. Second, users can manipulate the 3D image in real time, including rotation, translation, and zoom of the 3D scene content. The immediate update of the 3D image in response to the user's manipulation provides a fluent 3D visual experience and significantly enhances 3D image perception. The performance of our method has been extensively examined with various graphics contents on an integral imaging system. A frame rate of 24 fps has been achieved with a graphic scene of about 50,000 vertices. To the best of our knowledge, our work is among the first efforts able to achieve such rendering speed on large-scale data. Experiments also show satisfactory 3D display quality thanks to our super-sampling method.

2. Multiple ray cluster rendering method

In this section, we first introduce our interactive integral imaging system in section 2.1. The principle of the MRCR method is then described in section 2.2. Finally, the implementation of MRCR on the GPU is given in section 2.3.

2.1 Configuration of the interactive integral imaging system

Figure 1 shows the setup of our interactive integral imaging system. The system mainly consists of two parts: the optical system for displaying the 3D image and the real-time calculation system using a GPU.

Fig. 1 An illustration of our interactive integral imaging system.

The optical system is based on a traditional integral imaging setup, which consists of an LCD panel and a lens array that bends the rays emitted from the EIA to form the 3D image. In order to adaptively produce the 3D image according to the observers' positions, the optical system also includes a depth camera, a PrimeSense 3D sensor [15], which captures the depth image of the observers. Our interactive integral imaging system is inspired by the tracking integral imaging system [13, 14], which uses an infrared camera to track the viewers' locations in order to integrate the EIA from different viewing directions. The difference is that our work acquires the depth values of the viewers' locations as an input for generating the EIA, and achieves much faster EIA generation, meeting the requirement of dynamically reconstructing the 3D image according to the viewers' locations.

The calculation system implements our MRCR method. First, the multiple ray cluster (introduced in section 2.2) is generated from the input parameters, which include the lens array parameters, the display panel parameters, and the view distance determined by the depth image of the observers. Second, the multiple ray clusters are efficiently rendered on the GPU from the graphic model. Finally, the rendering result is composited to form the displayed EIA. For each frame of the EIA, the view distance used by the calculation system is updated according to the observers' locations captured by the depth camera. In our interactive integral imaging system, the calculation system can adaptively render the EIA at real-time speed for persons moving among different view distances. In a traditional integral imaging system, the viewing zone is static once the parameters, such as the focal length of the lens array, the gap between the display device and the lens array, and the size of the EIA, are fixed [16]; in contrast, the viewing zone in our interactive integral imaging system changes dynamically and can be optimally enlarged.
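For clarity, the sketch below outlines the per-frame flow of the calculation system described above. It is a minimal illustration, not the authors' implementation; all function and object names (render_frame, build_ray_clusters, render_on_gpu, composite_eia, and so on) are hypothetical placeholders.

```python
# Hypothetical per-frame pipeline of the calculation system (illustrative only).
def render_frame(scene, lens_params, panel_params, depth_camera, display):
    # View distance D is re-estimated every frame from the depth image.
    D = depth_camera.estimate_view_distance()
    # Build the multiple ray cluster parameters (section 2.2) for this D.
    mrc = build_ray_clusters(lens_params, panel_params, D)
    # First rendering pass: render all ray clusters on the GPU at once.
    cluster_image = render_on_gpu(scene, mrc)
    # Second pass: pixel re-arrangement composites the displayed EIA.
    eia = composite_eia(cluster_image, mrc)
    display.show(eia)
```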

2.2 Multiple ray cluster calculation

In order to achieve 3D imaging with a maximized viewing zone at an arbitrary view distance, our interactive integral imaging system calculates the multiple ray cluster (MRC), which reconstructs the optimal light field rays (OLFRs) of the viewing zone control method [12]. In a conventional integral imaging system, the light field rays are arranged as multiple groups of parallel rays, and therefore the size of the elemental image equals the size of the elemental lens, as shown in Fig. 2(b). When a viewer stands near the display, he or she perceives crosstalk because the sight rays may reach pixels under neighboring elemental lenses. To overcome this problem, the viewing zone control method was proposed [12–14]. In this method, the elemental image is slightly bigger than the elemental lens and is not exactly under the corresponding elemental lens but laterally shifted by a small amount, as shown in Fig. 2(a). By constructing the OLFRs of the viewing zone control method, no crosstalk is perceived even if the viewer stands near the display, because all the perceived pixels are rendered for the viewing zone. For each frame, the OLFRs are generated according to the view distance determined by the current observers.

Fig. 2 Viewing zone of an integral imaging system with light field rays: (a) the proposed integral imaging system; (b) a conventional integral imaging system.

The previous viewing zone control method [12] obtains the OLFRs by computing each light field ray's direction one by one, which requires a large number of calculations. Our method does not need to compute the specific direction of each ray in the OLFRs; instead, we calculate the MRC, from which the geometry of the OLFRs is obtained automatically. Moreover, the MRC provides the rendering parameters for multiple viewing frusta, which simplifies the EIA rendering procedure from ray tracing to multiple-view image rendering. In rendering theory [17], ray tracing is much more time-consuming than the rasterization method used by all current graphics cards. Since view images are usually rendered by rasterization, rendering multiple view images is faster than ray tracing every pixel of the whole EIA.

The principle of the proposed MRC is illustrated in Fig. 3. The light rays that converge at a point on the viewing width line are grouped into one ray cluster. Three examples of ray clusters (C1, C2 and C3) are indicated in Fig. 3(a). As mentioned above, the rays are grouped into clusters to enable efficient EIA rendering, since the multiple rays in one cluster can be configured as one shear perspective view frustum (SPVF), as shown in Fig. 3(b).

Fig. 3 Illustration of the multiple ray cluster: (a) ray clusters in the integral imaging system; (b) multiple perspective view frustums for rendering.

In the MRC method, we first compute the viewing width W and the elemental image width E with the following equations:

\[ W = p \times \frac{D + g}{g}, \tag{1} \]
\[ E = W \times \frac{g}{D}, \tag{2} \]

where p stands for the pitch of a lens element in the lens array, g is the gap between the lens array and the display panel, and D is the current view distance.

The number of ray clusters n equals the number of pixels in one elemental image (EI), rounded to the nearest integer. Let n_x represent the ray cluster number in the horizontal direction; n_x is determined by Eq. (3):

\[ n_x \approx \frac{E}{p_d}, \quad n_x \in \mathbb{N}, \tag{3} \]

where p_d is the pixel pitch of the display panel. The value of n_x must be a non-zero integer. The ray cluster number in the vertical direction, n_y, is calculated similarly.

The rendering parameters for each SPVF in the horizontal direction include the viewpoint V_i and the view angle θ_i, which are given by the following equations:

\[ V_i = -\frac{W}{2} + \frac{W}{n_x - 1} \times i, \tag{4} \]
\[ \theta_i = \arctan\left(\frac{L/2 - p/2 - V_i}{D}\right) - \arctan\left(\frac{-L/2 + p/2 - V_i}{D}\right), \tag{5} \]

where L stands for the width of the whole lens array, and i is the index of the ray cluster in the horizontal direction, with i ∈ [0, n_x). With the above rendering parameters, each shear perspective view is generated at an image resolution of L/p.
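As a concrete illustration, the following sketch computes these rendering parameters directly from Eqs. (1)–(5). It is a minimal reference implementation under the stated definitions (all lengths in the same unit); the function name mrc_parameters is ours, not from the paper.

```python
import math

def mrc_parameters(p, g, D, L, p_d):
    """Compute the MRC rendering parameters of Eqs. (1)-(5).

    p: lens pitch, g: lens-array/panel gap, D: view distance,
    L: total lens array width, p_d: display pixel pitch.
    """
    W = p * (D + g) / g                 # viewing width, Eq. (1)
    E = W * g / D                       # elemental image width, Eq. (2)
    n_x = max(1, round(E / p_d))        # nearest non-zero integer, Eq. (3)

    views = []
    for i in range(n_x):
        # Viewpoints spread evenly over the viewing width, Eq. (4).
        V_i = -W / 2 + W / (n_x - 1) * i if n_x > 1 else 0.0
        # View angle of the i-th shear perspective view frustum, Eq. (5).
        theta_i = (math.atan((L / 2 - p / 2 - V_i) / D)
                   - math.atan((-L / 2 + p / 2 - V_i) / D))
        views.append((V_i, theta_i))
    return W, E, n_x, views
```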

Considering the rendering process for calculating the n shear perspective views, a straightforward algorithm is to perform n rendering passes, where a rendering pass means the whole procedure of rendering one image. Assuming the rendering time of a single view image is t, the time cost of the n-pass rendering is n × t. For efficient calculation, we propose a method to calculate the MRC in one perspective view frustum: all the ray clusters are translated to one joint viewpoint V, as shown in Fig. 4.

Fig. 4 MRC calculation in one perspective view frustum.

By rendering the above perspective view frustum, the MRC can be computed in only one rendering pass. Compared to the straightforward algorithm, the computational time cost of the MRC is significantly reduced. An analysis of the reduced time is given in section 2.3.

To acquire the perspective view image, the rendering parameters are defined as:

\[ V = (0, 0), \tag{6} \]
\[ \theta = 2\arctan\left(\frac{(L - p)\,n_x}{D}\right). \tag{7} \]

The resolution of the perspective view image is L × n_x / p. The EIA is computed by pixel rearrangement of the perspective view image (introduced in section 2.3). It is worth noting that the desired EIA should be scaled by a scaling factor s, defined by the following equation:

\[ s = \frac{E}{p_d \times n_x}. \tag{8} \]

The composition result is scaled in order to correct a quantization error: the MRC method treats the resolution of the elemental image as an integer, but in some situations the actual resolution of the elemental image is not an integral value, so the scaling process is essential for obtaining the correct EIA.
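A small numeric example, with parameter values assumed by us for illustration, shows why the scale factor of Eq. (8) is needed:

```python
def eia_scale_factor(E, p_d, n_x):
    """Scaling factor s of Eq. (8), correcting the rounding of E/p_d to n_x."""
    return E / (p_d * n_x)

# Assumed example values: E = 2.06 mm and p_d = 0.2 mm give a true elemental
# image resolution of E / p_d = 10.3 pixels, which rounds to n_x = 10.
# The composed EIA must then be stretched by s = 1.03 to match the lens array.
print(eia_scale_factor(2.06, 0.2, 10))  # -> 1.03
```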

2.3 Rapid calculation of EIA on GPU

In recent years, vertex, geometry, and pixel shaders on the GPU have been widely used to speed up rendering and improve its quality [18]. Our method makes extensive use of shader programming for efficient EIA rendering. The EIA computation consists of two rendering passes: the first pass computes the MRC, and the second pass composites the EIA from the rendered MRC. The computation flowchart is shown in Fig. 5.

Fig. 5 Flowchart of the EIA computation process on the GPU.

In the first rendering pass, the displayed 3D content is duplicated by the geometry shader on the GPU, as shown in Fig. 6.

Fig. 6 Description of the geometry duplication used in the MRC rendering.

The duplicated 3D content is transformed by the matrix T_{i',j'}, which scales and translates the content and is defined as:

\[ T_{i',j'} = \begin{bmatrix} 1/n_x & 0 & 0 & 0 \\ 0 & 1/n_y & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -\dfrac{W}{2} + \dfrac{W}{n_x - 1}\, i' & -\dfrac{W}{2} + \dfrac{W}{n_y - 1}\, j' & 0 & 1 \end{bmatrix}, \tag{9} \]

where i' and j' stand for the horizontal and vertical indices of the duplicated 3D content, each copy of which is rendered to obtain the corresponding ray cluster. Here, i' = n_x − i and j' = n_y − j.

Assuming the displayed 3D content is M, which can be expressed as:

\[ M = \{ v_1, v_2, \ldots, v_m \}, \quad v_k \in M, \tag{10} \]

where v_k is a 3D point in M, given by Eq. (11):

\[ v_k = \begin{bmatrix} x_k & y_k & z_k & 1 \end{bmatrix}. \tag{11} \]

Specifically, each 3D point v_{i',j',k} belonging to a cloned 3D content M_{i',j'} = {v_{i',j',1}, v_{i',j',2}, ..., v_{i',j',m}} is generated by the following equation:

\[ v_{i',j',k} = v_k \cdot T_{i',j'}. \tag{12} \]

By the above method, n new 3D contents are generated for the n ray clusters. The MRCs can then be calculated by rendering the n new 3D contents in one perspective view frustum (as shown in Fig. 6) within only one rendering pass. Although the amount of geometry to be rendered increases by a factor of n, the rendering time for generating the MRC is still lower than that of the straightforward n-pass method, since the transmission time between CPU and GPU is greatly reduced. Moreover, the duplication is implemented after the vertex shader; thus, we only need to compute the translated and rotated points of one 3D content. In all, the proposed GPU method relieves the heavy burden of the traditional multi-pass rendering method.
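The sketch below emulates on the CPU, with NumPy, what the geometry shader computes in Eqs. (9)–(12): each duplicate is produced by multiplying the homogeneous row vertices by T_{i',j'}. It illustrates the math only (assuming n_x, n_y > 1); it is not the shader code itself.

```python
import numpy as np

def duplicate_contents(verts, n_x, n_y, W):
    """CPU emulation of the geometry-shader duplication, Eqs. (9)-(12).

    verts: (m, 4) array of homogeneous row vectors [x, y, z, 1].
    Returns one scaled-and-translated copy of the content per ray cluster.
    """
    copies = []
    for ip in range(1, n_x + 1):        # ip plays the role of i' = n_x - i
        for jp in range(1, n_y + 1):    # jp plays the role of j' = n_y - j
            T = np.array([
                [1.0 / n_x, 0.0,       0.0, 0.0],
                [0.0,       1.0 / n_y, 0.0, 0.0],
                [0.0,       0.0,       1.0, 0.0],
                [-W / 2 + W / (n_x - 1) * ip,
                 -W / 2 + W / (n_y - 1) * jp, 0.0, 1.0],
            ])                          # T_{i',j'} of Eq. (9)
            copies.append(verts @ T)    # row-vector convention, Eq. (12)
    return copies
```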

The n duplicated 3D contents are rasterized and transmitted to the fragment shader. In order to acquire an accurate 3D image, we compute a super-sampled shading result for each pixel by using the 32x MSAA (multi-sampling anti-aliasing) technique [18] in the fragment shader. As illustrated in Fig. 7, the super-sampled result equals the weighted average of multiple rays through one pixel (2x MSAA is shown as the example in Fig. 7). With this super-sampling, the integral imaging system reconstructs a smooth 3D image without jagged edges. For comparison, the shading result without super-sampling is also given in Fig. 7. The 32x MSAA technique halves the rendering speed but achieves a much more accurate 3D image.
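Conceptually, resolving the super-samples amounts to averaging a grid of sub-pixel shading samples for each output pixel, as sketched below with equal weights. This is only a simplified stand-in for hardware MSAA, which resolves coverage-weighted samples inside the GPU.

```python
import numpy as np

def resolve_supersamples(img, s=2):
    """Average s x s sub-pixel samples per pixel (s = 2 as in the Fig. 7 example).

    img: (H*s, W*s, 3) super-sampled render; returns the (H, W, 3) image.
    Equal sample weights are assumed here for simplicity.
    """
    H, W = img.shape[0] // s, img.shape[1] // s
    return img.reshape(H, s, W, s, 3).mean(axis=(1, 3))
```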

Fig. 7 Illustration of the super-sampling algorithm and a comparison of the rendered ray cluster without and with 32x MSAA.

The initial rendering result of the multiple ray clusters is shown in Fig. 8(b); undesirable pixels can be observed due to the overlap of different ray clusters in the one perspective view frustum. In order to eliminate these artifacts, a pixel translation step, shown in Fig. 8(a), is proposed and applied after the super-sampling process. Specifically, the view result of each ray cluster is translated to its correct position.

Fig. 8 Description of (a) the pixel translation calculation, (b) the rendered result of the MRC, and (c) the rectified result of the MRC.

As shown in Fig. 8(a), R_i stands for the center coordinate of the rendering result of the i-th ray cluster, represented by the following equation:

\[ R_i = V_i. \tag{13} \]

S_i represents the center coordinate of the correct result of the ray cluster, and it can be calculated by:

\[ S_i = -\frac{(L - p)\,n_x}{2} + (L - p)(n_x - i) + \frac{L - p}{2}. \tag{14} \]

Thus, the offset value O_i of each view result is calculated by the following equation:

\[ O_i = S_i - R_i = -V_i + \frac{(L - p)(n_x + 1)}{2} - (L - p)\, i. \tag{15} \]

The pixels in view result i are then translated by O_i. By implementing this translation in the fragment shader, the rectified rendering result shown in Fig. 8(c) is obtained. In our method, the multiple ray clusters are stored as one image, so the time-consuming MRT (multiple render target) extensions [18] used for storing multiple images in previous work [19, 20] are not needed.
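The offset computation of Eqs. (13)–(15) is straightforward; the following sketch (our own illustration, not the authors' shader code) makes the algebra explicit:

```python
def cluster_offset(V_i, L, p, n_x, i):
    """Offset O_i moving the i-th cluster from R_i = V_i to S_i, Eqs. (13)-(15)."""
    S_i = -(L - p) * n_x / 2 + (L - p) * (n_x - i) + (L - p) / 2   # Eq. (14)
    # Equivalent closed form: -V_i + (L - p) * (n_x + 1) / 2 - (L - p) * i
    return S_i - V_i                                               # Eq. (15)
```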

In the second rendering pass, the EIA is calculated by interleaving the rendered ray clusters acquired in the first pass. Figure 9 depicts the pixel re-arrangement method that computes this interleaving. To reduce the computational time of this stage, the re-arrangement is also implemented on the fragment shader of the GPU in a parallel way.
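As an illustration of the interleaving idea, the NumPy sketch below re-arranges a cluster image laid out as a grid of n_x × n_y view tiles into an EIA, placing pixel (i, j) of each tile under the corresponding elemental lens. The tile layout is our assumption for the example; the paper performs the equivalent mapping per fragment on the GPU, and the exact mapping may additionally include flips depending on the display mode.

```python
import numpy as np

def interleave_eia(mrc, n_x, n_y):
    """Re-arrange ray cluster tiles into an EIA (cf. Fig. 9).

    mrc: (n_y*Ny, n_x*Nx, 3) image storing n_x*n_y cluster tiles,
    each Nx x Ny pixels (one pixel per elemental lens).
    """
    Ny, Nx = mrc.shape[0] // n_y, mrc.shape[1] // n_x
    tiles = mrc.reshape(n_y, Ny, n_x, Nx, 3)        # split into view tiles
    # Pixel (lx, ly) of tile (i, j) maps to EIA position
    # (row ly*n_y + j, col lx*n_x + i) under elemental lens (lx, ly).
    return tiles.transpose(1, 0, 3, 2, 4).reshape(Ny * n_y, Nx * n_x, 3)
```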

Fig. 9 Illustration of the pixel re-arrangement method.

After the two rendering passes, the EIA is generated. Figure 10 shows an example including mesh data with textures, the set of elemental images generated by the proposed method, and the reconstructed 3D image.

Fig. 10 Example of (a) 3D data (mesh with 4798 vertices and texture), (b) the elemental image set generated by the proposed method (Media 1), and (c) the 3D image optically reconstructed using the interactive integral imaging display system.

3. Experimental results

We have implemented the proposed MRCR method with the computing parameters and experiment environment listed in Table 1. The configuration parameters of our integral imaging system are given in Table 2.

Table 1. Experiment Environment and Computing Parameters

Table 2. Integral imaging system characteristics

The experimental setup of our interactive integral imaging system is shown in Fig. 11, in which the user's motion is captured by the depth camera to manipulate the displayed 3D image. In each frame, the EIA is generated by the MRCR method and displayed on the interactive integral imaging system; the 3D image is successfully reconstructed, and motion parallax can be observed, as shown in Fig. 12.

Fig. 11 Interactive integral imaging system with (a) the optical experimental setup and (b) the user-controlled 3D image (Media 2). (The tiled lens array consists of 2x2 small lens arrays.)

Fig. 12 Displayed EIA accompanied by the reconstructed view images and the 3D image (Media 3).

To verify the adaptability of our MRC method, multiple types of 3D data inputs have been tested, including triangle meshes with and without texture data, volumetric medical data (whose slices can be represented as triangle meshes with alpha textures), and scanned point data. Figure 13 shows the 3D input data, the generated elemental images, and the displayed 3D images.

Fig. 13 3D objects and displayed 3D images in the experiments: (a) Dragon: 50,000 vertices, (b) Bunny: 2503 vertices, (c) MRI: 128x128x40, (d) Buddha: 49,990 points, (e) CT: 128x128x256.

The computational speed of the MRCR method was also measured with these 3D data inputs. The results show that our method reaches real-time EIA generation speed for large-scale 3D data. For comparison, we implemented the previous methods of MVR, PGR, and image space parallel processing. Figure 14 shows the processing speed comparison among these methods; the proposed method is faster than the previous ones.

Fig. 14 Measured processing speed of the proposed CGII method and previous methods.

Figure 14 shows that the data size of the 3D content influences the speed of the EIA calculation. To analyze this relationship, we generated a set of EIAs from the same 3D content with different numbers of vertices. The measured calculation speeds in Fig. 15 indicate that the computational speed decreases as the number of vertices increases; however, the speed drop slows down as the vertex number grows.

Fig. 15 The measured speed performance with different 3D data sizes.

It is worth noting that the proposed MRCR method on the GPU is faster than the MRCR method implemented on the CPU. The reasons are that (a) the 3D content duplication is accelerated on the GPU, and (b) to render the MRC on the CPU, the method must compute all the translated and rotated vertices belonging to the duplicated 3D contents, while the GPU method only computes the vertices of one 3D content.

To evaluate the enlargement of the viewing zone by our interactive integral imaging system, cases of different view distances were analyzed; the viewing zones are given in Fig. 16. As shown in Figs. 16(a), 16(b) and 16(c), viewers who stand close to the lens array can observe the correct 3D image within the viewing zone. However, if the light rays are organized in the manner used in a conventional integral imaging system, as shown in Fig. 16(f), the viewers will not observe the correct 3D image because they stand outside the viewing zone. Figure 16 illustrates that our interactive integral imaging system achieves an optimally enlarged viewing zone through the OLFRs.

Fig. 16 Viewing zone results in our interactive integral imaging system at a view distance of (a) 0.6 m, (b) 0.8 m, (c) 1.0 m, and (d) 2.0 m, and (f) the viewing zone in a conventional integral imaging system.

Experimental parameters for our interactive integral imaging system are given in Table 3, including the image location, the observer's view distance, and the viewing angle. Here, we give the depth range for the image location, where 0 mm represents the depth of the lens array plane. The calculated viewing angles are illustrated in Fig. 16. For comparison, we also calculated the viewing angle of a traditional integral imaging system at a view distance of 2 meters, which is 8.8 degrees. Moreover, when the view distance is smaller than 1.025 meters, the viewing angle of the traditional system becomes zero, meaning that viewers cannot observe the correct 3D image when they stand closer than 1.025 meters.

Table 3. Experimental parameters for interactive integral imaging system

From Fig. 17 and Table 3, we can see that an enlarged viewing zone and viewing angle are demonstrated by our interactive integral imaging system.

Fig. 17 The reconstructed 3D images from (a) the traditional EIA rendering method and (b) our MRC method.

4. Conclusion

A new type of CGII method, called the MRCR method, has been presented and demonstrated. In this method, MRCs are calculated and reconstructed in only one rendering pass on the GPU. This drastically reduces the processing time, making a high-resolution, real-time integral imaging rendering system possible. Moreover, since the light field rays are optimized in this method, a maximal viewing zone is obtained by the rendering process. As an experiment, an interactive integral imaging system based on the MRCR method with an EIA resolution of 1000 × 1000 pixels was implemented; 3D images are rendered in real time according to the user's position and motion captured by a depth camera. In future work, more real-time applications of the integral imaging system will be presented, and we plan to further enlarge the viewing zone of our interactive integral imaging system.

Acknowledgments

The 3D content data used in our experiments were taken from the Stanford University Computer Graphics Laboratory.

References and links

1. G. Lippmann, “La photographie integrale,” C.R. Acad. Sci. 146, 446–451 (1908).

2. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J. M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-16-12-8800. [CrossRef]   [PubMed]  

3. Y. Igarashi, H. Murata, and M. Ueda, “3D display system using a computer generated integral photography,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]  

4. M. Halle, “Multiple viewpoint rendering,” SIGGRAPH’98, Proceedings of 25th annual conference on Computer graphics and interactive techniques, 243–254 (1998).

5. S.-W. Min, J. Kim, and B. Lee, “New characteristics equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005). [CrossRef]  

6. R. Yang, X. Huang, and S. Chen, “Efficient rendering of integral images,” SIGGRAPH’05, Proceedings of 32nd Annual Conference on Computer Graphics and Interactive Techniques, 44 (2005).

7. S.-W. Min, K. S. Park, B. Lee, Y. Cho, and M. Hahn, “Enhanced image mapping algorithm for computer-generated integral imaging system,” Jpn. J. Appl. Phys. 45(28), L744–L747 (2006). [CrossRef]  

8. B.-N.-R. Lee, Y. Cho, K. S. Park, S.-W. Min, J.-S. Lim, M. C. Whang, and K. R. Park, “Design and implementation of a fast integral image rendering method,” International Conference on Electronic Commerce, 135–140 (2006). [CrossRef]  

9. K. S. Park, S.-W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E 90-D, 231–241 (2007).

10. F. P. Brooks, “What’s real about virtual reality?” IEEE Comput. Graph. Appl. 19(6), 16–27 (1999). [CrossRef]  

11. K.-C. Kwon, C. Park, M.-U. Erdenebat, J.-S. Jeong, J.-H. Choi, N. Kim, J.-H. Park, Y.-T. Lim, and K.-H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-20-2-732. [CrossRef]   [PubMed]  

12. R. Fukushima, K. Taira, T. Saishu, and Y. Hirayama, “Novel viewing zone control method for computer generated integral 3-D imaging,” Proceedings of SPIE – IS&T Electronic Imaging, SPIE Vol. 5291, 81–92, (2004). [CrossRef]  

13. G. Park, J.-H. Jung, K. Hong, Y. Kim, Y.-H. Kim, S.-W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-19-5-4129. [CrossRef]   [PubMed]  

14. G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” Digital Holography and Three-Dimensional Imaging, OSA Technical Digest, DWB27 (1990).

15. PrimeSense 3D sensor: http://www.primesense.com/solutions/sensor/.

16. H. Choi, Y. Kim, J.-H. Park, S. Jung, and B. Lee, “Improved analysis on the viewing angle of integral imaging,” Appl. Opt. 44(12), 2311–2317 (2005). [CrossRef]   [PubMed]  

17. J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice, 2nd ed. (Addison-Wesley, 1990).

18. R. Fernando, GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics (Addison-Wesley, 2004).

19. F. de Sorbier, V. Nozick, and V. Biri, “GPU rendering for autostereoscopic displays,” 4th International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT’ 08), Jun. 2008, (2008).

20. F. de Sorbier, V. Nozick, and H. Saito, “Multi-view rendering using GPU for 3-D displays,” Computer Games, Multimedia and Allied Technology (CGAT’10), April. 2010, (2010). [CrossRef]  

Supplementary Material (3)

Media 1: MOV (2760 KB)     
Media 2: MOV (2234 KB)     
Media 3: MOV (703 KB)     
