Optica Publishing Group

High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction

Open Access

Abstract

A high-efficiency computer-generated integral imaging (CGII) method based on the backward ray-tracing technique is presented. In traditional CGII methods, the total rendering time is long because a large number of cameras must be established in the virtual world. With the backward ray-tracing technique, the ray origin and ray direction are calculated for every pixel in the elemental image array, so the total rendering time is noticeably reduced. The method creates high-quality integral images without the pseudoscopic problem. Real-time and non-real-time CGII rendering and optical reconstruction are demonstrated, and the effectiveness is verified with different types of 3D object models. Real-time optical reconstruction with 90 × 90 viewpoints and a frame rate above 40 fps is realized for the CGII 3D display, free of the pseudoscopic problem.

© 2017 Optical Society of America


1. Introduction

Three-dimensional (3D) displays are important tools for achieving highly immersive 3D images. Among the various 3D display methods, the integral imaging technique can display full-color and quasi-continuous 3D images based on two-dimensional (2D) images [1,2], and conventional generation, processing, and transmission techniques can easily be reused. Since integral imaging was first proposed [3] in 1908, many methods of generating the elemental image array (EIA) have been developed. In recent years, computer-generated integral imaging (CGII) techniques [4,5] have attracted considerable attention. However, existing CGII methods have several problems. Depending on the lens array parameters and the object size, it usually takes a long time to render the EIA with methods such as each-camera-viewpoint independent rendering (ECVIR) [6], multiple viewpoint rendering (MVR) [7], viewpoint vector rendering (VVR) [8], and parallel group rendering (PGR) [9]. If an elemental image contains M′ × N′ pixels, ECVIR requires M′ × N′ times the rendering time of a single viewpoint, where M′ and N′ are the pixel counts in the row and column of the elemental image. MVR, VVR, and PGR can be treated as special cases of ECVIR that accelerate the generation of a single viewpoint image. As the spatial information in the elemental image and the object size increase, the rendering time becomes too long to generate animation for an integral imaging display system. The pseudoscopic problem should also be addressed for the integral imaging display: although most previous CGII methods work well in the virtual mode, the pseudoscopic problem in the real mode is not taken into consideration. For example, high-speed image-space parallel processing [10] was used for a real-time CGII 3D display system with a hexagonal lens array [11]. Incompatibility with different kinds of 3D object models is another problem for CGII.
Many previous algorithms are suitable for only one kind of 3D data file. For example, PGR [12] can only process point data, and a real-time pickup method with GPU and octree [13] worked well with medical data but cannot handle general 3D mesh data files, which are widely used in other fields (e.g., 3DS files for 3ds Max).

Here, a novel CGII method based on the backward ray-tracing (BRT) technique is presented. Ray tracing [14] is a principal rendering approach for generating virtual images, and real-time ray tracing [15–18] has developed rapidly in recent years. A single-stage integral photography capturing system [19] used a ray tracer to generate the integral image, but the pseudoscopic problem in the real mode had to be eliminated by rotating each elemental image by 180° around its optical axis. A ray-tracking technique was also presented [20]; it is, in fact, object-order rendering similar to rasterization in computer graphics. Here, the backward ray-tracing technique of computer graphics is applied to generate the integral image, and it is suitable for both the real mode and the virtual mode without the pseudoscopic problem. The standard ray-tracing method is used to implement BRT CGII. Optical experiments for real-time reconstruction are presented, and the processing times for different 3D objects are analyzed. In addition, real-time rendering of large mesh data is realized with the SBVH acceleration structure.

2. Principles of BRT CGII

2.1 The relationship between EIA pixels in integral image and the camera position in the virtual world

A light ray can be represented by an origin and a direction. As shown in Fig. 1, $S_{in}$ and $E_{in}$ are the start point and the end point of the lens's incident ray $R_{in}$, and $D_{in} = E_{in} - S_{in}$ is the direction of $R_{in}$. The optical lens in the sheet is arbitrary and can be treated as a black box, and $P_{lens}$ denotes the optical property of the lens or lenses. Given that $R_{in}$ and $P_{lens}$ are known, the exit ray $R_{ex}$ can be calculated. $R_{ex}$ is a function of $R_{in}$ and $P_{lens}$, as depicted in Eq. (1):

$$R_{ex} = f_{lens}(R_{in}, P_{lens}). \tag{1}$$
To address the pseudoscopic problem, the arrangement of the camera array in matrix format [6] is used. The camera intersected by $R_{ex}$ can be found, and the obtained camera position $P_{cam}$ can be represented by Eq. (2):
$$P_{cam} = f_{cam}(R_{ex}, P_{camArray}), \tag{2}$$
where $P_{camArray}$ is the property of the camera array.
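Equations (1) and (2) can be sketched in code. The paper treats the lens as a black box; the sketch below specializes $f_{lens}$ to the pinhole model later adopted in Section 2.3, and all function and variable names (`f_lens`, `f_cam`, `z_cam`, etc.) are illustrative rather than taken from the paper:

```python
import numpy as np

def f_lens(origin, direction, pinhole):
    # Eq. (1): for an ideal pinhole "lens", the exit ray leaves the
    # pinhole position with the incident direction unchanged.
    return pinhole, direction / np.linalg.norm(direction)

def f_cam(origin, direction, cam_positions, z_cam):
    # Eq. (2): follow the exit ray to the camera plane z = z_cam and
    # return the index of the nearest camera in the matrix arrangement.
    t = (z_cam - origin[2]) / direction[2]   # ray/plane intersection
    hit = origin + t * direction             # crossing point on the plane
    return int(np.argmin(np.linalg.norm(cam_positions - hit, axis=1)))
```

With a real lens model, only `f_lens` changes; the camera lookup in `f_cam` is unaffected.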

Fig. 1. Rays start from $S_{in}$ and are received by the camera at the end; the transmission process is determined by $P_{lens}$.

From the above analysis, we can conclude that if $P_{camArray}$ and $P_{lens}$ are known, the camera position $P_{cam}$ is determined by the pixel position $S_{in}$, which is the start point of the incident ray $R_{in}$.

2.2 The principle of backward ray-tracing technique

Backward ray tracing (BRT) works by tracing a path from a virtual camera through each pixel in a virtual screen, as shown in Fig. 2, and calculating the color of the object visible through that pixel. A basic backward ray tracer consists of three parts [21]. The first part is ray generation, which computes the origin and direction of each pixel's view ray from the camera geometry. The second part is ray intersection, which finds the closest object intersecting the view ray. The third part is shading, which computes the pixel color from the result of the ray intersection.
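The three parts can be illustrated with a minimal Python sketch for a single sphere. A conventional perspective camera is assumed here (the integral imaging camera model is introduced below), and all names and parameters are illustrative:

```python
import numpy as np

def generate_ray(px, py, W, H, eye, fov=60.0):
    # Part 1, ray generation: pinhole camera at `eye` looking down -z.
    aspect = W / H
    s = np.tan(np.radians(fov) / 2)
    x = (2 * (px + 0.5) / W - 1) * aspect * s
    y = (1 - 2 * (py + 0.5) / H) * s
    d = np.array([x, y, -1.0])
    return eye, d / np.linalg.norm(d)

def intersect_sphere(o, d, center, radius):
    # Part 2, ray intersection: smallest positive hit distance t, or None.
    oc = o - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)            # closest hit first
    if t <= 1e-6:
        t = -b + np.sqrt(disc)
    return t if t > 1e-6 else None

def shade(o, d, t, center, light_dir, albedo):
    # Part 3, shading: a simple Lambertian term at the hit point.
    n = (o + t * d) - center
    n /= np.linalg.norm(n)
    return albedo * max(np.dot(n, -light_dir), 0.0)
```

Looping `generate_ray` over all pixels and feeding the results through the other two parts produces an ordinary 2D rendering; the integral image only changes the first part.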

Fig. 2. The principle of the backward ray-tracing technique [22].

The BRT technique shows several advantages over rasterization. One is that special camera models are easy to support: the user can associate points on the screen with arbitrary ray directions. For an integral image there are many virtual cameras, and every pixel in the virtual screen has its own virtual camera, so only the ray generation part needs to be modified to produce the required view rays. A backward view ray in BRT CGII starts from one of the virtual cameras, with its direction determined by that start point and the pixel position. The total count of view rays is the same as for a single 2D picture, so BRT CGII can effectively reduce the rendering time.

Because of the reversibility of the optical path, the direction of the view ray $R_d$ is that of $R_{ex}$, and the start point of $R_d$ is the position of one of the virtual cameras in Fig. 1. The view ray intersects the 3D object at a closest hit and a farthest hit. The color of pixel $S_{in}$ is taken from the closest hit rather than the farthest hit, because the latter causes the pseudoscopic problem in the real mode. Therefore, the pseudoscopic problem is easily eliminated with the BRT technique.

2.3 Parameter settings for BRT CGII

For simplicity, the optical lenses of Section 2.1 are treated as pinholes. It is assumed that each elemental image on the LCD panel consists of N × N pixels, as shown in Figs. 3(a) and 3(b), and N × N virtual cameras are arranged in matrix format. For a point (x, y, 0) on the LCD panel, its index (k, m) can be expressed by Eq. (5). Each pixel has its own index (k, m) in the whole LCD panel and an index (i, j) in its elemental image. The index (i, j) can be deduced from Eq. (3):

Fig. 3. Elemental images and virtual cameras.

$$\begin{cases} i = \begin{cases} (N-1)/2 - (k \bmod N), & N \text{ is odd} \\ N/2 - (k \bmod N), & N \text{ is even} \end{cases} \\ j = \begin{cases} (m \bmod N) - (N-1)/2, & N \text{ is odd} \\ (m \bmod N) - N/2, & N \text{ is even} \end{cases} \end{cases} \tag{3}$$

A virtual camera can be represented as Cam(i, j). The position of Cam(i, j) can be calculated with Eq. (4); $P_{ij}$ is thus a function of the index (k, m):

$$P_{ij} = P_{00} + (i D_H,\ j D_V,\ 0). \tag{4}$$
$$(k, m) = \left( \operatorname{floor}\!\left(\frac{x + pp \cdot W/2}{pp}\right),\ \operatorname{floor}\!\left(\frac{pp \cdot H/2 - y}{pp}\right) \right), \tag{5}$$
where pp represents the width of an LCD pixel, W and H are the horizontal and vertical resolutions, respectively, and $D_H$ and $D_V$ in Eq. (4) are the horizontal and vertical camera spacings. Therefore, the camera position $P_{ij}$ can be obtained by combining Eqs. (3), (4), and (5) once the pixel position (x, y, 0) is given.
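The index mappings of Eqs. (3) and (5) can be sketched directly, with `floor` and `mod` as in the equations (function names are illustrative):

```python
from math import floor

def pixel_index(x, y, pp, W, H):
    # Eq. (5): panel coordinates (x, y, 0) -> whole-panel pixel index (k, m).
    return floor((x + pp * W / 2) / pp), floor((pp * H / 2 - y) / pp)

def camera_index(k, m, N):
    # Eq. (3): panel pixel index -> camera index (i, j) for an elemental
    # image of N x N pixels; odd and even N use different center offsets.
    half = (N - 1) // 2 if N % 2 else N // 2
    return half - (k % N), (m % N) - half
```

For example, the panel center (0, 0) of an 8 × 8-pixel panel with pp = 1 maps to (k, m) = (4, 4), and with N = 3 that pixel belongs to camera Cam(0, 0).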

From the above discussion, the origin $R_o$ and the direction $R_d$ of pixel (k, m)'s view ray can be described by Eq. (6):

$$\begin{cases} R_o = P_{ij} \\ R_d = P_{ij} - (x, y, 0) \end{cases}. \tag{6}$$

As depicted in Fig. 3(d), the distance $D_{CL}$ between the camera array and the lens array can be represented as $D_{CL} = g / (L_{EIA}/L_{LEN} - 1)$, where $L_{LEN}$ is the distance between adjacent lenses, g is the gap between the lens array and the LCD panel, and $L_{EIA}$ is the side length of an elemental image. The distance $D_C$ between two adjacent cameras follows from similar triangles as $D_C = pp \cdot D_{CL} / g$.
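A sketch of the camera geometry and Eqs. (4) and (6). The camera spacing `D_C` and the center camera position `P00` (which would carry the z-offset of the camera plane) are passed in as assumed parameters, and equal horizontal/vertical spacing $D_H = D_V = D_C$ is assumed:

```python
import numpy as np

def camera_plane_distance(g, L_EIA, L_LEN):
    # D_CL = g / (L_EIA / L_LEN - 1): distance from the lens array to the
    # plane where rays from corresponding pixels of all elemental images
    # converge, i.e. where the virtual cameras sit.
    return g / (L_EIA / L_LEN - 1)

def view_ray(x, y, i, j, D_C, P00):
    # Eq. (4): camera position P_ij = P00 + (i*D_C, j*D_C, 0).
    P_ij = P00 + np.array([i * D_C, j * D_C, 0.0])
    # Eq. (6): origin R_o = P_ij, direction R_d = P_ij - (x, y, 0).
    R_d = P_ij - np.array([x, y, 0.0])
    return P_ij, R_d / np.linalg.norm(R_d)
```

For instance, with g = 3, $L_{EIA}$ = 2, and $L_{LEN}$ = 1 (arbitrary illustrative units), the camera plane sits 3 units beyond the lens array.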

The general process of the BRT CGII technique is shown in Fig. 4(a). We adopt a parallel computing method to generate the view rays: every ray has its own thread, and its ray-object intersection and shading happen in the same thread. Sampling techniques move the result away from the sharp-edged, clinically clean look of traditional ray-traced images; a higher sampling rate enhances image quality, so a pixel may contain many randomly distributed view rays from its virtual camera [23]. Finally, the integral image is rendered directly, without generating each viewpoint picture as in ECVIR. Therefore, the higher efficiency of BRT CGII over ECVIR is obvious.
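The per-pixel random-distribution sampling mentioned above can be sketched with stratified ("jittered") sub-pixel samples — a common choice in distribution ray tracing [23], though the paper does not specify its exact sampling pattern:

```python
import random

def jittered_samples(n):
    # n*n stratified sub-pixel positions inside the unit pixel: one
    # uniformly random point in each of the n x n strata, so samples are
    # random but evenly spread, which reduces aliasing without banding.
    return [((a + random.random()) / n, (b + random.random()) / n)
            for a in range(n) for b in range(n)]
```

Averaging the colors of the view rays shot through these sub-pixel offsets gives the pixel color.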

Fig. 4. The process comparison between (a) BRT CGII and (b) ECVIR.

3. Experimental results

The proposed BRT CGII is implemented with both non-real-time and real-time backward ray-tracing engines. Non-real-time rendering trades processing time for quality to obtain higher-quality elemental image arrays, which is suitable for rendering feature films and videos. For games and simulations, a frame rate of 30 to 120 fps should be achieved, so real-time rendering is realized with the real-time backward ray-tracing engine.

To illustrate the generality of BRT CGII in dealing with 3D graphics data files, different 3D mesh models are rendered in the experiment. ECVIR, implemented with a raster render engine, is used for comparison. The main procedure of ECVIR is shown in Fig. 4(b).

Table 1 shows the specifications of the 3D data used in the experiment. The PC hardware consists of an Intel Core i7-4790 CPU @ 3.6 GHz with 8 GB RAM and an NVIDIA GeForce GTX 770 (2 GB) graphics card. The integral imaging display configuration can display a 3D light field with a viewing angle of 40° × 40°. The performance of the proposed method is evaluated by comparing the generation time of integral pictures. Three kinds of 3D object models (Monkey, Cup, and Skull) with different file formats are used. Figure 5 shows the non-real-time 3D mesh models and their integral images. The monkey model shown on the integral imaging display [24] from different perspectives is given in Fig. 6. The real-time BRT CGII rendering result is shown in Fig. 7.


Table 1. Specifications of 3D data and the Display Configuration in the Experiment

Fig. 5. The images generated with the center camera and the non-real-time integral images created with the backward ray-tracing approach: (a) Monkey, (b) Cup, (c) Skull (see Visualization 1).

Fig. 6. Different perspectives of the reconstructed 3D scene on the 3D light field display (see Visualization 2).

Fig. 7. Real-time rendering for volume data and 3D mesh data: (1) a cell volume dataset rendered at 75 fps; (2) a 3D mesh object with 200 triangles, rendered without an acceleration structure at 40 fps; (3) a building, a 3D mesh object with 1.5 million triangles, rendered with the SBVH acceleration structure at 30 fps (see Visualization 3).

Table 2 shows the processing time comparison between the method proposed by Yanaka [6] and our proposed method. In the optical reconstruction, the lens array consists of 43 × 36 lenses, and each elemental image covers 90 × 90 pixels. The proposed method is much faster than the ECVIR method for all the tested kinds of 3D data files.


Table 2. Processing time comparison between proposed BRT method and the traditional CGII method

In serial computing (CPU) mode, the rendering time depends on the count of view rays. If parallel computing with a GPU providing enough computation units is used, the total rendering time approaches the rendering time of a single pixel, and real-time rendering can be realized. The factors that affect the speed of BRT CGII are similar to those that affect the speed of rendering a common 2D image with ray tracing. The main factor influencing rendering efficiency is the efficiency of the ray-object intersection, and the common approach to improving it is to construct an acceleration structure, as presented in Ref. [25]. For instance, SBVH is the best acceleration structure for static geometry groups. In our experiment in Fig. 7(c), the numbers of view rays and polygons are 8 million and 1.5 million, respectively, and real-time rendering is realized with the SBVH acceleration structure.
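SBVH builds a full spatial/object hierarchy; the underlying idea — rejecting many primitives at once with one cheap bounding-volume test — can be illustrated with a single axis-aligned bounding box slab test (a sketch of the principle, not the SBVH implementation of Ref. [25]):

```python
import numpy as np

def ray_hits_aabb(o, inv_d, bmin, bmax):
    # Slab test: the ray o + t*d (inv_d = 1/d, componentwise) enters the
    # box iff the parameter intervals it spends between each pair of
    # axis-aligned slabs overlap at some t >= 0.
    t1 = (bmin - o) * inv_d
    t2 = (bmax - o) * inv_d
    tmin = np.minimum(t1, t2).max()   # latest slab entry
    tmax = np.maximum(t1, t2).min()   # earliest slab exit
    return bool(tmax >= max(tmin, 0.0))
```

If this test fails for a node of the hierarchy, none of the triangles inside it need to be intersected, which is why the scene in Fig. 7(c) can be rendered in real time despite its 1.5 million polygons.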

Figure 8 illustrates the processing time for different sizes of the EIA. The processing time of the ECVIR method for three mesh models of different sizes is shown in Fig. 8(a): as the number of EIA pixels grows, the rendering time increases noticeably. Figure 8(b) shows the processing time for the three models with our proposed BRT CGII method, whose rendering time is not observably affected by the number of elemental image pixels. Comparing Figs. 8(a) and 8(b), BRT CGII is clearly more efficient than ECVIR. Therefore, our proposed method makes it practical to generate elemental images for 3D mesh models even when the number of EIA pixels is large.

Fig. 8. The processing time for different rendering methods: (a) three 3D mesh models rendered with the ECVIR method; (b) the same models rendered with the BRT CGII method.

4. Conclusion

In summary, a high-efficiency computer-generated integral imaging (CGII) method based on the backward ray-tracing technique is presented. Integral images for different kinds of 3D object models can be efficiently generated in both non-real time and real time, and optical reconstruction for different kinds of 3D models is demonstrated. The number of viewpoints in the experimental optical system reaches 90 × 90, and the rendering efficiency of our method is not affected by the number of viewpoints. When the numbers of view rays and polygons in the scene are 8 million and 1.5 million, respectively, BRT CGII with the SBVH acceleration structure in GPU mode realizes real-time rendering for an integral imaging display device with a 3840 × 2160 LCD panel.

Funding

This work is partly supported by the "863" Program (2015AA015902), the National Natural Science Foundation of China (NSFC) (61575025), and the Fund of the State Key Laboratory of Information Photonics and Optical Communications.

References and links

1. J. Y. Son, V. V. Saveljev, B. Javidi, D. S. Kim, and M. C. Park, “Pixel patterns for voxels in a contact-type three-dimensional imaging system for full-parallax image display,” Appl. Opt. 45(18), 4325–4333 (2006). [CrossRef]   [PubMed]  

2. J. Y. Son, V. V. Saveljev, K. D. Kwack, S. K. Kim, and M. C. Park, “Characteristics of pixel arrangements in various rhombuses for full-parallax three-dimensional image generation,” Appl. Opt. 45(12), 2689–2696 (2006). [CrossRef]   [PubMed]  

3. H. E. Ives, “Optical properties of a Lippman lenticulated sheet,” J. Opt. Soc. Am. 21(3), 171 (1931). [CrossRef]  

4. B. N. R. Lee, Y. Cho, K. S. Park, S. W. Min, J. S. Lim, C. W. Min, and R. P. Kang, “Design and implementation of a fast integral image rendering method,” in Entertainment Computing - ICEC 2006 (2006), pp. 135–140.

5. H. Liao, K. Nomura, and T. Dohi, “Autostereoscopic integral photography imaging using pixel distribution of computer graphics generated image,” in ACM SIGGRAPH 2005 Posters (2005), pp. 73.

6. K. Yanaka, “Integral photography using hexagonal fly’s eye lens and fractional view,” Proc. SPIE 6803, 68031K (2008). [CrossRef]  

7. M. Halle, “Multiple viewpoint rendering,” in Conference on Computer Graphics and Interactive Techniques (2010), pp. 243–254.

8. K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D(1), 233–241 (2007). [CrossRef]  

9. S. W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. Lett. 44(2), L71–L74 (2005). [CrossRef]  

10. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012). [CrossRef]   [PubMed]  

11. D. H. Kim, M. U. Erdenebat, K. C. Kwon, J. S. Jeong, J. W. Lee, K. A. Kim, N. Kim, and K. H. Yoo, “Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array,” Appl. Opt. 52(34), 8411–8418 (2013). [CrossRef]   [PubMed]  

12. Y. Igarashi, H. Murata, and M. Ueda, “3-D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]  

13. Y. H. Jang, P. Chan, J. S. Jung, J. H. Park, N. Kim, J. S. Ha, and K. H. Yoo, “Integral imaging pickup method of bio-medical data using GPU and octree,” Electrophoresis 10, 924–929 (2010).

14. K. Akeley, D. Kirk, L. Seiler, P. Slusallek, and B. Grantham, “When will ray-tracing replace rasterization?” in ACM SIGGRAPH 2002 Conference Abstracts and Applications (2002), pp. 206–207.

15. J. M. Singh and P. J. Narayanan, “Real-time ray tracing of implicit surfaces on the GPU,” IEEE Trans. Vis. Comput. Graph. 16(2), 261–272 (2010). [CrossRef]   [PubMed]  

16. Z. Li, T. Wang, and Y. Deng, “Fully parallel kd-tree construction for real-time ray tracing,” in Meeting of the ACM SIGGRAPH Symposium on Interactive 3d Graphics and Games (2014), pp. 159. [CrossRef]  

17. B. C. Budge, J. C. Anderson, C. Garth, and K. I. Joy, “A straightforward CUDA implementation for interactive ray-tracing,” in IEEE Symposium on Interactive Ray Tracing (IEEE, 2008), pp. 178. [CrossRef]  

18. S. G. Parker, A. Robison, M. Stich, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, and K. Morley, “OptiX: a general purpose ray tracing engine,” ACM Trans. Graph. 29(4), 157–166 (2010). [CrossRef]  

19. S. S. Athineos and P. G. Papageorgas, “Photorealistic integral photography using a ray-traced model of capturing optics,” J. Electron. Imaging 15(4), 043007 (2006). [CrossRef]  

20. G. Milnthorpe, M. Mccormick, and N. Davies, “Computer modeling of lens arrays for integral image rendering,” in Proceedings of the Eurographics UK Conference,2002 (2002), pp. 136–141. [CrossRef]  

21. M. Ashikhmin, Fundamentals of Computer Graphics (AK Peters, 2005), pp. 70–79.

22. Wikipedia, Ray tracing (graphics), https://en.wikipedia.org/wiki/Ray_tracing_(graphics)

23. K. Suffern, Ray Tracing from the Ground Up (A. K. Peters, Ltd., 2015), pp. 93–119.

24. X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. 34(24), 3803–3805 (2009). [CrossRef]   [PubMed]  

25. NVIDIA, SBVH acceleration structure, http://docs.nvidia.com/gameworks/index.html#gameworkslibrary/optix/optix_host_api.htm?Highlight=sbvh

Supplementary Material (3)

Visualization 1: MP4 (5388 KB) — The non-real-time integral images created with the backward ray-tracing approach.
Visualization 2: MP4 (5549 KB) — Different perspectives of the reconstructed 3D scene on the 3D light field display.
Visualization 3: MP4 (4627 KB) — Real-time rendering for volume data and 3D mesh data.
