
Dual-camera enabled real-time three-dimensional integral imaging pick-up and display

Open Access

Abstract

A new real-time integral imaging pick-up and display method is demonstrated. The proposed method uses a dual-camera optical pick-up part to collect the 3D information of a real scene in real time without pre-calibration. Elemental images are then generated by a computer-generated integral imaging part and displayed by a projection-type integral imaging display part. Theoretical analysis indicates that the method is robust to camera position deviation, which benefits real-time data processing. Experimental results show that a fully continuous, real 3D scene pick-up and display system is feasible with a real-time throughput of 8 fps. Further analysis predicts that parallel optimization can be applied to the proposed method to raise the real-time 3D pick-up and display throughput to 25 fps.

©2012 Optical Society of America

1. Introduction

Integral imaging is a true 3D display technology that produces auto-stereoscopic images with full parallax and continuous viewing points [1]. Moreover, this technology requires no special viewing devices, and has therefore recently become one of the hottest topics in the 3D imaging and display community. Numerous publications report methods for enhancing various integral imaging display parameters, including the viewing angle [2], resolution [3], and depth range of 3D images [4]. Among these topics, real-time 3D integral imaging pick-up and display of real scenes is an important application of 3D display technology.

Okano et al. [5], Jang et al. [6], and Arai et al. [7] have proposed methods for achieving real-time 3D integral imaging pick-up and display using a lens array as the real-time 3D information pick-up device. In practice, however, it is difficult to collect real-time information on a real, large-sized 3D scene this way. The quality of the 3D integrated image is reduced by the limitations of the lens-array manufacturing technique, the cross-talk between neighboring elemental lenses [8], and the low resolution of the elemental images (EIs).

Liao et al. [9] have proposed a method capable of generating EIs for a high-quality integrated video display using computer-generated integral imaging (CGII). However, the limitations of computer technology itself make the collection of real-time information on real 3D scenes difficult, and issues of speed, positioning, and viewing angle remain. To collect full-color, high-resolution images of a real, large-sized 3D scene, a camera array has been utilized in the pick-up part [10–13]. Sang et al. [12] have presented a 3D holographic display system that employs a camera array with 64 color cameras to collect 3D information on an object in real time. Taguchi et al. [13] have achieved integral photography display at 60 different viewing angles using a camera array with 64 cameras. Although Navarro et al. [14] have proved that integral imaging with a camera-array pick-up system is superior to the binocular parallax of a classical two-view stereoscopic camera, these methods have obvious disadvantages arising from the large number of cameras involved, including cost, operational complexity, difficulty of synchronization, and poor flexibility. At the same time, the large amount of data produced by the camera array gives rise to latency and bandwidth (BW) issues. Moreover, every elemental camera has to be calibrated precisely to avoid the decline in reconstructed-image quality caused by camera position deviation. These limitations significantly hinder the large-scale promotion and application of the technology.

To solve the aforementioned problems of the conventional lens array and camera array in integral imaging, we propose a method that achieves real-scene capture and 3D display in real time and is robust to camera position deviation. The proposed method comprises three parts, namely, a real-time optical pick-up integral imaging part (OPII), a computer-generated integral imaging part (CGII), and a projection-type integral imaging display part (PII). In the OPII, a dual-camera pick-up system captures the 3D information of a real scene with no need for calibration, reducing the volume, weight, cost, difficulty of synchronization, and BW requirement. In the CGII, the EIs are determined and calculated from the disparity of homologous points in the two images captured by the dual cameras. The generation of EIs requires neither model reconstruction nor depth extraction and is independent of the object space. Moreover, camera position deviations have no influence on the CGII process. Experimental results verify the effectiveness of the proposed method.

2. Real-time optical pick-up integral imaging part (OPII)

The proposed method comprises three parts, namely, an OPII, a CGII, and a PII. Figure 1 shows the schematic configuration of the proposed method.

Fig. 1 Schematic configuration of the real-time integral imaging pick-up and display system (Media 1)

The 3D information of the real scene in Fig. 1 is picked up by two cameras in the OPII part [Fig. 2(a)]. To capture the 3D information of the real scene from different viewpoints, the two cameras replace two arbitrary elemental lenses located in different rows and columns of the lens array. The distances between the two cameras in the x and y directions are Lx and Ly, respectively, which are determined by the parameters of the 3D display. Moreover, the distances should ensure that the 3D scene can be captured by both cameras simultaneously, so that homologous points can be found, and that partial occlusion by other objects is reduced as far as possible. In our experiment, Lx is 80 mm and Ly is 30 mm; for simplicity, both equal the actual distances between the two replaced lenses. The images captured by the OPII are shown in Fig. 2(b). The resolution of the images is 800 (H) × 448 (V) pixels. To verify the robustness of the proposed method to camera position deviation, the two cameras are not calibrated in our experiments.
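
To make the camera-placement geometry concrete, the following minimal sketch computes the separations from the indices of the two replaced lenses. The pitch and index values are illustrative assumptions of ours (the pitch of the virtual pick-up lens array is not stated above), chosen only so that the results match the experimental Lx and Ly.

# Illustrative sketch: pitch and lens indices are assumed, not reported values.
pitch_x, pitch_y = 10.0, 10.0    # assumed elemental-lens pitch (mm)
col_a, row_a = 0, 0              # lens replaced by camera A (column, row)
col_b, row_b = 8, 3              # lens replaced by camera B (column, row)

Lx = abs(col_b - col_a) * pitch_x    # -> 80.0 mm, as in the experiment
Ly = abs(row_b - row_a) * pitch_y    # -> 30.0 mm, as in the experiment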

Fig. 2 Real-time optical pick-up integral image part: (a) Sketch, (b) Results.

Compared with the camera array [12, 13] used in the conventional method, the OPII method effectively simplifies the system structure and reduces both the system cost and the required data-transmission BW. For example, if an EI array with 36 × 18 EIs is required for an integral imaging 3D display, the comparison between the conventional and OPII methods is as shown in Table 1. The data-transmission BW of the real-time video is reduced to only 0.3% of that of the conventional method. Thus, the improvement in the optical pick-up process makes a real-time 3D integral imaging pick-up and display device feasible.
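
The 0.3% figure follows directly from the camera count, assuming every camera streams video at the same resolution and frame rate:

\[
\frac{\mathrm{BW}_{\mathrm{OPII}}}{\mathrm{BW}_{\mathrm{conventional}}} = \frac{2}{36 \times 18} = \frac{2}{648} \approx 0.3\%.
\]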

Table 1. Characteristics comparison between the conventional and OPII methods.

3. Computer-generated integral imaging part (CGII) and theoretical analysis

EIs are calculated by the CGII using the images picked up by the OPII part. The detailed principle of the CGII is expounded below. The pick-up process of integral imaging is shown in Fig. 3. The EIs I(m,n) are captured by the lens array, which contains M × N closely packed elemental lenses. The image points of an object point O in the elemental images are called homologous points R′(m,n), where O is a point on the 3D object. The coordinates of the homologous points R′(m,n) are highly correlated and determined by the lens law, as shown in Eq. (1):

\[
\begin{cases}
p'_x = \left( x_{R'(i,j)} - x_{R'(i',j')} \right) / (i - i') \\[2pt]
p'_y = \left( y_{R'(i,j)} - y_{R'(i',j')} \right) / (j - j')
\end{cases}
\quad (i \neq i'),\ (j \neq j') \tag{1}
\]
where p′x and p′y are the coordinate differences between two adjacent homologous points, and R′(i,j) and R′(i′,j′) are homologous points in two arbitrary elemental images I(i,j) and I(i′,j′). Both p′x and p′y are constants determined by the object point O and the lens array according to the integral imaging principle. If R′(i,j) and R′(i′,j′) are known, and either one is chosen as the initial value, the coordinates of R′(m,n) in the elemental image I(m,n) can be calculated by Eq. (2):

Fig. 3 Schematic diagram of pick-up part for integral imaging

\[
R'(m,n) =
\begin{cases}
x_{R'(m,n)} = x_{R'(i,j)} + (i - m)\, p'_x \\[2pt]
y_{R'(m,n)} = y_{R'(i,j)} + (j - n)\, p'_y
\end{cases} \tag{2}
\]

In this work, the two elemental images I(i,j) and I(i′,j′) are captured by cameras A and B in the OPII part. The coordinates of the homologous points R′(i,j) and R′(i′,j′) are obtained by a block-matching algorithm [15]; R′(m,n) in the elemental image I(m,n) can then be calculated by Eqs. (1) and (2). Note that a multi-resolution block-matching algorithm [16] is adopted in the proposed method to match the partially occluded information that multiple objects in the scene produce in the two elemental images, which the conventional block-matching algorithm cannot handle. It is well known that occlusion is difficult to resolve completely in image processing and computer vision in general. Fortunately, occlusion is not extreme in integral imaging because the difference between any two viewpoints is small [14], so the multi-resolution block-matching algorithm works well enough to resolve the small amount of partially occluded information in our optical experiment. Moreover, for the homologous points to be matched and extracted successfully in the CGII, the 3D scene captured by the proposed system must have complex textures and be captured by both cameras simultaneously. After all the homologous points in I(i,j) and I(i′,j′) have been calculated, the EIs can be generated. Generating the EIs therefore requires only the calculation of homologous points based on image analysis and processing, with no need to extract depth information or reconstruct a model; from this point of view, the process is independent of the object space.
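
The two computational steps of the CGII can be sketched as follows. This is a simplified stand-in under stated assumptions: a plain single-scale SSD block matcher replaces the multi-resolution algorithm of Ref. [16], and all names are ours, not the authors'.

import numpy as np

def match_homologous(img_a, img_b, pt_a, block=8, search=40):
    """Find in img_b the point homologous to pt_a = (x, y) in img_a by
    exhaustive SSD block matching (single-scale stand-in for Ref. [16]).
    Assumes pt_a lies at least block + search pixels from every border."""
    x, y = pt_a
    ref = img_a[y - block:y + block, x - block:x + block].astype(np.float64)
    best_ssd, best_pt = np.inf, pt_a
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img_b[y + dy - block:y + dy + block,
                         x + dx - block:x + dx + block].astype(np.float64)
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pt = ssd, (x + dx, y + dy)
    return best_pt

def propagate_homologous(pt_a, pt_b, idx_a, idx_b, M, N):
    """Eqs. (1) and (2): from one matched pair of homologous points in EIs
    (i, j) and (i', j'), compute the homologous point in every EI (m, n)
    of an M x N array."""
    (i, j), (ip, jp) = idx_a, idx_b
    px = (pt_a[0] - pt_b[0]) / (i - ip)       # Eq. (1), requires i != i'
    py = (pt_a[1] - pt_b[1]) / (j - jp)       # Eq. (1), requires j != j'
    return {(m, n): (pt_a[0] + (i - m) * px,  # Eq. (2)
                     pt_a[1] + (j - n) * py)
            for m in range(M) for n in range(N)}

Repeating this for every matched scene point fills in the pixel positions of all M × N EIs, from which the EI array is assembled; no depth map or 3D model is ever built.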

As shown in Eqs. (1) and (2), the coordinates of the homologous points in the EIs are highly correlated according to the principle of integral imaging, which presumes a regular arrangement of the camera array. A camera position deviation destroys both this correlation and the regularity of the camera-array arrangement; the integration of the EIs is then disturbed, and the quality of the 3D integral images is reduced. In the proposed method, however, the coordinates of the homologous points in all the calculated EIs remain highly correlated, because the EI calculation starts from either one of the images captured by the two cameras. In effect, if a real camera array were used as the pick-up apparatus, it would be as if every camera in the array had the same position deviation, so the cameras would still be regularly arranged. Thus, the generated EIs can be integrated as expected. For this reason, the cameras in the proposed dual-camera pick-up system need no pre-calibration, which gives the method an advantage in robustness to camera position deviation over the conventional camera-array pick-up system. Another advantage of the proposed method with respect to the CGII part is that various post-processing techniques can be flexibly applied to the real-time 3D information. The CGII part therefore enables the dual-camera pick-up system to generate multi-viewpoint EIs for integral imaging, which is quite different from the classical two-view stereoscopic camera discussed in Ref. [14].
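
Our reading of this robustness argument can be made explicit with Eq. (1); the short derivation below is an illustration we supply, not one given in the paper. Suppose the two captured images carry unknown lateral position errors that shift the measured coordinates by \(\delta_A\) and \(\delta_B\). Eq. (1) then yields

\[
\tilde{p}'_x = \frac{\left( x_{R'(i,j)} + \delta_A \right) - \left( x_{R'(i',j')} + \delta_B \right)}{i - i'} = p'_x + \frac{\delta_A - \delta_B}{i - i'},
\]

and Eq. (2) still generates homologous points that are exactly equally spaced, only with the modified pitch \(\tilde{p}'_x\). The generated EI array therefore behaves like a perfectly regular camera array, so the EIs still integrate into a consistent 3D image.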

To verify the feasibility of the proposed method, the peak signal-to-noise ratio (PSNR) is employed to evaluate the image quality of the generated EIs [15, 17]. EIs of a 3D real scene similar to that in Fig. 1 are captured by the conventional direct camera-array pick-up method and by the proposed method, respectively. In the experiment, a 2D-scanned camera is used to simulate a conventional 7 × 9 camera array and capture the EIs directly, as shown in Fig. 4(b). By contrast, the 7 × 9 EIs generated by the OPII and CGII method are shown in Fig. 4(a). The elemental image in Fig. 4(a2) is one of the original images captured by the dual-camera pick-up system, whereas Figs. 4(a1) and 4(a3) are generated by the proposed method for viewpoints beyond the positions of the two cameras. Obviously, some object-texture information is missed; however, the missing information is limited and can be compensated for by the multi-resolution block-matching algorithm. If too much information is missing, an OPII with a larger field of view can be used to obtain as much object-texture information as possible. The PSNR is then calculated between each pair of corresponding EIs obtained by the two methods. As shown in Fig. 4(c), the PSNR fluctuates between 34.25 dB and 36.99 dB; the PSNRs of elemental images (a1) and (a3) are 35.37 dB and 35.97 dB, respectively. According to the literature [15, 17], EIs with a PSNR above 30 dB fully satisfy the display requirement. This confirms that the EIs generated by the proposed method and those obtained by the direct camera-array pick-up method are of comparable quality.
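
The PSNR used here is the standard definition; a minimal sketch (variable names are ours), with each generated EI compared against the directly captured EI of the same viewpoint:

import numpy as np

def psnr(ei_ref, ei_test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a directly captured EI and
    the corresponding EI generated by the proposed method (cf. [15, 17])."""
    diff = ei_ref.astype(np.float64) - ei_test.astype(np.float64)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# e.g. values = [psnr(a, b) for a, b in zip(direct_eis, generated_eis)];
# per [15, 17], values above 30 dB satisfy the display requirement.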

Fig. 4 EIs and 3D reconstructed images comparison results: (a) EIs generated by the proposed method, (b) EIs achieved by the conventional direct camera array pick-up method, (c) PSNR of each corresponding EI, (d) 3D reconstructed images of EIs 4(a), (e) 3D reconstructed images of EIs 4(b).

The two sets of EIs in Figs. 4(a) and 4(b) are reconstructed using a 2.8-inch LCD integral imaging display system. The 3D reconstructed images observed from different viewpoints are compared in Figs. 4(d) and 4(e), respectively. The display lens array used in the display system has 63 elemental lenses (7 (H) × 9 (V)), each 7 mm (H) × 5.4 mm (V) in size, with a focal length of 42.61 mm. The average PSNR of the 3D images in Fig. 4(d), reconstructed from the EIs generated by the proposed method, is 31.19 dB. This confirms that the two 3D images, reconstructed from the EIs obtained by the different methods, are of comparable quality.

Figure 5 shows one frame of EIs from the EI sequences generated by the proposed method in the real-time experiment, with the parameters listed in Table 2.

Fig. 5 EIs generated by CGII

Table 2. Parameters of Part II (CGII)

4. Projection-type integral imaging display part (PII)

A real-time projection-type integral imaging display system comprising a projector and a lens array is shown in Fig. 1. The EIs calculated in the CGII part are displayed by the projector, and a 3D image is integrated by the lens array. All parameters are listed in Table 3, and the result of the optical experiment is shown in Fig. 6. The resolution of the display plane is 490 dpi, and the resolution of the reconstructed images, calculated from the PII parameters [18], is 760 dpi. Limited by the large f-number of the lens array and the small aperture of the camera, the brightness of the reconstructed images captured by the camera is non-uniform. The experimental results demonstrate that the EIs generated by the proposed method can be integrated into a 3D display and displayed in real time.

Table 3. Parameters of Part III (PII)

Fig. 6 Reconstructed 3D images observed from different viewpoints: (a) 3D real scene 1 and (b) 3D real scene 2.

5. Performance

The processing time of each step in the proposed real-time pick-up and display system is shown in Fig. 7. The processing time for generating the images in the OPII and transmitting the data to a computer is 0.04 s/frame. By contrast, the image generation and transmission time of a conventional camera array [13], with 5433.2 Mb of data per frame over a directly connected universal serial bus, would be 11.3 s/frame (USB 2.0, 480 Mbps) or 1.1 s/frame (USB 3.0, 5 Gbps). Such a data volume is far too large to satisfy the requirements of real-time generation and transmission. Benefiting from the calibration-free OPII part, the second processing time, for calculating the elemental images in the CGII after receiving the images from the OPII, is 0.06 s (single-threaded). The third processing time, for uploading the elemental images to the PII, is 0.04 s.
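
The quoted transfer times follow from the raw line rates of the bus; the quick check below ignores USB protocol overhead.

# Quick check of the transfer-time estimates quoted above (raw line rates,
# protocol overhead ignored).
frame_mb = 5433.2        # data per frame of the 64-camera array (Mb)
print(frame_mb / 480)    # USB 2.0 at 480 Mbps -> ~11.3 s/frame
print(frame_mb / 5000)   # USB 3.0 at 5 Gbps   -> ~1.1 s/frame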

Fig. 7 The process flow of the proposed method and the processing time of each process.

In summary, the total processing time from pick-up to display is about 0.2 s. The calculation time can be reduced from 0.06 s to 0.02 s through acceleration with the Parallel Computing Toolbox in MATLAB, which enables the proposed method to provide real-time integral imaging 3D pick-up and display with a throughput of 8 fps. Compared with the 64-camera system with a latency of 0.5 s reported by Taguchi et al. [13], the proposed system decreases the number of cameras significantly and reduces the data volume effectively, thus mitigating the difficulty of synchronization. Moreover, the latency is reduced to 0.06 s.

The single-threaded system with a throughput of 8 fps and a latency of 0.06 s is shown in Fig. 7. Comprehensive analysis shows that the three parts of the proposed method are independent of each other, which demonstrates the feasibility of parallel optimization of the proposed method to reduce the processing time [13]. In each time slice of the parallel optimization, the eight CPU cores are divided into three groups, handling the pick-up and transfer of images from the OPII (one thread), the calculation of the elemental images by the CGII (two to six threads), and the uploading of the elemental images to the PII (one thread). After parallel optimization, each time slice comprising these three parts is 0.04 s. Thus, a throughput of 25 fps can be realized by the proposed system.
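
A minimal sketch of this three-stage pipeline, assuming each stage settles at the 0.04 s reported above (the CGII stage reaching 0.04 s through its own internal multi-threading, which is not modeled here). Run sequentially, one frame takes 0.04 + 0.04 + 0.04 = 0.12 s (about 8 fps); with the stages overlapped, a frame completes every 0.04 s (25 fps).

import threading
import time
from queue import Queue

def stage(seconds, inbox, outbox):
    """One pipeline stage: fixed-duration stand-in work per frame."""
    while True:
        frame = inbox.get()
        if frame is None:        # shutdown marker
            outbox.put(None)
            return
        time.sleep(seconds)      # placeholder for the real OPII/CGII/PII work
        outbox.put(frame)

def feed(n_frames, outbox):
    for i in range(n_frames):
        outbox.put(i)
    outbox.put(None)

# OPII pick-up -> CGII computation -> PII upload, each 0.04 s per frame.
queues = [Queue(maxsize=1) for _ in range(4)]
for k in range(3):
    threading.Thread(target=stage, args=(0.04, queues[k], queues[k + 1]),
                     daemon=True).start()

start = time.time()
threading.Thread(target=feed, args=(25, queues[0]), daemon=True).start()
while queues[3].get() is not None:   # drain the pipeline output
    pass
print(f"25 frames in {time.time() - start:.2f} s")   # ~1.1 s once the pipe fills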

6. Conclusion

In conclusion, a novel dual-camera real-time 3D integral imaging pick-up and display method, which is robust to camera position deviation, has been proposed and verified experimentally. The method comprises three parts, namely, the OPII, the CGII, and the PII. In the OPII, a dual-camera pick-up system with no need for calibration is used to capture the 3D information of a real scene. In the CGII, the EIs are calculated from the disparity of the homologous points in the two images captured by the dual cameras, independently of the object space. Although the matching algorithm works well for the current experiments and applications, it can resolve only a small amount of occlusion and should be improved in the future. The proposed system is simple, flexible, robust, general, and economical. In our experiment, with the help of the Parallel Computing Toolbox, real-time 3D pick-up and display of real scenes with a throughput of 8 fps is achieved, and parallel optimization may be adopted to enhance the throughput up to 25 fps in follow-up work. Furthermore, the proposed method has the potential to realize real-time 3D integral imaging pick-up and display on portable terminals.

Acknowledgments

This work was supported by the Ministry of Science and Technology of China under grant 2010CB327702, the National Natural Science Foundation of China (NSFC) under grant 61108046, and the Research Fund for the Doctoral Program of Higher Education of China under grant 20100031120033.

References and links

1. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef]   [PubMed]  

2. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27(10), 818–820 (2002). [CrossRef]   [PubMed]  

3. X.-R. Wang, Q.-F. Bu, and D.-Y. Zhang, “Method for quantifying the effects of aliasing on the viewing resolution of integral images,” Opt. Lett. 34(21), 3382–3384 (2009). [CrossRef]   [PubMed]  

4. D.-Q. Pham, N. Kim, K.-C. Kwon, J.-H. Jung, K. Hong, B. Lee, and J.-H. Park, “Depth enhancement of integral imaging by using polymer-dispersed liquid-crystal films and a dual-depth configuration,” Opt. Lett. 35(18), 3135–3137 (2010). [CrossRef]   [PubMed]  

5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef]   [PubMed]  

6. J.-S. Jang and B. Javidi, “Real-time all-optical three-dimensional integral imaging projector,” Appl. Opt. 41(23), 4866–4869 (2002). [CrossRef]   [PubMed]  

7. J. Arai, M. Okui, T. Yamashita, and F. Okano, “Integral three-dimensional television using a 2000-scanning-line video system,” Appl. Opt. 45(8), 1704–1712 (2006). [CrossRef]   [PubMed]  

8. Z. Kavehvash, K. Mehrany, and S. Bagheri, “Optimization of the lens-array structure for performance improvement of integral imaging,” Opt. Lett. 36(20), 3993–3995 (2011). [CrossRef]   [PubMed]  

9. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004). [CrossRef]   [PubMed]  

10. Y. Xu, X. R. Wang, Y. Sun, and J. Q. Zhang, “Homogeneous light field model for interactive control of viewing parameters of integral imaging displays,” Opt. Express 20(13), 14137–14151 (2012). [CrossRef]   [PubMed]  

11. I. Moon and B. Javidi, “Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling,” Opt. Lett. 34(6), 731–733 (2009). [CrossRef]   [PubMed]  

12. X.-Zh. Sang, F.-C. Fan, C.-C. Jiang, S. Choi, W.-H. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. 34(24), 3803–3805 (2009). [CrossRef]   [PubMed]  

13. Y. Taguchi, T. Koike, K. Takahashi, and T. Naemura, “TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters,” IEEE Trans. Vis. Comput. Graph. 15(5), 841–852 (2009).

14. H. Navarro, A. Dorado, G. Saavedra, A. Llavador, M. Martínez-Corral, and B. Javidi, “Is it worth using an array of cameras to capture the spatio-angular information of a 3D scene or is it enough with just two?” Proc. SPIE 8384, 838406 (2012). [CrossRef]

15. H.-H. Kang, J.-H. Lee, and E.-S. Kim, “Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging,” Opt. Express 20(5), 5440–5459 (2012). [CrossRef]   [PubMed]  

16. J. H. Lee and N. S. Lee, “Variable block size motion estimation algorithm and its hardware architecture for H.264/AVC,” in Proceedings of the 2004 International Symposium on Circuits and Systems, Vol. 3, pp. 741–744 (2004).

17. J.-J. Lee, D.-H. Shin, and B.-G. Lee, “Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction,” Opt. Express 17(20), 18026–18037 (2009). [CrossRef]   [PubMed]  

18. B. Lee, S.-W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. 41(23), 4856–4865 (2002). [CrossRef]   [PubMed]  

Supplementary Material (1)

Media 1: MOV (4060 KB)     
