Three-dimensional volumetric object reconstruction using computational integral imaging

Open Access

Abstract

We propose a three-dimensional (3D) imaging technique that senses a 3D scene and computationally reconstructs it as a 3D volumetric image. The 3D scene is sensed by optically obtaining elemental images with a pickup microlens array and a detector array. The volume pixels of the scene are reconstructed by computationally simulating optical reconstruction according to ray optics. All of the pixels of the recorded elemental images contribute to the volumetric reconstruction of the 3D scene. Image display planes at arbitrary distances from the display microlens array are computed and reconstructed by back-propagating the elemental images through a computer-synthesized pinhole array based on ray optics. We present experimental results of 3D image sensing and volume pixel reconstruction to test and verify the performance of the algorithm and the imaging system. The volume pixel values can be used for 3D image surface reconstruction.

©2004 Optical Society of America

1. Introduction

Three-dimensional (3D) sensing and imaging have been the subject of extensive research because of their diverse benefits and applications [1–4]. Integral imaging (II) [5–16] is a 3D imaging technique that uses a microlens array or a pinhole array to capture light rays emanating from an object in different directions. To record 3D objects with an II system, the intensities and directional information of the light rays passing through each microlens or pinhole are captured on a 2D image sensor. The information captured through each microlens or pinhole forms a demagnified 2D image with its own perspective. These captured 2D images are referred to as elemental images. To reconstruct the 3D image from the elemental images, the rays coming from the elemental images are propagated back through a microlens array similar to the one used for recording. This reverse propagation forms a 3D image where the object was originally located. The microlens array used to record the elemental images is referred to as the pickup microlens array, and the microlens array used to display the reconstructed images is called the display microlens array.

For direct 3D image display in II, optical reconstruction is accomplished by displaying the elemental images on a 2D display panel such as an LCD [14,15] behind a display microlens array. Direct optical reconstruction introduces image quality degradation because of diffraction and the limitations of optical devices. Moreover, many applications require the volume pixels (voxels) of 3D images for further image processing, such as extracting the surface profiles of 3D objects. For these purposes, and to overcome image quality degradation, computational II reconstruction [9, 13] of 3D images is of interest. The advantage of computational II reconstruction is the freedom to generate any viewing angle of the reconstructed objects without optically displaying the elemental images. However, the computational II technique of Ref. [9] produces images of the 3D object viewed from one specific viewpoint through a microlens array; therefore, it does not provide full 3D volume information. The algorithm of Ref. [13] uses triangulation and limited segments of the elemental images (or sampled elemental images) to reconstruct an image from a single viewpoint, which degrades the resolution of the reconstructed image. Resolution improvement techniques can be applied to computational II to obtain higher resolution images; in this case, however, multiple sets of integral images are needed, using time-multiplexed II [11], and the reconstructed higher resolution image is still an image from a single viewpoint.

In this paper, we propose a new computational II reconstruction method that uses all of the information in the elemental images to reconstruct the full 3D volume of a scene. It allows the 3D voxel values to be reconstructed at any arbitrary distance from the display microlens array, free of the limiting effects of diffraction and device degradation. Computational reconstruction is achieved by digitally simulating geometrical optics to process the elemental images obtained optically by direct pickup.


Fig. 1. Pickup and reconstruction of a 3D scene with the II technique. (a) Optical setup to pick up the elemental images of the 3D objects. (b) Computational reconstruction and display of the II image; the full 3D volume image can be reconstructed. (c) Optical 3D display of the II image with the elemental images displayed on an LCD. The pickup microlens array in Fig. 1(a) and the display microlens array in Fig. 1(c) have the same specifications.


2. 3D volumetric reconstruction using computational II

To form the elemental images in the pickup plane, each voxel of the 3D objects at location (x, y, z) is mapped through the pickup microlens array onto its imaging plane and recorded by a CCD camera. Each visible voxel of the 3D object thus contributes pixels to the elemental images, whereas other voxels (such as those in free space or hidden inside the 3D object) may not contribute to any pixel of the elemental images. Figure 1(a) illustrates the experimental setup for the pickup of the elemental images using II. Figure 1(b) illustrates the 3D scene reconstruction process using computational II. Figure 1(c) illustrates 3D image display using an optical reconstruction II system, i.e., direct pickup one-step II [12].

The proposed computational II reconstruction method is an inverse mapping procedure through a computer-synthesized (virtual) pinhole array that forms the elemental images. The reconstruction procedure extracts pixels from each elemental image and displays the corresponding voxels at coordinates (x, y, z). In Ref. [17], a 3D reconstruction method was described in which multiple pinhole images were obtained [17–20] by a pinhole scanning technique; the Fourier transforms of the images projected by the pinholes, additional coordinate transforms, and an inverse 3D Fourier transform were then used to reconstruct the 3D object. Our proposed system reconstructs the 3D object by a very different method. Instead of a series of Fourier transforms, coordinate transforms, and an inverse 3D Fourier transform, the proposed technique obtains 2D elemental images with a microlens array and projects them directly through a virtual pinhole array to reconstruct the 3D scene by superposition according to geometrical optics. Thus, the proposed approach is computationally simple and can be implemented in real time with commercially available hardware.

Figure 2 illustrates the proposed computational reconstruction of the 3D image on a display plane at distance z=L. For a fixed distance z=L from the display pinhole array, each elemental image is inversely projected through its synthesized pinhole. Each inversely projected image is magnified by the magnification factor M, the ratio of the distance between the synthesized pinhole array and the reconstruction image plane at z=L to the distance g between the synthesized pinhole array and the elemental image plane, that is, M=L/g. For M>1, the adjacent inversely projected images overlap each other at the reconstruction image plane. The intensity at the reconstruction plane is inversely proportional to the square of the distance between the elemental image and the reconstruction plane. The inverse mappings of all the elemental images through the synthesized pinhole array are computed; these may overlap one another at any display plane z=L. The inversely mapped elemental images carry different perspectives and distance information about the 3D object, and together they form one image at the reconstruction image plane z=L. To obtain the full 3D volume information, we repeat this process for all reconstruction planes of interest. Figure 3 illustrates the lateral (x) coordinate of the reconstruction plane for the pth elemental image at (x, z); this extends directly to any voxel location (x, y, z).
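
As a quick numerical illustration of this projection geometry, the following sketch computes M and the footprint of one inversely projected elemental image; the values of g, L, and the pitch are assumed for illustration and are not the experimental parameters of Section 3.

```python
# Magnification and footprint of one inversely projected elemental image.
g = 3.3        # gap between pinhole array and elemental images (assumed, mm)
L = 14.0       # reconstruction-plane distance z = L (assumed, mm)
s = 1.0        # elemental-image pitch s_x (assumed, mm)

M = L / g                 # magnification factor M = L/g
footprint = M * s         # lateral extent of the projected image at z = L
print(f"M = {M:.2f}; footprint = {footprint:.2f} mm; "
      f"overlaps neighbors: {footprint > s}")   # overlap occurs for M > 1
```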


Fig. 2. Computational II reconstruction by inverse mapping of the recorded elemental images. The reconstructed image plane at distance z=L is obtained by linear superposition of the inversely mapped elemental images projected through the computer-synthesized pinhole array.



Fig. 3. Formation of the images at the reconstruction image plane through the computer-synthesized pinhole array. The diagram shows only the lateral (x) coordinate of the reconstruction plane. The bottom elemental image is taken as the first elemental image. s_x is the elemental-image size along the x direction. The magnification factor is M=z/g.


Let I_pq denote the elemental image in the pth row and qth column, and O_pq(x, y, z) the inversely mapped image of elemental image I_pq at location (x, y, z). As can be seen from Fig. 3, O_pq(x, y, z) can be expressed in terms of the elemental image I_pq within the boundary of the inversely mapped elemental image:

$$
O_{pq}(x,y,z)=\frac{I_{pq}\!\left(s_x p-\dfrac{x-s_x p}{M},\; s_y q-\dfrac{y-s_y q}{M}\right)}{(z+g)^{2}+\left[(x-s_x p)^{2}+(y-s_y q)^{2}\right]\left(1+\dfrac{1}{M}\right)^{2}},
\qquad \text{for }
\begin{cases}
s_x\!\left(p-\dfrac{M}{2}\right)\le x\le s_x\!\left(p+\dfrac{M}{2}\right),\\[4pt]
s_y\!\left(q-\dfrac{M}{2}\right)\le y\le s_y\!\left(q+\dfrac{M}{2}\right),
\end{cases}
\tag{1}
$$

where s_x and s_y are the sizes of elemental image I_pq in the x and y directions, respectively. The denominator of Eq. (1) is the square of the distance from the pixel of elemental image I_pq to the corresponding voxel of the inversely mapped elemental image at the reconstruction plane z. Equation (1) can be rewritten as:

$$
O_{pq}(x,y,z)=\frac{I_{pq}\!\left(-\dfrac{x}{M}+\left(1+\dfrac{1}{M}\right)s_x p,\; -\dfrac{y}{M}+\left(1+\dfrac{1}{M}\right)s_y q\right)}{(z+g)^{2}+\left[(x-s_x p)^{2}+(y-s_y q)^{2}\right]\left(1+\dfrac{1}{M}\right)^{2}},
\qquad \text{for }
\begin{cases}
s_x\!\left(p-\dfrac{M}{2}\right)\le x\le s_x\!\left(p+\dfrac{M}{2}\right),\\[4pt]
s_y\!\left(q-\dfrac{M}{2}\right)\le y\le s_y\!\left(q+\dfrac{M}{2}\right).
\end{cases}
\tag{2}
$$

The reconstructed 3D image at (x, y, z) is the summation of all the inversely mapped elemental images:

$$
O(x,y,z)=\sum_{p=0}^{m-1}\sum_{q=0}^{n-1}O_{pq}(x,y,z),
\tag{3}
$$

where m and n are the numbers of elemental images in the x and y directions, respectively.
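
To make the inverse mapping concrete, a minimal NumPy sketch of Eqs. (2) and (3) is given below. It is an illustrative implementation, not the authors' code: the nearest-neighbor sampling, the pixel-unit lateral coordinates, and the convention that elemental image I_pq occupies s_x(p − 1/2) ≤ x ≤ s_x(p + 1/2) on the elemental-image plane are assumptions made for this sketch.

```python
import numpy as np

def reconstruct_plane(I, sx, sy, g, z):
    """Sketch of Eqs. (2)-(3): inverse-map all elemental images to plane z.

    I      -- array of shape (m, n, sx, sy); I[p, q] is elemental image I_pq
    sx, sy -- elemental-image size in pixels along x and y
    g      -- gap between the synthesized pinhole array and the elemental images
    z      -- distance from the pinhole array to the reconstruction plane
    (x and y are handled in pixel units; z and g must share one unit. M = z/g
    is dimensionless, so only the 1/r^2 weight is indicative in this sketch.)
    """
    m, n = I.shape[:2]
    M = z / g                                    # magnification factor (Fig. 3)
    O = np.zeros((m * sx, n * sy))               # reconstruction plane O(x, y, z)
    x = np.arange(m * sx, dtype=float)[:, None]  # lateral plane coordinates
    y = np.arange(n * sy, dtype=float)[None, :]
    for p in range(m):
        for q in range(n):
            # Eq. (2): source coordinates on the elemental-image plane.
            u = -x / M + (1.0 + 1.0 / M) * sx * p
            v = -y / M + (1.0 + 1.0 / M) * sy * q
            # Local pixel indices, with I_pq spanning s_x(p - 1/2)..s_x(p + 1/2).
            i = np.rint(u - sx * (p - 0.5)).astype(int)
            j = np.rint(v - sy * (q - 0.5)).astype(int)
            inside = (i >= 0) & (i < sx) & (j >= 0) & (j < sy)  # Eq. (1) bounds
            # Denominator of Eq. (2): squared pixel-to-voxel distance.
            r2 = (z + g) ** 2 + ((x - sx * p) ** 2 + (y - sy * q) ** 2) \
                 * (1.0 + 1.0 / M) ** 2
            sample = I[p, q][np.clip(i, 0, sx - 1), np.clip(j, 0, sy - 1)]
            O += np.where(inside, sample / r2, 0.0)  # superposition, Eq. (3)
    return O
```

The bounds of Eq. (1) need no separate test here: plane pixels outside the footprint s_x(p ± M/2) map to source coordinates outside I_pq and are masked out by the `inside` check.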

3. Experiments and results

Figure 4(a) shows the 3D scene used in the experiments. A toy crossing road sign and a hospital sign are used to create the 3D scene. The objects are illuminated with incoherent light. The pickup microlens array is placed in front of the objects to form the elemental image array. The microlens array has 53×53 square refractive lenslets in a 55 mm × 55 mm area. The distance between the crossing road sign and the pickup microlens array is 14 mm, and the distance between the hospital sign and the pickup microlens array is 33.5 mm. The elemental images of the objects are formed on the CCD camera by inserting a camera lens with a focal length of 50 mm between the CCD camera and the pickup microlens array. The magnification of the elemental image array formed by the camera lens is adjusted so that the elemental image array is almost the same size as the CCD sensor.

Some of the captured elemental images used to reconstruct the 3D scene are shown in Fig. 4(b). An array of 19 columns × 14 rows of elemental images is used to reconstruct the 3D scene, each elemental image measuring 60×60 pixels. The CCD camera registers 10 bits per pixel; therefore, each pixel of the elemental images is recorded as 10-bit data in the computer. Figures 4(c)–4(f) show the computationally reconstructed images at various display planes using the proposed II reconstruction technique; each reconstructed image is 1140×840 pixels. Figure 4(c) shows the reconstructed image at the distance z=6 mm, where there is no object. Figure 4(d) shows the reconstructed image at the distance z=14 mm, which is the image display plane of the crossing road sign: the crossing road sign is clearly visible, but the hospital sign is not reconstructed. Figure 4(e) shows the reconstructed image at the distance z=26 mm, where there is no object. Figure 4(f) shows the reconstructed image at the distance z=33.5 mm, which is the image display plane of the hospital sign; here the hospital sign is reconstructed along with a blurred crossing road sign.

The reconstruction results are reasonably good even though a number of factors affect the quality of the 3D reconstruction. These include diffraction due to the small size of the lenslets, the limited sampling rate set by the finite pitch of the microlens array, imperfections of the optical sensor in the pickup process, and truncation errors due to the limited dynamic range of the computer that performs the inverse mapping algorithm. The lateral resolution of the reconstructed 3D scene is determined by factors such as the ray sampling rate of the pickup microlens array, the resolution of the elemental images, and the magnification factor M used in the reconstruction. The longitudinal resolution is proportional to the gap g between the lenslets and the display device and inversely proportional to the number of pixels of the elemental images [16].

Within the depth of focus of the display microlens, reconstructed images at adjacent planes are very similar to each other. The reconstructed images at the correct object locations are clear; however, horizontal and vertical lines are visible in the reconstructed images. These lines arise from the shape of the pickup microlens array. To remove them, we may superimpose some of the adjacent reconstructed planes that lie within the depth of focus of the lenslets. The results are shown in Figs. 4(g) and 4(h). Although the superimposed images are slightly blurred by the low-pass filtering effect of summing adjacent reconstructed planes, the reconstructed image quality remains good. Figure 5 is a movie of the reconstructed 3D volume imagery obtained with the proposed computational II, sweeping the image display plane from z=3.16 mm to z=37 mm in increments of 0.09 mm; a sketch of this plane sweep follows below.

The quality of the reconstructed images obtained with the proposed computational II (Figs. 4 and 5) is compared with that of optically reconstructed II images obtained by direct pickup one-step II [12]. Figure 6 shows the optically reconstructed 3D scene: Figs. 6(a) and 6(b) are the optically reconstructed pseudoscopic virtual image and the orthoscopic real image of the objects, respectively. The image quality obtained with the proposed computational II reconstruction is better than that of the optically reconstructed direct pickup one-step II images, because computational II reconstruction is free of diffraction, device limitations, and system misalignment, apart from truncation errors in the digitally computed inverse mapping. Note that the voxel values of a 3D image can also be obtained from direct optical reconstruction by positioning a 2D image sensor at z>0 and recording 2D images along the z axis; even in this case, we believe computational reconstruction gives better volumetric image quality than direct optical reconstruction. With the proposed computational II reconstruction technique, the full 3D volume of the image can be reconstructed, with display planes at any arbitrary distance from the display pinhole array. In our computational reconstruction, the crossing road sign and the hospital sign are displayed at separate display planes, at the correct locations of the original 3D objects.
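
The plane sweep of Fig. 5 and the grid-line suppression by superposition might be realized as in the sketch below, which reuses reconstruct_plane from Section 2; the gap g, the random placeholder elemental images, and the five-plane averaging window (a stand-in for the lenslet depth of focus) are assumptions, not the paper's parameters.

```python
import numpy as np

# Placeholder elemental images: 19 columns x 14 rows of 60x60-pixel images,
# matching the experimental geometry; real data would replace this array.
I = np.random.rand(19, 14, 60, 60)
g = 3.3                                  # assumed pinhole-to-image gap (mm)

# Sweep the reconstruction plane through the volume (Fig. 5 covers
# z = 3.16 mm to 37 mm in 0.09 mm steps; a coarser step is used here).
zs = np.arange(3.16, 37.0, 0.9)
volume = np.stack([reconstruct_plane(I, 60, 60, g, z) for z in zs])

# Grid-line suppression: average adjacent planes that lie within the
# lenslet depth of focus (the 5-plane window is an assumption).
half = 2
smoothed = np.stack([volume[max(0, k - half):k + half + 1].mean(axis=0)
                     for k in range(len(zs))])
```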


Fig. 4. Experimental results for 3D object reconstruction using the proposed computational II reconstruction technique. (a) Objects used in the experiments; the crossing sign and the hospital sign are 14 mm and 33.5 mm from the pickup microlens array, respectively. (b) Some of the recorded elemental images. (c) Reconstructed image at the distance z=6 mm. (d) Reconstructed image at the distance z=14 mm. (e) Reconstructed image at the distance z=26 mm. (f) Reconstructed image at the distance z=33.5 mm. (g) Reconstructed image obtained by linear superposition of the road-sign images within the depth of focus of the lenslets. (h) Reconstructed image obtained by linear superposition of the hospital-sign images within the depth of focus of the lenslets.



Fig. 5. (.avi, 1.84 MB) Movie of the reconstructed 3D volume imagery from the image display plane at z=3.16 mm to the image display plane at z=37 mm in increments of 0.09 mm.



Fig. 6. Optical 3D object reconstruction using the direct pickup one-step II technique [12]. The objects are the same as those used in the experiments of Fig. 4. (a) Reconstructed 3D pseudoscopic virtual image of the objects. (b) Reconstructed 3D orthoscopic real image of the objects.


4. Conclusion

A new computational reconstruction method based on integral imaging has been proposed to obtain the voxel values of a 3D object or scene. Images of objects along the longitudinal axis are reconstructed computationally from optically recorded elemental images formed by a pickup microlens array. The 3D images reconstructed with the proposed method appear in the correct planes of the original 3D scene. We have conducted experiments with 3D objects to test and verify the performance of the proposed technique. The experimental results show that full 3D volume images can be reconstructed at the exact locations of the original input objects. We have also compared the performance of the proposed computational II reconstruction with that of optical II reconstruction, and the proposed computational method reconstructs better quality images than the optical reconstruction method. The volume pixel values obtained by the proposed method can be used for other image processing applications, such as 3D image surface extraction.

Acknowledgments

Ju-Seog Jang was supported in part by Korea Science and Engineering Foundation Grant R05-2003-000-10968-0.

References and links

1. D. H. McMahon and H. J. Caulfield, “A technique for producing wide-angle holographic displays,” Appl. Opt. 9, 91–96 (1970).

2. P. Ambs, L. Bigue, R. Binet, J. Colineau, J.-C. Lehureau, and J.-P. Huignard, “3D image reconstruction using electrooptic holography,” in Proceedings of the 16th Annual Meeting of the IEEE Lasers and Electro-Optics Society (LEOS 2003), Vol. 1, pp. 172–173 (2003).

3. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).

4. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1996).

5. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).

6. T. Okoshi, Three-Dimensional Imaging Techniques (Academic Press, New York, 1976).

7. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21, 171–176 (1931).

8. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968).

9. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).

10. T. Okoshi, “Optimum design and depth resolution of lens sheet and projection type three dimensional displays,” Appl. Opt. 10, 2284–2291 (1971).

11. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002).

12. J.-S. Jang and B. Javidi, “Formation of orthoscopic three-dimensional real images in direct pickup one-step integral imaging,” Opt. Eng. 42, 1869–1870 (2003).

13. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002).

14. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998).

15. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).

16. F. Jin, J.-S. Jang, and B. Javidi, “Effect of device resolution on three-dimensional integral imaging,” submitted to Opt. Lett.

17. D. L. Marks and D. J. Brady, “Three-dimensional source reconstruction with a scanned pinhole camera,” Opt. Lett. 23, 820–822 (1998).

18. J. W. V. Gissen, M. A. Viergever, and C. N. D. Graff, “Improved tomographic reconstruction in seven-pinhole imaging,” IEEE Trans. Med. Imag. MI-4, 91–103 (1985).

19. L. T. Chang, B. Macdonald, and V. Perez-Mendez, “Axial tomography and three dimensional image reconstruction,” IEEE Trans. Nucl. Sci. NS-23, 568–572 (1976).

20. L. I. Yin and S. M. Seltzer, “Tomographic decoding algorithm for a nonoverlapping redundant array,” Appl. Opt. 32, 3726–3735 (1993).

Supplementary Material (1)

Media 1: AVI (1887 KB)     
