Free-depths reconstruction with synthetic impulse response in integral imaging


Abstract

Integral imaging provides spatial and angular information of three-dimensional (3D) objects, which can be used both for 3D display and for computational post-processing. Several algorithms have been developed to recover the depth information from an integral image. In this paper, we propose a new free-depth synthesis and reconstruction method based on the two-dimensional (2D) deconvolution of the integral image with a simplified version of the periodic impulse response function (IRF) of the system. The period of the IRF depends directly on the axial position within the object space, so the depth information can be retrieved by performing the deconvolution with computed impulse responses of different periods. In addition, alternative reconstructions can be obtained by deconvolving with non-conventional synthetic impulse responses. Our experiments show the feasibility of the proposed method as well as its potential applications.

© 2015 Optical Society of America

1. Introduction

Integral imaging (InI) is a passive 3D technique that has been intensively investigated in recent years. Based on the original idea proposed by Lippmann in 1908 [1], InI uses a microlens array (MLA) placed in front of an image sensor to capture a set of elemental images (EIs). Each elemental image shows a different perspective of an incoherently illuminated 3D scene. As a consequence, InI systems capture both the spatial and the directional information of the rays emerging from the scene [2,3]. This information can be projected onto a 2D display to produce an optical 3D image that can be visualized without the need for any special glasses [4–6]. Although autostereoscopic display was the original application of InI techniques, many other uses have been proposed [7–12]. Among others, depth information of 3D scenes can be extracted from a single shot [13,14]. To perform this task, every EI is computationally projected through a virtual pinhole array whose pitch matches that of the microlens array. In terms of image processing, this is equivalent to superimposing properly shifted EIs and summing the intensity projections [14]. Different axial positions are reconstructed depending on the number of pixels by which the EIs overlap. Based on this pinhole-array model, several computational reconstruction techniques have been developed [15–17], including the possibility of shifting and adding the EIs directly by means of convolution with a set of Dirac delta functions [18–20].
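
As an illustration of this shift-and-sum (back-projection) reconstruction, the following is a minimal Python/NumPy sketch under our own assumptions: the EIs are stored in a 4D array indexed by microlens position, and the depth-dependent disparity is expressed as an integer pixel shift per elemental index (the function and variable names are ours, not taken from the cited works).

```python
import numpy as np

def shift_and_sum(eis, shift):
    """Back-project elemental images through a virtual pinhole array.

    eis   : 4D array (My, Mx, H, W) holding the elemental images.
    shift : integer pixel shift per elemental index; each value of `shift`
            corresponds to a different reconstruction depth.
    """
    My, Mx, H, W = eis.shape
    recon = np.zeros((H, W))
    for my in range(My):
        for mx in range(Mx):
            # np.roll is used for brevity; a full implementation would pad
            # or crop instead of wrapping the image around the borders.
            recon += np.roll(eis[my, mx], (my * shift, mx * shift), axis=(0, 1))
    return recon / (My * Mx)

# Scanning `shift` over a range of values yields images refocused at
# different axial positions of the scene.
```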

Here, we propose a novel free-depth synthesis method to digitally reconstruct 3D scenes by means of the 2D deconvolution of the integral image with its axially dependent impulse response. This approach can reconstruct the 3D scene at arbitrary depths by adjusting the period of the impulse response. In a second stage, we take advantage of the possibility of performing complementary digital processing. Thus, we show that, by using synthetic impulse responses, it is possible to focus simultaneously at different depths, or to extend the depth of field of the reconstructed image in a selected axial region.

The paper is organized as follows. In Section 2, we introduce the basic principles of an InI system. Section 3 is devoted to presenting our proposed method. The experimental results are shown in Section 4. Finally, Section 5 summarizes the achievements of this work.

2. Basic Theory

Let us consider the pickup process of an InI system shown in Fig. 1(a). A 3D object, with intensity distribution O(x;z), where x = (x, y) denotes the transverse coordinates, is placed in front of an MLA, and the 2D image given by each microlens is recorded by a sensor array located at a distance g from the MLA. The focal length and the diameter of the microlenses are f' and d, respectively.

Fig. 1 Scheme of the pickup process in an InI system. Any point source located in a given transverse plane in the object space produces a pattern with the same periodicity over the sensor (a), whereas point sources placed at different axial positions generate diverse IRFs, as shown in (b).

Neglecting diffraction effects in the imaging process, the projection through each microlens of a single point source located at a given distance z0 intersects a single pixel of the sensor (see Fig. 1(a)). Under this assumption, the result of imaging a point source through the MLA, i.e., the IRF of the system, is a periodic pattern, or comb function, with period p(z0). Simple geometrical calculations show that this period is related to the axial position of the point object by

$$p(z_0)=d\left(1+\frac{g}{z_0}\right). \qquad (1)$$
Note that Eq. (1) is valid for all values of the axial coordinate that satisfy the condition $z \geq 2g$.
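
As a purely illustrative numerical example (the values are hypothetical, not those of the experiments below): for a microlens pitch d = 1 mm, a gap g = 3 mm, and a point source at z0 = 300 mm, Eq. (1) gives p = 1·(1 + 3/300) = 1.01 mm, whereas at z0 = 600 mm it gives p = 1.005 mm. The period therefore decreases toward d as the object recedes from the MLA, and this small depth-dependent change of the period is what encodes the axial information.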

In our formalism, we can assume that the 3D scene is confined within the spatial region in which our system can be considered, plane by plane, 2D linear and shift invariant (2D-LSI). In addition, the 3D scene must be placed within the depth of field of the microlens array. In this case, the 2D intensity distribution in the image sensor plane I(x;z), corresponding to an arbitrary section of the object placed at a distance z from the MLA, can be calculated as the 2D convolution between a scaled version of the object section, O(x;z), and the impulse response, h(x;z):

$$I(\mathbf{x};z)=\frac{1}{M_z^{2}}\,O\!\left(\frac{\mathbf{x}}{M_z};\frac{z}{M_z^{2}}\right)\otimes_2 h(\mathbf{x};z), \qquad (2)$$
where $M_z = g/z$ is the lateral magnification, and the symbol $\otimes_2$ represents the 2D convolution. Considering that the depth of field of the system is large enough to provide an in-focus image of all the sections of the 3D scene, the IRF can be expressed as:
$$h(\mathbf{x};z)=\sum_{m_x}\sum_{m_y}\delta\!\left(\mathbf{x}-\mathbf{m}\,p(z)\right), \qquad (3)$$
where the symbol δ refers to the Dirac delta function, and m = (m_x, m_y) is a vector that denotes the microlens index along the transverse directions x and y, in such a way that the EIs of the object are centered at x = m p(z).
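
For the computational method of Section 3, a discrete, pixelated version of the IRF of Eq. (3) is needed. The following is a minimal sketch, assuming the period is expressed in pixels and the comb is centered on the array (the helper name and its arguments are ours):

```python
import numpy as np

def impulse_response(shape, period, n_lenses):
    """Pixelated IRF of Eq. (3): a comb of deltas with period p(z) in pixels.

    shape    : (rows, cols) of the integral image.
    period   : IRF period p(z), in pixels, for the chosen depth z.
    n_lenses : number of microlens indices m considered on each side of the axis.
    """
    h = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    for my in range(-n_lenses, n_lenses + 1):
        for mx in range(-n_lenses, n_lenses + 1):
            y = cy + int(round(my * period))
            x = cx + int(round(mx * period))
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                h[y, x] = 1.0
    return h
```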

The intensity distribution at the sensor plane produced by the entire 3D object is calculated as:

$$I(\mathbf{x})=\int_{2g}^{\infty} I(\mathbf{x};z)\,dz. \qquad (4)$$

In Fig. 2, we show a scheme of the imaging process represented by Eqs. (2)-(4). To illustrate the procedure, we have considered a 3D scene composed of two planar objects placed at different depths, so that their IRFs have different periods. The integral image is composed of a set of spatially shifted replicas of the two objects.
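
To make Eqs. (2)-(4) and the scheme of Fig. 2 concrete, the sketch below simulates, under our own simplifying assumptions (unit magnification, IRF periods given directly in pixels, two flat binary objects), the integral image as the sum of each plane convolved with its own IRF; none of the numbers correspond to the experiments reported later.

```python
import numpy as np
from scipy.signal import fftconvolve

shape = (512, 512)

def plane_object(shape, top, left, height, width):
    """Hypothetical binary object occupying one rectangle of its plane."""
    obj = np.zeros(shape)
    obj[top:top + height, left:left + width] = 1.0
    return obj

def comb(shape, period):
    """Comb of deltas with the given period in pixels, as in Eq. (3)."""
    h = np.zeros(shape)
    h[::period, ::period] = 1.0
    return h

# Two planes at different depths, hence two different IRF periods (cf. Fig. 2).
scene = [(plane_object(shape, 100, 100, 20, 20), 60),
         (plane_object(shape, 300, 200, 12, 40), 64)]

integral_image = np.zeros(shape)
for obj, period in scene:
    # Eq. (2) with M_z = 1: each object section convolved with its own IRF.
    integral_image += fftconvolve(obj, comb(shape, period), mode='same')
# integral_image now plays the role of the sum over planes in Eq. (4).
```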

Fig. 2 Acquisition process with an InI system. The integral image is obtained by summation of the individual intensity distributions relative to each object of the scene. These individual intensity distributions are the result of the convolution of the object intensity distribution with the corresponding IRF.

3. Proposed method

To extract the depth information from an integral image and refocus the scene at different planes, we propose to perform a set of 2D deconvolutions of the integral image with the corresponding impulse responses.

Since the sensor has a finite number of pixels, only a discrete set of periodic, pixelated IRFs is available. As a result, the number of reconstruction planes is limited. Let us consider now a more general 3D scene that consists of a set of N discrete reconstruction planes, whose intensity distributions can be expressed by Eq. (2). By discretizing Eq. (4) we have:

$$I(\mathbf{x})=\sum_{n=1}^{N} I(\mathbf{x};z_n). \qquad (5)$$
Performing the 2D Fourier transform of Eq. (2) and summing over the N planes, as in Eq. (5), we obtain the spectrum of the integral image
$$\tilde{I}(\mathbf{u})=\sum_{n=1}^{N}\tilde{O}\!\left(M_{z_n}\mathbf{u};\frac{z_n}{M_{z_n}^{2}}\right)H(\mathbf{u};z_n), \qquad (6)$$
where u = (u, v) denotes the transverse spatial frequencies and H(u;z_n) is the 2D optical transfer function (OTF) of the system. As every plane of the scene is uniquely defined by the periodicity of its IRF, we can compute 2D deconvolutions, using tools such as the Wiener filter, with a set of OTFs corresponding to the different periods in the image space. For instance, to obtain the reconstruction at plane n = 1, we perform the following operation:
$$\tilde{I}_R(\mathbf{u})=\tilde{I}(\mathbf{u})\,\frac{\hat{H}^{*}(\mathbf{u};z_1)}{|\hat{H}(\mathbf{u};z_1)|^{2}+w^{2}}, \qquad (7)$$
in which the tilde denotes the 2D Fourier transform of the corresponding function, * denotes complex conjugation, $w^2$ is the Wiener parameter [21], and $\hat{H}(\mathbf{u};z_1)$ is the computed OTF. Considering that the latter perfectly matches the OTF of the system for the corresponding plane at $z = z_1$, we have

$$\tilde{I}_R(\mathbf{u})=\tilde{O}\!\left(M_{z_1}\mathbf{u};\frac{z_1}{M_{z_1}^{2}}\right)+\sum_{n=2}^{N}\tilde{O}\!\left(M_{z_n}\mathbf{u};\frac{z_n}{M_{z_n}^{2}}\right)\frac{H(\mathbf{u};z_n)\,\hat{H}^{*}(\mathbf{u};z_1)}{|\hat{H}(\mathbf{u};z_1)|^{2}+w^{2}}. \qquad (8)$$

Note that if $w^2 \ll |\hat{H}(\mathbf{u};z_1)|^2$, which is the case for a proper signal-to-noise ratio in the reconstructed images, then we can extract the object information. The final reconstruction is obtained by performing the inverse Fourier transform of the above equation,

$$I_R(\mathbf{x})\approx O\!\left(\frac{\mathbf{x}}{M_{z_1}};\frac{z_1}{M_{z_1}^{2}}\right)+\sum_{n=2}^{N} O_{z_n}\!\left(\frac{\mathbf{x}}{M_{z_n}};\frac{z_n}{M_{z_n}^{2}}\right)\otimes_2 h_{\mathrm{def}}(\mathbf{x};z_1), \qquad (9)$$
where $h_{\mathrm{def}}(\mathbf{x};z_1)$ is a function that contains the information of the planes whose impulse responses do not have period $p(z_1)$. As a consequence, these planes suffer a certain amount of defocus. From the above equation it can be seen that, in the reconstruction, the plane $z_1$ is in focus, whereas the rest of the planes are summed with a periodicity imposed by the function $\hat{H}(\mathbf{u};z_1)$, which creates a blurring effect. It should be pointed out that, with this method, the reconstruction is an array representing the intensity of the scene, since the algorithm applies the inverse of the physical capture process. In contrast, the standard algorithms require an intensity-normalization matrix to correct the intensity fluctuations with respect to the real object, which would otherwise appear as artifacts in the reconstruction.
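
A minimal sketch of the Wiener-type deconvolution of Eq. (7), assuming the integral image and the computed IRF for the selected depth are equally sized 2D arrays; the function name, the variable names, and the default value of the Wiener parameter are our own choices, not the authors'.

```python
import numpy as np

def wiener_deconvolve(integral_image, irf, w2=1e-2):
    """Reconstruct one depth plane by 2D deconvolution, as in Eq. (7).

    integral_image : 2D array, the captured integral image I(x).
    irf            : 2D array, the computed impulse response h(x; z1).
    w2             : Wiener parameter; it should be small compared with
                     |H|^2 wherever the object spectrum is significant.
    """
    I = np.fft.fft2(integral_image)
    H = np.fft.fft2(irf)                      # computed OTF, H_hat(u; z1)
    I_rec = I * np.conj(H) / (np.abs(H) ** 2 + w2)
    # If the IRF is centered in the array rather than at pixel (0, 0),
    # the output is circularly shifted by half the array size and can be
    # re-centered with np.fft.fftshift.
    return np.real(np.fft.ifft2(I_rec))
```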

4. Experimental results

To show the feasibility of our method, we performed a capture experiment in which the 3D scene was composed of two dolls placed at different depths. In this experiment, instead of using an array of microlenses, we used the synthetic aperture method [22]. The digital camera (Canon 450D, with a CMOS sensor of 4272×2848 pixels of 5.2 µm pitch, fitted with an EF-S 18-55 mm lens) was mechanically translated over a total distance of 50 mm along both the x and y directions to capture the EIs. The two objects were placed at distances of 320 mm and 350 mm in front of the digital camera. The recorded integral image is shown in Fig. 3(a). It is composed of 11×11 EIs with 600×600 pixels each (see the inset of Fig. 3(b)).

Fig. 3 (a) Integral image obtained with the experimental setup. The integral image is composed of 11×11 EIs. (b) 3×3 EIs extracted from (a). The inset shows the central view of the integral image.

The reconstruction algorithm was implemented in MATLAB as follows. First, we created a set of matrices containing the IRFs associated with the system. The size of these matrices matches the integral-image size, and the total number of feasible impulse responses is determined by the number of pixels of the EIs. We then computed the fast Fourier transform (FFT) of both the integral image and every impulse response, and applied our deconvolution method given by Eq. (7). By performing the inverse Fourier transforms, we obtained a stack of depth-reconstructed images.
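
For reference, a hedged Python/NumPy sketch of the same pipeline (the original code was written in MATLAB and is not reproduced here; the array layout, the helper names, and the range of candidate periods are our assumptions):

```python
import numpy as np

def comb_irf(shape, period):
    """Pixelated IRF: a comb of deltas with the given period in pixels."""
    h = np.zeros(shape)
    h[::period, ::period] = 1.0
    return h

def depth_stack(integral_image, periods, w2=1e-2):
    """Deconvolve the integral image with one IRF per candidate period,
    as in Eq. (7), and return one refocused image per depth."""
    I = np.fft.fft2(integral_image)
    stack = []
    for p in periods:
        H = np.fft.fft2(comb_irf(integral_image.shape, p))
        rec = np.fft.ifft2(I * np.conj(H) / (np.abs(H) ** 2 + w2))
        stack.append(np.real(rec))
    return np.stack(stack)

# Example: scan periods around the values reported in Fig. 4 (515 and 523 px).
# recon = depth_stack(integral_image, periods=range(510, 527))
```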

Although acquiring a large number of EIs is useful to increase the number of perspectives of the 3D scene, for the application of our depth-reconstruction method we only considered 3×3 EIs of the integral image. Instead of taking consecutive images from the integral image, we took the most separated ones, marked in Fig. 3(a). This was done to increase the parallax of the capture and, therefore, the number of feasible IRFs for depth reconstruction. Note that the higher the number of EIs, the smoother the blur of the out-of-focus regions of the 3D scene.

With the reduced integral image shown in Fig. 3(b) and the IRFs calculated from Eq. (3), we performed the depth reconstruction with the proposed method (Eq. (8)). In Fig. 4, we show two planes of reconstruction for the 3D scene captured in our experiment and a representation of their corresponding impulse responses.

Fig. 4 Depth reconstruction of the 3D scene after applying our algorithm based on 2D deconvolution. The reconstruction is calculated at planes located at distances (a) 320 mm, and (b) 350 mm. The corresponding IRFs are depicted in the bottom-right of the figures. The impulse response array is shown in the red box, with a periodicity of (a) 515 and (b) 523 pixels.

Note that the algorithm is carried out in the Fourier domain. As a result, the reconstructed images have the same number of pixels as the integral image, independently of the reconstruction depth. However, the reconstructed information is contained within a window with the same size as the elemental images.

To evaluate the resolution of the reconstructed images, we performed a second capture experiment with a USAF 1951 test chart as the object. Again, we selected 3×3 elemental images of the test chart. In this case, the total displacement of the camera was 44 mm along both the x and y directions, and the object was placed at 300 mm from the camera. As can be seen in Fig. 5, the resolving power is 0.4 mm (Group 0, Element 3). This resolution is equivalent to that achieved with the traditional reconstruction methods, and is consistent with the theoretical prediction [23].
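
For completeness, the quoted resolving power follows from the standard USAF 1951 target relation (not stated explicitly in the text): the spatial frequency of Group G, Element E is 2^{G+(E−1)/6} line pairs/mm, so Group 0, Element 3 corresponds to 2^{1/3} ≈ 1.26 lp/mm, i.e., a line width of 1/(2·1.26) ≈ 0.40 mm, in agreement with the value given above.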

Fig. 5 USAF 1951 test chart reconstructed with (a) the standard reconstruction algorithm and (b) our proposed method. We achieve a resolution of 0.4 mm, corresponding to the Element 3 of Group 0 (green box).

The most important feature of the proposed algorithm is that, since the deconvolution is applied in the Fourier domain, it opens up further possibilities of digital processing in integral imaging. To show an example of this advantage, let us consider the integral image shown in Fig. 3(b). We can easily extend the depth of field of the reconstructed images by deconvolving the integral image with a synthetic IRF that is a combination of the individual impulse responses associated with different axial positions. An example of this is shown in Figs. 6(a) and 6(b), which are calculated with a synthetic IRF obtained as the sum of two individual IRFs representing different depths. The periodicities of the corresponding impulse responses are 510 and 523 pixels for Fig. 6(a), and 517 and 525 pixels for Fig. 6(b). Note that in Fig. 6(a) the label “Teacher” and the moustache are simultaneously in focus, whereas in Fig. 6(b) the apple and the elbow are in focus.

Fig. 6 Application of the proposed algorithm for simultaneously synthesizing different depths. (a) and (b) show the simultaneous reconstruction of two non-consecutive planes of the 3D scene. Besides, an extended depth of field reconstruction is possible: in (c), all planes of the object located at 320 mm are in focus, whereas in (d) the whole 3D scene is in focus.

More complex synthetic IRFs make it possible to increase at will the depth of field of the reconstructed images, up to the whole DOF of the system. This is the case in Fig. 6(c), in which a synthetic IRF composed of three individual ones allows us to focus the whole first doll. A synthetic impulse response composed of seven individual ones brings the whole 3D scene into focus, as seen in Fig. 6(d). The periodicities of the IRFs range from 510 to 518 pixels in Fig. 6(c), and from 510 to 526 pixels in Fig. 6(d).
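
A hedged sketch of how such a synthetic IRF can be formed and used, following the same Wiener deconvolution as before (periods in pixels; all names are ours):

```python
import numpy as np

def comb_irf(shape, period):
    """Pixelated IRF: a comb of deltas with the given period in pixels."""
    h = np.zeros(shape)
    h[::period, ::period] = 1.0
    return h

def synthetic_refocus(integral_image, periods, w2=1e-2):
    """Deconvolve with a synthetic IRF built as the sum of the individual
    IRFs of several depths, so that all of them are rendered in focus."""
    h_syn = sum(comb_irf(integral_image.shape, p) for p in periods)
    H = np.fft.fft2(h_syn)
    I = np.fft.fft2(integral_image)
    return np.real(np.fft.ifft2(I * np.conj(H) / (np.abs(H) ** 2 + w2)))

# e.g. the two-plane refocusing of Fig. 6(a) combines periods of 510 and
# 523 pixels, while the all-in-focus result of Fig. 6(d) combines seven
# periods between 510 and 526 pixels.
```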

Note that when an MLA is used, the EIs may be distorted due to the quality of the microlenses. The method proposed here could be applied to compensate for such image distortions by including the diffraction effects in the IRFs. Also, measured OTFs for several distances could be included in Eq. (7) in order to take into account any source of image-quality degradation.

5. Conclusions

In this paper, we have presented a new method for free-depth synthesis and reconstruction in InI. It allows us to extract the depth information of a 3D scene captured by an InI system and to refocus it at different planes. The reconstruction algorithm is based on the 2D deconvolution of the integral image with a set of calculated impulse responses, each with a different periodic pattern corresponding to an axial reconstruction plane. Our experimental results show the feasibility of the method, as well as alternative applications of the reconstruction.

Acknowledgments

This work was supported in part by the Plan Nacional I + D + I, under the grant DPI2012-32994, Ministerio de Economía y Competitividad, Spain. We also acknowledge the support from the Generalitat Valenciana, Spain, (grant PROMETEOII/2014/072). A. Llavador acknowledges a predoctoral grant from University of Valencia (UV-INV-PREDOC13-110484). E. Sánchez-Ortiga acknowledges a postdoctoral contract from Generalitat Valenciana (APOSTD/2015/094). Bahram Javidi acknowledges support under NSF IIS 1422179.

References and links

1. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).

2. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21(3), 171–176 (1931).

3. C. B. Burckhardt, “Optimum parameters and resolution limitation of Integral Photography,” J. Opt. Soc. Am. 58(1), 71–76 (1968).

4. T. Okoshi, “Three-dimensional displays,” Proc. IEEE 68(5), 548–564 (1980).

5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

6. A. Stern, Y. Yitzhaky, and B. Javidi, “Perceivable light fields: Matching the requirements between the human visual system and autostereoscopic 3-D displays,” Proc. IEEE 102(10), 1571–1587 (2014).

7. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37(11), 2034–2045 (1998).

8. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002).

9. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Opt. Express 11(12), 1346–1356 (2003).

10. P. Latorre-Carmona, E. Sánchez-Ortiga, X. Xiao, F. Pla, M. Martínez-Corral, H. Navarro, G. Saavedra, and B. Javidi, “Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter,” Opt. Express 20(23), 25960–25969 (2012).

11. A. Carnicer and B. Javidi, “Polarimetric 3D integral imaging in photon-starved conditions,” Opt. Express 23(5), 6408–6417 (2015).

12. X. Xiao, B. Javidi, M. Martínez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013).

13. H. Arimoto and B. Javidi, “Integral 3D imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).

14. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004).

15. S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004).

16. M. Cho and B. Javidi, “Computational reconstruction of three-dimensional integral imaging by rearrangement of elemental image pixels,” J. Disp. Technol. 5(2), 61–65 (2009).

17. D. H. Shin and H. Yoo, “Computational integral imaging reconstruction method of 3D images using pixel-to-pixel mapping and image interpolation,” Opt. Commun. 282(14), 2760–2767 (2009).

18. J.-Y. Jang, J. I. Ser, S. Cha, and S. H. Shin, “Depth extraction by using the correlation of the periodic function with an elemental image in integral imaging,” Appl. Opt. 51(16), 3279–3286 (2012).

19. J. Y. Jang, D. Shin, B. G. Lee, S. P. Hong, and E. S. Kim, “3D image correlator using computational integral imaging reconstruction based on modified convolution property of periodic functions,” J. Opt. Soc. Korea 18(4), 388–394 (2014).

20. J. Y. Jang, D. Shin, and E. S. Kim, “Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral-imaging,” Opt. Express 22(2), 1533–1550 (2014).

21. C. W. Helstrom, “Image restoration by the method of least squares,” J. Opt. Soc. Am. 57(3), 297–303 (1967).

22. J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27(13), 1144–1146 (2002).

23. H. Navarro, E. Sánchez-Ortiga, G. Saavedra, A. Llavador, A. Dorado, M. Martinez-Corral, and B. Javidi, “Non-homogeneity of lateral resolution in integral imaging,” J. Disp. Technol. 9(1), 37–43 (2013).
