
Enhanced depth of field integral imaging with sensor resolution constraints

Open Access

Abstract

One of the main challenges in integral imaging is to overcome the limited depth of field. Although it is widely assumed that this limitation is mainly imposed by diffraction due to lenslet imaging, we show that the most restrictive factor is the pixelated structure of the sensor (CCD). In this context, we demonstrate that by proper reduction of the fill factor of the pickup microlenses, the depth of field can be substantially improved with no deterioration of lateral resolution.

©2004 Optical Society of America

1. Introduction

Three-dimensional (3D) image display and visualization have been subjects of great interest [1–4]. Integral imaging is a promising 3D imaging technique that provides autostereoscopic images over a continuous range of viewpoints and does not require the use of any special glasses [5–11]. In an integral imaging system (IIS), a 2D collection of elemental images of a 3D object is generated by a microlens array and recorded in a sensor such as a CCD. Each elemental image has a different perspective of the 3D object. The recorded 2D elemental images are displayed by an optical device, such as an LCD, placed in front of another microlens array to reconstruct the 3D image. Since the technique was revived by Okano et al. in 1997 [12], many important theoretical and experimental contributions on integral imaging have been reported [13–27].

One of the main problems facing integral imaging is its limited depth of field (DOF). The DOF of an IIS is influenced by many parameters related to both the capture and the display setups. However, to reconstruct a clear 3D integral image of an axially elongated 3D object, it is essential to capture sharp 2D elemental images of the object. It is commonly assumed that the DOF is governed by the limited focal depth of the microlenses used in the pickup. Accordingly, techniques have been proposed to improve the DOF while maintaining the resolution. These methods are based on the use of amplitude-modulated microlenses [20], a uniaxial crystal plate that alters the optical path length [23], the synthesis of real and virtual image fields [24], or lenslets with nonuniform focal lengths and aperture sizes [25].

In this paper, we show that the most restrictive factor of lateral resolution, at any depth, is the pixelated structure of the recording device (CCD). Considering the age of Lippmann’s proposal, it is somewhat surprising how little theoretical or experimental attention this topic has received. In this sense we note only the paper by Okano et al. [12], where the influence of the circle of confusion of the camera was analyzed in terms of geometrical optics. Park et al. [16] studied the influence of pixelation on resolution, but in the display process. What we demonstrate here is that the elemental images corresponding to a wide range of axial positions of the 3D object have the same lateral resolution, which is imposed by the detector pixel size. Beyond this axial range, the lateral resolution decays very fast and is governed by diffraction. We exploit this fact to demonstrate that by proper reduction of the fill factor of the microlenses in the pickup, one can significantly enlarge the DOF of the IIS. This enlargement is not accompanied by a deterioration of spatial resolution.

2. Theory

Consider the capture setup of an IIS, as shown in Fig. 1. In this scheme a 3D surface object is illuminated by a spatially incoherent light beam of mean wavelength λ. The light emitted by the object is collected by the microlens array to form, in the aerial pickup plane, a collection of 2D elemental aerial images. The reference and the aerial pickup planes are conjugated through the microlenses, so that the distances a and g are related by the lens law, 1/a+1/g=1/f. Each elemental image has a different perspective of the 3D surface object. A relay system projects the aerial images onto the pickup device (CCD). The lateral magnification of the relay system is adjusted so that the size of the elemental-images collection matches the CCD.
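As a quick numerical illustration (using the values f = 5.0 mm and a = 100 mm adopted later for the calculations), the lens law fixes the aerial pickup distance, and with it the magnification at the reference plane:

$$g=\left(\frac{1}{f}-\frac{1}{a}\right)^{-1}=\left(\frac{1}{5\ \mathrm{mm}}-\frac{1}{100\ \mathrm{mm}}\right)^{-1}\approx 5.26\ \mathrm{mm},\qquad \frac{g}{a}\approx 0.053.$$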

Fig. 1. Scheme, not to scale, of the capture setup of a 3D IIS. The field lens collects the rays from the outermost microlenses; the camera lens projects the images onto the CCD.

Consider now the light scattered at an arbitrary point (x,z) of the 3D object. It is straightforward, by application of paraxial scalar diffraction equations [28], to find that the intensity distribution at a given point x′=(x′,y′) of the aerial pickup plane is given by [20]

$$H(\mathbf{x}';\mathbf{x},z)=\sum_{\mathbf{m}}\left|\exp\!\left\{\frac{i\pi}{\lambda (a-z)}\,\bigl|\mathbf{m}p-\mathbf{x}\bigr|^{2}\right\}\int P_{z}(\mathbf{x}_{o})\,\exp\!\left\{-\frac{i2\pi}{\lambda g}\,\mathbf{x}_{o}\cdot\bigl[\mathbf{x}'+M_{z}(\mathbf{m}p-\mathbf{x})-\mathbf{m}p\bigr]\right\}d^{2}\mathbf{x}_{o}\right|^{2},\tag{1}$$

where m=(m,n) accounts for the microlens indexes in the (x,y) directions, and p for the constant pitch of the microlens array. Mz=g/(a-z) is the lateral magnification, which depends on the depth coordinate z. The so-called generalized pupil function is

$$P_{z}(\mathbf{x}_{o})=p(\mathbf{x}_{o})\,\exp\!\left\{\frac{i\pi z}{\lambda a(a-z)}\,\bigl|\mathbf{x}_{o}\bigr|^{2}\right\}.\tag{2}$$

This function accounts for the microlens pupil function, p(xo), and for the phase modulation due to defocus. Let us remark that the quantity of interest in our investigation is the intensity distribution not at the aerial pickup plane but at the pickup device. Note, however, that the latter distribution is a uniformly scaled version of Eq. (1). Assuming insignificant overlap between the elemental diffraction spots, Eq. (1) can be rewritten as the 2D convolution:

$$H(\mathbf{x}';\mathbf{x},z)=\left|\tilde{P}_{z}\!\left(\frac{\mathbf{x}'}{\lambda g}\right)\right|^{2}\otimes\sum_{\mathbf{m}}\delta\!\left\{\mathbf{x}'-\bigl[\mathbf{m}p\,(1-M_{z})+M_{z}\,\mathbf{x}\bigr]\right\}.\tag{3}$$

Here P̃z stands for the 2D Fourier transform of the generalized pupil function. Eq. (3) indicates that any point of the surface object produces a collection of identical diffraction spots on the CCD. The positions of the spots depend on both the microlens indexes and the lateral and axial position of the point. The shape of the spots depends on the lenslet pupil function and on the depth coordinate of the point. The intensity distribution of each spot is

$$H_{o}(r,z)=\left|\int_{0}^{\phi/2}p(r_{o})\,\exp\!\left\{\frac{i\pi z}{\lambda a(a-z)}\,r_{o}^{2}\right\}J_{0}\!\left(\frac{2\pi r\,r_{o}}{\lambda g}\right)r_{o}\,dr_{o}\right|^{2},\tag{4}$$

where we assume that the lenslets are circular, with diameter ϕ, and therefore express the intensity distribution in cylindrical coordinates. In Fig. 2(a) we represent a meridian section of Eq. (4). The parameters for the calculation were ϕ = 1.0 mm, f = 5.0 mm, λ = 0.55 μm, and a = 100 mm.
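For readers who wish to reproduce the profile behind Fig. 2(a), the following minimal Python sketch evaluates Eq. (4) by direct numerical quadrature. It assumes a clear circular pupil, p(ro) = 1, and the parameter values quoted above; the function name H_o and the sampling grid are our illustration choices, not part of the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Parameters quoted in the text (SI units)
WL = 0.55e-6                     # mean wavelength lambda
F, A, PHI = 5.0e-3, 100.0e-3, 1.0e-3
G = 1.0 / (1.0 / F - 1.0 / A)    # lens law: aerial pickup distance (~5.26 mm)

def H_o(r, z, phi=PHI):
    """Defocused spot intensity, Eq. (4), for a clear circular pupil p(ro)=1."""
    w = np.pi * z / (WL * A * (A - z))   # defocus phase coefficient
    k = 2.0 * np.pi / (WL * G)
    re, _ = quad(lambda u: np.cos(w * u * u) * j0(k * r * u) * u,
                 0.0, phi / 2, limit=200)
    im, _ = quad(lambda u: np.sin(w * u * u) * j0(k * r * u) * u,
                 0.0, phi / 2, limit=200)
    return re * re + im * im

# Example: meridian profile of the in-focus spot (z = 0); its first zero
# should sit near the Rayleigh radius 1.22 * WL * G / PHI.
r_axis = np.linspace(0.0, 10e-6, 101)
profile = np.array([H_o(r, 0.0) for r in r_axis])
print(profile / profile.max())
```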

Let us revisit at this point the concepts of lateral resolution and depth of field. The resolution of an imaging system evaluates its capacity for producing sharp images of the finest features of the object when it is in focus. In the case of diffraction-limited imaging systems, resolution is usually evaluated in terms of the Rayleigh criterion. According to it, the resolution of the pickup system under study is determined by the radius of the first zero ring of the Airy disk, Ho(r,0). Note that the central lobe contains 84% of the energy of the Airy disk.

The DOF of an imaging system is the distance by which the object may be axially shifted before an unacceptable blur is produced. In diffraction-limited imaging systems, the DOF is usually evaluated in terms of the so-called Rayleigh range: the axial distance from the in-focus plane to the point that produces a spot whose radius has increased by a factor of 2^{1/2}. The evaluation of the radii of the spots produced by out-of-focus points (z≠0) is not as simple as in the in-focus case, because as z increases the spot spreads and neither a central lobe nor a zero ring can be recognized. In this case we define the defocused-spot diameter as that of the circle that encircles 84% of the overall pattern energy. In mathematical terms, such a diameter, D(z), is the one that solves the equation

$$\int_{0}^{D/2}H_{o}(r,z)\,r\,dr=0.84\int_{0}^{\infty}H_{o}(r,z)\,r\,dr.\tag{5}$$

In Fig. 2(b) we represent, with a black line, the defocused-spot diameter for different values of the distance z. We conclude from the figure that if only the limits imposed by diffraction are considered, the resolution limit of the pickup system under study is 3.33 μm, measured in the aerial pickup plane, and the DOF extends, for positive values of z, to +8.5 mm.
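These numbers can be cross-checked numerically. The sketch below solves Eq. (5) for D(z) by bracketing the 84%-encircled-energy diameter with a root finder; it repeats the Eq. (4) integrand so as to be self-contained, and the truncation radius R_MAX standing in for the infinite upper limit of Eq. (5) is our choice. It is a deliberately brute-force evaluation (nested quadratures), meant only to illustrate the definition, not an efficient implementation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0
from scipy.optimize import brentq

WL, F, A, PHI = 0.55e-6, 5.0e-3, 100.0e-3, 1.0e-3
G = 1.0 / (1.0 / F - 1.0 / A)
R_MAX = 100e-6        # stands in for the infinite upper limit of Eq. (5)

def H_o(r, z, phi=PHI):
    # Eq. (4): defocused spot intensity for a clear circular pupil
    w = np.pi * z / (WL * A * (A - z))
    k = 2.0 * np.pi / (WL * G)
    re, _ = quad(lambda u: np.cos(w*u*u) * j0(k*r*u) * u, 0.0, phi/2, limit=200)
    im, _ = quad(lambda u: np.sin(w*u*u) * j0(k*r*u) * u, 0.0, phi/2, limit=200)
    return re*re + im*im

def spot_diameter(z, phi=PHI):
    # Eq. (5): diameter of the circle enclosing 84% of the pattern energy
    total, _ = quad(lambda r: H_o(r, z, phi) * r, 0.0, R_MAX, limit=200)
    frac = lambda d: quad(lambda r: H_o(r, z, phi) * r, 0.0, d / 2,
                          limit=200)[0] - 0.84 * total
    return brentq(frac, 1e-7, 2.0 * R_MAX)

print(spot_diameter(0.0))       # in-focus 84%-energy diameter (~7 um here)
print(spot_diameter(8.5e-3))    # near the diffraction-limited DOF edge
```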

Fig. 2. (a) Grey-scale representation of Eq. (4). Each cross section corresponds to the spot produced by an object point at depth z. White lines delimit the back-projected pixel size. The effect of defocus is much more appreciable for positive values of z; (b) Spot diameter for different values of the fill factor. The thick black line marks the back-projected pixel size.

3. The influence of the detector pixel size

To appreciate the influence of pixelation on the lateral resolution at any depth position of the object, we assume that the CCD has 1024×768 square pixels and that the array has 34×25 microlenses; therefore each elemental image has 30×30 pixels. In Fig. 2(a) we have drawn a pair of horizontal lines separated by a distance that equals the back projection of the pixel size onto the aerial pickup plane. When the intensity spot is smaller than the (back-projected) pixel size, the resolution limit is imposed by the pixelated structure of the CCD. Conversely, when the intensity spot is bigger than the pixel size, the resolution limit is imposed by diffraction. In Fig. 2(b) we have plotted a thick horizontal line that corresponds to the back-projected pixel size. From this figure, some important properties of the captured elemental images can be deduced: (a) the resolution limit for objects at z=0 is determined by the CCD pixel size, and this limit is much coarser than the one imposed by diffraction; (b) for objects in a large range of axial positions z, the resolution limit is still imposed by the CCD, and is therefore the same as for objects at z=0; (c) for objects beyond this range, the resolution is imposed by the spot diameter, which rapidly increases with z; (d) the range of axial positions in which the resolution limit does not change now defines the DOF of the capture setup of an IIS.

We can then conclude that, contrary to what is commonly assumed, in a large range of depth positions the lateral resolution of the capture setup is determined not by diffraction but by the CCD. This fact provides us with one additional degree of freedom in the design of the optimum pickup. Specifically, one can safely increase the DOF by using techniques that in diffraction-limited systems would deteriorate the lateral resolution at z=0. In this sense, one can decrease the lenslet fill factor, defined as the quotient between the diameter of the microlenses, ϕ, and the pitch, p. Decreasing the lenslet fill factor is known to increase the spot diameter at z=0, but to reduce it significantly at larger values of z. Reducing the fill factor therefore does not affect the lateral resolution at z=0 (which is determined by the CCD), but substantially increases the DOF, as estimated in the sketch below. In Fig. 2(b) we have represented, with colored lines, the evolution of the spot diameter for different values of the fill factor. All the cases represented have the same lateral resolution at low values of z. However, for example, the DOF obtained with ϕ/p=0.5 is 40% longer than the one obtained with ϕ/p=1.0. At z=54 mm the resolution limit obtained with ϕ/p=0.5 is half of that obtained with a fill factor of 1. Besides, lenslets with a low fill factor can improve the viewing angle of an IIS [22].
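To make the fill-factor argument concrete, the following self-contained sketch estimates the pixel-limited DOF edge: the depth at which the 84%-energy spot diameter of Eqs. (4)–(5) outgrows the back-projected pixel. It assumes, consistent with the parameters above, a pitch p = 1.0 mm (so that ϕ/p = 1 corresponds to ϕ = 1.0 mm), 30 pixels per pitch, and clear circular pupils; like the previous sketch it is a slow brute-force computation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0
from scipy.optimize import brentq

WL, F, A, P = 0.55e-6, 5.0e-3, 100.0e-3, 1.0e-3
G = 1.0 / (1.0 / F - 1.0 / A)
PIXEL_BP = P / 30.0   # back-projected pixel size (~33 um): 30 pixels per pitch
R_MAX = 150e-6        # truncation radius for the energy integrals

def spot_diameter(z, phi):
    # 84%-encircled-energy diameter of the Eq. (4) spot, cf. Eq. (5)
    w = np.pi * z / (WL * A * (A - z))
    k = 2.0 * np.pi / (WL * G)
    def H(r):
        re, _ = quad(lambda u: np.cos(w*u*u) * j0(k*r*u) * u,
                     0.0, phi/2, limit=200)
        im, _ = quad(lambda u: np.sin(w*u*u) * j0(k*r*u) * u,
                     0.0, phi/2, limit=200)
        return re*re + im*im
    total, _ = quad(lambda r: H(r) * r, 0.0, R_MAX, limit=400)
    frac = lambda d: quad(lambda r: H(r) * r, 0.0, d/2, limit=400)[0] - 0.84*total
    return brentq(frac, 1e-7, 2.0 * R_MAX)

# Depth z_edge at which diffraction overtakes pixelation; a larger z_edge
# means a longer pixel-limited DOF.
for ff in (1.0, 0.75, 0.5):
    z_edge = brentq(lambda z: spot_diameter(z, ff * P) - PIXEL_BP, 1e-4, 70e-3)
    print(f"fill factor {ff:.2f}: pixel-limited DOF edge ~ {z_edge*1e3:.1f} mm")
```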

To further illustrate our proposal, we performed a numerical imaging experiment with a computer-generated synthetic object; see Fig. 3(a). In the first step, we calculated the elemental images assuming that the object was placed at z=0 and that the fill factor was ϕ/p=1.0. The images captured from 49 different views are shown in Fig. 3(b). In Fig. 3(c) we show an enlarged view of the central elemental image, m=(0,0).

Fig. 3. (a) Synthetic object; (b) 2D elemental images captured from 49 views; (c) Enlarged view of the central image. The object was placed at z=0 and the fill factor was set at ϕ/p=1.0.

Next, in Fig. 4, we show the elemental images obtained with fill factor ϕ/p=0.5. There are no differences in resolution between these images and the ones obtained with ϕ/p=1.0.

Fig. 4. (a) 2D elemental images of the object captured from 49 different views; (b) Enlarged view of the central image. The object was placed at z=0 and the fill factor was set at ϕ/p=0.5.

In Fig. 5 we show the evolution of the central elemental image as the synthetic object is axially displaced from z=0 to z=67.5 mm. Note that the magnification factor Mz increases with z. It is apparent that for large values of z the resolution obtained with ϕ/p=0.5 is much better than that obtained with ϕ/p=1.0.

Fig. 5. Central elemental image as the object is displaced from z=0 to z=67.5 mm. The left-hand image corresponds to ϕ/p=0.5, the right-hand one to ϕ/p=1.0 (video file of 0.33 MB).

To finish our numerical integral imaging experiment, we calculated, by simulation, the reconstructed images from the simulated elemental images. For this numerical reconstruction we considered that the integral image was observed by an eye with pupil diameter ϕE = 3 mm, placed on the optical axis of the central microlens at a distance ℓ = 300 mm from the reference plane. The reconstructed images, shown in Fig. 6, were calculated from a collection of 17×17 elemental images. For the reconstruction we used, in the two cases under study, the lenslet array with ϕ/p=1.0 in order to minimize the vignetting. Despite this, some residual vignetting is observed in the images. The images naturally show a pixelated structure. Note that for large z the image obtained in the ϕ/p=1.0 case has five gray levels, whereas the image obtained in the ϕ/p=0.5 case has only three. Since the original object was binary, the latter constitutes a much more precise reproduction of it.

Fig. 6. Evolution of the reconstructed image as the object is displaced from z=0 to z=67.5 mm. The left-hand image corresponds to ϕ/p=0.5, the right-hand one to ϕ/p=1.0 (video file of 0.63 MB).
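For reference, the essence of such a reconstruction can be emulated in a few lines. The sketch below is not the eye-pupil observation model used for Fig. 6 (a 3 mm pupil at 300 mm); it is the standard shift-and-sum computational reconstruction, in which every elemental image is back-projected with a disparity proportional to its lenslet index and the results are averaged. The function name and the disparity parameter are our illustration choices.

```python
import numpy as np

def shift_and_sum(elemental, disparity):
    """Plain shift-and-sum computational reconstruction.

    elemental : float array of shape (Ny, Nx, h, w) holding the grid of
                elemental images (e.g., the 17x17 subset used for Fig. 6)
    disparity : per-lenslet shift in pixels selecting the reconstructed
                depth plane; 0 refocuses on the reference plane z = 0
    """
    ny, nx, h, w = elemental.shape
    recon = np.zeros((h, w))
    for m in range(ny):
        for n in range(nx):
            # Back-project: shift each view opposite to its parallax
            dy = int(round((m - ny // 2) * disparity))
            dx = int(round((n - nx // 2) * disparity))
            recon += np.roll(elemental[m, n], shift=(dy, dx), axis=(0, 1))
    return recon / (ny * nx)

# Usage with random stand-in data:
views = np.random.rand(17, 17, 30, 30)
plane = shift_and_sum(views, disparity=1.5)
```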

4. Conclusions

We have reported a method for improving the depth of field of 3D integral imaging with no deterioration of lateral resolution. The technique exploits the influence of pixelation on the resolution of defocused objects: by proper reduction of the microlens fill factor one can substantially increase the depth of field, at the cost of a slight reduction in light efficiency. Our detailed analysis has been based on scalar diffraction theory. Its conclusions could be understood heuristically in terms of simple ray-tracing arguments. However, such arguments would not permit obtaining precise values either for the evolution of the lateral resolution or for the range of axial positions of the 3D object in which the lateral resolution is imposed by the detector pixel size. Moreover, a ray-tracing model would not allow analyzing the case of more elaborate pupil functions such as, for example, annular or Gaussian pupils.

Acknowledgments

This work has been funded by the Plan Nacional I+D+I (grant DPI2003-4698), Ministerio de Ciencia y Tecnología, Spain. R. Martínez-Cuenca acknowledges funding from the Universitat de València (Cinc Segles grant). We also acknowledge the support from the Generalitat Valenciana (grant GV04B-186).

References and Links

1. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, Bellingham, WA, 2001).

2. B. Javidi and F. Okano, eds., Three Dimensional Television, Video, and Display Technologies (Springer, Berlin, 2002).

3. D. H. McMahon and H. J. Caulfield, “A technique for producing wide-angle holographic displays,” Appl. Opt. 9, 91–96 (1970).

4. P. Ambs, L. Bigue, R. Binet, J. Colineau, J.-C. Lehureau, and J.-P. Huignard, “Image reconstruction using electro-optic holography,” in Proc. 16th Annual Meeting of IEEE LEOS 2003, vol. 1 (IEEE, Piscataway, NJ, 2003), pp. 172–173.

5. M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. (Paris) 7, 821–825 (1908).

6. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21, 171–176 (1931).

7. N. A. Valyus, Stereoscopy (Focal, London, 1966).

8. T. Okoshi, “Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays,” Appl. Opt. 10, 2284–2291 (1971).

9. Y. Igarashi, H. Murata, and M. Ueda, “3D display system using a computer-generated integral photograph,” Jpn. J. Appl. Phys. 17, 1683–1684 (1978).

10. N. Davies, M. McCormick, and M. Brewin, “Design and analysis of an image transfer system using microlens arrays,” Opt. Eng. 33, 3624–3633 (1994).

11. T. Motoki, H. Isono, and I. Yuyama, “Present status of three-dimensional television research,” Proc. IEEE 83, 1009–1021 (1995).

12. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).

13. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).

14. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40, 5592–5599 (2001).

15. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).

16. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40, 5217–5232 (2001).

17. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002).

18. J. Arai, H. Hoshino, M. Okui, and F. Okano, “Effects of focusing on the resolution characteristics of integral photography,” J. Opt. Soc. Am. A 20, 996–1004 (2003).

19. S. Kishk and B. Javidi, “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11, 3528–3541 (2003).

20. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Appl. Opt. 43(31) (2004).

21. Y. Frauel, O. Matoba, E. Tajahuerce, and B. Javidi, “Comparison of passive ranging integral imaging and active imaging digital holography for 3D object recognition,” Appl. Opt. 43, 452–462 (2004).

22. J.-S. Jang and B. Javidi, “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Appl. Opt. 42, 1996–2002 (2003).

23. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Integral imaging with multiple image planes using a uniaxial crystal plate,” Opt. Express 11, 1862–1875 (2003).

24. J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. 28, 1421–1423 (2003).

25. J.-S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes,” Opt. Lett. 28, 1924–1926 (2003).

26. F. Jin, J.-S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29, 1345–1347 (2004).

27. J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21, 951–958 (2004).

28. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).

Supplementary Material (2)

Media 1: GIF (333 KB)     
Media 2: GIF (632 KB)     
