
3D object scaling in integral imaging display by varying the spatial ray sampling rate

Open Access

Abstract

In this paper, we present a method to scale (that is, to enlarge or reduce) three-dimensional integral images of objects. The scaling is achieved by controlling the spatial ray sampling rate of the elemental images in the pickup procedure. The sampling rate is controlled by using a moving array-lenslet technique (MALT) during the recording stage, while a stationary display lenslet array is used in the display process. The lateral and longitudinal magnifications obtained with the proposed method are the same; therefore, three-dimensional image distortion can be avoided. To illustrate the feasibility of our method, experiments are performed to magnify a small object in three-dimensional space.

©2005 Optical Society of America

1. Introduction

In integral imaging (II) [1–13], true three-dimensional (3D) images are reconstructed in space by crossing discrete rays that emerge from a two-dimensional (2D) display panel and pass through a micro-lenslet array, in a way that simulates the back propagation of the recorded rays. II exhibits full parallax and continuous viewpoints. Visual fatigue caused by the convergence-accommodation conflict is a serious problem in stereoscopic displays [6]. II, which is autostereoscopic, does not suffer from this conflict because it does not require a different image to be presented to each eye to perceive 3D images. In II, true 3D images are reconstructed by converging or diverging rays from each elemental image. Because incoherent light is used in II, there is no speckle problem of the kind associated with holographic imaging. Because of these benefits, II has been studied for 3D TV, video, and movies [1–13]. A drawback of II is that the lateral resolution and the depth-of-focus of 3D images are limited. It has been shown that the product of the depth-of-focus and the squared lateral resolution (PDLRS) for 3D images in conventional diffraction-limited II systems is bounded by the inverse of the illumination wavelength λ [5]. If we display a 3D image with large depth, we have to sacrifice resolution, and vice versa.

In the conventional II pickup process, the direction and intensity information of rays coming from a 3D object is spatially sampled by the pickup lenslet array and recorded by a 2D image sensor, as depicted in Fig. 1(a). The ray information sampled by each lenslet is a 2D image representing a different perspective of the 3D object; it is referred to as an elemental image. Reconstruction of a 3D image of the object from the 2D elemental images is the reverse of the pickup process. The recorded 2D elemental images are displayed on a 2D display panel, and the rays coming from the elemental images are redirected through the display lenslet array to form a real 3D image, as depicted in Fig. 1(b). This 3D image is a pseudoscopic (depth-reversed) image of the original 3D object. The pseudoscopic real image can be converted into an orthoscopic virtual image when every elemental image is rotated by 180 degrees about its own optical axis [14]. A micro-convex-mirror array can be used instead of the display lenslet array to remedy the pseudoscopic image problem, as shown in Fig. 1(c) [15].
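To make the pickup geometry concrete, the following minimal Python sketch treats each lenslet as a pinhole and computes where a single object point lands in each elemental image. This is an illustration only, not the authors' code; all parameter names and numeric values are assumptions chosen for the example.

```python
import numpy as np

# Minimal pinhole-model sketch of elemental-image pickup (cf. Fig. 1(a)).
# Each lenslet is approximated by a pinhole at lateral position y_k = k * pitch;
# the recording plane sits a distance `gap` behind the array. A point at lateral
# position y_obj and distance L in front of the array projects to a different
# position behind each lenslet, giving one perspective per elemental image.
# All values below are illustrative assumptions, not the paper's configuration.

def elemental_image_positions(y_obj, L, pitch, gap, num_lenslets):
    k = np.arange(num_lenslets) - num_lenslets // 2
    y_lens = k * pitch                       # lenslet (pinhole) centers
    offset = (y_lens - y_obj) * gap / L      # similar-triangles offset behind each lenslet
    return y_lens + offset                   # absolute positions on the recording plane

print(elemental_image_positions(y_obj=2.0, L=50.0, pitch=1.09, gap=3.0, num_lenslets=5))
```

Each printed position corresponds to the image of the same object point in a different elemental image, which is why neighboring elemental images encode slightly different perspectives.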

For II to be a practical 3D imaging technique, it should be possible to produce enlarged or reduced 3D images in an optimally designed display system. So far, it has not been clear how the size of 3D images could be changed uniformly in both the lateral and longitudinal (depth) directions. The use of magnifying or demagnifying lenses before the pickup process is not a solution, because the magnification is non-uniform along the longitudinal (depth) direction; thus, the true dimensions of the 3D object are not preserved in the reconstruction. In addition, if the aperture of the imaging lens is not large enough, vignetting [16] may occur, that is, not all of the 3D ray information is collected by the pickup lenslet array. As a result, the reconstructed images do not exhibit a noticeable difference between viewing positions.


Fig. 1. Pickup and display in 3D II. (a) Pickup of elemental images using a lenslet array. (b) Conventional image display using a lenslet array and a display panel. (c) 3D projection II with a micro-convex-mirror array.


The use of stationary lenslet arrays with different pitches in the pickup and display processes may be considered. In such a case, the magnification is determined by the ratio between the display and pickup microlens array pitches. However, since the optimum lenslet size for both pickup and display is considered to be approximately 1–2 mm [3], the ratio between the display and pickup pitches cannot be varied much, so image scaling by this method is very limited. If a smaller-than-optimum lenslet size is used (that is, a small pitch), the resolution is degraded because of diffraction. On the other hand, if large lenslets are used (that is, the pitch is enlarged), the resolution is decreased because of the lower spatial sampling of rays [4]. Another way to obtain image scaling is to use different gaps between the microlens array and the recording and display planes. However, it is often desirable to keep this gap constant and equal to the microlens focal length. Recently, it was shown that uniformly magnified 3D images can be reconstructed in II by use of sectioning images from confocal microscopy [17]. This approach is useful for magnifying micro-objects such as biological cells, but it cannot be applied to demagnification of large objects.

In this Letter, we present a method to scale (that is, to magnify or demagnify) small or large objects with uniform scaling of the lateral and longitudinal spatial coordinates. The scaling is accomplished by controlling the spatial ray sampling rate in the pickup process of II, for which we use a moving array-lenslet technique (MALT) [1]. In the MALT, the spatial ray sampling rate is increased by rapidly and synchronously vibrating the positions of the lenslet arrays for both pickup and display. For 3D image reconstruction in this Letter, however, only the pickup array is vibrated, and the ray information sampled at the adjusted rate is displayed in front of a stationary display lenslet array. The pickup lenslet array pitch need not differ from the display lenslet array pitch, so optimal lenslet sizes can be used [3]. The proposed method is equivalent to using different lenslet pitches in the display and pickup, but without sacrificing the optimal lenslet size condition. The lateral magnification obtained with the proposed method equals the longitudinal magnification when the vertical and horizontal spatial ray sampling rates are the same. Experiments are performed to demonstrate our approach.

2. Resolution Priority II and Depth Priority II


Fig. 2. Ray integration to reconstruct 3D images in II. (a) 3D image formation in RPII. The display plane is placed at distance |g|>f from the array and the imaging plane is at distance Limg from the array. (b) 3D image formation in DPII. The display plane is placed at distance |g|=f from the array. The spot size is denoted by the gray ellipse and equals the lenslet diameter. (c) 3D image formation with changing the pitch of the lenslet array in RPII. (d) 3D image formation with changing the pitch of the lenslet array in DPII.


Two types of II [5], distinguished by resolution and depth of focus, have been reported. The first type is resolution-priority integral imaging (RPII), in which the gap distance g between the display panel and the lenslet array is larger than the focal length f of each lenslet. The image is composed of many integral point images, as shown in Fig. 2(a). The second type is depth-priority integral imaging (DPII), in which the gap distance between the display panel and the lenslet array equals the focal length f of each lenslet, as depicted in Fig. 2(b). In DPII the rays emerging from the microlenses are parallel and have a spot size similar to the lenslet size. Let us now suppose that the pitch of the lenslet array is changed, as shown in Figs. 2(c) and 2(d). In the case of RPII, the depth of the reconstructed 3D images increases but their resolution decreases. In the case of DPII, the depth of the reconstructed 3D images increases and their resolution does not decrease, because the rays from each lenslet are parallel. Therefore, we use DPII in this Letter to change the spatial ray sampling rate without degrading the resolution.
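A worked relation makes the two regimes concrete (this assumes the standard thin-lens imaging condition, which Fig. 2 implies but the text does not state explicitly). With the display plane a distance g behind each lenslet of focal length f, the integrated point images form at

$$ \frac{1}{g} + \frac{1}{L_{\mathrm{img}}} = \frac{1}{f} \quad\Longrightarrow\quad L_{\mathrm{img}} = \frac{g f}{g - f} . $$

For RPII, g > f gives a finite L_img at which sharp point images form, so moving away from that plane blurs the reconstruction. Letting g → f (DPII) sends L_img to infinity: each lenslet emits a collimated beam whose width equals the lenslet aperture, so the spot size, and hence the resolution, no longer depends on the reconstruction depth.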

3. Magnification and Demagnification Principles of II

When the display lenslet array is identical to the pickup lenslet array, the reconstructed image and the 3D object have the same size. Here, we show that it is possible to control the reconstructed 3D image size by changing the spatial ray sampling rate. The spatial ray sampling rate is inversely proportional to the pitch of the pickup lenslet array: to increase or decrease the sampling rate, we can reduce or increase the pitch of the pickup lenslet array, as shown in Figs. 3(a) and 3(b), respectively. We note that with the standard II pickup system the sampling rate is determined by the lenslet array pitch and equals 1/p. A different sampling rate can be obtained by using MALT together with appropriate digital image registration. Let us denote this change by δ=αp, where δ is the spatial ray sampling period and p is the pitch of the pickup lenslet array. If α<1, the sampling rate is larger than 1/p (oversampling), and the reconstructed 3D image is magnified to 1/α times the size of the original 3D object.


Fig. 3. Controlling the spatial ray sampling rate by adjusting the lenslet pitch. The upper half of (a) and (b) indicates the pickup process; the lower half indicates the display process. (a) Increasing the spatial ray sampling rate. The distance δ between the elemental images during the pickup is smaller than the lenslet array pitch, yielding a high sampling rate. The reconstructed 3D object is magnified. (b) Decreasing the spatial ray sampling rate. The distance δ between the elemental images during the pickup is larger than the lenslet array pitch, yielding a low sampling rate. The reconstructed 3D object is demagnified.


If α>1, we have an undersampling case, and the reconstructed 3D image is demagnified to 1/α times the size of the original 3D object.
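A minimal numerical sketch of this scaling rule follows. With a stationary display lenslet array, the display sampling period is simply the lenslet pitch p; the numeric values are illustrative assumptions, not the paper's exact configuration.

```python
# Scaling rule delta = alpha * p (cf. Fig. 3): with the display sampling period
# fixed at the lenslet pitch p, the reconstruction is scaled by p/delta = 1/alpha.

def scale_factor(pickup_period, display_period):
    return display_period / pickup_period    # = 1/alpha

p = 1.09                                     # lenslet pitch (mm), as in the experiment
print(scale_factor(p / 5.0, p))              # alpha = 1/5 (oversampling)  -> 5.0x magnification
print(scale_factor(2.0 * p, p))              # alpha = 2   (undersampling) -> 0.5x (demagnified)
```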

In the following, we show that the proposed method yields the same lateral and longitudinal magnification. Let us consider two object points A and B in Fig. 4(a). The distance w1 between the ray information recorded from object point B in two adjacent elemental images on the recording plane is given by:

$$ w_1 = \delta + \frac{\delta f}{L} , \qquad (1) $$

where δ is the distance between the centers of adjacent lenslets, f is the focal length of the lenslets, and L is the distance from the pickup lenslet array to point B, which is the node of the triangle drawn with solid lines in Fig. 4(a). Now, suppose that we change the distance between the elemental images from δ in the pickup setup [Fig. 4(a)] to δ′ in the display setup [Fig. 4(b)]. Using the same geometrical considerations that yield Eq. (1), we have:

$$ w_1' = \delta' + \frac{\delta' f}{L'} , \qquad (2) $$

where w1′ denotes the distance between the image points of B in the two elemental images, and L′ denotes the location of the reconstructed point B′. Since the location of the recorded rays representing point B within each elemental image is the same in the display setup as in the pickup setup, we have:

$$ w_1 - \delta = w_1' - \delta' . \qquad (3) $$

From Eqs. (1)–(3), we have

$$ \frac{\delta'}{\delta} = \frac{L'}{L} . \qquad (4) $$

For point A, we use equations similar to Eqs. (1)–(3), and Eq. (4) is rewritten as

$$ \frac{\delta'}{\delta} = \frac{L'+b'}{L+b} . \qquad (5) $$

From Eqs. (4) and (5), we find that the longitudinal (depth) magnification is:

$$ \frac{b'}{b} = \frac{\delta'}{\delta} = \frac{L'}{L} . \qquad (6) $$

From the similarity between the triangles ABC and A′B′C′ together with Eq. (5) we find that the lateral magnification is:

$$ \frac{a'}{a} = \frac{L'+b'}{L+b} = \frac{\delta'}{\delta} . \qquad (7) $$

Thus we confirm that lateral magnification equals the longitudinal magnification when the vertical and horizontal spatial ray sampling rates are the same.
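The derivation can be checked numerically. The sketch below (Python, with illustrative values; not the authors' code) records the ray offsets of points A and B for a pickup sampling period δ, reconstructs them with a display period δ′ = 5δ, and confirms that the lateral and longitudinal magnifications both equal δ′/δ, as in Eqs. (6) and (7).

```python
# Numerical check of Eqs. (1)-(7) for the two-point object of Fig. 4.
# All numeric values are illustrative assumptions.
f = 3.0               # lenslet focal length (mm)
delta = 1.09          # pickup sampling period delta (mm)
delta_p = 5 * delta   # display sampling period delta' (mm)

L = 50.0              # distance from the array to point B (mm)
a, b = 0.8, 1.3       # lateral / longitudinal separation of A and B (mm)

# Eq. (1): separation of B's image points in adjacent elemental images.
# Eq. (3): the in-image offset (w1 - delta) is preserved in the display,
# so Eq. (2) gives the reconstructed depth L' = delta' * f / (w1 - delta).
w1  = delta + delta * f / L
L_B = delta_p * f / (w1 - delta)

# Same construction for point A, which sits at depth L + b.
wA  = delta + delta * f / (L + b)
L_A = delta_p * f / (wA - delta)

b_p = L_A - L_B                  # reconstructed longitudinal separation b'
a_p = a * L_A / (L + b)          # reconstructed lateral separation a' (Eq. (7))

print(L_B / L, b_p / b, a_p / a, delta_p / delta)   # all approximately 5.0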


Fig. 4. Controlling the 3D image size by controlling the spatial ray sampling rate. The object consists of two points, A and B. The lateral distance between the points is a and the longitudinal distance is b. (a) Pickup with sampling rate 1/δ. (b) Display with sampling rate 1/δ′.


4. Limitation Imposed by Display and Pickup Device Resolution

The vertical height Δp of the smallest detail in an elemental image is given by:

$$ \Delta p = \frac{f \delta_0}{L} , \qquad (8) $$

where δ0 is the minimum feature size of the object, and L is the distance between the lenslet array and the object, as shown in Fig. 5. According to the Nyquist sampling theorem, if the ray information of the 3D object has a minimum period of δ0, the 2D sensor must sample this minimum object detail with at least two pixels to restore the original ray information. In addition, even if we pick up the object at a suitable distance such that the ray information is sufficiently recorded by the 2D sensor, the resolution of the display device must be high enough to satisfy the Nyquist criterion. Otherwise, the reconstructed 3D image will not fully represent the original 3D object because of insufficient ray information. The ratio between the number of pixels of one elemental image in the pickup and in the display is given by:

$$ k = \frac{N_D}{N_P} , \qquad (9) $$

where NP is the number of pixels of one elemental image in the pickup procedure, and ND is the number of pixels of one elemental image in the display procedure. In the case of k≥1, we can fully reconstruct the 3D integral image because there is sufficient ray information to represent the object. But in the case of k<1, Eq. (8) becomes:

$$ 2 h_s \le k \, \Delta p = \frac{k f \delta_0}{L} . \qquad (10) $$

Equation (10) shows that the information representing the minimum period of the object is decreased by a factor of k when k is smaller than 1. From the Nyquist sampling theorem, kΔp must be larger than 2hs, where hs is the pixel size of the pickup device. According to Eq. (10), the resolution of the display device must therefore be taken into account in order to reconstruct integral images that faithfully represent the original object. We have considered these limitations in the experiments.
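A minimal sketch of this device-resolution check follows (Python; the parameter values are illustrative assumptions, not the paper's exact numbers).

```python
# Device-resolution check of Eqs. (8)-(10): the smallest object detail delta_0
# maps to Delta_p = f*delta_0/L in an elemental image; when the display has fewer
# pixels than the pickup (k = N_D/N_P < 1), the usable detail shrinks to k*Delta_p,
# which must still be at least twice the pickup pixel size h_s (Nyquist).
def satisfies_nyquist(f, delta_0, L, h_s, N_P, N_D):
    Delta_p = f * delta_0 / L            # Eq. (8)
    k = min(N_D / N_P, 1.0)              # Eq. (9), capped at 1 when the display is finer
    return k * Delta_p >= 2.0 * h_s      # Eq. (10)

# Illustrative values: f = 3 mm, L = 50 mm, 18 um pixels, 60x60 vs 40x40 pixels per elemental image.
print(satisfies_nyquist(f=3.0, delta_0=1.0, L=50.0, h_s=0.018, N_P=60, N_D=40))   # True
print(satisfies_nyquist(f=3.0, delta_0=0.4, L=50.0, h_s=0.018, N_P=60, N_D=40))   # False
```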


Fig. 5. An elemental image in the pickup process of conventional integral imaging.


5. Experimental Results

In our experiment, the pickup lenslet array is made from acrylic and has 53×53 plano-convex lenslets. Each lenslet element is square, with a uniform base size of 1.09 mm × 1.09 mm and less than 7.6×10⁻³ mm of separation between elements. The focal length of the lenslets is approximately 3 mm. The display device is a micro-convex-mirror array with 53×53 elements; its reflectivity is larger than 90%, its focal length is 0.75 mm in magnitude, and its other characteristics are the same as those of the pickup lenslet array. The longitudinal size of the reconstructed integral images is therefore 4 times smaller than that of the object, because the focal length of the micro-convex-mirror array is 4 times shorter than that of the pickup lenslet array. To keep the magnification in the longitudinal and lateral directions equal, the object has to be picked up by a micro-concave-mirror array that has the same focal length as the micro-convex-mirror array [15]. A color LCD projector with three (RGB) panels was used as the display panel. Each panel has 1024×768 square pixels with a pixel pitch of 18 µm. The diverging angle of the projection beam θ is approximately 1.4 degrees.

In the experiments, to increase the spatial ray sampling rate, we pick up each elemental image at a sampling interval that is smaller than the pitch of the lenslets. To explain the magnification of the object, we describe a procedure that magnifies the object 2 times in the y direction, as shown in Fig. 6. We pick up two integral images at y=0 and y=p/2, as shown in Figs. 6(a) and 6(b), where p is the pitch of the pickup lenslet array. The two integral images are interlaced with a ratio of 1:2 to form a new integral image, as shown in Fig. 6(c). The new integral image in Fig. 6(c) is equivalent to one captured with an array of pitch p/2, but with the spatial resolution obtained with an array of pitch p. Thus the spatial ray sampling rate is doubled. In order to magnify the object m times, we pick up the integral images at m×m sampling points, because both lateral directions must be considered. The displacement step δ needs to be p/m, and the integral images are combined by interlacing with a ratio of 1:m. The gap distance between the center points of two lenslets is p, and we use a 53×53 lenslet array in the pickup procedure. If the magnification factor is an integer, the gap distance between the sampling points of the elemental images is also p. However, if the magnification factor is a fraction, the gap distance between the sampling points of the elemental images is not p; therefore, to magnify an object by an arbitrary fraction, we have to pick up elemental images more than m² times. In our experiment, we magnify the object 5 times, so we pick up the elemental images at 25 sampling points. A movie showing how the object is magnified 2 times in the pickup procedure is presented in Fig. 7. The object to be imaged is a dandelion seed, as shown in Fig. 8(a). The distance between the pickup lenslet array and the front of the seed is approximately 5 cm. The width of the seed is 0.8 cm and its height is 1.3 cm. The raw array of elemental images, which consists of 10×10 elemental images, is shown in Fig. 8(b). By taking 25 shifted exposures and interlacing them with a ratio of 1:5, a new array of 50×50 elemental images [Fig. 8(c)] is obtained.
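The interlacing step can be sketched as follows (Python; the array shapes, the shift-to-slot ordering, and all values are illustrative assumptions, not the authors' processing code).

```python
import numpy as np

# MALT interlacing sketch (cf. Figs. 6 and 8): m*m exposures of the elemental-image
# array, each taken with the pickup lenslet array shifted by a multiple of p/m,
# are interleaved 1:m in both directions to emulate a pickup sampling period of p/m.
def interlace(exposures, m):
    """exposures[i][j]: (R, C, H, W) array of R x C elemental images (H x W pixels each)
    captured with the array shifted by (i*p/m, j*p/m). Returns an (m*R, m*C, H, W) array."""
    R, C, H, W = exposures[0][0].shape
    out = np.zeros((m * R, m * C, H, W), dtype=exposures[0][0].dtype)
    for i in range(m):
        for j in range(m):
            out[i::m, j::m] = exposures[i][j]   # slot each exposure into every m-th position
    return out

m = 5
exposures = [[np.random.rand(10, 10, 60, 60) for _ in range(m)] for _ in range(m)]
print(interlace(exposures, m).shape)   # (50, 50, 60, 60): a 50 x 50 array as in Fig. 8(c)
```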


Fig. 6. Example of magnification. (a) Pickup process at y=0. (b) Pickup process at y=p/2. (c) The twice-magnified 3D integral images.



Fig. 7. (49 KB) Movie of magnifying object 2 times in pickup procedure.



Fig. 8. Small 3D object and elemental images used in the experiments. (a) Small 3D object used in the experiments. (b) Array of elemental images obtained from direct camera pickup without increasing the spatial ray sampling rate. (c) New array of elemental images with a spatial ray sampling rate 5 times that in (b).


Left and center views of the reconstructed 3D images are shown in Fig. 9. The object is magnified by factors of 5 down to 1, as shown in Figs. 9(a)–(e) and 9(f)–(j). The viewing directions for the images deviate from the optical axis by approximately 30 degrees. A movie showing the reconstructed object with a magnification factor of 5 is presented in Fig. 10. We used only part of the magnified elemental images shown in Fig. 8(c), because a certain number of display-device pixels is needed to reconstruct the smallest detail in the object; our display device has 1024×768 pixels. Because the micro-convex-mirror array is used, the reconstructed 3D image is an orthoscopic virtual image. The experiments thus demonstrate that an object smaller than 1–2 cm can be magnified by increasing the spatial ray sampling rate. We can magnify or demagnify without image distortion because the lateral and longitudinal magnifications are the same. Even when ray optics is considered, the depth-of-focus is limited by Nf, where N is the finite number of pixels in each elemental image and f is the focal length of each lenslet [18].


Fig. 9. Experimental results for increasing spatial ray sampling rate. (a)~(e) Magnification 5 to 1 times on the left view. (f)~(j) Magnification 5 to 1 times on the center view.



Fig. 10. (2.28 MB) Movie of 5 times magnified object.


6. Conclusions

In conclusion, we have presented a method by which a small 3D object can be magnified, without longitudinal image distortion, by increasing the spatial ray sampling rate. Similarly, a 3D object can be demagnified, without longitudinal image distortion, by decreasing the spatial ray sampling rate.

Acknowledgments

We dedicate this paper to Professor Ju-Seog Jang, who passed away on June 10, 2004. We thank Myungjin Cho and Sonia Sanchez for their help with the preparation of the manuscript.

References and links

1. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002).

2. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).

3. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968).

4. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).

5. J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. 28, 1421–1423 (2003).

6. B. Javidi and F. Okano, eds., Three Dimensional Television, Video, and Display Technology (Springer-Verlag, Berlin, 2002).

7. T. Okoshi, “Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays,” Appl. Opt. 10, 2284–2291 (1971).

8. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, Bellingham, WA, 2001).

9. T. Okoshi, “Three-dimensional display,” Proc. IEEE 68, 548–564 (1980).

10. S. W. Min, B. Javidi, and B. Lee, “Enhanced 3D integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003).

11. P. Ambs, L. Bigue, R. Binet, J. Colineau, J.-C. Lehureau, and J.-P. Huignard, “Image reconstruction using electro-optic holography,” in Proc. of the 16th Annual Meeting of the IEEE Lasers and Electro-Optics Society, LEOS 2003, vol. 1 (IEEE, Piscataway, NJ, 2003), pp. 172–173.

12. N. Davies, M. McCormick, and M. Brewin, “Design and analysis of an image transfer system using microlens array,” Opt. Eng. 33, 3624–3633 (1994).

13. M. Martínez-Corral, M. T. Caballero, and A. Pons, “Axial apodization in 4Pi-confocal microscopy by annular binary filters,” J. Opt. Soc. Am. A 19, 1532–1536 (2002).

14. F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38, 1072–1077 (1999).

15. J.-S. Jang and B. Javidi, “Three-dimensional projection integral imaging using micro-convex-mirror arrays,” Opt. Express 12, 1077–1083 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-6-1077

16. A. Stern and B. Javidi, “3D computational synthetic aperture integral imaging (COMPSAII),” Opt. Express 11, 2446–2450 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-19-2446

17. J.-S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29, 1230–1232 (2004).

18. F. Jin, J.-S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29, 1345–1347 (2004).

Supplementary Material (2)

Media 1: MPG (2335 KB)     
Media 2: MOV (48 KB)     
