Optica Publishing Group

Layered-panel integral imaging without the translucent problem

Open Access

Abstract

An improved integral imaging process is proposed that can display a depth-enhanced three-dimensional (3D) image by using two parallel-layered display devices. With layered display devices, it is possible to construct 3D images on several depth planes and thereby increase the expressible depth remarkably. In addition, the translucent problem between 3D images is resolved, for the first time to the authors' knowledge, by a time-multiplexing method. The proposed method is verified and compared with the conventional one by preliminary experiments.

©2005 Optical Society of America

1. Introduction

Integral imaging (InIm), also called integral photography [1], is a method of displaying full-parallax three-dimensional (3D) images without special glasses. In its early days, InIm did not attract much attention, since film was the only medium for recording and displaying 3D images. Recently, with the use of active devices such as the charge-coupled device (CCD) camera for recording and the liquid-crystal display (LCD) panel for displaying [2], the field of InIm has widened remarkably, and various studies have investigated its many advantages [3–10]. There are still some bottlenecks, however, that limit the usefulness of InIm. Among them, the depth limitation of the reconstructed 3D image is one of the most severe problems and needs to be overcome. Before explaining the principles of the proposed method, we review the conventional InIm system and the cause of the depth limitation. The basic principle of the conventional InIm display system is shown in Fig. 1. The system is composed of two main devices: the display device and the lens array. The display device shows the elemental images, which contain the 3D information of the original object within 2D images. These elemental images reconstruct the 3D image through the lens array, as shown in Fig. 1. In the conventional InIm system, since the focal length of the lens array has a constant value f, the central depth plane (CDP), where the 3D image is reconstructed by the lens array, is also fixed in position.


Fig. 1. Basic principle of conventional InIm.


The location of the CDP is determined by the following equation, which is called the lens law:

1/a + 1/b = 1/f,        (1)

In Eq. (1), a denotes the distance between the display device and the lens array, and b is the location of the CDP measured from the lens array. The quality of the reconstructed 3D image is severely degraded when the image lies far from the CDP [11]. Therefore, the 3D image can be formed only around the CDP, and its possible depth is typically restricted to a few centimeters. From Eq. (1), it is easily recognized that the depth of the 3D image can be enhanced if the number of CDPs is increased. Some methods have introduced and realized InIm systems with multiple CDPs [12–15]. However, they required either a mechanically moving system or a bulky system because beam splitters were used. In addition, since the previously proposed methods simply integrate the 3D images on multiple CDPs, there is interference between the 3D images on different CDPs. This interference appears because a rear image is observed through a front image. A method to prevent this problem is also proposed in this paper.
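As a quick numerical illustration of Eq. (1), the CDP location can be computed from the panel gap and the focal length (a minimal Python sketch; the function name and the numeric values here are illustrative, not taken from the paper):

```python
def cdp_location(a_mm: float, f_mm: float) -> float:
    """Solve the lens law 1/a + 1/b = 1/f for b, the CDP position
    measured from the lens array (positive: real image in front of
    the array; negative: virtual image behind it)."""
    if a_mm == f_mm:
        raise ValueError("a = f places the CDP at infinity")
    return 1.0 / (1.0 / f_mm - 1.0 / a_mm)

# Illustrative values: a panel gap slightly larger than the focal
# length yields a real CDP in front of the lens array.
print(cdp_location(27.0, 22.0))  # approximately 118.8 (mm)
```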

2. Principles

The basic principles of the proposed method are shown in Fig. 2. In Fig. 2(a), a transparent display device, called display device 2, is located between display device 1 and the lens array. Since display device 2 is transparent, it is possible to locate elemental images on both display devices. Display device 1 and display device 2 form their own integrated 3D images at CDP 1 and CDP 2, respectively, through the lens array. If we let the distance between display device 1 and the lens array be a1, and that between display device 2 and the lens array be a2, the locations of CDP 1 and CDP 2 formed by the lens array with display devices 1 and 2 are given by the following equations:

1/a1 + 1/b1 = 1/f,        (2)
1/a2 + 1/b2 = 1/f.        (3)

These equations are also derived from the lens law. In the above equations, b1 and b2 denote the locations of CDP 1 and CDP 2 measured from the lens array, respectively.


Fig. 2. Principles of the proposed method (a) with two real CDPs and (b) real and virtual CDPs.


Since the number of CDPs is increased compared with the simple InIm structure, it is possible to construct 3D images through the CDPs, and the image depth can be widened remarkably. If we let a1 > f and a2 < f, then from Eqs. (2) and (3) it is easily found that b1 > 0 and b2 < 0, which means that the 3D image on CDP 1 is formed in front of the lens array as a real image and that on CDP 2 is formed behind the lens array as a virtual image. The principle of real and virtual CDPs is shown in Fig. 2(b). In this case, the depth difference between the two images can reach tens of centimeters.
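The sign behavior described above follows directly from Eqs. (2) and (3), as a small Python sketch confirms (f is the focal length used in the experiments; the two gap values are merely chosen to satisfy a1 > f and a2 < f, and are not stated in the text):

```python
def cdp(a_mm: float, f_mm: float) -> float:
    # Lens law solved for b; the sign distinguishes real (+) from virtual (-).
    return 1.0 / (1.0 / f_mm - 1.0 / a_mm)

f = 22.0           # focal length of the elemental lenses (mm)
b1 = cdp(27.0, f)  # gap a1 > f
b2 = cdp(17.0, f)  # gap a2 < f

assert b1 > 0      # real CDP: image in front of the lens array
assert b2 < 0      # virtual CDP: image behind the lens array
```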

As noted in the previous section, the interference between the 3D images on different CDPs is a serious problem that degrades the quality of the 3D image. In a volumetric display system, this is called the translucent problem. Figure 3 shows the translucent problem in our system. As can be seen in Fig. 3, the front image is partially distorted because the rear image can be observed through it. The key point of this problem is that the front image cannot cover the rear image in the area where the two images overlap. This is because the integrated 3D images are not real objects and are basically transparent. Therefore, the front image needs to be opaque in order to cover the overlapped part of the rear image; this property is called occlusion.


Fig. 3. Interference between the integrated images.


In this paper, a time-multiplexing method is adopted to realize the occlusion effect. The time multiplexing consists of two phases: the rear image phase and the front image phase. The principle of the rear image phase is shown in Fig. 4(a). The elemental images for the rear image (the image integrated on CDP 1) are the same as those in Fig. 2, whereas the elemental images for the front image (the image integrated on CDP 2) are replaced with black images of the same shape and size, which are defined as elemental masks. The elemental masks form a black mask on CDP 2 with the same shape and size as the original front image. The role of the black mask is only to cover the rear image as if a real object were located at CDP 2. As a result, occlusion between the rear image and the mask of the front image is induced in the rear image phase for any observer within the viewing angle. In the rear image phase, however, the front image is not yet integrated, and the observer views only the black mask of the front image. The front image is realized in the front image phase. The principle of the front image phase is similar to that of conventional InIm and is shown in Fig. 4(b). Display device 1 emits white light, and the elemental images of the front image are displayed on display device 2. The elemental images are integrated and form the front image on CDP 2. In this phase, only the front image is observed, at the same place where the mask of the front image was located in the rear image phase. As a result, the observer views the un-interfered front image in the front image phase and the rear image in the rear image phase. Therefore, the interference problem can be resolved by alternating these two phases fast enough to induce the afterimage effect.
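The two-phase schedule for real CDPs can be summarized as follows (a hypothetical Python sketch of the driving logic; the names Frame and render_phase, and the string labels, are illustrative and not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    device1: str  # content on display device 1 (rear, emissive panel)
    device2: str  # content on display device 2 (transparent front panel)

def render_phase(phase: str) -> Frame:
    if phase == "rear":
        # Rear image phase: device 1 shows the rear image's elemental
        # images; device 2 shows black elemental masks that integrate
        # into an opaque silhouette of the front image on CDP 2,
        # occluding the overlapped part of the rear image.
        return Frame(device1="rear elemental images",
                     device2="black elemental masks of front image")
    else:
        # Front image phase: device 1 emits white light as a backlight;
        # device 2 shows the front image's elemental images.
        return Frame(device1="white light",
                     device2="front elemental images")

# Alternating the two phases fast enough for the afterimage effect
# presents both images without the translucent problem.
sequence = [render_phase(p) for p in ["rear", "front"] * 3]
```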


Fig. 4. Principle of the time-multiplexing method for real CDPs in (a) rear image phase and (b) front image phase.


In our system, it is also possible to form real and virtual 3D images simultaneously by adjusting a1 and a2 in Eqs. (2) and (3), as shown in Fig. 2(b). Then the depth between them can be tens of centimeters. The time-multiplexing method can also be applied in this case for the occlusion effect. All other principles are the same, except that a real image is formed in front of the lens array from elemental images on display device 1 and a virtual image is formed from those on display device 2. In this case, CDP 2 is located behind the lens array, while CDP 1 is located in front of it. Therefore, the elemental masks of the front image have to be on display device 1. In Fig. 5(a), the rear image phase is shown for the real-and-virtual-CDPs method. The elemental masks are displayed on display device 1 to form the real mask of the front image, and the elemental images of the virtual rear image on CDP 2 are displayed on display device 2. In the front image phase, in contrast to the real-CDPs method, display device 2 is set to be transparent, and the elemental images on display device 1 form a real front image on CDP 1, as shown in Fig. 5(b).


Fig. 5. Principle of the time-multiplexing method for real and virtual CDPs in (a) rear image phase and (b) front image phase.


3. Experimental results

Several experiments were performed to prove the proposed principles. In the experiments, a 17-inch LCD monitor with a pixel size of 0.27 mm is used as display device 1. An LCD panel is an ideal device for these experiments because it displays an image by transmitting or blocking the light passing through it. If the backlight unit is removed from an LCD system, the LCD panel itself can be used as a spatial light modulator for a white screen located behind it; hence, such a device can serve as display device 2. The LCD panel for display device 2 is obtained from the same kind of LCD monitor as display device 1. Therefore, the optical structure and properties of the two display devices are the same, except that display device 2 is non-emissive. In addition, the direction of the polarizer of display device 2 is adjusted to minimize the optical loss. To acquire the maximum possible depth, the locations of display devices 1 and 2 are set so that a1 > f and a2 < f, realizing real and virtual CDPs. The lens array consists of 13×13 elemental lenses with a pitch of 10 mm and a focal length of 22 mm. In the rear image phase, the two sets of elemental images shown in Fig. 6 are used: the elemental masks in Fig. 6(a) are displayed on display device 1, and the elemental images in Fig. 6(b) are displayed on display device 2. In the front image phase, the elemental images shown in Fig. 7 are used in a similar way. As shown in Figs. 6(a) and 7(a), the elemental images and masks of the front image have the same properties except for their colors.


Fig. 6. Elemental images used in rear image phase to integrate (a) the mask of front image and (b) the rear image.



Fig. 7. Elemental images used in front image phase to integrate (a) the front image and (b) the white screen.


The experimental results of the integrated images obtained by combining the two phases are shown in Fig. 8. There are two images, integrated on different CDPs: the front image of the bananas is formed 118.8 mm in front of the lens array, and the rear image of the apple is formed 74.8 mm behind the lens array, so the total depth difference between the two images is 193.6 mm. It is easily recognized that the relative positions of the two images vary remarkably with the change in observing direction, and the results prove that the proposed method can reconstruct two 3D images on different CDPs with enhanced total depth.
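The reported figures are mutually consistent under the lens law: with f = 22 mm, panel gaps of 27 mm and 17 mm reproduce the two CDP positions exactly (a verification sketch; the gap values are inferred here from Eqs. (2) and (3), not stated in the text):

```python
f = 22.0             # focal length of the elemental lenses (mm)
a1, a2 = 27.0, 17.0  # panel-to-lens-array gaps inferred from the lens law (mm)

b1 = 1.0 / (1.0 / f - 1.0 / a1)  # real CDP (front image of the bananas)
b2 = 1.0 / (1.0 / f - 1.0 / a2)  # virtual CDP (rear image of the apple)

print(round(b1, 1))       # 118.8 -> mm in front of the lens array
print(round(b2, 1))       # -74.8 -> i.e., 74.8 mm behind the lens array
print(round(b1 - b2, 1))  # 193.6 -> total depth difference in mm
```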


Fig. 8. Experimental results of the proposed method observed at (a) left viewpoint, (b) center viewpoint, and (c) right viewpoint.


For comparison with the conventional method using a single display panel, the same images of the apple and bananas were reconstructed at the same depths by the conventional method. The experimental results for the conventional InIm are shown in Fig. 9. In Fig. 9(a), the distance a between the display device and the lens array is set to form the 3D images on CDP 1, 118.8 mm in front of the lens array. As shown in Fig. 9(a), only the image of the bananas is reconstructed properly, while the image of the apple is severely distorted. If we instead let a be the distance that forms the image at CDP 2, i.e., 74.8 mm behind the lens array, the image of the apple is well formed but the other image is distorted, as in Fig. 9(b). Therefore, it is proven that the proposed two-layer display panel method can enhance the total depth of the 3D images in InIm.


Fig. 9. Integrated images by the conventional InIm method (a) focused at 118.8mm and (b) focused at -74.8mm.


Figure 10 shows two images integrated by the same method as in Fig. 8 but without the time multiplexing; that is, it shows the integrated image when the elemental images for the bananas and the apple are displayed on display devices 1 and 2, respectively, at the same time. As shown in Fig. 10, the rear image of the apple is observed through the front image of the bananas, which means that the translucent problem occurs. Therefore, it is proven that the proposed time-multiplexing method provides an effective way to prevent the translucent problem.


Fig. 10. Integrated image on multiple CDPs without the time-multiplexing method.


4. Conclusion

In this paper, a novel method to enhance the depth of 3D images in InIm by using two parallel-layered display devices has been proposed. By adopting a transparent display device such as an LCD panel as display device 2, it is possible to enlarge the total depth of the reconstructed images to tens of centimeters. In addition, for the first time to our knowledge, the translucent problem between 3D images is resolved by a time-multiplexing method. The proposed method was verified and compared with the conventional one by experimental results. The proposed method can be helpful in realizing a depth-enhanced InIm 3D display system without the translucent problem.

Acknowledgment

This work was supported by the Information Display R&D Center, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of Korea.

References and links

1. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).

2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).

3. T. Okoshi, “Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays,” Appl. Opt. 10, 2284–2291 (1971).

4. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Three-dimensional display system based on computer-generated integral photography,” Proc. SPIE 4297, 187–195 (2001).

5. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, “Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging,” Opt. Lett. 29, 2734–2736 (2004).

6. Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, “Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array,” Appl. Opt. 44, 546–552 (2005).

7. S.-H. Shin and B. Javidi, “Speckle reduced three-dimensional volume holographic display using integral imaging,” Appl. Opt. 41, 2644–2649 (2002).

8. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002).

9. S. Manolache, A. Aggoun, M. McCormick, N. Davies, and S. Y. Kung, “Analytical model of a three-dimensional integral image recording system that uses circular and hexagonal-based spherical surface microlenses,” J. Opt. Soc. Am. A 18, 1814–1821 (2001).

10. T. Naemura, T. Yoshida, and H. Harashima, “3-D computer graphics based on integral photography,” Opt. Express 8, 255–262 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-4-255.

11. J. Hong, J.-H. Park, J. Kim, and B. Lee, “Elemental image correction in integral imaging for three-dimensional display,” in Proceedings of the 2004 IEEE Lasers and Electro-Optics Society Annual Meeting (LEOS 2004), paper ML6, pp. 116–117.

12. B. Lee, S. Jung, S.-W. Min, and J.-H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26, 1481–1482 (2001).

13. B. Lee, S.-W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. 41, 4856–4865 (2002).

14. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003).

15. S. Jung, J. Hong, J.-H. Park, Y. Kim, and B. Lee, “Depth-enhanced integral-imaging 3D display using different optical path lengths by polarization devices or mirror barrier array,” J. Soc. Inf. Display 12, 461–467 (2004).
