
High-speed full analytical holographic computations for true-life scenes

Open Access

Abstract

We develop a novel method to generate holograms of three-dimensional (3D) textured triangle-mesh models reconstructed from ordinary digital photos. The method encodes a 3D model composed of triangles fully analytically. In contrast to other polygon-based holographic computations, our full analytical method is free of the numerical errors that enter the angular spectrum through Whittaker-Shannon sampling. To save computation time, we employ a GPU platform that markedly outperforms the CPU. As a first demonstration, we have rendered a true-life scene with colored textures using our homemade software. The holographically reconstructed scene performs well in many respects, including depth cues, surface textures, shading, and occlusion. The GPU algorithm runs hundreds of times faster than its CPU counterpart.

©2010 Optical Society of America

1. Introduction

3D imaging is attracting increasing attention [1]. Thanks to the great progress of information technology over the past decades, 3D data for both real and virtual objects can now be obtained conveniently [2,3]. Such data are widely used in games, movies, virtual reality, reverse engineering, etc. However, existing two-dimensional displays limit their presentation. One of the most promising technologies for true 3D display is the computer-generated hologram (CGH) [4]. It uses prescribed 3D object data stored in a computer to produce wavefronts with the amplitude and phase distributions needed to create the most accurate depth cues of 3D objects [5].

However, holographic computation is time consuming, and researchers have developed fast algorithms to accelerate it [6,7]. One family of methods treats 3D models as collections of individual self-luminous points; the object wave on the hologram is calculated for each point and then superimposed. Achieving a solid appearance requires a large number of points and an enormous memory for sampling [7]. Alternatively, 3D objects may be constructed from planar segments, whose element count is far smaller than that of a point set, so the computation time decreases significantly. In these methods, a numerical algorithm, the fast Fourier transform (FFT) with extra interpolation, must be applied to each plane [8-10]. Recently, triangle-mesh-based algorithms were proposed to further reduce the computation time [11,12]: the analytical angular spectrum of each tilted triangle is calculated in the frequency domain, and a single FFT suffices for the whole wave field. However, the use of the FFT introduces numerical errors once the object moves beyond a certain distance. This limitation stems from the Whittaker-Shannon sampling theorem [5,13-17]. As a result, the diffraction field of a triangular aperture becomes noisy when the object moves away from the hologram [17]. Such noise mainly degrades the surface appearance (e.g., by introducing artifacts) in the holographic reconstruction; the artifacts give a blurred, defocused impression and weaken the quality of the reconstructed scene. In addition, for a large hologram, the huge number of sampling points required by the FFT causes further computational problems [18]. Using an analytical Fourier CGH can avoid the final FFT, but it suffers from a short depth of field because of the limited aperture and focal depth of the lens [12].

On the other hand, researchers have used dedicated hardware to accelerate the computation. For example, an MIT group developed a special-purpose computational board for holographic rendering [19]; their implementation ran 50 times faster than the workstations of the time. Most recently, several groups have presented parallel algorithms based on commodity graphics processing units (GPUs) [20-22]. These approaches, using GPGPU (general-purpose computation on GPU) techniques, accelerate the CGH calculation inexpensively.

In this paper, we derive analytical formulas that describe the diffraction field of 3D true-life scenes. Since no FFT is needed, we avoid the numerical errors of the discrete algorithm. In addition, we propose a simple and effective phase adjustment technique to remove the visible mesh edges that appear in previous analytical polygon-based methods. Smooth reconstructed surfaces, and hence high-quality 3D display, can thus be accomplished. By using CUDA [23], a GPGPU technique, the algorithm performs hundreds of times faster than its CPU counterpart.

2. Theory

2.1 Analytical diffraction field of 3D triangle-mesh model

In computer graphics, 3D surface models are commonly represented by polygons, for instance triangles, as shown in Fig. 1(a). If the diffraction field emitted by each triangle of the 3D model can be calculated explicitly on the hologram plane, the field of the whole object is obtained by linear superposition.

Fig. 1 (a) Local coordinate system of the triangle-mesh object and the global coordinate system. (b) Diffraction by one of the triangles.

For the yellow triangle of the object [Fig. 1(a)], the local coordinate system $(x_l, y_l, z_l)$ and the global coordinate system $(x_g, y_g, z_g)$ are related by the rotation matrix:

$$\begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \begin{bmatrix} x_g - x_c \\ y_g - y_c \\ z_g - z_c \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix} = \begin{pmatrix} r'_{11} & r'_{12} & r'_{13} \\ r'_{21} & r'_{22} & r'_{23} \\ r'_{31} & r'_{32} & r'_{33} \end{pmatrix} \begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} + \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \tag{1}$$

where $(x_c, y_c, z_c)$ is the center of mass of the triangle in the global coordinate system, and $z_l = 0$ on the triangle facet. Let the three vertices of the triangle be $(x_{gi}, y_{gi}, z_{gi})$, $i = 1, 2, 3$, in the global coordinate system, and $(x_{li}, y_{li}, 0)$, $i = 1, 2, 3$, in the local coordinate system.
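
The paper does not spell out how the rotation matrix is chosen; the in-plane axes are free as long as the local z axis is the facet normal. A minimal Python/NumPy sketch of one such construction (function and variable names are our own):

import numpy as np

def local_frame(v1, v2, v3):
    """Build a local frame for a triangle from its three global vertices.
    The local z axis is the facet normal, so every vertex has z_l = 0, and
    the origin is the mass center (x_c, y_c, z_c). Returns (R, c) with
    p_l = R @ (p_g - c), as in Eq. (1); R.T plays the role of (r'_ij)."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    c = (v1 + v2 + v3) / 3.0                      # mass center
    e1 = v2 - v1
    zl = np.cross(e1, v3 - v1)
    zl /= np.linalg.norm(zl)                      # local z axis (facet normal)
    xl = e1 / np.linalg.norm(e1)                  # local x axis along one edge
    yl = np.cross(zl, xl)                         # completes right-handed frame
    R = np.stack([xl, yl, zl])                    # rows are the local axes
    return R, c

# The local z coordinates of the vertices vanish, as required:
R, c = local_frame([0, 0, 0.15], [0.01, 0, 0.16], [0, 0.01, 0.155])
for v in ([0, 0, 0.15], [0.01, 0, 0.16], [0, 0.01, 0.155]):
    print(R @ (np.asarray(v, dtype=float) - c))   # third component ~ 0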

Consider the diffraction of a monochromatic plane wave by a finite triangular aperture in an infinite opaque screen, as shown in Fig. 1(b). The wave propagates along the $z_g$ axis. The hologram is perpendicular to this axis and located at $z_g = 0$. In the local coordinate system, the complex amplitude of the plane wave is $\exp[i2\pi(r'_{31}x_l + r'_{32}y_l + z_c)/\lambda]$. By using the Rayleigh-Sommerfeld diffraction integral [5], the diffraction field of the yellow triangle on the hologram [see Fig. 1(b)] can be expressed as

$$O_H(x_H, y_H) = \frac{1}{i\lambda} \iint O_O(x_l, y_l)\, \exp\!\left[\frac{i2\pi(r'_{31}x_l + r'_{32}y_l + z_c)}{\lambda}\right] \frac{\exp(ikr)}{r}\, dx_l\, dy_l \tag{2}$$
where $k = 2\pi/\lambda$ and $\lambda$ is the wavelength; $r = \sqrt{(x_H - x_g)^2 + (y_H - y_g)^2 + z_g^2}$ is the distance between a point within the triangle and a hologram pixel, and $(x_H, y_H)$ denotes the hologram coordinate system. Combined with Eq. (1), the quantity $r$ is seen to be a square-root function of the local coordinates $(x_l, y_l)$. Therefore, if the assumption
$$(x_H - x_c)^2 + (y_H - y_c)^2 + z_c^2 \gg (x_g - x_c)^2 \ \ \text{and} \ \ (y_g - y_c)^2 \tag{3}$$
is satisfied, then r can be expanded into:
$$r = r_0 + \frac{x_l^2 + y_l^2}{2 r_0} - \frac{\left[r'_{11}(x_H - x_c) + r'_{21}(y_H - y_c) - r'_{31} z_c\right] x_l + \left[r'_{12}(x_H - x_c) + r'_{22}(y_H - y_c) - r'_{32} z_c\right] y_l}{r_0} \tag{4}$$
where $r_0 = \sqrt{(x_H - x_c)^2 + (y_H - y_c)^2 + z_c^2}$ is the distance between the triangle's mass center and the hologram pixel. In fact, Eq. (3) can be regarded as a modified paraxial approximation: it requires the linear scale of each triangle to be much smaller than the distance between its mass center and the hologram plane. This restriction is easily satisfied, since the scene usually contains a large number of triangles and each triangle is small.
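
For completeness, here is a sketch of the algebra behind Eq. (4) (our own reconstruction of the omitted intermediate step). Substituting the inverse relation of Eq. (1) into $r$ and using the orthonormality of the rotation matrix, i.e. $(r'_{11}x_l + r'_{12}y_l)^2 + (r'_{21}x_l + r'_{22}y_l)^2 + (r'_{31}x_l + r'_{32}y_l)^2 = x_l^2 + y_l^2$, gives

$$r^2 = r_0^2 + x_l^2 + y_l^2 - 2\left[r'_{11}(x_H - x_c) + r'_{21}(y_H - y_c) - r'_{31}z_c\right]x_l - 2\left[r'_{12}(x_H - x_c) + r'_{22}(y_H - y_c) - r'_{32}z_c\right]y_l,$$

and expanding $r = r_0\sqrt{1 + \epsilon}$ to first order in the small quantity $\epsilon$ under assumption (3) yields Eq. (4).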
In this way, Eq. (2) becomes

$$O_H(x_H, y_H) = \frac{\exp[ik(z_c + r_0)]}{i\lambda r_0} \iint O_O(x_l, y_l)\, Q(x_l, y_l)\, \exp\!\left[-\frac{i2\pi}{\lambda r_0}\left(x'_H x_l + y'_H y_l\right)\right] dx_l\, dy_l \tag{5}$$
where $Q(x_l, y_l) = \exp[ik(x_l^2 + y_l^2)/2r_0]$ is the quadratic phase factor, $x'_H = r'_{11}(x_H - x_c) + r'_{21}(y_H - y_c) - r'_{31}z_c - r'_{31}r_0$, and $y'_H = r'_{12}(x_H - x_c) + r'_{22}(y_H - y_c) - r'_{32}z_c - r'_{32}r_0$. Note that Eq. (5) is similar in form to the well-known Fresnel diffraction between two parallel planes [5]. The Fresnel diffraction formula reduces to the Fraunhofer form when a lens of infinite aperture is placed immediately in front of the object [5], and numerous experimental results in Fraunhofer holography show that the quadratic phase factor within the Fresnel integral does not affect the reconstruction performance [5]. Hence, in the CGH the quadratic phase factor $Q(x_l, y_l)$ in Eq. (5) can be discarded without affecting the reconstruction result, and Eq. (5) simplifies to
$$O_H(x_H, y_H) = \frac{\exp[ik(z_c + r_0)]}{i\lambda r_0} \mathcal{F}[O_O(x_l, y_l)] \tag{6}$$
where $\mathcal{F}[O_O(x_l, y_l)]$ is the Fourier transform of the yellow triangle, evaluated at the frequencies $(x'_H/\lambda r_0,\, y'_H/\lambda r_0)$. Although the yellow triangle is arbitrary, its transform can still be derived analytically with the help of the affine theorem for the two-dimensional Fourier transform [11,12,24]. After some cumbersome algebra, this yields
$$\mathcal{F}[O_O(x_l, y_l)] = (a_{11} a_{22} - a_{12} a_{21})\, \exp\!\left(-i2\pi \frac{a_{13} x'_H + a_{23} y'_H}{\lambda r_0}\right) F_\Delta\!\left(\frac{a_{11} x'_H + a_{21} y'_H}{\lambda r_0}, \frac{a_{12} x'_H + a_{22} y'_H}{\lambda r_0}\right) \tag{7}$$
where $a_{11} = x_{l2} - x_{l1}$, $a_{12} = x_{l3} - x_{l2}$, $a_{13} = x_{l1}$, $a_{21} = y_{l2} - y_{l1}$, $a_{22} = y_{l3} - y_{l2}$, and $a_{23} = y_{l1}$. The last term $F_\Delta$ is the Fourier transform of the right triangle with vertices $(0,0)$, $(1,0)$, $(1,1)$, which can be expressed analytically as in Ref. [11]. Using Eqs. (6) and (7), we can compute the diffraction field of an arbitrarily tilted triangle analytically. The reconstruction results of a slanted triangle are presented in Fig. 2 for verification. Reconstructions at different distances show that one part of the triangle is focused while the other parts are defocused: in Fig. 2(b), point A is in focus while points B and C are blurred when we focus on plane I, where point A is located; in Fig. 2(c), point B is in focus while points A and C are blurred when we focus on plane II, where point B is located. This is a very important result, as it allows us to generate the hologram directly, pixel by pixel, without the FFT, and makes the theory easy to implement on a parallel GPU platform. Finally, the field of the whole object is obtained through the linear superposition of the diffraction fields of all triangles.
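
As a concrete illustration, $F_\Delta$ has a short closed form that follows from integrating the exponential kernel over $0 \le \eta \le \xi \le 1$; Ref. [11] gives an equivalent expression. Below is a minimal scalar Python implementation (our own sketch; names are illustrative), with the $u \to 0$ and $v \to 0$ limits handled explicitly:

import numpy as np

TWO_PI = 2.0 * np.pi

def _E(a, eps=1e-9):
    """E(a) = integral of exp(-i*2*pi*a*xi) over xi in [0, 1]; E(0) = 1."""
    if abs(a) < eps:
        return 1.0 + 0.0j
    return (1.0 - np.exp(-1j * TWO_PI * a)) / (1j * TWO_PI * a)

def f_delta(u, v, eps=1e-9):
    """Analytic Fourier transform of the unit right triangle with vertices
    (0,0), (1,0), (1,1): F(u,v) = integral of exp[-i*2*pi*(u*xi + v*eta)]
    over 0 <= eta <= xi <= 1. Integrating over eta first gives
    F = [E(u) - E(u+v)] / (i*2*pi*v), with limits taken as v -> 0."""
    if abs(v) < eps:
        if abs(u) < eps:
            return 0.5 + 0.0j                           # triangle's area
        a = -1j * TWO_PI * u
        return (np.exp(a) * (a - 1.0) + 1.0) / a**2     # int_0^1 xi e^(a*xi) dxi
    return (_E(u) - _E(u + v)) / (1j * TWO_PI * v)

# Sanity check against a brute-force Riemann sum over the triangle:
n = 1000
xi, eta = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
mask = eta <= xi
u, v = 1.3, -0.7
brute = np.sum(np.exp(-1j * TWO_PI * (u * xi + v * eta)) * mask) / n**2
print(f_delta(u, v), brute)   # should agree to roughly three decimal places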

Fig. 2 (a) Schematic illustration of a slanted triangle reconstruction. (b) and (c) Reconstruction results at planes I and II, where points A and B are located.

2.2 Texture mapping, occlusion, and phase adjustment

To achieve better performance, several details must be taken into account. For example, we adopt a modified Lambert brightness to avoid unexpected shading [10]. Another important feature is the texture of the scene: a textured 3D model gives a far more realistic impression than an untextured one. In our method, the amplitude of each triangle is a constant proportional to the color at the triangle's mass center. This requires the texture within each triangle to be as uniform in color as possible; otherwise, the triangle should be subdivided into smaller ones before extracting the texture. Hidden-surface removal is a necessary and troublesome issue, since an object may be occluded or partially overlapped by another. Before encoding, we project all triangles onto the hologram plane, sort them according to their distance, and discard the shielded rear triangles. Because the bandwidth of today's SLMs is narrow, and hence the viewing angle small, this simple method works well in our situation (a sketch follows).
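
The paper does not spell out the exact visibility test, so the following Python sketch makes a simplifying assumption: a triangle is discarded whole when its projected mass center is covered by the projection of a nearer triangle (all names are our own):

import numpy as np

def point_in_tri(p, a, b, c):
    """2D point-in-triangle test via the signs of edge cross products."""
    def cross(o, q, r):
        return (q[0]-o[0])*(r[1]-o[1]) - (q[1]-o[1])*(r[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def cull_hidden(triangles):
    """triangles: list of (3, 3) arrays of global vertices, scene in front of
    the hologram plane z_g = 0. Sort front-to-back by mass-center distance
    and drop a triangle whose projected center is covered by a nearer one."""
    order = sorted(triangles, key=lambda t: abs(np.mean(t[:, 2])))
    visible = []
    for t in order:
        cen = t.mean(axis=0)[:2]      # projection onto the hologram plane
        if not any(point_in_tri(cen, *v[:, :2]) for v in visible):
            visible.append(t)
    return visible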

One problem with using the analytical formulas is that visible mesh edges appear in the reconstruction. They are caused by interference between the diffraction fields of the shared edge of two adjacent triangles: instead of one real edge, we record two virtual triangle edges with an additional phase difference $\delta z$. Here we develop a phase adjustment technique to smooth such visible edges. Our method only needs to adjust the $z$ component of the mass center of each triangle, $z_c$, to an integer multiple of the wavelength. There are two reasons this works. First, when the scene is reconstructed, the viewing angle along the $z$ axis is narrow because of the limited bandwidth of today's SLMs, so only the $z$ component of the phase needs adjusting. Second, the light intensity at a triangle edge is $I = \alpha^2\left|1 + \exp(i2\pi\delta z/\lambda)\right|^2$, where $\alpha$ describes the response of a coherent system to a sharp edge and takes an approximate value of 0.5 at the edge [5]. When $\delta z$ is an integer multiple of the wavelength, $I \approx 1$; the intensity at the edge is then almost the same as inside the triangle, and the visible edges are smoothed. In practice, we set all the $z$ components of the triangle mass centers to integer multiples of the wavelength, so that $\delta z$ is also an integer multiple of the wavelength. This simple phase adjustment integrates easily into our analytical algorithm and remains compatible with the GPU platform. Since the adjustment is on the order of a wavelength, it is far too small to shift the reconstructed positions of the macroscopic triangles. The reconstructed scenes shown below demonstrate how well the phase adjustment works.
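
In code, the adjustment is essentially a one-line rounding step; a minimal sketch (names are our own):

import numpy as np

def snap_zc(z_c, wavelength):
    """Round each triangle's mass-center z component to the nearest integer
    multiple of the wavelength, so the edge phase difference between adjacent
    triangles is also a multiple of the wavelength and I ~ 1 at the edge.
    The shift is at most half a wavelength, far below the triangle scale."""
    return np.round(np.asarray(z_c, dtype=float) / wavelength) * wavelength

# e.g. wavelength 532 nm, mass centers near 15 cm:
print(snap_zc([0.150000123, 0.151234567], 532e-9))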

3. Parallel algorithm and implementation on CUDA

To accelerate the computation inexpensively, we consider a parallel algorithm based on the GPU. The CUDA compiler is employed as the programming environment; it compiles C-like source code for parallel applications on the GPU [23]. Data communication is always a troublesome problem in parallel calculation. In our full analytical encoding method, however, each sampling point on the hologram is calculated individually, so the data communication time is drastically reduced. We use a CPU server (Dell PE2900, Quad-Core Intel Xeon Pro X5460, 2 × 3.16 GHz) and a graphics card (GeForce GTX 285 GPU chipset) for comparison. Table 1 shows that with 120 triangles we achieve video rate. The performance for large meshes is even more encouraging: when the number of triangles exceeds three thousand, the CUDA algorithm runs more than 500 times faster than the serial CPU algorithm. A sketch of this pixel-parallel structure is given below Table 1.

Table 1. Holographic computation times for several triangle-mesh models at a resolution of 1024 × 768.
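
The mapping onto the GPU is natural because Eqs. (6) and (7) evaluate every hologram pixel independently. The following Python/NumPy sketch (our own reconstruction, reusing local_frame and f_delta from the sketches above; the actual implementation used CUDA, and sign conventions may differ from the paper's) vectorizes over the pixel grid, one triangle per loop iteration; on a GPU each pixel would be one thread:

import numpy as np
# assumes local_frame() and f_delta() from the earlier sketches

def field_of_triangle(tri, XH, YH, wl):
    """Analytic field of one triangle on the hologram grid, per Eqs. (6)-(7)."""
    tri = np.asarray(tri, dtype=float)            # (3, 3) global vertices
    R, c = local_frame(*tri)
    Rp = R.T                                      # the primed matrix (r'_ij)
    vl = (tri - c) @ R.T                          # local vertices, z_l = 0
    a11, a12, a13 = vl[1,0]-vl[0,0], vl[2,0]-vl[1,0], vl[0,0]
    a21, a22, a23 = vl[1,1]-vl[0,1], vl[2,1]-vl[1,1], vl[0,1]
    r0 = np.sqrt((XH-c[0])**2 + (YH-c[1])**2 + c[2]**2)
    xhp = Rp[0,0]*(XH-c[0]) + Rp[1,0]*(YH-c[1]) - Rp[2,0]*c[2] - Rp[2,0]*r0
    yhp = Rp[0,1]*(XH-c[0]) + Rp[1,1]*(YH-c[1]) - Rp[2,1]*c[2] - Rp[2,1]*r0
    fd = np.vectorize(f_delta)((a11*xhp + a21*yhp)/(wl*r0),
                               (a12*xhp + a22*yhp)/(wl*r0))
    jac = a11*a22 - a12*a21
    shift = np.exp(-2j*np.pi*(a13*xhp + a23*yhp)/(wl*r0))
    return np.exp(2j*np.pi*(c[2]+r0)/wl) / (1j*wl*r0) * jac * shift * fd

def hologram(triangles, nx=1024, ny=768, pitch=2e-6, wl=532e-9):
    """Superpose the analytic per-triangle fields over the pixel grid.
    Every (x_H, y_H) sample is independent: on CUDA, one thread per pixel."""
    XH, YH = np.meshgrid((np.arange(nx) - nx/2) * pitch,
                         (np.arange(ny) - ny/2) * pitch)
    field = np.zeros((ny, nx), dtype=complex)
    for tri in triangles:
        field += field_of_triangle(tri, XH, YH, wl)
    return field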

4. Results

To calculate a hologram of a true-life scene, we must first bring the real 3D scene into the computer. The image-based modeling and photogrammetry software ImageModeler [25] is adopted to generate 3D models from ordinary 2D digital photos. This software provides an inexpensive, flexible, and fast way to record a real scene, even a living subject.

Figures 3(a) and 3(c) are digital photos taken from two arbitrary viewpoints for modeling. No camera information is needed: we use ImageModeler for calibration and modeling, after which photorealistic texture maps of the model are automatically extracted from the pixel information of the source images. At this point we have the textured 3D scene model, which we finally encode with the full analytical CGH algorithm proposed above. The 3D scene model here contains 22408 triangles; the resolution of the CGH is 4096 × 3072 with a pixel pitch of 2 μm. The scene is at a distance of 15 cm from the hologram, and the pawns at the back are 1.76 cm behind the king in front.

Fig. 3 Images of the scene from different aspects: (a), (c) original photos of the true-life scene taken with a digital camera; (b), (d) and (e)-(g) numerical reconstructions of the scene from different aspects.

To verify the texture technique, numerical colored reconstruction is employed. The textured scene model is separated into its three primary color components, red, green, and blue; each monochromatic target is encoded and reconstructed independently, and the final colored result is merged from the three monochromatic reconstructions (a sketch of the merge follows). Figure 3 shows five aspects of the reconstructed scene to demonstrate the enhanced realism. The reconstructed views in Figs. 3(b) and 3(d) are consistent with Figs. 3(a) and 3(c), respectively. Moreover, since we have the 3D scene model, we can reconstruct not only the same aspects as the photos but also other aspects, as shown in Figs. 3(e)-3(g). The occlusion effects between the chess pieces create a strong sensation of depth. For instance, the king in front shields part of the right bishop in Fig. 3(e); when the scene is rotated clockwise by a small angle, the king shields no pieces [Fig. 3(f)]; when it is rotated further, the king shields the rear pawn. This demonstrates that our occlusion method works quite well.
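
A minimal sketch of the color merge (our own illustration; the inputs are assumed to be real-valued intensity images of equal shape, one per primary wavelength):

import numpy as np

def merge_rgb(recon_r, recon_g, recon_b):
    """Stack three monochromatic reconstructions into one color image.
    Joint normalization by the global maximum preserves the relative
    brightness of the three channels, i.e. the color balance."""
    rgb = np.stack([recon_r, recon_g, recon_b], axis=-1).astype(float)
    rgb /= rgb.max()
    return (255 * rgb).astype(np.uint8)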

Thanks to our phase adjustment technique, the artificial edges disappear from the reconstruction, so the reconstructed scenes exhibit a convincing solid appearance. Meanwhile, the objects retain their positions, unaffected by the phase adjustment. The shading technique used here also adds to the impression of surface curvature.

In Fig. 4, simulation results for two cases, focused on the king [Fig. 4(a)] and on the pawn [Fig. 4(b)], are presented. The pawn is clearly defocused and blurred when focusing on the king, and vice versa. This defocus effect further enhances the sensation of depth and further demonstrates that the 3D true-life scene is faithfully reconstructed.

Fig. 4 Images of the numerical reconstruction: (a) focused on the king, (b) focused on the pawn.

5. Conclusion

We have proposed a novel method for high-speed hologram generation of 3D real scenes. A fully analytic theory is derived to describe the object field directly on the hologram plane. No FFT is needed, so its sampling limitations are avoided. To remove the artificial visible edges that arise in analytical Fourier transforms of polygon-based models, we developed a simple but effective phase adjustment technique. Finally, we employed the GPU to accelerate the calculation and obtained a speed-up of more than five hundred times over the serial CPU algorithm. One limitation of our theory is the modified paraxial approximation, but it appears that this limitation can be overcome by using a curved array of spatial light modulators [26].

Since we achieve such speed with a single GPU, we believe that our full analytical algorithm running on multiple GPUs will make real-time display a reality. The method described above is suitable for 3D data not only from image-based modeling but also from other sources such as CT, NMR, laser scanners, and 3D design software. Furthermore, given two synchronized videos from different aspects, we could record and display a moving scene.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (10874250, 10674183, 10804131), the Ph.D. Degrees Foundation of the Education Ministry of China (20060558068), and SYSU Young Teacher Funding (2009300003161450). J.W.D.'s email: dongjwen@mail.sysu.edu.cn.

References and links

1. K. Iizuka, "Welcome to the wonderful world of 3D: introduction, principles and history," Opt. Photon. News 17(7), 42–51 (2006).

2. See, for example, the Stanford 3D Scanning Repository and XYZ RGB Inc.

3. J. L. Zhao, H. Z. Jiang, and J. L. Di, "Recording and reconstruction of a color holographic image by using digital lensless Fourier transform holography," Opt. Express 16(4), 2514–2519 (2008).

4. C. Slinger, C. Cameron, and M. Stanley, "Computer-generated holography as a generic display technology," Computer 38(8), 46–53 (2005).

5. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2004).

6. M. Lucente, "Interactive Computation of Holograms Using a Look-Up Table," J. Electron. Imaging 2(1), 28–34 (1993).

7. S. C. Kim and E. S. Kim, "Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods," Appl. Opt. 48(6), 1030–1041 (2009).

8. D. Leseberg and C. Frère, "Computer-generated holograms of 3-D objects composed of tilted planar segments," Appl. Opt. 27(14), 3020–3024 (1988).

9. N. Delen and B. Hooker, "Free-space beam propagation between arbitrarily oriented planes based on full diffraction theory: a fast Fourier transform approach," J. Opt. Soc. Am. A 15(4), 857–867 (1998).

10. K. Matsushima, H. Schimmel, and F. Wyrowski, "Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves," J. Opt. Soc. Am. A 20(9), 1755–1762 (2003).

11. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, "Computer generated holograms from three dimensional meshes using an analytic light transport model," Appl. Opt. 47(10), 1567–1574 (2008).

12. H. Kim, J. Hahn, and B. Lee, "Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography," Appl. Opt. 47(19), D117–D127 (2008).

13. E. Lalor, "Conditions for the Validity of the Angular Spectrum of Plane Waves," J. Opt. Soc. Am. 58(9), 1235–1237 (1968).

14. J. García, D. Mas, and R. G. Dorsch, "Fractional-Fourier-transform calculation through the fast-Fourier-transform algorithm," Appl. Opt. 35(35), 7013–7018 (1996).

15. D. Mendlovic, Z. Zalevsky, and N. Konforti, "Computation considerations and fast algorithms for calculating the diffraction integral," J. Mod. Opt. 44, 407–414 (1997).

16. D. Mas, J. Garcia, C. Ferreira, L. M. Bernardo, and F. Marinho, "Fast algorithms for free-space diffraction patterns calculation," Opt. Commun. 164(4-6), 233–245 (1999).

17. K. Matsushima and T. Shimobaba, "Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields," Opt. Express 17(22), 19662–19673 (2009).

18. M. Zhang, T. S. Yeo, L. W. Li, and Y. B. Gan, "Parallel FFT Based Fast Algorithms for Solving Large Scale Electromagnetic Problems," in IEEE Antennas and Propagation Society International Symposium (2006), pp. 3995–3998.

19. J. Watlington, M. Lucente, C. Sparrell, V. M. Bove, and J. Tamitani, "A Hardware Architecture for Rapid Generation of Electro-Holographic Fringe Patterns," Proc. SPIE 2406, 172–183 (1995).

20. C. Petz and M. Magnor, "Fast Hologram Synthesis for 3D Geometry Models using Graphics Hardware," Proc. SPIE 5005, 266–275 (2003).

21. V. M. Bove, Jr., W. J. Plesniak, T. Quentmeyer, and J. Barabas, "Real-Time Holographic Video Images with Commodity PC Hardware," Proc. SPIE 5664, 255–262 (2005).

22. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, "Computer generated holography using parallel commodity graphics hardware," Opt. Express 14(17), 7636–7641 (2006).

23. NVIDIA, "NVIDIA CUDA Compute Unified Device Architecture Programming Guide Version 2.1," (2008).

24. R. N. Bracewell, K.-Y. Chang, A. K. Jha, and Y.-H. Wang, "Affine theorem for two-dimensional Fourier transform," Electron. Lett. 29(3), 304–306 (1993).

25. "Autodesk® ImageModeler™," http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=11390028

26. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, "Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators," Opt. Express 16(16), 12372–12386 (2008).
