Optica Publishing Group

Full-color computer-generated holograms using 3-D Fourier spectra

Open Access

Abstract

A new method for synthesizing a full-color computer-generated hologram (CGH) of real-existing objects has been proposed. In this method, the synthesizing process of CGHs and the adjustments of magnifications for each wavelength are considered based on parabolic sampling of three-dimensional (3-D) Fourier spectra. Our method requires only one-dimensional (1-D) azimuth scanning of objects, does not require any approximations in the synthesizing process, and can perform efficient magnification adjustments required for color reconstruction. Experimental results have been presented to verify the principle and validity of this method.

©2004 Optical Society of America

1. Introduction

A computer-generated hologram (CGH) is one of the most important techniques for three-dimensional (3-D) imaging, since it can yield ideal 3-D visual effects even for virtual 3-D objects [1]. Several encoding methods for synthesizing CGHs have been proposed. Yang et al. studied the encoding of complex information into one phase-type CGH with area division and phase retardation [2]. Mishina et al. developed a technique for enlarging the viewing angle with a commercial liquid crystal display and a spatial filter synchronized with it [3].

Fig. 1. A virtual optical system for 3-D objects.

However, it is extremely difficult to synthesize a CGH for real-existing 3-D objects because complete 3-D information about the objects is required. Recording of projection images is one of the most promising techniques for outdoor recording because it can be performed using incoherent white light and is not sensitive to external vibration. Multiple projection images from different viewpoints are required to obtain the 3-D object information. Yatagai utilized a projection process for synthesizing CGHs [4]; however, the final product was an auto-stereoscopic display rather than a CGH.

In recent years, Abookasis et al. and Sando et al. proposed methods for synthesizing a CGH from multiple projection images [5, 6]. These methods have potential for outdoor recording of 3-D objects, and it is very easy to extend them to full-color reconstruction by using a color CCD camera [7]; however, in principle, they require two-dimensional (2-D) mechanical scanning of the 3-D objects. This entails an enormous recording time and precise, complicated control of the CCD camera.

In this paper, we propose an efficient method for synthesizing a CGH from projection images. The synthesizing process of CGHs is considered geometrically in the 3-D Fourier space. This reduces the redundant 2-D scanning of objects [5, 6, 7] to an essential one-dimensional (1-D) azimuth scan in the recording process. In addition, an efficient method of adjusting the magnifications is also proposed. In order to verify the effectiveness of this method, experimental results with actually recorded projection images are demonstrated.

2. Principle

2.1. Relation between object waves and 3-D Fourier spectrum

A virtual optical system for recording 3-D objects is shown in Fig. 1. We assume that 3-D object surfaces reflect waves emerging from external light sources isotropically. In general, the spatial reflectivity distribution of 3-D objects is expressed as complex values. However, we must treat the spatial phase distribution of the reflectivity as spatially unvaried because a CCD cannot detect phase information without an interferometer. The 3-D spatial distribution with spatially unvaried phase is represented by O(x, y, z), which corresponds to the square root of the intensity. The complex wavefronts reflected by the objects are observed in the Fourier plane in Fig. 1. The distribution g(x_0, y_0) in the Fourier plane is expressed as follows [6]:

g(x_0, y_0) = \iiint O(x, y, z) \exp\left\{ i \frac{2\pi}{\lambda} \left[ \frac{x_0 x + y_0 y}{f} - \frac{(x_0^2 + y_0^2)\, z}{2 f^2} \right] \right\} dx\, dy\, dz, \tag{1}

where λ and f are the virtual wavelength of the incident light and the focal length of the lens introduced in Fig. 1, respectively. Our principle is based on the relation between g(x_0, y_0) and the 3-D Fourier spectrum of the distribution O(x, y, z). The relation is revealed by substituting u = x_0/(λf) and v = y_0/(λf) into Eq. (1):

g(u, v) = \iiint O(x, y, z) \exp\left\{ i 2\pi \left[ u x + v y - \frac{\lambda}{2} (u^2 + v^2) z \right] \right\} dx\, dy\, dz
= \left\{ \iiint O(x, y, z) \exp\left[ i 2\pi (u x + v y + w z) \right] dx\, dy\, dz \right\}_{w = -\lambda (u^2 + v^2)/2}
= \mathcal{F}\left[ O(x, y, z) \right]_{w = -\lambda (u^2 + v^2)/2}, \tag{2}

where 𝓕[·] denotes the 3-D Fourier-transform operator. The subscripts in Eq. (2) represent a paraboloid of revolution in the 3-D Fourier space (u, v, w). Consequently, the wavefront distribution at the Fourier plane in Fig. 1 is completely identical to the components on this paraboloid of revolution in the 3-D Fourier space of O(x, y, z), without any approximations, unlike other similar methods [5, 6, 7].
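The relation in Eq. (2) can be sketched numerically: compute the 3-D Fourier spectrum of a volume and sample it on the paraboloid w = −λ(u² + v²)/2. The grid sizes, pixel pitch, test object, and nearest-neighbour lookup below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N = 64                 # samples per axis (assumed)
dx = 10e-6             # spatial sampling pitch in metres (assumed)
wavelength = 632.8e-9  # red wavelength in metres

# Synthetic 3-D object O(x, y, z): two point scatterers at different depths.
O = np.zeros((N, N, N))
O[N // 2, N // 2, N // 4] = 1.0
O[N // 3, N // 3, 3 * N // 4] = 1.0

# 3-D Fourier spectrum F[O](u, v, w), with zero frequency centred.
F = np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(O)))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))  # cycles/m along each axis

# For every (u, v), sample the spectrum at the w nearest to the paraboloid
# w = -(lambda / 2) * (u^2 + v^2) of Eq. (2).
U, V = np.meshgrid(freqs, freqs, indexing="ij")
w_target = -0.5 * wavelength * (U**2 + V**2)
w_idx = np.abs(freqs[None, None, :] - w_target[:, :, None]).argmin(axis=2)
g = F[np.arange(N)[:, None], np.arange(N)[None, :], w_idx]

# g now approximates the Fourier-plane distribution g(u, v) of Eq. (2).
print(g.shape)
```

On practical grids the paraboloid is very shallow, so a nearest-neighbour lookup is coarse; interpolation along w would be the natural refinement.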

Fig. 2. Schematics of the principle of 3-D CST. (a) Orthogonal projection in the real space and (b) a sectional plane in the 3-D Fourier space obtained from a projection image.

2.2. Extraction method for paraboloid of revolution

Equation (2) implies that the distribution g(u, v) can be acquired indirectly through direct access to the 3-D Fourier space of O(x, y, z). To achieve this, the 3-D central slice theorem (CST) is essential [8]. The 3-D CST ensures that partial components of the 3-D Fourier spectrum of a 3-D object can be obtained from an orthogonal projection image of the object. First, the 3-D object is projected onto a plane whose normal vector is inclined by θ to the z-axis in the z-x plane. The projection image is then 2-D Fourier-transformed. The resulting 2-D Fourier spectrum corresponds to a sectional plane whose normal vector is inclined by θ to the w-axis in the w-u plane of the 3-D Fourier space of the object, as shown in Fig. 2. Therefore, partial components on the paraboloid of revolution represented by Eq. (2) can be obtained from one projection image using this theorem, as shown in Fig. 3.
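The 3-D CST can be checked numerically in its simplest configuration, θ = 0, where the orthogonal projection is a plain sum along z and the corresponding sectional plane is the w = 0 slice of the 3-D spectrum. The random test volume below is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
O = rng.random((N, N, N))  # arbitrary test volume O(x, y, z)

# Orthogonal projection along z (the theta = 0 case).
projection = O.sum(axis=2)

# 2-D Fourier transform of the projection image ...
slice_from_projection = np.fft.fftn(projection)

# ... equals the w = 0 sectional plane of the 3-D Fourier spectrum of O.
F3 = np.fft.fftn(O)
central_slice = F3[:, :, 0]

print(np.allclose(slice_from_projection, central_slice))  # True
```

For a tilted projection (θ ≠ 0) the same identity holds on a tilted sectional plane, which is what the recording geometry of Fig. 2 exploits.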

Figure 3(a) shows the components represented by Eq. (2). Only the intersections between the sectional Fourier plane in Fig. 2(b) and the paraboloid of revolution in Fig. 3(a) can be extracted from the sectional Fourier plane. The intersections are calculated by solving the following simultaneous equations:

w \cos\theta + u \sin\theta = 0, \tag{3}
w = -\frac{\lambda}{2}(u^2 + v^2). \tag{4}

Equations (3) and (4) represent the planar equation shown in Fig. 2(b) and the equation of the paraboloid of revolution shown in Fig. 3(a), respectively. These simultaneous equations give the following solution:

\left( u - \frac{\tan\theta}{\lambda} \right)^2 + v^2 = \left( \frac{\tan\theta}{\lambda} \right)^2, \quad w = -u \tan\theta. \tag{5}

This solution shows that the intersections between the sectional Fourier plane and the paraboloid of revolution form an ellipse on the sectional Fourier plane; these are the components extracted from one projection, drawn as a red line in Fig. 3(b). Moreover, the solution shows that the projection of this ellipse onto the u-v plane of the 3-D Fourier space is a circle of radius tan θ/λ, as shown in Fig. 4(a). The position of the center and the radius depend on the direction of projection. Obtaining all the components on the paraboloid of revolution is therefore equivalent to filling the u-v plane with such circles. Filling the 2-D u-v plane does not require 2-D scanning of the 3-D objects: 1-D scanning is sufficient, because 1-D components on the 2-D u-v plane are extracted from a single projection. Although several scanning methods can accomplish this, the scheme shown in Fig. 5 is the best in terms of the feasibility of a recording optical system. In this scheme, the inclination θ and the azimuth ϕ determine the radius of the extractive circle and the azimuth position of its center, respectively (ϕ = 0 in Fig. 4(a)). The 3-D objects are imaged onto a CCD plane by an imaging lens. This projection is not exactly orthogonal; however, it can be approximated as such when the distance from the origin to the CCD camera is considerably longer than the depth of the 3-D objects. The CCD camera records the intensity of the projection images while revolving around the z-axis. Since the locus of the CCD camera and the Fourier components shown in Fig. 3(a) are rotationally symmetric about the z- and w-axes, respectively, the components extracted from a series of projection images recorded by this system fill the u-v plane with circles, as shown in Fig. 4(b). Hence, this scanning method provides all the components on the paraboloid of revolution while requiring only 1-D azimuth scanning of the objects, unlike other similar methods [5, 6, 7].
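The geometry of Eq. (5) and of the azimuth scan can be summarized in a few lines: for a camera inclined by θ and revolved to azimuth ϕ, one projection contributes a circle in the u-v plane of radius tan θ/λ that passes through the origin, with its center rotated to azimuth ϕ. The helper function and parameter values below (θ ≈ 17°, the experimental inclination) are an illustrative sketch, not the authors' code.

```python
import math

def extractive_circle(theta_deg, phi_deg, wavelength):
    """Return (centre_u, centre_v, radius) of the circle extracted in the
    u-v plane from one projection at inclination theta and azimuth phi."""
    r = math.tan(math.radians(theta_deg)) / wavelength
    phi = math.radians(phi_deg)
    return r * math.cos(phi), r * math.sin(phi), r

wavelength = 632.8e-9
for phi in (0.0, 90.0, 180.0):
    cu, cv, r = extractive_circle(17.0, phi, wavelength)
    # The centre lies at distance r from the origin, so every circle
    # passes through the origin of the u-v plane, as in Fig. 4(b).
    assert math.isclose(cu**2 + cv**2, r**2, rel_tol=1e-12)
```

Revolving ϕ over 0°–360° sweeps this circle around the origin, which is exactly how the 1-D azimuth scan fills the u-v plane.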

Fig. 3. Paraboloid of revolution. (a) Components identical to object waves and (b) intersections between the paraboloid of revolution and a sectional Fourier plane.

Fig. 4. Extractive area on the u-v plane from (a) one projection image and (b) a series of projection images.

Fig. 5. A recording optical system.

Fig. 6. Adjustments of magnifications.

2.3. Adjustments of magnifications for each wavelength

Extending this method to color reconstruction is rather simple, provided that a color CCD camera is used. Sando et al. have already proposed such a full-color CGH [7]. In full-color reconstruction, three light sources with wavelengths λR, λG, and λB, corresponding to red, green, and blue, are essential. Moreover, adjustments of the magnifications for each wavelength are required. Sando et al. performed these adjustments by changing the movement area of the CCD camera for each wavelength. However, that approach is not applicable to the method proposed here because the locus of the CCD camera is fixed for every wavelength. In this method, the adjustment of the magnification along the z-axis, diagrammed in Fig. 6, is performed by changing the radius of the extractive circle for each wavelength. U and V denote the maximum spatial frequencies, which depend not on the wavelengths but on the size and pixel number of a projection image. Therefore, all three color components are obtainable with a single azimuth scan using a color CCD camera. The magnifications of the x- and y-axes are fixed to one [7].
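Because the extractive-circle radius tan θ/λ is wavelength-dependent, the region selected from each color channel's 2-D spectrum must be rescaled per wavelength. The short calculation below illustrates this; normalising the scale factors to the red channel is an assumption made here for illustration, not a detail stated in the paper.

```python
import math

theta = math.radians(17.0)  # camera inclination used in the experiment
wavelengths = {"red": 632.8e-9, "green": 514.5e-9, "blue": 488.5e-9}

# Radius of the extractive circle in the u-v plane for each channel.
radii = {c: math.tan(theta) / lam for c, lam in wavelengths.items()}

# Relative scale factor per channel (red taken as the reference here).
scale = {c: radii["red"] / r for c, r in radii.items()}

for colour, lam in wavelengths.items():
    print(f"{colour}: radius = {radii[colour]:.4e} cycles/m, "
          f"scale vs red = {scale[colour]:.4f}")
```

Shorter wavelengths yield larger circles, so the blue channel covers the largest region of the u-v plane and must be scaled down the most to match the red channel.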

3. Experimental results

In order to verify the principle described above, we performed an experiment. A total of 90 projection images were recorded with the recording optical system shown in Fig. 5. Some typical examples are shown in Fig. 7. These images are divided into color components, each component is binarized, and the noise component is removed. The size and pixel number of each image are 1 × 1 cm and 256 × 256 pixels, respectively. A grape and a mushroom on square planes are located at z ≅ 0 mm and z ≅ 4.9 mm in the object space, respectively. Three wavelengths, 632.8 nm, 514.5 nm, and 488.5 nm, corresponding to red, green, and blue, respectively, are used for full-color reconstruction. The angle between the optical axis of the CCD camera and the z-axis is θ ≅ 17°. Under these conditions, the magnification along the z-axis is approximately 53 [7]. Therefore, the grape and the mushroom should be reconstructed at z ≅ 0 cm and z ≅ 26 cm, respectively. According to the principle proposed here, the distributions at the Fourier plane in Fig. 1 are synthesized from the above projection images. The reconstructed images can then be easily obtained by calculating back propagation from the distribution at the Fourier plane onto arbitrary sectional planes in the object space. These procedures are performed for each color component, and the three color components are superimposed at the final step. The reconstructed images are shown in Fig. 8. As can be observed, each of the two objects is clearly reconstructed at its corresponding position. The adjustments of the magnifications for each wavelength are also successful. Thus, this method can reconstruct 3-D full-color objects from a substantially smaller number of projection images than the previous method [7].
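The back-propagation step above can be sketched with the angular-spectrum (Fresnel transfer function) method: multiply the 2-D spectrum of the Fourier-plane field by a quadratic phase and inverse-transform to refocus at depth z. The pixel pitch, the uniform stand-in field, and the function name are illustrative assumptions; only the refocusing distance (z ≅ 26 cm for the mushroom) comes from the experiment.

```python
import numpy as np

def back_propagate(g, wavelength, pitch, z):
    """Back-propagate a sampled field g by distance z using the Fresnel
    transfer function (a sketch, not the authors' implementation)."""
    n = g.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Negating z in the Fresnel transfer function propagates backwards.
    H = np.exp(-1j * np.pi * wavelength * (-z) * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(g) * H)

N = 256
g = np.ones((N, N), dtype=complex)  # stand-in for the synthesized field
field = back_propagate(g, 632.8e-9, 10e-6, 0.26)  # refocus at z ~ 26 cm
intensity = np.abs(field) ** 2      # what a reconstructed image would show
```

Repeating this for each color channel and superimposing the three intensities mirrors the final compositing step described above.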

Fig. 7. Color projection images at θ=17°.

Fig. 8. Numerically reconstructed images.

4. Conclusion

An efficient method for synthesizing full-color CGHs from multiple projection images has been proposed. In this method, holographic patterns are calculated using the 3-D Fourier spectra of 3-D objects. The proposed method requires only 1-D mechanical scanning in the object space, unlike our previous method, and does not require any approximations. Therefore, this method is superior to other similar methods in terms of the efficiency of projection data, the quality of reconstructed images, and the practical feasibility of the recording system. An experimental demonstration has verified the principle and validity of this method.

This method is concerned with the components on a paraboloid of revolution in 3-D Fourier space. Consequently, it can also be applied in fields that use 3-D Fourier spectra, such as X-ray CT, MRI, diffraction tomography, and others.

References and links

1. A. W. Lohmann and D. P. Paris, "Binary Fraunhofer holograms, generated by computer," Appl. Opt. 6, 1739–1748 (1967).

2. M. Yang and J. Ding, "Area encoding for design of phase-only computer-generated holograms," Opt. Commun. 203, 51–60 (2002).

3. T. Mishina, M. Okui, and F. Okano, "Viewing-zone enlargement method for sampled hologram that uses high-order diffraction," Appl. Opt. 41, 1489–1499 (2002).

4. T. Yatagai, "Stereoscopic approach to 3-D display using computer-generated holograms," Appl. Opt. 15, 2722–2729 (1976).

5. D. Abookasis and J. Rosen, "Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints," J. Opt. Soc. Am. A 20, 1537–1545 (2003).

6. Y. Sando, M. Itoh, and T. Yatagai, "Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects," Opt. Lett. 28, 2518–2520 (2003).

7. Y. Sando, M. Itoh, and T. Yatagai, "Color computer-generated holograms from projection images," Opt. Express 12, 2487–2493 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-11-2487.

8. M.-Y. Chiu, H. H. Barrett, and R. G. Simpson, "Three-dimensional reconstruction from planar projections," J. Opt. Soc. Am. 70, 755–762 (1980).
