Optica Publishing Group

Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method

Open Access

Abstract

A large-scale full-parallax computer-generated hologram (CGH) with four billion (2^{16} \times 2^{16}) pixels is created to reconstruct a fine true 3D image of a scene with occlusions. The polygon-based method numerically generates the object field of a surface object whose shape is provided as a set of vertex data of polygonal facets, while the silhouette method makes it possible to reconstruct the occluded scene. A novel technique using a segmented frame buffer is presented for handling and propagating large wave fields even when the whole wave field cannot be stored in memory. We demonstrate that the full-parallax CGH, calculated by the proposed method and fabricated by a laser lithography system, reconstructs a fine 3D image accompanied by a strong sensation of depth.

© 2009 Optical Society of America

1. Introduction

The technology of computer-generated holograms (CGHs) is the counterpart of computer graphics in holography. The technology has a long history and is sometimes referred to as the ultimate 3D technology, because CGHs not only produce a sensation of depth but also generate light as if from the objects themselves. However, currently available CGHs cannot yet produce fine true 3D images accompanied by a strong sensation of depth. Such fine CGHs commonly require two conditions: first, the CGH must have a large viewing zone to acquire the autostereoscopic property, i.e., motion parallax; second, the dimensions of the CGH must be large enough for the reconstructed 3D object to be observed with both naked eyes. Both conditions lead to an extremely large number of pixels, because a large viewing zone requires a fine pixel pitch, and large dimensions at that pitch require an enormous pixel count. In addition, scenes with occlusions should be reconstructed to give a CGH a strong sensation of depth, because occlusion is one of the most important mechanisms in the perception of 3D scenes.

The reason why fine 3D images have not been produced by CGH technology is that there has been no practical technique for computing the object fields of such high-definition CGHs reconstructing 3D scenes with occlusions. Computation of the object fields must be performed for an extremely large number of sampling points within a reasonable span of time (typically a few hours, or at most a few days), but this is very difficult to achieve even on a state-of-the-art computer. Research over the past few decades has centered on ray-oriented point-based methods, which are widely used for calculating the wave field. However, point-based methods are very time consuming, especially when full-parallax CGHs of surface objects are computed. Although many techniques for accelerating point-based methods have been proposed and developed in the past decade [1, 2, 3, 4, 5, 6, 7], the attainable computation time or CGH size is still insufficient for producing fine full-parallax CGHs. Furthermore, there is no practical algorithm for full-parallax handling of occlusions in 3D scenes; this remains an unresolved and fundamentally difficult problem of the point-based methods.

A field-oriented polygon-based method has been proposed [8] to overcome the limits of the point-based methods. In this technique, an object is composed of polygonal facets, and each polygon is regarded as a surface source of light. The field emitted by a polygon is referred to as the polygon field, and the field of an object is computed as the sum of the polygon fields. The numerical operations necessary for computing a polygon field are a double fast Fourier transform (FFT) and an interpolation for each polygon. Consequently, computation of a polygon field is slower than that of the spherical wave used in point-based methods. However, the number of polygons necessary for forming the object surfaces is much smaller than the number of point sources of light. As a result, the total computation time of the object field by the polygon-based method is shorter than that of the point-based methods. Several researchers have already proposed variations of our method that compute faster but sacrifice texture mapping and uniform diffusiveness [9, 10].

In calculating object fields, light-shielding by obstacles should be realized to add an effect of mutual occlusion to the 3D scene reconstructed by the CGH. We have also proposed a practical solution to this problem. It is based on wave-optical consideration of field propagation [11, 12, 13]. This technique is referred to as silhouette masking or simply the silhouette method. Silhouette masking is applicable not only to polygon-based methods but also to point-based methods. However, silhouette masking best matches the polygon-based method because both methods use field propagation operations.

Despite these advantages, polygon-based methods have so far been unable to produce fine 3D images because of the inherent difficulty of processing the field in segments. Since the polygon-based methods use numerical field propagation, the size of the main memory needed for storing complex-valued wave fields limits the number of pixels of the CGH. In this study, we propose a technique to overcome this limit on the number of pixels in polygon-based methods. In the proposed technique, the large object field is divided into small segments. Only a few segments of the field are stored in the main memory simultaneously and processed by multiple processors. Field propagation is also performed on segmented fields by using the shifted-Fresnel method [14].

We demonstrate the implementation of the large-scale CGH using a laser lithography system. This investigation is intended to produce a CGH by combining these techniques. The CGH reconstructs fine true 3D images that can be appreciated as works of art.

2. 3D Scene and 3D Object

The CGH created in this research is named “The Venus” because the 3D object is similar to the famous statue of the Venus de Milo. The Venus statue is 5.8 cm in height and composed of 1396 polygons. The shape of the Venus is provided as a set of vertex coordinates of the polygons. The scene including the Venus statue and the hologram coordinates used in the computation are shown in Fig. 1. The hologram coordinate system (\hat{x}, \hat{y}, \hat{z}) serves as the world coordinates of the 3D scene, in which the hologram is placed in the (\hat{x}, \hat{y}, 0) plane. The Venus statue is placed at \hat{z} = 15 cm, behind the hologram. In addition, wallpaper bearing a simple planar image with a binary check pattern is placed at \hat{z} = 30 cm, behind the Venus statue, to provide an occlusion and enhance the viewer’s sensation of depth. Both the wallpaper and the hologram have the same dimensions, approximately 6.5 cm × 6.5 cm. Some parameters used for creating the Venus CGH are summarized in Table 1.

3. Polygon-Based Method for Computing 3D Objects

In the polygon-based method, a wave field, given in the tilted plane that includes the polygon, represents the polygonal surface source of light. The polygon field in a plane parallel to the hologram is computed from the wave field in the tilted plane by using rotational transformation of the wave fields [15, 16]. This section comprehensively summarizes the details of the polygon-based method proposed in [8].

3A. Local Coordinate System and Rotation Matrix

The procedure for computing the object field of a triangular prism is shown in Fig. 2 as an example. There are three polygons, P_1, P_2, and P_3, on the front face of the object.

A tilted local coordinate system r_n = (x_n, y_n, z_n) is defined for each polygon P_n, as shown in Fig. 2a. A second set of local coordinate systems, referred to as the parallel local coordinates, is defined as \hat{r}_n = (\hat{x}_n, \hat{y}_n, \hat{z}_n). All axes of a parallel local coordinate system are parallel to those of the hologram coordinates, but its origin is shared with the corresponding tilted coordinate system. The parallel local coordinates are transformed from or to the tilted local coordinates by

\hat{r}_n = T_n r_n, \qquad r_n = T_n^{-1} \hat{r}_n,   (1)
where T_n is the rotation matrix for the polygon P_n.

There is an arbitrariness in determining the matrix Tn. We use the following Rodrigues rotation formula:

T_n = \cos\theta_n\, I + (1 - \cos\theta_n)\, s_n s_n^{\mathrm{t}} + \sin\theta_n\, [s_n]_\times,   (2)
where I and s_n are the 3×3 identity matrix and the unit vector of the rotation axis, respectively. The symbols [s_n]_\times and s_n^{\mathrm{t}} denote the cross-product matrix and the transpose of s_n. Here, the vector s_n is obtained from the normal vector n_n of the polygon P_n and the unit vector \hat{z} = (0, 0, 1) of the hologram coordinates as
s_n = \hat{z} \times n_n / |\hat{z} \times n_n|.   (3)
The rotation angle \theta_n in Eq. (2) is given by \theta_n = \cos^{-1}(\hat{z} \cdot n_n).
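As a concrete illustration, the Rodrigues construction of Eqs. (2) and (3) can be sketched in a few lines of Python; the function name and the degenerate-axis guard are our own additions, not part of the paper's software.

```python
import numpy as np

def rotation_matrix(normal):
    """Rotation matrix T_n for a polygon with normal vector `normal`,
    built from the Rodrigues formula with axis s_n = z_hat x n_n."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    z_hat = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z_hat, n)
    norm = np.linalg.norm(axis)
    if norm < 1e-12:            # polygon already parallel to the hologram
        return np.eye(3)
    s = axis / norm             # unit rotation axis of Eq. (3)
    theta = np.arccos(np.clip(np.dot(z_hat, n), -1.0, 1.0))
    s_cross = np.array([[0.0, -s[2], s[1]],
                        [s[2], 0.0, -s[0]],
                        [-s[1], s[0], 0.0]])    # cross-product matrix [s]_x
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(s, s)
            + np.sin(theta) * s_cross)
```

The resulting matrix maps the tilted local z axis onto the polygon normal, i.e., it transforms tilted local coordinates into parallel local coordinates as in Eq. (1).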

3B. Surface Functions

The distribution of complex amplitudes, referred to as a surface function, is defined in the (x_n, y_n, 0) plane of the tilted local coordinates of the polygon P_n. This function plays an important role in giving the polygon its appearance in the optical reconstruction. Examples of the surface functions of the triangular prism are shown in Fig. 2b.

The surface function h_n(x_n, y_n) for a polygon P_n is generally given in the form

h_n(x_n, y_n) = a_n(x_n, y_n) \exp[i\phi(x_n, y_n)],   (4)

where the real-valued amplitude a_n(x_n, y_n) gives the shape of the polygon as follows:

a_n(x_n, y_n) = \begin{cases} A_n & \text{inside the polygon} \\ 0 & \text{outside the polygon.} \end{cases}   (5)

To shade the object as intended by the designer [8], the amplitude A_n takes a different value for each polygon. A texture can also be given to the polygon as a modulation of the amplitude; in that case, A_n is not constant but a function of x_n and y_n.

If the surface function is a simple polygon-shaped distribution of real-valued amplitude, it behaves like an aperture shaped like the polygon. In this case, light from the polygon does not spread over the hologram. The amplitude of the surface function should therefore be multiplied by a phase distribution \phi(x_n, y_n) to give diffusiveness to the polygon field. Random functions are candidates for the diffusive phase. However, fully random functions are not appropriate, because random phases are discontinuous and include large spatial frequencies; as a result, they usually cause speckle in the reconstruction. In this paper, the speckle-free quasi-random phase distribution proposed as a digital diffuser for Fourier holograms [17] is used for the phase distribution.
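The definitions of Eqs. (4) and (5) with a diffusive phase can be sketched as follows; here a plain pseudo-random phase stands in for the speckle-free quasi-random diffuser of Ref. [17], and the grid, triangle shape, and function name are our own illustrative choices.

```python
import numpy as np

def surface_function(vertices, amplitude, n=256, rng=None):
    """Complex surface function h_n on an n x n grid over the unit square.
    `vertices` is a sequence of three (x_n, y_n) triangle corners given
    counter-clockwise; `amplitude` is A_n of Eq. (5)."""
    rng = np.random.default_rng(rng)
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    # half-plane (edge-function) test for "inside polygon" in Eq. (5)
    inside = np.ones_like(x, dtype=bool)
    v = np.asarray(vertices, dtype=float)
    for i in range(3):
        ax, ay = v[i]
        bx, by = v[(i + 1) % 3]
        inside &= (bx - ax) * (y - ay) - (by - ay) * (x - ax) >= 0
    # stand-in diffusive phase (NOT the quasi-random diffuser of Ref. [17])
    phase = rng.uniform(0, 2 * np.pi, size=x.shape)
    return amplitude * inside * np.exp(1j * phase)
```

The returned array is A_n in magnitude inside the triangle and exactly zero outside, as Eq. (5) requires.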

The wave field of a polygon is diffused by the diffuser phase. However, if the carrier frequency of the surface function is zero on the surface of the polygon, the emitted field propagates mainly in the direction normal to the polygon, and thus the polygon field cannot reach the hologram effectively. The center of the spectrum of the surface function should be shifted so that the light travels along the \hat{z} axis and intersects the plane parallel to the hologram almost perpendicularly. This shift operation is part of the rotational transformation described in the following section.

3C. Rotational Transformation and Spectrum Remapping

The polygons forming the object surfaces are usually not parallel to the hologram, and thus wave fields in the hologram plane cannot be obtained simply by using ordinary formulas for the propagation of wave fields between parallel planes, such as the Fresnel diffraction formula. Before translational propagation, the surface function must be rotationally transformed into the wave field given in a plane parallel to the hologram by using the formula for rotational transformation [16].

Surface functions are Fourier transformed as follows:

H_n(u_n, v_n) = \mathcal{F}\{h_n(x_n, y_n)\},   (6)
where \mathcal{F} and \mathcal{F}^{-1} stand for the Fourier transform and the inverse Fourier transform, respectively. The Fourier spectrum H_n(u_n, v_n) of polygon P_n is obtained numerically by using the FFT. Here, u_n and v_n are the Fourier frequencies.

Supposing that the transformation matrix for changing the parallel local coordinates into the tilted local coordinates is given as

T_n^{-1} = \begin{bmatrix} t_{1,n} & t_{2,n} & t_{3,n} \\ t_{4,n} & t_{5,n} & t_{6,n} \\ t_{7,n} & t_{8,n} & t_{9,n} \end{bmatrix},   (7)

the Fourier space (\hat{u}, \hat{v}) of the parallel coordinates is mapped onto the Fourier space (u_n, v_n) of the tilted coordinates by

u_n = \alpha_n(\hat{u}_n, \hat{v}_n) = t_{1,n}\hat{u}_n + t_{2,n}\hat{v}_n + t_{3,n}\hat{w}(\hat{u}_n, \hat{v}_n),
v_n = \beta_n(\hat{u}_n, \hat{v}_n) = t_{4,n}\hat{u}_n + t_{5,n}\hat{v}_n + t_{6,n}\hat{w}(\hat{u}_n, \hat{v}_n),   (8)

where \hat{w}(\hat{u}_n, \hat{v}_n) = (\lambda^{-2} - \hat{u}_n^2 - \hat{v}_n^2)^{1/2} is the Fourier frequency on the \hat{z}_n axis. Hence, the spectrum in the tilted coordinates is mapped onto the Fourier space of the parallel coordinates as

\hat{H}_n(\hat{u}_n, \hat{v}_n) = H_n(\alpha_n(\hat{u}_n, \hat{v}_n), \beta_n(\hat{u}_n, \hat{v}_n)).   (9)
This mapping moves the center of the spectrum far from the origin of the parallel Fourier space, which reflects the fact that the polygon field does not travel perpendicularly to the hologram plane, as described in the preceding section. The spectrum \hat{H}_n(\hat{u}_n, \hat{v}_n) for the parallel coordinates should therefore be shifted to change the direction of the polygon field as follows:
\hat{H}_n(\hat{u}_n, \hat{v}_n) \cong H_n(\alpha_n(\hat{u}_n, \hat{v}_n) - u_n^{(0)}, \beta_n(\hat{u}_n, \hat{v}_n) - v_n^{(0)}),   (10)
where u_n^{(0)} and v_n^{(0)} are shift values that move the center of the spectrum to a position near the origin of the Fourier space (\hat{u}_n, \hat{v}_n). For most polygons, the appropriate shift values are
u_n^{(0)} = \alpha_n(0, 0) = t_{3,n}/\lambda,   (11)
v_n^{(0)} = \beta_n(0, 0) = t_{6,n}/\lambda,   (12)
because these values reset the carrier frequency to zero, and thus the polygon fields are forced to travel precisely toward the hologram [8]. There are, however, a few exceptions: for polygons whose normal vectors are almost parallel to the hologram, the shift values need fine adjustment. Note that the nearly-equal sign in Eq. (10) implies that an interpolation is required, because the coordinate rotation not only moves the origin but also distorts the sampling grid [16].
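A minimal sketch of the remapping and carrier shift of Eqs. (8)-(12) follows; nearest-neighbor lookup stands in for the proper interpolation of Ref. [16], evanescent frequencies are simply discarded, and all grid sizes, pitches, and names are illustrative assumptions.

```python
import numpy as np

def remap_spectrum(H, T_inv, wavelength, pitch):
    """Map the tilted-plane spectrum H(u_n, v_n) onto the parallel-plane
    Fourier grid, shifting the carrier so the field travels along z-hat."""
    n = H.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))   # Fourier frequencies
    u_hat, v_hat = np.meshgrid(f, f)
    w2 = wavelength**-2 - u_hat**2 - v_hat**2         # w-hat^2 of Eq. (8)
    valid = w2 > 0                                    # propagating waves only
    w_hat = np.sqrt(np.where(valid, w2, 0.0))
    t = T_inv
    alpha = t[0, 0] * u_hat + t[0, 1] * v_hat + t[0, 2] * w_hat
    beta  = t[1, 0] * u_hat + t[1, 1] * v_hat + t[1, 2] * w_hat
    # carrier shift of Eqs. (11)-(12): u0 = alpha(0, 0), v0 = beta(0, 0)
    u0, v0 = t[0, 2] / wavelength, t[1, 2] / wavelength
    # nearest-neighbor lookup of H at (alpha - u0, beta - v0), Eq. (10)
    df = f[1] - f[0]
    iu = np.rint((alpha - u0 - f[0]) / df).astype(int)
    iv = np.rint((beta - v0 - f[0]) / df).astype(int)
    ok = valid & (iu >= 0) & (iu < n) & (iv >= 0) & (iv < n)
    out = np.zeros_like(H)
    out[ok] = H[iv[ok], iu[ok]]
    return out
```

For an untilted polygon (T_inv equal to the identity) the mapping and the shift both degenerate to the identity, so the spectrum passes through unchanged, which is a convenient sanity check.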

The object field is a superposition of the polygon fields in a plane parallel to the hologram. This parallel plane is referred to as the object plane, as shown in Fig. 1. Since the \hat{z} positions of the polygons are not the same, the polygon fields should be translationally propagated backward or forward along the \hat{z} axis before superposition. We use the angular spectrum method [18] for translational propagation. Supposing that the object plane is placed at \hat{z}_{obj} and the origin of the local coordinates is positioned at (\hat{x}_n^{(0)}, \hat{y}_n^{(0)}, \hat{z}_n^{(0)}), translational propagation in the Fourier space is given as

\hat{H}_n(\hat{u}_n, \hat{v}_n; \hat{z}_{obj}) = \hat{H}_n(\hat{u}_n, \hat{v}_n) \exp[i 2\pi \hat{w}(\hat{u}_n, \hat{v}_n)(\hat{z}_{obj} - \hat{z}_n^{(0)})].   (13)
Polygon fields in the object plane are calculated by the inverse Fourier transform:
\hat{h}_n(\hat{x}_n, \hat{y}_n; \hat{z}_{obj}) = \mathcal{F}^{-1}\{\hat{H}_n(\hat{u}_n, \hat{v}_n; \hat{z}_{obj})\}.   (14)
The amplitude images of the polygon fields of the triangular prism are shown in Fig. 2c.
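The translational step of Eqs. (13)-(14) can be sketched with a standard angular spectrum propagator; the band limitation (discarding evanescent components) and the sampling values used below are our own illustrative choices, not parameters from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, distance, wavelength, pitch):
    """Propagate a sampled wave field by `distance` between parallel
    planes, per the angular spectrum method of Ref. [18]."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    u, v = np.meshgrid(f, f)
    w2 = wavelength**-2 - u**2 - v**2
    w = np.sqrt(np.maximum(w2, 0.0))          # evanescent terms suppressed
    # transfer function of Eq. (13): exp[i 2 pi w (z_obj - z_n0)]
    kernel = np.exp(1j * 2 * np.pi * w * distance) * (w2 > 0)
    return np.fft.ifft2(np.fft.fft2(field) * kernel)
```

Within the propagating band the transfer function is unitary, so propagating forward and then backward by the same distance recovers the original field, which is an easy way to validate an implementation.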

Finally, the object field is given by superposition of all polygon fields in the object plane as follows:

h(\hat{x}, \hat{y}; \hat{z}_{obj}) = \sum_{n=1}^{N_p} \hat{h}_n(\hat{x} - \hat{x}_n^{(0)}, \hat{y} - \hat{y}_n^{(0)}; \hat{z}_{obj}),   (15)
where N_p is the number of polygons. The object field h(\hat{x}, \hat{y}; \hat{z}_{obj}) of the triangular prism is shown as an amplitude image in Fig. 2d. Note that the object plane is not the same as the hologram plane; the object plane should be placed so that it slices the object at its center, as shown in Fig. 1. The object field in the hologram plane, h(\hat{x}, \hat{y}; 0), is computed by translational propagation of the object field from the object plane.

The reason why the object field is not calculated directly in the hologram plane, with translational propagation used instead, is that calculating the object field in the object plane is faster. Since the polygons are closer to the object plane than to the hologram plane, the polygon fields diffract less before reaching the object plane; in other words, the area over which each polygon field must be calculated is much smaller than in the hologram plane. As a result, the object field can be computed much faster in the object plane.

Furthermore, there is another significant reason: light shielding of the wallpaper field is necessary to avoid a phantom image of the 3D object, as explained in Section 5.

4. Segmentalized Computation of Object Fields by the Polygon-Based Method

4A. Segmented Frame Buffers

The wave field of Eq. (15) is a distribution of complex-valued amplitudes and is generally treated in the numerical implementation as a sampled 2D array of pairs of floating-point variables. This 2D array is referred to as a frame buffer. The memory sizes required for the frame buffers of the surface function h_n(x_n, y_n) and the parallel local field \hat{h}_n(\hat{x}_n, \hat{y}_n; \hat{z}_{obj}) depend directly on the dimensions of the polygon. These are generally much smaller than the memory size for the object field when the object is constructed of curved surfaces, because curved surfaces are composed of many small polygons. In contrast, the memory size for the object field h(\hat{x}, \hat{y}; \hat{z}_{obj}) is excessively large.

Since two single-precision floating-point values represent a complex value in our implementation, the memory size necessary for a frame buffer of N sampling points is 8N bytes. Therefore, the memory required for storing the whole object field in the object plane is 32 Gbytes, because our CGH has 2^{16} \times 2^{16} pixels. We note that this estimate is for a single frame buffer only; other memory required for running the program, such as the operating-system area, code area, temporary frame buffers, I/O buffers, and so on, is not included. As a result, segmentation of the frame buffer is essential for producing the Venus CGH on ordinary personal computers (PCs). The segmented structure of the frame buffer for the object field is shown in Fig. 3. The frame buffer is divided into identical rectangular segments. Only a few segments are loaded in memory at a time; the others are saved as files on a secondary storage medium such as a hard disk drive.
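The bookkeeping behind such a buffer can be illustrated with a toy class that keeps only requested segments in memory and spills the rest to disk; the file layout, class name, and .npy format are our own illustrative choices, not the paper's implementation.

```python
import os
import tempfile
import numpy as np

class SegmentedFrameBuffer:
    """Toy segmented frame buffer in the spirit of Fig. 3.
    A full 2**16 x 2**16 buffer of complex64 samples (8 bytes each)
    would need 32 Gbytes, hence the spill-to-disk design."""

    def __init__(self, segments, seg_shape, directory=None):
        self.segments = segments        # (rows, cols) of the segment grid
        self.seg_shape = seg_shape      # pixels per segment
        self.dir = directory or tempfile.mkdtemp()

    def _path(self, p, q):
        return os.path.join(self.dir, f"seg_{p}_{q}.npy")

    def load(self, p, q):
        """Load segment (p, q), or a fresh zero field if never written."""
        if os.path.exists(self._path(p, q)):
            return np.load(self._path(p, q))
        return np.zeros(self.seg_shape, dtype=np.complex64)

    def save(self, p, q, field):
        """Write segment (p, q) back to secondary storage."""
        np.save(self._path(p, q), field.astype(np.complex64))
```

A processor accumulates polygon fields into one loaded segment, saves it, and moves on, so resident memory stays bounded by the number of processors times the segment size.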

4B. Computation of Object Field by Using the Segmented Frame Buffer

The polygon fields stored in the small frame buffers are numerically propagated to the object plane, and then all the polygon fields are gathered and summed as described in Eqs. (13)-(15). The segmented frame buffer is used for the summation in the object plane. Polygon fields are summed into the few segments loaded in memory. When the summation process for a segment is finished, the segment is saved to its file; after the segment is purged from memory, the next segment is loaded and processed. This procedure is parallelized; i.e., each processor handles one segment, and multiple segments are processed simultaneously by multiple processors sharing the memory. Therefore, the number of segments simultaneously processed in memory equals the number of processors used for computation.

In the summation process, discrimination of polygons is important for avoiding unnecessary computations of the polygon fields. Polygons whose fields do not touch the current segment should be culled when computing the object field of that segment. For example, the field of polygon P_1 in Fig. 3 should be computed and accumulated in the segment (2, 1) processed by CPU3, but polygons P_2 and P_3 should not be processed by CPU3. This discrimination is achieved by determining the maximum diffraction area of each polygon.

4C. Estimation of Maximum Diffraction Area of a Polygon

The Venus object is composed of 718 polygons. Since the processors scan all the polygons for each segment every time, discrimination should be made as fast as possible. It is not easy to calculate the exact diffraction area of a polygon in the object plane, and it is unnecessary for discrimination. We use a simple approach to estimate the maximum diffraction area as shown in Fig. 4. The maximum diffraction area of the vertex of a polygon can be obtained from the maximum diffraction angle given by

\theta_x = \sin^{-1}\left(\frac{\lambda}{2\Delta x}\right), \qquad \theta_y = \sin^{-1}\left(\frac{\lambda}{2\Delta y}\right),   (16)
where \Delta x and \Delta y are the sampling pitches of the hologram. The field emitted from the vertex spreads in the form of a rectangular pyramid with apex angles of \theta_x and \theta_y and forms a rectangle in the object plane. We define the maximum diffraction area of the polygon as the bounding box of the diffraction rectangles of its vertices in the object plane. This is not the same as the exact diffraction area of the polygon, but it serves the purpose of polygon discrimination.
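The bounding-box estimate of Fig. 4 can be sketched as follows; treating each vertex's diffraction cone as symmetric about the plane normal with half-angles from Eq. (16) is our simplifying assumption, and the parameter names are illustrative.

```python
import numpy as np

def max_diffraction_area(vertices, z_obj, wavelength, dx, dy):
    """Bounding box (xmin, xmax, ymin, ymax) in the object plane of the
    diffraction rectangles cast by each vertex (x, y, z) of a polygon."""
    # maximum diffraction angles of Eq. (16)
    tan_x = np.tan(np.arcsin(wavelength / (2 * dx)))
    tan_y = np.tan(np.arcsin(wavelength / (2 * dy)))
    xs, ys = [], []
    for x, y, z in vertices:
        r = abs(z_obj - z)          # distance from vertex to object plane
        xs += [x - r * tan_x, x + r * tan_x]
        ys += [y - r * tan_y, y + r * tan_y]
    return min(xs), max(xs), min(ys), max(ys)
```

A polygon is culled for a segment whenever this box does not intersect the segment's rectangle, which is a cheap interval test per axis.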

4D. Numerical Propagation by Using Segmented Frame Buffers

Wave fields are propagated numerically using the shifted Fresnel method [14]. The method is suited for propagation by using segmented frame buffers, because the sampling area in the destination plane can be shifted from the sampling area in the source plane, as shown in Fig. 5.

Suppose that the wave field h_{p,q}(\hat{x}, \hat{y}; \hat{z}_{dest}) in a segment (p, q) of the destination plane is computed by numerical propagation of the wave field in the source plane at \hat{z} = \hat{z}_{source}. If the destination plane at \hat{z} = \hat{z}_{dest} is sufficiently far from the source plane, the wave fields in all segments of the source plane contribute to the wave field in the segment (p, q) of the destination plane. Therefore, the wave field in the segment (p, q) is computed by

h_{p,q}(\hat{x}, \hat{y}; \hat{z}_{dest}) = \sum_{s,t} \mathcal{P}_{\hat{z}_{dest} - \hat{z}_{source}}\{h_{s,t}(\hat{x}, \hat{y}; \hat{z}_{source})\},   (17)

where the symbol \mathcal{P}_d\{\cdot\} represents field propagation over the distance d by the shifted Fresnel method.
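The accumulation structure of Eq. (17) can be sketched independently of the propagator itself; here `propagate` is a caller-supplied stand-in for the shifted Fresnel operator of Ref. [14] (implementing that operator is beyond this sketch), and the dictionary layout is our own illustrative choice.

```python
import numpy as np

def propagate_segmented(source_segments, distance, propagate):
    """Eq. (17): every source segment (s, t) contributes to each
    destination segment (p, q) through a shift-capable propagator.
    `source_segments` maps (s, t) -> 2-D complex array."""
    keys = sorted(source_segments)
    dest = {}
    for pq in keys:                  # destination segments, one at a time
        acc = np.zeros_like(next(iter(source_segments.values())))
        for st in keys:              # all source segments contribute
            acc += propagate(source_segments[st], distance, src=st, dst=pq)
        dest[pq] = acc
    return dest
```

Because each destination segment is finished and can be written out before the next begins, only one accumulator needs to reside in memory per processor, mirroring the segmented frame buffer of Subsection 4A.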

5. Light-Shielding by Silhouette Method

5A. Generation of Wallpaper Field

The wallpaper is a planar object given by a 256 × 256 pixel texture. To give the texture image the appearance of wallpaper that is perpendicular to the optical axis and has the same dimensions as the hologram, the texture image is oversampled with the same sampling pitches as the hologram. The oversampled texture image is regarded as an amplitude distribution a_{wp}(\hat{x}, \hat{y}) of the wave field and is handled by the same segmented frame buffer as the object field. We multiply the oversampled texture image by the same type of diffusive phase as in Eq. (4) to give diffusiveness to the wallpaper. Hence, the wallpaper field is given by

h_{wp}(\hat{x}, \hat{y}; \hat{z}_{wp}) = a_{wp}(\hat{x}, \hat{y}) \exp[i\phi(\hat{x}, \hat{y})],   (18)
where \hat{z}_{wp} is the \hat{z} position of the wallpaper.

5B. Silhouette Masking

The wallpaper field generated according to Eq. (18) should be numerically propagated to the hologram plane and superimposed on the object field, which is also propagated to the hologram plane. However, if the wallpaper field is simply propagated and added to the object field, the Venus object is reconstructed as a phantom image, because simple superposition implies that light is transmitted through the Venus object. To shield the wallpaper behind the Venus object and prevent the Venus from becoming a phantom image, the propagation process is divided into two stages, as shown in Fig. 6. The wallpaper field is first propagated to the object plane and then masked with the silhouette of the Venus object, as shown in Fig. 6a. The silhouette mask a_{sil}(\hat{x}, \hat{y}) is simply obtained by orthogonal projection of the polygons forming the object onto the object plane as follows:

a_{sil}(\hat{x}, \hat{y}) = \begin{cases} 0 & \text{inside any projected polygon} \\ 1 & \text{outside all projected polygons.} \end{cases}   (19)
The object field h(\hat{x}, \hat{y}; \hat{z}_{obj}) shown in Fig. 6b is superimposed on the masked wallpaper field. Thus, the object field combined with the wallpaper field is represented as
h'(\hat{x}, \hat{y}; \hat{z}_{obj}) = a_{sil}(\hat{x}, \hat{y}) h_{wp}(\hat{x}, \hat{y}; \hat{z}_{obj}) + h(\hat{x}, \hat{y}; \hat{z}_{obj}),   (20)
where h_{wp}(\hat{x}, \hat{y}; \hat{z}_{obj}) is the wallpaper field propagated onto the object plane. The combined field, shown in Fig. 6c, is finally propagated to the hologram plane, and the field of the whole 3D scene, h'(\hat{x}, \hat{y}; 0), is computed.
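The masking and superposition step of Eqs. (19)-(20) is a one-line array operation once the projected occupancy of the object is known; the arrays and names below are illustrative.

```python
import numpy as np

def combine_with_silhouette(object_field, wallpaper_field_obj, object_mask):
    """Eqs. (19)-(20): mask the wallpaper field in the object plane by the
    object's silhouette, then add the object field. `object_mask` is True
    where the orthogonal projection of the object covers the plane, so
    a_sil is its complement."""
    a_sil = np.where(object_mask, 0.0, 1.0)
    return a_sil * wallpaper_field_obj + object_field
```

Inside the silhouette the wallpaper contribution is zeroed before superposition, which is exactly what suppresses the phantom image of the Venus.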

6. Creation of “The Venus”

6A. Producing Fringe Pattern

The fringe intensity I(\hat{x}, \hat{y}) of the CGH is generated by numerical interference of the combined field h'(\hat{x}, \hat{y}; 0) with a reference field r(\hat{x}, \hat{y}) in the hologram plane as follows:

I(\hat{x}, \hat{y}) = |h'(\hat{x}, \hat{y}; 0) + r(\hat{x}, \hat{y})|^2 \cong 2\,\mathrm{Re}[h'(\hat{x}, \hat{y}; 0)\, r^*(\hat{x}, \hat{y})] + B,   (21)

where the constant B is an offset that keeps the fringe intensity positive. A plane wave with an incidence angle of 9.1° is used as the reference field of the Venus CGH. The fringe actually printed is a binary pattern because of the fabrication method used in this investigation; thus, the offset B is set to zero in practice, and I(\hat{x}, \hat{y}) is quantized with a zero threshold.
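The binarized fringe of Eq. (21) can be sketched as follows; the 9.1 degree incidence comes from the text, while the in-plane orientation of the reference wave, the sampling pitch, and the function name are our own illustrative assumptions.

```python
import numpy as np

def binary_fringe(field, wavelength, pitch, angle_deg=9.1):
    """Binary amplitude fringe from interference of `field` with a plane
    reference wave tilted by `angle_deg` in the x-z plane, per Eq. (21)."""
    n = field.shape[0]
    x = np.arange(n) * pitch
    # reference plane wave r(x) with carrier frequency sin(theta)/lambda
    r = np.exp(1j * 2 * np.pi * np.sin(np.radians(angle_deg)) / wavelength * x)
    # bipolar intensity 2 Re[h r*] up to a factor, with offset B = 0
    bipolar = np.real(field * np.conj(r))
    return (bipolar > 0).astype(np.uint8)   # zero-threshold binarization
```

When the object field equals the reference wave itself, the bipolar intensity is positive everywhere and the fringe is all ones, which is a quick sanity check of the sign convention.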

The DWL 66 laser lithography system made by Heidelberg Instruments GmbH was used to print the fringe pattern onto a photoresist coated on a chromium thin film on a fused-silica substrate. After development of the photoresist, the chromium thin film was etched to form the binary transmittance pattern. As a result, the fabricated CGH has a fringe of binary amplitude.

6B. Optical Reconstruction

Photographs of the optical reconstruction of the fabricated hologram are shown in Fig. 7. A He-Ne laser is used as the light source for transmission reconstruction. The photographs in Figs. 7a-7d, captured from various viewpoints, verify that the CGH reconstructs the 3D scene, because the occlusion between the Venus and the wallpaper changes as the viewpoint changes. This motion parallax creates a strong sensation of depth in the viewer's perception.

The fabricated CGH can be reconstructed as a reflection hologram as well as a transmission hologram because of the high reflectance of the chromium thin film. Photographs of optical reconstruction in the reflection mode are shown in Fig. 8. An ordinary red LED is used as the light source for reflection. The CGH again gives a good reconstruction. However, the dimensions of the reconstructed objects are somewhat reduced in this case, because the LED acts as a point source of light, whereas a plane wave was assumed as the reference wave.

7. Discussion

7A. Total Computation Time and the Bottleneck

The software for computing the CGH was implemented by using the Intel C++ Compiler 10 and Visual Studio 2008. The Intel Math Kernel Library was used as the FFT package. Computation of the Venus CGH was executed on a PC with four AMD Opteron 852 (2.6 GHz) processors and 32 Gbytes of shared memory. The total computation time was approximately 45 h; the itemized computation times are shown in Fig. 9. The longest time was consumed by the numerical propagation using the shifted Fresnel method and the segmented frame buffer, which took 37 h and accounted for 82% of the total computation time.

Since the computation time of a 2D FFT is proportional to 2N \log_2 N for N sampling points, the computation time of the shifted Fresnel method for a wave field sampled at N (= N_x \times N_y) points without segmentation is estimated as

T(N) \cong 24N(2 + \log_2 N),   (22)

because the method requires three FFT operations on 4N points. When the frame buffer is divided into M (= M_x \times M_y) segments, the computation time required for propagation of all segments from source to destination is proportional to T(N/M) M^2. In this case, however, one FFT per segment can be omitted by using a precomputed N-point table [19]. Therefore, the computation time with segmented frame buffers is given approximately by
T'(N/M) M^2 \cong 16NM(2 + \log_2 N - \log_2 M),   (23)
where T'(N) \cong 16N(2 + \log_2 N) is the time required for propagation of an N-point wave field by using the precomputed table. As a rough estimate, the computation time behaves as T'(N/M) M^2 \propto M, because N = 2^{32} whereas M = 2^5 in the Venus CGH. Therefore, the computation time is directly proportional to the number of segments. Decreasing the number of segments by increasing the memory installed in the PC is the most effective way to reduce the computation time.
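The estimates of Eqs. (22) and (23) are easy to evaluate numerically for the Venus parameters; the operation-count constants follow the text, and the small helper names are our own.

```python
import numpy as np

def t_unsegmented(N):
    """Eq. (22): three FFTs on 4N points, unsegmented shifted Fresnel."""
    return 24 * N * (2 + np.log2(N))

def t_segmented(N, M):
    """Eq. (23): M^2 segment-to-segment propagations of N/M points,
    with one FFT per segment replaced by a precomputed table [19]."""
    return 16 * N * M * (2 + np.log2(N) - np.log2(M))

N, M = 2.0**32, 2.0**5          # Venus CGH: 2^32 samples, 32 segments
ratio = t_segmented(N, M) / t_unsegmented(N)   # about 18 for these values
```

Doubling M nearly doubles the segmented cost, confirming the text's observation that the computation time grows roughly linearly with the number of segments.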

7B. Computation Time of Object Fields

The computation time of the object field is short compared with that of numerical propagation in creating the Venus CGH. However, when the number of polygons increases, as is needed to create more complex objects, or the number of segments decreases as a result of increasing the memory size installed in the PC, the computation time of the object field cannot be ignored. The computation time is given by a summation of the computation times of the polygon fields. The computation time of a polygon field is generally given by

T_{polygon} = T_{FFT1} + T_{interpol} + T_{prop} + T_{FFT2},   (24)
where the terms on the right-hand side are the computation times of the FFT of the surface function, the interpolation, the translational propagation by the angular spectrum method, and the inverse FFT of the spectrum of the polygon field in the object plane, respectively.

The computation times of the FFTs are given by

T_{FFT1}(N_{surf}) \propto 2 N_{surf} \log_2 N_{surf},   (25)
T_{FFT2}(N_{field}) \propto 2 N_{field} \log_2 N_{field},   (26)

where N_{surf} and N_{field} are the numbers of sampling points of the surface function and of the polygon field in the object plane, respectively. The value of N_{field} is determined by the maximum diffraction area of the polygon in the object plane, shown in Fig. 4. When the object plane is sufficiently close to the polygon, the number of sampling points of the surface function is nearly the same as that of the polygon field, i.e., N_{surf} \cong N_{field}. The computation times for the interpolation and the translational propagation are also governed by N_{field} as follows:
T_{interpol}(N_{field}) \propto N_{field},   (27)
T_{prop}(N_{field}) \propto N_{field}.   (28)

As a result, the computation time of the polygon fields is determined by N_{field}, i.e., the number of sampling points within the maximum diffraction area of the polygon in the object plane. This is the reason why the object plane should be placed close to the polygons. We note that when the number of polygons of an object increases but the dimensions of the object do not change, the total computation time does not increase much, because an increase in the number of polygons leads to a reduction in the size of each polygon, and thus to a decrease in each value of N_{field}.

7C. Limitations of Silhouette Masking in the Venus CGH

The silhouette method adopted for the creation of the Venus CGH is a wave-optical algorithm for hidden-surface removal in CGHs. It is one of the lowest-order approximations of light shielding by obstacles [13]. This type of silhouette method shields only the light that intersects a planar region given by the silhouette of the object in the object plane. Therefore, light that does not intersect the planar silhouette region but does intersect the 3D region of the Venus object is not shielded. This means that the silhouette method does not work well if a viewer observes the Venus from the side. However, a viewer cannot take a side view of the Venus in our CGH, because we place the Venus 15 cm behind the hologram; this is a constraint on the configuration of the 3D scene. In addition, this type of silhouette method cannot handle the self-occlusion of objects; i.e., self-occluded objects such as a torus may be reconstructed as partial phantom images.

Another type of silhouette approximation has been proposed [12]. In that method, light behind the 3D object is shielded not by the full silhouette of the object but by the individual silhouettes of the polygons. This type of silhouette method can correctly reconstruct self-occluded objects. However, it requires as many numerical propagations as there are polygons; therefore, it is not practical for high-definition CGHs such as our Venus. More rigorous light shielding by polygon surfaces without the silhouette approximation has also been proposed [13], but that method is much more costly in computation time and is not applicable to a large-scale CGH.

8. Conclusion

A large-scale full-parallax CGH with 4 G pixels, named “The Venus,” was created by using the polygon-based and silhouette methods. The polygon-based method numerically generates the object fields in a shorter computation time than conventional point-based methods, while the silhouette method makes it possible to reconstruct the occluded scene. These methods had previously suffered from the problem that the number of pixels is limited to a value dependent on the memory installed in the PC, because numerical propagation of wave fields is necessary in both methods. To overcome this limitation, the segmented frame buffer technique and the shifted Fresnel method for numerical propagation are used in this research. These techniques also reduce computation time by making it easy to introduce highly parallel processing. The Venus CGH was fabricated by using a laser lithography system. The optical reconstruction gives a fine true 3D image of the occluded scene, which creates a strong sensation of depth owing to the motion parallax.

On the other hand, the shifted Fresnel method adopted in this investigation imposes a constraint on the configuration of the 3D scene: a severe aliasing error occurs when the wave field of a large object is numerically propagated over a short distance. Future work will involve the development of a new method for numerical propagation that can shift the sampling area and is applicable over a wide range of propagation distances.
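The short-distance aliasing constraint can be made concrete with a back-of-the-envelope check. For single-FFT Fresnel-type propagation, the quadratic chirp exp[iπ(x² + y²)/λz] must be sampled without aliasing, which requires roughly z ≥ NΔ²/λ for N samples of pitch Δ along one axis. The pitch below is an illustrative assumption, not the value from Table 1:

```python
wavelength = 633e-9   # He-Ne wavelength used for the optical reconstruction
pitch = 1.0e-6        # pixel pitch along one axis, illustrative assumption
n = 2**16             # samples per axis (the Venus CGH is 2^16 x 2^16 pixels)

# Rough alias-free condition for the quadratic chirp of a
# single-FFT Fresnel propagation: z >= n * pitch**2 / wavelength.
z_min = n * pitch**2 / wavelength
print(f"minimum alias-free propagation distance: {z_min * 100:.1f} cm")
# prints "minimum alias-free propagation distance: 10.4 cm"
```

With these illustrative numbers the minimum distance is on the order of 10 cm, which shows why a method able to shift the sampling area over a wide range of distances is desirable.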

The mesh data for the Venus object is provided courtesy of INRIA by the AIM@SHAPE Shape Repository. This work was supported by JSPS KAKENHI (21500114).

Table 1. Parameters for Computing and Fabricating "The Venus" CGH

Fig. 1. 3D scene of "The Venus" and the definition of the hologram coordinate system.

Fig. 2. Procedure for producing the object field of a triangular prism by the polygon-based method. (a) Object model. (b) Surface functions in tilted local coordinates. (c) Amplitude images of polygon fields in parallel local coordinates. (d) Amplitude image of the object field.

Fig. 3. Segmentation of the frame buffer storing the object field. Note that, for easy understanding, the object plane is illustrated as a plane that does not slice the object; in the practical computation it slices the object.

Fig. 4. Estimation of the maximum diffraction area of a polygon as the bounding box of the diffraction rectangles formed by each vertex of the polygon.

Fig. 5. Numerical propagation in segmented frame buffers by the shifted Fresnel method.

Fig. 6. Two-stage propagation for light shielding by the silhouette-masking technique. (a) Silhouette-masked wave field of the wallpaper. (b) Object field. (c) Combined field. All images show amplitude.

Fig. 7. Photographs of the optical reconstruction of the fabricated CGH using transmitted illumination from a He–Ne laser. Photographs (a)–(d) are taken from different viewpoints.

Fig. 8. Photographs of the optical reconstruction of the fabricated CGH using reflected illumination from an ordinary red LED. (a) Medium shot (Media 1, Media 2). (b) Close-up (Media 3, Media 4).

Fig. 9. Itemized computation time for computing the Venus CGH.

1. M. Lucente, "Interactive computation of holograms using a look-up table," J. Electron. Imaging 2, 28–34 (1993).

2. A. Ritter, J. Böttger, O. Deussen, M. König, and T. Strothotte, "Hardware-based rendering of full-parallax synthetic holograms," Appl. Opt. 38, 1364–1369 (1999).

3. K. Matsushima and M. Takai, "Recurrence formulas for fast creation of synthetic three-dimensional holograms," Appl. Opt. 39, 6587–6594 (2000).

4. H. Yoshikawa, S. Iwase, and T. Oneda, "Fast computation of Fresnel holograms employing difference," Proc. SPIE 3956, 48–55 (2000).

5. T. Ito, N. Masuda, K. Yoshimura, A. Shiraki, T. Shimobaba, and T. Sugie, "Special-purpose computer HORN-5 for a real-time electroholography," Opt. Express 13, 1923–1932 (2005).

6. N. Masuda, T. Ito, T. Tanaka, A. Shiraki, and T. Sugie, "Computer generated holography using a graphics processing unit," Opt. Express 14, 603–608 (2006).

7. T. Shimobaba, A. Shiraki, Y. Ichihashi, N. Masuda, and T. Ito, "Interactive color electroholography using the FPGA technology and time division switching method," IEICE Electron. Express 5, 271–277 (2008).

8. K. Matsushima, "Computer-generated holograms for three-dimensional surface objects with shade and texture," Appl. Opt. 44, 4607–4614 (2005).

9. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, "Computer generated holograms from three dimensional meshes using an analytic light transport model," Appl. Opt. 47, 1567–1574 (2008).

10. H. Kim, J. Hahn, and B. Lee, "Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography," Appl. Opt. 47, D117–D127 (2008).

11. K. Matsushima and A. Kondoh, "A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects," Proc. SPIE 5290, 90–97 (2004).

12. A. Kondoh and K. Matsushima, "Hidden surface removal in full-parallax CGHs by silhouette approximation," Syst. Comput. Jpn. 38(6), 53–61 (2007).

13. K. Matsushima, "Exact hidden-surface removal in digitally synthetic full-parallax holograms," Proc. SPIE 5742, 25–32 (2005).

14. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, "Shifted Fresnel diffraction for computational holography," Opt. Express 15, 5631–5640 (2007).

15. K. Matsushima, H. Schimmel, and F. Wyrowski, "Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves," J. Opt. Soc. Am. A 20, 1755–1762 (2003).

16. K. Matsushima, "Formulation of the rotational transformation of wave fields and their application to digital holography," Appl. Opt. 47, D110–D116 (2008).

17. R. Bräuer, F. Wyrowski, and O. Bryngdahl, "Diffusers in digital holography," J. Opt. Soc. Am. A 8, 572–578 (1991).

18. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), Chap. 3.10.

19. D. H. Bailey and P. N. Swarztrauber, "The fractional Fourier transform and applications," SIAM Rev. 33, 389–404 (1991).

Supplementary Material (4)

Media 1: MOV (3737 KB)     
Media 2: MOV (14206 KB)     
Media 3: MOV (3897 KB)     
Media 4: MOV (14569 KB)     




Equations (28)

$\hat{\mathbf{r}}_n = \mathbf{T}_n \mathbf{r}_n, \quad \mathbf{r}_n = \mathbf{T}_n^{-1} \hat{\mathbf{r}}_n$

$\mathbf{T}_n = \cos\theta_n\, \mathbf{I} + (1 - \cos\theta_n)\, \mathbf{s}_n^{t} \mathbf{s}_n + \sin\theta_n\, [\mathbf{s}_n]_{\times}$

$\mathbf{s}_n = \hat{\mathbf{z}} \times \mathbf{n}_n$

$h_n(x_n, y_n) = a_n(x_n, y_n) \exp[i \phi(x_n, y_n)]$

$a_n(x_n, y_n) = \begin{cases} A_n & \text{inside polygon} \\ 0 & \text{outside polygon} \end{cases}$

$H_n(u_n, v_n) = \mathcal{F}\{h_n(x_n, y_n)\}$

$\mathbf{T}_n^{-1} = \begin{bmatrix} t_{1,n} & t_{2,n} & t_{3,n} \\ t_{4,n} & t_{5,n} & t_{6,n} \\ t_{7,n} & t_{8,n} & t_{9,n} \end{bmatrix}$

$u_n = \alpha_n(\hat{u}_n, \hat{v}_n) = t_{1,n}\hat{u}_n + t_{2,n}\hat{v}_n + t_{3,n}\hat{w}(\hat{u}_n, \hat{v}_n), \quad v_n = \beta_n(\hat{u}_n, \hat{v}_n) = t_{4,n}\hat{u}_n + t_{5,n}\hat{v}_n + t_{6,n}\hat{w}(\hat{u}_n, \hat{v}_n)$

$\hat{H}_n(\hat{u}_n, \hat{v}_n) = H_n(\alpha_n(\hat{u}_n, \hat{v}_n),\ \beta_n(\hat{u}_n, \hat{v}_n))$

$\hat{H}_n(\hat{u}_n, \hat{v}_n) \cong H_n(\alpha_n(\hat{u}_n, \hat{v}_n) - u_n^{(0)},\ \beta_n(\hat{u}_n, \hat{v}_n) - v_n^{(0)})$

$u_n^{(0)} = \alpha_n(0, 0) = t_{3,n}/\lambda$

$v_n^{(0)} = \beta_n(0, 0) = t_{6,n}/\lambda$

$\hat{H}_n(\hat{u}_n, \hat{v}_n; \hat{z}_{\mathrm{obj}}) = \hat{H}_n(\hat{u}_n, \hat{v}_n) \exp[i 2\pi \hat{w}(\hat{u}_n, \hat{v}_n)(\hat{z}_{\mathrm{obj}} - \hat{z}_n^{(0)})]$

$\hat{h}_n(\hat{x}_n, \hat{y}_n; \hat{z}_{\mathrm{obj}}) = \mathcal{F}^{-1}\{\hat{H}_n(\hat{u}_n, \hat{v}_n; \hat{z}_{\mathrm{obj}})\}$

$h(\hat{x}, \hat{y}; \hat{z}_{\mathrm{obj}}) = \sum_{n=1}^{N_p} \hat{h}_n(\hat{x}_n - \hat{x}_n^{(0)}, \hat{y}_n - \hat{y}_n^{(0)}; \hat{z}_{\mathrm{obj}})$

$\theta_x = \sin^{-1} \dfrac{\lambda}{2 \Delta x}, \quad \theta_y = \sin^{-1} \dfrac{\lambda}{2 \Delta y}$

$h_{p,q}(\hat{x}, \hat{y}; \hat{z}_{\mathrm{dest}}) = \sum_{s,t} \mathcal{P}_{\hat{z}_{\mathrm{dest}} - \hat{z}_{\mathrm{source}}}\{h_{s,t}(\hat{x}, \hat{y}; \hat{z}_{\mathrm{source}})\}$

$h_{\mathrm{wp}}(\hat{x}, \hat{y}; \hat{z}_{\mathrm{wp}}) = a_{\mathrm{wp}}(\hat{x}, \hat{y}) \exp[i \phi(\hat{x}, \hat{y})]$

$a_{\mathrm{sil}}(\hat{x}, \hat{y}) = \begin{cases} 0 & \text{inside any projected polygon} \\ 1 & \text{outside any projected polygon} \end{cases}$

$h'(\hat{x}, \hat{y}; \hat{z}_{\mathrm{obj}}) = a_{\mathrm{sil}}(\hat{x}, \hat{y})\, h_{\mathrm{wp}}(\hat{x}, \hat{y}; \hat{z}_{\mathrm{obj}}) + h(\hat{x}, \hat{y}; \hat{z}_{\mathrm{obj}})$

$I(\hat{x}, \hat{y}) = |h(\hat{x}, \hat{y}; 0) + r(\hat{x}, \hat{y})|^2 \cong h(\hat{x}, \hat{y}; 0)\, r(\hat{x}, \hat{y})^{*} + B$

$T(N) \cong 24 N (2 + \log_2 N)$

$T(N, M) \cong M^2 \cdot 16 \dfrac{N}{M} (2 + \log_2 N - \log_2 M)$

$T_{\mathrm{polygon}} = T_{\mathrm{FFT1}} + T_{\mathrm{interpol}} + T_{\mathrm{prop}} + T_{\mathrm{FFT2}}$

$T_{\mathrm{FFT1}}(N_{\mathrm{surf}}) \propto 2 N_{\mathrm{surf}} \log_2 N_{\mathrm{surf}}$

$T_{\mathrm{FFT2}}(N_{\mathrm{field}}) \propto 2 N_{\mathrm{field}} \log_2 N_{\mathrm{field}}$

$T_{\mathrm{interpol}}(N_{\mathrm{field}}) \propto N_{\mathrm{field}}$

$T_{\mathrm{prop}}(N_{\mathrm{field}}) \propto N_{\mathrm{field}}$
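The segmented propagation $h_{p,q} = \sum_{s,t} \mathcal{P}\{h_{s,t}\}$ amounts to a double loop over destination and source segments, accumulating the propagated field of every source segment into every destination segment. The sketch below shows only that bookkeeping; `propagate_shifted` is a hypothetical placeholder standing in for the shifted Fresnel operator, not the paper's implementation.

```python
import numpy as np

def propagate_segmented(src_segments, propagate_shifted):
    """Accumulate every propagated source segment into every destination
    segment, as in h_{p,q} = sum_{s,t} P{h_{s,t}}. `propagate_shifted`
    is assumed to implement the (shifted) propagation operator and
    receives the lateral segment offset (dp, dq) between source and
    destination segments."""
    keys = list(src_segments)
    shape = next(iter(src_segments.values())).shape
    dst = {k: np.zeros(shape, dtype=complex) for k in keys}
    for (p, q) in keys:          # destination segments
        for (s, t) in keys:      # source segments
            dst[(p, q)] += propagate_shifted(src_segments[(s, t)], p - s, q - t)
    return dst

# Toy check: a "propagator" that passes aligned segments through unchanged
# and contributes nothing off-axis reproduces the source segmentation.
segments = {(0, 0): np.ones((4, 4), complex),
            (0, 1): 2 * np.ones((4, 4), complex)}
def identity_on_axis(field, dp, dq):
    return field if (dp, dq) == (0, 0) else np.zeros_like(field)
out = propagate_segmented(segments, identity_on_axis)
```

Because each (source, destination) pair is processed independently, this structure is what makes the high-level parallel processing mentioned in the conclusion straightforward: only one source and one destination segment need to reside in memory at a time.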