
Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique


Abstract

A powerful technique is presented for occlusion processing in computer holography. The technique improves on the conventional silhouette method, a general wave-optics-based occlusion processing method, and dramatically reduces the computation time required for computer-generated holograms (CGHs) of self-occluded objects. Performance measurements show that a full-parallax high-definition CGH composed of billions of pixels and a small CGH intended for reconstruction in electro-holography can be computed in only 1.7 h and 4.5 s, respectively, without any hardware acceleration. Optical reconstruction of the high-definition CGH shows natural and continuous motion parallax in the self-occluded object.

© 2014 Optical Society of America

1. Introduction

Holography is the single technology most likely to make it possible to produce perfect 3D images containing all depth cues from a flat panel. Traditional optical holography has a long history as a 3D photographic technique. Computer holography, which creates computer-generated holograms (CGHs), also has a long history [1], but has suffered from problems associated with the enormous space-bandwidth product required to reconstruct high-quality 3D images. The problem comprises two points: first, how to compute the fringe pattern, and second, how to display or print the computed pattern. The fringe pattern is simply a 2D digital image, but its pixel pitch should be less than 1 μm to achieve a viewing angle of several tens of degrees. The display must therefore provide more than a billion pixels to reconstruct 3D images with areas of several tens of square centimeters.

The problem with the display technique has still not been resolved in electro-holography, which is intended to provide holographic motion pictures. However, the status has changed for still 3D images, i.e. still CGHs. Over the past few years, the authors have developed and reported several extremely high-definition (HD) CGHs that are composed of several billions or sometimes several tens of billions of pixels with pitches of less than 1 μm [2–7]. These HD-CGHs can be printed using commercially available laser writers and can reconstruct impressive spatial 3D images of very deep virtual scenes that have never been displayed using conventional stereoscopic 3D (S3D) devices. Because the optical reconstruction is at a level comparable to that of traditional optical holography, one of these HD-CGHs is currently on display in a museum [8]. The 3D scenes in these HD-CGHs are mainly constructed from several 3D polygon-meshed objects. The object wave of the polygon mesh is computed using a polygon-based method [9], which is a wave-oriented technique and considerably different from the point-based methods described below. The scenes can also include 2D planar objects such as digital photographs/illustrations and the wave fields of real physical objects, which are captured using synthetic aperture digital holography techniques [4].

One reason why these full-parallax HD-CGHs give a strong depth sensation to the viewer is that the CGHs reconstruct the 3D image with continuous motion parallax over a wide viewing angle, e.g. 46°. When the viewers change their viewpoints, any occlusions among the objects that compose the scene are reconstructed appropriately by the CGHs. Occlusion is well known as one of the most important cues in depth perception. Thus, occlusion reconstruction is highly important in computer holography and many other 3D techniques. To reconstruct an occlusion, one object in front of another object must hide the object to the rear. In other words, the light on the rear object must be shielded by the front object. The technique is therefore similar to hidden surface removal in conventional computer graphics (CG). However, while the viewpoint cannot be changed when seeing a still CG image, the viewpoint in computer holography can be moved even in still images. This makes hidden surface removal much more difficult in computer holography.

Hidden surface removal in computer holography, which is also called occlusion processing [10] or occlusion culling [11], is necessarily associated with the numerical techniques used to generate the object field. Most researchers prefer point-based methods, in which surfaces or sometimes wire-framed objects are expressed as large collections of point light sources [12]. These techniques are generally ray-oriented; occlusion processing is commonly interpreted as a visibility test in ray casting, which examines the existence of obstacles between a sampling point in the hologram plane and a point source of the object. If something interrupts the ray between the sampling point and the object point, the ray is removed from the computation. The visibility test is suitably efficient for horizontal parallax-only (HPO) holograms [10, 13]. Chen et al. reported a minimum computation time of 2.47 s for a 1280 × 1024 pixel hologram of an object composed of 15,306 triangles [13]. They used hardware acceleration provided by a graphics processing unit (GPU). However, HPO-CGHs lose the advantage that holography holds over other 3D techniques, which generally display only horizontal parallax.

Some researchers have also reported full-parallax versions of the visibility test-type techniques [14–18]. It is very important to reduce the number of visibility tests in this case, because testing of entire rays between all sampling points and all object points is too time-consuming and is generally impractical. Grouping techniques are commonly used for the reduction process; grouping of rays [14], grouping of sampling points [14, 16–18], grouping of object points [17, 18] and several combinations of these techniques are used. The reconstructed 3D image quality depends on the number of groupings, along with some other parameters, such as the radius of the circular block region of an object point [14] or the sizes of the rectangular patches [17]. In addition, hardware acceleration using a GPU is commonly used for the computation. As a result, the computation time ranges from several seconds to several tens of seconds, depending on the numbers of pixels and object points and on the quality parameters. However, the reported number of pixels is generally no more than several million, and the viewing angle is too narrow for viewers to perceive definite motion parallax. These techniques are therefore likely to be impractical for high-quality CGHs like our HD-CGHs.

There are other techniques that do not use any visibility test. Wakunami et al. reported a technique for exact occlusion processing in the ray-sampling (RS) plane [11]. The RS plane technique is similar to holographic stereography and is a hybrid of wave-oriented and ray-oriented techniques [19]. In this technique, the light-ray information of a unitary object is converted into the wave field, which is then propagated by numerical diffraction. Occlusion culling is performed by simply overwriting the light-ray information of the back object with that of the front object. This is a powerful technique. However, the RS plane technique itself has the disadvantage that the accommodation depth cue is lost in objects given in a single RS plane; this depth cue is normally one of the most remarkable features of holography. A technique without a visibility test has also been proposed among the point-based methods [20]. In this technique, the collection of point sources is switched depending on the viewpoint. This is a useful and practical technique, but the motion parallax is not continuous.

Occlusion processing in our full-parallax HD-CGHs is performed by multiplying the wave fields of the rear objects by an orthographic binary silhouette mask of the front object [2]. The visibility test is therefore unnecessary in this case. Even if the light behind an object includes heavy off-axis components, occlusion is properly processed because of the nature of the wave field of light. In this way, natural continuous motion parallax is successfully produced in the HD-CGHs. It should be noted that while the HD-CGHs are computed using the polygon-based method, the concept of the silhouette method itself is not restricted to the polygon-based method. Because this technique processes light as a wave field, i.e. a sampled distribution of complex amplitudes in a plane, it is well matched to similar wave-oriented techniques such as the polygon-based method and digital holography. Thus, in digitized holography, which captures the wave field of a physical object by digital holography and optically reconstructs it using an HD-CGH, occlusion is also processed via the silhouette method [4]. Unfortunately, this technique is only a simple approximation of the physical light shielding of opaque objects; small occlusion errors therefore occur in some cases. A more exact algorithm based on wave optics has already been proposed, but it requires a much greater computational effort [21]. Because the polygon-based method is itself a powerful technique for computing the wave fields of surface objects, several researchers have proposed variations [22–24] and have also reported acceleration using GPUs [24]. However, occlusion processing is either ignored [22] or performed as a depth test that simply discards rear polygons hidden by front polygons [23, 24]. The simple depth test is reasonable only in cases where the bandwidth is too narrow to produce motion parallax.

Occlusion is classified into two categories: mutual occlusion and self-occlusion. Mutual occlusion means that an object is hidden by another separate object or objects, while self-occlusion refers to the situation where part of a surface is hidden by another surface of the same object. The technique described in [11], for example, is for processing of mutual occlusions; self-occlusion in this case is intrinsically processed through the conversion from rays to wave fields. The HD-CGHs described above commonly reconstruct mutual occlusions only. The original silhouette masking technique was proposed to process both mutual and self-occlusions [25, 26]. However, to process self-occlusions, the masking process must be performed for each polygon. This is very time-consuming, especially in HD-CGHs, because the original silhouette method requires as many numerical propagations of the wave field as there are masking operations. Therefore, the silhouette masking process was performed for each object rather than for each polygon when creating the HD-CGHs. Because this type of silhouette method cannot deal with self-occlusion, self-occluded objects are reconstructed as partially transparent objects. We have therefore carefully chosen object models without any self-occlusions for HD-CGH creation to date.

In this paper, we present a novel and fast technique to perform the silhouette method for each polygon and to allow us to reconstruct the appropriate self-occlusions in full-parallax HD-CGHs. The measured performance of the proposed technique shows that the technique is not only suitable for HD-CGHs, but is also applicable to electro-holography. The concept of the original silhouette method is summarized in section 2. The principle and formulation of the new technique are presented in section 3, and the results of performance measurements are given in section 4. The HD-CGH used for the measurement was fabricated to verify the technique. We demonstrate optical reconstruction and give our conclusions for this paper in sections 5 and 6, respectively.

2. The silhouette method

2.1 Principle of object-by-object light shielding

Opaque objects hide their background, i.e. any part of the background that overlaps the object is shielded by that object, as shown in Fig. 1(a). When we view the object as shown in (a), we see the near-side front surface of the object and the part of the background that does not overlap it. If we change the viewpoint, then the visible portion of the background changes with the viewpoint.


Fig. 1 (a) Schematic illustration of light-shielding of a real opaque object, and (b) emulation of light-shielding by the silhouette method.


The silhouette method provides a simple and low-cost computer-based emulation of this physical phenomenon. Consider the background light to be expressed using the wave field u(x,y), and the (x,y) plane perpendicular to the optical axis to be placed at a position at which the cross section of the object is at a maximum, as shown in Fig. 1(b). The part of the background field that overlaps the object cannot pass through it, and the field amplitude vanishes in the area of the overlap. This is expressed by multiplying u(x,y) by a binary function M(x,y), which has values of zero inside the object silhouette and unity outside it. Consider the wave field of the object to be given by O(x,y); the final wave field that the viewer sees is then given by M(x,y)u(x,y) + O(x,y).

If the viewpoint moves, then the viewed background changes with the viewpoint, because the background field u(x,y) includes off-axis light in a specific range given by the sampling conditions of the background field.
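The masking step above amounts to a single element-wise multiplication and addition on sampled complex fields. The following NumPy sketch illustrates M(x,y)u(x,y) + O(x,y); the circular silhouette, the plane-wave background, and the object field are illustrative assumptions, not part of the method itself.

```python
import numpy as np

N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
inside = x**2 + y**2 < 40**2          # assumed circular silhouette

# Background field u(x, y): an off-axis plane wave as a stand-in.
u = np.exp(1j * 2*np.pi * 0.05 * x)

# Binary silhouette mask M(x, y): zero inside the silhouette, unity outside.
M = np.where(inside, 0.0, 1.0)

# Object field O(x, y): an arbitrary phase distribution on the object only.
O = np.where(inside, np.exp(1j * 0.01 * (x**2 + y**2)), 0.0)

# The field the viewer sees: shielded background plus the object's own light.
u_total = M*u + O

# Inside the silhouette only the object light survives; outside, only the
# unmodified background.
assert np.allclose(u_total[inside], O[inside])
assert np.allclose(u_total[~inside], u[~inside])
```

The off-axis component of u(x,y) is what makes the shielded background change with the viewpoint, as described next.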

2.2 Formulation for multiple objects

Consider multiple objects in a 3D scene of a hologram, as shown in Fig. 2(a). Here, the origin of the coordinate system is placed in the plane of the hologram and the objects are numbered in order of depth; the object furthest from the hologram is numbered n = 0. The light in the 3D scene travels approximately along the z-axis toward the hologram. Let us define the wave field u_n(x, y) in the plane (x, y, z_n). We choose the depth position z_n of the plane such that the cross-section of the object approximately reaches a maximum in the plane. These planes are referred to as the object planes in this paper.


Fig. 2 (a) Schematic illustration of object-by-object light-shielding, and (b) silhouette mask of the current object.


In the object plane (x, y, z_n), the shielded wave field is given as u_n(x, y)M_n(x, y), as described above. Here, M_n(x, y) is again the binary mask function, which is defined in the object plane (x, y, z_n) as follows:

M_n(x,y) = \begin{cases} 0 & \text{inside the orthogonal projection of object } n \\ 1 & \text{otherwise} \end{cases}    (1)
Object n also emits light. If we consider its wave field to be given by O_n(x, y) in the object plane, then the total wave field is given by u_n(x, y)M_n(x, y) + O_n(x, y). This field propagates to the next object plane, (x, y, z_{n+1}), defined for object n + 1. Therefore, the procedure for the silhouette method is expressed using a recurrence formula:
u_{n+1}(x,y) = P_{n+1,n}{ M_n(x,y) u_n(x,y) + O_n(x,y) },    (2)
where the notation P_{n+1,n}{ξ(x,y)} stands for the numerical propagation of the wave field ξ(x,y) from the plane at z_n to that at z_{n+1}. The propagation is required N times to process an entire 3D scene composed of N objects. This type of silhouette method is referred to as object-by-object (O-O) light-shielding in this paper.
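As a sketch, the recurrence of Eq. (2) can be driven by any numerical propagation operator. Below, a plain (not band-limited) angular spectrum propagator stands in for P_{n+1,n}; the three-object scene, the masks, and the stand-in object fields are all illustrative assumptions.

```python
import numpy as np

def propagate(u, wavelength, pitch, dz):
    """Angular spectrum propagation of field u over distance dz.
    (Plain version; the paper uses the band-limited variant [28].)"""
    n = u.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0/wavelength**2 - fx**2 - fy**2
    kz = 2*np.pi*np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j*kz*dz), 0.0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(u) * H)

wavelength, pitch, N = 0.633e-6, 1.0e-6, 512       # illustrative values
y, x = np.mgrid[-N//2:N//2, -N//2:N//2] * pitch

# Object planes numbered rear to front (n = 0 furthest from the hologram).
z = [0.0, 0.5e-3, 1.0e-3]                          # plane positions
z_holo = 2.0e-3
masks = [np.ones((N, N)),                          # nothing behind object 0
         np.where(x**2 + y**2 < (60e-6)**2, 0.0, 1.0),
         np.where(np.maximum(np.abs(x), np.abs(y)) < 40e-6, 0.0, 1.0)]
fields = [np.exp(1j*2*np.pi*0.02*x/pitch),         # stand-in object fields O_n
          (1 - masks[1]) * np.exp(1j*0.3),
          (1 - masks[2]) * np.exp(1j*0.6)]

# Eq. (2): u_{n+1} = P_{n+1,n}{ M_n u_n + O_n }, then on to the hologram.
u = np.zeros((N, N), dtype=complex)
for n in range(3):
    u = masks[n]*u + fields[n]
    dz = (z[n+1] if n < 2 else z_holo) - z[n]
    u = propagate(u, wavelength, pitch, dz)
```

Note that the whole field is propagated N times, which is exactly the cost that the switch-back technique of section 3 avoids.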

2.3 Examples of high-definition CGHs created by object-by-object light shielding

Optical reconstructions of some CGHs that were created by O-O light shielding have already been reported in [2–8]. Unfortunately, conventional photographs are not capable of conveying an accurate impression of the strong depth sensation provided by these HD-CGHs, because photographs lack the spatial dimension. The downloadable motion pictures provided in [2–5, 8] and the streaming movie in [27] are much better than the still photos for illustration of this aspect.

2.4 Principle of polygon-by-polygon light shielding and the associated problem

If an object has a complicated shape or concave surfaces, as shown in Fig. 3(a), then O-O light shielding does not work well. In this case, the reconstructed images may have a transparent portion or sometimes a black shadow that occurs when the viewer sees the mask directly [4]. These occlusion errors can be avoided by light shielding using masks for each polygon, as shown in Fig. 3(b). In this case, the light is shielded using many masks that are much smaller than the mask for the whole object, as shown in Fig. 4(b). The masked field is numerically propagated for a short distance between masks, as shown in Fig. 4(a). Only the unit of processing differs from that of O-O light shielding. This type of light shielding is referred to as polygon-by-polygon (P-P) light shielding in this paper.


Fig. 3 Comparison between (a) object-by-object and (b) polygon-by-polygon light-shielding methods.



Fig. 4 Schematic illustration of (a) polygon-by-polygon light shielding, and (b) the silhouette mask of a polygon.


The procedure and the formulation used for P-P light shielding are exactly the same as those of O-O light shielding; we can simply apply the recurrence formula of Eq. (2) to each polygon. The concept of P-P light shielding is very simple. However, this simple P-P shielding is not useful in practice, because completing it requires as many numerical propagations of the whole wave field as there are polygons. This can be very time-consuming, particularly for high-definition CGHs that are composed of more than a billion pixels.

3. The switch-back technique for fast polygon-by-polygon shielding

P-P shielding has features that distinguish it from O-O shielding. The silhouette mask of a single polygon is, in general, very small when compared with the wave field. The ratio of silhouette area to field area is commonly less than 1%, and is sometimes less than 0.1% for objects formed using many polygons. This means that the use of silhouette-shaped apertures rather than silhouette-shaped masks offers a considerable advantage, as detailed in this section.

3.1 Introduction of silhouette apertures based on Babinet’s law

Figure 5 schematically shows three possible procedures for performing the single light-shielding step in the silhouette method. Here, the object fields are ignored for simplicity.


Fig. 5 Three types of masking procedure. The final masked field is given by the subtraction of two fields in procedures (ii) and (iii).


Procedure (i) shows the original masking technique given directly by Eq. (2). Consider a wave field u_1 = u_1(x,y) given in the z_1 plane. The wave field is propagated to the z_2 plane and is then masked using the mask function M_2 = M_2(x,y). The masked field, given by P_{2,1}{u_1}M_2, is then propagated again to the z_3 plane. The field in the z_3 plane is thus given by:

u_3(x,y) = P_{3,2}[ P_{2,1}{u_1(x,y)} M_2(x,y) ].    (3)

Procedure (ii) is equivalent to procedure (i) according to Babinet’s law. In this case, we use an inverted mask function, i.e. an aperture function, which is defined as:

A_n(x,y) = 1 - M_n(x,y).    (4)
The field P_{2,1}{u_1} in the z_2 plane is passed through the aperture and is then propagated to the z_3 plane; the field in the z_3 plane is thus given by P_{3,2}[P_{2,1}{u_1}A_2]. Babinet's law ensures that the subtraction of this field from P_{3,1}{u_1} gives the same u_3 as that given in Eq. (3). In fact, by substituting M_2 = 1 - A_2 into Eq. (3), the field in the z_3 plane can be written as:
u_3(x,y) = P_{3,2}[ P_{2,1}{u_1(x,y)} - P_{2,1}{u_1(x,y)}A_2(x,y) ]
         = P_{3,1}{u_1(x,y)} - P_{3,2}[ P_{2,1}{u_1(x,y)}A_2(x,y) ],    (5)
where the propagation P_{3,2}[P_{2,1}{ξ(x,y)}] is equivalent to P_{3,1}{ξ(x,y)}, according to the definition of the propagation operator.

Procedure (iii) also gives the same result as those found using procedures (i) and (ii). In this case, the field u1 is propagated to the z3 plane once without any masking. This temporal field is written as:

u_3'(x,y) = P_{3,1}{u_1(x,y)}.    (6)
We retain this temporal field. The field before masking in the z_2 plane is obtained by back-propagation of the retained field u_3'(x,y). The masked field P_{2,3}{u_3'(x,y)}A_2(x,y) is again forward-propagated to the z_3 plane and is subtracted from the retained field u_3'(x,y). Thus, the final field u_3 is given by:

u_3(x,y) = u_3'(x,y) - P_{3,2}[ P_{2,3}{u_3'(x,y)}A_2(x,y) ].    (7)

Procedures (ii) and (iii), which use the aperture function rather than the mask function, require three propagations, while procedure (i) uses the mask function and needs only two. Procedures (ii) and (iii) therefore seem redundant when compared with procedure (i). However, as shown in Fig. 6, use of the aperture offers a major advantage: the full field is no longer required for two of the three propagations, because only a small portion of the field passes through the aperture. Therefore, propagation of only that small part of the field is necessary to perform light shielding. Here, we must again emphasize that the masked area is much smaller than the area of the whole wave field in the case of P-P light shielding.
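The Babinet equivalence of procedures (i) and (iii) can be checked numerically. The sketch below compares Eq. (3) against Eqs. (6) and (7) using a plain angular spectrum propagator; the plane-wave input, the circular aperture, and the distances are illustrative assumptions.

```python
import numpy as np

def P(u, dz, wavelength=0.633e-6, pitch=1.0e-6):
    """Angular spectrum propagation by dz; dz < 0 propagates backward."""
    n = u.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0/wavelength**2 - fx**2 - fy**2
    H = np.where(arg > 0, np.exp(2j*np.pi*dz*np.sqrt(np.maximum(arg, 0.0))), 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

N = 256
yy, xx = np.mgrid[-N//2:N//2, -N//2:N//2]
u1 = np.exp(1j*2*np.pi*0.01*xx)                  # field in the z1 plane
A2 = np.where(xx**2 + yy**2 < 30**2, 1.0, 0.0)   # silhouette aperture
M2 = 1.0 - A2                                    # silhouette mask, Eq. (4)
d21, d32 = 2e-3, 3e-3                            # z2 - z1 and z3 - z2

# Procedure (i): propagate, mask, propagate again (Eq. (3)).
u3_i = P(P(u1, d21) * M2, d32)

# Procedure (iii): one full propagation, then a "round trip" through the
# aperture, subtracted from the retained field (Eqs. (6) and (7)).
u3_prime = P(u1, d21 + d32)
u3_iii = u3_prime - P(P(u3_prime, -d32) * A2, d32)

# Babinet's principle: both procedures give the same field.
assert np.allclose(u3_i, u3_iii)
```

At this sampling pitch no evanescent components arise, so the two procedures agree to floating-point precision.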


Fig. 6 Schematic illustration of the advantages of the silhouette aperture over the silhouette mask.


In addition, procedure (iii) has the feature that both the second and third propagations are performed between the z_2 and z_3 planes; only the direction differs. Also, the final field u_3 is given by the difference between the temporal field u_3' and the result of the "round-trip" propagation. As shown in the next section, this has a useful effect for the processing of multiple polygons: all intermediate information is accumulated in a single plane.

3.2 Formulation for multiple polygons

The examples described above are for a single polygon only. To formulate multiple shielding, we must redefine the object plane. This object plane is similar to that in O-O light shielding; it is perpendicular to the optical axis and is located near or across the object. However, the role of the object plane in P-P light shielding is quite different from its role in O-O light shielding. We intend to accumulate the temporal field, i.e. the intermediate state of P-P light shielding before completion of the occlusion processing, in the object plane. We can retain every object field under computation in the object plane using a formulation based on procedure (iii), as follows.

Suppose that the operation P_{obj,n}{u_n(x,y)} propagates the field u_n(x,y) in the z_n plane to the object plane. By applying the operator P_{obj,n+1} to both sides of Eq. (2), the recurrence formula can be rewritten as:

P_{obj,n+1}{u_{n+1}(x,y)} = P_{obj,n+1}[ P_{n+1,n}{ M_n(x,y) u_n(x,y) + O_n(x,y) } ]
                          = P_{obj,n}{ M_n(x,y) u_n(x,y) + O_n(x,y) },    (8)
where P_{obj,n+1}[P_{n+1,n}{ξ(x,y)}] is equivalent to P_{obj,n}{ξ(x,y)}, according to the definition of the operator. By substituting M_n from Eq. (4) into Eq. (8), the recurrence formula can then be rewritten as:
P_{obj,n+1}{u_{n+1}(x,y)} = P_{obj,n}{u_n(x,y)} - P_{obj,n}{ A_n(x,y) u_n(x,y) - O_n(x,y) }.    (9)
We then introduce a new symbol:
u_n^{obj}(x,y) ≡ P_{obj,n}{u_n(x,y)},    (10)
where u_n^{obj}(x,y) is the temporally-accumulated wave field in the object plane, i.e. the sub-total object field from polygon 0 to polygon n - 1 with P-P light-shielding. Using this new symbol, Eq. (9) can be rewritten as follows:
u_{n+1}^{obj}(x,y) = u_n^{obj}(x,y) + P_{obj,n}{ O_n(x,y) - A_n(x,y) u_n(x,y) }.    (11)
This recurrence formula states that the field u_n(x,y) is required to advance the calculation by a single step, i.e. to obtain u_{n+1}^{obj}(x,y) from u_n^{obj}(x,y). From the definition of u_n^{obj}(x,y) in Eq. (10), this field is provided by backward propagation as follows:
u_n(x,y) = P_{n,obj}{u_n^{obj}(x,y)}.    (12)
As a result, Eqs. (11) and (12) comprise a simultaneous recurrence formula that offers another expression of the original recurrence formula of Eq. (2).
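This equivalence can be checked with full-field propagation before locality is exploited: iterating Eq. (2) and then propagating to the object plane must give the same result as iterating Eqs. (11) and (12) directly in the object plane. The disc-shaped "polygons", their positions, and the plain angular spectrum propagator are illustrative assumptions.

```python
import numpy as np

def P(u, dz, wavelength=0.633e-6, pitch=1.0e-6):
    """Angular spectrum propagation by dz (negative dz = backward)."""
    n = u.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0/wavelength**2 - fx**2 - fy**2
    H = np.where(arg > 0, np.exp(2j*np.pi*dz*np.sqrt(np.maximum(arg, 0.0))), 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

N = 256
yy, xx = np.mgrid[-N//2:N//2, -N//2:N//2]
z = [1e-3, 1.5e-3, 2e-3]        # planes of "polygons" 0..2
z_obj = 3e-3                    # object plane
A = [np.where((xx - c)**2 + yy**2 < 20**2, 1.0, 0.0) for c in (-50, 0, 50)]
O = [a * np.exp(1j*ph) for a, ph in zip(A, (0.0, 0.5, 1.0))]

# Original recurrence, Eq. (2), finishing with propagation to z_obj.
u = np.zeros((N, N), dtype=complex)
for n in range(3):
    u = (1.0 - A[n])*u + O[n]                    # M_n = 1 - A_n, Eq. (4)
    dz = (z[n+1] if n < 2 else z_obj) - z[n]
    u = P(u, dz)
u_direct = u

# Accumulated-field form, Eqs. (11) and (12), kept in the object plane.
u_obj = np.zeros((N, N), dtype=complex)
for n in range(3):
    u_n = P(u_obj, z[n] - z_obj)                       # Eq. (12): backward
    u_obj = u_obj + P(O[n] - A[n]*u_n, z_obj - z[n])   # Eq. (11): forward

assert np.allclose(u_direct, u_obj)
```

Because the angular spectrum propagations compose exactly, the two formulations agree to numerical precision, as the derivation above predicts.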

Computation based on Eqs. (11) and (12) requires two numerical propagations for each polygon, whereas only a single propagation per polygon was needed in the original recurrence formula. This seems inefficient. However, we should emphasize again that propagation of the whole field is not necessary in this case. The polygon field O_n(x,y) is localized around the polygon and the aperture in the z_n plane, as shown in Fig. 7(a). In addition, u_n(x,y) is required only around the aperture, as shown in (b). Therefore, we rewrite the simultaneous recurrence formulas as:

u_n(x,y) = \hat{P}_{n,obj}{u_n^{obj}(x,y)},    (13)
u_{n+1}^{obj}(x,y) = u_n^{obj}(x,y) + \hat{P}_{obj,n}{ O_n(x,y) - A_n(x,y) u_n(x,y) },    (14)
where the new notation \hat{P}_{n,obj}{u_n^{obj}(x,y)} stands for the backward propagation of only a small part of the accumulated field u_n^{obj}(x,y) to the aperture plane at z_n. The other notation, \hat{P}_{obj,n}{O_n(x,y) - A_n(x,y)u_n(x,y)}, stands for forward propagation of the field that is localized around the polygon to the object plane. Greatly reduced computational effort is required for these two propagations when compared with the original whole-field propagation.


Fig. 7 (a) Forward propagation of the localized field to the object plane, and (b) backward propagation of the small part of the accumulated field in the object plane to the current aperture.


3.3 Practical procedure for computation of the object field with P-P shielding

In the case where there is no background field behind the object, we can set u_0^{obj}(x,y) ≡ 0. Thus, backward propagation in Eq. (13) gives u_0(x,y) ≡ 0. By forward propagation in Eq. (14), the wave field of polygon 0 is given by u_1^{obj}(x,y) = P_{obj,0}{O_0(x,y)} in the object plane. The three steps described below are essentially repeated after this, as shown in Fig. 8.


Fig. 8 Schematic explanation of the procedure for the switch-back technique: (a) switch-back for the current polygon n, and (b) for the next polygon n + 1.


Step (I). A small part, marked by a red dashed line in (a), is extracted from the accumulated field u_n^{obj}(x,y) in the object plane. The extracted partial field is kept in a separate small sampling array and propagated backward to the plane of polygon n, i.e. u_n(x,y) is computed in the small sampling array according to Eq. (13). Here, the small part required for the propagation is the region that is affected by the field of polygon n and by the light shielding. The affected region is obtained by the technique explained in the following section.

Step (II). The field u_n(x,y) is multiplied by the silhouette aperture function A_n(x,y), and the resultant is then subtracted from the polygon field O_n(x,y), as per Eq. (14). Note that all operations in this step can be performed in the small sampling array.

Step (III). The resultant field, still kept in the small sampling array, is propagated forward to the object plane and then added to the current accumulated field u_n^{obj}(x,y). This in turn gives the next accumulated field u_{n+1}^{obj}(x,y), as shown in Eq. (14).

If the object is composed of N polygons, then the accumulated field u_N^{obj}(x,y) gives the final object field. Thus, after performing step (III) for n = N - 1, we obtain the final object field. As a result, the three steps must be performed N - 1 times in the case where u_0^{obj}(x,y) ≡ 0, and the number of propagations is 2N - 1. If u_0^{obj}(x,y) ≠ 0, i.e. if there is a non-zero background field, then the three steps must be started from u_0^{obj}(x,y). In this case, the numbers of repetitions and propagations required are N and 2N, respectively.

As mentioned earlier, the local wave fields are propagated back and forth to process each polygon in this technique. We therefore refer to this technique as the switch-back technique.
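Steps (I) to (III) can be sketched compactly. The code below assumes disc-shaped "polygons", a plain angular spectrum propagator, and a fixed local window chosen larger than the maximum diffraction area (section 3.4); the window size, positions, and distances are all illustrative.

```python
import numpy as np

def P(u, dz, wavelength=0.633e-6, pitch=1.0e-6):
    """Angular spectrum propagation by dz on a (small) sampling array."""
    n = u.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0/wavelength**2 - fx**2 - fy**2
    H = np.where(arg > 0, np.exp(2j*np.pi*dz*np.sqrt(np.maximum(arg, 0.0))), 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

N, W = 1024, 128                 # whole field and local window, in pixels
z_obj = 0.0
polygons = [                     # (window centre in px, plane z, phase)
    ((300, 300), -1.0e-4, 0.0),
    ((330, 330), -0.7e-4, 0.7),
    ((360, 360), -0.4e-4, 1.4),
]

yy, xx = np.mgrid[-W//2:W//2, -W//2:W//2]
A_n = np.where(xx**2 + yy**2 < 20**2, 1.0, 0.0)   # aperture in the window

u_obj = np.zeros((N, N), dtype=complex)           # accumulated object field
for (cy, cx), zn, ph in polygons:
    win = np.s_[cy - W//2:cy + W//2, cx - W//2:cx + W//2]
    # Step (I): extract the affected part and back-propagate it to z_n.
    u_n = P(u_obj[win].copy(), zn - z_obj)
    # Step (II): aperture the field and subtract it from the polygon field
    # O_n (a flat disc here), i.e. form O_n - A_n u_n of Eq. (14).
    local = A_n*np.exp(1j*ph) - A_n*u_n
    # Step (III): forward-propagate the local field and accumulate.
    u_obj[win] += P(local, z_obj - zn)
```

Only W × W arrays are ever transformed inside the loop; the full N × N field is touched solely through window assignments, which is the source of the speed-up.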

3.4 Numerical technique and sampling window for switch-back propagation

It is very important to avoid aliasing errors in the numerical propagations of steps (I) and (III). If an aliasing error occurs in each propagation, the accumulated error is considerable, because N is usually very large. Therefore, the method of numerical propagation must be carefully chosen in the practical implementation. In this paper, the band-limited angular spectrum method is used for the numerical propagation [28]. The angular spectrum method is a convolution-based method and gives rigorous Rayleigh-Sommerfeld solutions. The band-limiting also solves the problem of aliasing errors of the sampled transfer function in short-distance propagation. However, there is a common problem in convolution-based methods: if the field is expanded by diffraction beyond the sampling window, a type of aliasing error occurs in the propagated field. To avoid this problem, the sampling window must be larger than the maximum diffraction area.

The maximum diffraction area of the polygon field O_n(x,y) in the object plane is easily estimated using the technique described in section 2.C of [2], i.e. it is given by the rectangular region that includes all of the maximum diffraction areas of point sources placed at the vertices of the polygon. This is shown schematically in Fig. 9, where θ_max = sin^{-1}[λ/(2δ)] is the maximum diffraction angle, and λ and δ are the wavelength and the sampling interval of the field, respectively. The maximum diffraction area of the masked field A_n(x,y)u_n(x,y) is also estimated by the same technique, as in Fig. 9. The sampling window for switch-back propagation must include both areas, which differ slightly from each other. If this condition is satisfied, any aliasing error caused by convolution-based propagation can be avoided. Note that the extension of the sampling window described in [28] is unnecessary in this case, because the field is guaranteed not to expand beyond the sampling window.
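The window estimate reduces to growing the source's footprint by |z|·tan(θ_max) on each side. A small helper illustrates the arithmetic; the polygon size, distance, and wavelength are assumed values (the 0.8 μm pitch matches Table 1).

```python
import math

def max_diffraction_halfwidth(halfwidth_m, dz_m, wavelength_m, pitch_m):
    """Half-width of the maximum diffraction area of a source of the given
    half-width after propagating |dz|: the footprint grows by
    |dz| * tan(theta_max), where theta_max = asin(lambda / (2*delta)) is
    the maximum diffraction angle for sampling interval delta."""
    theta_max = math.asin(wavelength_m / (2.0 * pitch_m))
    return halfwidth_m + abs(dz_m) * math.tan(theta_max)

# Example (assumed numbers): a polygon of 50 um half-width, 1 mm from the
# object plane, sampled at 0.8 um with a 0.633 um wavelength.
w = max_diffraction_halfwidth(50e-6, 1e-3, 0.633e-6, 0.8e-6)
# The sampling window for switch-back propagation must span at least 2*w
# (and additionally cover the masked field's area, per the text above).
```

At this pitch θ_max is roughly 23 degrees, so even a sub-millimeter propagation distance dominates the window size, which is why deep objects benefit from the sub-object division of section 3.5.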


Fig. 9 Schematic illustration of the maximum diffraction areas and the minimum sampling window required to avoid aliasing errors in switch-back propagation.


3.5 Dividing an object into multiple sub-objects to speed up the switch-back computation

The computation time required for the object field in the switch-back technique depends on the average distance between the polygons and the object plane, because the maximum diffraction area extends with increasing propagation distance, and thus the number of sampling points in the numerical propagation also increases with distance. Therefore, if the object extends for a long distance in the z-direction, i.e. the object has a large depth, then the technique will require a long computation time.

It is easy to predict that dividing an object into multiple sub-objects and using an object plane for each sub-object, i.e. using multiple object planes, is effective in reducing the computation time. In this case, the switch-back procedure presented in the section above is applied sequentially to each sub-object, as shown in Fig. 10. After the switch-back procedure is complete in the current object plane, the resultant object field is fully propagated to the next object plane. This field is the background field u_0^{obj}(x,y) for the switch-back computation of the next sub-object.


Fig. 10 Schematic illustration of the division of an object and the multiple object planes.


Note that division of the object does not give approximated results, i.e. it has no side effects with respect to numerical errors; the object field computed with division has the same quality as that computed without division. However, too many sub-objects may increase the computation time, because whole-field propagation is necessary between object planes.
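The trade-off can be sketched with a back-of-envelope cost model. Everything below is an assumption for illustration, not the paper's analysis: FFT cost is taken as S·log2(S) for S samples, the mean polygon-to-plane distance is taken as a quarter of the sub-object depth, and windows are square.

```python
import math

def total_fft_cost(n_poly, depth_m, n_sub, field_px,
                   wavelength=0.633e-6, pitch=0.8e-6, poly_px=64):
    """Toy operation count for switch-back with n_sub sub-objects.
    Assumed model: two local propagations per polygon on a window grown
    by diffraction spread, plus one whole-field hop per object plane."""
    tan_t = math.tan(math.asin(wavelength / (2.0*pitch)))
    avg_dz = depth_m / n_sub / 4.0                # assumed mean distance
    win = poly_px + 2.0*avg_dz*tan_t/pitch        # window edge in pixels
    local = 2.0 * win*win * math.log2(win*win)    # steps (I) and (III)
    whole = field_px*field_px * math.log2(field_px*field_px)
    return n_poly*local + n_sub*whole             # + inter-plane hops

costs = {k: total_fft_cost(5000, 20e-3, k, 4096)
         for k in (1, 4, 16, 32, 64, 256)}
best = min(costs, key=costs.get)
# The optimum is interior: too few planes inflate the local windows,
# while too many inflate the whole-field propagations.
```

Under these assumed constants the minimum falls at an intermediate number of sub-objects, in line with the optimum reported in section 4.1.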

4. Performance of the switch-back technique for polygon-by-polygon shielding

Figure 11 and Table 1 show the 3D scene and the associated parameters used for the switch-back technique performance measurements. The object models used for the measurements are shown in Fig. 12. All polygons used in these models are triangles, although the technique is not restricted to triangles. The main object model is Model 5, which is composed of 5,000 polygons, of which 2,500 are front-face polygons. Because this object model suffers severe self-occlusion, computation of the exact object field is impossible without proper hidden surface removal. The hologram is composed of 64K × 64K pixels that are 0.8 μm × 0.8 μm squares, where 1K = 1024. The total number of pixels, which is the same as the number of sampling points of the whole wave field, is approximately 4.3 billion. We also measured the computation times for some reduced holograms, such as a 2K × 2K pixel hologram (a 1/32-scale hologram). In this case, all sizes and distances indicated in Fig. 11 are reduced to 1/32, but the numbers of polygons and the pixel pitches are unchanged, even in the reduced hologram.

Fig. 11 3D scene used for the performance measurements. The sizes and distances indicated are those of the full-size hologram.

Table 1. Parameters used for high-definition CGH computation.

Fig. 12 Object models used for the performance measurements. The number of polygons indicated in brackets in each case is the total number of polygons. The number of front-face polygons is simply half of the total number. Model 5 is used to create the actual HD-CGH.

We used two computers for the performance measurements, because the memory required for the computation depends on the scale of the hologram. The specifications of the two PCs and the software environment used for the measurements are summarized in Table 2. Please note that we did not use any hardware acceleration such as GPUs.

Table 2. Specifications of the hardware and software used for the performance measurements.

In the measurements, the polygon fields On(x,y) were computed using the polygon-based method [9]. The band-limited angular spectrum method [28] was used for the numerical propagations P̂n,obj and P̂obj,n. This method was also used for the whole-field propagation between object planes when an object is split, and for the whole-field propagation from the final object plane to the hologram plane. Note that the extension of the sampling window described in [28] is necessary only in the latter case, i.e., for propagation from the final object plane to the hologram plane. In the other propagations, the extension is not required because the sampling window is large enough to avoid aliasing errors.
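A minimal sketch of the band-limited angular spectrum method of ref. [28] is shown below: the transfer function is zeroed beyond the spatial-frequency bound at which its sampled phase would alias, which suppresses the errors the plain method produces at long propagation distances. The grid parameters in the usage example are arbitrary illustrative values, not those of Table 1.

```python
import numpy as np

def bl_asm(u, z, wl, dx):
    """Band-limited angular spectrum propagation over distance z.

    wl: wavelength; dx: sampling pitch. Frequencies beyond the band
    limit of ref. [28] are removed from the transfer function H."""
    ny, nx = u.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
    arg = 1.0 / wl**2 - FX**2 - FY**2
    # phase-only transfer function; evanescent components are dropped
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.abs(arg))), 0)
    # band limits derived from the frequency sampling steps dfx, dfy
    dfx, dfy = 1.0 / (nx * dx), 1.0 / (ny * dx)
    fx_lim = 1.0 / (wl * np.sqrt((2 * dfx * abs(z))**2 + 1))
    fy_lim = 1.0 / (wl * np.sqrt((2 * dfy * abs(z))**2 + 1))
    H = H * ((np.abs(FX) <= fx_lim) & (np.abs(FY) <= fy_lim))
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

Because |H| ≤ 1 everywhere, the propagation never amplifies the field energy; within the band limit it is a pure phase filter.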

4.1 Effect of the number of sub-objects on the computation time

Splitting an object into sub-objects and using multiple object planes is expected to reduce the computation time. However, there is an optimum number of sub-objects, because the number of whole-field propagations increases with the number of sub-objects.

Figure 13 shows the computation times measured for different numbers of sub-objects. Model 5 in Fig. 12 was used for the measurements. Here, a full-size hologram (64K × 64K) and a 1/32 scale hologram (2K × 2K) were computed using PC1 and PC2, respectively. These holograms are intended for high-definition CGHs and for electro-holography, respectively. The computation time decreases with increasing numbers of sub-objects, as shown in Fig. 13, but becomes almost constant when more than seven or eight sub-objects are used in both cases. As shown by the itemized ratios in both graphs, this plateau is attributed to the increased computation time required for numerical propagation. The shortest times measured were 1.7 h and 4.5 s for the full-size and 1/32 scale holograms, respectively.
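The trade-off can be illustrated with a toy cost model. All constants below are hypothetical, chosen only to mimic the qualitative shape of Fig. 13 and not fitted to the measurements: splitting into K sub-objects adds K whole-field propagations, while shrinking the per-polygon switch-back window, whose FFT-like cost scales roughly as w² log w.

```python
import math

def total_time(K, n_poly=2500, t_whole=60.0, depth=30.0):
    """Toy cost model (illustrative constants): K whole-field propagations
    plus per-polygon switch-back FFTs over a window whose size shrinks
    as the object is split into thinner sub-objects."""
    w = 2.0 + depth / K                    # hypothetical window size per polygon
    t_poly = 1e-3 * w * w * math.log2(w)   # FFT-like cost per polygon
    return K * t_whole + n_poly * t_poly

best_K = min(range(1, 21), key=total_time)
```

With these constants, the minimum falls in the single-digit range, consistent with the plateau observed at seven or eight sub-objects; the real optimum of course depends on the scene depth and hologram size.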

Fig. 13 Computation time vs. number of sub-objects. The measured times are for (a) high-definition CGHs and (b) electro-holography.

4.2 Computation time vs. number of polygons

The increase in computation time with increasing numbers of polygons was measured using the additional object models shown in Fig. 12. Here, we divided all object models into 10 sub-objects, taking the results of the previous section into account. The object models were placed at the same position in the 3D scene in Fig. 11 and have the same ring size as Model 5, i.e., a width of 40 mm in the full-size hologram. The measured computation times are shown in Fig. 14(a). In all cases, the computation time increases almost linearly with the number of polygons, although a slight undulation was detected. This undulation may be caused by irregular changes in the object depth, depending on the model.

Fig. 14 Computation time vs. number of front-face polygons. The measured times are for (a) high-definition CGHs and (b) electro-holography.

Figure 14(b) shows the computation time for the conventional silhouette method given by Eq. (2) for comparison. In this case, the computation time is exactly proportional to the number of polygons, because one whole-field propagation is required per polygon. The computation time is generally much longer than that of the proposed switch-back technique. For example, computation of the full-size hologram for Model 1 (500 front-face polygons) took 21.3 h with the conventional method, but only 0.9 h with the switch-back technique. We abandoned computation of models with more than 500 front-face polygons in the full-size hologram, because the expected computation time was too long. However, the computation times are easily estimated because of the exact linearity of the method. The estimated computation time for Model 5 is approximately 106 h in full size, which is approximately 60 times longer than that of the switch-back technique. For the reduced holograms, e.g., 2K × 2K pixels (1/32 scale hologram), the computation time of the conventional silhouette method is approximately 20 times longer than that of the switch-back technique.
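The extrapolation used here is simple linearity in the polygon count:

```python
t_500 = 21.3                        # h, conventional method, Model 1 (500 front-face polygons)
t_model5_est = t_500 * 2500 / 500   # exact linearity -> estimate for Model 5
speedup = t_model5_est / 1.7        # vs. the switch-back time for the full-size hologram

print(t_model5_est)    # 106.5 h, i.e. "approximately 106 h"
print(round(speedup))  # ~63, i.e. "approximately 60 times"
```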

5. CGH fabrication and optical reconstruction

The high-definition CGH of Model 5 was actually fabricated using a laser lithography system. The fringe pattern, generated by numerical interference of the computed object field with a spherical wave, was binarized and then printed as a photo-mask using a DWL-66 laser writer (Heidelberg Instruments).

The optical reconstruction of the HD-CGH is shown in Fig. 15. The CGH was reconstructed by reflected illumination using an ordinary red LED in (a), and by the transmitted illumination of a He-Ne laser in (b). Close-up photographs and a movie of the 3D image taken from various viewpoints are shown in Fig. 16. These verify that the high-definition CGH reconstructs appropriate occlusions and continuous motion parallax in both the horizontal and vertical directions.

Fig. 15 Optical reconstruction of a fabricated high-definition CGH using (a) the reflected illumination of a red LED, and (b) the transmitted illumination of a He-Ne laser.

Fig. 16 Close-up photographs of the optical reconstruction using illumination from a red LED. The photographs are taken from various viewpoints. Occlusions and continuous motion parallax are appropriately reconstructed (see Media 1).

6. Conclusion

We have proposed techniques for hidden surface removal in computer holography. The silhouette method previously used to create polygon-based high-definition CGHs is a powerful light-shielding technique. However, when the technique is applied to individual polygons to process self-occlusion, computing the object field takes an extremely long time. To overcome this problem, we developed a new method called the switch-back technique, which considerably reduces the computation time for polygon-by-polygon light-shielding. In fact, the computation of the object wave is accelerated by 20 to 60 times compared with the conventional silhouette method. The shortest measured computation times were 1.7 h for the 4.3 Gpixel high-definition CGH and 4.5 s for the 4 Mpixel CGH intended for electro-holography.

The proposed technique can also be implemented on a GPU, provided the GPU has enough local memory to store the whole accumulated field unobj(x,y) and the partial field required for switch-back propagation. We have verified with a quick-and-dirty implementation that CGHs of up to 8K × 8K pixels can be computed on a GPU with 1.5 GB of local memory. The GPU computation time was several times shorter than that of the CPU. A polished GPU implementation and its performance measurement remain future work.

Acknowledgments

This work was partially supported by JSPS KAKENHI (24500133) and by the MEXT Strategic Research Foundation at Private Universities (2013-2017).

References and links

1. A. W. Lohmann and D. P. Paris, "Binary Fraunhofer holograms, generated by computer," Appl. Opt. 6(10), 1739–1748 (1967).

2. K. Matsushima and S. Nakahara, "Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method," Appl. Opt. 48(34), H54–H63 (2009).

3. H. Nishi, K. Matsushima, and S. Nakahara, "Rendering of specular surfaces in polygon-based computer-generated holograms," Appl. Opt. 50(34), H245–H252 (2011).

4. K. Matsushima, Y. Arima, and S. Nakahara, "Digitized holography: modern holography for 3D imaging of virtual and real objects," Appl. Opt. 50(34), H278–H284 (2011).

5. K. Matsushima, H. Nishi, and S. Nakahara, "Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography," J. Electron. Imaging 21(2), 023002 (2012).

6. H. Nishi, K. Matsushima, and S. Nakahara, "Advanced rendering techniques for producing specular smooth surfaces in polygon-based high-definition computer holography," Proc. SPIE 8281, 828110 (2012).

7. K. Matsushima, S. Nakahara, Y. Arima, H. Nishi, H. Yamashita, Y. Yoshizaki, and K. Ogawa, "Computer holography: 3D digital art based on high-definition CGH," J. Phys.: Conf. Ser. 415, 012053 (2013).

8. K. Matsushima and S. Nakahara, "Stepping closer to the perfect 3D digital image," SPIE Newsroom (6 Nov. 2012), http://spie.org/x90909.xml.

9. K. Matsushima, "Computer-generated holograms for three-dimensional surface objects with shade and texture," Appl. Opt. 44(22), 4607–4614 (2005).

10. J. Underkoffler, "Occlusion processing and smooth shading for fully computed synthetic holography," Proc. SPIE 3011, 19–30 (1997).

11. K. Wakunami, H. Yamashita, and M. Yamaguchi, "Occlusion culling for computer generated hologram based on ray-wavefront conversion," Opt. Express 21(19), 21811–21822 (2013).

12. J. P. Waters, "Holographic image synthesis utilizing theoretical methods," Appl. Phys. Lett. 9(11), 405–407 (1966).

13. R. H.-Y. Chen and T. D. Wilkinson, "Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display," Appl. Opt. 48(21), 4246–4255 (2009).

14. R. H.-Y. Chen and T. D. Wilkinson, "Computer generated hologram from point cloud using graphics processor," Appl. Opt. 48(36), 6841–6850 (2009).

15. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, "Full parallax three-dimensional display with occlusion effect using computer generated hologram," Opt. Eng. 50(7), 074003 (2011).

16. T. Ichikawa, T. Yoneyama, and Y. Sakamoto, "CGH calculation with the ray tracing method for the Fourier transform optical system," Opt. Express 21(26), 32019–32031 (2013).

17. I. Hanák, M. Janda, and V. Skala, "Detail-driven digital hologram generation," Visual Comput. J. 26(2), 83–96 (2010).

18. I. Hanák, A. Herout, and P. Zemcik, "Acceleration of detail driven method for hologram generation," Opt. Eng. 49(8), 085802 (2010).

19. K. Wakunami and M. Yamaguchi, "Calculation for computer generated hologram using ray-sampling plane," Opt. Express 19(10), 9086–9101 (2011).

20. T. Fuji and H. Yoshikawa, "Improvement of hidden-surface removal for computer-generated holograms from CG," in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2007), paper DWB3.

21. K. Matsushima, "Exact hidden-surface removal in digitally synthetic full-parallax holograms," Proc. SPIE 5742, 25–32 (2005).

22. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, "Computer generated holograms from three dimensional meshes using an analytic light transport model," Appl. Opt. 47(10), 1567–1574 (2008).

23. H. Kim, J. Hahn, and B. Lee, "Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography," Appl. Opt. 47(19), D117–D127 (2008).

24. Y. Z. Liu, J. W. Dong, Y. Y. Pu, B. C. Chen, H. X. He, and H. Z. Wang, "High-speed full analytical holographic computations for true-life scenes," Opt. Express 18(4), 3345–3351 (2010).

25. K. Matsushima and A. Kondoh, "A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects," Proc. SPIE 5290, 90–97 (2004).

26. A. Kondoh and K. Matsushima, "Hidden surface removal in full-parallax CGHs by silhouette approximation," Syst. Comput. Jpn. 38(6), 53–61 (2007).

27. DigInfo TV, "Computer-synthesized holograms - The ultimate in 3D images" (Digitized Information, Inc., July 22, 2010), http://www.diginfo.tv/2010/07/22/10-0130-r-en.php.

28. K. Matsushima and T. Shimobaba, "Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields," Opt. Express 17(22), 19662–19673 (2009).

Supplementary Material (1)

Media 1: MP4 (15,001 KB)



Figures (16)

Fig. 1 (a) Schematic illustration of light-shielding of a real opaque object, and (b) emulation of light-shielding by the silhouette method.
Fig. 2 (a) Schematic illustration of object-by-object light-shielding, and (b) silhouette mask of the current object.
Fig. 3 Comparison between (a) object-by-object and (b) polygon-by-polygon light-shielding methods.
Fig. 4 Schematic illustration of (a) polygon-by-polygon light shielding, and (b) the silhouette mask of a polygon.
Fig. 5 Three types of masking procedure. The final masked field is given by the subtraction of two fields in procedures (ii) and (iii).
Fig. 6 Schematic illustration of the advantages of the silhouette aperture over the silhouette mask.
Fig. 7 (a) Forward propagation of the localized field to the object plane, and (b) backward propagation of the small part of the accumulated field in the object plane to the current aperture.
Fig. 8 Schematic explanation of the procedure for the switch-back technique: (a) switch-back for the current polygon n, and (b) for the next polygon n + 1.
Fig. 9 Schematic illustration of the maximum diffraction areas and the minimum sampling window required to avoid aliasing errors in switch-back propagation.
Fig. 10 Schematic illustration of the division of an object and the multiple object planes.
Fig. 11 3D scene used for the performance measurements. The sizes and distances indicated are those of the full-size hologram.
Fig. 12 Object models used for the performance measurements. The number of polygons indicated in brackets in each case is the total number of polygons. The number of front-face polygons is simply half of the total number. Model 5 is used to create the actual HD-CGH.
Fig. 13 Computation time vs. number of sub-objects. The measured times are for (a) high-definition CGHs and (b) electro-holography.
Fig. 14 Computation time vs. number of front-face polygons. The measured times are for (a) high-definition CGHs and (b) electro-holography.
Fig. 15 Optical reconstruction of a fabricated high-definition CGH using (a) the reflected illumination of a red LED, and (b) the transmitted illumination of a He-Ne laser.
Fig. 16 Close-up photographs of the optical reconstruction using illumination from a red LED. The photographs are taken from various viewpoints. Occlusions and continuous motion parallax are appropriately reconstructed (see Media 1).


Equations (14)

(1) $M_n(x,y)=\begin{cases}0, & \text{inside the orthogonal projection of object } n\\ 1, & \text{otherwise}\end{cases}$

(2) $u_{n+1}(x,y)=P_{n+1,n}\{M_n(x,y)\,u_n(x,y)+O_n(x,y)\}$

(3) $u_3(x,y)=P_{3,2}\bigl[P_{2,1}\{u_1(x,y)\}\,M_2(x,y)\bigr]$

(4) $A_n(x,y)=1-M_n(x,y)$

(5) $u_3(x,y)=P_{3,2}\bigl[P_{2,1}\{u_1(x,y)\}-P_{2,1}\{u_1(x,y)\}\,A_2(x,y)\bigr]=P_{3,1}\{u_1(x,y)\}-P_{3,2}\bigl[P_{2,1}\{u_1(x,y)\}\,A_2(x,y)\bigr]$

(6) $u'_3(x,y)=P_{3,1}\{u_1(x,y)\}$

(7) $u_3(x,y)=u'_3(x,y)-P_{3,2}\bigl[P_{2,3}\{u'_3(x,y)\}\,A_2(x,y)\bigr]$

(8) $P_{\mathrm{obj},n+1}\{u_{n+1}(x,y)\}=P_{\mathrm{obj},n+1}\bigl[P_{n+1,n}\{M_n(x,y)\,u_n(x,y)+O_n(x,y)\}\bigr]=P_{\mathrm{obj},n}\{M_n(x,y)\,u_n(x,y)+O_n(x,y)\}$

(9) $P_{\mathrm{obj},n+1}\{u_{n+1}(x,y)\}=P_{\mathrm{obj},n}\{u_n(x,y)\}-P_{\mathrm{obj},n}\{A_n(x,y)\,u_n(x,y)-O_n(x,y)\}$

(10) $u_n^{\mathrm{obj}}(x,y)\equiv P_{\mathrm{obj},n}\{u_n(x,y)\}$

(11) $u_{n+1}^{\mathrm{obj}}(x,y)=u_n^{\mathrm{obj}}(x,y)+P_{\mathrm{obj},n}\{O_n(x,y)-A_n(x,y)\,u_n(x,y)\}$

(12) $u_n(x,y)=P_{n,\mathrm{obj}}\{u_n^{\mathrm{obj}}(x,y)\}$

(13) $u_n(x,y)=\hat{P}_{n,\mathrm{obj}}\{u_n^{\mathrm{obj}}(x,y)\}$

(14) $u_{n+1}^{\mathrm{obj}}(x,y)=u_n^{\mathrm{obj}}(x,y)+\hat{P}_{\mathrm{obj},n}\{O_n(x,y)-A_n(x,y)\,u_n(x,y)\}$
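The equivalence of the mask form (Eq. (2)) and the aperture form (Eqs. (9) and (11)) rests only on the linearity of the propagation operator, and can be checked numerically. The sketch below uses a plain angular-spectrum kernel as the linear operator; the grid and optical parameters are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
u = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
O = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
M = np.ones(shape); M[20:40, 20:40] = 0.0   # silhouette mask (0 inside), Eq. (1)
A = 1.0 - M                                  # silhouette aperture, Eq. (4)

def P(field, z=1e-3, wl=633e-9, dx=1e-6):
    """Plain angular-spectrum propagation (any linear operator works here)."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
    arg = 1.0 / wl**2 - FX**2 - FY**2
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.abs(arg))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

lhs = P(M * u + O)         # masked-and-added field, as in Eq. (2)
rhs = P(u) + P(O - A * u)  # aperture form, as in Eqs. (9)/(11)
assert np.allclose(lhs, rhs)
```

Because M u + O = u - A u + O, the two forms differ only by the order in which the linear operator is applied, which is exactly what allows the background field to be propagated once and updated per polygon.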