Optica Publishing Group

Method of single-step full parallax synthetic holographic stereogram printing based on effective perspective images’ segmentation and mosaicking

Open Access

Abstract

Based on the principle of ray tracing and the reversibility of light propagation, a new method of single-step full parallax synthetic holographic stereogram printing based on effective perspective images' segmentation and mosaicking (EPISM) is proposed. The perspective images of the scene are first sampled by a virtual camera, and the exposing images, called synthetic effective perspective images, are obtained using the EPISM algorithm according to the propagation law of light and the viewing frustum effect of human eyes. The hogels are exposed with the synthetic effective perspective images in sequence to form the whole holographic stereogram. The influence of the modeling parameters on the reconstructed images is also analyzed, and experimental results demonstrate that full parallax holographic stereogram printing with the proposed method can provide good reconstructed images with single-step printing. Moreover, detailed experiments with different holographic element sizes, different scene reconstruction distances, and different imaging planes are implemented and analyzed.

© 2017 Optical Society of America

1. Introduction

Owing to the limited resolving power of human eyes, the holographic stereogram combines the principles of holography and binocular parallax, and stereoscopic vision is generated by a series of superimposed plane images. As the wavefront need not be reproduced accurately, the amount of data can be reduced greatly in a holographic stereogram, which is therefore considered one of the most promising three-dimensional (3D) display technologies. Holographic stereograms have been widely used in various fields, such as the military, architecture, commerce, the automotive industry, and entertainment [1, 2].

According to the different sources of interference patterns and the different recording approaches, holographic stereogram printing can be categorized into three basic types: synthetic holographic stereogram printing, computer-generated holographic stereogram printing, and wavefront printing [3,4]. In synthetic holographic stereogram printing, a spatial light modulator (SLM) loads and displays sequential perspective images as the signal beam. After interference with the reference beam, the interference patterns are recorded on the holographic recording medium, which is spatially segmented into multiple hogels (holographic elements) [5]. In computer-generated holographic (CGH) stereogram printing, the interference patterns are first calculated by the computer, then displayed on the SLM, scaled by the imaging lens, and printed on the holographic recording medium [6–8]. Wavefront printing integrates synthetic holographic stereogram printing and CGH stereogram printing. The hologram calculated by the computer is displayed on the SLM and illuminated by the reconstruction beam. After passing through a band-pass filter, the wavefront of the scene is obtained and serves as the signal beam, which then interferes with the reference beam to form a reflection hologram [9–12]. During the reproduction of a holographic stereogram, stereoscopic vision occurs when different perspective images with parallax information are viewed, and the parallax changes as the eyes move.

Compared with CGH stereogram printing and wavefront printing, synthetic holographic stereogram printing involves no complicated diffraction calculation. The synthetic holographic stereogram was first proposed by DeBitetto [13] and promoted by King et al. [14]. The Massachusetts Institute of Technology (MIT) Media Lab has been committed to the development of digital synthetic holographic technology since the 1990s, and has studied the principles of synthetic holographic stereograms with large angle and wide view, as well as distortions and other issues [15, 16]. Zebra Imaging Inc., founded by MIT scientists, successfully developed digital synthetic holographic printers [2, 17]. Brotherton-Ratcliffe introduced a printing technology for large-area digital holograms [18] and developed a holographic printer with RGB pulsed lasers [19]. Issues such as the design of the printing system [20–22], printing efficiency [23–26], image resolution [27, 28], color reproduction [29–31], and updatable holographic recording media [32–34] are among the high-priority topics in synthetic holographic stereogram research.

To achieve reconstructed images near the surface of the recording medium, there are three existing approaches for synthetic holographic stereogram printing: the two-step method [14], the infinite viewpoint camera method proposed by MIT [16], and the single-step Lippmann holographic stereogram method based on computer and image-processing technologies proposed by Yamaguchi's group [35, 36]. In the two-step method, the perspective images for exposing are acquired easily without any transform processing, but two exposures are required. Since it is difficult to achieve large-format collimated light, a large hologram can hardly be made during the second exposure. Compared with the two-step method, only one exposure is needed in either the infinite viewpoint camera method or the single-step Lippmann holographic stereogram method. However, in the infinite viewpoint camera method, the resolution of the reconstructed images equals the number of hogels and is relatively low, especially for a small hologram. In the single-step Lippmann holographic stereogram method, the resolution of the reconstructed images is relatively high, and a large hologram is easily achieved by increasing the number of hogels. Nevertheless, the point cloud data of the scene must be acquired and the occlusion relations of the scene points in space must be considered, leading to a complicated process. In this paper, a novel printing method for synthetic holographic stereograms based on effective perspective images' segmentation and mosaicking (EPISM) is proposed. On the basis of the ray-tracing principle and the reversibility of light propagation, the viewing frustum effect of human eyes is analyzed and simulated. With the segmentation and mosaicking of effective perspective images, the synthetic visual information for a specific viewpoint can be extracted, and this visual information is exposed on a square area of the holographic recording medium, i.e., a hogel.
After all the hogels are exposed successively, a holographic stereogram is achieved. Compared with the existing approaches, the proposed EPISM method has several advantages. The full parallax holographic stereogram can be printed with a single-step exposure, and the resolution of the reconstructed images is much higher than that of the infinite viewpoint camera method, as it is determined by the resolution of the original sampled perspective images rather than by the number of hogels. Moreover, the process of obtaining the exposing images is easier than that of the single-step Lippmann holographic stereogram method because the occlusion relations of the scene points need not be handled explicitly: they are embodied in the original sampled perspective images, and the color and directional information is easily encoded by computing a virtual master hologram.

The rest of the paper is organized as follows. In Section 2, the principles of the previous methods are discussed and the principle of the proposed method is introduced briefly. In Section 3, the realization of the proposed method is described in detail. In Section 4, the influence of the modeling parameters on the quality of the reconstructed images is analyzed. The principle of the proposed method is verified by experiments and the results are discussed in Section 5. The conclusions are presented in Section 6.

2. Principles

2.1. Previous methods

Two-step method

The two-step method includes three procedures: the acquisition of perspective images, the exposure of the master hologram, and the transfer from the master hologram to the transfer hologram, as shown in Fig. 1. Full parallax images of the scene are acquired under incoherent illumination, Fresnel holograms of the perspective images are recorded in different hogels successively, and the master hologram is thus generated (Fig. 1(a)). When viewing the master hologram, the eyes should be close to the positions of the aperture pupils, as shown in Fig. 1(b). The master hologram is then copied to the transfer hologram with the image-hologram recording method (Fig. 1(c)), and the virtual aperture pupils are separated from the transfer hologram. When viewing the transfer hologram from different virtual pupil positions, different perspective images are captured by the eyes, and stereoscopic vision is formed, as shown in Fig. 1(d). For convenience, the master hologram is called the H1 plate and the transfer hologram the H2 plate in the following pages. The size of the master hologram should be much larger than that of the transfer hologram to ensure the same viewing angle.


Fig. 1 The production and reconstruction of the two-step method. (a) The production of the master hologram. (b) The reconstruction of the master hologram. (c) The reproduction of the master hologram to the transfer hologram. (d) The reconstruction of the transfer hologram.


Infinite viewpoint camera method

The infinite viewpoint camera method also includes three procedures: the acquisition of perspective images, the transformation from perspective images to parallax-related images, and the printing of the parallax-related images. Perspective images of the scene are first acquired by an infinite viewpoint camera, which can be realized by orthographic projection in modeling software. The perspective images are then transformed into parallax-related images. When the distance between the camera and the holographic plate is far enough, the light arriving at the hogels can be approximated as bundles of parallel rays, and the transformation is just an operation on a series of arrays or stacks. The transformation principle of the perspective images in the infinite viewpoint camera method is shown in Fig. 2. Suppose there are s × t (s = 1, 2, ⋯, M, t = 1, 2, ⋯, N) perspective images, and each image contains i × j (i = 1, 2, ⋯, m, j = 1, 2, ⋯, n) pixels. The perspective image matrix is expressed as Pst(i, j). All the pixels at the same location of each Pst(i, j) are extracted to form a new matrix Hij(s, t), which denotes a parallax-related image. Finally, the parallax-related images are printed on the hogels. With the infinite viewpoint camera method, the resolution of the reconstructed images is equal to the number of hogels.
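As an array operation, the transformation from perspective images Pst(i, j) to parallax-related images Hij(s, t) described above is a pure axis transposition. A minimal Python/NumPy sketch (the grid and image sizes are small hypothetical values, and random data stands in for rendered views):

```python
import numpy as np

# Hypothetical sizes: an M x N camera grid, m x n pixels per perspective image.
M, N, m, n = 4, 4, 8, 8

# P[s, t, i, j] is pixel (i, j) of the perspective image captured at grid
# position (s, t); random data stands in for rendered views.
P = np.random.rand(M, N, m, n)

# Extracting the same pixel location (i, j) from every perspective image
# builds the parallax-related image H[i, j, s, t] -- a pure transposition.
H = P.transpose(2, 3, 0, 1)

# Each H[i, j] is an M x N parallax-related image printed on one hogel.
assert np.array_equal(H[3, 5], P[:, :, 3, 5])
```

Each `H[i, j]` slice is then what would be printed on the hogel at pixel location (i, j); its resolution is the number of hogels, as noted above.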


Fig. 2 The transformation principle of perspective images in infinite viewpoint camera method.


Single-step Lippmann holographic stereogram method

The single-step Lippmann holographic stereogram method also includes three procedures: the acquisition of point cloud data, the calculation of the exposing images, and the printing of the exposing images. The principle of this method is shown in Fig. 3. Compared with the two methods mentioned above, in this approach the exposing images for the hogels are acquired by perspective projection rather than by camera sampling. Taking the center of each hogel as the projection center, the scene points are projected onto the plane of a liquid crystal display (LCD) (see Fig. 3(a)). According to the viewer's position, the occlusion relations of the scene points in space must be considered and the hidden surfaces removed. The calculated images are then displayed on the LCD, converged through the lens, and recorded on the hogels (see Fig. 3(b)). During the reconstruction of the scene, light rays are diffracted from each hogel in the same way as in the image calculation, so the viewer can perceive the scene (see Fig. 3(c)).


Fig. 3 Principle of the single-step Lippmann holographic stereogram method. (a) Calculation of an exposing image. (b) Optical setup of the method. (c) Reconstruction geometry for the holographic stereogram.


2.2. Proposed EPISM method

We propose a novel printing method for single-step full parallax holographic stereograms named EPISM. For convenience, the proposed single-step method can be understood as an equivalent two-step method. In the conventional two-step method, the perspective images of the scene are first exposed on the master hologram, which is then copied to the transfer hologram. In our proposed method, however, there is no optical exposure of the master hologram: its generation is a virtual process implemented by computer rendering. The virtual master hologram is called the virtual H1 plate, and the size and number of virtual hogels in it are controllable. The transfer hologram (i.e., the actually printed holographic stereogram) is likewise composed of many hogels. By simulating how the information of the different perspective images propagates during the reproduction from the master hologram to the transfer hologram, the exposing images for the hogels in the transfer hologram can be obtained directly by computer rendering. The transfer hologram is called the H2 plate; its field of view (FOV) is determined by both the limited resolution of the holographic recording medium and the structure of the printing system, and the FOV is assumed identical at every position.

The virtual H1 plate is composed of virtual hogels, each of which corresponds to a specific perspective image. Suppose there is a specific hogel in the H2 plate, i.e., hogel′, whose center is point O. The primitive principle of the proposed method can be illustrated by taking the acquisition of the exposing image for hogel′ as an example (see Fig. 4). The exposing image for hogel′ in H2, named the synthetic effective perspective image, is composed by mosaicking specific effective perspective image segments, each of which is extracted from the perspective image corresponding to a virtual hogel in the virtual H1 plate. The extraction of an effective perspective image segment is shown in Fig. 4(a). Suppose the perspective image displayed on the LCD panel is exposed on a specific virtual hogel in the virtual H1 plate. By the principle of ray tracing, when viewing this virtual hogel from point O in the H2 plate, the effective pixels of the perspective image on the LCD panel that can be seen are limited. The effective perspective image segment is simply the cross section between the perspective image displayed on the LCD panel and the viewing frustum whose vertex is point O and whose terminal region is the boundary of the virtual hogel. That is to say, only the pixels inside this cross section can propagate to point O. Furthermore, when viewing multiple virtual hogels from point O in the H2 plate, the synthetic effective perspective image mosaicked from the effective image segments of the multiple virtual hogels is shown in Fig. 4(b). Suppose there are n × m virtual hogels within the FOV of point O; then all the effective image segments of the perspective images corresponding to these virtual hogels can be extracted and reassembled. The resulting mosaicked image is the synthetic effective perspective image, marked with a dotted box in Fig. 4(b), and it will be exposed on hogel′ in the H2 plate. In Fig. 4, for clarity of illustration, there are gaps among the virtual hogels in the virtual H1 plate; such gaps do not exist in the actual printing. The exposing images for the other hogels in the H2 plate can be obtained in the same manner, and the single-step full parallax holographic stereogram is obtained after printing all the hogels in the H2 plate.


Fig. 4 The primitive principle of the proposed method. (a) The extraction of the effective perspective image segment corresponding to a single virtual hogel. (b) The synthetic effective perspective image mosaicked from the effective image segments of multiple virtual hogels.


3. Detailed principle of EPISM

The proposed EPISM method includes three key procedures: the determination of the geometric parameters, the acquisition and selection of the perspective images, and the segmentation and mosaicking of the perspective images.

3.1. Determination of geometric parameters

The principle of effective pixel mosaicking for a hogel is shown in Fig. 5. For simplicity, only the one-dimensional case (horizontal parallax) is analyzed. The virtual H1 plate, the LCD panel, and the H2 plate are placed parallel to each other along the z-axis, with distances L1 and L2 between them as shown in Fig. 5. The size of the LCD panel is lLCD. The viewing angle provided by the H2 plate is θ, and the range of sight on the virtual H1 plate viewed from point O is lMN.


Fig. 5 The principle of effective pixels mosaicking for the hogel (in planar view).


Suppose the hogel in the virtual H1 plate facing and centered on point O is virtual hogel0, and the corresponding perspective image part displayed on the LCD panel is segment AB. By the ray-tracing principle, the effective pixel part of segment AB is only segment CD when point O is exposed by virtual hogel0, i.e., only the perspective pixel information displayed on segment CD can propagate to point O. Similarly, for virtual hogel1 below virtual hogel0, the corresponding part of the LCD panel is segment A′B′, and the effective pixel part of A′B′ contributing to point O is segment DE. In general, taking together all the effective pixel segments CD, DE, ⋯ of the virtual hogels within the FOV of point O, the effective viewed pixel segment of point O can be tiled and obtained. This synthetic pixel segment can then be recorded on the hogel centered on point O in the H2 plate. In the full parallax case, the effective perspective pixels on the LCD become the square effective perspective image, which is the intersection region between the viewing frustum and the perspective image displayed on the LCD, and the synthetic image is obtained by mosaicking the resulting effective perspective image segments.

The determination of the viewing angle is shown in Fig. 6. A virtual hogel in the virtual H1 plate can be regarded approximately as a single point owing to its small size. The field angle between the LCD panel and its corresponding virtual hogel in the virtual H1 plate is θ′.


Fig. 6 The determination of viewing angle.


When the edge point of the LCD panel, i.e., point B, lies exactly on line MO, the information displayed on the LCD panel can no longer propagate to point O. Thus, there exists the critical relationship

θ = θ′ = 2 arctan[lLCD/(2L1)].  (1)

Restricted by the limited viewing angle θ, only a few effective pixel segments (corresponding to certain virtual hogels in the virtual H1 plate) can contribute to each viewpoint O in the H2 plate. For a certain viewpoint in the H2 plate, as shown in Fig. 6, with the paraxial approximation, the number of contributing virtual hogels in the virtual H1 plate is ⌊πθ(L1 + L2)/(180l1)⌋, where l1 is the size of a virtual hogel in the virtual H1 plate and ⌊·⌋ denotes rounding down. Considering multiple viewpoints in the H2 plate, they are all supposed to be located at the centers of the hogels in the H2 plate. The determination of the number of all the contributing virtual hogels in the virtual H1 plate is shown in Fig. 7. Apart from the hogel in the virtual H1 plate facing point O#, the other contributing virtual hogels in the virtual H1 plate are distributed symmetrically on both sides of point O#, and their number on either side is denoted as nhogel. We have

nhogel = ⌈πθ(L1 + L2)/(360l1)⌉ − 1,  (2)
and
NH1 = 2nhogel + [l2 × (NH2 − 1) + l1]/l1,  (3)
where l2 is the size of hogel in H2 plate, NH1 and NH2 are the number of hogels in virtual H1 plate and H2 plate, respectively.
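Eqs. (2) and (3) can be evaluated numerically with the parameter values later adopted in Section 5. A short Python sketch (reading the rounding in Eq. (2) as a ceiling minus one, an interpretation that reproduces the values nhogel = 31 and NH1 = 91 reported in Section 5):

```python
import math

theta = 30.0          # viewing angle (degrees), Section 5
L1, L2 = 18.6, 11.4   # LCD-to-H1 and LCD-to-H2 distances (cm)
l1, l2 = 0.25, 1.0    # hogel sizes in the virtual H1 and H2 plates (cm)
N_H2 = 8              # number of hogels per side in the H2 plate

# Eq. (2): contributing virtual hogels on either side of the facing one.
n_hogel = math.ceil(math.pi * theta * (L1 + L2) / (360 * l1)) - 1
# Eq. (3): number of hogels per side required in the virtual H1 plate.
N_H1 = 2 * n_hogel + round((l2 * (N_H2 - 1) + l1) / l1)

print(n_hogel, N_H1)  # 31 91, as reported in Section 5
```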


Fig. 7 The determination of the number of all the contributed hogels in virtual H1 plate.


3.2. Acquisition and selection of perspective images

The one-dimensional capture geometry of the perspective images is shown in Fig. 8. Perspective images of the scene are captured by a camera in modeling software. The camera moves along the motion plane, with its lens perpendicular to the trace during photographing. The resolution of the sampled images is limited by the pixel count of the LCD. The FOV of the camera equals the angle θ′ shown in Fig. 6. The sampling number coincides with the number of hogels in the virtual H1 plate, i.e., NH1. The camera's sampling interval satisfies the relationship lsap/l1 = Lsap/L1, where lsap is the sampling interval of the camera, Lsap is the distance between the camera motion plane and the scene, l1 is the size of a hogel in the virtual H1 plate, and L1 is the distance between the LCD panel and the virtual H1 plate as mentioned above.
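The capture geometry above is straightforward to evaluate. A short Python sketch using the Section 5 values for lLCD, l1 and L1, and a hypothetical camera-to-scene distance Lsap (illustrative only; the paper places the camera 18.6 cm from the scene):

```python
import math

l_LCD = 10.0   # LCD display size (cm), Section 5
L1 = 18.6      # LCD-to-virtual-H1 distance (cm)
l1 = 0.25      # virtual hogel size (cm)
L_sap = 30.0   # camera-to-scene distance (cm); a hypothetical choice

# Eq. (1): the camera FOV equals the viewing angle theta.
theta = 2 * math.degrees(math.atan(l_LCD / (2 * L1)))

# Similar triangles: l_sap / l1 = L_sap / L1 gives the sampling interval.
l_sap = l1 * L_sap / L1

print(f"theta = {theta:.1f} deg, sampling interval = {l_sap:.3f} cm")
```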


Fig. 8 The one-dimension capture geometry of the perspective images.


As mentioned in Section 3.1, for any viewpoint O in the H2 plate, the number of corresponding contributing hogels in the virtual H1 plate is finite. Consequently, we have to select the effective hogels in the virtual H1 plate and obtain their corresponding effective perspective image segments for mosaicking. The corresponding relationship between a hogel in the H2 plate and the effective contributing hogels in the virtual H1 plate is shown in Fig. 9. For a specific hogel Hi′j′ in the H2 plate, the hogels Ps′t′ in the virtual H1 plate are effective, with the relationships s′ ∈ (i′, i′ + 2nhogel) and t′ ∈ (j′, j′ + 2nhogel) satisfied.


Fig. 9 The corresponding relationship between the hogel in H2 plate and the effective contributed hogels in virtual H1 plate.


3.3. Segmentation and mosaicking of effective perspective images

The diagram of the algorithm for the segmentation and mosaicking of effective perspective images is shown in Fig. 10. Figures 10(a) and 10(b) are the perspective view and front view with respect to the viewing direction, respectively. Additionally, in Fig. 10(b), the image displayed on the LCD panel corresponding to the virtual hogel is illustrated. The Cartesian coordinates of point A, the upper-left vertex of the hogel in the first row and first column of the virtual H1 plate, are denoted as A(0, 0, 0). In Fig. 10(a), the Cartesian coordinates of point O are O(x0, y0, z0). Observing the hogel ABCD in the virtual H1 plate from point O with a viewing frustum, the cross section on the LCD panel is the region A′B′C′D′, whose z-coordinate is z1. Thus, we have L1 = |z1| and L2 = |z0 − z1|.


Fig. 10 The diagram of the precise algorithm in segmentation and mosaicking of perspective images. (a) The perspective view along the gazing direction. (b) The front view along the gazing direction.


In the front view with respect to the z-axis (see Fig. 10(b)), the central point P of the virtual hogel ABCD coincides with the center of the LCD panel that projects the perspective image onto it. The boundary of the perspective image is the region A″B″C″D″ of N × N pixels. The overlapping region between A′B′C′D′ and A″B″C″D″, i.e., the intersection area between the viewing frustum and the perspective image displayed on the LCD, is the effective perspective image segment to be extracted. With viewpoint O fixed, repeating the above procedure for all the contributing virtual hogels in the virtual H1 plate, the eventual exposing image for point O, i.e., the synthetic effective perspective image, is achieved by reassembling all the effective perspective image segments. Since the resolution of the LCD panel is fixed once it is selected, and ABCD and A″B″C″D″ share the common center point P, the extraction of all the pixels in the effective perspective image segment can be calculated by locating this common center point. The center point of A′B′C′D′ is denoted as P′.

The coordinates of the vertices of the (n, m)-th hogel in the virtual H1 plate are A(nl1 − l1, ml1 − l1, 0), B(nl1, ml1 − l1, 0), C(nl1, ml1, 0), and D(nl1 − l1, ml1, 0). Thus, the coordinates of point P are P(nl1 − l1/2, ml1 − l1/2, 0). Using similar triangles, the width of the square region A′B′C′D′ is

lA′B′ = lB′C′ = lC′D′ = lD′A′ = [(z0 − z1)/z0] l1.  (4)

The line OP′P passes through points O(x0, y0, z0) and P(nl1 − l1/2, ml1 − l1/2, 0); therefore, the coordinates of point P′ can be calculated as

P′(xP′ = [(z1 − z0)/z0](x0 − nl1 + l1/2) + x0, yP′ = [(z1 − z0)/z0](y0 − ml1 + l1/2) + y0, zP′ = z1).  (5)

Since the x-coordinate of point P is xP = nl1 − l1/2 and the size of the LCD panel is lLCD, the x-coordinate of point A″ is obtained as

xA″ = xP − lLCD/2 = (2nl1 − l1 − lLCD)/2.  (6)

Combining Eq. (4) and Eq. (5), we have

xA′ = xP′ − lA′B′/2 = (z1/z0)(x0 − nl1 + l1) + nl1 − l1,  (7)
where xA′ is the x-coordinate of point A′.

Similarly, the y-coordinate of point A′ is

yA′ = yP′ − lA′B′/2 = (z1/z0)(y0 − ml1 + l1) + ml1 − l1.  (8)

The distance between point A′ and point A″ along the x-axis is

lxA′A″ = |xA′ − xA″| = |(z1/z0)(x0 − nl1 + l1) − l1/2 + lLCD/2|.  (9)

Similarly, the distance along y-axis is

lyA′A″ = |(z1/z0)(y0 − ml1 + l1) − l1/2 + lLCD/2|.  (10)

Then the indices of the pixels for the effective perspective image segment can be obtained by mapping the coordinates to pixel indices. Since the perspective image corresponding to a virtual hogel in the virtual H1 plate has N × N pixels and is displayed on the LCD panel of size lLCD, the number of pixels per unit length on the LCD panel is Nuni = N/lLCD. Suppose the pixel indices of point A″ are (1, 1); with Eq. (9) and Eq. (10), the pixel indices of the vertices of the region A′B′C′D′ can be calculated as follows:

A′(|(z1/z0)(x0 − nl1 + l1) − l1/2 + lLCD/2| × N/lLCD + 1, |(z1/z0)(y0 − ml1 + l1) − l1/2 + lLCD/2| × N/lLCD + 1),  (11)
B′(|(z1/z0)(x0 − nl1 + l1) − l1/2 + lLCD/2| × N/lLCD + [(z0 − z1)/z0] × Nl1/lLCD, |(z1/z0)(y0 − ml1 + l1) − l1/2 + lLCD/2| × N/lLCD + 1),  (12)
C′(|(z1/z0)(x0 − nl1 + l1) − l1/2 + lLCD/2| × N/lLCD + [(z0 − z1)/z0] × Nl1/lLCD, |(z1/z0)(y0 − ml1 + l1) − l1/2 + lLCD/2| × N/lLCD + [(z0 − z1)/z0] × Nl1/lLCD),  (13)
D′(|(z1/z0)(x0 − nl1 + l1) − l1/2 + lLCD/2| × N/lLCD + 1, |(z1/z0)(y0 − ml1 + l1) − l1/2 + lLCD/2| × N/lLCD + [(z0 − z1)/z0] × Nl1/lLCD).  (14)

Then the pixels’ indices in region A′B′C′D′ can be extracted from the perspective image, and the effective perspective image segment contributed by virtual hogel ABCD is obtained.
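The pixel-domain bookkeeping of Eqs. (9)–(14) can be collected into a small helper. The following Python sketch is ours, not the paper's (the function name and the float-valued return are our own choices; the paper does not specify how fractional pixel coordinates are rounded):

```python
def segment_pixel_bounds(x0, y0, z0, z1, n, m, l1, l_LCD, N):
    """Locate the effective segment A'B'C'D' in pixel coordinates.

    (x0, y0, z0) is viewpoint O; z1 is the z-coordinate of the LCD plane;
    (n, m) indexes the virtual hogel; l1 is the virtual hogel size; l_LCD
    and N are the LCD size and its pixel count per side.  Returns the
    (row, col) of vertex A' relative to A'' at (1, 1), plus the segment
    width in pixels; values are floats, rounding is left to the caller.
    """
    N_uni = N / l_LCD  # pixels per unit length on the LCD panel
    # Eqs. (9)-(10): offsets of A' from A'' along the x and y axes.
    lx = abs(z1 / z0 * (x0 - n * l1 + l1) - l1 / 2 + l_LCD / 2)
    ly = abs(z1 / z0 * (y0 - m * l1 + l1) - l1 / 2 + l_LCD / 2)
    # Eq. (4) scaled by N_uni: side length of A'B'C'D' in pixels.
    width = (z0 - z1) / z0 * N * l1 / l_LCD
    return ly * N_uni + 1, lx * N_uni + 1, width

# Example with the Section 5 geometry: L1 = 18.6 cm, L2 = 11.4 cm, so the
# LCD sits at z1 = -18.6 and the viewpoint at z0 = -30 (H1 plate at z = 0).
row, col, w = segment_pixel_bounds(11.375, 11.375, -30.0, -18.6,
                                   46, 46, 0.25, 10.0, 1000)
```

With these numbers the segment is 9.5 px wide (the fractional width is one reason a rounding policy must be chosen in practice).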

For viewpoint O(x0, y0, z0), we can extract all the pixels corresponding to every contributing virtual hogel in the virtual H1 plate and reassemble them (i.e., mosaic all the effective perspective image segments); the synthetic effective perspective image for point O(x0, y0, z0) is thus obtained. This rendering algorithm of segmentation and mosaicking can be implemented on a computer. Repeating the same algorithm for all the viewpoints in the H2 plate, the image sequence for the exposure of the hogels on the final full parallax holographic stereogram can be prepared.
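If the effective segments are extracted as equal-sized pixel blocks (a simplification of the procedure above; near the edges of the FOV the segments are clipped and unequal), the mosaicking step itself reduces to tiling. A Python/NumPy sketch with toy data:

```python
import numpy as np

def mosaic(segments):
    """Tile the effective segments of the contributing virtual hogels
    (a 2D list, one block per hogel) into one synthetic image."""
    return np.vstack([np.hstack(row) for row in segments])

# Toy data: 3 x 3 contributing hogels, 9 x 9 pixels per effective segment,
# each block filled with a distinct value so the tiling is visible.
segs = [[np.full((9, 9), 10 * r + c) for c in range(3)] for r in range(3)]
img = mosaic(segs)
assert img.shape == (27, 27)
```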

4. The influence of modeling parameters on the reconstructed images

As a type of holographic stereogram, the proposed EPISM-based synthetic holographic stereogram also employs the principles of holography and binocular parallax, where stereoscopic vision is generated by a series of mosaicked effective perspective image segments. Therefore, the reconstruction performance is affected significantly by the parameters chosen in EPISM. Improper parameters may result in a serious reduction of the reconstruction quality, such as the flipping effect, low resolution, etc.

4.1. The flipping effect

The size of the hogel, the depth of the scene, and the distance from the scene to the holographic plate have been demonstrated to play important roles in the flipping effect in traditional holographic stereograms [37]. In the proposed EPISM-based synthetic holographic stereogram, these parameters also affect the flipping effect. The diagram for analyzing the flipping of the image is shown in Fig. 11. Let l denote the size of the hogel and L the distance between the image plane and the hologram, which can be regarded as the distance from the scene to the holographic plate. ΔL is the depth of the scene. Suppose there is a scene point Q located a distance Ld from the image plane.


Fig. 11 The diagram for calculating the flipping of the image.


When point Q is viewed through the neighboring hogels Hi,j and Hi+1,j, the eyes perceive point Qi,j and point Qi+1,j on the image plane. The discontinuity between the reconstructed images of neighboring hogels will not be observed if the minimum distance of parallax movement δ = |Qi,jQi+1,j| on the image plane is small enough. From Fig. 11, we have

δ = Ld l/(L + Ld).  (15)

We can neglect the flipping effect when the distance δ is smaller than the limited resolution of the observing system, namely,

|δ| ≤ β,  (16)
where
β = 1.22λL/α.  (17)

In Eq. (17), λ is the wavelength and α is the diameter of the smallest pupil in the observing system. Denoting the pupil diameter of the human eye by a, we have α = a when a is smaller than l, and α = l otherwise.

Substituting Eq. (15) and Eq. (17) into Eq. (16), we have

−1.22λL²/(lα + 1.22λL) ≤ Ld ≤ 1.22λL²/(lα − 1.22λL).  (18)

From Eq. (18), the depth of scene that the holographic stereogram can provide is

ΔL = 1.22λL²/(lα − 1.22λL) + 1.22λL²/(lα + 1.22λL).  (19)

Generally, lα ≫ 1.22λL, so Eq. (19) can be rewritten as

ΔL ≈ 2.44λL²/(lα) = 2.44λL²/(la) when a ≤ l, and 2.44λL²/l² when a > l.  (20)

Eq. (20) shows that there is a constraint among the depth of the scene ΔL, the size of the hogel l, and the distance L from the scene to the holographic plate. When the depth of the scene and the size of the hogel remain the same, the distance from the scene to the holographic plate must be longer than a certain value; otherwise, the flipping effect will occur, i.e., discontinuities of the reconstructed images will be observed when moving the eye from one hogel to the next. This flipping effect disturbs the viewer and reduces the reconstruction quality.
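Eq. (20) can be inverted to estimate the largest flip-free hogel size for given scene parameters. The following Python sketch uses the Section 5 values (λ = 639 nm, L = 18.6 cm, ΔL = 4.5 cm) and the case a > l, and reproduces the value l ≈ 1.1 mm quoted there:

```python
import math

lam = 639e-9   # wavelength (m), Section 5
L = 0.186      # scene-to-plate distance (m)
dL = 0.045     # scene depth (m)

# Eq. (20), case a > l (pupil wider than the hogel, so alpha = l):
# dL = 2.44 * lam * L**2 / l**2, solved for the hogel size l.
l = L * math.sqrt(2.44 * lam / dL)
print(f"largest flip-free hogel size: {l * 1e3:.2f} mm")  # about 1.1 mm
```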

4.2. The resolution of synthetic effective perspective image

We can also analyze the resolution of the synthetic effective perspective image under different system parameters. The effective length of the LCD panel (the segment CD in Fig. 5) for each perspective image is

leff = [L2/(L1 + L2)] × l1.  (21)

The corresponding pixel count is

Neff = leff × Nuni.  (22)

The resolution of synthetic effective perspective image is

M = Neff × (2nhogel + 1).  (23)

Substituting Eq. (21) and Eq. (22) into Eq. (23), we have

M = Nuni × (2nhogel + 1) × l1 × L2/(L1 + L2).  (24)

Eq. (24) shows that once the LCD is selected and the geometric parameters are determined, the resolution of the synthetic effective perspective image can be calculated. With Eq. (24), the advantage of our proposed method over the infinite viewpoint camera method, i.e., the higher resolution of the reconstructed images, will be illustrated in the following experiments.
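Evaluating Eq. (24) with the parameters later used in Section 5 reproduces the reported resolution M = 599 pixels, against NH1 = 91 for the infinite viewpoint camera method. A quick Python check:

```python
N_uni = 100.0          # LCD pixels per cm (Section 5)
n_hogel = 31           # contributing virtual hogels per side (Section 5)
l1 = 0.25              # virtual hogel size (cm)
L1, L2 = 18.6, 11.4    # geometry distances (cm)

# Eq. (24): resolution of the synthetic effective perspective image.
M = N_uni * (2 * n_hogel + 1) * l1 * L2 / (L1 + L2)
print(f"M = {M:.1f} pixels per side")  # about 598.5, i.e. M = 599 >> N_H1 = 91
```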

4.3. The size of the scene

To record all the information of the scene, the virtual camera needs to cover the scene thoroughly when sampling. Combining Fig. 5, Fig. 6 and Fig. 7, the following relationships exist between the size of the scene and the sizes of the holographic plates:

2L1 × tan(θ/2) + LH1 − l1 > LO,  (25)
LH1 = 2(L1 + L2) × tan(θ/2) + LH2 − l2,  (26)
where LH1, LH2, and LO are the sizes of the virtual H1 plate, the H2 plate, and the scene, respectively. Eq. (25) and Eq. (26) are helpful for designing the scene model, as the size of the scene is confined by the size of the holographic plate available.
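Eqs. (25) and (26) can be checked against the Section 5 configuration (θ ≈ 30°, an 8 cm H2 plate, and a teapot scene 6.4 cm wide). A short Python sketch:

```python
import math

theta = math.radians(30)        # viewing angle
L1, L2 = 18.6, 11.4             # distances (cm)
L_H2, l1, l2 = 8.0, 0.25, 1.0   # H2 plate size and hogel sizes (cm)
L_O = 6.4                       # scene width (cm): the teapot model

# Eq. (26): required size of the virtual H1 plate.
L_H1 = 2 * (L1 + L2) * math.tan(theta / 2) + L_H2 - l2
# Eq. (25): the scene must fit within this bound.
bound = 2 * L1 * math.tan(theta / 2) + L_H1 - l1
assert bound > L_O  # the teapot is fully recorded
print(f"L_H1 = {L_H1:.1f} cm, scene bound = {bound:.1f} cm")
```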

4.4. The influence of LCD parameters

The selection of the LCD panel also affects the reconstruction. By Eq. (1), when l1, l2, L1 and L2 are unchanged, a larger lLCD gives a larger θ, which is more convenient for viewing. By Eq. (24), when the resolution of the LCD panel is higher, i.e., Nuni is larger, the resolution of the synthetic effective perspective image is higher and the reconstructed image is clearer. Meanwhile, for a given pixel count of the synthetic effective perspective image, a higher-resolution LCD reduces the image area on the panel and hence the required diameter of the illumination beam. Consequently, the laser energy is more concentrated, which reduces the exposure time and increases the printing speed.

5. Experiments and discussions

A LCD panel (VVX09F035M20) produced by Panasonic was used. The LCD panel was 8.9 inches with 1920 × 1080 pixels; 1000 × 1000 of its pixels were used, and the corresponding effective displaying size was 10 cm × 10 cm, i.e., lLCD = 10 cm and Nuni = N/lLCD = 100 pixels/cm. A teapot model with 6.4 cm width, 3.2 cm height and 4 cm depth was utilized as the 3D scene, and it was tilted by 45°. The FOV of the sampling camera was set as 30°, i.e., θ′ = 30°. With Eq. (1), L1 = lLCD/[2tan(θ′/2)] = 18.6 cm. When simulating the generation of the virtual H1 plate, L = L1 = 18.6 cm, ΔL = 6.4 × sin 45° ≈ 4.5 cm and λ = 639 nm. Substituting these parameters into Eq. (20) gives l = 1.1 mm. However, when the sampling interval of the original perspective images was set to 1.1 mm, the sampling number was so large that the computational load became prohibitive. Simulation showed that a good mosaicking effect could still be achieved for the synthetic effective perspective images when l = 2.5 mm. Consequently, we set L1 = 18.6 cm and l1 = 2.5 mm for all the following experiments. Moreover, the distance from the scene to the H2 plate was assumed to be 11.4 cm, i.e., L = L2 = 11.4 cm, which gives l = 0.67 mm from Eq. (20). Due to the limited power of the laser we used, the exposure time would have been excessive with such a small hogel size, so we chose l2 = 1 cm. The size of the holographic plate was 8 cm × 8 cm, hence NH2 = 8/l2 = 8. With Eq. (1)–Eq. (3), we had θ ≈ 30°, nhogel = 31 and NH1 = 91. The 3D modeling software Blender was used for the virtual capture of the perspective images of the 3D scene, and the virtual camera was put 18.6 cm away from the median plane of the teapot. Substituting all the parameters above into Eq. (24) gives M = 599 pixels. In the infinite viewpoint camera method, the resolution equals the sampling number of the virtual H1 plate, i.e., NH1. Since M ≫ NH1, the proposed method improves the resolution of the image drastically.
Substituting all the parameters above into Eq. (25) and Eq. (26) confirms that the conditions for the virtual H1 plate to record the complete information of the scene were also satisfied. When l1 < l2, l1 determines the mosaicking effect of the effective perspective images, while l2 determines the flipping effect.
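
The two sampling-interval values quoted above follow from Eq. (20); a sketch of the arithmetic, using the a > l branch solved for l:

```python
import math

# Eq. (20), a > l branch: depth of field dL ~ 2.44 * lam * L**2 / l**2,
# solved for the sampling/hogel size l = L * sqrt(2.44 * lam / dL).
lam = 639e-9                               # wavelength, m
dL = 6.4e-2 * math.sin(math.radians(45))   # scene depth, ~4.5 cm

for L in (18.6e-2, 11.4e-2):               # H1 and H2 distances, m
    l = L * math.sqrt(2.44 * lam / dL)
    print(f"L = {L*100:.1f} cm -> l = {l*1e3:.2f} mm")  # ~1.09 mm, ~0.67 mm
```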

In the numerical simulation, some original perspective images and synthetic effective perspective images corresponding to specific viewpoints are shown in Fig. 12. The index (x, y) in Fig. 12(b) denotes the hogel in the xth row and yth column of the H2 plate. The synthetic effective perspective images appear much larger than the original perspective images because their viewing distance is 11.4 cm, smaller than the 18.6 cm of the original perspective images. Moreover, comparing Fig. 12(a) with Fig. 12(b), the synthetic effective perspective images are pseudoscopic, and the depth inversion can be observed clearly from the teapot lid and the teapot bottom. The essence of the proposed EPISM method is to generate pseudoscopic images of the scene for different viewing positions, so that the reconstructed scene is a real image with correct depth.

Fig. 12 The numerical simulation of perspective images in different viewing positions. (a)The original perspective images. (b)The synthetic effective perspective images.

The optical setup of the synthetic holographic stereogram printing system is shown in Fig. 13. A 400 mW, 639 nm single-longitudinal-mode, linearly polarized solid-state red laser (CNI MSL-FN-639) was used as the laser source, and an electric shutter (Sigma Koki SSH-C2B) was used to control the exposure time. The laser beam passed through a non-polarizing beam splitter (NPBS) and was divided into the signal beam and the reference beam, whose intensities were both adjusted by attenuators. Synthetic effective perspective images were loaded onto the LCD panel, from which the backlight module and the two polarizers had been removed. After being expanded by a convex lens with f = 75 mm, the illumination beam passed through the LCD panel, then through a diffuser, and arrived at the holographic plate. The diffuser, used in transmission, was placed just in front of the LCD panel to spread the object light over the aperture of the hogel. The reference beam passed through a spatial filter comprising a 40× objective and a 15 μm pinhole to filter out the higher spatial frequencies, and a collimating lens with f = 150 mm collimated the focused beam into a uniform plane reference wave. The distance between the LCD panel and the holographic plate was 11.4 cm. The signal beam and the reference beam interfered from opposite sides of the holographic plate, and the interference fringes were recorded on the holographic film. Two apertures close to both sides of the holographic plate ensured that only a square area of the plate (i.e., one hogel) was exposed. The holographic plate was installed on a motorized KSA300 X-Y stage with high positioning precision along both the horizontal and vertical directions, driven by a programmable MC600 controller.
In our system, the distance between the LCD panel and the holographic plate was set as L2, and the displacement of the motorized X-Y stage for every spatial step was l2. The LCD panel, the electric shutter and the motorized X-Y stage were time-synchronously controlled by a computer.

Fig. 13 Optical setup of synthetic holographic stereogram printer system.

It should be noted that the synthetic effective perspective images were tiled from the original perspective images, whose generation direction was from the LCD panel to the virtual H1 plate, opposite to the exposure direction from the LCD panel to the H2 plate. Consequently, the synthetic effective perspective images should be flipped horizontally before being loaded onto the LCD panel.
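
The required horizontal mirroring is a one-line operation; a minimal sketch with a toy array standing in for a synthetic effective perspective image:

```python
import numpy as np

# Toy 2x3 "image": each number stands in for one pixel of the
# synthetic effective perspective image.
img = np.array([[1, 2, 3],
                [4, 5, 6]])

# Mirror left-right before loading onto the LCD panel; row order is unchanged.
flipped = np.fliplr(img)
print(flipped.tolist())   # [[3, 2, 1], [6, 5, 4]]
```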

The exposure time can be expressed as Texp = E/(Psig + Pref), where E is the light sensitivity of the holographic plate, and Psig and Pref are the power densities of the signal beam and the reference beam on the holographic plate, respectively. In the experiment, the holographic plates were developed by ourselves, and for the red laser at 639 nm, E = 1250 μJ/cm². To reduce the exposure time, the energy ratio between the signal beam and the reference beam was set as 1:30, i.e., Psig = 10 μW/cm² and Pref = 300 μW/cm², which gives Texp ≈ 4 s. A waiting time of Tsti = 16 s was inserted to eliminate the vibration resulting from the motion of the shutter and the motorized X-Y stage. Consequently, the total printing time was T = (Texp + Tsti) × NH2 × NH2 = 1280 s.
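
The exposure budget above can be reproduced directly; in this sketch Texp is first kept at its exact value, then rounded to 4 s as in the paper to obtain the 1280 s total.

```python
# Values from the experiment
E = 1250.0                   # plate sensitivity, uJ/cm^2
P_sig, P_ref = 10.0, 300.0   # beam power densities, uW/cm^2 (ratio 1:30)
N_H2 = 8                     # hogels per side of the 8 cm x 8 cm plate
T_sti = 16.0                 # waiting time per hogel, s

T_exp = E / (P_sig + P_ref)                    # ~4.03 s per hogel
T_total = (round(T_exp) + T_sti) * N_H2 ** 2   # 1280 s with Texp rounded to 4 s
print(T_exp, T_total)
```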

Since the energy ratio between the signal beam and the reference beam was far from 1:1, the diffraction efficiency was relatively low and the reconstruction under white light was poor, so laser reconstruction was utilized. The schematic for reconstruction is shown in Fig. 14. The holographic plate was illuminated by the conjugate of the original collimated reference wave, and a real image could be viewed perpendicular to the holographic plate. A Canon EOS 550D camera with a 100 mm lens was placed about 45 cm in front of the holographic plate to capture the reconstructed images, and it was moved along the track indicated in Fig. 14.

Fig. 14 The schematic for reconstruction.

It should be pointed out that, for a conventional hologram of a given size, a longer reconstruction distance L2 always corresponds to a smaller viewing zone. However, in our EPISM based holographic stereogram printing, the viewing zone is determined differently. In the EPISM method, the geometric parameters are chosen to keep the same perspective relation in the sampling and printing processes, i.e., the size of the hologram must satisfy a certain relation with the value of L2. Consequently, the viewing zone in our EPISM method is fixed: it is determined not by the size of the hologram but by the FOV used in capturing the perspective images and printing the hogels, which was set to 30° in our experiment.

The photographs of the optical reconstruction from different perspectives are shown in Fig. 15. The proposed EPISM based holographic stereogram presents correct full parallax information, and the reconstructed 3D scene agrees well with the original one. However, when the observation direction approaches the limiting directions (i.e., ±15°), the original sampled perspective images of the teapot are incomplete, and so is the reconstructed scene. This effect is more obvious in the horizontal direction than in the vertical direction since the teapot is 6.4 cm wide but only 3.2 cm high. Therefore, only photographs showing the complete teapot were captured, so the viewing zone shown in Fig. 15 is less than 30°. The relatively low contrast results from the small energy ratio Psig/Pref. In Fig. 15, the center of the teapot is brighter than the other sections, which is due to the direction of the lighting in the modeling software and shows that the hologram can reproduce the gloss of the scene.

Fig. 15 The photographs of optical reconstruction from different perspectives(L2 = 11.4 cm, l2 = 1 cm, also seen in Visualization 1).

To demonstrate that the reconstructed real scene stands out of the holographic plate, one ruler was placed parallel to the holographic plate, and another one was placed 11.4 cm in front of it. A camera was put about 45 cm in front of the holographic plate to capture the reconstructed images. The photographs captured at different focus depths are shown in Fig. 16, with the spatial arrangement shown in Fig. 16(a). In Fig. 16(b), the right ruler and the median plane of the teapot are both in focus while the background is blurred, and a blurred lattice structure on the boundaries between hogels can be observed. In Fig. 16(c), the left ruler and the holographic plate are both in focus and the printed hogels can be observed clearly, while the 3D scene is blurred.

Fig. 16 The photographs in different focus depths. (a)The spatial position relation of rulers and holographic plate. (b)Focused on the right ruler. (c)Focused on the left ruler(L2 = 11.4 cm, l2 = 1 cm).

From the analysis in Section 4.1, the flipping effect decreases as the hogel size decreases while the other parameters are kept unchanged. In a further experiment, l2 was reduced to 0.5 cm, so the number of printed hogels was 16 × 16 = 256. In the contrast experiments, the hogel size was selected as l2 = 1 cm and l2 = 0.5 cm, and the videos of motion parallax were recorded as shown in Visualization 2 and Visualization 3, respectively. The motion parallax is smoother with the smaller hogel size.

With the distance between the LCD panel and the holographic plate fixed at 11.4 cm, the printing system can also realize different reconstruction distances. When the reconstructed scene is required at a different depth, i.e., the value of L2 is changed, the synthetic effective perspective images should be scaled first, as shown in Fig. 17.

Fig. 17 The diagram of synthetic effective perspective images scaling.

Suppose the synthetic effective perspective image generated by the proposed EPISM is M × M pixels; then the image loaded onto the LCD panel should be scaled to M′ × M′ pixels, with M′ = (11.4/L2) × M.

In the experiment, L2 was selected as 8.4 cm and l2 = 0.5 cm; then M = 440 pixels and M′ = 598 pixels. The 440 × 440 pixel synthetic effective perspective image was enlarged to 598 × 598 pixels and loaded onto the LCD panel for printing. In the reconstruction, one ruler was placed parallel to the holographic plate, and another one 8.4 cm in front of it. The photographs at different focus depths are shown in Fig. 18. The right ruler and the median plane of the teapot are both in focus in Fig. 18(a), while the left ruler and the holographic plate are both in focus in Fig. 18(b); the dim hogels in Fig. 18(b) recorded no information because of vibration during the exposure. Thus, by scaling the synthetic effective perspective images, the printing system can realize different reconstruction distances while the distance between the LCD panel and the holographic plate stays fixed.
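
The rescaling step can be sketched as follows; rounding M′ up to the next integer (our assumption about the paper's convention) reproduces the 598 pixels quoted above.

```python
import math

def scaled_size(M, L2_cm, L_fixed_cm=11.4):
    """Pixel count M' of the image loaded onto the LCD when the desired
    reconstruction distance is L2 but the panel stays 11.4 cm from the plate."""
    return math.ceil(L_fixed_cm / L2_cm * M)

print(scaled_size(440, 8.4))    # 598, matching the experiment
print(scaled_size(440, 11.4))   # 440: no scaling at the nominal distance
```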

Fig. 18 The photographs in different focus depths. (a) Focused on the right ruler. (b) Focused on the left ruler (L2 = 8.4 cm, l2 = 0.5 cm).

When a much shorter reconstruction distance is needed, i.e., a much smaller L2, the hogel size should be much smaller as shown in Eq. (20), and the number of hogels increases greatly. However, this results in a dramatic increase of the printing time because of the limited power of the laser. Conversely, if the hogel size is not reduced along with L2, the reconstruction quality declines. An experiment was conducted to illustrate this trade-off: when L2 was shortened to 6 cm while l2 was kept at 0.5 cm (in the reconstruction, one ruler was placed parallel to the holographic plate and another one 6 cm in front of it), the reconstructed scene was of poor quality, as shown in Fig. 19.

Fig. 19 The photographs in different focus depths. (a) Focused on the right ruler. (b) Focused on the left ruler (L2 = 6 cm, l2 = 0.5 cm).

According to the principle of the synthetic holographic stereogram, the sampling plane of the 3D scene is the clearest reconstructed plane when the sampling plane and the LCD plane coincide, since curvature distortion arises when other planes are loaded onto the LCD panel. An experiment using a 3D scene with three playing cards was implemented. The cards were placed 1 cm apart along the z-axis and staggered along the x-axis and y-axis. The distances between the sampling camera and the first playing card were 18.6 cm, 17.6 cm and 16.6 cm, respectively (Fig. 20). During the generation of the synthetic effective holographic stereogram, the distance between the virtual H1 plate and the LCD panel was set as 18.6 cm. The optically reconstructed images with different clearest imaging planes are shown in Fig. 21. In Fig. 21(a), Fig. 21(b) and Fig. 21(c), the front, the middle, and the rear playing card is the clearest, respectively. Taking the middle card as an example, the same area is marked with a green rectangle in each subfigure of Fig. 21; this area is clearest in Fig. 21(b), but distorted in Fig. 21(a) and blurred in Fig. 21(c).

Fig. 20 The diagram of experimental model with playing cards.

Fig. 21 The optically reconstructed images with different clearest imaging planes. The clearest imaging plane is (a)the front card, (b)the middle card and (c)the rear card while (d) is the original model.

6. Conclusion

In this paper, we propose a single-step printing method for full parallax synthetic holographic stereograms based on effective perspective images' segmentation and mosaicking (EPISM). The perspective images of the 3D scene are first sampled by a virtual camera; then the effective perspective image segment is extracted from the perspective image of each virtual hogel according to the propagation law of light and the viewing frustum effect of human eyes. Using the EPISM algorithm, the synthetic effective perspective image for each printed hogel is obtained by mosaicking the effective perspective image segments of the contributing virtual hogels. The influences of the modeling parameters on the reconstruction quality, such as the flipping effect and the resolution of the reconstructed images, were also analyzed. The proposed method was verified and discussed experimentally, and the results indicate that the EPISM based single-step full parallax holographic stereogram can achieve good reconstruction quality with a compact configuration. However, a small hogel size is required to suppress the flipping effect, especially for a 3D scene with a large depth or a short reconstruction distance, which results in a significant increase of the printing time. An improved EPISM method with better reconstruction quality as well as a higher printing rate will be investigated in the future.

Funding

Foundation for the Author of National Excellent Doctoral Dissertation of the People’s Republic of China (FANEDD) (201432); Natural Science Foundation of Beijing Municipality (4152049); Beijing NOVA program (Z1511000003150119).

References and links

1. M. Lucente, “The first 20 years of holographic video - and the next 20,” in SMPTE 2nd Annual International Conference on Stereoscopic 3D for Media and Entertainment - Society of Motion Picture and Television Engineers(SMPTE), 2011.

2. H. I. Bjelkhagen and D. Brotherton-Ratcliffe, “Ultrarealistic imaging: the future of display holography,” Opt. Eng. 53(11), 112310 (2014). [CrossRef]  

3. M. Yamaguchi, “Light-field and holographic three-dimensional displays [Invited],” J. Opt. Soc. Am. A 33(12), 2348–2364 (2016). [CrossRef]  

4. Y. Takaki and K. Taira, “Speckle regularization and miniaturization of computer-generated holographic stereograms,” Opt. Express 24(6), 6328–6340 (2016). [CrossRef]   [PubMed]  

5. H. Kang, E. Stoykova, N. Berberova, J. Park, D. Nazarova, J. S. Park, Y. Kim, S. Hong, B. Ivanov, and N. Malinowski, “Three-dimensional imaging of cultural heritage artifacts with holographic printers,” Proc. SPIE 10226, 102261I (2017). [CrossRef]  

6. H. Kang, E. Stoykova, and H. Yoshikawa, “Fast phase-added stereogram algorithm for generation of photorealistic 3D content,” Appl. Opt. 55(3), A135–A143 (2016). [CrossRef]   [PubMed]  

7. H. Yoshikawa and T. Yamaguchi, “Review of Holographic Printers for Computer-Generated Holograms,” IEEE T. Ind. Inform. 12(4), 1584–1589 (2016). [CrossRef]  

8. C. Pei, X. Yan, and X. Jiang, “Computer-generated phase-modulated full parallax holographic stereograms without conjugate images,” Opt. Eng. 53(10), 103105 (2014). [CrossRef]  

9. T. Yamaguchi, O. Miyamoto, and H. Yoshikawa, “Volume hologram printer to record the wavefront of three-dimensional objects,” Opt. Eng. 51(7), 075802 (2012). [CrossRef]  

10. H. Kang, E. Stoykova, H. Yoshikawa, S. Hong, and Y. Kim, “Comparison of system properties for wave-front holographic printers,” in Fringe 2013, W. Osten, ed. (Springer-Verlag, 2014), 851–854. [CrossRef]  

11. K. Wakunami, R. Oi, T. Senoh, H. Sasaki, Y. Ichihashi, and K. Yamamoto, “Wavefront printing technique with overlapping approach toward high definition holographic image reconstruction,” Proc. SPIE 9867, 98670J (2016).

12. L. Cao, Z. Wang, H. Zhang, G. Jin, and C. Gu, “Volume holographic printing using unconventional angular multiplexing for three-dimensional display,” Appl. Opt. 55(22), 6046–6051 (2016). [CrossRef]   [PubMed]  

13. D. J. Debitetto, “Holographic panoramic stereograms synthesized from white light recordings,” Appl. Opt. 8(8), 1740–1741 (1969). [CrossRef]   [PubMed]  

14. M. C. King, A. M. Noll, and D. H. Berry, “A new approach to computer-generated holography,” Appl. Opt. 9(2), 471–475 (1970). [CrossRef]   [PubMed]  

15. M. W. Halle, “The generalized holographic stereogram,” Proc. SPIE 1461, 142–155 (1991). [CrossRef]  

16. M. W. Halle, “The generalized holographic stereogram,” Ph. D. thesis, Massachusetts Institute of Technology (1993).

17. C. Newswanger and M. Klug, “Holograms for the masses,” in 9th International Symposium on Display Holography(ISDH), (IOP Publishing, 2013), 012082.

18. D. Brotherton-Ratcliffe, “Large format digital colour holograms produced using RGB pulsed laser technology,” in 7th International Symposium on Display Holography, H. I. Bjelkhagen, ed. (River Valley, 2006), 200–208.

19. D. Brotherton-Ratcliffe, S. J. Zacharovas, R. J. Bakanas, J. Pileckas, A. Nikolskij, and J. Kuchin, “Digital holographic printing using pulsed RGB lasers,” Opt. Eng. 50(9), 091307 (2011). [CrossRef]  

20. M. Yamaguchi, T. Honda, N. Ohyama, and J. Ishikawa, “Multidot recording of rainbow and multicolor holographic stereograms,” Opt. Commun. 110(5–6), 523–528 (1994). [CrossRef]  

21. S. Maruyama, Y. Ono, and M. Yamaguchi, “High-density recording of full-color full-parallax holographic stereogram,” Proc. SPIE 6912, 69120N (2008). [CrossRef]  

22. B. Lee, J. -H. Kim, K. Moon, I. -J. Kim, and J. Kim, “Holographic stereogram printing under the non-vibration environment,” Proc. SPIE 9117, 911704 (2014). [CrossRef]  

23. M. Yamaguchi, H. Endoh, T. Koyama, and N. Ohyama, “High-speed recording of full-parallax holographic stereograms by a parallel exposure system,” Opt. Eng. 35(6), 1556–1559 (1996). [CrossRef]  

24. X. Rong, X. Yu, and C. Guan, “Multichannel holographic recording method for three-dimensional displays,” Appl. Opt. 50(7), B77–B80 (2011). [CrossRef]   [PubMed]  

25. A. V. Morozov, A. N. Putilin, S. S. Kopenkin, Y. P. Borodin, V. V. Druzhin, S. E. Dubynin, and G. B. Dubinin, “3D holographic printer: fast printing approach,” Opt. Express 22(3), 2193–2206 (2014). [CrossRef]   [PubMed]  

26. Y. Im, W. Moon, J. Roh, H. Kim, and J. Hahn, “Direct laser writing of computer-generated hologram using pulse laser system,” in Digital Holography and Three-Dimensional Imaging, (Optical Society of America, 2014), paper JTu4A.27.

27. K. Hong, S. -g. Park, J. Yeom, J. Kim, N. Chen, K. Pyun, C. Choi, S. Kim, J. An, H. -S. Lee, U.-i. Chung, and B. Lee, “Resolution enhancement of holographic printer using a hogel overlapping method,” Opt. Express 21(12), 14047–14055 (2013). [CrossRef]   [PubMed]  

28. T. Utsugi and M. Yamaguchi, “Reduction of the recorded speckle noise in holographic 3D printer,” Opt. Express 21(1), 662–674 (2013). [CrossRef]   [PubMed]  

29. M. Takano, H. Shigeta, T. Nishihara, M. Yamaguchi, S. Takahashi, N. Ohyama, A. Kobayashi, and F. Iwata, “Full-color holographic 3D printer,” Proc. SPIE 5005, 126–136 (2003). [CrossRef]  

30. H. I. Bjelkhagen and E. Mirlis, “Color holography to produce highly realistic three-dimensional images,” Appl. Opt. 47(4), 123–133 (2008). [CrossRef]  

31. F. Yang, Y. Murakami, and M. Yamaguchi, “Digital color management in full-color holographic three-dimensional printer,” Appl. Opt. 51(19), 4343–4352 (2012). [CrossRef]   [PubMed]  

32. S. Tay, P. -A. Blanche, R. Voorakaranam, A. V. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G. Li, P. St Hilaire, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “An updatable holographic three-dimensional display,” Nature 451(7179), 694–698 (2008). [CrossRef]   [PubMed]  

33. P. -A. Blanche, A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W. -Y. Hsieh, M. Kathaperumal, B. Rachwal, O. Siddiqui, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature 468(7320), 80–83 (2010). [CrossRef]   [PubMed]  

34. N. Tsutsumi, K. Kinashi, K. Tada, K. Fukuzawa, and Y. Kawabe, “Fully updatable three-dimensional holographic stereogram display device based on organic monolithic compound,” Opt. Express 21(17), 19880–19884 (2013). [CrossRef]   [PubMed]  

35. M. Yamaguchi, N. Ohyama, and T. Honda, “Holographic three-dimensional printer: new method,” Appl. Opt. 31(2), 217–222 (1992). [CrossRef]   [PubMed]  

36. M. Yamaguchi, H. Endoh, T. Honda, and N. Ohyama, “High-quality recording of a full-parallax holographic sterogram with a digital diffuser,” Opt. Lett. 19(2), 135–137 (1994). [CrossRef]   [PubMed]  

37. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976). [CrossRef]   [PubMed]  

Supplementary Material (3)

Visualization 1: optical reconstruction from different perspectives
Visualization 2: the video of motion parallax when the hogel size is 1 cm
Visualization 3: the video of motion parallax when the hogel size is 0.5 cm

