
Simplified calculation method for computer-generated holographic stereograms from multi-view images

Open Access

Abstract

A simple calculation method to synthesize computer-generated holographic stereograms, which does not involve diffraction calculations, is proposed. It is assumed that three-dimensional (3D) image generation by holographic stereograms is similar to that of multi-view autostereoscopic displays, in that multiple parallax images are displayed with rays converging to corresponding viewpoints. Therefore, a wavefront is calculated whose amplitude is the square root of the intensity distribution of a parallax image and whose phase is the quadratic phase distribution of a spherical wave converging to a viewpoint. The wavefronts calculated for multiple viewpoints are summed to obtain an object wave, which is then used to determine a hologram pattern. The proposed technique was experimentally verified.

©2013 Optical Society of America

1. Introduction

Intensive research has been conducted on the electronic implementation of holographic displays. The development of three-dimensional (3D) image capturing techniques that enable the display of real objects on electronic holographic displays is also an interesting area of research. Because the original holographic technique for recording real objects requires an interferometer optical system and a laser, it cannot be used outside laboratories, and it is difficult to capture large objects. A number of techniques that enable the capture of large objects without using an interferometer optical system and a laser have been developed, and most of these fall into two approaches. One approach obtains 3D shape data of objects by using, for example, a depth camera. The other approach obtains multiple projections of objects captured under incoherent light illumination. This study proposes a simple calculation technique for holographic stereograms based on the latter approach.

Holographic stereograms can be produced without using light interference when capturing real objects. Multiple images of real objects are captured using a lens imaging system under incoherent light illumination, and the captured images are then recorded on a hologram film using an interferometer optical system and a laser. Pole [1] proposed a method using a two-dimensional (2D) lens array to capture multiple images of objects. DeBitetto [2] proposed a method using a one-dimensional (1D) lens array aligned in the horizontal direction. McCrickerd and George [3] proposed a method using a single-lens imaging system and a moving mask located in front of the lens. The mask position was changed to obtain multiple images with different parallax, i.e., multiple parallax images. The captured parallax images were coherently recorded on a hologram film by moving a mask over the film. Although this technique requires additional time to capture multiple images, it enables the capture of large objects and makes it possible to obtain high-resolution images. DeBitetto [4] proposed the use of an ordinary camera to capture multiple parallax images by moving the camera horizontally. The captured parallax images were recorded at different horizontal positions on a hologram film by moving a slit over the film. King et al. [5] proposed a technique that uses a computer to generate parallax images of objects. They also proposed a hologram copying technique that changes the positions of the 3D images and the viewpoints, because earlier holographic stereograms generated 3D images away from the hologram film and viewpoints in its vicinity. In this copying technique, the holographic stereogram (first hologram) is copied to another hologram film (second hologram), so that the second hologram generates 3D images in its own vicinity. Yatagai [6] proposed a technique in which a computer was used to calculate light diffraction from parallax images in order to numerically obtain a hologram pattern. The Fourier transforms of the parallax images were calculated, and the computer-generated hologram (CGH) technique was used to determine the resulting hologram pattern. Computer-generated stereograms that consider the phase information of 3D objects were developed to improve the quality of the reconstructed images [7].

Techniques that generate holograms from multiple 2D images of 3D objects have also been developed [8–13]. Multiple angular projections of 3D objects were used to generate Fourier [8, 9] and Fresnel holograms [10, 11]. Each angular projection was multiplied by an appropriate phase distribution, and all pixel values of each multiplied projection were summed up to obtain one complex-amplitude value, which was used as the value of a single pixel in the hologram. A 2D lens array was used to obtain different angular projections of real objects illuminated by white light [9]. Park et al. also proposed hologram generation techniques that use a 2D lens array to capture orthographic projections of real objects [12, 13].

Mishina et al. [14] also proposed a technique to generate holograms of real objects, which simulates the 3D image generation of integral photography (IP). Real objects were captured by the IP technique, i.e., a 2D lens array was used to capture 3D objects. The diffraction of light in the IP 3D image reconstruction step was numerically simulated by calculating the Fresnel diffraction. The complex-amplitude distribution thereby obtained was used to determine a hologram pattern.

In addition, the diffraction-specific technique [15] was proposed to generate holograms from multiple parallax images. This technique spatially divides a hologram into contiguous hologram elements, called hogels, and assembles the hogels by combining precompiled sets of basic fringes that are modulated by the pixel information of the parallax images. The diffraction-specific technique also uses the information of the 3D positions of object elements within a scene. The use of parallax images improves the image quality, and the use of the object locations reduces the image blur. In Ref. 15, the required spatial and angular resolutions were discussed on the basis of the human visual system. In Ref. 16, wavefront elements, called wafels, were introduced as a progression of the hogel. Wafel apertures emit controllable intensities of light in controllable directions with controllable centers of curvature. With this technique, the number of images required to calculate holograms can be reduced compared with conventional holographic stereograms. The technique proposed in this manuscript is based on the conventional holographic stereograms, and its calculation method is very different from the above two techniques [15, 16]. In our technique, a hologram is not divided spatially. Multiple parallax images, whose image sizes are the same as the hologram size, are multiplied by quadratic phase distributions and added to compute holograms. Our calculation method is much simpler. However, because the information of the 3D positions of object elements is not used, the reconstructed images are blurred when objects are displayed far from the hologram plane.

In this study, we propose a new technique to synthesize computer-generated holographic stereograms from multiple perspective projections (parallax images) of real objects. The proposed technique was suggested by considering the 3D image generation of multi-view autostereoscopic displays, and does not require calculations of light diffraction.

2. Theory

In this section, the similarity between the 3D image generation of holographic stereograms and that of multi-view autostereoscopic displays is considered first. Then, a new calculation method for the generation of holographic stereograms is described. At the end of this section, the differences between the proposed technique and previous techniques are explained.

Figure 1 illustrates the procedure for producing conventional holographic stereograms when a single camera is used to capture real objects [4]. As shown in Fig. 1(a), a single camera is used to capture an image of real objects, and the camera is moved on the plane parallel to the object plane of the camera in order to capture multiple parallax images. The total number of captured parallax images is denoted by M and any given captured parallax image is indexed by m such that 1 ≤ m ≤ M. Then, as shown in Fig. 1(b), a parallax image #m is illuminated by laser light to generate an object wave #m, and a mask is placed on a hologram film to locate its aperture at the position where the camera was located when the parallax image was captured. A reference wave R1 also illuminates the hologram film to record the interference pattern in the aperture area. This coherent recording process is repeated for all parallax images. Figure 1(c) shows the hologram reconstruction step: the developed hologram film is illuminated by a reconstruction wave, which is identical to the reference wave R1, so that all object waves corresponding to the parallax images are reconstructed. When an eye is located near the hologram film, the eye observes the recorded parallax image corresponding to the viewpoint located near the eye. The area in the hologram film where the aperture of the mask was located to record the parallax image #m becomes the viewpoint #m.

Fig. 1 Process of recording and reconstructing holographic stereograms: (a) capturing parallax images of real objects using incoherent light, (b) recording parallax images on hologram film using coherent light, and (c) reconstructing holographic stereogram.

Figure 2 illustrates the hologram copying process used to alter the positions of the 3D image and the viewpoints [5]. As shown in Fig. 2(a), the fabricated holographic stereogram, which is the first hologram, is illuminated by a conjugate reconstruction wave R1* to generate a conjugate object wave. Then, another hologram film, which is the second hologram, is placed near the reconstructed 3D image, and another reference wave R2 also illuminates this hologram film. As shown in Fig. 2(b), when the developed second hologram is illuminated by a reconstruction wave R2*, which is the complex conjugate of the reference wave R2, the 3D image is reconstructed in the vicinity of the second hologram film and the viewpoints are reconstructed at the original positions of the first hologram film. In this manner, each reconstructed viewpoint #m corresponds to the original parallax image #m and its resulting object wave. Observers can place their eyes away from the hologram film, and the 3D images are reconstructed in the vicinity of the hologram film.

Fig. 2 Holographic stereogram copying process: (a) reconstruction of the first hologram and recording of the second hologram, and (b) reconstruction of the second hologram.

Next, the 3D image generation provided by the second hologram shown in Fig. 2(b) is compared with that provided by a multi-view display [17–19]. Multi-view displays are autostereoscopic displays based on ray reconstruction. Figure 3 illustrates the operation of a flat-panel-type autostereoscopic display that consists of a flat-panel display and a lens array. The lens array deflects rays emitted from the pixels of the flat-panel display to generate multiple viewpoints. A parallax image #m is displayed using pixels that emit rays that are deflected by the lens array to converge to a viewpoint #m. When an eye is placed at the viewpoint #m, the parallax image #m is observed. While holographic stereograms generate viewpoints by means of the diffraction of waves, multi-view displays generate them by means of the refraction of rays.

Fig. 3 3D image generation by a multi-view display.

Comparison between the 3D image generation provided by the holographic stereograms shown in Fig. 2(b) and that provided by the multi-view displays shown in Fig. 3 suggests a simple technique to synthesize computer-generated holographic stereograms. For the proposed technique, multiple parallax images of 3D objects are prepared in advance. They can be captured using a single camera on a translation stage or an array of cameras. They can also be generated by a computer. As shown in Fig. 4, a wavefront converging to a viewpoint #m is considered, which has the amplitude distribution of a parallax image #m on the hologram plane and the phase distribution of a spherical wave converging to the viewpoint #m. The wavefronts are calculated for all parallax images and are summed up to obtain the object wave on the hologram plane. The summation is performed for each pixel. The coordinates of the hologram plane are denoted by (x, y). The distance between the hologram and the plane where the viewpoints are generated is denoted by l, and the position of the viewpoint #m is denoted by (xm, ym). The intensity distribution of the parallax image #m is denoted by Im(x, y). The object wave o(x, y) is given by

$$o(x,y)=\sum_{m=1}^{M}\sqrt{I_m(x,y)}\,\exp\left[i\alpha(x,y)\right]\exp\!\left[ik\sqrt{(x-x_m)^2+(y-y_m)^2+l^2}\,\right],\tag{1}$$
where k is the wave number and α(x, y) is the phase distribution added to the amplitude distribution of the parallax image. This phase distribution α(x, y) is examined later.
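
As a concrete illustration, Eq. (1) can be transcribed almost directly into NumPy. The following sketch assumes the parallax images have already been upsampled to the hologram resolution; the function name object_wave, the array layout, and the centered coordinate grid are illustrative choices, not specifications from this paper.

```python
import numpy as np

def object_wave(parallax_images, viewpoints, alpha, p, l, wavelength):
    """Sum of converging spherical wavefronts, Eq. (1) (illustrative sketch).

    parallax_images : (M, Ny, Nx) intensities I_m(x, y), upsampled to the
                      hologram resolution
    viewpoints      : (M, 2) viewpoint positions (x_m, y_m) in meters
    alpha           : (Ny, Nx) phase distribution alpha(x, y) in radians
    p, l, wavelength: SLM pixel pitch, viewpoint distance, and wavelength
    """
    k = 2.0 * np.pi / wavelength
    M, Ny, Nx = parallax_images.shape
    # Hologram-plane coordinate grids, centered on the optical axis.
    x = (np.arange(Nx) - Nx / 2) * p
    y = (np.arange(Ny) - Ny / 2) * p
    X, Y = np.meshgrid(x, y)
    o = np.zeros((Ny, Nx), dtype=complex)
    for m in range(M):
        xm, ym = viewpoints[m]
        # Phase of a spherical wave converging to viewpoint #m.
        r = np.sqrt((X - xm) ** 2 + (Y - ym) ** 2 + l ** 2)
        # Amplitude is the square root of the parallax-image intensity.
        o += np.sqrt(parallax_images[m]) * np.exp(1j * alpha) * np.exp(1j * k * r)
    return o
```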

Fig. 4 Object wave generation of the proposed technique.

The hologram distribution is calculated from the object wave given by Eq. (1). A reference wave is added to the object wave, and the intensity distribution of their sum provides the hologram pattern, which is displayed on an amplitude-modulation spatial light modulator (SLM). Thus, the proposed technique does not require diffraction calculations to synthesize holographic stereograms.
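
As an illustration of this step, the sketch below interferes the object wave with a plane reference wave and normalizes the resulting intensity for an amplitude SLM. The paper does not specify the reference wave, so the vertical tilt angle theta_y and the function name amplitude_hologram are assumptions.

```python
import numpy as np

def amplitude_hologram(o, p, wavelength, theta_y=0.01):
    """Interference of the object wave o(x, y) with a tilted plane
    reference wave (assumed form), scaled to the SLM range [0, 1]."""
    k = 2.0 * np.pi / wavelength
    Ny, Nx = o.shape
    y = (np.arange(Ny) - Ny / 2) * p
    Y = np.broadcast_to(y[:, None], (Ny, Nx))
    ref = np.exp(1j * k * Y * np.sin(theta_y))  # plane reference wave
    h = np.abs(o + ref) ** 2                    # interference intensity
    return h / h.max()                          # normalized hologram pattern
```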

The pixel structure of the SLM limits the number of viewpoints and the resolution of the parallax images. The pixel pitch of the SLM is denoted by p, and its resolution by Nx × Ny. The viewing zone of the hologram is the area within which the SLM can control light diffraction. When the width of the viewing zone is denoted by w, the sampling theorem gives w = λl/p, where λ is the wavelength of light. The pitch of the viewpoints, denoted by q, is thereby given by q = w/M. Let the pixel pitch of the parallax images be m′p, an integer multiple m′ of the SLM pixel pitch. The maximum width of the diffraction patterns generated by the parallax images is then λl/m′p on the plane where the viewpoints are aligned. This maximum width should be equal to the pitch of the viewpoints, i.e., q = λl/m′p, in order to minimize the crosstalk between the viewpoints; therefore, the condition m′ = M is required. In this study, two-dimensionally aligned viewpoints are considered in order to provide both horizontal and vertical parallaxes, as shown in Fig. 5. The number of viewpoints is denoted by Mx × My. Here, we consider the case in which the complex-conjugate image and the zero-order diffraction light are separated from the reconstructed image in the vertical direction. Therefore, the horizontal and vertical widths of the viewing zone are given by wx = λl/p and wy = λl/2p, respectively. The horizontal and vertical intervals of the viewpoints are given by qx = λl/Mxp and qy = λl/2Myp, respectively. The horizontal and vertical pixel pitches of the parallax images are Mxp and 2Myp, respectively, so the resolution of the parallax images is (Nx/Mx) × (Ny/2My).
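
Evaluating these sampling relations with the parameter values reported later in Sec. 3 (λ = 632.8 nm, l = 600 mm, p = 8.1 μm, a 1,920 × 1,200 SLM, and 10 × 5 viewpoints) reproduces the experimental figures; the short check below is only a numerical restatement of the formulas above.

```python
wavelength, l, p = 632.8e-9, 0.600, 8.1e-6   # meters
Nx, Ny = 1920, 1200                          # SLM resolution
Mx, My = 10, 5                               # number of viewpoints

wx = wavelength * l / p                      # horizontal viewing zone: 46.9 mm
wy = wavelength * l / (2 * p)                # vertical viewing zone:   23.4 mm
qx = wx / Mx                                 # horizontal viewpoint pitch: 4.69 mm
qy = wy / My                                 # vertical viewpoint pitch:   4.69 mm
res = (Nx // Mx, Ny // (2 * My))             # parallax-image resolution: (192, 120)
print(wx * 1e3, wy * 1e3, qx * 1e3, qy * 1e3, res)
```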

Fig. 5 Schematic diagram illustrating the arrangement of viewpoints in the viewing zone.

The phase distribution of the spherical wave in Eq. (1) is calculated with the sampling pitch of p, and the horizontal and vertical pixel pitches of the parallax images are Mxp and 2Myp. Therefore, Mx × 2My pixels representing the phase distribution of the spherical wave correspond to one pixel of the parallax images. The phase distributions α(x, y) have the same resolution as the parallax images so that random phases are determined with the horizontal pitch of Mxp and the vertical pitch of 2Myp.
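
A minimal way to realize this sampling, assuming NumPy and the experimental parameters of Sec. 3, is to draw random phases at the parallax-image resolution and replicate each value over a block of Mx × 2My hologram pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
Mx, My = 10, 5                       # number of viewpoints (values from Sec. 3)
nx, ny = 192, 120                    # parallax-image resolution (Sec. 3)

# Random phases defined at the parallax-image resolution...
alpha_lowres = 2 * np.pi * rng.random((ny, nx))
# ...replicated so each value covers Mx x 2My hologram pixels, matching
# the horizontal pitch Mx*p and the vertical pitch 2My*p.
alpha = np.kron(alpha_lowres, np.ones((2 * My, Mx)))
assert alpha.shape == (1200, 1920)   # hologram (SLM) resolution
```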

The maximum width of the diffraction patterns of the parallax images depends on the spatial bandwidth of the parallax images. If the parallax images do not contain sufficiently high spatial frequency components, the spatial extent of the resulting diffraction patterns is limited and dark regions arise between the viewpoints. In this case, the light intensity of the observed images changes depending on the eye position. The phase distribution α(x, y) in Eq. (1) should therefore be appropriately determined to enlarge the extent of the parallax image diffraction patterns.

The techniques proposed in Refs. 8–11 also do not require diffraction calculations to synthesize holograms. However, these hologram synthesis techniques are different from the technique proposed in the present study. In these techniques, holograms are generated from angular projections of real objects. Each angular projection is multiplied by a corresponding phase distribution, and the sum of all pixel values in the multiplied projection is calculated to obtain the value of a single pixel of the hologram. Therefore, the required number of angular projections is equal to the number of pixels of the hologram, so a large number of angular projections are required. When the phase distribution applied to each angular projection is that of a correspondingly inclined plane wave, a Fourier hologram is obtained [8, 9], i.e., a lens is used to reconstruct the hologram and a 3D image is produced around the focal plane of the lens. When the applied phase distribution is that of a common spherical wave, a Fresnel hologram is obtained [10, 11], i.e., a 3D image is produced near the hologram.

The technique proposed in Ref. 12, which captures real objects using the IP technique and reconstructs 3D images using the holography technique, also does not require diffraction calculations. Orthographic projections of real objects, instead of angular projections, are used to synthesize Fourier and Fresnel holograms. The synthesis method for a Fourier hologram is the same as that proposed in Refs. 8 and 9. The synthesis method for a Fresnel hologram is similar to that proposed in this study: each orthographic projection is multiplied by a correspondingly inclined plane wave, and the multiplied projections are summed pixel by pixel to generate a hologram. Because a 2D lens array is used to obtain the orthographic projections of real objects, the resolution of the orthographic projections is equal to the number of lenses in the array, and so high-resolution orthographic projections are difficult to obtain. In contrast, the technique proposed in this study can obtain high-resolution perspective projections because an ordinary camera is used to capture real objects. Moreover, because the technique proposed in Ref. 12 projects orthographic projections with plane waves propagating in the corresponding directions, there is a substantial viewing area where only a partial reconstructed image is observed. In contrast, the technique proposed in this study maximizes the width of the viewing zone, within which a whole reconstructed image can be observed on the plane where the viewpoints are generated.

3. Experiments

The proposed technique to synthesize computer-generated holographic stereograms was experimentally verified. The holographic stereograms generated by the proposed technique were displayed on an SLM to reconstruct 3D images. Both computer-generated images and camera-captured images were used as the parallax images.

Figure 6 depicts the experimental system used to evaluate the proposed technique. In this experiment, a 4f imaging system was used because we wanted to remove the conjugate image and the zero-order diffraction light from the reconstructed image in order to evaluate the generation of the viewpoints. The 4f imaging system consists of two Fourier transform lenses and a single-sideband filter on its Fourier plane [20]. Amplitude-modulation holograms generate a reconstructed image, a conjugate image, and zero-order diffraction light. The reconstructed image and the conjugate image can be spatially separated on the Fourier plane, and the zero-order diffraction light becomes a sharp light peak at the origin of the Fourier plane. The single-sideband filter is a knife-edge whose edge contains the origin of the Fourier plane. The filter blocks half of the Fourier plane to eliminate the conjugate image and the zero-order diffraction light components. The single-sideband filter in the Fourier plane halves the viewing zone angle. In this study, a horizontal slit was used as the single-sideband filter to halve the vertical viewing zone angle, because the horizontal viewing zone angle is more important for the 3D image representation.

Fig. 6 Schematic diagram of the 4f imaging system used for the experiments.

To eliminate the conjugate image and the zero-order diffraction light with the single-sideband filter, a hologram pattern was calculated as follows [21], instead of with the hologram calculation method described in Sec. 2. The Fourier transform of the object wave o(x, y) given by Eq. (1) is denoted by O(νx, νy), where νx and νy represent the spatial frequencies in the x- and y-directions, respectively. The spatial bandwidth of the Fourier transformed image is Δν = 1/p. The spatial bandwidth is limited in the vertical direction, such that the distribution within the range −Δν/4 ≤ νy ≤ Δν/4, represented by Ob(νx, νy), is extracted. This vertically band-limited distribution and its point-symmetric complex-conjugate distribution are aligned in the vertical direction to obtain a new Fourier transformed image O′(νx, νy) = Ob(νx, νy − Δν/4) + Ob*(−νx, −νy − Δν/4), as shown in Fig. 6. This synthesized Fourier transformed image is inverse-Fourier transformed to obtain o′(x, y). Because O′(νx, νy) is Hermitian-symmetric, the modified object wave o′(x, y) has a real-valued distribution. A constant real value, which generates a peak at the origin of the Fourier plane, is added to make the modified object wave a non-negative distribution. This non-negative distribution is displayed on the SLM.
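
The band-limiting and symmetrization procedure can be sketched with FFTs as follows. The handling of the fftshift conventions and of the unmatched Nyquist rows of an even-sized grid are implementation choices of this sketch, not details given in the paper or in Ref. [21].

```python
import numpy as np

def ssb_hologram(o):
    """Single-sideband hologram pattern from the object wave o (sketch)."""
    Ny, Nx = o.shape
    cy, band = Ny // 2, Ny // 4
    O = np.fft.fftshift(np.fft.fft2(o))            # centered spectrum O(vx, vy)

    # Extract the band |vy| <= dv/4 and shift it down by dv/4 so that it
    # occupies only negative vertical frequencies: Ob(vx, vy - dv/4).
    Oh = np.zeros_like(O)
    Oh[cy - 2 * band:cy, :] = O[cy - band:cy + band, :]
    Oh[0, :] = 0.0                                 # drop unmatched Nyquist
    Oh[:, 0] = 0.0                                 # row/column

    # Add the point-symmetric complex conjugate Ob*(-vx, -vy - dv/4); the
    # one-sample roll accounts for the asymmetry of even-sized FFT grids.
    Os = Oh + np.roll(np.conj(Oh[::-1, ::-1]), (1, 1), axis=(0, 1))

    o_mod = np.fft.ifft2(np.fft.ifftshift(Os)).real   # real-valued o'(x, y)
    return o_mod - o_mod.min()                     # constant offset -> non-negative
```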

A reflection-type liquid-crystal SLM (LC-R 1080, HoloEye Corp.) was used to display the holographic stereograms generated by the proposed technique. The resolution was 1,920 × 1,200 pixels and the pixel pitch was p = 8.1 μm. The focal length of the two Fourier transform lenses constituting the 4f imaging system was 150.0 mm. A He-Ne laser (λ = 632.8 nm) was used as the light source.

The distance from the screen, which was the image plane of the 4f imaging system, to the plane where the viewpoints were aligned was l = 600 mm. The width and height of the viewing zone were wx = 46.9 mm and wy = 23.4 mm, respectively. The number of viewpoints was 10 × 5 (Mx = 10 and My = 5). The horizontal and vertical pitches of the viewpoints were qx = 4.69 mm and qy = 4.69 mm, respectively. The resolution of the parallax images was 192 × 120. The screen size of the 4f imaging system was 0.72 in., which was equal to that of the SLM, because the magnification of the 4f imaging system was unity.

First, the generation of viewpoints was verified. Instead of displaying actual parallax images, different numeric characters were displayed to different viewpoints in order to evaluate the quality of the viewpoint generation. Three types of phase distributions α(x, y) added to the parallax images were examined. One was a uniform phase distribution. Another was a common random phase distribution, i.e., a random phase distribution that was used for all parallax images. The third was a different random phase distribution, i.e., different random phase distributions were used for different parallax images. The reconstructed images are shown in Fig. 7. They were captured by a digital camera located at the viewpoints (4, 3), (5, 3), and (6, 3) and at the intermediate positions between them. The positions of the viewpoints are represented by (mx, my), where mx (1 ≤ mx ≤ Mx) is the horizontal position and my (1 ≤ my ≤ My) is the vertical position, as illustrated in Fig. 5. Figures 7(a), 7(b), and 7(c) show the results obtained using the uniform phase distribution, the common random phase distribution, and the different random phase distribution, respectively. When using the uniform phase distribution, sharp images were observed at the viewpoints and no images were observed at the intermediate positions between the viewpoints, because the dominant spatial frequencies contained in the images were comparatively low, so the extents of the viewpoints were smaller than the pitch of the viewpoints. When using the common random phase distribution, the corresponding image was observed at each viewpoint and two images were observed simultaneously at the intermediate positions between the viewpoints, because the random phase distribution contained sufficiently high spatial frequency components, so the extents of the viewpoints were sufficiently enlarged. However, speckles caused by the random phase distribution were observed. The reconstructed images obtained using the different random phase distribution were almost the same as those obtained using the common random phase distribution.

Fig. 7 Images generated at viewpoints and at intermediate positions between viewpoints when the parallax images are given (a) a uniform phase, (b) a common random phase, and (c) different random phases.

Then, parallax images were generated by a computer. A 3D object consisting of 710 object points was used. The perspective projections of the 3D object at the 10 × 5 viewpoints were generated. The common random phase distribution was used. Figure 8 shows the images captured at several viewpoints. A change in the parallax depending on the viewpoint was observed. However, the change was small because the viewing angles were ± 2.0° in the horizontal direction and ± 1.0° in the vertical direction.

Fig. 8 Reconstructed images generated by the proposed technique; parallax images were generated by a computer, and a common random phase distribution was added.

The parallax images of real objects were captured using a digital camera. The digital camera was mounted on a computer-controlled translation stage that moved two-dimensionally. The object distance from the camera lens to the object plane was 1800 mm, and the camera was moved both horizontally and vertically with a pitch of 14.0 mm in order to capture 10 × 5 parallax images; this pitch corresponds approximately to the angular pitch of the viewpoints (14.0 mm/1800 mm ≈ 4.69 mm/600 mm). The focal length of the camera lens was 25.0 mm. The size of the image sensor was 1/2 in., and the resolution was 772 × 580. The captured images were appropriately clipped to obtain the parallax images, and the resolution was reduced to 192 × 120. Figure 9 shows the reconstructed images captured at several viewpoints when the uniform phase distribution was added to the parallax images. Sharp images were observed. However, as mentioned above, the images became darker between the viewpoints. Figure 10 shows the reconstructed images when the common random phase distribution was added. The observed images were noisy because of the presence of speckles. The images did not become darker between the viewpoints, and so smooth motion parallax was obtained.

Fig. 9 Reconstructed images of real objects when a uniform phase distribution was added to the camera-captured parallax images.

Fig. 10 Reconstructed images of real objects when a common random phase distribution was added to the camera-captured parallax images.

4. Discussion

From the experimental results, the generation of the viewpoints by the computer-generated holographic stereograms synthesized with the proposed technique was confirmed. The object waves were generated without the diffraction calculations that were required in the previous techniques for synthesizing computer-generated holographic stereograms [6, 7, 14]. In the experiments, the Fourier transform was used to calculate the hologram patterns from the object waves, because the 4f imaging system was used to completely remove the conjugate image and the zero-order diffraction light. As described in Sec. 2, the Fourier transform is not required to obtain hologram patterns when the SLM is simply illuminated by laser light to generate the reconstructed image, which is angularly separated from the conjugate image and the zero-order diffraction light.

We found that the determination of the phase distribution added to the parallax images is an important issue. When the uniform phase distribution is added, the reconstructed images do not contain speckle noise but become darker at the intermediate positions between the viewpoints. When the random phase distribution is added, the reconstructed images contain speckle noise but change smoothly between the viewpoints. The aim of adding the phase distribution is to enlarge the extent of the diffraction patterns of the parallax images so that it becomes comparable to the pitch of the viewpoints and the light intensity distribution becomes uniform in the viewing zone. To reduce speckle noise, the spatial frequency bandwidth of the random phase distribution should be limited. When the spatial bandwidth contained in the parallax images is Bp, the spatial bandwidth of the random phase distribution should be limited to 1/p − Bp. Further study is required to determine a phase distribution that reduces speckle noise and yet provides smooth motion parallax.
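
The construction of such a band-limited random phase is left open here. One possible realization, offered only as a sketch, is to low-pass filter a white random phase so that its spectrum fits within the allowed bandwidth:

```python
import numpy as np

def bandlimited_random_phase(ny, nx, p, bandwidth, seed=0):
    """Random phase whose spectrum is clipped to |v| <= bandwidth
    (one possible construction; bandwidth would be chosen <= 1/p - Bp)."""
    rng = np.random.default_rng(seed)
    alpha = 2 * np.pi * rng.random((ny, nx))        # white random phase
    A = np.fft.fftshift(np.fft.fft2(alpha))
    vy = np.fft.fftshift(np.fft.fftfreq(ny, d=p))   # spatial-frequency axes
    vx = np.fft.fftshift(np.fft.fftfreq(nx, d=p))
    VX, VY = np.meshgrid(vx, vy)
    A[np.hypot(VX, VY) > bandwidth] = 0.0           # hard low-pass filter
    return np.fft.ifft2(np.fft.ifftshift(A)).real   # band-limited phase
```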

The resolution of the reconstructed images, especially when the uniform phase distribution was added to the parallax images, was higher than that obtained in previous studies. This is because the resolution of the reconstructed images is equal to that of the parallax images. High-resolution parallax images can be easily obtained because the present technique can use an ordinary camera to capture the parallax images. Historically, although the first holographic stereograms used a 2D lens array to capture real objects [1, 2], an ordinary camera was later used in order to improve the resolution of the reconstructed images [3, 4]. The use of a camera mounted on a 2D translation stage, or equivalently the use of a camera array, may provide higher-resolution images than the use of a lens array. However, a camera array increases the cost of the system, and a camera mounted on a 2D translation stage increases the time required to capture real objects.

5. Conclusion

In this study, a new calculation technique to synthesize holographic stereograms was proposed, which was suggested by the similarity between the 3D image generation provided by holographic stereograms and that provided by multi-view displays. It does not require diffraction calculations. Experiments were performed to verify the proposed technique. An SLM with a resolution of 1,920 × 1,200 was used to generate 10 × 5 viewpoints with an interval of 4.69 mm at a distance of 600 mm from the hologram display screen. The resolution of the parallax images was 192 × 120. We confirmed that appropriate parallax images were observed at the corresponding viewpoints.

References and links

1. R. V. Pole, “3-D imagery and holograms of objects illuminated in white light,” Appl. Phys. Lett. 10(1), 20–22 (1967). [CrossRef]  

2. D. J. De Bitetto, “Transmission bandwidth reduction of holographic stereograms recorded in white light,” Appl. Phys. Lett. 12(10), 343–344 (1968). [CrossRef]  

3. J. T. McCrickerd and N. George, “Holographic stereogram from sequential component photographs,” Appl. Phys. Lett. 12(1), 10–12 (1968). [CrossRef]  

4. D. J. DeBitetto, “Holographic panoramic stereograms synthesized from white light recordings,” Appl. Opt. 8(8), 1740–1741 (1969). [CrossRef]   [PubMed]  

5. M. C. King, A. M. Noll, and D. H. Berry, “A new approach to computer-generated holography,” Appl. Opt. 9(2), 471–475 (1970). [CrossRef]   [PubMed]  

6. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976). [CrossRef]   [PubMed]  

7. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008). [CrossRef]   [PubMed]  

8. D. Abookasis and J. Rosen, “Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints,” J. Opt. Soc. Am. A 20(8), 1537–1545 (2003). [CrossRef]   [PubMed]  

9. N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express 15(9), 5754–5760 (2007). [CrossRef]   [PubMed]  

10. N. T. Shaked and J. Rosen, “Modified Fresnel computer-generated hologram directly recorded by multiple-viewpoint projections,” Appl. Opt. 47(19), D21–D27 (2008). [CrossRef]   [PubMed]  

11. N. T. Shaked and J. Rosen, “Multiple-viewpoint projection holograms synthesized by spatially incoherent correlation with broadband functions,” J. Opt. Soc. Am. A 25(8), 2129–2138 (2008). [CrossRef]   [PubMed]  

12. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17(8), 6320–6334 (2009). [CrossRef]   [PubMed]  

13. N. Chen, J.-H. Park, and N. Kim, “Parameter analysis of integral Fourier hologram and its resolution enhancement,” Opt. Express 18(3), 2152–2167 (2010). [CrossRef]   [PubMed]  

14. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45(17), 4026–4036 (2006). [CrossRef]   [PubMed]  

15. W. Plesniak, M. Halle, V. M. Bove Jr, J. Barabas, and R. Pappu, “Reconfigurable image projection holograms,” Opt. Eng. 45(11), 115801 (2006). [CrossRef]  

16. Q. Y. J. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove, Jr., “Interactive holographic stereograms with accommodation cues,” Proc. SPIE 7619, 761903 (2010). [CrossRef]  

17. T. Okoshi, Three-Dimensional Imaging Techniques (Academic Press, New York, 1976).

18. T. Okoshi, “Three-dimensional displays,” Proc. IEEE 68(5), 548–564 (1980). [CrossRef]  

19. N. A. Dodgson, “Autostereoscopic 3D displays,” Computer 38(8), 31–36 (2005). [CrossRef]  

20. O. Bryngdahl and A. Lohmann, “Single-sideband holography,” J. Opt. Soc. Am. 58(5), 620–624 (1968). [CrossRef]  

21. Y. Takaki and Y. Tanemoto, “Band-limited zone plates for single-sideband holography,” Appl. Opt. 48(34), H64–H70 (2009). [CrossRef]   [PubMed]  
