
Speckle regularization and miniaturization of computer-generated holographic stereograms


Abstract

Holographic stereograms produce multiple parallax images that are seen from multiple viewpoints. Because random phase distributions are added to the parallax images to remove areas in the viewing area where images cannot be seen, speckles are generated in the reconstructed images. In this study, virtual viewpoints are inserted between the original viewpoints (real viewpoints) to make the interval of the viewpoints smaller than the pupil diameter of the eyes, thereby removing the areas without images. In this case, regular interference patterns appear in the reconstructed images instead of speckle patterns. Proper phase modulation of the parallax images displayed to the real and virtual viewpoints increases the spatial frequencies of the regular interference patterns on the retina so that the eyes cannot perceive them. The proposed technique was combined with the multiview-based holographic stereogram calculation technique and was experimentally verified.

© 2016 Optical Society of America

1. Introduction

For the generation of realistic three-dimensional (3D) images, intensive studies have been carried out to realize holographic displays [1]. Holographic stereograms [2–6] provide a convenient way to record real 3D objects because coherent light is not needed when capturing them. However, speckles appear in the 3D images produced by holographic stereograms because a coherent process is needed for the hologram synthesis. In this study, the degradation of the 3D images by speckles is reduced, although the speckles themselves are not eliminated; instead, they are regularized and miniaturized so that they become invisible to human eyes.

Conventional holography records the interference of an object wave and a reference wave on a hologram film, and the object wave contains random interference caused by the rough surfaces of objects. Thus, the random interference is reconstructed in the 3D images, which generates speckles. When holograms are calculated by computers, random phase distributions are usually added to the surfaces of object models to simulate real object surfaces, so speckles also appear in the reconstructed images of computer-generated holograms (CGHs). For holographic stereograms, parallax images of 3D objects are captured from different viewing positions using normal cameras and are then sequentially recorded on a hologram film while a slit placed on the film is moved accordingly. Random phase plates (i.e., diffusers) are usually placed in front of the parallax images to diverge light to fill the aperture of the slit, which leads to speckles in the reconstructed images [5]. Holographic stereograms can also be calculated using computers [7, 8]; for computer-generated holographic stereograms, random phase distributions are likewise added to the parallax images, so speckles are generated in the reconstructed images in this case as well.

Numerous techniques have been proposed to reduce speckles in 3D images generated by holograms [9–21]. A time-averaging effect has been employed using a rotating phase plate [11–13] and image summation [14]. The temporal coherence has been decreased using low-coherence light sources, such as a light-emitting diode [15]. Recently, our research group proposed a time-multiplexing technique [16, 17], which generates multiple sets of sparse points at different times, with the sum of the point sets representing the reconstructed image. Interference occurs neither among the sparse points nor among the point sets generated at different times. The use of sparse points to remove speckles was adopted for a lens-less holographic projection system [18] and for a CGH calculation technique using a ray-sampling plane [19].

Another approach to reducing speckles is to optimize the random phase distributions [17]. The random phase distributions are designed such that their diffraction distributions become as uniform as possible with deviations as small as possible. A designed random phase distribution was used to fabricate holographic stereograms, and an improvement in image quality was reported [21].

This study proposes a technique to improve the image quality of holographic stereograms degraded by speckles. Whereas the abovementioned techniques aimed to reduce the speckles or to make the speckle distributions uniform, our technique exploits the characteristics of the human visual system: the speckle patterns are altered into regular sinusoidal patterns whose spatial frequency is higher than the cut-off spatial frequency of human eyes. This approach is combined with the multiview-based holographic stereograms [22], which our research group previously proposed to enable simple and fast calculation of holographic stereograms.

The rest of this paper is organized as follows. In Sec. 2, we explain the multiview-based computer-generated holographic stereogram and describe the modification of the reconstruction optical system to maximize the viewing area. In Sec. 3, we propose a technique to improve the image quality of the reconstructed images. The experimental verification is shown in Sec. 4, the discussion is given in Sec. 5, and the conclusions are presented in Sec. 6.

2. Multiview-based computer-generated holographic stereogram

2.1 Previous technique

Conventional holographic stereograms [5, 6] are synthesized from multiple parallax images of real objects captured from different viewing positions using normal cameras. An object wave coming from a parallax image and a reference wave are recorded on a holographic film with a slit placed in front of the film, while the position of the slit is moved according to the camera position where the corresponding parallax image was captured. After recording all parallax images, the first hologram is obtained [5]. Then, another holographic film is placed in the vicinity of the reconstructed image of the first hologram to create a second hologram, which becomes the holographic stereogram. Finally, as shown in Fig. 1(a), the 3D image is reconstructed around the holographic stereogram and is observed through viewpoints that are reconstructed images of the slits [6].

Figure 1(b) illustrates another 3D image generation technique using multiview displays [23], which is based on ray reconstruction. A flat-panel display is combined with a lens array. Each lens of the lens array deflects rays from different pixels of the flat-panel display into different directions such that multiple viewpoints are generated. Parallax images are displayed with rays converging to the corresponding viewpoints. Therefore, when the eye is placed at different viewpoints, different parallax images are observed.

Considering the similarity between the holographic stereograms and the multiview displays, the multiview-based computer-generated holographic stereogram [22] was proposed; the parallax images are displayed with wavefronts, instead of rays, converging to the corresponding viewpoints. As shown in Fig. 2, this technique considers the wavefront converging to a viewpoint #v, which has the amplitude distribution of the parallax image #v on the display screen and the phase distribution of a spherical wave converging to the viewpoint #v. The wavefronts are calculated for all parallax images and are summed to obtain the object wave. The total number of viewpoints is denoted by V, the coordinates of the display screen by (x, y), the distance between the display screen and the viewpoints by l, the position of the viewpoint #v by (xv, yv), and the intensity distribution of the parallax image #v by Iv(x, y). The object wave o(x, y) is given by

o(x, y) = \sum_{v=0}^{V-1} \sqrt{I_v(x, y)}\, \exp[i\alpha(x, y)] \exp\!\left[-ik\sqrt{(x - x_v)^2 + (y - y_v)^2 + l^2}\right],   (1)
where k is the wavenumber and α(x, y) is the phase distribution added to the amplitude distribution of the parallax image. A reference wave is added to the object wave and its intensity distribution provides the hologram pattern, which is displayed on an amplitude-modulation spatial light modulator (SLM).
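
The following is a minimal numerical sketch of Eq. (1); it assumes, as stated above, that the amplitude is the square root of the parallax-image intensity, and it adopts a converging (negative) sign convention for the spherical phase. The function name and argument layout are illustrative, not from the paper:

```python
import numpy as np

def object_wave_spherical(images, xv, yv, l, wavelength, pitch, alpha):
    """Sum the wavefronts converging from the screen to each viewpoint, Eq. (1).

    images     : (V, Ny, Nx) array of parallax-image intensities I_v(x, y)
    xv, yv     : (V,) viewpoint coordinates
    l          : distance between the display screen and the viewpoints
    wavelength : light wavelength
    pitch      : pixel pitch of the display screen
    alpha      : (Ny, Nx) additional phase distribution alpha(x, y)
    """
    V, Ny, Nx = images.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(Nx) - Nx / 2) * pitch
    y = (np.arange(Ny) - Ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    o = np.zeros((Ny, Nx), dtype=complex)
    for v in range(V):
        # Amplitude = sqrt(intensity); spherical phase converging to (xv, yv, l).
        r = np.sqrt((X - xv[v])**2 + (Y - yv[v])**2 + l**2)
        o += np.sqrt(images[v]) * np.exp(1j * alpha) * np.exp(-1j * k * r)
    return o
```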

Fig. 1 Three-dimensional image generation by (a) holographic stereogram (wavefront reconstruction), and (b) multiview display (ray reconstruction).

Fig. 2 Multiview-based holographic stereogram.

2.2 Modified technique

The previous technique [22] multiplies the parallax images by the quadratic phase distributions of spherical waves. Because the spatial frequency of a quadratic phase distribution becomes higher at positions farther from its center, the pixel pitch of the SLM limits the area where the parallax images can be displayed on the SLM.

When the pixel pitch of the SLM is denoted by p and the light wavelength by λ, the sampling theorem requires that the quadratic phase distribution be displayed inside an ellipse with major axis λl/p and minor axis λl/2p. The viewing area is determined by the diffraction of light by the SLM, and its size is given by (λl/p) × (λl/2p), as shown in Fig. 3. For both the quadratic phase distribution and the viewing area, the height is half the width because we consider the case where the conjugate image and zero-order diffraction light are separated in the vertical direction to eliminate them. Thus, as shown in Fig. 3, parts of the parallax images cannot be seen from viewpoints in the peripheral viewing area. This corruption of the observed images is the drawback of the previous technique.

Fig. 3 Observation of corrupted image by previous technique.

In this study, a spherical lens is introduced on the display screen to generate the quadratic phase distributions, as shown in Fig. 4. In this case, the SLM only needs to generate the linear phase distributions of the inclined plane waves propagating toward the viewpoints. Because the spatial frequency of an inclined plane wave is constant over each parallax image, the entire parallax images can be displayed.

Fig. 4 Modified multiview-based holographic stereogram.

In the modified technique, because a spherical lens is attached to the display screen, the viewpoints are generated on the focal plane of the spherical lens. Consider a wavefront converging to a viewpoint #v: on the display screen, it has the amplitude distribution of the parallax image #v and the phase distribution of the inclined plane wave propagating toward the viewpoint #v. When the focal length of the lens is denoted by f, the object wave displayed on the display screen is given by

o(x, y) = \sum_{v=0}^{V-1} \sqrt{I_v(x, y)}\, \exp[i\alpha(x, y)] \exp\!\left[-ik (x_v x + y_v y) / \sqrt{x_v^2 + y_v^2 + f^2}\right].   (2)
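
Under the same assumptions as the sketch after Eq. (1), the only change in the modified technique is the phase factor: the spherical phase is replaced by the linear phase of a plane wave inclined toward viewpoint #v, since the lens on the screen now supplies the quadratic phase. Again a sketch, not the authors' code:

```python
import numpy as np

def object_wave_linear(images, xv, yv, f, wavelength, pitch, alpha):
    """Sum the wavefronts of Eq. (2), with linear per-viewpoint phases."""
    V, Ny, Nx = images.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(Nx) - Nx / 2) * pitch
    y = (np.arange(Ny) - Ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    o = np.zeros((Ny, Nx), dtype=complex)
    for v in range(V):
        # Inclined plane wave propagating toward viewpoint #v on the focal plane.
        phase = -k * (xv[v] * X + yv[v] * Y) / np.sqrt(xv[v]**2 + yv[v]**2 + f**2)
        o += np.sqrt(images[v]) * np.exp(1j * alpha) * np.exp(1j * phase)
    return o
```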

Figure 5 shows the viewpoints generated by the modified technique on the focal plane of the spherical lens. The size of the viewing area is w × (w/2), where w = λf/p. The interval of the viewpoints is w/M when the number of viewpoints is M × (M/2). The extent of the diffraction pattern of each parallax image in the viewing area should be equal to or smaller than (w/M) × (w/M) to prevent crosstalk between the viewpoints; thus, the pixel pitch of the parallax images should be Mp. Therefore, the resolution of the parallax images is (Nx/M) × (Ny/M) when the resolution of the SLM is Nx × Ny.
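
These relations are simple enough to collect in a small helper; the following sketch (with hypothetical names) computes the viewing-area width w = λf/p, the viewpoint interval w/M, and the parallax-image resolution from the SLM parameters:

```python
def viewing_geometry(wavelength, f, p, M, Nx, Ny):
    """Geometry bookkeeping for the modified technique (Sec. 2.2)."""
    w = wavelength * f / p      # viewing-area width (the height is w/2)
    interval = w / M            # interval between adjacent viewpoints
    res = (Nx // M, Ny // M)    # resolution of each parallax image
    return w, interval, res
```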

Fig. 5 Viewpoints generated by modified multiview-based holographic stereogram.

3. Speckle reduction by regularization and miniaturization

When the spatial bandwidth of the parallax images is not sufficiently large, areas without light arise between the viewpoints, where the reconstructed images are not observed. Therefore, the motion parallax is perceived as discontinuous. Because the Fourier transforms of the parallax images appear at the corresponding viewpoints, the Fourier transforms should have a spatial extent as large as (λf/Mp) × (λf/Mp) to eliminate the areas where images are not observed. In the previous study [22], random phase distributions were used as the additional phase distribution α(x, y) in Eq. (1) to increase the spatial bandwidth of the parallax images. The pixel pitch of the random phase distributions was Mp, equal to that of the parallax images. Although smooth motion parallax was obtained, the random phase distributions generated speckles in the reconstructed images and the image quality was greatly degraded.

In this study, we propose a technique to improve the viewing area continuity, which transforms the speckle patterns into sinusoidal patterns having a spatial frequency high enough to be invisible to human eyes, instead of removing the speckles.

As shown in Fig. 6, virtual viewpoints are added to the real viewpoints so that plural viewpoints are included in the pupil of the eye and the viewing-area discontinuity is not perceived. Here, we call the original viewpoints real viewpoints. Figure 6 shows the case where the total number of viewpoints is increased three times both horizontally and vertically. When the pupil diameter is denoted by a, the interval of the whole viewpoints (real and virtual viewpoints) should be smaller than a/2. The parallax images corresponding to the virtual viewpoints are synthesized by interpolating the parallax images corresponding to the real viewpoints existing around the virtual viewpoints.
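
The paper does not specify the interpolation method; one plausible choice is a bilinear blend of the four surrounding real-viewpoint images, sketched below. The blending weights and function name are assumptions:

```python
import numpy as np

def interpolate_parallax(img_tl, img_tr, img_bl, img_br, tx, ty):
    """Bilinear blend of the four real-viewpoint parallax images surrounding
    a virtual viewpoint; tx, ty in [0, 1] give its fractional position
    between the top-left and bottom-right real viewpoints."""
    top = (1 - tx) * img_tl + tx * img_tr
    bottom = (1 - tx) * img_bl + tx * img_br
    return (1 - ty) * top + ty * bottom
```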

Fig. 6 Arrangement of real and virtual viewpoints in the viewing area to produce continuous viewing area.

Because the proposed technique does not use random phase distributions, random interference does not occur and speckles are not generated on the retina of the eye. Instead, light from the multiple viewpoints contained in the pupil generates regular sinusoidal interference patterns. In this study, the spatial frequency of the interference patterns is increased by adding proper phases to the Fourier-transformed patterns that appear at the viewpoints. Human vision has its maximum sensitivity at spatial frequencies of 2–6 cycles/deg and no sensitivity at spatial frequencies higher than 50–60 cycles/deg (the cut-off spatial frequency) [24, 25].

The pupil diameter of human eyes changes between approximately 2 and 8 mm depending on the ambient illumination, with an average diameter of 5 mm [26]. The number of viewpoints contained in the pupil and the position of the pupil relative to the two-dimensional array of viewpoints can vary. The interference pattern having the lowest spatial frequency is generated by two adjacent viewpoints. Therefore, we consider the case where 2 × 2 viewpoints are contained in the pupil, as shown in Fig. 7. The phases added at the four viewpoints are represented by θa, θb, θc, and θd. This phase modulation is repeated two-dimensionally over all viewpoints. Here, we assume that the parallax images for the four viewpoints are nearly equal and denote their intensity distribution by Ip(x, y). Then, when the coordinates of the retinal plane are denoted by (xr, yr), the intensity distribution on the retina Ir(xr, yr) is given by

I_r(x_r, y_r) \propto I_p(x_r, y_r) \{ (\cos^2\theta_a + \cos^2\theta_b + \cos^2\theta_c + \cos^2\theta_d)
  + 2(\cos\theta_a \cos\theta_b + \cos\theta_c \cos\theta_d) \cos(\pi d x_r / \lambda s)
  + 2(\cos\theta_a \cos\theta_c + \cos\theta_b \cos\theta_d) \cos(\pi d y_r / \lambda s)
  + 2\cos\theta_b \cos\theta_c \cos[\pi d (x_r + y_r) / \lambda s]
  + 2\cos\theta_a \cos\theta_d \cos[\pi d (x_r - y_r) / \lambda s] \},   (3)
where d is the interval of the viewpoints and s is the distance between the pupil and the retina.
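
A direct evaluation of Eq. (3) makes it easy to compare phase choices; the sketch below assumes a parallax image sampled on a retinal grid and a 2 × 2 phase set (θa, θb, θc, θd):

```python
import numpy as np

def retinal_intensity(Ip, dr, d, s, wavelength, th):
    """Evaluate Eq. (3). Ip: (Ny, Nx) parallax image on the retina, dr: retinal
    sample pitch, d: viewpoint interval, s: pupil-to-retina distance,
    th: (theta_a, theta_b, theta_c, theta_d)."""
    ca, cb, cc, cd = np.cos(th)
    Ny, Nx = Ip.shape
    x = (np.arange(Nx) - Nx / 2) * dr
    y = (np.arange(Ny) - Ny / 2) * dr
    X, Y = np.meshgrid(x, y)
    g = np.pi * d / (wavelength * s)
    fringe = (ca**2 + cb**2 + cc**2 + cd**2
              + 2 * (ca * cb + cc * cd) * np.cos(g * X)
              + 2 * (ca * cc + cb * cd) * np.cos(g * Y)
              + 2 * cb * cc * np.cos(g * (X + Y))
              + 2 * ca * cd * np.cos(g * (X - Y)))
    return Ip * fringe
```

With th = (0, np.pi, np.pi, np.pi), the horizontal and vertical fringe terms vanish and only the finer diagonal fringes remain, which is the choice made below.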

Fig. 7 Phase modulation at real and virtual viewpoints.

The first term on the right-hand side of Eq. (3) represents a constant distribution. The second and third terms represent sinusoidal distributions in the horizontal and vertical directions, and the last two terms represent those in the diagonal directions. The spatial frequency of the second and third terms is d/2λs, and that of the last two terms is √2d/2λs. In this study, to increase the spatial frequency of the sinusoidal patterns, the magnitudes of the coefficients of the last two terms are maximized while those of the second and third terms are minimized by setting the phases to θa = 0, θb = π, θc = π, and θd = π; the coefficients of the second and third terms become zero, and the magnitudes of those of the last two terms become 2.

When we assume that the interval of the viewpoints is d = 3.0 mm, the distance between the pupil and the retina is s = 18 mm, and the wavelength of light is λ = 0.50 μm, the phase values determined above make the spatial frequency of the interference fringes 74 cycles/deg, which exceeds the cut-off spatial frequency of human vision. Figure 8 shows the retinal images obtained by computer simulation when 2 × 2 viewpoints are contained in the pupil and the pupil diameter is 5.0 mm. Figure 8(a) shows the parallax image used for all viewpoints, Fig. 8(b) shows the calculated retinal image when the four phase values are equal, and Fig. 8(c) shows that when the phase values determined above are used. The image in Fig. 8(c) is modulated by a finer sinusoidal pattern than that in Fig. 8(b); the finer sinusoidal pattern affects the image quality less.
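
The 74 cycles/deg figure can be checked directly: the diagonal fringe frequency on the retina is √2·d/2λs, and one degree of visual angle subtends s·tan(1°) on the retina. A back-of-the-envelope sketch:

```python
import numpy as np

d, s, lam = 3.0e-3, 18e-3, 0.50e-6            # viewpoint interval, eye length, wavelength (m)
f_retina = np.sqrt(2) * d / (2 * lam * s)     # diagonal fringe frequency, cycles per meter
f_deg = f_retina * s * np.tan(np.deg2rad(1))  # cycles per degree of visual angle
print(f"{f_deg:.0f} cycles/deg")              # ~74, above the 50-60 cycles/deg cutoff
```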

Fig. 8 Calculated retinal images: (a) parallax image, and retinal images (b) without phase modulation, and (c) with phase modulation.

To calculate the object wave, the phase modulation of the Fourier transformed patterns at the viewpoints is performed by using the phase value θi (i = a, b, c, and d) as the additional phase α(x, y) in Eq. (2). In this case, the additional phase α(x, y) is constant for each parallax image; thus, it should be described as αv, where v represents the viewpoint number.
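
Assigning the constant phases αv then amounts to tiling the 2 × 2 pattern (θa, θb; θc, θd) = (0, π; π, π) over the whole two-dimensional viewpoint array, as in the following sketch (array layout is an assumption):

```python
import numpy as np

def viewpoint_phases(cols, rows):
    """Constant phase alpha_v for each viewpoint: the 2x2 set
    (theta_a, theta_b; theta_c, theta_d) = (0, pi; pi, pi) tiled over the array."""
    tile = np.array([[0.0, np.pi],
                     [np.pi, np.pi]])
    reps = ((rows + 1) // 2, (cols + 1) // 2)
    return np.tile(tile, reps)[:rows, :cols]

alpha_v = viewpoint_phases(32, 16)   # e.g., 32 x 16 whole viewpoints, shape (16, 32)
```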

Figure 9 shows the retinal images calculated for several pupil positions relative to the viewpoints. Figure 9(a) shows the case when four viewpoints are contained in the pupil. Figures 9(b) and 9(c) show the cases when two viewpoints aligned in the horizontal and vertical directions, respectively, are contained in the pupil. When two viewpoints are contained, the spatial frequency of the regular patterns decreases. However, image degradation does not occur along the vertical and horizontal directions, as shown in Figs. 9(b) and 9(c), respectively, because the regular interference patterns are one-dimensional sinusoidal patterns.

Fig. 9 Retinal images calculated for several viewpoints contained in the pupil: (a) four viewpoints and two viewpoints aligned in (b) horizontal and (c) vertical directions.

The proposed technique generates regular fringe patterns having a high spatial frequency on the retina while providing a continuous viewing area, whereas the previous technique generates random speckle patterns [22]. The speckle patterns contain a wide range of spatial frequencies, and their lower spatial-frequency components (2–6 cycles/deg) are highly visible to human eyes. Conversely, the single high-frequency sinusoidal pattern generated by the proposed technique is invisible or barely visible to human eyes.

4. Experiments

The experimental system was constructed based on the modified reconstruction system described in Sec. 2.2, and the image-quality improvement technique described in Sec. 3 was verified.

The experimental setup is illustrated in Fig. 10. A 4f imaging system was used to remove the conjugate image and zero-order diffraction light from the reconstructed image. A spherical lens that generates the viewpoints was placed on the image plane. The 4f imaging system consisted of two Fourier transform lenses and a single-sideband filter on its Fourier plane. As the single-sideband filter, a horizontal slit was used to eliminate the conjugate image and zero-order diffraction light in the vertical direction [27, 28]. The calculation of the hologram pattern from the object wave was described previously in [22].
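
A minimal sketch of the single-sideband operation on the Fourier plane: keeping only one vertical half of the spectrum removes the conjugate image and zero-order light in the vertical direction. The ideal half-plane mask here stands in for the physical horizontal slit and is an idealization:

```python
import numpy as np

def single_sideband(field):
    """Pass one vertical sideband of the spectrum and return the filtered field."""
    F = np.fft.fftshift(np.fft.fft2(field))
    Ny = F.shape[0]
    mask = np.zeros_like(F)
    mask[:Ny // 2, :] = 1.0      # keep only the upper half of the spectrum
    return np.fft.ifft2(np.fft.ifftshift(F * mask))
```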

Fig. 10 4f imaging system using spherical lens on its image plane.

A liquid crystal on silicon SLM was used. The resolution was 4,096 × 2,400 pixels and the pixel pitch was p = 4.8 μm. The focal length of the two Fourier transform lenses constituting the 4f imaging system was f0 = 150 mm. A He–Ne laser (λ = 632.8 nm) was used as the light source. The screen size of the 4f imaging system was 0.90 in., which was equal to that of the SLM. The focal length of the spherical lens placed on the image plane was 750 mm. The size of the viewing area on the focal plane was 98.9 × 49.4 mm2 (w = 98.9 mm).

The number of real viewpoints was 16 × 8 (M = 16). The interval of the real viewpoints was 6.18 mm, which was larger than the average pupil diameter of the human eye (5 mm). The resolution of the parallax images was 256 × 150 pixels. Virtual viewpoints were added to increase the total number of viewpoints to 32 × 16, such that the interval of the whole viewpoints was halved to 3.09 mm, which was smaller than the average pupil diameter. The spatial frequency of the interference pattern was 60.1 cycles/deg.
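
The reported geometry follows directly from the Sec. 2.2 relations; a quick numerical check (values in meters):

```python
lam, f, p, M = 632.8e-9, 0.750, 4.8e-6, 16
w = lam * f / p                     # ~0.0989 m: 98.9 mm viewing-area width
real_interval = w / M               # ~6.18 mm between real viewpoints
whole_interval = real_interval / 2  # ~3.09 mm after adding virtual viewpoints
print(w * 1e3, real_interval * 1e3, whole_interval * 1e3)
```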

Parallax images of real objects were captured using a digital camera mounted on a computer-controlled translation stage that moved in two dimensions. The distance from the camera lens to the object plane was 750 mm, and the camera was moved horizontally and vertically with an interval of 6.18 mm to capture 16 × 8 parallax images.

For the evaluation of the generated retinal images, another digital camera was used in place of the human eye. A lens whose entrance pupil diameter was set to 5 mm (the average pupil diameter) was attached to the camera.

The retinal images captured by the digital camera are shown in Fig. 11. First, 3D images were generated without adding the virtual viewpoints. The retinal images captured at viewpoints (6, 3) and (6, 4) and at the intermediate position are shown. Figures 11(a) and 11(b), respectively, show the retinal images when uniform phase distributions and random phase distributions were added to the parallax images. When the uniform phase distributions were used, sharp images were observed at the two viewpoints but a dark image was observed at the intermediate position; thus, the produced viewing area was discontinuous. When the random phase distributions were used, the corresponding images were observed at the two viewpoints and an image was also observed at the intermediate position, so the viewing area was generated continuously. However, speckles were clearly observed in the reconstructed images, which greatly degraded the image quality.

Fig. 11 Reconstructed images at viewpoints (6, 3), (6, 4), and intermediate position; (a) without virtual viewpoints and with uniform phase distribution, (b) without virtual viewpoints and with random phase distribution, (c) with virtual viewpoints and without phase modulation, and (d) with virtual viewpoints and with phase modulation.

Then, the virtual viewpoints were added. Figure 11(c) shows the retinal images when the phases of the viewpoints were not modulated, and Fig. 11(d) shows those when the phases were modulated as described in Sec. 3. In both cases, images were observed at the two viewpoints and the intermediate position and speckles were not generated. The magnified images are shown in Fig. 12. As shown in this figure, fine sinusoidal patterns were observed instead of speckle patterns in the reconstructed images. The spatial frequency of the sinusoidal pattern in Fig. 12(b) was higher than that in Fig. 12(a). The structures of the experimentally observed sinusoidal patterns were similar to those obtained by the computer simulation shown in Fig. 8.

Fig. 12 Magnified images of reconstructed images with virtual viewpoints: (a) without phase modulation (Fig. 11(c)), and (b) with phase modulation (Fig. 11(d)).

Figure 13 shows the retinal images obtained at several viewpoints in the viewing area when the virtual viewpoints were added and the phases of the viewpoints were modulated. The entire parallax images could be observed at all viewpoints, and no image corruption was observed. The effectiveness of the modified reconstruction system described in Sec. 2.2 was thus verified.

Fig. 13 Reconstructed images captured at several real and virtual viewpoints in the viewing area.

Next, the number of real viewpoints contained in the whole viewpoints was changed, with the virtual viewpoints added and the phases of the viewpoints modulated. Figure 14 shows the retinal images captured at the upper-left viewpoints when the intervals of the real viewpoints were 6.18, 9.27, and 12.4 mm. The number of the whole viewpoints was 32 × 16 and their interval was 3.09 mm. In all cases, the viewing areas were produced continuously and no speckles were observed.

Fig. 14 Reconstructed images when the ratio of real to virtual viewpoints changes: interval of whole viewpoints was 3.09 mm, and that of real viewpoints was (a) 6.18, (b) 9.27, and (c) 12.4 mm.

5. Discussion

In Sec. 4, we experimentally verified, using the digital camera, the reduction of the speckle patterns and the generation of fine regular patterns in the 3D images. We also evaluated the 3D images with our eyes. When the virtual viewpoints were not added, the images were observed intermittently as the observation position moved in the viewing area. When the virtual viewpoints were added, no discontinuity in the 3D images was perceived. When the phase modulation was not used, several observers noticed weak fine structures in the 3D images; when the phase modulation was applied, the fine structures were not perceived by any observer. Because the 3D images were evaluated in a darkroom, the pupil diameters of the observers might have been larger than the average pupil diameter of 5 mm. Thus, the number of viewpoints contained in the pupil might have been four or more, such that the regular patterns with a lower spatial frequency, shown in Figs. 9(b) and 9(c), might not have been generated.

As shown in Figs. 8 and 12, the contrast of the sinusoidal distributions that enveloped the reconstructed 3D images was lower for the phase-modulation case than for the non-modulation case, and sinusoidal patterns with lower contrast are less visible. The contrast calculated from Eq. (3) is 0.50 for the phase-modulation case and 0.62 for the non-modulation case.

As seen in Figs. 8 and 12, the sinusoidal distributions form dot-like patterns, which have a sampling effect on the reconstructed images. The number of dots for the phase-modulation case is twice that for the non-modulation case, which might also have contributed to the improvement in image quality. The pitches of the dots on the retina are √2λs/d and 2λs/d for the phase-modulation and non-modulation cases, respectively. The pixel pitch of the parallax images is Mp on the screen, which is reduced to Mps/f on the retina. Because d < λf/Mp, the pitch of the dots is greater than the pixel pitch on the retina. Therefore, the proposed technique reduces the resolution of the parallax images. This fact explains why the images shown in Fig. 11(a) looked better than those shown in Fig. 11(d) at the viewpoints (6, 3) and (6, 4).
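
These pitches can be compared numerically using the experimental values (d = 3.09 mm, s = 18 mm, f = 750 mm, Mp = 16 × 4.8 μm); a sketch:

```python
import numpy as np

lam, d, s, f, Mp = 632.8e-9, 3.09e-3, 18e-3, 0.750, 16 * 4.8e-6
dot_mod = np.sqrt(2) * lam * s / d   # ~5.2 um dot pitch, phase-modulation case
dot_non = 2 * lam * s / d            # ~7.4 um dot pitch, non-modulation case
pixel = Mp * s / f                   # ~1.8 um pixel pitch projected onto the retina
print(dot_mod, dot_non, pixel)       # both dot pitches exceed the pixel pitch
```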

In this study, the proposed image-quality improvement technique was applied to the multiview-based calculation technique for holographic stereograms [22]. The proposed technique can also be applied to other calculation techniques for holographic stereograms. For the optical fabrication of holographic stereograms, it has been recommended that the interval of the viewpoints be equal to or smaller than the pupil diameter, e.g., 0.5 mm [5] or 1 mm [6]. The technique proposed in this article can be used to generate such high-density viewpoints.

6. Conclusion

In this study, we proposed techniques to produce a viewing area without discontinuity for computer-generated holographic stereograms without generating speckles in the reconstructed images. The number of viewpoints is virtually increased, and the phases of the distributions at the viewpoints are modulated so that, instead of speckles, regular sinusoidal patterns are generated with a spatial frequency high enough to be invisible to human eyes. The proposed technique was applied to the multiview-based holographic stereogram calculation technique and was verified experimentally. The interval of the original viewpoints was 6.18 mm, which was reduced to 3.09 mm by adding the virtual viewpoints. Using a digital camera, we confirmed that fine regular distributions were generated in the reconstructed images; these distributions could not be observed by human eyes, and the discontinuity in the viewing area was not perceived.

Acknowledgment

The SLM used in this study was provided by the National Institute of Information and Communications Technology (NICT), Japan. This study was supported by JSPS KAKENHI Grant Number 15H03987.

References and links

1. F. Yaraş, H. Kang, and L. Onural, “State of the art in holographic displays: a survey,” J. Disp. Technol. 6(10), 443–454 (2010).

2. R. V. Pole, “3-D imagery and holograms of objects illuminated in white light,” Appl. Phys. Lett. 10(1), 20–22 (1967).

3. D. J. De Bitetto, “Transmission bandwidth reduction of holographic stereograms recorded in white light,” Appl. Phys. Lett. 12(10), 343–344 (1968).

4. J. T. McCrickerd and N. George, “Holographic stereogram from sequential component photographs,” Appl. Phys. Lett. 12(1), 10–12 (1968).

5. D. J. DeBitetto, “Holographic panoramic stereograms synthesized from white light recordings,” Appl. Opt. 8(8), 1740–1741 (1969).

6. M. C. King, A. M. Noll, and D. H. Berry, “A new approach to computer-generated holography,” Appl. Opt. 9(2), 471–475 (1970).

7. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976).

8. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008).

9. L. I. Goldfischer, “Autocorrelation function and power spectral density of laser-produced speckle patterns,” J. Opt. Soc. Am. 55(3), 247–252 (1965).

10. H. J. Gerritsen, W. J. Hannan, and E. G. Ramberg, “Elimination of speckle noise in holograms with redundancy,” Appl. Opt. 7(11), 2301–2311 (1968).

11. T. S. McKechnie, “Speckle reduction,” in Laser Speckle and Related Phenomena, J. C. Dainty, ed. (Springer-Verlag, 1975).

12. J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. 34(17), 3165–3171 (1995).

13. Y. Kuratomi, K. Sekiya, H. Satoh, T. Tomiyama, T. Kawakami, B. Katagiri, Y. Suzuki, and T. Uchida, “Speckle reduction mechanism in laser rear projection displays using a small moving diffuser,” J. Opt. Soc. Am. A 27(8), 1812–1817 (2010).

14. M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Color image projection based on Fourier holograms,” Opt. Lett. 35(8), 1227–1229 (2010).

15. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009).

16. Y. Takaki and M. Yokouchi, “Speckle-free and grayscale hologram reconstruction using time-multiplexing technique,” Opt. Express 19(8), 7567–7579 (2011).

17. T. Kurihara and Y. Takaki, “Speckle-free, shaded 3D images produced by computer-generated holography,” Opt. Express 21(4), 4044–4054 (2013).

18. M. Makowski, “Minimized speckle noise in lens-less holographic projection by pixel separation,” Opt. Express 21(24), 29205–29216 (2013).

19. T. Utsugi and M. Yamaguchi, “Speckle-suppression in hologram calculation using ray-sampling plane,” Opt. Express 22(14), 17193–17206 (2014).

20. M. Matsumura, “Speckle noise reduction by random phase shifters,” Appl. Opt. 14(3), 660–665 (1975).

21. M. Yamaguchi, H. Endoh, T. Honda, and N. Ohyama, “High-quality recording of a full-parallax holographic stereogram with a digital diffuser,” Opt. Lett. 19(2), 135–137 (1994).

22. Y. Takaki and K. Ikeda, “Simplified calculation method for computer-generated holographic stereograms from multi-view images,” Opt. Express 21(8), 9652–9663 (2013).

23. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).

24. R. L. De Valois, H. Morgan, and D. M. Snodderly, “Psychophysical studies of monkey vision. 3. Spatial luminance contrast sensitivity tests of macaque and human observers,” Vision Res. 14(1), 75–81 (1974).

25. F. W. Campbell and D. G. Green, “Optical and retinal factors affecting visual resolution,” J. Physiol. 181(3), 576–593 (1965).

26. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).

27. O. Bryngdahl and A. Lohmann, “Single-sideband holography,” J. Opt. Soc. Am. 58(5), 620–624 (1968).

28. Y. Takaki and Y. Tanemoto, “Band-limited zone plates for single-sideband holography,” Appl. Opt. 48(34), H64–H70 (2009).
