Optica Publishing Group

Speckle-free, shaded 3D images produced by computer-generated holography

Open Access

Abstract

A hologram display technique that provides speckle-free, shaded reconstructed images is proposed. A three-dimensional object consists of object points; these object points are divided into multiple object point groups that are generated in a time-sequential manner. Each object point group consists of a two-dimensional (2D) array of object points that are separated so as to prevent interference among them. Each object point group is generated by displaying a 2D array of zone plates on a high-speed spatial light modulator (SLM). The amplitude distribution of the zone plates is modulated two-dimensionally based on Phong shading to shade the reconstructed images. The 2D amplitude distribution of the zone plates is decomposed into multiple binary patterns that are displayed by the SLM in a time-sequential manner. The proposed method is experimentally verified.

©2013 Optical Society of America

1. Introduction

Holography [1, 2] is a three-dimensional (3D) display technique that reconstructs the wavefront of light from objects so that holographic images satisfy all the physiological factors of human depth perception. A large number of studies have been conducted on the electronic implementation of holography. Computer-generated holograms (CGHs) can generate 3D images of both existing and non-existing objects. To date, the realism of 3D images generated by CGHs is inferior to that of two-dimensional (2D) images generated by modern computer graphics (CG) techniques, although holographic images can represent the depth of objects. Moreover, holography has an intrinsic problem of speckle generation in the reconstructed images. The speckles significantly impair the quality of reconstructed images. In this study, a novel CGH technique that provides speckle-free and shaded reconstructed images is proposed.

For images generated from object data using computers, shading techniques provide realism to the generated images. There is a significant difference in shading techniques between 2D and 3D image generation: in 3D image generation, shading has to change automatically depending on the eye position of a viewer, whereas in 2D image generation, shading is calculated for a predetermined viewpoint and does not change with the eye position. A large number of shading techniques have been developed in the CG field. However, research on shading techniques for CGHs has begun only recently. There are two approaches. One approach [3–5] uses a polygon-based model, which has been widely used in the CG field, to represent 3D objects. The other approach [6], which was proposed by our research group, uses a point-based model. The point-based model has traditionally been used in CGH techniques, and 3D objects are represented by an aggregate of object points. The object points are generated by zone plates, and the superposition of the zone plates gives a CGH pattern. In Ref. 6, the amplitude distribution of the zone plates is modulated two-dimensionally to shade reconstructed images.

Because holography uses coherent light during the recording and reconstruction processes, speckles are inherently generated in reconstructed images. Speckles cause high-frequency and high-contrast random patterns in the reconstructed images, and numerous techniques have been proposed to reduce them [7–12]. Most of them decrease the temporal or spatial coherence of the reconstruction light. In reconstructed images produced by CGHs, speckles also appear, although the recording process is numerically performed. Speckle reduction techniques have also been developed for CGHs [13–15]. A time-averaging effect was employed [13, 14], and low-coherence light sources, such as a light-emitting diode, were used to reconstruct holograms [15]. Recently, our research group proposed a time-multiplexing technique [16] for speckle reduction, which uses neither the time-averaging effect nor a low-coherence light source. This technique uses the point-based model, and multiple sets of sparse object points are generated at different times to prevent speckle generation. The sparse object points are generated by an array of zone plates that do not overlap each other. Speckle-reduction techniques have also been developed for digital holography, in which an optically generated hologram pattern is captured by an image sensor and a reconstructed image is numerically obtained using a computer. Ref. 17 shows a technique that uses multiple binarized hologram patterns to reduce speckles in numerically reconstructed images. We use multiple binary zone plates to reduce speckles in optically reconstructed images.

In this study, the shading technique that modulates the zone plates two-dimensionally and the speckle reduction technique that displays the zone plates in a time-multiplexing manner are combined to produce speckle-free and shaded reconstructed images. The resultant technique enables the generation of high-quality, realistic 3D images.

2. Theory

In this section, we explain the combination of the shading technique [6] and the speckle-reduction technique [16].

Figure 1 shows the point-based model used to represent 3D objects, which consist of an array of object points. When using the point-based model, the zone plate technique [18–20] can be used to calculate hologram distributions. The zone plate consists of concentric circular fringes and generates a spherical wave converging to a light point, i.e., an object point. Therefore, the superposition of the zone plates provides a hologram distribution.
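
As an illustration of the zone plate technique just described, the following sketch computes the fringe pattern of a single Fresnel zone plate (the patch size and object-point distance are assumed values for illustration, not taken from the paper):

```python
import numpy as np

# Illustrative Fresnel zone plate converging to one object point a distance
# z behind the hologram plane (z and N are assumptions; lam and pitch match
# the laser and DMD used in Sec. 3).
lam = 635e-9        # wavelength (m)
pitch = 13.68e-6    # sample pitch (m)
z = 10e-3           # hologram-to-object-point distance (m), assumed
N = 64              # side of the square zone-plate patch in pixels, assumed

ys, xs = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
x, y = xs * pitch, ys * pitch
# Concentric circular fringes of a converging spherical wave in the
# Fresnel approximation.
zone_plate = np.cos(np.pi * (x ** 2 + y ** 2) / (lam * z))

# A full hologram is the superposition of one such (shifted) zone plate
# per object point.
```

The fringe spacing shrinks toward the edge of the patch, which is what diffracts light toward the common focus.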

Fig. 1 Point-based method and zone plate technique.

In this study, the shading technique that we previously developed [6] is used. This technique was based on Phong shading [21]. Almost all the shading techniques developed in the CG field, including Phong shading, consider the direction of rays emitted from the object surface. Therefore, the conversion from wavefront to rays needs to be considered to utilize the ideas of the shading techniques developed in the CG field. As shown in Fig. 2 , a zone plate generates light that converges to an object point. We assume that each point on the zone plate emits a ray and that all the rays emitted from the zone plate converge to the object point and then are re-emitted. Therefore, the intensity at the point at which a ray is emitted on the zone plate determines the ray’s intensity. By modulating the amplitude of the zone plate two-dimensionally, different rays proceeding in different directions can be provided with different light intensities.

Fig. 2 Rays converging to an object point from the zone plate and variable view vector.

Figure 3 shows the four unit vectors used in Phong shading. The normal vector n is normal to the object surface, and the light vector l indicates the direction to a light source. The reflection vector r indicates the direction of reflected light; the angle between r and n is equal to that between l and n. The view vector v indicates the direction to the viewpoint of a viewer. The view vector is a fixed vector for 2D image generation. In our previous study, a variable view vector was proposed for the generation of holographic 3D images [6]. As shown in Fig. 2, the variable view vector is defined for the zone plate. The coordinate origin is assumed to be located at the center of the zone plate. Consider a line connecting an object point and a viewpoint of a viewer; the position at which this line intersects the zone plate is denoted by (x, y). When the distance between the zone plate and the object point is denoted by z, the variable view vector is given by v = (−x, −y, z) / (x² + y² + z²)^1/2. For each point on the zone plate, the variable view vector can be calculated. The intensity of light emitted from the object point is calculated using the variable view vector, and the result is used to modulate the amplitude at the position (x, y) on the zone plate. The 2D amplitude modulation of the zone plate enables the object point to emit rays toward different viewpoints with different light intensities.
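
The variable view vector above can be sketched directly (a minimal illustration of the stated formula):

```python
import numpy as np

# Variable view vector for the point (x, y) on the zone plate and an object
# point at distance z: v = (-x, -y, z) / sqrt(x^2 + y^2 + z^2).
def variable_view_vector(x, y, z):
    return np.array([-x, -y, z]) / np.sqrt(x * x + y * y + z * z)

# At the center of the zone plate the view vector points straight along the
# z axis; off-center points see the object point from an oblique direction.
```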

Fig. 3 Unit vectors in a Phong reflection model.

Phong shading calculates three reflection light components: diffuse reflection light, ambient reflection light, and specular reflection light. The diffuse reflection light is uniformly emitted in all directions from the object surface; its intensity does not depend on the emitting direction of rays and is given by Id = kd Il |n·l|, where kd is the diffuse reflection constant and Il is the illumination light intensity. The ambient reflection light is the sum of light reflected from surrounding objects; its intensity also does not depend on the emitting direction of rays and is given by Ia = ka I0, where ka is the reflectance of the object surface and I0 is the light intensity of the ambient light. These two components do not change depending on the view vector; thus, they modulate the zone plate uniformly, as shown in Fig. 4. The specular reflection light consists of bundles of rays reflected in a range of directions centered on the reflection vector r; its intensity is given by Is(x, y) = ks Il |r·v|^n, where ks is the specular reflection constant and n is the exponent representing the roughness of the surface. This component changes depending on the view vector, so it modulates the zone plate two-dimensionally, as shown in Fig. 4. Thus, the amplitude modulation of the zone plate is given by m(x, y) = [Id + Ia + Is(x, y)]^1/2.
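
The three reflection components and the resulting modulation m(x, y) can be sketched as follows (the default parameter values mirror those used later in the experiments; the function is an illustrative reading of the equations, not the authors' code):

```python
import numpy as np

# Zone-plate amplitude modulation m = sqrt(Id + Ia + Is), evaluated for one
# view vector v. Calling it per point (x, y) on the plate, with the variable
# view vector for that point, gives the 2D modulation.
def phong_modulation(n_vec, l_vec, v_vec, kd=0.35, ks=0.60, shininess=10.0,
                     ka=0.05, Il=1.0, I0=1.0):
    # Diffuse and ambient terms: independent of the view vector, so they
    # modulate the zone plate uniformly.
    Id = kd * Il * abs(np.dot(n_vec, l_vec))
    Ia = ka * I0
    # Reflection vector: mirror image of l about the surface normal n.
    r = 2.0 * np.dot(n_vec, l_vec) * np.asarray(n_vec) - np.asarray(l_vec)
    # Specular term: view dependent, hence two-dimensional over the plate.
    Is = ks * Il * abs(np.dot(r, v_vec)) ** shininess
    return np.sqrt(Id + Ia + Is)
```

With n = l = v = (0, 0, 1) and the default constants, the three terms sum to 1.0, so the modulation peaks where the specular lobe faces the viewer and falls off as the view vector tilts away from r.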

Fig. 4 Two-dimensional amplitude modulation of the zone plate based on Phong shading.

To avoid the generation of speckles in reconstructed images, two-dimensionally modulated zone plates are displayed on a spatial light modulator (SLM) using the time-multiplexing technique, as proposed in Ref. 16. The object points are divided into several object point groups, denoted by Gt, as shown in Fig. 5 . The distances between object points in each group are made sufficiently large so as to prevent interference among the object points in each group. The groups of the zone plates that generate the object point groups are displayed on the SLM in a time-multiplexing manner. Thus, interference does not occur between different object point groups because they are generated at different times. Hence, no interference occurs in the hologram reconstruction process so that speckle-free reconstructed images are obtained.

Fig. 5 Object surface consisting of object point groups Gt.

A high-speed SLM is used to generate object point groups at a high frame rate. As shown in Fig. 6, each group consists of equally spaced object points in the horizontal and vertical directions. The SLM's display screen is regularly divided in the horizontal and vertical directions in accordance with the arrangement of the object points, so that the display screen consists of a 2D array of rectangular areas. One zone plate is displayed in one rectangular area. The object points are divided horizontally and vertically into M groups and N groups, respectively. Thus, the total number of object point groups is M × N, and an object point group is represented by Gm,n (0 ≤ m ≤ M − 1, 0 ≤ n ≤ N − 1). The positions of the rectangular areas on the SLM screen are shifted both horizontally and vertically to generate different object point groups. The hologram pattern displayed on the SLM that generates the object point group Gm,n is denoted by Hm,n.
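
The interleaved grouping can be sketched as follows (M and N take the values used in the experiments; the modulo indexing is an assumption consistent with Fig. 6):

```python
# Object points on a regular grid are assigned to M x N groups so that the
# points generated simultaneously are M columns and N rows apart and
# therefore do not interfere (M = 16, N = 8 as in Sec. 3).
M, N = 16, 8

def group_of_point(col, row):
    """Group indices (m, n) of the object point at grid (col, row)."""
    return (col % M, row % N)

# All points of group (m, n) share one hologram pattern H(m, n) on the SLM.
```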

Fig. 6 Speckle-free generation of object points using the time-multiplexing technique.

A high-speed SLM tends to have a poor grayscale representation. Here we assume that the SLM can generate only binary images. When hologram patterns are displayed as binary images, reconstructed images have a poor grayscale representation. In a previous study on reducing speckles [16], the grayscale representation was improved by the time-multiplexing technique: multiple binary images illuminated with different laser powers were used to generate one object point group. In that previous study, the amplitude of the zone plates was uniformly modulated in each rectangular area. In this study, the amplitude of the zone plates is modulated two-dimensionally in order to shade the reconstructed images. The hologram pattern Hm,n, which consists of a 2D array of the two-dimensionally modulated zone plates, is decomposed into multiple binary patterns, as shown in Fig. 7. When the number of binary patterns used to represent one hologram pattern is denoted by Q, the binary images displayed by the high-speed SLM are denoted by Bm,n^q (0 ≤ q ≤ Q − 1). The illumination laser power for each binary pattern is appropriately determined to improve the grayscale representation of the reconstructed images.

Fig. 7 Grayscale representation of two-dimensionally modulated zone plates by the time-multiplexing technique.

Because the time-multiplexing technique is used both for reducing speckles and for improving the grayscale representation, a very high frame rate of the SLM is required. When the frame rate of the SLM is denoted by f Hz, the frame rate of generating reconstructed images is given by f / (MNQ) Hz.
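
As a quick check of this budget with the experimental values reported in Sec. 3:

```python
# Frame-rate budget: SLM frame rate f, M x N object-point groups, Q binary
# patterns per group (values f = 22,727 Hz, M = 16, N = 8, Q = 8 from Sec. 3).
f, M, N, Q = 22727, 16, 8, 8
image_frame_rate = f / (M * N * Q)
# 22727 / 1024, which rounds to the 22.2 Hz quoted in Sec. 3.1.
```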

3. Experiment

3.1 Experimental system

The proposed hologram display technique was verified experimentally to confirm that it provides speckle-free and shaded reconstructed images.

A digital micromirror device (DMD) was used as the high-speed SLM. The DLP Discovery 4100 (Texas Instruments, Inc.) was used with the ALP-4.1 high-speed accessory software package, which enables the generation of binary images at a frame rate of 22,727 Hz. The image resolution was 1,024 × 768 pixels, the pixel pitch was 13.68 μm, and the size of the image area was 0.7 in.

Figure 8 shows the 4f optical system used for hologram reconstruction. This system consists of two Fourier transform lenses and contains a single-sideband filter on its Fourier plane to eliminate the conjugate image and the zero-order diffraction light that impair reconstructed images [22]. The single-sideband technique requires the use of a half-zone plate instead of a zone plate for generating an object point [23, 24]. The half-zone plate is either the upper half or the lower half of the zone plate. It generates both a spherical wave and a conjugate spherical wave that are spatially separated on the Fourier plane, so that the single-sideband filter can remove the conjugate image. Zero-order diffraction light, which forms a sharp peak at the center of the Fourier plane, can also be removed by the filter. In Sec. 2, we used the entire zone plate for the explanation because it has usually been used to explain the zone plate technique. For the entire zone plate, the center of the concentric circular fringes is located at its center. In contrast, for the half-zone plate, it is located at the midpoint of the lower or upper edge of the zone plate. The half-zone plates are displayed in rectangular areas whose height is half the width.

Fig. 8 4f optical system used for the experiments.

The size of the rectangular area corresponding to one half-zone plate was 64 × 32 pixels, which was experimentally determined to avoid interference between object points in our previous study [16]. Because the resolution of the SLM was 1,024 × 768, the SLM screen was divided into 16 × 24 rectangular areas to generate 16 × 24 object points, which constitute one object point group. The positions of the rectangular areas were shifted 16 times horizontally and 8 times vertically, so that the total number of object point groups was 16 × 8 and the total number of object points constituting a reconstructed image was 256 × 192. The number of binary images used for one object point group to represent the grayscale of its hologram pattern was eight. Thus, the frame rate of generating reconstructed images was 22.2 Hz (M = 16, N = 8, and Q = 8).
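
The partition arithmetic in this paragraph can be verified directly (all values come from the text):

```python
# Screen-partition sanity check for the experimental system.
slm_w, slm_h = 1024, 768          # DMD resolution in pixels
rect_w, rect_h = 64, 32           # rectangle holding one half-zone plate
areas = (slm_w // rect_w, slm_h // rect_h)             # zone plates per frame
shifts = (16, 8)                  # horizontal and vertical shift counts
points = (areas[0] * shifts[0], areas[1] * shifts[1])  # total object points
```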

In the experiments, in order to clearly show the effects of shading, the surfaces of 3D objects were represented by only depth images, i.e., texture images were not used. The resolution of the depth images was 256 × 192 and the number of gray levels was 256. The normal vector n at each object point was calculated by computing a vector normal to the plane containing adjacent object points.
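
The paper does not specify how the normal is computed from the adjacent points; a common choice, used here as an assumption, is central differences over the depth image:

```python
import numpy as np

# Estimate the surface normal at grid position (i, j) of a depth image by
# crossing the two tangent vectors spanned by neighbouring depth samples
# (central differences; an assumption, since the paper only says the normal
# of the plane containing adjacent object points is used).
def surface_normal(depth, i, j, dx=1.0, dy=1.0):
    tx = np.array([2 * dx, 0.0, float(depth[j, i + 1]) - float(depth[j, i - 1])])
    ty = np.array([0.0, 2 * dy, float(depth[j + 1, i]) - float(depth[j - 1, i])])
    n = np.cross(tx, ty)
    return n / np.linalg.norm(n)
```

On a flat region of the depth image this yields the axial normal (0, 0, 1), as expected.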

Here, the method used to decompose a continuous-valued hologram pattern into multiple binary patterns is explained. The decomposition method proposed in Ref. 25 was used. In that study [25], the grayscale representation of the reconstructed images generated by horizontally scanning holography [26] was improved. Horizontally scanning holography also uses the DMD as the high-speed SLM. Four decomposition methods—the bit-plane method, the intensity threshold method, the amplitude threshold method, and the histogram method—were examined in Ref. 25. A continuous-valued hologram pattern is quantized into multiple binary patterns using multiple threshold levels. In the histogram method, the threshold levels are determined so that the number of pixels having intensities between each pair of adjacent threshold levels becomes constant. This variable-threshold method provides 3D images with good linearity in the grayscale representation, as well as the highest brightness, as shown in Ref. 25. In the amplitude threshold method, multiple amplitudes with regular intervals are used as the threshold levels. This regular-threshold method also provides good linearity in the grayscale representation and the second-highest brightness. Because the amplitude threshold method requires less computation time, it was used in this study. The illumination laser power for pattern q is the difference between the light intensities of adjacent threshold levels, which is given by Iq = α[(q + 1)² − q²] = α(2q + 1), where α is a constant coefficient. In the experimental system, because the hologram pattern was decomposed into eight binary images, the illumination light intensities were α, 3α, 5α, ⋅⋅⋅, and 15α. The calculation time required to binarize 128 continuous-valued holograms into 1,024 binary patterns was 2.1 s, which was approximately constant for all reconstructed images. The calculation was performed using a PC with an Intel Core i5-750 (2.67 GHz) CPU and 2 GB of RAM.
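
A sketch of the amplitude threshold decomposition under the stated intensity weights (the exact threshold placement is an assumption; equally spaced amplitudes in (0, 1] are used here). Summing the binary patterns with weights 2q + 1 reproduces an intensity proportional to the squared amplitude, consistent with Iq = α(2q + 1):

```python
import numpy as np

# Decompose a continuous-valued hologram (amplitudes in [0, 1]) into Q
# binary patterns using equally spaced amplitude thresholds. Pattern q is
# displayed with illumination intensity proportional to (2q + 1), so the
# time-integrated intensity at a pixel that passes k thresholds is
# 1 + 3 + ... + (2k - 1) = k^2, i.e. proportional to amplitude squared.
def amplitude_threshold_decompose(hologram, Q=8):
    thresholds = (np.arange(Q) + 1) / Q            # equally spaced amplitudes
    patterns = [(hologram >= t).astype(np.uint8) for t in thresholds]
    weights = [2 * q + 1 for q in range(Q)]        # relative laser intensities
    return patterns, weights
```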

A laser diode with a wavelength of 635 nm was used as a light source. Pulse width modulation was used to modulate the laser power. A microcontroller was used to control the pulse width. As the microcontroller, an H8/300 microcontroller (Renesas Electronics) was used. The microcontroller receives image update signals from a DMD driver and generates pulses to modulate the laser diode.

The 4f optical system consisted of two identical Fourier transform lenses, whose focal length was 150 mm. Thus, the magnification of the 4f optical system was unity so that the hologram screen size was 0.7 in. The viewing zone angle was given by 2 sin−1(λ / 2p), where λ was the wavelength of light and p was the pixel pitch of the SLM. The horizontal and vertical viewing zone angles were 2.7° and 1.3°, respectively. As a single-sideband filter, a variable slit was used, and the lower half of the Fourier transformed image was cut by the slit.
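
A quick check of the viewing-zone-angle formula (attributing the halved vertical angle to the single-sideband filter is our assumption; the paper only quotes the two values):

```python
import math

# theta = 2 * asin(lam / (2 p)) with the experimental wavelength and pitch.
lam = 635e-9      # wavelength (m)
p = 13.68e-6      # DMD pixel pitch (m)

theta_h = math.degrees(2 * math.asin(lam / (2 * p)))   # horizontal angle
# The vertical angle is quoted as half the horizontal one; we read this as
# the single-sideband filter discarding half of the vertical bandwidth.
theta_v = theta_h / 2
```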

3.2 Experimental results

The depth representation of 3D images generated by the proposed technique was verified. Each 3D image consisted of two objects located in front of a screen; the distances of the left and right objects from the screen were 25 mm and 10 mm, respectively. A digital camera (EOS Kiss X4, Canon Inc.) was used to capture reconstructed images. The focal length of the camera lens was 200 mm, and the camera was located at a distance of 530 mm from the image plane of the 4f optical system. The exposure time was ~1 s. The camera was mounted on a tripod. The f-number of the lens was 5.6 for capturing the reconstructed image shown in Fig. 9, to show the effects of changing the focus of the camera, and was 10 for capturing the other reconstructed images shown in Figs. 10–12. Figure 9 shows photographs of a reconstructed image captured by the digital camera, in which speckles are not observed. In Figs. 9(a) and 9(b), the focus of the camera was set at the stems of the left and right objects, respectively; the two objects were found to be produced at different depths. The shading was performed with material parameters of kd = 0.35, ks = 0.60, n = 10.0, and ka = 0.05 and lighting parameters of l = (0, 0, 1), Il = 1.0, and Ia = 1.0.

Fig. 9 Photographs of reconstructed images with the camera focused on (a) left object and (b) right object.

The reconstructed image generated by the proposed technique was compared with that generated by CG software. Figure 10(a) shows the image generated by the CG software, and Fig. 10(b) shows a photograph of the reconstructed image. The object was produced in front of the screen, and the center of the object was located at a distance of 13 mm from the screen. For both image generations, the same shading parameters were used: material parameters of kd = 0.35, ks = 0.60, n = 5.0, and ka = 0.05 and lighting parameters of l = (0, 0, 1), Il = 1.0, and Ia = 1.0. The reconstructed image was captured directly in front of the screen, and the CG image was rendered by locating the camera viewpoint directly in front of the screen; v = (0, 0, 1). The shading in the reconstructed image generated by the proposed technique was similar to that obtained in the 2D image generated by the CG software.

Fig. 10 Comparison of shading in images generated by (a) CG software and (b) proposed holographic technique.

Figure 11 shows the reconstructed images when the light vector l was changed. Figures 11(a) and 11(b) show the reconstructed images when the object was illuminated from the left, l = (−0.71, 0, 0.71), and from the lower right, l = (0.54, 0.45, 0.71), respectively. Both images were captured in front of the screen. The reflection of light on the object surface changed depending on the illumination direction. The material parameters were kd = 0.35, ks = 0.60, n = 5.0, and ka = 0.05, and the lighting parameters were Il = 1.0 and Ia = 1.0. A movie of the reconstructed image, in which the light vector changes with time, is attached to this figure.

Fig. 11 Photographs of reconstructed images when object was illuminated from (a) left and (b) lower right (Media 1).

Figure 12 shows the reconstructed images generated with different material parameters. Figure 12(a) shows the reconstructed image when the object had a glossy surface: kd = 0.25, ks = 0.70, n = 10.0, and ka = 0.05. Figure 12(b) shows the reconstructed image when the object had a matte surface: kd = 0.75, ks = 0.20, n = 5.0, and ka = 0.05. For both reconstructed images, the lighting parameters were l = (0, 0, 1), Il = 1.0, and Ia = 1.0. The difference between the object materials was clearly observed. Movies of the reconstructed images in which the 3D object rotates are attached to this figure, together with a movie in which the material parameters change with time.

Fig. 12 Photographs of reconstructed images when the object had (a) glossy surface (Media 2) and (b) matte surface (Media 3); transition of material parameters is shown in (Media 4).

4. Discussion

The reduction in speckles significantly improved the quality of the reconstructed images as compared with the reconstructed images obtained in our previous study [6]. The presence of the speckles made the object surfaces look coarse and rough. Because the speckles were eliminated without using any averaging effect, blur was not observed in the reconstructed images.

The frame rate of the experimental system was 22.2 Hz. Because still reconstructed images were displayed and the image size was small, obvious flicker was not perceived. However, if moving 3D images were displayed or the image size became larger, flicker might be observed. The use of a higher-speed SLM or of multiple SLMs might decrease the perception of flicker.

The beam width of the object points constituting the reconstructed images depends on the size of the rectangular area and the distance of the object points from the screen, as described in Ref. 16. In the reconstructed images shown in Figs. 10–12, the width of the object points was 14 μm, and the pitch of the object points was 55 μm [16]. By magnifying the reconstructed images, the separation of the object points was observed. However, the gap between the object points was too small to be observed by the eye. In binary patterns corresponding to higher threshold levels, the width of the zone plate decreases, as illustrated in Fig. 7. This might cause an increase in the width of the object points.

Because the pixel pitch of the DMD was large, the viewing zone angle of the experimental system was small. The screen size of the experimental system was equal to that of the SLM. The proposed technique should be combined with holographic display techniques that enlarge the viewing zone angle and the screen size [26, 27].

The experimental system provided monochromatic reconstructed images because a red laser was used as the light source. To improve the realism of the reconstructed images, the reconstruction of color images is important. A color display system can be constructed using three-channel systems generating red, green, and blue images. A time-multiplexing system can also be used, which sequentially generates red, green, and blue images. Red, green, and blue lasers are used as light sources. On the basis of Phong shading, the amplitude distribution m(x, y) is individually calculated for each color. As the pitch of the fringes constituting the zone plates depends on the wavelength of the light, the zone plates are generated differently for each color. Thus, the generated zone plates are modulated by the amplitude distribution m(x, y) of each color.

In this study, the zone plate was modulated based on Phong shading. In addition to the Phong shading technique, a large number of shading techniques have been proposed in the CG field, and some of them can also be applied to determine the 2D modulation of the zone plates. The bidirectional reflectance distribution function (BRDF) [28] is a technique that can provide highly realistic images. The BRDF is a four-dimensional function that defines the reflection of light on the object surface. It provides a 2D angular intensity distribution of reflection light for the incident light vector l. Therefore, this 2D intensity distribution could be used to modulate the zone plate.

5. Conclusion

A shading technique and a speckle reduction technique were combined to improve the image quality of the reconstructed images produced by computer-generated holograms. The zone plates were displayed on a high-speed SLM in a time-multiplexing manner to avoid interference between object points and to modulate the amplitude of the zone plates two-dimensionally. A DMD with a frame rate of 22,727 Hz was used to generate the reconstructed images consisting of 256 × 192 object points at a frame rate of 22.2 Hz. The 2D amplitude modulation of the zone plate was determined on the basis of Phong shading. The generation of speckle-free and shaded reconstructed images was experimentally verified.

Acknowledgments

This study was supported by a Grant-in-Aid for Challenging Exploratory Research from the Japan Society for the Promotion of Science (JSPS), No. 23656234.

References and links

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]   [PubMed]  

2. E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory,” J. Opt. Soc. Am. 52(10), 1123–1130 (1962). [CrossRef]  

3. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44(22), 4607–4614 (2005). [CrossRef]   [PubMed]  

4. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]   [PubMed]  

5. H. Nishi, K. Hayashi, Y. Arima, K. Matsushima, and S. Nakahara, “New techniques for wave-field rendering of polygon-based high-definition CGHs,” Proc. SPIE 7957, 79571A (2011). [CrossRef]  

6. T. Kurihara and Y. Takaki, “Shading of a computer-generated hologram by zone plate modulation,” Opt. Express 20(4), 3529–3540 (2012). [CrossRef]   [PubMed]  

7. L. I. Goldfischer, “Autocorrelation function and power spectral density of laser-produced speckle patterns,” J. Opt. Soc. Am. 55(3), 247–252 (1965). [CrossRef]  

8. H. J. Gerritsen, W. J. Hannan, and E. G. Ramberg, “Elimination of speckle noise in holograms with redundancy,” Appl. Opt. 7(11), 2301–2311 (1968). [CrossRef]   [PubMed]  

9. T. S. McKechnie, “Speckle reduction,” in Laser Speckle and Related Phenomena, J. C. Dainty, ed. (Springer-Verlag, 1975).

10. M. Matsumura, “Speckle noise reduction by random phase shifters,” Appl. Opt. 14(3), 660–665 (1975). [CrossRef]   [PubMed]  

11. J. M. Huntley and L. Benckert, “Speckle interferometry: noise reduction by correlation fringe averaging,” Appl. Opt. 31(14), 2412–2414 (1992). [CrossRef]   [PubMed]  

12. M. Yamaguchi, H. Endoh, T. Honda, and N. Ohyama, “High-quality recording of a full-parallax holographic stereogram with a digital diffuser,” Opt. Lett. 19(2), 135–137 (1994). [CrossRef]   [PubMed]  

13. J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. 34(17), 3165–3171 (1995). [CrossRef]   [PubMed]  

14. T. Kozacki, M. Kujawińska, G. Finke, B. Hennelly, and N. Pandey, “Extended viewing angle holographic display system with tilted SLMs in a circular configuration,” Appl. Opt. 51(11), 1771–1780 (2012). [CrossRef]   [PubMed]  

15. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009). [CrossRef]   [PubMed]  

16. Y. Takaki and M. Yokouchi, “Speckle-free and grayscale hologram reconstruction using time-multiplexing technique,” Opt. Express 19(8), 7567–7579 (2011). [CrossRef]   [PubMed]  

17. N. Pandey and B. Hennelly, “Quantization noise and its reduction in lensless Fourier digital holography,” Appl. Opt. 50(7), B58–B70 (2011). [CrossRef]   [PubMed]  

18. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9(11), 405–407 (1966). [CrossRef]  

19. G. L. Rogers, “Gabor diffraction microscopy: the hologram as a generalized zone-plate,” Nature 166(4214), 237 (1950). [CrossRef]   [PubMed]  

20. W. J. Siemens-Wapniarski and M. P. Givens, “The experimental production of synthetic holograms,” Appl. Opt. 7(3), 535–538 (1968). [CrossRef]   [PubMed]  

21. B. T. Phong, “Illumination for computer generated pictures,” Commun. ACM 18(6), 311–317 (1975). [CrossRef]  

22. O. Bryngdahl and A. Lohmann, “Single-sideband holography,” J. Opt. Soc. Am. 58(5), 620–624 (1968). [CrossRef]  

23. T. Mishina, F. Okano, and I. Yuyama, “Time-alternating method based on single-sideband holography with half-zone-plate processing for the enlargement of viewing zones,” Appl. Opt. 38(17), 3703–3713 (1999). [CrossRef]   [PubMed]  

24. Y. Takaki and Y. Tanemoto, “Band-limited zone plates for single-sideband holography,” Appl. Opt. 48(34), H64–H70 (2009). [CrossRef]   [PubMed]  

25. Y. Takaki, M. Yokouchi, and N. Okada, “Improvement of grayscale representation of the horizontally scanning holographic display,” Opt. Express 18(24), 24926–24936 (2010). [CrossRef]   [PubMed]  

26. Y. Takaki and N. Okada, “Hologram generation by horizontal scanning of a high-speed spatial light modulator,” Appl. Opt. 48, 3255–3260 (2009). [CrossRef]   [PubMed]  

27. Y. Takaki and Y. Tanemoto, “Modified resolution redistribution system for frameless hologram display module,” Opt. Express 18(10), 10294–10300 (2010). [CrossRef]   [PubMed]  

28. G. J. Ward, “Measuring and modeling anisotropic reflection,” ACM SIGGRAPH Computer Graphics 26(2), 265–272 (1992). [CrossRef]  

Supplementary Material (4)

Media 1: MOV (2914 KB)     
Media 2: MOV (2288 KB)     
Media 3: MOV (2099 KB)     
Media 4: MOV (2569 KB)     
