
Synthesizing computer generated holograms with reduced number of perspective projections

Open Access

Abstract

We present an improved method for recording a synthesized Fourier hologram under incoherent white-light illumination. The advantage of the method is that the number of real projections needed for generating the hologram is significantly reduced. The new method, designated synthetic projection holography, is demonstrated experimentally. We show that synthetic projection holography barely degrades the reconstructed images, and that the quality of the synthetic projection hologram can be further improved by increasing the number of observed projections.

©2007 Optical Society of America

1. Introduction

Digital holography has become an important tool in 3-D imaging since the pioneering work of Yamaguchi and Zhang [1]. The strictly controlled conditions required by conventional holographic recording have prevented holograms from becoming commonly used in many practical applications. By strictly controlled conditions we mean the need for an intense, coherent light source to illuminate the object and the stringent stability requirements of the optical setup. A breakthrough was made in Refs. [2–4], which proposed a method of synthesizing computer generated holograms (CGHs) of realistic objects under incoherent white-light illumination by capturing projections of the 3-D object from different perspectives and fusing these projections in a specific computing process. Yet, the recording process required for achieving these high-quality holograms is extremely long and complicated because of the tremendous number of different perspective projections needed. Recently, Sando et al. [5] proposed to synthesize the CGH more efficiently by reducing the 2-D scanning process of the 3-D object to a 1-D azimuthal scan. Shaked et al. [6] presented a different method, called integral holography (IH), in which the 3-D scene is captured by a microlens array. Since the entire projection set is acquired in a single shot, IH removes the limitation of the complicated recording process, although this advantage comes with the penalty of low resolution.

In this paper we propose an alternative method for synthesizing high-quality holograms under incoherent white-light illumination with a significantly reduced number of required projections. The method relies on a geometric image interpolation called view synthesis [7]. We designate the proposed holographic acquisition as synthetic projection holography (SPH). It should be noted that Park et al. [8] used a similar principle for resolution enhancement of the reconstruction in computational integral imaging.

2. Synthetic projection holography

In the proposed method, instead of capturing the complete set of observed projections from different perspectives, only a few chosen observed projections are captured by the digital camera. A computational process then synthesizes a series of synthetic middle projections between every two consecutive observed projections. As demonstrated in Fig. 1, we imitate the projections that would have been observed between every two real observed projections. Next, based on the resulting projection set, the computer generates the hologram using the algorithm of Ref. [2]. In the following subsections, the view synthesis algorithm (Subsection 2.1) and the hologram generating method (Subsection 2.2) are described.

2.1 View synthesis algorithm

The view synthesis algorithm, intensively used in computer vision, predicts new virtual viewpoints from any given number of real views of the scene. Essentially, the algorithm estimates how the scene would look from new viewpoints [7]. Like most other view synthesis algorithms, this one is based on interpolating the locations and intensities of pixels in two different views after finding corresponding points in these views. It should be noted that other well-known view synthesis algorithms may also be suitable for the proposed method.

At the beginning of the synthesis stage, illustrated in Fig. 1, two complete correspondence maps are computed. Each element of a complete correspondence map describes the displacement of a pixel from one view to the other. Three steps are required for generating a complete correspondence map. In the first step, two images of vertical edges are produced by convolving the original views with the kernel [1 2 1; 0 0 0; −1 −2 −1]. Then, an initial sparse correspondence map is estimated by finding matches between corresponding pixels in the two edge images. Finally, a complete correspondence map is determined by interpolating the initial sparse correspondence map. Once the complete correspondence maps are established, a sequence of synthesized views is generated. Each synthesized view is estimated from two observed projections. Its location in the area between the two observed projections is determined by the relative displacement D/N, where N ≥ D ≥ 0 and N is the total number of synthetic and observed projections in the set. The in-between synthesized views are computed in two phases.
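Before turning to those two phases, the three correspondence-map steps can be sketched in code. The following is a minimal NumPy/SciPy illustration rather than the authors' implementation: the patch size, disparity search range, edge threshold, and the simple row-wise block matching and linear interpolation are all assumptions made here for concreteness. Running it once per direction (swapping the two views) gives the two complete correspondence maps C_1 and C_2.

```python
# Minimal sketch of the correspondence-map construction (assumed parameters).
import numpy as np
from scipy.signal import convolve2d

EDGE_KERNEL = np.array([[ 1,  2,  1],
                        [ 0,  0,  0],
                        [-1, -2, -1]], dtype=float)

def edge_image(view):
    """Step 1: convolve a view with the kernel quoted in the text."""
    return convolve2d(view, EDGE_KERNEL, mode='same', boundary='symm')

def sparse_correspondence(e1, e2, max_disp=20, half=3, thresh=None):
    """Step 2: match strong edge pixels of e1 to e2 along the same row (SAD cost)."""
    if thresh is None:
        thresh = 0.5 * np.abs(e1).max()
    sparse = np.full(e1.shape, np.nan)              # NaN marks unmatched pixels
    rows, cols = e1.shape
    for y in range(half, rows - half):
        for x in range(half + max_disp, cols - half - max_disp):
            if abs(e1[y, x]) < thresh:
                continue
            patch = e1[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - e2[y - half:y + half + 1,
                                       x + d - half:x + d + half + 1]).sum()
                     for d in range(-max_disp, max_disp + 1)]
            sparse[y, x] = np.argmin(costs) - max_disp   # signed horizontal shift
    return sparse

def complete_correspondence(sparse):
    """Step 3: densify the sparse map by row-wise linear interpolation."""
    full = np.zeros_like(sparse)
    for y in range(sparse.shape[0]):
        known = np.flatnonzero(~np.isnan(sparse[y]))
        if known.size >= 2:
            full[y] = np.interp(np.arange(sparse.shape[1]), known, sparse[y, known])
    return full
```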

Fig. 1. Illustration of the view synthesis algorithm.

First, both observed views are warped to the location of the new synthesized view in order to obtain two warped projections P_1^W and P_2^W. The warp functions are computed from the two observed projections P_1^O and P_2^O, the correspondence maps C_1: P_1^O → P_2^O and C_2: P_2^O → P_1^O, and the relative displacement D/N of the new synthesized view, as follows:

$$P_1^W\left(\left\lfloor x+\tfrac{1}{2}+\tfrac{D}{N}\times C_1(x,y)\right\rfloor ,\, y\right)=\left(1-\tfrac{D}{N}\right)\times P_1^O(x,y)$$
$$P_2^W\left(\left\lfloor x-\tfrac{1}{2}+\left(1-\tfrac{D}{N}\right)\times C_2(x,y)\right\rfloor ,\, y\right)=\tfrac{D}{N}\times P_2^O(x,y),$$

where x, y are the pixel coordinates, the superscript O denotes a projection observed by the camera, and ⌊·⌋ denotes rounding down to the nearest integer. In the second phase, the warped projections P_1^W and P_2^W are summed (P_D^S = P_1^W + P_2^W, where the superscript S denotes a synthetic projection), giving the final synthesized view P_D^S at the relative displacement D/N. The outcomes of the different steps of the view synthesis algorithm in our experiment are shown in Fig. 2. In this demonstration the middle (D = N/2) synthesized projection (g) is synthesized from the two observed projections (a) and (b). The two images of vertical edges [(c) and (d)] and the warped images [(e) and (f)] are presented as well.
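A minimal NumPy sketch of this warp-and-sum step is given below. It follows Eqs. (1) and (2) as reconstructed above; the sign conventions of C_1 and C_2, the boundary handling, and the last-write-wins treatment of warped pixels that collide are assumptions of the sketch, not details from the paper.

```python
# Sketch of the two-phase synthesis of P_D^S from P_1^O and P_2^O (Eqs. (1)-(2)).
import numpy as np

def warp_and_blend(p1, p2, c1, c2, D, N):
    """Warp both observed projections to relative displacement D/N and sum them."""
    h, w = p1.shape
    pw1 = np.zeros_like(p1, dtype=float)
    pw2 = np.zeros_like(p2, dtype=float)
    xs = np.arange(w)
    a = D / N
    for y in range(h):
        # Eq. (1): displace the pixels of P1^O by (D/N)*C1 and weight by (1 - D/N)
        x1 = np.floor(xs + 0.5 + a * c1[y]).astype(int)
        ok1 = (x1 >= 0) & (x1 < w)
        pw1[y, x1[ok1]] = (1 - a) * p1[y, xs[ok1]]
        # Eq. (2): displace the pixels of P2^O by (1 - D/N)*C2 and weight by D/N
        x2 = np.floor(xs - 0.5 + (1 - a) * c2[y]).astype(int)
        ok2 = (x2 >= 0) & (x2 < w)
        pw2[y, x2[ok2]] = a * p2[y, xs[ok2]]
    return pw1 + pw2        # second phase: P_D^S = P_1^W + P_2^W
```

Sweeping D over the intermediate positions with the two correspondence maps of the previous sketch produces the whole set of in-between synthesized views.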

Fig. 2. Results of the different steps in the view synthesis algorithm. Two observed projections (a) and (b), their two images of vertical edges (c) and (d), the warped images (e) and (f), and the final middle synthesized projection (g).

2.2 Hologram generating method

As explained above, the proposed method is aimed at decreasing the number of observed projections by synthesizing the in-between projections. Let us demonstrate the method for a case in which we use only two observed projections P_1^O(x, y), P_N^O(x, y) and synthesize the in-between projections P_k^S(x, y), k = 2, …, N−1. This case is illustrated in Fig. 3. Once the observed projections are captured, the view synthesis algorithm is used to synthesize the middle projections required to yield the entire projection set P^S = {P_1^O(x, y), P_2^S(x, y), P_3^S(x, y), …, P_{N−2}^S(x, y), P_{N−1}^S(x, y), P_N^O(x, y)}. Next, each projection in the set is centered on a chosen reference point by digitally correlating the projection with a known pattern (in our experiment, the upper cube) taken from one of the projections [6]. The set of centered projections is denoted P^{S,C}. As a result, the radial distance from the reference point to the camera is the same for all the projections. Next, each projection is multiplied by a horizontally varying linear phase function with a frequency proportional to the serial number of the projection in the entire projection set. The resulting product is summed into a single column of the complex amplitude of the 1-D Fourier transform of the 3-D object. Each projection therefore yields a different column of the object's 1-D Fourier transform.
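A hedged sketch of this recording step is shown below. It is not the exact algorithm of Ref. [2]: the correlation-based centering helper, the constant b that sets how the linear-phase frequency grows with the projection index, and the centering of that index around zero are illustrative assumptions.

```python
# Sketch of Subsection 2.2: centering, linear-phase multiplication, column summation.
import numpy as np
from scipy.signal import fftconvolve

def center_on_reference(proj, ref_pattern):
    """Center a projection on a reference pattern via cross-correlation
    (hypothetical helper; the paper correlates with the upper cube)."""
    corr = fftconvolve(proj, ref_pattern[::-1, ::-1], mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = (proj.shape[0] // 2 - peak[0], proj.shape[1] // 2 - peak[1])
    return np.roll(proj, shift, axis=(0, 1))

def one_d_fourier_hologram(projections, ref_pattern, b=1e-3):
    """Each centered projection, multiplied by a horizontal linear phase whose
    frequency is proportional to its index m, is summed along x into the m-th
    column of the hologram (b is an assumed constant)."""
    N = len(projections)
    rows, cols = projections[0].shape
    hologram = np.zeros((rows, N), dtype=complex)
    x = np.arange(cols)
    for m, proj in enumerate(projections):
        centered = center_on_reference(proj, ref_pattern)
        phase = np.exp(2j * np.pi * b * (m - N // 2) * x)
        hologram[:, m] = (centered * phase).sum(axis=1)   # one column per projection
    return hologram
```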

Fig. 3. Illustration of the proposed method.

3. Experimental results

We have implemented the experimental setup shown in the upper part of Fig. 3. The 3-D scene contains three cubes, each 3.5 cm × 3.5 cm × 3.5 cm in size. The distances along the optical axis Z between the imaging lens of the CCD camera and the first, middle and last cubes are 30 cm, 37 cm and 40 cm, respectively. A 1-D Fourier CGH has been generated from a set of 400 observed projections of the scene according to the algorithm described in Subsection 2.2. The magnitude and the phase of this hologram are shown in Figs. 4(a) and 4(b), respectively. This hologram is compared with SPHs recorded using a reduced number of observed projections together with synthetic projections. Once the number of observed projections and the number of pixels in the SPH are determined, the rest of the projections can be synthesized accordingly.

Figures 2(a) and 2(b) present two projections taken from the entire set acquired by the CCD camera. The distance between the two extreme projections along the CCD path is 4 cm and the interval between every two successive projections is 0.1 mm. Four different holograms were synthesized according to the holographic recording method described in Section 2. The holograms were generated with 2, 17, 33 and 55 observed projections, where the distance between every two successive observed projections in each hologram is equal. To evaluate the results, we compared the reconstructions of the 3-D scene encoded into each hologram with the reconstruction of the hologram shown in Fig. 4, synthesized from all 400 observed projections. The reconstructions of the various holograms were obtained digitally by a 1-D inverse Fourier transform of each hologram, followed by a convolution with a quadratic phase function to simulate Fresnel propagation [9] in the regime beyond the focal plane.
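The digital reconstruction just described can be sketched as follows. The wavelength, pixel pitch, and the FFT-based implementation of the quadratic-phase convolution are assumptions of the sketch; the paper only specifies a 1-D inverse Fourier transform followed by a Fresnel-propagation convolution [9].

```python
# Sketch of the digital reconstruction of a single plane at distance z.
import numpy as np

def reconstruct_plane(hologram, z, wavelength=633e-9, pixel=10e-6):
    """1-D inverse Fourier transform along the projection axis, then convolution
    with a quadratic phase kernel to simulate Fresnel propagation."""
    field = np.fft.ifft(np.fft.ifftshift(hologram, axes=1), axis=1)
    rows, cols = field.shape
    y = (np.arange(rows) - rows // 2) * pixel
    x = (np.arange(cols) - cols // 2) * pixel
    X, Y = np.meshgrid(x, y)
    kernel = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))   # quadratic phase
    recon = np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(np.fft.ifftshift(kernel)))
    return np.abs(recon)**2
```

Scanning z over a range and picking the sharpest result gives the best in-focus planes compared in Fig. 5.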

The best in-focus reconstructed planes obtained from the various holograms, as well as from the hologram of Fig. 4, are presented in Fig. 5. For each hologram, the three planes in which the cubes are best in focus are shown. Figures 5(a), 5(b) and 5(c) show the reconstruction from the hologram of Fig. 4, Figs. 5(d), 5(e) and 5(f) are from the SPH synthesized with 2 observed projections, and Figs. 5(g), 5(h) and 5(i) show the results of the SPH synthesized with 55 observed projections. A quantitative comparison among the holograms is carried out by measuring the mean-square error (MSE) of the best in-focus reconstructed planes from each hologram. The comparison is restricted to the area of the best in-focus cubes rather than the entire plane. The MSE is defined by the following equation:

$$\mathrm{MSE}=\frac{1}{MK}\sum_{i=1}^{M}\sum_{j=1}^{K}\left[P(i,j)-\beta\,\tilde{P}(i,j)\right]^{2},$$

where i, j are the coordinates of each pixel, M, K are the dimensions of the considered area, P(i, j) is the reconstructed image from the hologram of Fig. 4, P̃(i, j) is the reconstructed image from the SPH, and β is a factor that scales the reconstructed images so as to minimize the MSE, given by

$$\beta=\frac{\sum_{i=1}^{M}\sum_{j=1}^{K}P(i,j)\,\tilde{P}(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{K}\tilde{P}(i,j)\,\tilde{P}(i,j)}.$$
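In code, the two expressions above reduce to a few NumPy lines; the sketch assumes P and P̃ are equally sized arrays already cropped to the best in-focus cube area.

```python
# Sketch of the quantitative comparison: optimal scaling beta and the MSE.
import numpy as np

def scaled_mse(P, P_tilde):
    """Return the MSE between P and beta*P_tilde, with beta chosen as above."""
    beta = np.sum(P * P_tilde) / np.sum(P_tilde * P_tilde)
    mse = np.mean((P - beta * P_tilde) ** 2)
    return mse, beta
```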

According to Fig. 5, the quality of the reconstructed planes of the various holograms is relatively good, even for the hologram synthesized with only 2 observed projections (out of the 400 observed projections in the initial set). Nevertheless, higher quality is achieved by increasing the number of observed projections. The quality improvement is demonstrated by the visual comparison between corresponding images in Figs. 5(d), 5(e), 5(f) and Figs. 5(g), 5(h), 5(i), as well as by the quantitative comparison shown in Fig. 6. For example, the reconstructed planes of the hologram synthesized with 55 observed projections are superior to those of the hologram synthesized with only 2 observed projections. The quality of the reconstructed images depends directly on the quality of the interpolated projections.

Fig. 4. Magnitude (a) and phase (b) of the complex amplitude of the object's 1-D Fourier transform obtained from the set of the fully observed projections.

Fig. 5. Best in-focus reconstructed planes (a, b and c) obtained from the fully observed hologram; planes (d, e and f) obtained from the SPH with 2 observed projections; and planes (g, h and i) obtained from the SPH with 55 observed projections.

Fig. 6. MSE versus the number of observed projections.

4. Conclusions

A new method for recording a Fourier hologram under spatially incoherent white-light illumination has been presented and demonstrated. It should be noted that the SPH, like the other multiple-projection methods [2–6], has an advantage over methods that generate holograms from a single perspective: because the object is acquired by the camera over a wider angular range, regions that are invisible from a single point of view can be revealed. The view synthesis algorithm is integrated into the holographic acquisition in a way that significantly reduces the number of observed projections needed for recording a hologram, without losing much of the resolution capability of the system and without considerable reduction in the reconstructed image quality.

This research was supported by Israel Science Foundation grant 119/03.

References and links

1. I. Yamaguchi and T. Zhang, "Phase-shifting digital holography," Opt. Lett. 22, 1268–1269 (1997).

2. Y. Li, D. Abookasis, and J. Rosen, "Computer-generated holograms of three-dimensional realistic objects recorded without wave interference," Appl. Opt. 40, 2864–2870 (2001).

3. D. Abookasis and J. Rosen, "Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints," J. Opt. Soc. Am. A 20, 1537–1545 (2003).

4. Y. Sando, M. Itoh, and T. Yatagai, "Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects," Opt. Lett. 28, 2518–2520 (2003).

5. Y. Sando, M. Itoh, and T. Yatagai, "Full-color computer-generated holograms using 3-D Fourier spectra," Opt. Express 12, 6246–6251 (2004).

6. N. T. Shaked, J. Rosen, and A. Stern, "Integral holography: white-light single-shot hologram acquisition," Opt. Express 15, 5754–5760 (2007).

7. D. Scharstein, "View Synthesis Using Stereo Vision," Lecture Notes in Computer Science 1583 (Springer-Verlag, 1999), Chap. 2.

8. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, "Enhanced-resolution computational integral imaging reconstruction using an intermediate-view reconstruction technique," Opt. Eng. 45, 1170041–1170047 (2006).

9. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996), Chap. 5.
