
360-degree viewable image-plane disk-type multiplex holography by one-step recording

Open Access

Abstract

By tilting both the input and the image planes of a holographic recording system and adopting a diverging reference wave, a special type of multiplex hologram can be produced in one step. Owing to the symmetry of the reconstruction geometry, the 3D image reconstructed from this type of rainbow hologram can be viewed by surrounding observers simultaneously. A theoretical formulation of the holographic process is presented, and numerical simulations and experimental results demonstrating the characteristics of the reconstructed image are included.

©2010 Optical Society of America

1. Introduction

Since its invention by D. Gabor in 1948, holography has undergone extensive investigation [1]. One branch of holographic development, known as multiplex holography [2], allows not only an outdoor scene but also computer-generated data to be used as the recording subject. This type of holography is generally a combination of conventional photography and rainbow holography; hence, images can be reconstructed under white-light illumination. Starting from the original flat format [3], it has been developed into many formats, including the cylindrical type [4,5], the conical type [6,7], and the disk type [8,9]. In cylindrical and conical multiplex holograms, the bending of the hologram automatically aligns the centers of all the reconstructed 2D images on the symmetry axis of the hologram, so these holograms can provide 3D images to observers all around them simultaneously. The traditional method of multiplex holography is to fabricate a series of long, thin individual holograms side by side. Hence, the reconstructed image is inevitably overlaid with a fence structure, known as the picket-fence effect. To overcome this problem, the image-plane technique was adopted [10–13].

Owing to the possibility of utilizing well-developed CD technology for mass production, disk-type multiplex holography was proposed [8,13]. Because this type of hologram stays flat after fabrication, the reference source point can only be placed off the symmetry axis of the hologram disk. Consequently, it allows only one viewer at a time, since the image wave propagates out nearly normal to the hologram surface. By tilting the recording film plane, and correspondingly the object plane, in the optical system for hologram recording, all the reconstructed image rays can be made to propagate in oblique directions with respect to the normal of the hologram surface even when the reference source point is placed on the symmetry axis of the hologram disk. This idea was presented at the 2005 OSA Annual Meeting but has remained unpublished, although we have investigated how to copy this hologram, either as a transmission or as a reflection hologram [14,15], through two-step recording. This one-step transmission hologram produces a relatively bright image under room light when a commercially available clear light bulb with a linear filament is used as the reconstruction source, and it is therefore suitable for 3D display by itself. Owing to its image-plane characteristic, the observed 3D image is quite sharp, although the image appears to float at a distance above the hologram plane. In this paper, we present the theoretical foundation for making this type of one-step transmission hologram. Numerical results are then presented to demonstrate the characteristics of the reconstructed image observed along the designated viewing direction. Finally, some results of an optical experiment are included for comparison.

2. Theory of the holographic process

Figure 1 shows the recording optical system for our one-step, 360-degree viewable, CD-type multiplex hologram. As usual, the laser light is first split into two beams by the beam splitter BS. One beam serves as the object beam and the other as the reference beam. The object beam is first expanded by the spatial filter SF1 before entering the optical system. It is designed to be focused by the lens L1 onto the plane of the lens L2, where zeroth-order filtering can be performed. On its way toward the lens L2, it acquires a 2D image of the original 3D object from the object plane (in our case, an LCD panel); the 2D image is either taken by a CCD camera at a certain inclined angle or generated by computer. The 2D image is then enlarged and imaged directly onto the holographic recording film by the lens pair L2 and L3. The illuminating light is focused by the lens L3 to a point at a distance behind the recording film, which acts as the designated viewing position for this 2D image. Meanwhile, the reference beam is expanded by the spatial filter SF2 to shine directly on the recording film. A proper exposure produces an individual image-plane hologram for this particular 2D image. Note that a limiting aperture is placed before the recording film to allow only the useful information to pass through. After exposure, both the platform of the 3D object and the recording film are rotated by a small angle (we use 0.36 degree). Another exposure is then made. This process continues until the object and the recording film have completed a full 360-degree revolution. After development, the resulting hologram takes the shape of a CD. Although this hologram is designed for white-light point-source reconstruction, in practice, for lack of a bright white-light point source, an ordinary clear lamp with a linear filament is placed on the axis of the hologram disk instead to reconstruct the 3D image for surrounding observers simultaneously.

Fig. 1 Optical system for recording the image-plane disk-type multiplex hologram.

The theoretical formulation of the holographic process can be constructed as follows. Referring to Fig. 2, consider a general object point P at (x, y, z) in its own coordinate system XYZ. When the object is rotated by an angle Nθo about the Y (Yo)-axis, its position in the laboratory coordinate system XoYoZo is obtained through the following equation:

$$\begin{bmatrix} x_o \\ y_o \\ z_o \end{bmatrix}=\begin{bmatrix} \cos(N\theta_o) & 0 & \sin(N\theta_o)\\ 0 & 1 & 0\\ -\sin(N\theta_o) & 0 & \cos(N\theta_o)\end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix}, \tag{1}$$
where N is an integer and θo is the incremental rotation angle between exposures. The position of this object point can further be expressed in the CCD coordinate system XcYcZc, which is the laboratory coordinate system rotated by an angle θc about the Xo-axis, through another coordinate transformation.

Fig. 2 The relationship between the laboratory coordinate system XoYoZo and the object coordinate system XYZ.

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0\\ 0 & \cos\theta_c & \sin\theta_c\\ 0 & -\sin\theta_c & \cos\theta_c \end{bmatrix}\begin{bmatrix} x_o \\ y_o \\ z_o \end{bmatrix}. \tag{2}$$
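To make the coordinate bookkeeping concrete, the two rotations of Eqs. (1) and (2) can be coded directly. The following Python sketch is ours, not part of the original work; the function and variable names are illustrative, and the standard sign convention for the rotation matrices is assumed.

```python
import numpy as np

def object_to_ccd(p_xyz, N, theta_o, theta_c):
    """Map an object point (x, y, z) into the CCD coordinate system.

    Implements Eqs. (1)-(2): a rotation by N*theta_o about the Y-axis
    (object -> laboratory frame) followed by a rotation by theta_c about
    the X-axis (laboratory -> CCD frame). Angles are in radians.
    """
    a = N * theta_o
    R_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])               # Eq. (1)
    R_x = np.array([[1.0,  0.0,              0.0            ],
                    [0.0,  np.cos(theta_c),  np.sin(theta_c)],
                    [0.0, -np.sin(theta_c),  np.cos(theta_c)]])  # Eq. (2)
    return R_x @ (R_y @ np.asarray(p_xyz, dtype=float))

# Example: a cube corner after the 10th incremental rotation, CCD tilted 45 deg.
p_c = object_to_ccd([1.5, 1.5, 1.5], N=10, theta_o=np.radians(0.36),
                    theta_c=np.radians(45))
```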

Suppose the optical axis of the CCD camera is aligned with the Zc-axis and the camera is at a distance do from the origin of the object coordinate system (Fig. 3). After acquisition by the CCD camera, the object point under consideration appears at the following location on the detector plane:

$$P_d\!\left(x_d=\frac{d_i}{d_o-z_c}\,x_c,\;\; y_d=\frac{d_i}{d_o-z_c}\,y_c\right). \tag{3}$$
Note that, in order to obtain a clear image, all the points of the original 3D object should lie within the depth of focus of the optical system. The 2D image on the CCD detector plane is then transferred, with a magnification factor Mcl, to the object plane (the LCD panel) of the hologram-recording optical system. Thus, this object point appears on the LCD at $x_l = M_{cl}\,x_d$ and $y_l = M_{cl}\,y_d$.
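A short sketch of this camera projection (Eq. (3)) followed by the transfer to the LCD panel may help; the pinhole-style scaling $d_i/(d_o-z_c)$ and the names below are our assumptions, not the authors' code.

```python
import numpy as np

def ccd_to_lcd(p_c, d_o, d_i, M_cl):
    """Project a CCD-frame point onto the detector plane (Eq. (3)) and
    scale by M_cl to get its position (x_l, y_l) on the LCD panel.

    The factor d_i / (d_o - z_c) is our reading of Eq. (3); the authors'
    exact sign convention for z_c may differ.
    """
    x_c, y_c, z_c = np.asarray(p_c, dtype=float)
    s = d_i / (d_o - z_c)            # detector-plane scaling for this depth
    x_d, y_d = s * x_c, s * y_c      # Eq. (3)
    return M_cl * x_d, M_cl * y_d    # x_l, y_l on the LCD
```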

Fig. 3 Object point in CCD coordinates and its image on the detector plane.

After enlargement by the optical system (Fig. 4), this object point appears at the following position in the film coordinate system XfYfZf, where M is the magnification ratio, θ1 is the tilt angle of the holographic film, and $\theta_2=\cot^{-1}\!\left\{\cot\theta_1\big/\!\left[\left(1-p_2/f_2\right)\left(1-d_{23}/f_3\right)-p_2/f_3\right]\right\}$ is the corresponding tilt angle of the LCD panel.

Fig. 4 Object point on the LCD panel is imaged by the optical system onto the recording film plane.

$$\begin{bmatrix} x_f \\ y_f \\ z_f \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0\\ 0 & \cos\theta_1 & \sin\theta_1\\ 0 & -\sin\theta_1 & \cos\theta_1 \end{bmatrix}\begin{bmatrix} M & 0 & 0\\ 0 & M & 0\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0\\ 0 & \cos\theta_2 & \sin\theta_2\\ 0 & -\sin\theta_2 & \cos\theta_2 \end{bmatrix}\begin{bmatrix} x_l \\ y_l \\ 0 \end{bmatrix} \tag{4}$$
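The imaging step of Eq. (4), together with the quoted expression for the LCD tilt angle θ2, can be sketched as follows. This is our illustrative code; the sign conventions follow our reconstruction of the equations and should be checked against Fig. 4.

```python
import numpy as np

def lcd_tilt_angle(theta_1, p2, f2, d23, f3):
    """Tilt angle theta_2 of the LCD panel for film tilt theta_1,
    per the expression quoted in the text (angles in radians)."""
    denom = (1.0 - p2 / f2) * (1.0 - d23 / f3) - p2 / f3
    return np.arctan(np.tan(theta_1) * denom)   # cot^-1(cot(theta_1)/denom)

def lcd_to_film(x_l, y_l, M, theta_1, theta_2):
    """Image an LCD point onto the tilted recording film, Eq. (4)."""
    R1 = np.array([[1.0, 0.0, 0.0],
                   [0.0,  np.cos(theta_1), np.sin(theta_1)],
                   [0.0, -np.sin(theta_1), np.cos(theta_1)]])
    Mag = np.diag([M, M, 0.0])                  # transverse magnification only
    R2 = np.array([[1.0, 0.0, 0.0],
                   [0.0,  np.cos(theta_2), np.sin(theta_2)],
                   [0.0, -np.sin(theta_2), np.cos(theta_2)]])
    return R1 @ Mag @ R2 @ np.array([x_l, y_l, 0.0])
```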

Since the illuminating wave for the LCD is first focused at the center of the lens L2 and is then imaged by the lens L3 to a point Ps at a distance dfe behind the recording film, where it acts as the designated viewing position, the direction cosines of the object ray passing through this image point (see Fig. 5) can be found by subtracting the coordinates of the point Pf from those of the point Ps.

Fig. 5 Schematic for the direction of the object ray in the film coordinate system. Ps is the focus point of the illuminating wave; Pf is the object point on the recording film.

$$(\cos\alpha_o,\cos\beta_o,\cos\gamma_o)=\frac{\left(-x_f,\;-d_{fe}\sin\theta_1-y_f,\;d_{fe}\cos\theta_1\right)}{\sqrt{x_f^{2}+\left(d_{fe}\sin\theta_1+y_f\right)^{2}+\left(d_{fe}\cos\theta_1\right)^{2}}}. \tag{5}$$

The reference source point Pc is placed at (0, −R, −Zd) in the film coordinate system, which results in an angle θr = tan⁻¹(R/Zd) between the reference-beam axis and the Zf-axis (Fig. 6). The direction cosines of the recording reference ray for this image point are found similarly, by subtracting the coordinates of the point Pc from those of the point Pf.

Fig. 6 Schematic for the direction of the reference ray in the film coordinate system.

$$(\cos\alpha_c,\cos\beta_c,\cos\gamma_c)=\frac{\left(x_f,\;y_f+R,\;Z_d\right)}{\sqrt{x_f^{2}+\left(y_f+R\right)^{2}+Z_d^{2}}} \tag{6}$$
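The direction cosines of Eqs. (5) and (6) amount to normalizing two difference vectors. A minimal sketch, with sign choices that follow our reading of Figs. 5 and 6 rather than the authors' code, is shown below.

```python
import numpy as np

def unit(v):
    """Direction cosines (unit vector) of v."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def object_ray(x_f, y_f, d_fe, theta_1):
    """Eq. (5): ray from the film point (x_f, y_f) toward the focus P_s
    of the illuminating wave, a distance d_fe from the film."""
    return unit([-x_f, -d_fe * np.sin(theta_1) - y_f, d_fe * np.cos(theta_1)])

def reference_ray(x_f, y_f, R, Z_d):
    """Eq. (6): ray from the reference source P_c on the rotation axis,
    a distance Z_d below the film plane, to the film point."""
    return unit([x_f, y_f + R, Z_d])
```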

During the recording of successive individual holograms, the axis of rotation of the recording film is designated to pass through the point (0, −R) of the film coordinate system. Hence, the effective position of the reference source point Pc lies on the axis of the hologram disk, at a distance Zd below it. The two light rays of Eqs. (5) and (6) interfere, and the interference pattern is recorded on the holographic film. In the reconstruction process, this individual hologram may appear anywhere on the hologram disk. Suppose that it has been rotated by an angle θ, so that the object point under consideration appears at the following position in the observation coordinate system XvYvZv (Fig. 7).

Fig. 7 Position of the image point on the film expressed in the observation coordinate system XvYvZv.

$$x_v=\sqrt{x_f^{2}+\left(R+y_f\right)^{2}}\,\sin\!\left[\theta+\tan^{-1}\!\left(\frac{x_f}{R+y_f}\right)\right] \tag{7}$$
$$y_v=\sqrt{x_f^{2}+\left(R+y_f\right)^{2}}\,\cos\!\left[\theta+\tan^{-1}\!\left(\frac{x_f}{R+y_f}\right)\right] \tag{8}$$

The corresponding direction cosines for both the object ray and the recording reference ray in the observation coordinate system can be obtained by the following transformation

$$\begin{bmatrix} \cos\alpha_v \\ \cos\beta_v \\ \cos\gamma_v \end{bmatrix}=\begin{bmatrix} \cos\theta & \sin\theta & 0\\ -\sin\theta & \cos\theta & 0\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos\alpha \\ \cos\beta \\ \cos\gamma \end{bmatrix}. \tag{9}$$
The direction cosines of the reconstruction reference ray are obtained simply by subtracting the coordinates of the reconstruction reference source point from those of the object point on the film given by Eqs. (7) and (8) (Fig. 8):
$$(\cos\alpha_r,\cos\beta_r,\cos\gamma_r)=\frac{\left(x_f,\;y_f+R,\;Z_d\right)}{\sqrt{x_f^{2}+\left(y_f+R\right)^{2}+Z_d^{2}}} \tag{10}$$
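Equations (7)–(10) place the film point and the rays in the observation frame. In the sketch below (ours, with illustrative names), the reconstruction ray is formed from the observation-frame film coordinates and the on-axis source a distance Zd below the disk, following the prescription in the text; this coincides with Eq. (10) when θ = 0.

```python
import numpy as np

def film_point_in_observation_frame(x_f, y_f, R, theta):
    """Eqs. (7)-(8): position (x_v, y_v) of the film point after the
    individual hologram has been rotated by theta on the disk."""
    rho = np.hypot(x_f, R + y_f)
    phi = theta + np.arctan2(x_f, R + y_f)
    return rho * np.sin(phi), rho * np.cos(phi)

def rotate_direction(cosines, theta):
    """Eq. (9): rotate direction cosines about the disk axis by theta."""
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[  c,   s, 0.0],
                   [ -s,   c, 0.0],
                   [0.0, 0.0, 1.0]])
    return Rz @ np.asarray(cosines, dtype=float)

def reconstruction_ray(x_v, y_v, Z_d):
    """Reconstruction reference ray from the on-axis source, a distance
    Z_d below the disk, to the film point (x_v, y_v); equals Eq. (10)
    when theta = 0."""
    v = np.array([x_v, y_v, Z_d])
    return v / np.linalg.norm(v)
```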
With the illumination of the reconstruction reference wave Ur, the retrieved information of interest from the hologram is $U_i = U_o U_c^{*} U_r$, where Uo, Uc, and Ur denote the amplitudes of the object wave, the recording reference wave, and the reconstruction reference wave, respectively. The direction cosines of the diffracted light ray Ui can be calculated from the following equations, where λr is the wavelength used in the reconstruction process and λc is the wavelength used in the hologram-forming process.

Fig. 8 Hologram viewing geometry.

$$\cos\alpha_i=\cos\alpha_r+\frac{\lambda_r}{\lambda_c}\cos\alpha_o-\frac{\lambda_r}{\lambda_c}\cos\alpha_c \tag{11}$$
$$\cos\beta_i=\cos\beta_r+\frac{\lambda_r}{\lambda_c}\cos\beta_o-\frac{\lambda_r}{\lambda_c}\cos\beta_c \tag{12}$$
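Equations (11) and (12) are the familiar grating equations applied to the hologram. The sketch below implements them; completing the third direction cosine by normalization, and rejecting evanescent cases, is our addition, since the paper lists only the α and β components.

```python
import numpy as np

def diffracted_ray(ray_r, ray_o, ray_c, lam_r, lam_c):
    """Eqs. (11)-(12): direction cosines of the diffracted ray for the
    term U_i = U_o * conj(U_c) * U_r.  The gamma component is completed
    by normalization (our addition); None is returned if the order is
    evanescent."""
    mu = lam_r / lam_c
    cos_ai = ray_r[0] + mu * (ray_o[0] - ray_c[0])   # Eq. (11)
    cos_bi = ray_r[1] + mu * (ray_o[1] - ray_c[1])   # Eq. (12)
    s2 = cos_ai**2 + cos_bi**2
    if s2 > 1.0:
        return None                                  # no propagating order
    return np.array([cos_ai, cos_bi, np.sqrt(1.0 - s2)])
```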

According to our design, the observation zone is a ring of radius dfe cosθ1 − R, located at an angle θ1 above the hologram disk for the wavelength used during hologram recording. For light of longer (shorter) wavelength, the observation ring appears lower (higher) owing to the dispersion of the hologram. When the observer places his eyes in the vicinity of the viewing zone, the diffracted light ray for the object point under consideration may reach one eye through one individual hologram. The line joining the center of that eye's pupil and the object point on the hologram plane determines the line of sight (Fig. 8) for that eye. Similarly, the other eye of the observer perceives the same object point through another individual hologram. The intersection of these two lines of sight determines the final image position. All the image points of the original 3D object can be treated in the same way using exactly the same equations described above; hence the 3D image distribution in the observation coordinate system can be found.
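The final step, intersecting the two lines of sight, is a closest-approach problem for two skew lines. The helper below is hypothetical, not from the paper: it returns the midpoint of the common perpendicular as the image point, together with the miss distance that the paper compares with the eye's resolution element (about 0.16 mm).

```python
import numpy as np

def image_point_from_two_sights(p1, d1, p2, d2):
    """Locate a perceived image point from two lines of sight.

    Each line is given by a point p on the hologram and a unit direction d
    (the diffracted rays reaching the left and right eyes).  Skew lines
    rarely intersect exactly, so the midpoint of the segment of closest
    approach is returned along with the miss distance.
    """
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if np.isclose(denom, 0.0):          # nearly parallel lines of sight
        t1, t2 = 0.0, e / c
    else:
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
    q1 = p1 + t1 * d1                   # closest point on the first line
    q2 = p2 + t2 * d2                   # closest point on the second line
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)
```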

3. Numerical simulation and experimental result

A large set of holographic parameters can be used in the numerical simulation to bring out the characteristics of the reconstructed image. Here we present a case closely related to our optical experiment. A cube with 3 cm sides is taken as the original object for the simulation, with the following holographic parameters: θc = 45°, do = 79 cm, di = 1.8 cm, Mcl = 18, f2 = 15 cm, f3 = 40 cm, p2 = 14.5 cm, d23 = 57 cm, q3 = 43 cm, dfe = 89 cm, M = 2.6, θ1 = 45°, θr = 30°, R = 7 cm, and √(R² + Zd²) = 14 cm. Suppose that both eyes of the observer, each with a 3 mm pupil, are placed on the designated viewing ring for the wavelength used during hologram recording. The numerically simulated reconstructed image is shown in Fig. 9, with the lengths of all the line segments marked. Some other simulated results are listed in Table 1.
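For readers who wish to reproduce such a simulation, the parameter set above can be fed through the sketches of Section 2. The driver below assumes those hypothetical helper functions are in scope; the 632.8 nm (He-Ne) recording wavelength is our assumption, and lengths are in centimeters with angles in radians.

```python
import numpy as np

# Parameters of the simulated case (lengths in cm, angles in radians).
params = dict(
    theta_c=np.radians(45), d_o=79.0, d_i=1.8, M_cl=18.0,
    f2=15.0, f3=40.0, p2=14.5, d23=57.0, d_fe=89.0, M=2.6,
    theta_1=np.radians(45), R=7.0, Z_d=np.sqrt(14.0**2 - 7.0**2),
    theta_o=np.radians(0.36), lam_c=632.8e-7, lam_r=632.8e-7)

def trace_corner(p_xyz, N, theta, prm=params):
    """Trace one cube corner through individual hologram N, viewed with
    the hologram rotated to disk angle theta; returns the film point in
    the observation frame and the diffracted-ray direction cosines."""
    p_c = object_to_ccd(p_xyz, N, prm['theta_o'], prm['theta_c'])
    x_l, y_l = ccd_to_lcd(p_c, prm['d_o'], prm['d_i'], prm['M_cl'])
    theta_2 = lcd_tilt_angle(prm['theta_1'], prm['p2'], prm['f2'],
                             prm['d23'], prm['f3'])
    x_f, y_f, _ = lcd_to_film(x_l, y_l, prm['M'], prm['theta_1'], theta_2)
    ray_o = rotate_direction(object_ray(x_f, y_f, prm['d_fe'], prm['theta_1']), theta)
    ray_c = rotate_direction(reference_ray(x_f, y_f, prm['R'], prm['Z_d']), theta)
    x_v, y_v = film_point_in_observation_frame(x_f, y_f, prm['R'], theta)
    ray_r = reconstruction_ray(x_v, y_v, prm['Z_d'])
    return np.array([x_v, y_v, 0.0]), diffracted_ray(ray_r, ray_o, ray_c,
                                                     prm['lam_r'], prm['lam_c'])
```

Repeating this for the two holograms seen by the left and right eyes (e.g., numbers 10 and −10) and feeding the resulting rays into image_point_from_two_sights then yields one simulated image point per cube corner.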

Fig. 9 Reconstructed image of a cube from simulation.

Table 1. The parameters for the simulated reconstructed image of Fig. 9

From Fig. 9, we see that the top surface ABCD of the cube image is only slightly smaller (approximately 2%) than the bottom surface EFGH. However, the height of the cube image is considerably shorter (by more than 10%) than its width, even though we started from a perfect cube. The surface closer to the observer (BCGF) is also slightly larger than the surface farther away (ADHE). As Table 1 shows, the closest distance between the two lines of sight for each corner of the cube is comparable to the resolution element of the observer's eye at that distance, which is about 0.16 mm. Hence, the whole 3D image is well in focus for the two eyes of the observer. The observed wavelength bandwidth is of the order of 2 nm (632 nm to 634 nm), which is quite monochromatic. This can be understood as follows. Since the illuminating white light from a point reference source is dispersed by the hologram disk (Fig. 8), the light rays of different wavelengths diffracted from an image point on the hologram disk sweep in the vertical direction. An eye of the observer can capture only a narrow band of the diffracted light rays from one object point because its aperture is small compared with the viewing distance. When the observer moves his eyes up and down, he perceives the same image at a different mean wavelength but with a similar wavelength bandwidth. The observed 3D image shows the characteristics of a rainbow hologram, i.e., the image appears larger (smaller) for a longer (shorter) mean wavelength.

One eye of the observer perceives a 2D image from a single individual hologram only if it is situated at the designated viewing location. In this particular design, hologram number 10 is designated for the left eye and hologram number −10 for the right eye when the observer's nose points at the center of the zeroth individual hologram; for other head positions, the individual hologram numbers change accordingly. At the designated observation ring, one eye generally observes a whole 2D image. Even at this designated viewing location, the height of the observed 3D image is considerably shorter than its width (Fig. 9). To compensate for this effect, one can pre-distort the original 2D images used for hologram recording so as to yield an image of the correct aspect ratio (width-to-height ratio). However, the aspect ratio of the image cannot remain constant as the viewing distance is varied along the designated viewing direction. This can be understood as follows. When the observation distance is changed, the observation is much like seeing an image through a small aperture: the farther the eye is from the designated viewing position, the smaller the area of the 2D image that can be seen through a single individual hologram. In the vertical direction, different parts of the 2D image are carried to the eye by different wavelengths, since the hologram disperses the incident white light from all the image points in the vertical direction; although the mean wavelength varies, the wavelength bandwidth stays nearly the same. In the horizontal direction, different parts of the 2D image come from adjacent individual holograms. The farther the eye is from the designated viewing position, the greater the size distortion of the 2D image seen by that eye. Using the two lines of sight belonging to the observer's two eyes as described previously, a size-varied cube image can be obtained for each viewing distance.
Figure 10 plots the width-to-height ratio of the observed 3D image, which decreases as the observer moves farther from the hologram. That is, the image appears skinnier when viewed from farther away and, conversely, fatter when viewed closer to the hologram. Although this hologram is designed for white-light point-source illumination, in practice, lacking a bright point source, we adopt a clear light bulb with a linear filament centered at the designated reference-source position to illuminate the hologram.

Fig. 10 Width-to-height ratio as a function of viewing distance.

Figure 11 shows two images with correct parallax as observed by the two eyes at the designated viewing distance, which in this hologram design was set at 89 cm from the center of the individual hologram. Figure 12 shows the images observed at various distances: 45 cm, 90 cm, and 180 cm. As expected, the width-to-height ratio of the image decreases as the observation distance increases. It is interesting to note that the image in Fig. 12(c) looks very much like an undistorted cube although it is obtained at twice the designated viewing distance. This is because the width-to-height ratio of the image at the designated viewing ring is about 1.13 (Fig. 9), which is reduced by a factor of 0.915 (Fig. 10) at a distance of 180 cm from the hologram and thus becomes about 1.034.

Fig. 11 Images with correct parallax, (a) for the right eye and (b) for the left eye, as observed at the best-viewing distance (89 cm from the center of the individual hologram).

Fig. 12 Effect of viewing distance on the observed image: (a) 45 cm, (b) 90 cm, and (c) 180 cm from the center of the individual hologram.

This rainbow-type multiplex hologram was originally designed for white-light point-source reconstruction with a specified viewing ring for the recording wavelength. However, the 3D image can be observed over a moderately wide vertical angular range. Here, we describe the experimental observation. A bright red image can be observed at approximately 48.7°, while a bright green image appears at 34.2°, both measured from the Zv-axis (Fig. 8); the angular span is thus about 14.5°. For blue, only a very dim partial image can be observed. Each observed image stays quite single-colored; however, its wavelength bandwidth appears to be broadened, since all the point sources in the linear filament contribute to the image simultaneously. The observation condition is much more complicated than it appears and is currently under investigation.

4. Conclusion and discussion

In either cylindrical or conical multiplex holography, the bending of the hologram automatically makes the optical axes of all the reconstructed 2D images pass through the symmetry axis of the hologram. Hence, these holograms can provide 3D images for surrounding observers simultaneously. For a disk-type multiplex hologram recorded in the traditional way, the optical axes of the reconstructed 2D images cannot pass through the symmetry axis of the hologram disk. As a result, the hologram can provide a 3D image for only one observer. If we tilt the recording film by an angle so that the optical axis of the object beam is no longer perpendicular to the film plane, the centers of all the reconstructed images can be made to pass through the symmetry axis of the hologram disk. In hologram recording, the object beam is designed to focus at a distance behind the hologram plane, where the focus acts as the best-viewing position for the 2D image. A spherical reference wave is utilized to record each 2D image as an image-plane hologram. Between successive recordings of individual holograms, the recording holographic film and the rotational stage of the original object are rotated by the same small angle. The rotation axis of the recording film and the optical axis of the object beam should lie in the same plane, and the reference source point is located at a proper position on the rotation axis of the recording film. After all the exposures, the exposed area of the holographic film forms a donut-shaped disk. When a white-light point source is placed at the proper location on the symmetry axis of the hologram disk, all the retrieved 2D-image information traverses the symmetry axis of the hologram and, at some distance away, forms an observation ring for light of each wavelength. When an observer places his two eyes on the designated observation ring for the hologram-forming wavelength, each eye perceives a 2D image; the two images then fuse in his brain to produce the perception of a 3D image. If the eyes of the observer are off the observation ring but along the designated viewing direction, he can still see a 3D image. However, each of the perceived 2D images is then generated by more than one individual hologram. The farther the eyes are from the observation ring, the more individual holograms contribute to each image and the more the aspect ratio of the image changes.

In the theoretical formulation, the method of object–image relationships is adopted. Coordinate transformations and the imaging properties of lenses are used to trace the original object point to its location on the recording film. The direction cosines of both the object ray and the reference ray are then obtained. When the recorded hologram is rotated to any position on the hologram disk, these direction cosines follow the same rule of coordinate rotation as the location of the object point on the film. The well-known holographic relation $U_i = U_o U_c^{*} U_r$ is then utilized to obtain the diffracted ray. If this light ray can reach one eye of the observer, it determines the line of sight for that eye. Similarly, the other eye of the observer may see the same object point through a different individual hologram. The two lines of sight then determine the location of the final image point. All the object points can be treated in the same way to yield the location of the 3D image in space.

An artificial cube is used as the object for the numerical simulation. The results show that, when the two eyes of the observer are placed at the designated observation ring, the bottom of the image appears slightly larger than the top. Meanwhile, the front surface is also slightly larger than the rear surface. Although the image width and the image depth are similar, the image height is considerably shortened. The shortest distance between the two lines of sight for each of the corner image points is comparable to the resolution element of the eye. Hence, the whole 3D image is seen in focus by the two eyes of the observer. The aspect ratio (width-to-height ratio) of the image can be compensated for by pre-distorting the original 2D images used for hologram recording. However, this aspect ratio cannot be maintained as the observation distance changes. When the viewing distance increases (decreases), the width-to-height ratio of the image decreases (increases), i.e., the image becomes skinnier (fatter).

Rather than a real object, an artificial cube object is used for the optical experiment, and the taking of the original 2D images mimics the process of image grabbing by a CCD camera. Due to the lack of a bright white-light point source, we adopt a clear light bulb with a linear filament for image reconstruction in the experiment. The image reconstructed from this CD-type hologram appears quite single-colored at every viewing location and, as expected, reveals the rainbow effect as the observer's eyes move up and down. The vertical viewing angular range is moderate, while the longitudinal viewing range is quite large. Some images are taken to show the change of the aspect ratio as the observation distance changes. Although the reconstructed image under white-light line-source illumination is quite single-colored at each viewing location, its spectral bandwidth appears to be broadened. Furthermore, only a very dim partial blue image can be observed. The spectral characteristics of the reconstructed image under extended-source illumination need further investigation.

Besides the natural extension to full-color holography, the idea behind this type of hologram may be transferred to a real-time display if the hologram is replaced by a partitioned disk-type holographic optical element (HOE) in contact with a liquid-crystal display device. Different areas of the HOE bring different 2D-image information to the observation ring, so that information from a finite number of images can be sent there. If the HOE disk rotates in synchronization with successive sets of images, the observation ring can, within a short time interval, be filled with 2D-image information from a sufficient number of views. An observer can thus perceive a 3D image, not necessarily at the designated observation ring.

Acknowledgement

Thanks are given to the National Science Council, R.O.C., for financial support through research projects NSC96-2221-E-008-030 and NSC97-2628-E-008-021-MY2.

References and links

1. R. J. Collier, C. B. Burckhardt, and L. H. Lin, Optical Holography (Academic Press, New York, 1971).

2. G. Saxby, Practical Holography, 2nd ed. (Prentice-Hall, Englewood Cliffs, N.J., 1994).

3. D. J. DeBitetto, “Holographic panoramic stereograms synthesized from white light recordings,” Appl. Opt. 8(8), 1740–1741 (1969).

4. L. Huff and R. L. Fusek, “Color holographic stereograms,” Opt. Eng. 19, 691–695 (1980).

5. E. N. Leith and P. Voulgaris, “Multiplex holography: some new methods,” Opt. Eng. 24, 171–175 (1985).

6. K. Okada, S. Yoshii, Y. Yamaji, J. Tsujiuchi, and T. Ose, “Conical holographic stereograms,” Opt. Commun. 73(5), 347–350 (1989).

7. L. M. Murillo-Mora, K. Okada, T. Honda, and J. Tsujiuchi, “Color conical holographic stereogram,” Opt. Eng. 34(3), 814–817 (1995).

8. Y. S. Cheng, W. H. Su, and R. C. Chang, “Disk-type multiplex holography,” Appl. Opt. 38(14), 3093–3100 (1999).

9. J. Kim et al., “360° viewable flat hologram,” Proc. SPIE 2333, 418–423 (1995).

10. M. C. King, A. M. Noll, and D. H. Berry, “A new approach to computer-generated holography,” Appl. Opt. 9(2), 471–475 (1970).

11. Y. S. Cheng and C. M. Lai, “Image-plane conical multiplex holography by one-step recording,” Opt. Eng. 42(6), 1631–1639 (2003).

12. Y. S. Cheng and R. C. Chang, “Image-plane cylindrical holographic stereogram,” Appl. Opt. 39(23), 4058–4069 (2000).

13. Y. S. Cheng and C. H. Chen, “Image-plane disk-type multiplex hologram,” Appl. Opt. 42(35), 7013–7022 (2003).

14. C. H. Chen, Y. S. Cheng, and Z. Y. Lei, “Single-beam copying system of 360-degree viewable image-plane disk-type multiplex hologram and polarization effects on diffraction efficiency,” Opt. Express 15(17), 10804–10813 (2007).

15. Y. S. Cheng, C. H. Chen, and Y. C. Hsieh, “Reflection disk-type multiplex holography using two-step recording,” Jpn. J. Appl. Phys. 47(9), 7173–7181 (2008).
