
Resolution priority holographic stereogram based on integral imaging with enhanced depth range

Open Access

Abstract

A conventional holographic stereogram (HS) can be generated by fast Fourier transforming parallax images into hologram elements (hogels). The conventional HS uses multiple plane waves to reconstruct 3D images with low resolution, and its principle is similar to that of depth priority integral imaging (II). We propose, for the first time, the concept of a resolution priority HS based on the principle of resolution priority II, obtained by adding a quadratic phase term to the conventional Fourier transform. In the proposed resolution priority HS, the resolution of the reconstructed 3D images is much better than in the conventional HS, but the depth range is limited. To enhance the depth range, a multi-plane technique is used to present multiple central depth planes simultaneously. The proposed resolution priority HS with high resolution and enhanced depth range is verified by both simulation and optical experiments.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holography has been considered the ultimate form of three-dimensional (3D) display. Conventional hologram recording needs coherent illumination, a dark environment, and a bulky system, which is impractical. Computer-generated holography (CGH) avoids optical recording and is now an active research topic [1–3]. Among the numerous CGH techniques, the holographic stereogram (HS) is an excellent approach that can easily handle complicated issues such as occlusion culling, gloss reproduction, and surface shading, because the image source is obtained either by cameras or by computer graphics rendering techniques [4–6].

A conventional HS is calculated by Fourier transforming the parallax images into the corresponding hologram elements (hogels). In reconstruction, each hogel emits many plane waves in different directions to reproduce the light field. In this respect, the principle of the HS is very similar to that of depth priority integral imaging (DPII) [7,8]. Based on this similarity, Ichihashi et al. proposed converting elemental images into hogels by fast Fourier transform (FFT) calculation [9].

One common drawback of the HS and DPII is low lateral resolution, caused by the low sampling rate imposed by the hogel pitch or lens pitch. Several methods have been proposed to improve the resolution, such as the phase-added stereogram [10,11], the diffraction-specific coherent panoramagram [12], and the fully computed HS [13]. However, these methods need accurate depth information of the 3D scene, which makes the capturing and calculation processes complicated and time-consuming. The ray sampling (RS) plane method enhances the resolution by setting the RS plane near the object to sample high-resolution projection images [14,15]. However, for real objects, this method requires a resampling process that costs extra time. The ray-tracing method calculates elementary holograms based on a physical model to provide occlusion, refractive, and reflective effects; it is accurate, but the computational load is large [16]. Recently, we proposed using the moving array lenslet technique to enhance the sampling rate of the HS [17].

In this paper, we propose a resolution priority HS (RPHS) based on resolution priority integral imaging (RPII). The RPII has higher resolution than the DPII because the image is focused on the image reference plane (IRP). By adding a quadratic phase term to the conventional FFT calculation, the RPII is converted into the RPHS. A multi-plane technique is adopted to improve the limited depth range that the RPHS inherits from the RPII. Both simulation and optical experiments verify that the proposed RPHS has higher resolution than the conventional HS, with an enhanced depth range.

2. Conventional HS and depth priority integral imaging

The principle of the HS is shown in Fig. 1, in which only one object point is recorded. In the HS, the hologram plane is partitioned into many small square hogels. In hologram generation, each parallax image is placed at the front focal plane of a lens, and each hogel is placed at the back focal plane of the lens (note that the lens can be either real or virtual). According to Fourier optics, if an object is placed at the front focal plane of a lens, its exact Fourier transform is obtained at the back focal plane. Thus, for each lens, the hogel H(u, v) behind it is the exact Fourier transform of the parallax image I(x, y) in front of it, giving the Fourier transform relation [18]:

$$H(u,v)=\frac{1}{j\lambda f}\iint I(x,y)\exp\left[-j\frac{2\pi}{\lambda f}(xu+yv)\right]\mathrm{d}x\,\mathrm{d}y,\tag{1}$$
where λ is the wavelength and f is the focal length of the lens. In the simple case of one object point, each parallax image is a point image in which the point is located at a different position. The point can be seen as an impulse function, and the location of the impulse determines the spatial frequency of the fringe pattern on the corresponding hogel. Thus, the contribution of a point to the fringe pattern of a hogel is approximated as a plane wave described by a single 2D complex harmonic function, and in reconstruction each hogel emits a plane wave in a specific direction so that the plane waves converge at the original object point location.
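As a minimal illustration (our sketch, not code from the paper; the function name and array conventions are assumptions), Eq. (1) maps directly onto a centered 2D FFT:

```python
import numpy as np

def hogel_from_parallax_image(parallax_image, wavelength, focal_length):
    """Eq. (1): a hogel is the optical Fourier transform of its parallax image.

    parallax_image : N x N amplitude array placed at the front focal plane
    wavelength     : illumination wavelength [m]
    focal_length   : focal length f of the real or virtual lens [m]
    """
    # Centered 2D FFT approximating the continuous Fourier transform of Eq. (1).
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(parallax_image)))
    # Constant prefactor 1/(j * lambda * f) of Eq. (1).
    return spectrum / (1j * wavelength * focal_length)
```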

Fig. 1 Principle of holographic stereogram.

Figure 2(a) shows the wavefront reconstruction of one point in the HS. Different from the Fresnel hologram, which reconstructs an image point with spherical waves, the HS reconstructs an image point using multiple plane waves with discrete directions. The spatial frequency of the fringe pattern in each hogel relates to the propagation direction of the corresponding plane wave. All the plane waves corresponding to one point converge to form one volume pixel (voxel) of the 3D image, and the voxel size is equal to the hogel size. The eye samples the 3D image according to the pitch of the hogels. According to the Nyquist sampling theorem, the Nyquist frequency, in cycles per radian, is

$$f_{\mathrm{nyq}}=\frac{L}{2p},\tag{2}$$
where L is the viewing distance and p is the pitch of the hogels. The limited sampling rate degrades the image quality. The sampling rate can be increased by decreasing the hogel size, but a small hogel causes increased quantization error in the FFT calculation [11]. Figure 2(b) shows the principle of DPII, which is very similar to that of the HS. The elemental image array (EIA), in which each elemental image is a parallax image of the 3D object, is placed at the front focal plane of the lens array, so each elemental lens collimates the light rays of different pixels into different directions, and voxels are formed by intersections of multiple parallel light rays. Since the display principles of the HS and DPII are almost the same, Ichihashi et al. proposed converting elemental images into hogels by FFT calculation for real-time display [9], as shown in Fig. 3. The EIA is placed at the front focal plane of the lens array, and the hologram is placed at the back focal plane. Each elemental image is fast Fourier transformed into the corresponding hogel. Since this conversion is based on DPII, the synthesized hologram is essentially a conventional HS with low resolution. We propose instead to generate the hologram based on RPII, which has much higher resolution, and call it the RPHS.
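The DPII-to-HS conversion of Fig. 3 can then be sketched as one FFT per elemental image (again our illustration under assumed array conventions, not the authors' implementation):

```python
import numpy as np

def eia_to_hologram(eia, n_lens, wavelength, focal_length):
    """Assemble a conventional HS from an EIA (Fig. 3): one FFT per hogel.

    eia    : (n_lens*N) x (n_lens*N) elemental image array
    n_lens : number of elemental lenses per side
    """
    N = eia.shape[0] // n_lens              # pixels per elemental image
    hologram = np.zeros(eia.shape, dtype=complex)
    for i in range(n_lens):
        for k in range(n_lens):
            ei = eia[i*N:(i+1)*N, k*N:(k+1)*N]
            # Each hogel is the centered FFT of its elemental image (Eq. (1)).
            H = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(ei)))
            hologram[i*N:(i+1)*N, k*N:(k+1)*N] = H / (1j * wavelength * focal_length)
    return hologram
```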

Fig. 2 (a) Wavefront reconstruction and display principle of conventional HS. (b) Wavefront reconstruction and display principle of DPII.

Fig. 3 Conversion process from DPII to HS through FFT [9].

3. Resolution priority integral imaging and proposed resolution priority HS

Different from the DPII, which performs better in depth range, the RPII has higher lateral resolution [7,8]. Figure 4(a) shows the principle of RPII. The gap between the EIA and the lens array is slightly different from (larger or smaller than) the focal length, so there exists a privileged plane in the reconstruction space, the so-called IRP, which is conjugate to the EIA plane. In this plane an object point is reconstructed with very good quality (see point A), very much as in the reconstruction of a Fresnel hologram: each lens emits a spherical wave that is focused on the IRP. Object points away from the IRP are reconstructed poorly (see point B), because the ray cones intersecting at point B are not focused there. As an object point moves farther from the IRP, the reconstruction quality degrades. Thus, the depth range of the RPII is limited, and the 3D image must be located around the IRP. The IRP is also called the central depth plane (CDP). Nevertheless, the resolution of reconstructed images in the RPII is still better than in the DPII over a long depth range.

Fig. 4 (a) Wavefront reconstruction and display principle of RPII. (b) Wavefront reconstruction and display principle of the proposed RPHS.

Based on the RPII, we propose the RPHS to realize high lateral resolution, as shown in Fig. 4(b). Different from the conventional HS, which emits plane waves, the RPHS emits spherical waves that are focused on the IRP. Object points at the IRP are reconstructed perfectly, and object points around the IRP are reconstructed with good quality. Just as the RPII has higher resolution than the DPII, the proposed RPHS has higher resolution than the conventional HS.

To obtain such an RPHS, a modified version of the scheme in Fig. 3 is adopted, as shown in Fig. 5. The EIA is placed in front of the lens array at a distance g, and the hologram is placed at the back focal plane. According to Fourier optics [18], if an object I(x, y) is placed at distance g in front of the lens, the light distribution H(u, v) at the back focal plane is:

$$H(u,v)=A\exp\left[\frac{j\pi}{\lambda f}\left(1-\frac{g}{f}\right)\left(u^{2}+v^{2}\right)\right]\frac{1}{j\lambda f}\iint I(x,y)\exp\left[-j\frac{2\pi}{\lambda f}(xu+yv)\right]\mathrm{d}x\,\mathrm{d}y,\tag{3}$$
where λ is the wavelength and f is the focal length of the lens array. Ignoring the constant factor, Eq. (3) simplifies to:
$$H(u,v)=\exp\left[\frac{j\pi}{\lambda f}\left(1-\frac{g}{f}\right)\left(u^{2}+v^{2}\right)\right]\cdot A\left(\frac{u}{\lambda f},\frac{v}{\lambda f}\right),\qquad A(f_x,f_y)=\mathrm{FFT}\left[I(x,y)\right],\tag{4}$$
where FFT[·] denotes the fast Fourier transform and A(fx, fy) is the frequency spectrum of I(x, y). Equation (4) shows that in the proposed RPHS method, each hogel can be generated by adding a quadratic phase term to the conventional Fourier transform of the corresponding elemental image. Assuming each elemental image has N×N pixels with pixel size Δp, to make the size of the elemental hologram equal to that of the elemental image (or elemental lens), the following condition is derived from the sampling theorem and FFT theory:

$$\frac{1}{2\Delta p}=\frac{N\Delta p}{2\lambda f}\;\Rightarrow\;\lambda f=N\Delta p^{2}.\tag{5}$$

Fig. 5 Conversion process from RPII to RPHS by adding a quadratic phase term to the conventional FFT.
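A minimal Python sketch of the hogel generation of Eq. (4) follows (our illustration; the function name and the explicit Eq. (5) check are assumptions, not the authors' code):

```python
import numpy as np

def rphs_hogel(elemental_image, wavelength, f, g, dp):
    """Eq. (4): hogel = quadratic phase factor x FFT of the elemental image.

    f, g, dp : focal length, EIA-to-lens gap, pixel size [m]
    Eq. (5), lambda*f = N*dp**2, is assumed so the hogel spans one lens pitch.
    """
    N = elemental_image.shape[0]
    assert np.isclose(wavelength * f, N * dp**2), "Eq. (5) is not satisfied"
    # Hogel-plane coordinates (u, v); under Eq. (5) the sample pitch equals dp.
    u = (np.arange(N) - N / 2) * dp
    U, V = np.meshgrid(u, u)
    quadratic = np.exp(1j * np.pi / (wavelength * f) * (1 - g / f) * (U**2 + V**2))
    A = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(elemental_image)))
    return quadratic * A
```

Note that for g = f the quadratic factor reduces to unity and the conventional HS hogel of Eq. (1) is recovered up to a constant.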

To check the utility of the proposed method, we first consider the phase profile of one point in the conventional HS and the proposed RPHS. Computational II is used to capture the EIA of one point. The virtual lens array consists of 20×20 elemental lenses with a focal length of 2.5 mm and a pitch of 0.25 mm. Each elemental image has 50×50 pixels. According to Eq. (5), the wavelength is λ = 500 nm. The object point is 50 mm from the lens array. Figure 6(a) shows the synthesized EIA of the point. Figure 6(b) shows the phase profile of the conventional HS: it is spatially segmented into 20×20 hogels, and each hogel is a fringe pattern with a single spatial frequency. Figures 6(c) and 6(d) show the phase profiles of the proposed RPHS and a Fresnel hologram, respectively. In the RPHS, the IRP was set where the object point is. The phase profile of the RPHS is very close to the spherical phase of the Fresnel hologram. Figures 6(e) and 6(f) show the numerically reconstructed point in the conventional HS and the proposed RPHS, respectively. The RPHS reconstructs a much smaller light spot than the conventional HS. The intensity profiles along the white lines in Figs. 6(e) and 6(f) are shown in Figs. 6(g) and 6(h). The half widths in Figs. 6(g) and 6(h) are 20 pixels and 7 pixels, i.e., 100 μm and 35 μm, respectively. The RPHS reconstruction is sharp while the conventional HS reconstruction is coarse. These results preliminarily verify that the proposed RPHS has higher resolution than the conventional HS.
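The paper does not state which numerical propagation method produced the reconstructions in Figs. 6(e) and 6(f); the angular-spectrum method is one standard choice, sketched below (all names are ours):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field over distance z by the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                  # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    transfer = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Hypothetical usage: intensity of the point reconstructed 50 mm from the hologram.
# reconstruction = np.abs(angular_spectrum_propagate(hologram, 500e-9, 5e-6, 0.05))**2
```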

Fig. 6 (a) Synthesized EIA of one point. Phase profiles of (b) the conventional HS, (c) the proposed RPHS, and (d) a Fresnel hologram. Numerically reconstructed point in (e) the conventional HS and (f) the proposed RPHS. (g) Intensity profile along the white line in (e). (h) Intensity profile along the white line in (f).

4. Numerical and optical reconstruction results

Numerical simulation was performed to further verify the proposed method. A bee model was built in the 3ds Max modeling software, and a virtual camera array was built to capture the EIA, as shown in Fig. 7(a). The camera array consisted of 40 × 40 virtual cameras, and the pitch and focal length were set to p = 0.5 mm and g = 5 mm. Figure 7(b) shows the captured EIA, which contains 4000 × 4000 pixels. Each elemental image was then converted into the corresponding hogel using Eq. (4) to generate the RPHS. Here, each elemental image was multiplied by a random phase distribution to equalize the power spectrum on the CGH (sketched below). Figures 7(c) and 7(e) show the reconstructed images focused on the stomach for the conventional HS and the RPHS, respectively, and Figs. 7(d) and 7(f) show the reconstructed images focused on the eyes. The reconstructions of the proposed method in Figs. 7(e) and 7(f) have higher spatial resolution than the conventional HS because of the spherical-wave reconstruction, instead of the plane-wave reconstruction of the HS. This change is realized simply by adding a quadratic phase term to the conventional Fourier transform.
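The random phase multiplication can be sketched as follows; the exact diffuser statistics are our assumption, since the paper only states that a random phase distribution was used:

```python
import numpy as np

rng = np.random.default_rng(0)
elemental_image = np.ones((100, 100))    # placeholder 100 x 100 elemental image
# A uniform random phase diffuser spreads the object's power over the whole
# hogel spectrum instead of clustering it around zero frequency.
diffuser = np.exp(1j * 2 * np.pi * rng.random(elemental_image.shape))
diffused = elemental_image * diffuser    # input to the Eq. (4) conversion
```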

Fig. 7 (a) The bee model used in simulation. (b) The captured EIA of the bee model. (c),(d) Reconstructions of the conventional HS; (e),(f) reconstructions of the proposed RPHS.

Next, an optical reconstruction experiment was performed. To obtain an amplitude-type hologram, the complex amplitude distribution H(u, v) was encoded into the intensity distribution I(u, v):

$$I(u,v)=2\,\mathrm{Re}\left[H(u,v)\,r^{*}(u,v)\right]+C,\tag{6}$$
where r(u, v) is the reference plane wave with an incident angle of 3°, r*(u, v) is its conjugate, and C is a constant real value that makes I(u, v) non-negative. The intensity distribution was then quantized into a binary image, which was later printed on a fused-silica substrate coated with a chromium film to produce a binary hologram. The sampling number of the binary hologram was 4000×4000 with a pixel pitch of 5 μm. Figures 8(a) and 8(b) show the optical reconstruction results of the conventional HS captured from different directions, and Figs. 8(c) and 8(d) show those of the proposed RPHS. They confirm that the RPHS has higher resolution than the conventional HS. The relative shift between different parts confirms the 3D nature of the holographic image. The noise in the optical reconstruction comes from speckle and the twin image, which can be reduced by several methods [19,20].
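A sketch of this encoding step (Eq. (6)) follows, under our own assumption for the binarization threshold, which the paper does not specify:

```python
import numpy as np

def encode_binary_amplitude(H, wavelength, pitch, angle_deg=3.0):
    """Eq. (6): interfere H with a tilted plane-wave reference, then binarize."""
    n = H.shape[0]
    x = (np.arange(n) - n / 2) * pitch
    X, _ = np.meshgrid(x, x)
    # Off-axis plane-wave reference tilted by angle_deg along x.
    r = np.exp(1j * 2 * np.pi * np.sin(np.deg2rad(angle_deg)) / wavelength * X)
    I = 2 * np.real(H * np.conj(r))
    I -= I.min()                         # the constant C makes I non-negative
    # Mean-value threshold is an assumption; the paper only says "quantized".
    return (I > I.mean()).astype(np.uint8)
```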

Fig. 8 (a),(b) Optical reconstruction results of the conventional HS captured from different directions. (c),(d) Optical reconstruction results of the proposed RPHS captured from different directions (see Visualization 1).

5. Enhanced depth range by using multi-plane technique

As mentioned above, the depth range of the RPII is limited, and the 3D image must be located around the IRP. Figure 9 illustrates the limited depth range ΔZm, which is expressed as [21]:

$$\Delta Z_{m}=\frac{2lP_{I}}{\varphi}=\frac{2l^{2}P_{D}}{g\varphi},\tag{7}$$
where l is the distance to the IRP, PI is the pixel size of the image, PD is the pixel size of the display device, and φ is the lens pitch. A simulation was performed to show the limited depth range of the RPHS. Computational II was used to generate the EIA of three letters, “h”, “u” and “t”, as shown in Fig. 10(a). The lens array consisted of 40 × 40 lenses, and the pitch and focal length were set to p = 0.5 mm and f = 4.7 mm. The IRP was located at l = 80 mm, where the letter “u” was. The gap is then g = fl/(l − f) = 4.99 mm. The letters “h” and “t” were located at distances of 20 mm and 140 mm, respectively. Figure 10(b) shows the synthesized EIA. A pinhole model was used in the EIA synthesis, so each letter is recorded in focus.
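As a worked check of Eq. (7), assume the display pixel size is PD = 5 μm (100 pixels per 0.5 mm elemental image, as in the Section 4 hologram; the paper does not state PD here):

$$\Delta Z_{m}=\frac{2l^{2}P_{D}}{g\varphi}=\frac{2\times(80\,\mathrm{mm})^{2}\times 5\,\mathrm{\mu m}}{4.99\,\mathrm{mm}\times 0.5\,\mathrm{mm}}\approx 25.6\,\mathrm{mm}.$$

Under this assumption the usable range is roughly ±13 mm around the IRP, so the letters at 20 mm and 140 mm, each about 60 mm from the IRP, lie far outside it, consistent with the blur in Figs. 11(a) and 11(c).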

Fig. 9 Principle of the depth range limit in RPII.

Fig. 10 (a) Computational II for the three letters. (b) The synthesized EIA of the three letters.

Figures 11(a), 11(b) and 11(c) are numerically reconstructed images focused on the different letters. The letter “u” is reconstructed clearly because it is at the IRP, while the letters “h” and “t” are not, because they are far from the IRP. To enhance the depth range of the RPHS, a multi-plane technique is adopted next.

Fig. 11 (a)-(c) Numerically reconstructed images focused on different letters without the multi-plane technique. (d)-(f) Numerically reconstructed images focused on different letters with the multi-plane technique. (g)-(i) Separately synthesized EIAs of the different letters.

The multi-plane technique has been used to enhance the depth range of II [22,23]. It usually works by producing multiple IRPs or CDPs through time multiplexing. Here, we propose to produce multiple IRPs with only one hologram, without the need for time multiplexing. The multi-plane technique is also used in hologram generation that approximates the 3D object as a collection of many planes [24]; we previously used this concept to enhance the depth range in a fast HS generation method based on a look-up table [25]. First, the EIAs of the three letters are recorded separately with different IRPs. For the letter “h”, the IRP is set at l = 20 mm and the gap is g = fl/(l − f) = 6.14 mm. For the letter “u”, the IRP is set at l = 80 mm and the gap is g = 4.99 mm. For the letter “t”, the IRP is set at l = 140 mm and the gap is g = 4.86 mm. Figures 11(g), 11(h) and 11(i) show the EIAs of the three letters. Next, the three EIAs are converted into the corresponding holograms through Eq. (4) with their different values of g and added together to synthesize the final hologram (see the sketch below). Figures 11(d), 11(e) and 11(f) are numerically reconstructed images focused on the different letters with the proposed multi-plane technique. Besides the letter “u”, the other two letters are now also reconstructed well, because multiple IRPs are produced by one hologram. No time multiplexing is needed, which is an advantage over multi-plane based II.
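The multi-plane synthesis can be sketched as follows (our illustration; names are hypothetical): the Eq. (4) conversion is applied once per EIA with its own gap g, and the resulting holograms are summed.

```python
import numpy as np

def multiplane_rphs(eias_and_gaps, n_lens, wavelength, f, dp):
    """Sum the Eq. (4) holograms of several EIAs, one gap g per IRP.

    eias_and_gaps : list of (eia, g) pairs, one per depth plane
    """
    hologram = None
    for eia, g in eias_and_gaps:
        N = eia.shape[0] // n_lens
        u = (np.arange(N) - N / 2) * dp
        U, V = np.meshgrid(u, u)
        # Quadratic phase of Eq. (4), specific to this plane's gap g.
        quad = np.exp(1j * np.pi / (wavelength * f) * (1 - g / f) * (U**2 + V**2))
        H = np.zeros(eia.shape, dtype=complex)
        for i in range(n_lens):
            for k in range(n_lens):
                ei = eia[i*N:(i+1)*N, k*N:(k+1)*N]
                A = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(ei)))
                H[i*N:(i+1)*N, k*N:(k+1)*N] = quad * A
        hologram = H if hologram is None else hologram + H
    return hologram
```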

Optical reconstruction was also performed. Figures 12(a) and 12(b) show the optically reconstructed images focused on the different letters without and with the multi-plane technique, respectively. The reconstruction quality of the letter “u” is almost the same in Figs. 12(a) and 12(b), while the other letters are reconstructed much better in Fig. 12(b) than in Fig. 12(a). This is because in the RPHS without the multi-plane technique, the other letters are far from the single IRP, resulting in degraded quality, whereas in the RPHS with the multi-plane technique, three IRPs are produced by one hologram where the three letters are located, and each letter is reconstructed with good quality.

Fig. 12 Optically reconstructed images focused on different letters (a) without and (b) with the multi-plane technique.

In the above experiment, a small number of planar objects was used to verify the depth enhancement without considering the mutual occlusion between different planes. Occlusion processing in hologram generation has been studied in several works [3,26–28], and it becomes complicated when 3D objects are used (i.e., the number of planes is huge). Here, we give another simple solution. First, 3ds Max (or other image rendering software) is used to capture several EIAs, each corresponding to a 3D object located in a different depth range. Since 3ds Max applies computer graphics rendering techniques (CGRT), occlusion is included in the rendered images of each 3D object; that is, occlusion is realized within each separate depth range. This step can also be realized by real capture using a lens array or a plenoptic camera. Second, the occlusion between different depth ranges is processed. The front object occludes the rear object, so the EIA of the front object overwrites part of the EIA of the rear object: the EIA of the front object is used to generate a mask that filters the information of the EIA of the rear object. Last, each EIA is assigned a CDP to generate the corresponding hologram, and the holograms are added together. An experiment with three 3D objects was performed. Figure 13(a) shows the distances between each object and the virtual camera array built in 3ds Max. Figure 13(b) shows the front view of the three objects. Figures 13(c)-13(e) show the three EIAs corresponding to the three 3D objects without mutual occlusion. Figures 13(f) and 13(g) are the masks generated from Figs. 13(c) and 13(d), respectively. The mask M(x, y) is generated by:

$$M(x,y)=\begin{cases}0, & E(x,y)>0\\ 1, & E(x,y)=0\end{cases}\tag{8}$$
where E(x, y) is the intensity distribution of the EIA. By multiplying Fig. 13(d) by the mask in Fig. 13(f), the occlusion of the “ball” is realized in Fig. 13(h); by multiplying Fig. 13(e) by the masks in Figs. 13(f) and 13(g), the occlusion by the front two objects is realized in Fig. 13(i). Each EIA with occlusion is then converted into the corresponding hologram, and the holograms are added together.
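A minimal sketch of the masking of Eq. (8) for three EIAs ordered front to rear (placeholder arrays; as in Figs. 13(f) and 13(g), the masks come from the unoccluded front EIAs):

```python
import numpy as np

def occlusion_mask(eia):
    """Eq. (8): mask is 0 where the front EIA has intensity, 1 elsewhere."""
    return (eia == 0).astype(eia.dtype)

# Placeholder non-negative intensity EIAs, ordered front -> rear.
eia_front = np.zeros((200, 200)); eia_front[80:120, 80:120] = 1.0
eia_mid = np.ones((200, 200))
eia_rear = np.ones((200, 200))

# Rear EIAs keep only the pixels not covered by the objects in front of them.
eia_mid_occ = eia_mid * occlusion_mask(eia_front)
eia_rear_occ = eia_rear * occlusion_mask(eia_front) * occlusion_mask(eia_mid)
```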

Fig. 13 (a) Distances between each object and the virtual camera array built in 3ds Max. (b) Front view of the three objects. (c)-(e) The three EIAs of the three 3D objects without mutual occlusion. (f) The mask generated from (c). (g) The mask generated from (d). (h) The EIA of the middle object with occlusion. (i) The EIA of the rear object with occlusion.

Figures 14(a) and 14(b) show the optically reconstructed images focused on the different objects without and with the multi-plane technique, respectively. The reconstruction quality of the center image is almost the same in Figs. 14(a) and 14(b), while the other images are reconstructed better in Fig. 14(b) than in Fig. 14(a). Figure 14(c) shows different perspectives of the 3D images, which confirms the occlusion between them.

Fig. 14 Optically reconstructed images focused on different objects (a) without and (b) with the multi-plane technique. (c) Different perspectives of the 3D images.

6. Conclusion

A resolution priority HS was proposed for the first time, based on the principle of resolution priority II. The RPHS is generated by adding a quadratic phase term to the conventional Fourier transform of the elemental images. Unlike the conventional HS, which uses multiple plane waves to reproduce the light field, the proposed RPHS uses spherical waves focused on the IRP to reconstruct the 3D image. Just as the RPII has higher resolution than the DPII, the proposed RPHS has higher resolution than the conventional HS. To address the limited depth range inherited from the RPII, the multi-plane technique was adopted to enhance the depth range of the RPHS without the need for time multiplexing. Simulation and optical experiments verified that the proposed RPHS offers high resolution and an enhanced depth range.

Funding

National Natural Science Foundation of China (61805065); Science and Technology Major Project of Anhui Province, China (17030901053); Fundamental Research Funds for the Central Universities, China (4115100011).

References

1. P. W. M. Tsang and T. C. Poon, “Review on the state-of-the-art technologies for acquisition and display of digital holograms,” IEEE Trans. Industr. Inform. 12(3), 886–901 (2016).

2. T. Shimobaba, T. Kakue, and T. Ito, “Review of fast algorithms and hardware implementations on computer holography,” IEEE Trans. Industr. Inform. 12(4), 1611–1622 (2016).

3. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015).

4. J. T. McCrickerd and N. George, “Holographic stereogram from sequential component photographs,” Appl. Phys. Lett. 12(1), 10–12 (1968).

5. M. C. King, A. M. Noll, and D. H. Berry, “A new approach to computer-generated holography,” Appl. Opt. 9(2), 471–475 (1970).

6. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976).

7. H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, and B. Javidi, “Method to remedy image degradations due to facet braiding in 3D integral-imaging monitors,” J. Disp. Technol. 6(10), 404–411 (2010).

8. F. Jin, J.-S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29(12), 1345–1347 (2004).

9. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20(19), 21645–21655 (2012).

10. M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE 1914, 25–32 (1993).

11. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008).

12. Q. Y. J. Smithwick, J. Barabas, D. Smalley, and V. M. Bove, Jr., “Interactive holographic stereograms with accommodation cues,” Proc. SPIE 7619, 761903 (2010).

13. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23(4), 3901–3913 (2015).

14. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011).

15. K. Wakunami, M. Yamaguchi, and B. Javidi, “High-resolution three-dimensional holographic display using dense ray sampling from integral imaging,” Opt. Lett. 37(24), 5103–5105 (2012).

16. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. 52(1), A201–A209 (2013).

17. Z. Wang, R. S. Chen, X. Zhang, G. Q. Lv, Q. B. Feng, Z. A. Hu, H. Ming, and A. T. Wang, “Resolution-enhanced holographic stereogram based on integral imaging using moving array lenslet technique,” Appl. Phys. Lett. 113(22), 221109 (2018).

18. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

19. P. W. M. Tsang, Y. T. Chow, and T.-C. Poon, “Generation of patterned-phase-only holograms (PPOHs),” Opt. Express 25(8), 9088–9093 (2017).

20. W. Zhang, L. Cao, D. J. Brady, H. Zhang, J. Cang, H. Zhang, and G. Jin, “Twin-image-free holography: a compressive sensing approach,” Phys. Rev. Lett. 121(9), 093902 (2018).

21. Y. Kim, K. Hong, and B. Lee, “Recent researches based on integral imaging display method,” 3D Res. 1, 17–27 (2010).

22. Y. Kim, H. Choi, J. Kim, S.-W. Cho, Y. Kim, G. Park, and B. Lee, “Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers,” Appl. Opt. 46(18), 3766–3773 (2007).

23. X. Shen, Y. J. Wang, H. S. Chen, X. Xiao, Y. H. Lin, and B. Javidi, “Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens,” Opt. Lett. 40(4), 538–541 (2015).

24. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Layered holographic stereogram based on inverse Fresnel diffraction,” Appl. Opt. 55(3), A154–A159 (2016).

25. Z. Wang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Simple and fast calculation algorithm for computer-generated hologram based on integral imaging using look-up table,” Opt. Express 26(10), 13322–13330 (2018).

26. J. Jia, J. Liu, G. Jin, and Y. Wang, “Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling,” Appl. Opt. 53(27), 6287–6293 (2014).

27. K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express 21(19), 21811–21822 (2013).

28. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015).

Supplementary Material (1)

Visualization 1: Optical reconstruction of the proposed resolution priority holographic stereogram.
