Acceleration of computer-generated hologram using wavefront-recording plane and look-up table in three-dimensional holographic display

Open Access

Abstract

In this paper, we propose a fast calculation method using a look-up table and a wavefront-recording plane. The wavefront-recording plane method consists of two steps: in the first step, the complex amplitude is calculated on a wavefront-recording plane placed between the object and the hologram; in the second step, the hologram is obtained by a diffraction calculation from the wavefront-recording plane to the hologram plane. The first step of the previous wavefront-recording plane method is time-consuming. To accelerate the first step further, we propose a high-compressed look-up-table method based on the wavefront-recording plane. We perform numerical simulations and optical experiments to verify the proposed method. The numerical simulation results show that the calculation time is reduced dramatically in comparison with the previous wavefront-recording plane method, while the memory usage is very small. The optical experimental results are in accord with the numerical simulation results. It is expected that the proposed method can greatly reduce the computational complexity and could be widely applied in the holographic field in the future.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holographic display [1], which provides full depth information for the human eye [2–4], is regarded as the ultimate three-dimensional (3D) display technology and has attracted more and more attention in recent years. However, the computation time [5–8] involved in generating a computer-generated hologram (CGH) obstructs the realization of a practical 3D holographic display system. Current CGH calculation methods are mainly categorized into three approaches: point-based methods [9–13], polygon-based methods [14–18] and layer-based methods [19–22]. Compared with the polygon-based and layer-based methods, the point-based method is simple, flexible and widely used, and it can produce 3D images with high quality.

In the point-based method, a 3D object is decomposed into a number of object points, and the complex amplitude distribution on the hologram plane is obtained by the superposition of the spherical waves emitted from each point, which is time-consuming. Many investigations have been presented to speed up the point-based calculation [23–28]. The look-up table (LUT) method [6] was first proposed to accelerate the calculation, where the fringe patterns (FPs) of all possible object points are pre-calculated and stored in a table. However, it needs a huge memory and cannot reduce the computational complexity. The novel look-up-table method [7], the split look-up-table method [9], the compressed look-up-table (C-LUT) method [10] and the accurate compressed look-up-table method [29] were proposed to reduce the memory usage of the LUT and the computational complexity. However, the memory usage is still on the order of megabytes (MBs). Therefore, the memory usage needs to be reduced further.

In recent years, the wavefront recording plane (WRP) method [30] has been developed to implement fast CGH computation. The method consists of two steps: in the first step, the spherical waves emitted from the 3D object are recorded on a WRP, which is placed between the 3D object and the CGH. In the second step, the CGH is obtained by a diffraction calculation from the WRP to the CGH based on the fast Fourier transform (FFT). If the WRP is set near the 3D object, the computational complexity of the first step is reduced. Nevertheless, the first step still consumes considerable time because of the direct point-to-point online calculation in the conventional WRP method. Several methods have been proposed to accelerate the first step [31–37]. An improved WRP method using an LUT [31] was proposed to accelerate the first step. The double-WRP and multiple-WRP methods were proposed to generate holograms of 3D objects with a long depth [32–34], and the arrangement of the WRPs can be optimized further [35]. The tilted WRP method [36] was proposed to decrease the calculation cost of the first step by using a tilted diffraction calculation, and the stretched WRP method [37] was proposed to decrease the calculation cost by using a non-uniformly sampled diffraction calculation. However, for almost all reported WRP methods, the computation speed of the first step is still not fast enough for real-time holographic display, so the WRP computation process needs to be accelerated dramatically.

In this paper, we propose a specific LUT method adapted to the WRP method to accelerate the first step. The calculation time of the first step is reduced greatly and the memory usage is reduced rapidly, so we call our method the high-compressed look-up-table based on wavefront-recording plane (HCLUT-WRP) method. Numerical simulations and optical experiments are performed to verify the validity of the proposed method.

2. Principles and methods

In the WRP method, the generation of the CGH involves two steps. Figure 1 shows an outline of the method.

Fig. 1. Diagram of the WRP method to generate the CGH.

In the first step, we place the WRP close to the object and record the complex amplitude (CA) of the object on the WRP using the following equation:

$$W(\xi ,\eta ) = \sum\limits_{j = 0}^{N - 1} {{A_j}\exp [i(k{r_j} + {\phi _j})]}$$
where ${r_j} = {\left( {(\xi - {x_j})^2} + {(\eta - {y_j})^2} + {({z_j} - {z_2})^2} \right)^{1/2}} \approx ({z_j} - {z_2}) + \frac{{{(\xi - {x_j})}^2} + {{(\eta - {y_j})}^2}}{2({z_j} - {z_2})}$ is the distance between the object point and the coordinate $(\xi ,\eta )$ on the WRP. $W(\xi ,\eta )$ is the CA on the WRP plane, and N is the number of object points. $({x_j},{y_j},{z_j})$ and ${A_j}$ are the coordinates and amplitude of object point $j,$ respectively, and ${\phi _j}$ is its initial phase. $\lambda$ is the wavelength and $k = 2\pi /\lambda$ is the wave number. ${z_2}$ is the perpendicular distance between the WRP plane and the CGH plane, and ${z_1} = {z_j} - {z_2}$ is the distance between the object point and the WRP plane.
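For concreteness, the direct (conventional) evaluation of Eq. (1), i.e., the point-to-point accumulation that the proposed method later accelerates, can be sketched as follows. This is only an illustrative NumPy sketch (the authors' implementation is in MATLAB); coordinates are in metres and the WRP grid is assumed to be centred on the optical axis.

```python
import numpy as np

def wrp_direct(points, amps, phases, wavelength, pitch, shape, z2):
    """Direct evaluation of Eq. (1): accumulate the spherical wave of every
    object point on the WRP using the paraxial approximation of r_j."""
    k = 2 * np.pi / wavelength
    ny, nx = shape
    xi = (np.arange(nx) - nx // 2) * pitch          # WRP coordinate xi
    eta = (np.arange(ny) - ny // 2) * pitch         # WRP coordinate eta
    XI, ETA = np.meshgrid(xi, eta)
    W = np.zeros(shape, dtype=complex)
    for (x, y, z), A, phi in zip(points, amps, phases):
        z1 = z - z2                                 # distance from point to WRP
        r = z1 + ((XI - x)**2 + (ETA - y)**2) / (2 * z1)
        W += A * np.exp(1j * (k * r + phi))
    return W
```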

In the second step, we calculate the CA on the CGH using Fresnel diffraction. Because the amplitude and phase information of the object points are recorded in the WRP, the diffraction calculation from the WRP to the CGH is equivalent to directly calculating the CA on the CGH from the 3D object. In Fresnel diffraction, the complex amplitude distribution on the CGH plane could be expressed by:

$$\begin{aligned} H(x'_p, y'_q) &= \frac{\exp ({ik{z_2}})}{i\lambda {z_2}}\iint {W(\xi ,\eta )}\,\exp \left( ik\frac{{{(x'_p - \xi )}^2} + {{(y'_q - \eta )}^2}}{2{z_2}}\right) d\xi \,d\eta \\ &= \frac{\exp ({ik{z_2}})}{i\lambda {z_2}}\,{\mathcal{F}^{ - 1}}\left[ {\mathcal{F}[{W(\xi ,\eta )}]\cdot \mathcal{F}[{h(x'_p, y'_q)}]} \right]\end{aligned}$$
where ${\cal F}$ and ${{\cal F}^{ - 1}}$ are the Fourier and inverse Fourier operators, and the impulse response is $h(x{^{\prime}_p}, y{^{\prime}_q}) = \exp (ik\frac{{x{^{\prime}_p}^2 + y{^{\prime}_q}^2}}{{2{z_\textrm{2}}}}).$
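The second step therefore maps directly onto two forward FFTs and one inverse FFT, as in Eq. (2). A minimal sketch, assuming the WRP and the CGH share the same sampling pitch and omitting zero-padding and band-limiting of the impulse response, might look like this:

```python
import numpy as np

def fresnel_to_cgh(W, wavelength, pitch, z2):
    """Propagate the WRP field W to the CGH plane using the convolution
    form of Fresnel diffraction, Eq. (2)."""
    k = 2 * np.pi / wavelength
    ny, nx = W.shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    h = np.exp(1j * k * (X**2 + Y**2) / (2 * z2))   # impulse response h
    # H = exp(ik z2)/(i lambda z2) * F^{-1}[ F[W] . F[h] ]
    H = np.fft.ifft2(np.fft.fft2(W) * np.fft.fft2(np.fft.ifftshift(h)))
    return np.exp(1j * k * z2) / (1j * wavelength * z2) * H
```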

The maximum diffraction angle $\theta$ for reconstructing a 3D object from the CGH can be expressed as

$$\sin \theta = \lambda /2p$$
where p is the sampling pitch on the CGH. The spreading area ${L_j}$ of the $j$-th object point on the WRP can be expressed as
$${L_j} = 2|{{z_j} - {z_2}} |\tan \theta$$
The spreading area ${L_j}$ increases as the distance ${z_j} - {z_2}$ increases. Therefore, if the WRP is located near the object points, the computational complexity can be reduced drastically.
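As a numerical illustration with the parameters used later in the optical experiments ($\lambda = 532$ nm, $p = 8\;\mathrm{\mu m}$), Eq. (3) gives $\sin \theta = 532 \times {10^{ - 9}}/(2 \times 8 \times {10^{ - 6}}) \approx 0.033,$ i.e., $\theta \approx 1.9^\circ .$ For a point located ${z_1} = {z_j} - {z_2} = 2$ mm from the WRP, Eq. (4) yields ${L_j} \approx 2 \times 2\;\mathrm{mm} \times \tan 1.9^\circ \approx 0.13$ mm, or about 17 pixels, whereas at ${z_1} = 20$ mm the spread grows to about 1.3 mm (roughly 166 pixels), so the per-point workload, which scales with $L_j^2,$ increases by two orders of magnitude.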

In the first step, Eq. (1) can be simplified as

$$W(\xi ,\eta ) = \sum\limits_{j = 1}^N {{A_j}} \exp [ik({z_j} - {z_2})]{\{ \exp [\frac{{{{(\xi - {x_j})}^2} + {{(\eta - {y_j})}^2}}}{2}]\} ^{(\frac{{ik}}{{{z_j} - {z_2}}})}}$$
We split the vertical and horizontal information, and Eq. (5) can be written as:
$$W(\xi ,\eta ) = \sum\limits_{j = 1}^N {{A_j}} \exp [ik({z_j} - {z_2})]{\{ \exp [\frac{{{{(\xi - {x_j})}^2}}}{2}]\cdot \exp [\frac{{{{(\eta - {y_j})}^2}}}{2}]\} ^{(\frac{{ik}}{{{z_j} - {z_2}}})}}$$
We define $H(\xi ,{x_j}) = \exp [{(\xi - {x_j})^2}/2]$ as the horizontal modulation factor, $V(\eta ,{y_j}) = \exp [{(\eta - {y_j})^2}/2]$ as the vertical modulation factor, and ${L_1}({z_j}) = \exp [ik({z_j} - {z_2})]$ and ${L_2}({z_j}) = ik/({z_j} - {z_2})$ as the longitudinal modulation factors. Raising the product $H(\xi ,{x_j})V(\eta ,{y_j})$ to the power ${L_2}({z_j})$ simply restores the Fresnel phase factor $\exp [ik({{(\xi - {x_j})}^2} + {{(\eta - {y_j})}^2})/2({z_j} - {z_2})],$ so the horizontal and vertical factors themselves are independent of depth.

So Eq. (6) can be simplified as

$$W(\xi ,\eta ) = \sum\limits_{j = 1}^N {{A_j}} {L_1}({z_j}){(H(\xi ,{x_j})V(\eta ,{y_j}))^{{L_2}({z_j})}}$$
The ${N_{xy}}$ object points falling on the same layer of the 3D object share the same longitudinal modulation factors ${L_1}({z_j})$ and ${L_2}({z_j}),$ so Eq. (7) can be written as
$$W(\xi ,\eta ) = \sum\limits_{{j_z} = 1}^{{N_z}} {[\sum\limits_{{j_{xy}} = 1}^{{N_{xy}}} {{A_{{j_{xy}}}}{{(H(\xi ,{x_{{j_{xy}}}})V(\eta ,{y_{{j_{xy}}}}))}^{{L_2}({z_{{j_z}}})}}} } ]{L_1}({z_{{j_z}}})$$
where ${j_z}\;( = 1,2, \cdots ,{N_z})$ is the index of the 2D image planes (depth layers) of the 3D object, and ${j_{xy}}\;( = 1,2, \cdots ,{N_{xy}})$ is the index of the points in each 2D image plane.

The ${N_y}$ object points falling on the same vertical line of each 2D image plane share the same horizontal modulation factor $H(\xi ,{x_j}),$ so Eq. (8) can be written as

$$W(\xi ,\eta ) = \sum\limits_{{j_z} = 1}^{{N_z}} {{\{ }\sum\limits_{{j_x} = 1}^{{N_x}} {[\sum\limits_{{j_y} = 1}^{{N_y}} {{A_{{j_y}}}V{{(\eta ,{y_{{j_y}}})}^{{L_2}({z_{{j_z}}})}}} ]} \textrm{ }H{{(\xi ,{x_{{j_x}}})}^{{L_2}({z_{{j_z}}})}}{\} }} {L_1}({z_{{j_z}}})$$
where ${j_x}\;( = 1,2, \cdots ,{N_x})$ and ${j_y}\;( = 1,2, \cdots ,{N_y})$ are the indices of the points along the horizontal and vertical directions in each 2D image plane.

Because the extent of the modulation factors (the diffraction range) is proportional to ${z_j} - {z_2}$ and the modulation factors themselves do not contain depth information, we only need to precalculate and store the horizontal and vertical modulation factors of the 3D object over the maximum diffraction range in the precomputation. Therefore, the memory usage of the modulation factors is $2\max \{ {L_j}\} ,$ i.e., two one-dimensional arrays of length $\max \{ {L_j}\} .$ The operator max{·} denotes taking the maximum. During the calculation of the WRP, we accumulate the CA by referencing the modulation factors.
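As an illustration of this pre-computation, a minimal sketch of the table construction is given below. The function name, the grid-aligned sampling, and the use of a single shared 1D table for $H$ and $V$ are our assumptions for illustration and are not taken from the authors' code.

```python
import numpy as np

def build_modulation_lut(wavelength, pitch, z1_max):
    """Pre-compute the shared 1D table of modulation factors over the
    maximum diffraction range (precomputation steps 1 and 2)."""
    sin_t = wavelength / (2 * pitch)                # Eq. (3)
    tan_t = sin_t / np.sqrt(1 - sin_t**2)
    half = int(np.ceil(z1_max * tan_t / pitch))     # max L_j / 2 in pixels, Eq. (4)
    offs = np.arange(-half, half + 1) * pitch       # sampled (xi - x_j) offsets
    base = np.exp(offs**2 / 2.0)                    # H and V share this table
    return base, half
```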

Because the diffraction range depends on ${z_j} - {z_2},$ we define $\overline L = \frac{1}{N}\sum\limits_{j = 1}^N {{L_j}}$ as the average diffraction range of all object points, so the average computational cost per point on the WRP is ${\overline L ^2}.$ Therefore, the computational complexity of the proposed method between the WRP and the 3D object is ${N_z}({N_y}({N_x}\overline L + {\overline L ^2}) + {\overline L ^2}),$ which is much smaller than that of the conventional WRP method, ${N_x}{N_y}{N_z}{\overline L ^2}.$ Next, the computational complexity between the WRP and the CGH, which involves two FFTs and one inverse FFT, is $3pq\log pq.$ If the number of object points is very large, the total computational complexity ${N_z}({N_y}({N_x}\overline L + {\overline L ^2}) + {\overline L ^2}) + 3pq\log pq$ is approximately ${N_x}{N_y}{N_z}\overline L .$
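For illustration, taking ${N_x} = {N_y} = 200$ points per layer (as in the timing test in Section 3) and assuming an average spread of $\overline L \approx 100$ pixels, the first-step cost per layer drops from roughly ${N_x}{N_y}{\overline L ^2} = 4 \times {10^8}$ operations for the conventional WRP method to about ${N_y}({N_x}\overline L + {\overline L ^2}) + {\overline L ^2} \approx 6 \times {10^6},$ nearly two orders of magnitude fewer.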

An example of the calculation for three object points is given in Fig. 2. In the following description, steps 1 and 2 form the precomputation step, steps 3–5 form the WRP generation step, and step 6 is the CGH generation step.

Fig. 2. Example of the calculation for three object points A, B, and C to the WRP. (a) The positions of object points A, B, and C and the WRP to which they propagate. (b)–(d) The generation of the CA on the WRP for object points B and C. (e)–(g) The generation of the CA on the WRP for object point A. (h) The generation of the final WRP for object points A, B, and C.

Precomputation step

  • 1. Calculate the maximum diffraction range of the FPs of the 3D object on the WRP based on Eq. (4).
  • 2. Calculate the modulation factors of the 3D object over the maximum diffraction range and store them in the LUT.
WRP generation step
  • 3. Determine the diffraction range of each object depth and look up the corresponding optical-field contribution from the LUT for the given depth.
  • 4. Apply multiplication and addition operations to calculate the CA of each object depth on the WRP.
  • 5. Sum the CA of all depths on the WRP to generate the final WRP.
CGH generation step
  • 6. Generate the CA on the CGH by Fresnel diffraction from the final WRP (a code sketch of steps 3–6 follows the list).
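A simplified sketch of the WRP generation step (steps 3–5) and the hand-off to the CGH generation step (step 6) is given below, following the column-wise factorization of Eq. (9). Several details are our assumptions rather than the paper's: object points are snapped to the WRP grid, boundary clipping of the spread region is ignored, and the helper names (`hclut_wrp_layer`, `fresnel_to_cgh`, `build_modulation_lut`) are ours, with `base` and `half` taken from the pre-computation sketch above.

```python
import numpy as np

def hclut_wrp_layer(W, xs, ys, amps, z1, wavelength, pitch, base, half):
    """Steps 3-5 for one depth layer, following Eq. (9): an inner sum of
    vertical factors per occupied column, then an outer product with that
    column's horizontal factor. xs, ys are integer WRP grid indices."""
    ny, nx = W.shape
    k = 2 * np.pi / wavelength
    L1 = np.exp(1j * k * z1)                       # L1(z_j) = exp[ik(z_j - z_2)]
    L2 = 1j * k / z1                               # L2(z_j) = ik/(z_j - z_2)
    sin_t = wavelength / (2 * pitch)
    tan_t = sin_t / np.sqrt(1 - sin_t**2)
    hw = min(half, int(np.ceil(abs(z1) * tan_t / pitch)))    # L_j/2 in pixels
    mod = base[half - hw: half + hw + 1] ** L2     # step 3: LUT window raised to L2
    layer = np.zeros_like(W)
    for xc in np.unique(xs):                       # occupied columns j_x
        sel = xs == xc
        v = np.zeros(ny, dtype=complex)            # inner sum over j_y
        for yc, a in zip(ys[sel], amps[sel]):
            v[yc - hw: yc + hw + 1] += a * mod     # no edge clipping (assumed)
        layer[:, xc - hw: xc + hw + 1] += np.outer(v, mod)   # step 4
    W += L1 * layer                                # step 5: add this depth layer
    return W

# Step 6: after all layers are accumulated, the CGH follows from the
# Fresnel-propagation sketch given earlier in this section, e.g.
#   for z1, xs, ys, amps in layers:
#       hclut_wrp_layer(W, xs, ys, amps, z1, wavelength, pitch, base, half)
#   H = fresnel_to_cgh(W, wavelength, pitch, z2)
```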

3. Numerical simulations

To demonstrate the feasibility of the proposed method, we perform numerical simulations. We reconstruct a 3D scene located at different distances, and the parameters used are listed in Table 1. Our programs are run in MATLAB on a computer with an Intel Core i7-7700 CPU (3.6 GHz) and 8 GB of RAM.

Table 1. CGH computation parameters

In the numerical simulations, the range of the distance between the object points and the WRP plane is $2\,\textrm{mm} \le {z_1} \le 20\,\textrm{mm}.$ In the proposed method, the memory usage is only about 8 kilobytes, which is about 782 times smaller than that of the LUT-WRP method (6.26 MB). It is noteworthy that the memory usage is determined by the longest wavelength when generating the color LUTs, because the diffraction range increases with the wavelength. That is to say, when generating the color LUTs, we only need to calculate the horizontal and vertical modulation factors of the red channel over its maximum diffraction range, which reduces the memory usage further. In the WRP generation step, we intercept the corresponding modulation factors of green and blue from the red table, as sketched below.
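Because the stored table itself is independent of wavelength and only the window length ${L_j}$ in Eq. (4) changes with $\lambda ,$ "intercepting" the green and blue factors from the red table simply means taking a central sub-section of it. A hedged sketch (variable names are ours), reusing `base` and `half` from the pre-computation sketch in Section 2:

```python
import numpy as np

def intercept_window(base, half, wavelength, pitch, z1):
    """Return the central sub-section of the red-channel table that covers
    the (narrower) diffraction range of a shorter wavelength at depth z1."""
    sin_t = wavelength / (2 * pitch)
    tan_t = sin_t / np.sqrt(1 - sin_t**2)
    hw = min(half, int(np.ceil(abs(z1) * tan_t / pitch)))   # half of L_j in pixels
    return base[half - hw: half + hw + 1]
```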

As shown in Fig. 3, the proposed method is the fastest among the four methods. Although the Double WRPs method reduces the calculation complexity, its calculation is still slow due to the direct calculation in the WRP generation step, while the large memory usage restricts the CGH computation time of the LUT-WRP method. When the number of object points is 200 × 200, the computation time of the proposed method is about 96, 78, and 70 times faster than that of the WRP, LUT-WRP, and Double WRPs methods, respectively.

Fig. 3. Comparison of the CGH computation time using the WRP, LUT-WRP, Double WRPs, and HCLUT-WRP methods.

In the simulations, we reconstruct a teapot and a pyramid located at different distances to verify the feasibility of the proposed method. Figures 4(a) and 4(b) are the monochrome numerical simulation results, and Figs. 4(c) and 4(d) are the color numerical simulation results. Figures 4(a) and 4(c) show the camera focusing on the teapot at 224 mm, and Figs. 4(b) and 4(d) show the camera focusing on the pyramid at 234 mm. From Fig. 4, we can see that the teapot changes from focused to blurred while the pyramid changes from blurred to focused, which verifies that the method provides correct depth information.

Fig. 4. Numerical simulation results using the proposed method. (a) and (b) are the monochrome results focused at 224 mm and 234 mm, respectively; (c) and (d) are the color results focused at 224 mm and 234 mm, respectively.

4. Optical experiments

To demonstrate the feasibility of the proposed method, we perform optical experiments. The size of the object is 400 × 400 points. The reconstructed image is projected by a phase-only spatial light modulator (SLM) with a pixel size of 8 µm. The resolution of the reconstructed image is 1080 × 1080, and it is captured by a CCD camera (Lumenera INFINITY 4-11C). The wavelength of the reconstruction light is 532 nm. In the optical experiments, the zero-order beam elimination method [38] is adopted to improve the quality of the reconstructed images. In addition, the temporal multiplexing method is utilized to generate the color holographic display, where the red, green, and blue components are reconstructed and combined into color objects by time integration. The schematic of the optical experimental setup for reconstruction is shown in Fig. 5.

Fig. 5. Setup of the holographic display system: SLM is the spatial light modulator, PC is the personal computer, and L1 and L2 are Fourier transform lenses.

Figure 6 shows the optically reconstructed 3D scenes obtained by using the proposed method. Figures 6(a) and 6(b) show the monochrome optical experimental results, and Figs. 6(c) and 6(d) show the color optical experimental results. From Fig. 6, we can see that the proposed method preserves the depth information well, and the optical experimental results are in accord with the numerical simulation results. The optical experimental results indicate that the display accuracy is not compromised compared with previous WRP methods, despite the acceleration in speed and the reduction in memory usage.

Fig. 6. Optical experimental results using the proposed method. (a) and (b) are the monochrome results focused at 224 mm and 234 mm, respectively; (c) and (d) are the color results focused at 224 mm and 234 mm, respectively.

5. Conclusion

We propose the HCLUT-WRP method, based on an LUT and a WRP, to achieve two significant improvements: lower LUT memory usage and faster CGH computation. In the precomputation, we calculate the maximum diffraction range and compress the horizontal and vertical modulation factors over this range into dual 1D data arrays, so the LUT is reduced to the order of kilobytes. In the WRP generation step, we generate the WRP by simple arithmetic operations. Numerical simulations and optical experiments are performed to verify the proposed method, and the results match well with each other. Our future work will focus on further optimization, such as multiple WRPs, tilted WRPs, and parallel computing using a graphics processing unit. The proposed method shows great potential owing to its small memory requirement. Therefore, it is expected to realize dynamic 3D holographic display and to be widely applied in holographic displays in the future.

Funding

National Natural Science Foundation of China (61420106014, 61575024, 61975014); Newton Fund; National Basic Research Program of China (973 Program) (2017YFB1002900).

Disclosures

The authors declare no conflicts of interest.

References

1. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]  

2. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008). [CrossRef]

3. T. Kozacki, M. Kujawińska, G. Finke, W. Zaperty, and B. Hennelly, “Holographic capture and display systems in circular configurations,” J. Disp. Technol. 8(4), 225–232 (2012). [CrossRef]  

4. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19(10), 9147–9156 (2011). [CrossRef]  

5. A. D. Stein, Z. Wang, and J. J. S. Leigh, “Computer-generated holograms: a simplified ray-tracing approach,” Comput. Phys. 6(4), 389–392 (1992). [CrossRef]  

6. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

7. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]  

8. H. Sato, T. Kakue, Y. Ichihashi, Y. Endo, K. Wakunami, R. Oi, K. Yamamoto, H. Nakayama, T. Shimobaba, and T. Ito, “Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration,” Sci. Rep. 8(1), 1500 (2018). [CrossRef]  

9. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. Tanjung, C. Tan, and T. C. Chong, “Fast CGH computation using SLUT on GPU,” Opt. Express 17(21), 18543–18555 (2009). [CrossRef]  

10. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). [CrossRef]  

11. S. C. Kim and E. S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009). [CrossRef]  

12. D. Pi, J. Liu, R. Kang, Z. Zhang, and Y. Han, “Reducing the memory usage of computer-generated hologram calculation using accurate high-compressed look-up-table method in color 3D holographic display,” Opt. Express 27(20), 28410–28422 (2019). [CrossRef]  

13. S. C. Kim, J. H. Yoon, and E. S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. 47(32), 5986–5995 (2008). [CrossRef]  

14. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation,” Appl. Opt. 53(7), 1354–1362 (2014). [CrossRef]  

15. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). [CrossRef]  

16. Y. Pan, Y. Wang, J. Liu, X. Li, J. Jia, and Z. Zhang, “Analytical brightness compensation algorithm for traditional polygon-based method in computer-generated holography,” Appl. Opt. 52(18), 4391–4399 (2013). [CrossRef]  

17. J. Park, S. Kim, H. Yeom, H. Kim, H. Zhang, B. Li, Y. Ji, S. Kim, and S. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015). [CrossRef]  

18. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56(13), F37–F44 (2017). [CrossRef]  

19. M. Bayraktar and M. Özcan, “Method to calculate the far field of three-dimensional objects for computer-generated holography,” Appl. Opt. 49(24), 4647–4654 (2010). [CrossRef]

20. J. Chen and D. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

21. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

22. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). [CrossRef]  

23. S. C. Kim, J. M. Kim, and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef]

24. S. C. Kim, X. B. Dong, M. W. Kwon, and E. S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express 21(9), 11568–11584 (2013). [CrossRef]  

25. S. C. Kim, X. B. Dong, and E. S. Kim, “Accelerated one-step generation of full-color holographic videos using a color-tunable novel-look-up-table method for holographic three-dimensional television broadcasting,” Sci. Rep. 5(1), 14056 (2015). [CrossRef]  

26. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). [CrossRef]

27. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). [CrossRef]  

28. Z. Zeng, H. Zheng, Y. Yu, and A. K. Asundi, “Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm,” Opt. Lasers Eng. 93, 47–54 (2017). [CrossRef]

29. C. Gao, J. Liu, X. Li, G. Xue, J. Jia, and Y. Wang, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015). [CrossRef]  

30. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]

31. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef]  

32. A. Phan, M. Piao, S. Gil, and N. Kim, “Generation speed and reconstructed image quality enhancement of a long-depth object using double wavefront recording planes and a GPU,” Appl. Opt. 53(22), 4817–4824 (2014). [CrossRef]  

33. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram for RGB and depth images using wavefront recording plane method,” Photonics Lett. Pol. 6(3), 90–92 (2014). [CrossRef]  

34. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015). [CrossRef]  

35. N. Hasegawa, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of hologram generation by optimizing the arrangement of wavefront recording planes,” Appl. Opt. 56(1), A97–A103 (2017). [CrossRef]  

36. D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer generated holograms using tilted wavefront recording plane method,” Opt. Express 23(2), 1740–1747 (2015). [CrossRef]  

37. C. Chang, J. Wu, Y. Qi, C. Yuan, S. Nie, and J. Xia, “Simple calculation of a computer-generated hologram for lensless holographic 3D projection using a nonuniform sampled wavefront recording plane,” Appl. Opt. 55(28), 7988–7996 (2016). [CrossRef]

38. H. Zhang, J. Xie, J. Liu, and Y. Wang, “Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection,” Appl. Opt. 48(30), 5834–5841 (2009). [CrossRef]  
