Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object

Open Access

Abstract

In this paper, we propose a new ultrafast layer based computer-generated hologram (CGH) calculation that exploits the sparsity of the holographic fringe pattern on each depth layer of a 3-D object. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe pattern at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method takes 10-20 msec for a 1024x1024-pixel CGH while providing visually plausible results.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Computer generated holograms (CGHs) have attracted much attention in the research and industrial fields of holographic displays. CGHs are holographic fringe patterns numerically calculated on a computer without an optical recording system. The optical wavefronts of a virtual object can be synthesized in a CGH without requiring a physical representation of the object [1, 2]. However, one of the main problems is the heavy computational load. In particular, to generate the CGH of a 3-D point cloud model, the complex amplitudes of all 3-D object points have to be calculated at every pixel on the CGH plane. As a result, the computational complexity of CGH calculation is proportional to the number of 3-D object points and the CGH resolution.

Various methods have been proposed to improve the calculation speed of the CGH of a 3-D object [3–16]. In point based methods, each 3-D object point is treated as a point light source. To reduce the computational cost of the point based method, look-up table (LUT) based approaches were proposed [3, 4]. However, the LUT based approaches require a large amount of memory. Recurrence based approaches using numerical approximation were proposed to avoid repeating nonlinear operations [5–7]. However, approximation errors can accumulate and propagate. To accelerate CGH calculation using fast Fourier transform (FFT) based diffraction, wavefront recording plane (WRP) based approaches were proposed using a virtual plane (i.e., the WRP) [8–11]. In the WRP based methods, however, the computational load still increases significantly with the number of points because ray tracing is required to calculate the WRP. In polygon based methods, the 3-D object is divided into thousands of polygon segments, i.e., meshes. In general, polygon based methods can achieve fast CGH calculation because the number of polygons is much smaller than the number of object points. To represent each plane (i.e., polygon) in 3-D space and its wavefront, many polygon based methods using plane wave decomposition were proposed [12–14]. However, polygon based methods require additional calculation for surface representation. In layer based methods, the 3-D object is sliced into multiple layers with depth information. The calculation can then be accelerated by computing the diffraction pattern of each layer with the FFT. The authors of [15] proposed a fast layer based method using a region fraction approach. In [16], a layer oriented method using the angular spectrum was proposed to calculate accurate diffraction fields. The authors of [17] proposed a layer based CGH calculation using layer classification and occlusion culling for 3-D objects. However, the computational cost of layer based methods increases with the CGH resolution.

Recently, the advantages of exploiting sparsity for fast CGH calculation have been demonstrated. The authors of [18] proposed a sparsity based fast CGH calculation method using the sparse FFT (sFFT). In [18], the sparsity of the holographic fringe pattern was investigated and exploited to improve the calculation speed. The CGH calculation was significantly accelerated by combining sparse Fresnel diffraction using the sFFT with a WRP based approach. Although the computation time for the CGH of 3-D objects was significantly reduced in [18], its computational cost still depended on the number of 3-D object points because of the WRP. The author of [19] proposed a CGH calculation method using a sparse fringe pattern via the wavelet transform. In [19], by discarding small wavelet coefficients through wavelet shrinkage, the CGH calculation was accelerated using only the large wavelet coefficients. However, it requires forward/inverse wavelet transforms and a sorting process to find the large wavelet coefficients in the wavelet domain.

In this paper, we propose a novel ultrafast layer based CGH calculation method that leverages the sparsity of the holographic fringe pattern on each depth layer of a 3-D point cloud model. The holographic fringe pattern of each depth layer can be rapidly calculated by planar diffraction with the FFT or sFFT, instead of ray tracing for point light diffraction. In particular, the sparsity of the fringe pattern can be exploited because each layer contains only a few object points at the same depth. In the proposed method, to effectively leverage this sparsity, the object points on a depth layer are divided into multiple sub-layers at the same depth so that each sub-layer contains even sparser object points [see Fig. 2]. As a result, multiple sub-layers, each including a small number of points (i.e., sparse object points), are generated.

In addition, we have observed that the holographic fringe pattern on a sub-layer has a few dominant signals concentrated around specific regions of the CGH plane, whereas most of the CGH plane contains very small or zero signals [see Fig. 3]. Based on this observation, we devise an ultrafast CGH calculation using a sparse template holographic fringe pattern at each depth distance. The sparse template holographic fringe pattern is a small-sized holographic fringe pattern of a single object point that includes only a few dominant signals (i.e., sparse signals). Therefore, instead of calculating the entire holographic fringe pattern of every depth layer with a Fourier-based diffraction model, the holographic fringe pattern of each depth layer can be generated by simply adding the sparse template holographic fringe patterns. The final CGH is obtained by superimposing the holographic fringe patterns of all depth layers. In the proposed CGH calculation, no Fourier-based diffraction calculation is needed on each depth layer. In addition, because the size of the sparse template holographic fringe pattern is much smaller than that of the entire CGH plane, the CGH calculation time is significantly reduced.

In our experiments, the CGH calculation times for 3-D objects show that the proposed method significantly accelerates the CGH calculation regardless of the number of object points and the CGH resolution. The visual results show that the proposed method provides visually plausible reconstructions with low computational complexity.

The remainder of this paper is organized as follows. Section 2 presents the proposed ultrafast CGH calculation method. In Section 3, experiments and results are presented to evaluate the performance of the proposed method. Finally, the discussion and conclusion are given in Sections 4 and 5, respectively.

2. Proposed fast CGH calculation method

Figure 1 shows an overview of the proposed layer based CGH calculation considering the sparsity of the holographic fringe pattern on each depth layer. As shown in Fig. 1, the proposed method consists of three sequential steps: 1) generation of multiple sub-layers based on the sparsity of each depth layer, 2) diffraction calculation on each depth layer using the sparse template holographic fringe pattern, and 3) superimposition of the holographic fringe patterns of all depth layers. Detailed descriptions of each step are given in the following subsections.

Fig. 1 Overview of the proposed ultrafast CGH calculation method using the sparse template holographic fringe pattern on depth layer.

2.1 Multiple sub-layer generation based on the sparsity on each depth layer

In layer based methods, the CGH calculation for a 3-D object can be accelerated because FFT based diffraction can be used to generate the holographic fringe pattern of each depth layer under plane wave propagation.

In this paper, we present a novel multiple sub-layer generation scheme based on the sparsity of each depth layer. Since the object points are classified into parallel 2-D depth layers, each layer contains a small number of points with the same depth. In the proposed method, the object points sharing the same depth are further allocated to multiple sub-layers so that the sparsity of the holographic fringe pattern on each sub-layer can be effectively leveraged. This helps reduce the calculation time when the sparse template holographic fringe pattern of Section 2.2 is used.

Let L = [L1, L2, ..., LD] denote the depth layer set that contains parallel 2-D depth layers at D different distances from the CGH plane. In our experiment, the spacing (∆z) between layers is about 0.01 mm [17]. In addition, Li = [li,1, li,2, ..., li,Ki] denotes the set of multiple sub-layers at the i-th depth distance, where Ki is the number of sub-layers at the i-th depth. li,j, of size W × H, corresponds to the j-th sub-layer at the i-th depth, where W and H are the width and height of the layer and are the same as the CGH resolution. Owing to the sub-layer division, li,j contains only a small number of object points, Pi,j (i.e., very sparse object points). Figure 2 shows the proposed multiple sub-layer generation at the same depth. In order to generate sparser light propagation from each plane, the object points at the same depth are classified into multiple sub-layers according to the number of points at that depth.

Fig. 2 Example of the proposed multiple sub-layer at i-th depth distance.
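For concreteness, the following Python/NumPy sketch (not from the paper) illustrates one way the depth-layer and sub-layer indices could be assigned, assuming the layer spacing ∆z ≈ 0.01 mm given above and the cap of 20 points per sub-layer mentioned in Section 3.2; the function and parameter names are hypothetical.

```python
import numpy as np

def generate_sublayers(points, delta_z=0.01e-3, max_points=20):
    """Assign each 3-D object point (x, y, z) a depth-layer index i and a
    sub-layer index j. Points sharing a depth layer are split into sub-layers
    holding at most `max_points` points each (Section 3.2 uses 20).
    `points` is a (P, 3) array; x, y are pixel positions, z is depth in metres.
    The splitting rule and names are illustrative assumptions."""
    z = points[:, 2]
    depth_idx = np.round((z - z.min()) / delta_z).astype(int)  # layer spacing dz ~ 0.01 mm

    sub_idx = np.zeros(len(points), dtype=int)
    for i in np.unique(depth_idx):
        members = np.flatnonzero(depth_idx == i)
        # split the points of depth layer i into sub-layers of <= max_points points
        sub_idx[members] = np.arange(len(members)) // max_points
    return depth_idx, sub_idx
```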

To efficiently generate the multiple sub-layers, a LUT is employed. By pre-generating the multiple sub-layers and storing them in the LUT, the computational load of sub-layer generation can be effectively reduced. In existing LUT based methods [3, 4], the LUT was used to save and load the pre-calculated complex amplitudes of 3-D objects, which required a large amount of memory to generate the CGHs. In this paper, on the other hand, only the horizontal and vertical pixel positions of the object points, the depth index i, and the sub-layer index j are stored in the LUT. Hence, the size of the LUT used in our experiment is P x 4 bytes, where P is the total number of object points and 4 is the number of bytes of an integer value. As shown in Section 3, the LUT in our method requires 531 Kbytes for Bunny (35,947 points).
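As a minimal sketch, the LUT could be laid out as one record per object point holding the pixel position, depth index, and sub-layer index; the exact integer width and packing are not specified in the text and are assumptions here (32-bit fields in this sketch).

```python
import numpy as np

# One LUT record per object point: horizontal/vertical pixel position (integers),
# depth-layer index i, and sub-layer index j. Field widths are an assumption.
lut_dtype = np.dtype([('u', 'i4'), ('v', 'i4'), ('depth', 'i4'), ('sub', 'i4')])

def build_lut(pixel_uv, depth_idx, sub_idx):
    """Pack the pre-computed sub-layer assignment into a LUT that is generated
    once offline and simply loaded at CGH-generation time."""
    lut = np.empty(len(depth_idx), dtype=lut_dtype)
    lut['u'], lut['v'] = pixel_uv[:, 0], pixel_uv[:, 1]
    lut['depth'], lut['sub'] = depth_idx, sub_idx
    return lut
```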

2.2 Ultrafast CGH calculation method with sparse template holographic fringe pattern on depth layer

Figure 3 shows an example of the holographic fringe pattern on a sub-layer that contains a few points after the multiple sub-layer classification. Each sub-layer has a small number of points with the same depth [see Fig. 3(a)]. As a result, the holographic fringe pattern of each sub-layer is likely to consist of sparse signals concentrated around the object points (light sources), as seen in Fig. 3(b). Most regions of the CGH plane have very small (gray area) or zero values, so they can be ignored. Figure 3(c) shows the holographic fringe pattern that retains only a few dominant signals around the object points, with zero values (black area) elsewhere (i.e., a sparse holographic fringe pattern). The fringe pattern in Fig. 3(c) was obtained by applying a threshold to the original fringe pattern in Fig. 3(b). The threshold was set to select roughly the top 5% of signals by magnitude in order to show the sparsity of the holographic fringe pattern. Figure 3(d) shows the numerical reconstruction from Fig. 3(c). As shown in Fig. 3(d), the object point light sources can be well reconstructed from the holographic fringe pattern even with only a few dominant signals (i.e., sparse signals). This result is consistent with [18], where it was shown experimentally that holographic fringe patterns are sparse enough that roughly the top 5–10% of dominant signals provide feasible visual quality of the reconstructed images (PSNR over 30 dB). Based on the sparsity of the holographic fringe pattern on a sub-layer, we devise a fast layer based CGH calculation using the sparse template holographic fringe pattern. Figure 4(a) shows the sparse template holographic fringe pattern, which is generated by the Fresnel diffraction calculation of a layer containing one object point under plane wave propagation at a given depth distance. Let ti denote the sparse template holographic fringe pattern at the i-th depth with a limited spatial extent, which can be written as

$$t_i(\xi,\eta)=\frac{\exp\!\left(j\frac{2\pi}{\lambda}z_i\right)}{j\lambda z_i}\iint l_i^t(x,y)\,\exp\!\left(j\frac{\pi}{\lambda z_i}\left((\xi-x)^2+(\eta-y)^2\right)\right)dx\,dy=\frac{\exp\!\left(j\frac{2\pi}{\lambda}z_i\right)}{j\lambda z_i}\,F^{-1}\!\left[F\!\left[l_i^t(x,y)\right]F\!\left[g(x,y)\right]\right] \tag{1}$$
where li^t(x, y) indicates the template layer at the i-th depth distance, which has a single object point at its center. The size of the sparse template layer is much smaller than that of the CGH plane. F[·] and F−1[·] denote the fast Fourier transform and the inverse fast Fourier transform, respectively. zi is the distance between the i-th depth layer and the CGH plane, and λ is the wavelength of the reference light [see Table 2]. (ξ, η) is the coordinate on the fringe pattern, and g(x, y) is the impulse response function of the Fresnel diffraction calculation.
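As an illustration of Eq. (1), the hedged sketch below propagates a single centred point over distance zi with the convolution (impulse-response) form of the Fresnel diffraction computed by FFTs, and then keeps only the central R x R block as the sparse template; the pixel pitch parameter, grid size, and cropping details are assumptions rather than the authors' implementation.

```python
import numpy as np

def sparse_template(z, wavelength, pitch, R, full_size=1024):
    """Minimal sketch of Eq. (1): Fresnel-propagate a single centred point to
    distance z (convolution form, evaluated with FFTs) and crop the central
    R x R block as the sparse template t_i."""
    N = full_size
    l_t = np.zeros((N, N), dtype=complex)
    l_t[N // 2, N // 2] = 1.0                      # template layer: one point at the centre

    x = (np.arange(N) - N // 2) * pitch            # sampling grid (assumed pixel pitch)
    X, Y = np.meshgrid(x, x)
    g = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))   # Fresnel impulse response

    pref = np.exp(1j * 2 * np.pi * z / wavelength) / (1j * wavelength * z)
    field = pref * np.fft.fftshift(
        np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(l_t)) * np.fft.fft2(np.fft.ifftshift(g))))

    c = N // 2
    return field[c - R // 2: c + R // 2, c - R // 2: c + R // 2]  # sparse R x R template
```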

Fig. 3 Example of holographic fringe pattern on a sub-layer including a few points. (a) One sub-layer with a few object point light source. (b) Holographic fringe pattern of (a), (c) Sparse holographic fringe pattern including a few dominant signals and zero values (black area), (d) Numerical reconstruction result from (c).

Fig. 4 Holographic fringe pattern of a sub-layer. (a) Proposed sparse template holographic fringe pattern generation, (b) Sparse object points (e.g., five points) in the sub-layer, (c) Holographic fringe pattern of the sub-layer obtained by adding the sparse template fringe patterns.

Figure 4(b) shows a sub-layer containing five object points. As shown in Fig. 4(a), the sparse template holographic fringe pattern at each depth distance can be calculated in advance. Then, the holographic fringe pattern of a sub-layer containing several object points can be obtained by adding the sparse template holographic fringe patterns at the positions of the point light sources on the sub-layer, as seen in Fig. 4(c). Let Hi = [hi,1, hi,2, ..., hi,Ki] denote the holographic fringe patterns of the i-th depth layer generated with the sparse template holographic fringe pattern ti. hi,j, of size W × H, corresponds to the holographic fringe pattern of the j-th sub-layer at the i-th depth, which can be written as

$$h_{i,j}(\xi,\eta)=\sum_{p=1}^{P_{i,j}} t_i\!\left(\xi-\xi_p+\frac{R}{2}+1,\ \eta-\eta_p+\frac{R}{2}+1\right), \tag{2}$$
where (ξp, ηp) indicates the position of the p-th object point on the sub-layer, at which the sparse template holographic fringe pattern is placed; the template covers ξ ∈ [ξp − R/2, ξp + R/2] and η ∈ [ηp − R/2, ηp + R/2]. R represents the width and height of the sparse template holographic fringe pattern ti.
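A minimal sketch of Eq. (2) follows: the pre-computed R x R template is pasted (added) at every object point position of the sub-layer. The boundary clipping for points near the edge of the CGH is an added assumption.

```python
import numpy as np

def sublayer_fringe(points_uv, template, W=1024, H=1024):
    """Build the fringe pattern of one sub-layer (Eq. (2)) by adding the
    pre-computed R x R sparse template at each object point position."""
    R = template.shape[0]
    h = np.zeros((H, W), dtype=complex)
    for (u, v) in points_uv:                      # (xi_p, eta_p) of each object point
        r0, c0 = v - R // 2, u - R // 2           # top-left corner of the template block
        r1, c1 = r0 + R, c0 + R
        # clip the template to the CGH boundaries (assumption for edge points)
        tr0, tc0 = max(0, -r0), max(0, -c0)
        r0, c0 = max(r0, 0), max(c0, 0)
        r1, c1 = min(r1, H), min(c1, W)
        h[r0:r1, c0:c1] += template[tr0:tr0 + (r1 - r0), tc0:tc0 + (c1 - c0)]
    return h
```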

The width and height R of the sparse template fringe pattern play a role similar to that of the threshold for dominant signal selection. The smaller R is, the fewer hologram signals are used to generate the holographic fringe patterns of the depth layers, and the fewer addition operations are required. Conversely, the larger R is, the more hologram signals are used and the more addition operations are required. That is, R governs the trade-off between the quality of the image reconstructed from the final CGH and the computational time of the CGH calculation.

Figure 5 shows the proposed fast CGH calculation using the sparse template holographic fringe pattern. The proposed method rapidly calculates the dominant holographic fringe patterns of each sub-layer using the sparse template holographic fringe patterns. First, the multiple sub-layers are generated using the LUT without additional calculation cost. Then, the holographic fringe pattern of each sub-layer is rapidly generated by simply adding the sparse template holographic fringe patterns at the positions of the object points on that sub-layer. Finally, the CGH is generated by superimposing the holographic fringe patterns of all sub-layers; that is, the final CGH is obtained by adding the holographic fringe patterns of all depth layers. Let u(ξ, η) denote the final CGH, which can be written as

Fig. 5 Ultrafast layer-based CGH calculation using the sparsity based sparse template holographic fringe pattern (ti) for 3-D point cloud.

$$u(\xi,\eta)=\sum_{i=1}^{D}\sum_{j=1}^{K_i} h_{i,j}(\xi,\eta). \tag{3}$$

In Eq. (3), since the sparse template holographic fringe patterns are pre-calculated for each depth distance, only summation operations are required to generate the holographic fringe pattern of each depth layer. Unlike [19], no additional processing such as a domain transform or sorting is required to extract the sparse dominant signals. Therefore, the calculation cost does not depend on the spatial resolution of the CGH plane.
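Putting the pieces together, the sketch below evaluates Eq. (3) by accumulating the contributions of all sub-layers of all depth layers; it reuses the hypothetical helpers from the earlier sketches (the LUT records from build_lut, templates from sparse_template, and sublayer_fringe from Eq. (2)).

```python
import numpy as np

def compute_cgh(lut, templates, W=1024, H=1024):
    """Minimal sketch of Eq. (3): superimpose the fringe patterns of all
    sub-layers of all depth layers. `templates[i]` is the pre-computed sparse
    template t_i for depth index i (e.g., a dict of arrays)."""
    u_cgh = np.zeros((H, W), dtype=complex)
    for i in np.unique(lut['depth']):
        t_i = templates[i]
        for j in np.unique(lut['sub'][lut['depth'] == i]):
            sel = (lut['depth'] == i) & (lut['sub'] == j)
            points_uv = np.stack([lut['u'][sel], lut['v'][sel]], axis=1)
            u_cgh += sublayer_fringe(points_uv, t_i, W, H)   # Eq. (2) per sub-layer
    return u_cgh
```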

3. Experiments and results

3.1 Data sets

To evaluate the performance of the proposed method, we used three public 3-D point cloud data sets. Two were collected from the Berkeley instance recognition data set (BigBIRD) [20]: Baby toy and Syrup. One was collected from the Stanford 3-D scanning repository [21]: Bunny. Table 1 lists detailed information on the data sets used in our experiment. The depth ranges were 4.8 mm for Baby toy, 9.6 mm for Syrup, and 6.8 mm for Bunny.

Table 1. Data sets in our experiments

3.2 Simulation setting

To calculate the CGHs from the 3-D point cloud data sets, we used an Intel Core i7-4770 CPU @ 3.40 GHz with 32 GBytes of memory, Microsoft Windows 7 Professional Service Pack 1, and Microsoft Visual Studio 2013. Table 2 lists the CGH calculation conditions. The holographic fringe patterns of the depth layers were calculated with 8 CPU threads using OpenMP. In our experiment, the number of object points (sparse object points) on each sub-layer was experimentally set to under 20; that is, if a depth layer had more than 20 object points, it was divided into multiple sub-layers so that each sub-layer had fewer than 20 object points.

Table 2. CGH calculation conditions in our experiments

3.3 Performance evaluation results for visual quality and computational speed

Experiments were performed in terms of calculation time and visual quality to evaluate the performance of the proposed method. We compared six CGH calculation methods: a ray tracing method, a LUT based method [3], a recurrence based method [6], a WRP based method [8], a sparsity based method using the sFFT [18], and the proposed method.

Table 3 shows the memory sizes of the LUT for the conventional LUT based method [3] and for the multiple sub-layer generation in the proposed method. Since the conventional LUT based method [3] stores pre-calculated complex amplitudes of 3-D objects, it requires a large amount of memory. In our method, on the other hand, a small LUT is sufficient to generate the 3-D depth layers because we store only four integer values per point for the pixel positions, depth index, and sub-layer index. By pre-generating the depth layers and storing them in this small LUT, we can effectively reduce the computational load of layer generation.

Table 3. Memory size of LUT for each data set

Table 4 shows the calculation time of the CGH generation for each 3-D point cloud data set. As shown in Table 4, the proposed method generates the CGH of a 3-D object much faster than the existing methods. In particular, the computational times of most existing methods increase rapidly with the number of object points, whereas the proposed method remains the fastest even for a large number of 3-D object points, as in Bunny. As shown in Table 4, the proposed method was at least 12,921 times faster than the ray tracing algorithm at 1024 x 1024 CGH resolution.

Table 4. Computational times [s] of the 1024x1024 CGH calculation

Figure 6 shows the numerical reconstructions from the CGHs generated by each calculation method, for visual quality comparison. In Fig. 6, the first and second rows on the left show the numerical reconstructions from the CGHs generated by the ray tracing and LUT based methods, respectively. These results are almost the same because the LUT based method stores and loads values pre-calculated by ray tracing. The third row on the left shows the numerical reconstructions obtained from the CGHs generated by the recurrence based method. Figures 6(d) and 6(e) show the reconstructions of the CGHs generated by the WRP based method and the sparsity based method using the sFFT, respectively. Figure 6(f) shows the numerical reconstructions of the CGHs obtained with the proposed method. In Figs. 6(a) and 6(b), the ray tracing and LUT based methods provide visually plausible results because no approximation is used in ray tracing. As shown in Fig. 6(c), the results of the recurrence based method show some distortions; one reason is error propagation in the recurrence based approach. In Fig. 6(f), the proposed method presents visually plausible results for the three data sets compared with the recent fast CGH calculation methods, namely the WRP based method [Fig. 6(d)] and the sparsity based method [Fig. 6(e)]. Table 5 shows the quantitative quality of the reconstructed images from the generated CGHs in terms of PSNR. For this purpose, the result of the ray tracing method was used as the reference image (i.e., ground truth), and the PSNR between the result of each method and the reference was measured. As shown in Table 5, the LUT based method provides higher PSNR because it saves all the complex amplitude values of the fringe pattern, but it requires a large amount of memory. Although the recurrence based method achieves better image quality on the simple objects, its result for a complex object such as Bunny is the lowest because of error propagation among neighboring points. Compared with the other fast methods, the proposed method achieves visually acceptable quality (i.e., around 30 dB PSNR) in Table 5. Consequently, the proposed method provides visually plausible images with a rapid calculation time.
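For reference, a minimal sketch of the PSNR measurement described above, with the ray-tracing reconstruction as the ground-truth image; the normalisation of both intensity images to a unit peak is an assumption of this sketch.

```python
import numpy as np

def psnr(reference, test):
    """PSNR (dB) between two numerically reconstructed intensity images,
    with the ray-tracing reconstruction as the reference, as in Table 5."""
    ref = reference / reference.max()   # normalise peak to 1 (assumption)
    tst = test / test.max()
    mse = np.mean((ref - tst) ** 2)
    return 10.0 * np.log10(1.0 / mse)
```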

Fig. 6 Visual results of the numerical reconstruction from the CGHs generated by five existing methods and the proposed method for each data set. (a) Results of the ray tracing, (b) Results of the LUT based method [3], (c) Results of the recurrence based method [6], (d) Results of the WRP based method [8], (e) Results of the sparsity based method using sFFT [18], (f) Results of the proposed method.

Table 5. PSNR [dB] for visual quality of the numerical reconstruction from the generated CGH

4. Discussion

The reason for limiting the resolution of the sparse fringe pattern is that the dominant (i.e., meaningful) signals are concentrated around the center of the object light source, as shown in Figs. 3(a) and 3(b), and visually feasible reconstruction quality can be obtained with only these sparse dominant signals, as shown in Figs. 3(c) and 3(d). Limiting the resolution of the sparse fringe patterns does not decrease the viewing angle of the reconstructed image because it does not limit the resolution of the final CGH (which is 1024x1024 pixels in this paper). Limiting the resolution of the sparse fringe patterns can decrease the quality of the reconstructed image. However, in our previous work [18], the experimental results showed that the hologram signals are sparse enough that roughly the top 5–10% of dominant signals provide feasible visual quality of the reconstructed images (PSNR over 30 dB). In this paper, the resolution of the sparse fringe pattern (200x200 pixels) is about 4% of that of the fringe pattern at each depth layer and of the final CGH (1024x1024 pixels). For the final CGH, the fringe patterns at each depth layer are obtained by summing several sparse template fringe patterns, as shown in Fig. 4(c). As a result, because more than 5–10% of the dominant signals contribute to the final CGH, the proposed method can preserve the quality of the reconstructed image.

The number of selected elements, i.e., the resolution of the sparse fringe pattern, needs to increase as the distance between an object point and the CGH plane increases in order to maintain visually plausible results. As this distance increases, the CGH resolution must also increase to provide visually plausible results, as in [1, 8, 19]. For example, when the distance is 0.4 m, a CGH resolution of 2048x2048 pixels is required to obtain image quality similar to that of a 1024x1024-pixel CGH at a distance of 0.2 m. Accordingly, in the proposed method, the number of selected elements of the sparse fringe pattern (i.e., the resolution of the sparse template fringe pattern) increases as well, e.g., 200x200 pixels at 0.2 m versus 400x400 pixels at 0.4 m. However, the ratio between the resolution of the sparse fringe pattern (400x400) and that of the resulting CGH (2048x2048) still remains about 4%.
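The ~4% figure can be checked directly from the quoted resolutions:

$$\frac{200^2}{1024^2}=\frac{40\,000}{1\,048\,576}\approx 3.8\%,\qquad \frac{400^2}{2048^2}=\frac{160\,000}{4\,194\,304}\approx 3.8\%$$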

5. Conclusion

In this paper, we proposed a new ultrafast CGH calculation using sparse template holographic fringe patterns on depth layers, which aims to accelerate the CGH calculation of 3-D objects. To effectively leverage the sparsity of each depth layer, we classified the object points at the same depth into multiple sub-layers and efficiently handled the sub-layer information with a LUT. The proposed layer based CGH calculation achieves a low computation time by simply adding the sparse template holographic fringe patterns, thereby leveraging the sparsity of the holographic fringe patterns. The final CGH was obtained by superimposing the holographic fringe patterns of all depth layers. Experimental results showed that the proposed method considerably accelerates the CGH calculation for 3-D objects (10-20 msec at 1024x1024 CGH resolution) while preserving visual quality. In future work, we will conduct an optical experiment with a spatial light modulator and holographic fringe patterns calculated by the proposed method.

Funding

Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea Government (MSIT) (No.2017-0-00780, Development of VR sickness reduction technique for enhanced sensitivity broadcasting).

References and links

1. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012). [PubMed]  

2. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38, 46–53 (2005).

3. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993).

4. S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009). [PubMed]  

5. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). [PubMed]  

6. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39(35), 6587–6594 (2000). [PubMed]  

7. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE 3956, 48–55 (2000).

8. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [PubMed]  

9. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015). [PubMed]  

10. D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer-generated holograms using tilted wavefront recording plane method,” Opt. Express 23(2), 1740–1747 (2015). [PubMed]  

11. D. Arai, T. Shimobaba, T. Nishitsuji, T. KaKue, N. Masuda, and T. Ito, “An accelerated hologram calculation using the wavefront recording plane method and wavelet transform,” Opt. Commun. 393, 107–112 (2017).

12. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). [PubMed]  

13. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015). [PubMed]  

14. X. Li, J. Liu, Y. Pan, and Y. Wang, “Improved polygon-based method for subwavelength pixel pitch computer generated holograms,” Opt. Commun. 390, 22–25 (2017).

15. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [PubMed]  

16. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [PubMed]  

17. P. Su, W. Cao, J. Ma, B. Cheng, X. Liang, L. Cao, and C. Jin, “Fast computer-generated hologram generation method for three-dimensional point cloud model,” J. Disp. Technol. 12(12), 1688–1694 (2016).

18. H. G. Kim, H. Jeong, and Y. Man Ro, “Acceleration of the calculation speed of computer-generated holograms using the sparsity of the holographic fringe pattern for a 3D object,” Opt. Express 24(22), 25317–25328 (2016). [PubMed]  

19. T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express 25(1), 77–87 (2017). [PubMed]  

20. A. Singh, J. Sha, K. S. Narayan, T. Achim, and P. Abbeel, “BigBIRD: A large-scale 3D database of object instances,” in Proc. Int. Conf. Robotics and Automation (IEEE, 2014), pp. 509–516.

21. M. Levoy, J. Gerth, B. Curless, and K. Pull, “The Stanford 3D Scanning Repository,” (2005) [online], http://graphics.stanford.edu/data/3Dscanrep/.
