Abstract
Digital holography is a promising display technology that can account for all human visual cues, with many potential applications in, among others, AR and VR. However, one of the main challenges in the computer-generated holography (CGH) needed for driving these displays is the high computational cost. In this work, we propose a new CGH technique for the efficient analytical computation of line and arc primitives. We express the solutions analytically by means of incomplete cylindrical functions, and devise an efficiently computable approximation suitable for massively parallel computing architectures. We implement the algorithm on a GPU (with CUDA), provide an error analysis, report real-time frame rates for CGH of complex 3D scenes of line-drawn objects, and validate the algorithm in an optical setup.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Computer generated holography (CGH) consists of numerical diffraction algorithms for efficiently calculating interference for various applications in holography, such as display technology [1], beam shaping [2] and interferometry [3], among many others. One of the main computational challenges is that every (virtual) scene element can potentially affect all hologram pixels. This problem is exacerbated for holographic video displays, requiring high-resolution hologram generation at video rates.
Holographic displays are often cited as the ultimate 3D display technology [4]. In principle, holograms can display virtual objects that are indistinguishable from real objects, since they account for all human visual cues. This makes them promising candidates for augmented and virtual reality applications: they do not suffer from the drawbacks of head-mounted displays with conventional screens, which typically cannot account for eye focus and therefore exhibit accommodation-vergence conflicts.
There are many kinds of CGH algorithms [4,5], whose relative advantages and drawbacks are reflected in attributes such as computational speed, supported graphical effects and scene geometry, numerical precision and visual realism. One major way of classifying CGH algorithms is by their used elementary constituents: point cloud methods [6–11], layer-based methods [12–14], polygon methods [15–17] and ray-based methods [18–20].
In this work, we investigate the problem of diffracting line-drawn segments. We target applications such as head-mounted displays, near-eye displays and navigation systems, but the algorithm could also serve as a building block for CGH of more complex virtual scenes. It may also have broader applications in diffraction theory beyond displays, e.g. the efficient computation of straight or curved line apertures.
This problem was recently addressed in [21], which we designate as the reference “CG line” (computer graphics) CGH method. It subdivides the scene into depth planes, utilizing a pre-computed one-dimensional signal per plane encoding a converged light distribution. For every curve, this buffer is added to the hologram plane at each of a series of equidistant sample points along the curve, spaced one pixel pitch apart. The buffer is thus swept along the curve, drawn perpendicularly to the curve’s tangent at every point. The algorithm succeeded in creating line-drawn object CGH at multiple depths in real time.
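The sweep described above can be sketched as follows. This is an illustrative, real-valued simplification (the actual method of [21] accumulates a complex-valued converged-light buffer per depth plane); the function name and nearest-pixel rasterization are our own assumptions.

```cpp
#include <vector>
#include <cmath>

// Illustrative sketch of the CG-line sweep: a precomputed 1D buffer `buf`
// is stamped perpendicularly to the tangent of a straight line segment,
// at sample points spaced one pixel pitch (here: 1 pixel) apart.
// Assumes a non-degenerate segment (length > 0).
void sweep_line(std::vector<double>& plane, int W, int H,
                double x0, double y0, double x1, double y1,
                const std::vector<double>& buf) {
    double dx = x1 - x0, dy = y1 - y0;
    double len = std::hypot(dx, dy);
    double tx = dx / len, ty = dy / len;   // unit tangent
    double nx = -ty, ny = tx;              // unit normal (perpendicular)
    int n = static_cast<int>(len);         // one sample per pixel pitch
    int half = static_cast<int>(buf.size()) / 2;
    for (int k = 0; k <= n; ++k) {
        double px = x0 + k * tx, py = y0 + k * ty;
        for (int j = 0; j < static_cast<int>(buf.size()); ++j) {
            // draw the buffer along the normal, centered on the curve point
            int xi = static_cast<int>(std::lround(px + (j - half) * nx));
            int yi = static_cast<int>(std::lround(py + (j - half) * ny));
            if (xi >= 0 && xi < W && yi >= 0 && yi < H)
                plane[yi * W + xi] += buf[j];
        }
    }
}
```

The self-intersection of stamped buffers on the hologram plane is what introduces the write dependencies discussed later in section 4.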
However, the current version of that method has a few drawbacks. It is primarily designed for shorter distances to the hologram plane and operates within a fixed set of depth planes. Furthermore, noticeable errors are present at the curve’s edges, especially at lower resolutions.
To address this, we introduce a novel way of computing the hologram of line-drawn objects. We propose an analytical technique for directly computing the value of diffracted line and circular arc apertures at any point (e.g. pixel). This eliminates the error present at edges, and imposes no constraints on the number of depth levels, up to one per line if needed. Moreover, the analytical formulation makes a parallel implementation more straightforward: we achieve video-rate CGH for driving an HD spatial light modulator (SLM) with complex 3D line-drawn content on a GPU.
The paper is structured as follows: in sections 2 and 3 we derive analytical expressions for the diffraction of curve apertures. We begin with the straight line case, and proceed with the derivation of general circular arc segments. We then derive an efficiently computable numerical approximation and provide an error analysis. In section 4, we compare calculation speed and accuracy with multiple CGH algorithms, including the reference point-based CGH and the CG line CGH algorithms. We examine the visual quality both in a simulation environment and in a holographic display setup. Finally, we conclude in section 5.
2. Line apertures
The expression for a single point-spread function $P$, describing the diffraction pattern induced by a luminous point at coordinates $(\delta , \epsilon , \zeta )$, is given by
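Under the Fresnel approximation, a commonly used form of this point-spread function (assumed here for context; the paper's exact normalization may differ) is

\[
P(x,y) = \exp\!\left(\frac{i\pi}{\lambda \zeta}\left[(x-\delta)^2 + (y-\epsilon)^2\right]\right),
\]

where $\lambda$ denotes the wavelength.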
The diffraction pattern for an arc aperture $A$, cf. Fig. 1(a), can be described as the superposition of $P(x,y)$ over all points lying on the arc with radius $\rho$ and spanning an angle $\alpha$, and is given by
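Assuming the arc is centered on the optical axis at depth $\zeta$ and parameterized by the angle $\phi$, this superposition can be written as a line integral (a hedged reconstruction; the paper's equation may differ in normalization):

\[
A(x,y) = \int_{0}^{\alpha} P\big(x,y;\ \delta=\rho\cos\phi,\ \epsilon=\rho\sin\phi,\ \zeta\big)\,\rho\,\mathrm{d}\phi .
\]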
2.1 Straight line apertures
We rewrite (3) as
Using $\mathcal {E}(t)$, we can rewrite (4) and obtain the final expression
3. Arc apertures
We convert expression (2) to polar coordinates $(x,y)=(r\cos {\theta }, r\sin {\theta })$:
Although $E(s,w)$ can be evaluated $\forall s,w \in \mathbb {R}$, we can reduce the space of values under consideration by leveraging the following symmetries:
3.1 Asymptotic approximation for large parameters
Substituting $u=\cos {w}$ in $E(s,w)$ and using (15), we obtain
Although this series can be used to compute $E(s,w)$ to arbitrary precision for large $s$, the convergence is too slow for our intended application, requiring too many calculations.
Instead, we propose a different approach. We use the following equality:
Finally, we would like to find a way to compute $H_0(s)$; unlike Bessel functions, there are no standard C++/CUDA implementations of Struve functions at the time of writing. We therefore use the following asymptotic form for Struve functions:
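As a concrete illustration, the large-argument asymptotic expansion of $H_0$ (DLMF 11.6.1) expresses it in terms of the Bessel function of the second kind $Y_0$, which standard C++17 math libraries do provide; the truncation order shown here is a generic sketch and need not match the paper's choice.

```cpp
#include <cmath>

// Leading terms of the large-argument asymptotic expansion of the Struve
// function H_0 (DLMF 11.6.1): H_0(s) ~ Y_0(s) + 2/(pi*s) - 2/(pi*s^3) + ...
// Truncation order chosen for illustration; valid for large s only.
double struve_H0_large(double s) {
    const double pi = 3.14159265358979323846;
    return std::cyl_neumann(0.0, s) + 2.0 / (pi * s) - 2.0 / (pi * s * s * s);
}
```

In CUDA device code, the built-in `y0(s)` can play the role of `std::cyl_neumann(0, s)`.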
3.2 Series approximation for small parameters
For small values of $w\sqrt {s}$, we need a different method. A natural first approach is to take the Taylor expansion in $s$ of the expression within the integral of $E(s,w)$,
We propose a different approach by taking the Taylor expansion of the exponent instead:
valid for small values of $w\approx \sin {w}$, after integration.
3.3 Error analysis
To know whether to choose $E_S(s,w)$ or $E_L(s,w)$ for any given $(s,w)$, we have to quantify which one has the smaller error. For this, we use the following measure of relative precision:
Finally, we have to consider the case $s\approx 0$, which will cause numerical problems since $s$ appears in the denominator for expressions in both $E_S$ and $E_L$. For this we can use the (truncated) Taylor expansion in (24); in conclusion we get
3.4 Evaluating arc aperture expressions
With the help of the incomplete cylindrical function $E(s,w)$, we can rewrite (11) as
The case for $w$ is trickier, as the input can be many multiples of $\tfrac {\pi }{2}$, including negative ones. Recursively applying the last two symmetries of (16) would be computationally sub-optimal, as this would introduce branch divergence across computation threads, which is unfavorable for parallel implementations. Instead, we combine all cases for $w$ using the following expression:
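To illustrate the design choice of avoiding branch divergence (this is a generic sketch of branch-free range reduction, not the paper's combined expression), the quadrant of $w$ can be extracted in closed form rather than by recursive case analysis:

```cpp
#include <cmath>

// Branch-free reduction of an angle w (possibly negative, possibly many
// multiples of pi/2) to a remainder in [0, pi/2) plus a quadrant index.
// Generic illustration of divergence-free reduction for SIMT hardware.
struct ReducedAngle { double w0; int quadrant; };

ReducedAngle reduce_quadrant(double w) {
    const double half_pi = 1.57079632679489661923;
    double q = std::floor(w / half_pi);       // signed quadrant count
    double w0 = w - q * half_pi;              // remainder in [0, pi/2)
    int quadrant = static_cast<int>(q) & 3;   // index 0..3 without branching
    return {w0, quadrant};
}
```

All threads execute the same instruction sequence regardless of the input, so a warp processing mixed quadrants incurs no serialization.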
4. Experiments
The experiments are partitioned in two parts. First, we perform numerical experiments to compare the calculation time and accuracy of the proposed algorithm with other references. Then, we test the algorithm in an optical setup and compare visual quality differences.
4.1 Numerical calculation experiments
We utilized two line segment data sets for our experiments: (1) the “Simple shapes” data set (Fig. 3), consisting of 40 straight lines and 6 circular segments, forming simple geometric shapes placed at multiple depth planes; (2) the “Seigaiha” data set (Fig. 4) made out of 144 arc segments, which is a traditional Japanese motif of superposed wave patterns.
We implemented multiple algorithms for computing line holograms.
- • Point-based: this is the exact reference algorithm, where we decompose all lines into points with a period equal to the pixel pitch. We sum over all point-spread functions according to (1).
- • FFT-based: this subdivides the scene into $D$ depth planes depending on the data set, directly draws the lines in their respective depth planes, and then applies an FFT-based convolution to compute the Fresnel diffraction. This requires $(D+1)$ FFT calculations: one for each plane, and a final extra (I)FFT to obtain the hologram after adding all the Fourier transforms together.
- • CG line method: the method proposed in [21], using a 1D line buffer which is drawn over the hologram plane along the curve.
- • Analytical method: the proposed algorithm used in this paper.
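The point-based reference above can be sketched as follows for a single hologram pixel and a straight segment. The PSF form $\exp(i\pi r^2/(\lambda z))$ is an assumption (the paper's Eq. (1) may carry a different normalization or amplitude factor), and the function name is ours.

```cpp
#include <complex>
#include <cmath>
#include <algorithm>

// Point-based reference sketch: decompose a straight line into samples
// spaced one pixel pitch apart and sum a Fresnel PSF per sample.
// Assumed PSF: exp(i*pi*r^2/(lambda*z)).
std::complex<double> line_hologram_pixel(double x, double y,
                                         double x0, double y0,
                                         double x1, double y1,
                                         double z, double lambda,
                                         double pitch) {
    const double pi = 3.14159265358979323846;
    double dx = x1 - x0, dy = y1 - y0;
    int n = std::max(1, static_cast<int>(std::hypot(dx, dy) / pitch));
    std::complex<double> acc(0.0, 0.0);
    for (int k = 0; k <= n; ++k) {
        double t = static_cast<double>(k) / n;
        double ex = x - (x0 + t * dx);          // offset to sample point
        double ey = y - (y0 + t * dy);
        acc += std::exp(std::complex<double>(0.0,
                        pi * (ex * ex + ey * ey) / (lambda * z)));
    }
    return acc;
}
```

The cost is proportional to the number of samples, i.e. to the total curve length, which is exactly what the analytical method avoids.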
The algorithms may be implemented using multi-threading on a CPU, or using massively parallel processing on a GPU. The multi-threaded CPU implementations ran on a machine with an Intel Core i7-8850H at 2.60 GHz with 16 GB of RAM, using either Visual Studio’s “concurrency::parallel_for” or OpenMP with maximal CPU core utilization. The GPU versions of the algorithm ran on a machine with an Intel Xeon E5-2687W v4 processor at 3 GHz, 256 GB of RAM, and an NVIDIA TITAN RTX GPU with CUDA 11.0, enabling CUDA compute capability 7.5. All versions of the algorithm were implemented in C++17 running on Windows 10, using 32-bit floating-point precision.
The calculation times for all tested algorithms applied to both data sets are summarized in Table 1. The reference point-based algorithm is embarrassingly parallel, and thus easily implemented on a GPU; its calculation time is proportional to the number of points, which in turn depends on the number of line segments and their lengths. This explains why “Seigaiha”, with more line elements, takes about half a second, roughly 3 times longer than “Simple shapes”.
On the other hand, the FFT-based algorithm’s time will mostly depend on the number of required FFT operations, proportional to the number of depth planes. Unlike all other tested algorithms, this one does not scale linearly with the hologram resolution.
The CG line method was implemented in a multi-threaded CPU environment. A direct GPU implementation of this algorithm is not straightforward, as the drawn line buffers are self-intersecting in the hologram plane, introducing dependencies and requiring more complex memory access patterns. A successful GPU implementation may yield further significant speedups.
The proposed analytical method can compute pixel values independently, making it highly suitable for GPUs: all threads can execute the same operations with built-in trigonometric and Bessel function operations. A functional but non-optimized CPU implementation is also provided for reference. The calculation time is proportional to the number of line segments, yet independent of their parameters. We report calculation times of 2 ms and 21 ms for “Simple shapes” and “Seigaiha” respectively, ranging from 20x to over 50x faster than the point-based CGH reference algorithm.
Furthermore, we want to quantify and compare the numerical accuracy of the different algorithms on the tested data sets. We utilize two quality metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Both metrics require a reference signal $H$, measuring the degree by which an approximated signal $\hat {H}$ differs from the reference. The general definition of the PSNR for a real- / complex-valued signal $H$ is
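A minimal sketch of such a PSNR computation for complex-valued signals, under one common convention (peak taken as $\max_i |H_i|$ of the reference; the paper's exact definition of the peak value may differ):

```cpp
#include <vector>
#include <complex>
#include <cmath>
#include <algorithm>

// PSNR for a complex-valued reference H vs. an approximation Hhat:
// 10*log10(peak^2 / MSE), where MSE is the mean squared error modulus
// and peak = max |H_i| over the reference signal.
double psnr(const std::vector<std::complex<double>>& H,
            const std::vector<std::complex<double>>& Hhat) {
    double mse = 0.0, peak = 0.0;
    for (std::size_t i = 0; i < H.size(); ++i) {
        mse += std::norm(H[i] - Hhat[i]);       // |H_i - Hhat_i|^2
        peak = std::max(peak, std::abs(H[i]));
    }
    mse /= static_cast<double>(H.size());
    return 10.0 * std::log10(peak * peak / mse);
}
```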
For the SSIM we use the definition found in [26], which has been shown to significantly outperform PSNR for measuring perceptual image quality. We report both metrics on the numerically backpropagated hologram amplitude at multiple depth planes where some of the objects are in focus. This is done for the proposed analytical CGH algorithm, the FFT-based and the CG line CGH algorithms, using the point-based CGH as a reference. The results are reported in Table 2 for “Simple shapes” and in Table 3 for “Seigaiha”.
The proposed algorithm outperforms the CG line algorithm on average by 3.4 dB PSNR (and 0.27 SSIM units) for “Simple shapes” and by 16.1 dB PSNR (and 0.74 SSIM units) for “Seigaiha”. This gain difference is likely attributable to the stronger approximation of the CG line method at line edges, which are more prominent in “Seigaiha”. The FFT-based algorithm performs slightly better than the CG line algorithm overall. However, note that the current version of the FFT-based algorithm draws 1-pixel-thick lines using nearest pixel neighbors; the quality may be improved considerably by using anti-aliasing. Moreover, note that the analytical method, unlike the other methods, requires a decomposition of curves into lines and arcs, which may affect the resulting curve shape depending on the chosen approximation [22].
Finally, we want to confirm that the proposed analytical algorithm has consistently better quality for any arc radius, not just for specific cases. To do so, we measure the approximation error using the normalized mean-square error (NMSE)
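Assuming the standard definition $\mathrm{NMSE} = \sum_i |H_i - \hat{H}_i|^2 / \sum_i |H_i|^2$ (the paper's exact normalization is not reproduced here), a minimal implementation reads:

```cpp
#include <vector>
#include <complex>

// Normalized mean-square error between a complex reference H and an
// approximation Hhat: sum |H - Hhat|^2 / sum |H|^2 (standard convention).
double nmse(const std::vector<std::complex<double>>& H,
            const std::vector<std::complex<double>>& Hhat) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < H.size(); ++i) {
        num += std::norm(H[i] - Hhat[i]);
        den += std::norm(H[i]);
    }
    return num / den;
}
```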
4.2 Optical experiments
We also verify the proposed algorithm using optically reconstructed images of the tested holograms. We used a phase-modulation-type SLM (Holoeye Photonics AG, “PLUTO”) and a green laser with a 532 nm wavelength (Showa Optronics, “J150GS”, Japan), cf. Fig. 6.
The numerical and optical reconstructions of the “Simple shapes” and “Seigaiha” data sets are shown in Tables 4 and 5, respectively. For “Simple shapes”, we show cropped, zoomed-in images of the holograms brought into focus at different depth planes. The numerical reconstructions look largely the same for all three methods, but there is some intensity loss at the lines’ edges for the CG line method, as previously observed in [21]. The optical reconstructions mirror the numerical reconstructions quite well. For “Seigaiha”, we show reconstructions of the full hologram refocused in the middle of the virtual depth volume at 0.5 m. The analytical method’s reconstructions resemble the reference point-based CGH more closely than those of the CG line method, whose edge-darkening effect is visible just like in the other data set.
The visible noise in the optical reconstructions is due to several factors: the zero-order diffraction light, the lack of amplitude modulation in the SLM, and the partial visibility of objects in other focal planes due to their long depths of field. This effect could be mitigated by, among other measures, using a 4f optical system with a spatial filter, using SLMs with higher resolutions and richer modulation capabilities, time-multiplexing, or rapidly changing the relative phase values of the different geometric shapes over time.
5. Conclusion
We propose a novel algorithm for the fast computation of CGH for line-drawn objects. We analytically derive the solution with incomplete cylindrical functions and simplify the expressions to an efficiently computable approximation suitable for GPU implementations. We report speedups of 1 to 2 orders of magnitude and higher accuracy than previous solutions, and validate the rendered CGH in a holographic display setup. This algorithm enables the real-time generation of line-drawn CGH, opening the door to novel display applications in digital holography. Future work may include adding phase modulation to the line segments, supporting different curve primitives, and extending to 3D curves.
Funding
Fonds Wetenschappelijk Onderzoek (12ZQ220N, VS07820N); Kenjiro Takayanagi Foundation; Inoue Foundation for Science; Japan Society for the Promotion of Science (20K19810).
Disclosures
The authors declare no conflicts of interest.
References
1. F. Yaras, H. Kang, and L. Onural, “State of the art in holographic displays: A survey,” J. Disp. Technol. 6(10), 443–454 (2010). [CrossRef]
2. T. Dresel, M. Beyerlein, and J. Schwider, “Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption,” Appl. Opt. 35(35), 6865–6874 (1996). [CrossRef]
3. J. H. Burge, “Applications of computer-generated holograms for interferometric measurement of large aspheric optics,” in International Conference on Optical Fabrication and Testing, vol. 2576, T. Kasai, ed. (SPIE, 1995), pp. 258–269.
4. D. Blinder, A. Ahar, S. Bettens, T. Birnbaum, A. Symeonidou, H. Ottevaere, C. Schretter, and P. Schelkens, “Signal processing challenges for digital holographic video display systems,” Signal Process. Image Commun. 70, 114–130 (2019). [CrossRef]
5. J.-H. Park, “Recent progress in computer-generated holography for three-dimensional scenes,” J. Inf. Displ. 18(1), 1–12 (2017). [CrossRef]
6. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]
7. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]
8. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19(16), 15205–15211 (2011). [CrossRef]
9. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). [CrossRef]
10. Y. Yamamoto, H. Nakayama, N. Takada, T. Nishitsuji, T. Sugie, T. Kakue, T. Shimobaba, and T. Ito, “Large-scale electroholography by horn-8 from a point-cloud model with 400,000 points,” Opt. Express 26(26), 34259–34265 (2018). [CrossRef]
11. D. Blinder, “Direct calculation of computer-generated holograms in sparse bases,” Opt. Express 27(16), 23124–23137 (2019). [CrossRef]
12. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]
13. A. Symeonidou, D. Blinder, and P. Schelkens, “Colour computer-generated holography for point clouds utilizing the phong illumination model,” Opt. Express 26(8), 10282–10298 (2018). [CrossRef]
14. A. Gilles and P. Gioia, “Real-time layer-based computer-generated hologram calculation for the fourier transform optical system,” Appl. Opt. 57(29), 8508–8517 (2018). [CrossRef]
15. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008). [CrossRef]
16. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]
17. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56(13), F37–F44 (2017). [CrossRef]
18. T. Yatagai, “Stereoscopic approach to 3-d display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976). [CrossRef]
19. K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express 21(19), 21811–21822 (2013). [CrossRef]
20. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23(4), 3901–3913 (2015). [CrossRef]
21. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram of line-drawn objects without fft,” Opt. Express 28(11), 15907–15924 (2020). [CrossRef]
22. D. Meek and D. Walton, “Approximating smooth planar curves by arc splines,” J. Comput. Appl. Math. 59(2), 221–231 (1995). [CrossRef]
23. G. N. Watson, A treatise on the theory of Bessel functions (MacMillan, 1945).
24. M. M. Agrest, M. S. Maksimov, H. E. Fettis, J. Goresh, and D. Lee, Theory of incomplete cylindrical functions and their applications, vol. 160 (Springer, 1971).
25. H. Nagaoka, “Diffraction phenomena produced by an aperture on a curved surface,” J. Coll. Sci. Imp. Univ. Jpn., pp. 301–322 (1891).
26. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]