Abstract
We have developed a full analytical method with texture mapping for polygon-based computer-generated holography. A parallel planar projection mapping for holographic rendering is derived along with affine transformation and self-similar segmentation. Based on this method, we further propose a parallelogram-approximation to reduce the number of polygons used in the polygon-based technique. We demonstrate that the overall method reduces the computational effort by 50% as compared with an existing method, without sacrificing reconstruction quality in high-precision rendering of complex textures. Numerical and optical reconstructions confirm the effectiveness of the overall scheme.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Polygon-based computer-generated holography (CGH) is a practical technique for obtaining computer-generated holograms, which can reproduce the light field distribution of three-dimensional (3D) objects. The method divides any 3D object into polygons, and then superimposes the diffracted light field distribution of each polygon on the hologram plane. Since the number of polygons is much smaller than the number of point sources in point-based holography, the amount of computation is greatly reduced. There are two classical polygon-based CGH methods [1]: the FFT-based or the traditional method [2–7], and the analytical method based on the affine transformation [8–13]. In the former case, we need to perform discrete 2D Fourier transform on each polygon, and then resample the spectrum on the plane that is parallel to the hologram plane. In the latter case, the spectrum of each polygon is obtained via affine transformation analytically, and sampling only needs to be performed on a plane that is parallel to the hologram plane [12,14]. Since the FFT method requires pre-generated images of the individual polygons that make up the 3D object, it is only necessary to change the amplitude of the generated polygon images when adding rendering information such as textures and illumination [5,6].
While the analytical method is in principle faster than the FFT-based method, texture rendering appears to be its computational bottleneck. Texture mapping is a technique to improve the quality of computer images. In computer graphics, any 3D object is usually composed of polygons, and texture mapping can generate a non-constant amplitude inside each polygon, improving image quality and the creation of realistic 3D images without increasing the number of polygons used for the 3D object. In recent years, some texture mappings have been applied to fully analytical polygon-based methods. For example, Kim et al. [9] have employed for the first time a self-similar partitioning scheme in a fully analytical method, where a single polygon is partitioned into sub-triangles of similar shape with different constant amplitudes, adding rendering information while maintaining the principle of a fully analytical framework. They have shown the feasibility of the proposed scheme for texture mapping, but have not demonstrated their idea concretely through experiments and simulations. Lee et al. [15] have proposed representing texture functions using Fourier series; the texture functions are then convolved with the spectrum of the polygon, which avoids generating a large number of sub-triangles as compared with the method of Kim et al. They have produced some improvement in the efficiency of texture rendering. However, this scheme requires the texture information to be periodic, as it is generated through a Fourier series. Ji et al. [16] have introduced the concept of texture mapping in a spherical polar coordinate system and improved the segmentation step based on Kim et al.'s self-similar segmentation scheme [9]. In addition, they have proposed an adaptive segmentation method, which reduces the number of sub-triangles needed.
Their method is not limited by whether the texture information is periodic, while maintaining the fully analytical framework. However, this scheme has difficulty working effectively in high-precision rendering of complex textures, and its efficiency varies with the texture information, making it less stable than that of Kim et al.
Recently we have proposed the use of planar projection texture mapping to generate a full affine-transformed computer-generated hologram [17]. Here, we report a full version of the proposed technique along with additional numerical and optical reconstructions to substantiate the proposed idea. A self-similar segmentation scheme proposed by Kim et al. is employed to add texture information to the object's light field distribution along with our proposed parallel planar texture mapping. We further propose a parallelogram-approximation within the framework of the self-similar segmentation scheme and affine transformation to replace most of the sub-triangles with sub-parallelograms. The results indicate a steady reduction of the computational effort by almost 50% under the premise of high-precision texture rendering. In Section 2, we develop the theory of the polygon-based method based on affine transformation. In Section 3, we discuss texture mapping through the use of self-similar segmentation for a full analytical formalism. In Section 4, we show numerical and optical reconstructions to verify the usefulness of our proposed technique. Finally, in Section 5, we make some concluding remarks.
2. Theory of full analytical polygon-based method based on affine transformation
In the polygon-based method, the total light field distribution ${U_O}({x,y} )$ of any object composed of polygons (we use triangles in our development) on the hologram plane is the superposition of the polygon fields ${U_i}({x,y} )$ of the $n$ arbitrarily tilted triangular apertures that make up the object (see Fig. 1):
Equation (1) can be rewritten as
An affine transform maps input coordinates $({x_p},{y_p}) $ into output coordinates $({x,y,z} )$ according to
As shown in Fig. 2, ${\mathrm{\Psi }_i}$ enclosed by the unit triangle is mapped to the area ${\mathrm{\Phi }_i}$ enclosed by the arbitrary triangle under the linear mapping of coordinate systems from $({{x_p},{y_p}} )$ to $({x,y,z} )$, so that the surface function $sur{f_i}({{x_p},{y_p}} )$ can be replaced by $sur{f_p}({{x_p},{y_p}} )$ in Eq. (9).
By re-arranging Eq. (9), we get
Now, the spectrum of the unit triangle can be evaluated analytically as follows:
Comparing Eq. (10) and Eq. (11), we have
We have just obtained the spectrum ${A_i}({{f_x},{f_y}} )$ of any triangle indirectly represented by the spectrum ${A_p}({{f_{xM}},{f_{yM}}} )$ of the unit triangle. By substituting Eq. (2) into Eq. (12), we can get the light field distribution ${U_O}({x,y} )$ analytically:

3. Texture mapping
The mathematical expression of the light field distribution of an object consisting of triangles in the full analytical polygon-based method has been obtained in Eq. (13). However, the light field distribution described by Eq. (13) is composed of triangular apertures with a constant surface function given by Eq. (5), which makes the reconstruction appear monotonous and far from satisfying the needs of practical 3D display. Therefore, it is necessary to perform additional rendering of the field distribution ${U_O}({x,y} )$ in order to pursue a better reconstruction of the object field; among the available options, rendering the object with texture mapping is an effective method to improve the reconstructed 3D object.
3.1 Texture mapping based on planar projection mapping
Texture is an important part of the object information. In computer graphics, this information is usually stored in a 2D Cartesian coordinates $({{x_t},{y_t}} )$, which is commonly referred to as texture coordinates, and the texture function $T({{x_t},{y_t}} )$ is assigned to object point ${P_O}({x,y,z} )$ through the mapping relationship between coordinate system $({{x_t},{y_t}} )$ and coordinate system $({x,y,z} )$. This process is reflected in the polygon-based method by mapping the texture function $T({{x_t},{y_t}} )$ to the surface function of each arbitrary triangle $sur{f_i}({x,y;z} )$. To complete the above operations, it is necessary to determine the corresponding relationship between the two coordinate systems $({{x_t},{y_t}} )$ and $({x,y,z} )$.
Our scheme of projected texture mapping is shown in Fig. 3. First, we set the position of a light point source $L({{x_L},{y_L},{z_L}} )$ and the texture plane location ${z_t}$; ${z_t}$ is a constant that sets the location of the texture, assuming the hologram is located in the $z = 0$ plane. To facilitate the calculation, the light point source $L({{x_L},{y_L},{z_L}} )$ is located at $({0,0,{z_L}} )$. For any point ${P_O}({x,y,z} )$ on the object, we have $d = |{z - {z_L}} |$, ${d_t} = |{{z_t} - {z_L}} |$, and ${d_t} = d - {d_o}$. Using similar triangles from the figure, we have $d/{d_t} = x/{x_t} = y/{y_t}$. We can then finally write the perspective projection geometry relationship as follows:
As it turns out, perspective projection can be regarded as a generalization of orthogonal projection: when the position ${z_L}$ of the point light source tends to infinity, ${z_L} \to \infty$, the mapping reduces to an orthogonal projection (see Fig. 3)
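As an aside, the projection geometry above can be sketched numerically. The function names and sample coordinates below are our own; the sketch simply encodes the similar-triangle relation $d/{d_t} = x/{x_t} = y/{y_t}$ for a source on the $z$-axis, and its orthogonal limit as ${z_L} \to \infty$:

```python
def perspective_project(x, y, z, z_L, z_t):
    """Map an object point (x, y, z) to texture coordinates (x_t, y_t)
    by projecting from a point source at (0, 0, z_L) onto the texture
    plane z = z_t, via the similar-triangle relation d/d_t = x/x_t = y/y_t."""
    d = abs(z - z_L)      # source-to-object distance along z
    d_t = abs(z_t - z_L)  # source-to-texture-plane distance along z
    return x * d_t / d, y * d_t / d

def orthogonal_project(x, y):
    """Limit z_L -> infinity: the rays become parallel to the z-axis,
    so the texture coordinates equal the object's transverse coordinates."""
    return x, y

# As z_L grows, the perspective result approaches the orthogonal one.
for z_L in (100.0, 1e4, 1e8):
    print(z_L, perspective_project(1.0, 2.0, 5.0, z_L, 10.0))
```

The loop illustrates the limiting behavior only; in an actual rendering pipeline this mapping would be applied per object point (or per sub-polygon barycenter, as in Section 3.2).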
For each arbitrary triangle under the coordinate system $({x,y,z} )$, there is a texture triangle corresponding to it under the texture coordinate system $({{x_t},{y_t}} )$, and both meet the mapping relationship in Eq. (14). The purpose of surface function mapping is to replace the constant surface function of any triangle originally described by Eq. (3) with the texture function in the interval ${\mathrm{\Omega }_i}$ enclosed by the corresponding texture triangle, as shown in Fig. 4.
According to the mapping result given by Eq. (14), the surface function with projection texture information of arbitrary triangle $sur{f_i}({x,y;z} )$ can be rewritten as:
3.2 Full analytical polygon-based method rendering based on self-similar segmentation
In the previous subsection, we have succeeded in replacing the surface function of an arbitrary triangle by its corresponding texture function $T$ through the projection mapping relation of Eq. (14). Theoretically, we only need to replace $sur{f_i}({x,y;z} )$ in Eq. (3) with $sur{f_{Ti}}({x,y;z} )$ to reconstruct the object light field with texture information. Kim et al. [9] first proposed partitioning an arbitrary triangle into sub-triangles similar to itself and assigning a different constant surface function to each sub-triangle that constitutes the arbitrary triangle, in order to add rendering information while maintaining a fully analytical framework. Their work has only been applied to occlusion issues and random phase addition during CGH generation. In our work, we also employ their idea of dividing each arbitrary triangle into smaller similar triangles with uniform amplitudes according to the texture mapping function, but our approach achieves texture rendering through the division of the unit triangle along with affine transformation. We call this method self-similar segmentation.
As shown in Fig. 5(a), $M$ is the degree of sub-division. The unit sides of the unit triangle are divided into segments of length $1/M$, giving ${M^2}$ self-similar sub-triangles. We also denote $({m,n} )$ as an index value along the two unit lengths; the index value describes the position of the sub-triangle. There are Forward sub-triangles, ${\mathrm{\Psi }_{F,i}}({m,n} )$, and Reverse sub-triangles, ${\mathrm{\Psi }_{R,i}}({m,n} )$, as shown in Fig. 5(a), where ${\mathrm{\Psi }_{F,i}}$ and ${\mathrm{\Psi }_{R,i}}$ are the areas enclosed by the sub-triangles. For example, ${\mathrm{\Psi }_{F,i}}({0,0} )$ and ${\mathrm{\Psi }_{R,i}}({0,0} )$ denote the red and blue sub-triangles within the unit triangle. Note that the index values satisfy $m + n \le M - 1$ for ${\mathrm{\Psi }_{F,i}}({m,n} )$ and $m + n \le M - 2$ for ${\mathrm{\Psi }_{R,i}}({m,n} ).$ According to these guidelines, for example, when $M = 4$, we have ten ${\mathrm{\Psi }_{F,i}}({m,n} )$'s and six ${\mathrm{\Psi }_{R,i}}({m,n} )$'s, as illustrated in Fig. 5(a). Through affine transformation, ${\mathrm{\Psi }_{F,i}}({m,n} )$ and ${\mathrm{\Psi }_{R,i}}({m,n} )$ map into ${\mathrm{\Phi }_{F,i}}({m,n} )$ and ${\mathrm{\Phi }_{R,i}}({m,n} )$, respectively, within the ${i^{th}}$ arbitrary triangle, as shown in Fig. 5(b).
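As a check on the indexing rules above, a short sketch (function and variable names are our own) enumerates the Forward and Reverse index pairs and confirms that together they tile the unit triangle with $M^2$ sub-triangles:

```python
def subtriangle_indices(M):
    """Enumerate the index pairs (m, n) of the Forward and Reverse
    self-similar sub-triangles for sub-division degree M.
    Forward: m + n <= M - 1;  Reverse: m + n <= M - 2."""
    forward = [(m, n) for m in range(M) for n in range(M) if m + n <= M - 1]
    reverse = [(m, n) for m in range(M) for n in range(M) if m + n <= M - 2]
    return forward, reverse

F, R = subtriangle_indices(4)
print(len(F), len(R), len(F) + len(R))  # 10 6 16 -- total equals M**2
```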
In order to maintain a full analytical framework, we need to find the spectrum of the sub-triangles ${\mathrm{\Psi }_{F,i}}({m,n} )$ and ${\mathrm{\Psi }_{R,i}}({m,n} )$. Assuming the surface functions of the Forward and Reverse sub-triangles are $sur{f_{p,F}}({{x_p},{y_p},m,n} )$ and $sur{f_{p,R}}({{x_p},{y_p},m,n} ) $ as follows:
In principle, we just need to find the spectrum of ${\mathrm{\Psi }_{F,i}}({0,0} )$ analytically; the spectra of the other self-similar sub-triangles can then be found simply through the operations of translation and rotation. For a general index value $({m,n} )$, and with the help of Fig. 6(a) and Fig. 6(b) to establish the integral limits, we can find the spectra of ${\mathrm{\Psi }_{F,i}}({m,n} )$ and ${\mathrm{\Psi }_{R,i}}({m,n} )$ with rendering information, respectively, as follows:
Neglecting any constants and summing all the spectra of the Forward and Reverse sub-triangles using Eqs. (17) and (18), we obtain the spectrum of the unit triangle with rendering information $ren{d_F}({m,n} )$ and $ren{d_R}({m,n} ):$
Since the method uses a self-similar set of sub-triangles instead of using the whole unit triangle, we can consider that it is equivalent to sampling the texture function T by sub-triangles during the texture mapping process. Figure 7 shows the coordinates of the vertices of $({m,n} )$ sub-triangles (Forward in red and Reverse in blue) and their corresponding barycenters, ${B_{p,F}} = [{({3m + 1} )/3M,({3n + 1} )/3M} ]$, ${B_{p,R}} = [{({3m + 2} )/3M,({3n + 2} )/3M} ]$, respectively.
Through affine transformation [See Eq. (6)], we can find the barycenters in the arbitrary triangle, ${B_F}({m,n} ) $ and ${B_R}({m,n} )$, respectively as follows:
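The barycenter sampling above can be sketched as follows. The affine matrix $A$ and offset $t$ below stand in for the triangle-dependent mapping of Eq. (6); their numeric values (and all names) are hypothetical, chosen only to illustrate how the unit-triangle barycenters would be carried into an arbitrary triangle for texture lookup:

```python
import numpy as np

def barycenters_unit(M):
    """Barycenters of the (m, n) Forward/Reverse sub-triangles in the
    unit-triangle (x_p, y_p) coordinates:
    B_F = ((3m+1)/3M, (3n+1)/3M),  B_R = ((3m+2)/3M, (3n+2)/3M)."""
    bF = [((3 * m + 1) / (3 * M), (3 * n + 1) / (3 * M))
          for m in range(M) for n in range(M) if m + n <= M - 1]
    bR = [((3 * m + 2) / (3 * M), (3 * n + 2) / (3 * M))
          for m in range(M) for n in range(M) if m + n <= M - 2]
    return bF, bR

def affine_map(p, A, t):
    """Linear map from unit-triangle coordinates to an arbitrary
    triangle: (x, y) = A @ (x_p, y_p) + t, with A and t built from the
    triangle's vertices (hypothetical values used below)."""
    return A @ np.asarray(p) + t

# Hypothetical triangle with vertices t, t + A[:, 0], t + A[:, 1].
A = np.array([[2.0, 0.5], [0.0, 1.5]])
t = np.array([1.0, 1.0])
bF, bR = barycenters_unit(2)
mapped = [affine_map(p, A, t) for p in bF + bR]
```

The texture function would then be sampled at each mapped barycenter to assign the constant amplitude of the corresponding sub-triangle.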
So far, we have obtained the result of adding texture information to the object light field using Kim et al.'s [9] segmentation method but through an affine transformation for a full analytical formalism. However, the computation time grows rapidly as $M$ increases. To solve this problem, we propose a self-similar segmentation method using a parallelogram-approximation.
3.3 Full analytical polygon-based method rendering based on self-similar segmentation using a parallelogram-approximation
As shown in Fig. 9, the two sub-triangles ${\mathrm{\Psi }_{F,i}}({m,n} )$ and ${\mathrm{\Psi }_{R,i}}({m,n} )$ are replaced by a parallelogram ${\mathrm{\Theta }_i}({m,n} )$, where ${\mathrm{\Theta }_i}({m,n} )$ is the area enclosed by the sub-parallelogram. So, we have ${\mathrm{\Psi }_{F,i}}({m,n} )+ {\mathrm{\Psi }_{R,i}}({m,n} )= {\mathrm{\Theta }_i}({m,n} ).$ This step allows us to reduce the number of polygons. After the affine transform using Eq. (8), i.e., the linear mapping shown in the figure, ${\mathrm{\Phi }_{F,i}}({m,n} )$ and ${\mathrm{\Phi }_{R,i}}({m,n} )$ become ${\mathrm{\Xi }_i}({m,n} )$ in the arbitrary triangle, an area enclosed by the sub-parallelogram in the arbitrary triangle. Hence, we have ${\mathrm{\Phi }_{F,i}}({m,n} )+ {\mathrm{\Phi }_{R,i}}({m,n} )= {\mathrm{\Xi }_i}({m,n} ).$ In general, there are a total of $M$ remaining Forward sub-triangles and $({M - 1} )\cdot M/2$ sub-parallelograms. As before, $({m,n} )$ can be used to describe the relative positions of the sub-parallelograms. The index values satisfy $m + n \le M - 2$ for the sub-parallelograms and $m + n = M - 1$ for the remaining Forward sub-triangles.
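The polygon counts above can be verified with a few lines (the helper name is ours); the ratio of the two counts tends to one half as $M$ grows:

```python
def subpolygon_counts(M):
    """Sub-polygon counts for sub-division degree M.
    Kim et al.'s scheme: M**2 sub-triangles.
    Parallelogram-approximation: each Forward/Reverse pair with
    m + n <= M - 2 merges into one parallelogram, while the M Forward
    sub-triangles with m + n == M - 1 remain."""
    kim = M * M
    ours = (M - 1) * M // 2 + M
    return kim, ours

for M in (4, 16, 64):
    kim, ours = subpolygon_counts(M)
    print(M, kim, ours, ours / kim)  # ratio approaches 0.5 for large M
```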
The expression for the spectrum of the Forward sub-triangles ${A_{p,T,F}}({{f_x}^\mathrm{^{\prime}},{f_y}^\mathrm{^{\prime}},m,n} )$ has been obtained in Eq. (17), and so we only need the spectrum of the sub-parallelograms to complete the full analytical formalism.
Similar to that shown in Fig. 6, in Fig. 10, we show the boundaries of the sub-parallelogram for integration to obtain the spectrum.
We denote the surface function of the sub-parallelogram as $sur{f_{p,P}}({{x_p},{y_p},m,n} )$, and its constant amplitude will be replaced with the corresponding rendering information $ren{d_P}({m,n} )$ as
4. Simulations and experimental results
4.1 Numerical reconstruction of light field without texture information
We first use the complex object “Alex”, which is composed of a triangular mesh as shown in Fig. 11(a), as the data source to verify the validity of the mathematical expression of the object light field derived from linear mapping given by Eq. (13). The numerical simulations here use the Lambert diffuse reflection illumination model [21] to add shading information to the light field to enrich the appearance of the reconstructed image, and the result is shown in Fig. 11(b).
The Lambert illumination model assumes that a parallel beam of light from a given direction, after illuminating the object, produces diffuse-reflection shading whose intensity does not change with the observation direction. This shading information is obtained from the dot product of the unit normal vector of the $i$-th polygon, ${\hat{n}_{pi}}$, and the unit vector of the illumination direction, ${\hat{n}_L}$, which is added as a constant to Eq. (13) for each polygon of the 3D object. It is worth noting that Eq. (13) is the complex amplitude distribution of the light field, and so this intensity constant for each polygon should be
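To make the last point concrete, a minimal sketch (the helper name is our own) of the per-polygon shading constant: the Lambert dot product gives an intensity, so the constant applied to the complex amplitude of Eq. (13) is its square root, with back-facing polygons clamped to zero:

```python
import numpy as np

def lambert_amplitude(n_poly, n_light):
    """Per-polygon amplitude constant for Lambert diffuse shading.
    The Lambert intensity is the dot product of the polygon's unit
    normal and the unit illumination direction; since Eq. (13) is a
    complex *amplitude*, the applied constant is the square root of
    that intensity."""
    n_poly = n_poly / np.linalg.norm(n_poly)
    n_light = n_light / np.linalg.norm(n_light)
    intensity = max(0.0, float(np.dot(n_poly, n_light)))  # clamp back faces
    return np.sqrt(intensity)

a = lambert_amplitude(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```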
For the numerical simulations, we have generated holograms with a resolution of 1024 × 1024 pixels, a pixel pitch of 8 $\mu m$, and a wavelength of 0.532 $\mu m$. The size of the object is 5.947 $mm \times$ 3.234 $mm \times$ 7.854 $mm\;({L \times W \times H} )$, and the hardware includes a CPU (Intel Core i5-4590 @ 3.3 GHz), a GPU (NVIDIA GTX 950), and 8 GB of RAM under the environment of MATLAB 2019b. Figure 11(b) proves the validity of the light field calculation formula of Eq. (13). Most importantly, we can observe that the numerical reconstruction obtained by this formula has no visible black lines, which typically result from the joining of the polygons in the original 3D data set.
4.2 Comparison of computational complexity and similarity between two different segmentation methods
In modern computer graphics, in order to make 3D objects easy to handle in subsequent texture rendering, it is common to use the technology of mesh regularization (so called remeshing) [25] so that the polygons that make up the object are similar in size. The polygon model “Alex” also goes through this process. This allows the algorithm proposed in the present paper to set sub-division level M as a constant.
Since Kim et al.'s segmentation method uses sub-triangles of equal size, the number of sub-triangles constituting an arbitrary triangle is ${M^2}$. Our proposed scheme replaces most sub-triangles with parallelograms, and so the number of sub-polygons constituting a triangle is $({M - 1} )\cdot M/2 + M$, as previously described. Figure 12(a) plots the number of sub-polygons as a function of $M$ for the two methods for the 3D object “Alex.” In high-precision reconstruction with a high degree of subdivision $M$, our proposed method can steadily reduce the computational effort by nearly 50% as compared with that obtained from Kim et al.'s. Figure 12(b) compares the calculation times for the two methods. We also see that our proposed method is more computationally efficient by a factor of two.
In the next numerical simulations, we have used the complex texture shown in Fig. 13(a) to render the light field distribution of “Alex” with the two self-similar segmentation methods. The size of the texture is 5.183$\textrm{mm} \times $ 5.183$\textrm{mm}\; \textrm{with}\; {\textrm{z}_\textrm{t}} = $17.278$\textrm{mm }$ and $({{x_L},{y_L},{z_L}} )= ({0,0,32.9406} )\textrm{mm}.$ We have calculated the numerical reconstruction of the two algorithms for M = 4, 8 and 16. We also compare the two results using structural similarity (SSIM).
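The SSIM comparison can be sketched as follows. Note that this is a simplified, single-window version of SSIM computed over the whole image (the standard metric averages over local windows), with constants and names of our own choosing:

```python
import numpy as np

def global_ssim(a, b, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two equally sized reconstructions.
    L is the dynamic range of the data; k1, k2 are the usual SSIM
    stabilization constants."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(global_ssim(img, img))  # identical images give SSIM of (about) 1.0
```

Two reconstructions that approach each other, as in Fig. 14, would drive this value toward 1.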
As shown in Fig. 14, the reconstruction results of the two methods approach each other as the subdivision accuracy increases, based on the value of SSIM. This means that our proposed method can reduce the computational effort without changing the reconstruction quality, even with rendering. In order to ensure an accurate representation of surface morphological features in a 3D scene, it is usually necessary to have a larger number of polygons with smaller areas to describe the corresponding 3D scene.
A 3D object described by many polygons constitutes a complex model. Since the polygons that make up a complex model are smaller, the value of $M$ required to achieve the same texture rendering is smaller than that of a simple model with only a few polygons. In other words, rendering complicated texture information on a simple model requires a higher $M$ to guarantee the number of equivalent sampling points required in the rendering process, so that the texture information is rendered as distortion-free as possible. Hence, a complex model and a low-polygon model have about the same efficiency. It is worth noting that the acceleration algorithm proposed in this paper is only applicable to the high-precision rendering case, when the subdivision degree $M$ is large enough; when the sub-division degree is insufficient, the parallelogram-approximation acceleration algorithm will further deteriorate the reconstruction results.
As shown in Fig. 15 for a simple plane with texture, the reconstruction results of texture rendering by self-similar segmentation are unsatisfactory due to the insufficient sub-division degree M, e.g., M = 50. The objective evaluation results of SSIM indicate that the reconstruction results are further deteriorated by parallelogram approximation in the case of insufficient M. To ensure a high SSIM value, simple models such as a simple plane also require a larger M value to ensure rendering accuracy, which will result in a large number of sub-triangles. These conclusions apply to the parallelogram approximation used in the present investigation.
However, in practical calculations, we usually encounter 3D objects composed of polygons of different sizes; when reconstructing the light field of such models by the full analytical texture rendering polygon-based algorithm with self-similar segmentation, the larger polygons usually require a higher degree of subdivision to maintain the same rendering accuracy as the smaller polygons. If we use a constant degree of subdivision M for all triangles for texture rendering, the final result will either lead to insufficient rendering accuracy for the larger triangles (as M is not large enough) but too many subdivisions for the smaller triangles. To solve this problem, one can employ a processing method that adaptively selects different values of the degree of sub-division M based on the area of polygons of different sizes, for example. This is an area of research worth investigating in the future.
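One possible rule for such adaptive selection, scaling $M$ with the square root of the polygon area so that the sub-triangle size stays roughly constant across the mesh, could look like the following sketch. This is a hypothetical heuristic of our own, not part of the method presented above:

```python
import math

def adaptive_M(area, ref_area, M_ref, M_min=2, M_max=64):
    """Hypothetical rule: choose the sub-division degree so that the
    linear size of the sub-triangles is roughly constant, i.e. M scales
    with the square root of the polygon's area relative to a reference
    polygon of area ref_area, which is assigned degree M_ref."""
    M = round(M_ref * math.sqrt(area / ref_area))
    return max(M_min, min(M_max, M))

# A polygon with 4x the reference area gets twice the sub-division degree.
print(adaptive_M(4.0, 1.0, 8))  # 16
```

The clamping bounds `M_min` and `M_max` would guard against degenerate tiny polygons and runaway subdivision of very large ones.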
4.3 Optical reconstruction
We have rendered the light field distribution of “Alex” with different texture mappings and performed optical reconstruction. The spatial light modulator (SLM) is a HOLOEYE PLUTO2 (NIR-011) with a resolution of 1920 × 1080 (Full HD 1080p) and a pixel pitch of 8 $\mu m$; the active area is 15.36 mm × 8.64 mm. The wavelength of the light emitted by the laser is 0.532 $\mu m$. The image receiver is an MMRY UC900C charge-coupled device (CCD) camera. The schematic diagram of the optical setup is shown in Fig. 16. The light beam from the laser is first expanded by the beam expander (BE) and then collimated by the collimator lenses (CL). The polarizing beam splitter (PBS) allows the light field to be phase-modulated by the spatial light modulator (SLM), which reconstructs the hologram.
The reconstructed image is to be imaged by the lens to the charge-coupled device (CCD). The screen (S) is used to block any unwanted light. The results of numerical and optical reconstructions are shown in Fig. 17.
5. Concluding remarks
We have developed a full analytical polygon-based holographic generation method with texture rendering. A self-similar segmentation method is proposed along with an affine transformation. We have also proposed a texture mapping based on planar projection mapping for the full analytical polygon-based method and a self-similar segmentation method using a parallelogram-approximation. We have compared our method with that of Kim et al., and our method has been shown to steadily reduce the computational effort by nearly 50% in high-precision texture rendering. In addition, the proposed method is more computationally efficient by a factor of two. The proposed method is validated by simulations and optical experiments. We believe the proposed method also has great potential for subsequent adaptive segmentation.
Funding
National Natural Science Foundation of China (62275113); Yunnan Provincial Science and Technology Department (Xing Dian Talent Support Program); Yunnan Provincial Science and Technology Department (202201AU070159).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. Y. Zhang and T.-C. Poon, Modern Information Optics with MATLAB, Cambridge University, Cambridge, U.K. (2023).
2. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). [CrossRef]
3. Y. Pan, Y. Wang, J. Liu, X. Li, J. Jia, and Z. Zhang, “Analytical brightness compensation algorithm for traditional polygon-based method in computer-generated holography,” Appl. Opt. 52(18), 4391–4399 (2013). [CrossRef]
4. H. Nishi, K. Matsushima, and S. Nakahara, “Rendering of specular surfaces in polygon-based computer-generated holograms,” Appl. Opt. 50(34), H245–H252 (2011). [CrossRef]
5. K. Matsushima, “Formulation of the rotational transformation of wave fields and their application to digital holography,” Appl. Opt. 47(19), D110–D116 (2008). [CrossRef]
6. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44(22), 4607–4614 (2005). [CrossRef]
7. K. Matsushima, H. Nishi, and S. Nakahara, “Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography,” J. Electron. Imag. 21(2), 1 (2012). [CrossRef]
8. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, “Computer generated holograms from three dimensional meshes using an analytic light transport model,” Appl. Opt. 47(10), 1567–1574 (2008). [CrossRef]
9. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008). [CrossRef]
10. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation,” Appl. Opt. 53(7), 1354–1362 (2014). [CrossRef]
11. Y.-P. Zhang, F. Wang, T.-C. Poon, S. Fan, and W. Xu, “Fast generation of full analytical polygon-based computer-generated holograms,” Opt. Express 26(15), 19206–19224 (2018). [CrossRef]
12. Y. Zhang, H. Fan, F. Wang, X. Gu, X. Qian, and T.-C. Poon, “Polygon-based computer-generated holography: a review of fundamentals and recent progress [Invited],” Appl. Opt. 61(5), B363–B374 (2022). [CrossRef]
13. H. Fan, B. Zhang, Y. Zhang, F. Wang, W. Qin, Q. Fu, and T-C. Poon, “Fast 3D Analytical Affine Transformation for Polygon-Based Computer-Generated Holograms,” Appl. Sci. 12(14), 6873 (2022). [CrossRef]
14. D. Pi, J. Liu, and Y. Wang, “Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display,” Light: Sci. Appl. 11(1), 231 (2022). [CrossRef]
15. W. Lee, D. Im, J. Paek, J. Hahn, and H. Kim, “Semi-analytic texturing algorithm for polygon computer generated holograms,” Opt. Express 22(25), 31180–31191 (2014). [CrossRef]
16. Y-M Ji, H. Yeom, and J-H. Park, “Efficient texture mapping by adaptive mesh division in mesh-based computer-generated hologram,” Opt. Express 24(24), 28154–28169 (2016). [CrossRef]
17. Q.-Y. Fu, Y. Zhang, B. Zhang, and T-C. Poon, “Texture Mapping Based on Planar Projection Mapping in fully analytic polygon-based computer-generated holography,” Paper JTu4B. 51, FiO + LS (2022). [CrossRef]
18. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]
19. F. Wang, T. Shimobaba, Y. Zhang, T. Kakue, and T. Ito, “Acceleration of polygon-based computer-generated holograms using look-up tables and reduction of the table size via principal component analysis,” Opt. Express 29(22), 35442–35455 (2021). [CrossRef]
20. K. Matsushima, Introduction to Computer Holography: Creating Computer-Generated Holograms as the Ultimate 3D Images, Series in Display Science and Technology, Springer (2020).
21. D. Hearn, M. P. Baker, and W. Carithers, Computer Graphics with OpenGL, Prentice Hall Professional Technical Reference (2003).
22. Stony Brook University 3D Scanning Laboratory, https://www3.cs.stonybrook.edu/~gu/software/holoimage/index.html.
23. J. H. Park, S. B. Kim, H. J. Yeom, H. J. Kim, H. Zhang, B. Li, Y. M. Ji, S. H. Kim, and S. B. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015). [CrossRef]
24. F. Wang, T. Ito, and T. Shimobaba, “High-speed rendering pipeline for polygon-based holograms,” Photonics Res. 11(2), 313 (2023). [CrossRef]
25. D.-M. Yan and P. Wonka, “Non-Obtuse Remeshing with Centroidal Voronoi Tessellation,” IEEE Trans. Vis. Comput. Graph. 22(9), 2136–2144 (2016). [CrossRef]