
Fast calculation of computer-generated hologram of line-drawn objects without FFT

Open Access

Abstract

Although holographic display technology is one of the most promising three-dimensional (3D) display technologies for virtual and augmented reality, the enormous computational effort required to produce computer-generated holograms (CGHs), which digitally record and display 3D images, presents a significant roadblock to its implementation. One of the most effective ways to implement fast CGH calculations is a diffraction calculation (e.g., angular spectrum diffraction) based on the fast Fourier transform (FFT). Unfortunately, the computational complexity increases with increasing CGH resolution, which determines the size of a 3D image. Therefore, enormous calculations are still required to display a reasonably sized 3D image, even a simple one. To address this issue, we propose herein a fast CGH algorithm for 3D objects composed of line-drawn objects on layers at different depths. An aperture formed from a continuous line at a single depth can be regarded as a series of aligned point sources of light, and the wavefront converges for a sufficiently long line. Thus, a CGH of a line-drawn object can be calculated by synthesizing converged wavefronts along the line. Numerical experiments indicate that, compared with the FFT-based method, the proposed method offers a factor-56 gain in speed when calculating 16-k-resolution CGHs from 3D objects composed of twelve line-drawn objects at different depths.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Since Gabor invented the hologram in 1947, holography has become known for recording, processing, and displaying three-dimensional (3D) information. Today, it is applied in many technologies, such as microscopy [1], digital media [2], and 3D displays [3–5]. Electro-holography can display physiologically natural 3D images, making it one of the most promising 3D display technologies. However, significant obstacles prevent its practical use, one of which is the enormous computational effort required to calculate computer-generated holograms (CGHs), the digital images that contain the 3D information needed to display 3D images. The computational resources required to produce CGHs depend on both the resolution of the CGH and the complexity of the 3D model. In addition, the viewing angle for 3D images is inversely proportional to the pixel pitch of the spatial light modulator (SLM), the display device that modulates the incident light as dictated by the CGH, and the size of the 3D image is directly proportional to the size of the SLM. Therefore, significant computational resources are required to calculate a CGH that can project a 3D image with sufficient viewing area, size, and complexity.

A CGH is calculated by simulating the interference between two coherent beams of light, normally called the object beam and the reference beam. The object beam contains coherent light reflected from the 3D object, and the reference beam is emitted from a coherent light source. At some distance from the 3D object is the hologram plane, where the phase and amplitude of the interference between the object and reference beams are quantized and output as the CGH. The computational complexity of simulating the two beams is overwhelmingly due to the object beam because the reference beam is typically treated as a plane wave. Furthermore, because the simulation scheme for the object beam depends on the form of the 3D model, conventional algorithms for fast CGH calculations are categorized by the type of 3D model.

To form a CGH of a 3D object, one approach is to use methods based on infinitely small point-light sources (PLSs). In this case, the object beam is calculated by simulating the wavefront at the hologram plane created by the interference between the spherical waves emitted from the PLSs. Thus, the computational order of the PLS-based method is $\mathcal{O}(WHN)$, where $W$ and $H$ are the width and height of the CGH, respectively, and $N$ is the number of PLSs. Because calculations of PLS-based CGHs are relatively simple, many fast methods are available, such as the approximation-based method [6], the recurrence relation method [7–9], and the look-up table (LUT) method [10–14].

The polygon-based method is used to create CGHs of 3D models comprising planar surfaces, such as polygons [15–17] and multi-layer images [18–20]. In this case, the diffraction between planar surfaces is calculated, which may be done by using the angular spectrum method or the Fresnel diffraction method [21], both of which lend themselves to fast Fourier transforms (FFTs). Thus, the computational complexity of the polygon-based method becomes $\mathcal{O}(WH\log(WH)ML)$, where $W$ and $H$ are the width and height of the planar surface, $M$ is the number of FFTs required for a single propagation calculation, and $L$ is the number of surfaces.

In addition to these two basic methods, many other fast CGH calculation methods are available, such as the sparsity-based method [22–24], the wavefront recording plane method [25,26], and the ray-wavefront-conversion-based method [27,28].

The present study targets a 3D model composed of line-drawn objects on multiple planar layers oriented parallel to the hologram plane, and the goal of the study is to increase the speed of the polygon-based calculation method for multi-layer images. We assume that the results of this study will be used for head-up displays (HUDs) and near-eye displays (NEDs) for navigation systems (e.g., car navigation and work assistance) because the display content for these applications is relatively simple (e.g., characters, basic shapes, symbols). The most straightforward way to calculate CGHs of our targeted 3D object is to apply FFT-based diffraction calculations (e.g., the angular spectrum method or the Fresnel diffraction method). However, when a 3D image must be displayed, the computational resources required to calculate CGHs remain enormous, especially for high-resolution CGHs. To address this issue, we propose herein a fast PLS-based method that does not rely on FFTs to calculate CGHs of a 3D object composed of line-drawn objects. By limiting the pictures projected onto 3D space to line-drawn images, the proposed method can calculate 16-k-resolution CGHs of twelve-layer 3D images approximately 56 times faster than a conventional FFT-based method.

The main contributions of this work are as follows:

  • We propose a fast algorithm for calculating CGHs that uses CPUs to project simple line-drawn images, such as characters, icons, and graphical-user-interface elements (e.g., buttons and cursors), to different depths.
  • The calculation speed and image quality of the proposed algorithm are validated by comparing the results of numerical simulations with those of conventional fast methods of calculating CGHs.
  • We analyze the causes leading to the degradation of image quality.

The remainder of this study is organized as follows: Section 2 briefly presents the calculation of a CGH, and the details of the proposed method are described in Sec. 3. Section 4 details the experiments used to validate the proposed method and Sec. 5 discusses the results. Finally, Sec. 6 concludes the study.

2. Computer-generated holograms

A CGH is a digital image that records the information required to reconstruct a 3D object. It is obtained by simulating the optical interference between object and reference beams on a hologram plane: the object beam is reflected from the 3D object, and the reference beam is a plane wave that comes directly from the light source. The CGH is obtained by quantizing the amplitude and phase of the complex-amplitude distribution that results from the superposition of the object and reference beams at the hologram plane. The computational load of this operation comes primarily from simulating the object beam. CGHs generated from the amplitude (phase) distribution are called “amplitude-type” (“phase-type,” or “kinoform”) CGHs. This study assumes kinoform CGHs.

Methods to simulate the propagation of the object beam can be mainly divided into two categories: PLS-based methods and polygon-based methods. The following sections briefly review these two approaches.

2.1 PLS-based method

Figure 1 shows an overview of the PLS-based method and an example of the complex-amplitude distribution of a wavefront created by a PLS. Note that the figure depicts the phase of the complex-amplitude distribution. The PLS-based method is designed to create a CGH from a 3D model composed of infinitely small PLSs. Thus, the wavefront of the object beam in the PLS-based method is

$$u(x_h,y_h)=\sum_{j=1}^{N}\frac{A_j}{r_{hj}}\exp\left(i\frac{2\pi}{\lambda}r_{hj}\right),$$
where $u(x_h,y_h)$ is the complex-amplitude distribution of the object beam on the hologram plane, $i=\sqrt {-1}$, $A_j$ is the amplitude of PLS $j$, $\lambda$ is the wavelength of the object and reference beam, $r_{hj}= \sqrt {(x_j-x_h)^{2}+(y_j-y_h)^{2}+z_j^{2}}$ is the distance between PLS $j$ and pixel $(x_h,y_h)$ in the hologram plane, $(x_j,y_j,z_j)$ is the coordinate of PLS $j$, and $N$ is the number of PLSs. In this study, we assume $A_j/r_{hj} \approx \textrm {const.}$, which simplifies the calculation by allowing the paraxial approximation under the condition $x_j,y_j \ll z_j$ and with all PLSs emitting at the same intensity.
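
To make Eq. (1) concrete, the following C++ sketch performs the direct summation under the constant-amplitude assumption stated above. It is a minimal illustration, not the authors' implementation; all names and parameters are ours.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Sketch of Eq. (1): direct summation of spherical waves from N PLSs onto a
// W x H hologram plane, with A_j / r_hj treated as constant (set to 1 here).
struct PLS { double x, y, z; };  // PLS coordinates [m]

std::vector<std::complex<double>> objectBeam(const std::vector<PLS>& pls,
                                             int W, int H,
                                             double p,        // pixel pitch [m]
                                             double lambda) { // wavelength [m]
  const double k = 2.0 * 3.14159265358979323846 / lambda;  // wavenumber
  std::vector<std::complex<double>> u(static_cast<std::size_t>(W) * H);
  for (int yh = 0; yh < H; ++yh) {
    for (int xh = 0; xh < W; ++xh) {
      std::complex<double> sum(0.0, 0.0);
      for (const PLS& s : pls) {
        const double dx = s.x - xh * p, dy = s.y - yh * p;
        const double r = std::sqrt(dx * dx + dy * dy + s.z * s.z);
        sum += std::exp(std::complex<double>(0.0, k * r));
      }
      u[static_cast<std::size_t>(yh) * W + xh] = sum;  // O(WHN) overall
    }
  }
  return u;
}
```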

Fig. 1. Overview of PLS-based method.

Finally, the kinoform CGH is given by

$$c(x_h,y_h) = \textrm{arg}\left\{u(x_h,y_h)\right\}\cdot\frac{2^{b}-1}{2\pi},$$
where $\textrm {arg}(\cdot )$ is the operator that extracts the argument (phase) of a complex value, and $b$ is the quantization bit length.
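
As a minimal illustration of Eq. (2), the following hypothetical snippet quantizes the phase; note that std::arg returns values in $(-\pi,\pi]$, which must first be shifted to $[0,2\pi)$.

```cpp
#include <complex>
#include <cstdint>

// Sketch of Eq. (2): quantize the phase of u(x_h, y_h) to a b-bit kinoform
// value (the paper uses b = 8). Illustrative only.
std::uint32_t kinoform(std::complex<double> u, int b) {
  const double TWO_PI = 6.28318530717958647692;
  double phase = std::arg(u);        // in (-pi, pi]
  if (phase < 0.0) phase += TWO_PI;  // map to [0, 2*pi)
  return static_cast<std::uint32_t>(phase * ((1u << b) - 1) / TWO_PI);
}
```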

However, the region over which the wavefront from each PLS can be recorded on the hologram plane is restricted by the sampling interval of the SLM, which must be respected to prevent aliasing noise. For a wave emanating from PLS $j$ at $(x_j,y_j,z_j)$, the recordable wavefront on the hologram plane is a circle whose radius is

$$R_{\textrm{max}}(z_j)= z_j\frac{\lambda}{\sqrt{4p^{2}-\lambda^{2}}},$$
where $R_{\textrm {max}}$ is the radius of this circular region and $p$ is the pixel pitch of the SLM.
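
As a worked example with the parameters adopted later in this paper ($\lambda = 532$ nm, $p = 8$ $\mu$m), Eq. (3) gives
$$R_{\textrm{max}}(z)= z\frac{532\times10^{-9}}{\sqrt{4(8\times10^{-6})^{2}-(532\times10^{-9})^{2}}} \approx 0.033\,z,$$
so the recordable wavefront from a PLS at $z=0.10$ m has a radius of roughly 3.3 mm on the hologram plane.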

In addition, because Eq. (1) is shift invariant (i.e., the complex-amplitude distribution depends only on the relative position between the PLS and the hologram pixel), the PLS-based method can be interpreted as accumulating the complex-amplitude distribution of each PLS by shifting the wavefront. Therefore, the PLS-based method can be accelerated by precalculating the wavefront of a PLS at each depth and accumulating shifted copies of these wavefronts according to the positions of the PLSs, which is generally called the “novel-LUT” (N-LUT) method [11,12].
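
The core of that idea can be sketched as follows (hypothetical C++, not the implementation of [11,12]): a wavefront patch precalculated for the PLS's depth is simply shifted to the PLS's lateral position and accumulated.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// N-LUT-style accumulation sketch: add a precalculated (2*Rpix+1)^2 wavefront
// patch for one depth into the hologram buffer, centered at pixel (cx, cy).
// Boundary handling is naive clipping; all names are hypothetical.
void accumulatePatch(std::vector<std::complex<float>>& holo, int W, int H,
                     const std::vector<std::complex<float>>& patch, int Rpix,
                     int cx, int cy) {
  const int side = 2 * Rpix + 1;
  for (int dy = -Rpix; dy <= Rpix; ++dy)
    for (int dx = -Rpix; dx <= Rpix; ++dx) {
      const int x = cx + dx, y = cy + dy;
      if (x < 0 || x >= W || y < 0 || y >= H) continue;  // clip at the edges
      holo[static_cast<std::size_t>(y) * W + x] +=
          patch[static_cast<std::size_t>(dy + Rpix) * side + (dx + Rpix)];
    }
}
```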

2.2 Polygon-based method

The polygon-based method simulates wave propagation from each planar surface in a 3D model, which is expressed as

$$u(x_h,y_h)=\mathcal{F}^{{-}1}\left\{\sum_{j=1}^{P}u_j(x_h,y_h)\right\},$$
where $P$ is the number of polygons and $u_j(x_h,y_h)$ is the contribution of polygon $j$ to the field on the hologram plane. This contribution is calculated by applying a diffraction calculation between planar surfaces (e.g., the angular spectrum method), which is expressed as
$$u_j(x_h,y_h) = \mathcal{F}\left\{u_j(x_j,y_j)\right\}\exp\left(i2\pi z_j \sqrt{\frac{1}{\lambda^{2}}-f_{x}^{2}-f_{y}^{2}}\right),$$
where $\mathcal{F}\{\cdot\}$ and $\mathcal{F}^{-1}\{\cdot\}$ are the Fourier-transform and inverse-Fourier-transform operators, respectively, $(f_x,f_y)$ are coordinates in the frequency domain, and $z_j$ is the distance between polygon $j$ and the hologram plane. Generally, polygon planes are not parallel to the hologram plane, so Eq. (5) cannot be applied directly (it holds only for wave propagation between two parallel planes). We thus use the modified algorithm reported in [29,30], which handles wave propagation between nonparallel planes. Finally, as with the PLS-based method, the CGH is obtained from Eq. (2).
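
For reference, a hedged sketch of the angular spectrum propagation of Eqs. (4) and (5) for a single layer parallel to the hologram plane is shown below. It uses FFTW rather than the CWO++ library employed in the paper, and all names are ours.

```cpp
#include <fftw3.h>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Angular spectrum propagation of a W x H field u over distance z (sketch).
// Evanescent components (negative argument under the square root) are dropped.
void angularSpectrum(std::vector<std::complex<double>>& u, int W, int H,
                     double z, double lambda, double p) {
  const double TWO_PI = 6.28318530717958647692;
  // std::complex<double> is layout-compatible with fftw_complex.
  fftw_complex* buf = reinterpret_cast<fftw_complex*>(u.data());
  fftw_plan fwd = fftw_plan_dft_2d(H, W, buf, buf, FFTW_FORWARD, FFTW_ESTIMATE);
  fftw_plan bwd = fftw_plan_dft_2d(H, W, buf, buf, FFTW_BACKWARD, FFTW_ESTIMATE);
  fftw_execute(fwd);  // F{u_j}
  for (int iy = 0; iy < H; ++iy) {
    const double fy = ((iy < H / 2) ? iy : iy - H) / (H * p);  // cycles/m
    for (int ix = 0; ix < W; ++ix) {
      const double fx = ((ix < W / 2) ? ix : ix - W) / (W * p);
      const double arg = 1.0 / (lambda * lambda) - fx * fx - fy * fy;
      const std::complex<double> Hf = (arg > 0.0)
          ? std::exp(std::complex<double>(0.0, TWO_PI * z * std::sqrt(arg)))
          : std::complex<double>(0.0, 0.0);
      u[static_cast<std::size_t>(iy) * W + ix] *= Hf;  // transfer function
    }
  }
  fftw_execute(bwd);  // F^{-1}{...}
  for (auto& v : u) v /= static_cast<double>(W) * H;  // FFTW is unnormalized
  fftw_destroy_plan(fwd);
  fftw_destroy_plan(bwd);
}
```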

In this study, the target 3D model is a special polygon model composed of multiple planes that are parallel to the hologram plane. This allows us to use the diffraction calculation between the model planes and the hologram plane. We call this target model the “layer model.”

Figure 2 shows an example of a layer model with a line-drawn object and an overview of a CGH calculation for a conventional layer model. The conventional way to produce a CGH from a layer model is to repeatedly calculate the diffraction according to Eqs. (4) and (5), which means that the computational load increases with increasing CGH resolution and with the number of layers. For example, when using four cores of an Intel Core i7-8850H CPU, the calculation time for creating a CGH with $15\,360\times 8640$ pixels (16 k resolution) from four layers at different depths and with the same resolution is 120 s.

Fig. 2. Overview of calculation of CGH of layer model (assuming a 3D car-navigation display).

3. Calculation of CGH for layer model with line-drawn object

In contrast with the conventional method, we propose to use the PLS method to calculate a CGH from the layer model; in other words, we do not use the FFT-based diffraction calculation. To begin, note that a line-drawn object on a layer can be regarded as a thin slit. According to Huygens–Fresnel diffraction theory, light propagation through such a slit can be regarded as an aggregation of spherical wavelets emitted from PLSs continuously aligned along the slit. Thus, Eq. (1) can also be used for the diffraction calculation from layers containing line-drawn objects. Here, the thickness of the line equals $p$ (the pixel pitch of the SLM).

The proposed method exploits the fact that the accumulated wavefront from a straight alignment of PLSs converges when the length of the line is much greater than the diameter $2R_{\textrm {max}}$ of the effective range. Figure 3(a) shows an example of a line-drawn object consisting of a straight line of length $L$, Fig. 3(b) shows its complex-amplitude distribution on the hologram plane, Fig. 3(c) shows the complex-amplitude distribution approximated by the proposed method, and Fig. 3(d) shows the complex-amplitude distribution of a PLS at the same depth. Note that Figs. 3(b) and 3(d) are calculated by using Eq. (1). The PLS-based CGH method introduced in Sec. 2.1 calculates the CGH of the line-drawn object by overlapping the complex-amplitude distribution of a PLS [Fig. 3(d)] along the line-drawn object, shifting the center position by one pixel pitch along the $x$ axis. Therefore, the complex-amplitude distribution along a vertical line at a given $x$ is obtained by summing the complex-amplitude distribution of a PLS over a certain range along the vertical line. For example, the complex-amplitude distribution along the dotted green lines in Fig. 3(b) is obtained by summing the complex-amplitude distribution of one PLS in the vertical direction over the range indicated by the green double-headed arrow. The analogous approach is used to treat the complex-amplitude distribution along the red and blue dotted lines. The area between the blue dotted lines is obtained by summing the complex-amplitude distributions of all the PLSs in the vertical direction, so the CGH pattern converges in this area. Since the CGH is symmetric with respect to the line-drawn object, it can be compressed to one-dimensional (1D) data. Furthermore, as shown in Fig. 3(b), the pattern between the red and blue lines is almost the same as that between the blue lines.

Fig. 3. Formation of complex-amplitude distribution from line-drawn object using the proposed method (the complex-amplitude distribution is depicted as a phase distribution): (a) example of line-drawn object, (b) complex-amplitude distribution from the line-drawn object, (c) approximate complex-amplitude distribution using the proposed method, (d) complex-amplitude distribution of a PLS.

Thus, the proposed method approximates the complex-amplitude distribution of the line-drawn object by synthesizing the converged 1D complex-amplitude distribution between the blue dotted lines in Fig. 3(b), yielding the distribution shown in Fig. 3(c). The CGH of an arbitrary curve can be generated in the same way by treating the curve as a set of short straight lines; in other words, the converged 1D complex-amplitude distribution is overlapped along the direction normal to the curve. Figure 4 depicts the procedure for calculating the complex-amplitude distribution from an arbitrary curved line-drawn object, and Fig. 5 shows the resulting kinoform CGHs created by using Eq. (1) and by the proposed method. As in the conventional CGH calculation method, the proposed method accumulates the 1D complex-amplitude distribution on a hologram plane, which is implemented as a 2D floating-point buffer. The process of accumulating 1D complex-amplitude distributions along the normal direction of the line is executed at intervals of $p$ (the pixel pitch of the SLM) along the line; for example, for a curved line of length $L$, the process is executed $L/p$ times. Where 1D complex-amplitude distributions cross at the same point, the accumulation simply adds the complex values; for example, when two values $A_1\exp (i\theta _1)$ and $A_2\exp (i\theta _2)$ fall on the same point, the complex value on the hologram plane at that point becomes $A_1\exp (i\theta _1)+A_2\exp (i\theta _2)$. The effects of this approximation of the complex-amplitude distribution are discussed in Sec. 5.
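
A hedged sketch of this accumulation step is given below (hypothetical C++, not the authors' implementation): each sample of the line, spaced by one pixel, adds the converged 1D distribution along its unit normal, and crossing contributions add as complex values.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Sample { double x, y;      // sample position on the line [pixels]
                double nx, ny; }; // unit normal of the line at this sample

// `lut1d` holds half of the converged 1D complex-amplitude distribution for
// the layer's depth (index 0 lies on the line; symmetry supplies the other
// half). Contributions landing on the same pixel accumulate by addition.
void accumulateLine(std::vector<std::complex<float>>& holo, int W, int H,
                    const std::vector<Sample>& line,
                    const std::vector<std::complex<float>>& lut1d) {
  const int Rpix = static_cast<int>(lut1d.size());
  for (const Sample& s : line)
    for (int t = -(Rpix - 1); t <= Rpix - 1; ++t) {  // sweep along the normal
      const int x = static_cast<int>(std::lround(s.x + t * s.nx));
      const int y = static_cast<int>(std::lround(s.y + t * s.ny));
      if (x < 0 || x >= W || y < 0 || y >= H) continue;
      holo[static_cast<std::size_t>(y) * W + x] += lut1d[std::abs(t)];
    }
}
```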

Fig. 4. Illustration of procedure for applying the proposed method to an arbitrary curved line-drawn object (the complex-amplitude distribution is depicted as a phase distribution).

Fig. 5. Example of kinoform CGH of arbitrary curved line-drawn object: (a) original line-drawn object, (b) kinoform CGH generated by Eq. (1), (c) kinoform CGH generated by the proposed method.

Figure 6 shows the procedure for using the proposed method to calculate a CGH. First, the layered model containing line-drawn objects in each layer is divided into its layers, and the calculation unit (which is assumed to be a CPU core) synthesizes the converged 1D complex-amplitude distribution for the line-drawn objects. Next, the complex-amplitude distributions of all layers are integrated and quantized to produce the CGH.

Fig. 6. Illustration of procedure to use proposed method to calculate CGH of 3D line-drawn object.

To make the calculation faster, the proposed method uses a LUT that stores all of the precalculated converged 1D complex-amplitude distributions for each layer. These are then read out as needed to reproduce the desired depth of the line-drawn object. Because of the symmetry of the converged complex-amplitude distribution, only half of each complex-amplitude distribution need be stored, which means that the amount of data stored in the LUT for $Q$ depth layers becomes

$$D=\sum_{j=0}^{Q-1}{C\times R_{\textrm{max}}(z_j)},$$
where $D$ is the total memory required by the LUT, $C$ is the size of one complex value in bytes, $R_{\textrm{max}}(z_j)$ is expressed in pixels, and $z_j$ is the depth of layer $j$.
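
A small sketch of Eq. (6) follows. We assume complex<float> storage ($C = 8$ bytes) and that $R_{\textrm{max}}(z_j)$ is converted to a sample count via the pixel pitch; the paper's exact element size and sampling convention may differ, so the figures need not match the LUT sizes reported in Sec. 4.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// LUT memory per Eq. (6): sum over layers of C bytes times the number of
// stored samples (half the distribution, thanks to symmetry). Assumptions:
// C = 8 bytes (complex<float>) and R_max/p samples per layer.
std::size_t lutBytes(const std::vector<double>& depths /* z_j [m] */,
                     double lambda, double p, std::size_t C = 8) {
  std::size_t total = 0;
  for (double z : depths) {
    const double Rmax = z * lambda / std::sqrt(4.0 * p * p - lambda * lambda);
    total += C * static_cast<std::size_t>(Rmax / p);
  }
  return total;
}
```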

Because the proposed method requires only a 1D complex-amplitude distribution for each layer, and the calculation load depends mainly on the complexity of the line-drawn object rather than on the resolution of the CGH, the calculation of the CGH from a layered series of simple line-drawn objects requires only negligible computational power and memory. Thus, the proposed method should be suitable for implementation in embedded systems in HUDs and NEDs to project simple images.

4. Experiment

We verify the proposed method by testing its speed for calculating a CGH and the quality of the numerically and optically reconstructed images. Figure 7 shows the optical setup. We used a phase-modulation-type SLM (Holoeye Photonics AG, ‘PLUTO’) and a green laser with 532 nm wavelength (Showa Optronics, ‘J150GS’, Japan). The results of the proposed method are compared with those of the conventional FFT-based method, which calculates the diffraction between the hologram plane and each layer by using the angular spectrum method, as depicted in Fig. 2. Furthermore, we compared the computational speed with that of the PLS-based method (the N-LUT method [11]) because the calculation time of the PLS-based method is expected to scale differently from those of the proposed method and the conventional FFT-based method. Figures 8 and 9 show the 3D models used for these tests. Figure 8 presents a 3D model containing 12 layers, with each layer containing a simple line drawing. Figure 9 presents a four-layer 3D model representing a car HUD. In Fig. 8, the depth of the forward-most layer is 0.24 m and the inter-layer interval is 0.04 m between all layers. In Fig. 9, the depth of the forward-most layer is 0.10 m and the intervals between layers are 0.05, 0.02, and 0.03 m from front to back.

Fig. 7. Optical setup (the green line depicts the light path from the laser).

Fig. 8. 3D model constructed from twelve layers of simple shapes. The depth of the forward-most layer is 0.24 m and the inter-layer interval is 0.04 m between all layers (the yellow line on the 1st layer is not part of the object but is referred to in Sec. 4.3).

Fig. 9. 3D model of car HUD constructed from four layers. The depth of the forward-most layer is 0.10 m and the intervals between layers are 0.05, 0.02, and 0.03 m from front to back (the yellow line on the 1st layer is not part of the object but is referred to in Sec. 4.3).

In general, the depth intervals of a 3D model should be greater than both the human depth resolution and the optical resolution of point-light sources in the depth direction. The former is defined by $\Delta z_1 = 0.5d\tan \{\theta + \tan ^{-1}(2z/d)\}-z$ [31] and the latter by $\Delta z_2 = 8\lambda z^{2}/(N_h p)^{2}$ [32], where $z$ is the depth of a layer, $d$ is the distance between the two eyes, $\theta$ is the stereo-acuity (the smallest detectable depth difference in binocular vision), and $N_h$ is the one-side length of the hologram in pixels. Given $d=0.064$ m, $\theta = 10$ arcsec [31], and $N_h = 1,080$, we obtain $\Delta z_1 = 0.70$ mm and $\Delta z_2 = 26$ mm at $z = 0.68$ m, the deepest layer of Fig. 8; that is, the minimum separation of consecutive layers should be $\Delta z = 26$ mm at $z=0.68$ m. For $z=0.2$ m, the deepest layer of Fig. 9, $\Delta z_1 = 62$ $\mu$m and $\Delta z_2 = 2.3$ mm, i.e., $\Delta z = 2.3$ mm. Because $\Delta z_1$ and $\Delta z_2$ are increasing functions of $z$, these values of $\Delta z$ bound the minimum required separation of consecutive layers for each 3D model. Since the layer intervals of both 3D models exceed these distances, we can expect the 3D image to resolve clearly in all layers.
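
For transparency, the two criteria can be evaluated with the short sketch below (our code, directly transcribing the formulas above; small rounding differences from the quoted values are possible).

```cpp
#include <cmath>

// Human depth resolution [31]: dz1 = 0.5*d*tan(theta + atan(2z/d)) - z,
// with theta in radians (10 arcsec is about 4.85e-5 rad).
double humanDepthRes(double z, double d, double theta) {
  return 0.5 * d * std::tan(theta + std::atan(2.0 * z / d)) - z;
}

// Optical depth resolution of a PLS [32]: dz2 = 8*lambda*z^2 / (Nh*p)^2.
double opticalDepthRes(double z, double lambda, int Nh, double p) {
  const double aperture = Nh * p;
  return 8.0 * lambda * z * z / (aperture * aperture);
}
```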

The resolutions of the layers in each 3D model are 2 k (1,920$\times$1,080 pixels), 4 k (3,840$\times$2,160 pixels), 8 k (7,680$\times$4,320 pixels), and 16 k (15,360$\times$8,640 pixels); in other words, we generate CGHs with 2 k, 4 k, 8 k, and 16 k resolution. The wavelength of the incident light is $\lambda=532$ nm, the pixel pitch of the display device is $p=8$ $\mu \textrm {m}$, and the bit depth of the CGH is $b=8$ bits. The computing environment consisted of a Windows 10 Professional 64 bit operating system with an Intel Core i7-8850H 2.60 GHz CPU, the Microsoft Visual C++ 2017 compiler, 32 bit floating-point precision, and 16 GB of DDR4-2666 memory.

For the 3D model constructed from simple shapes (Fig. 8), we used a 2.53 MB LUT to hold 80 layers of the converged 1D complex-amplitude distributions from depths of 0.01 to 0.80 m in 0.01 m increments. For the car HUD 3D model, the LUT required 168 kB to contain 20 layers of converged 1D complex-amplitude distributions from depths of 0.01 to 0.20 m in 0.01 m increments. Both LUTs are sufficiently small for embedded systems.

The phase delay of all points on the line segments is the same over the whole hologram.

4.1 Calculation speed

Table 1 compares the calculation speed of the proposed method with those of the conventional FFT-based method, which uses the angular spectrum approach and the CWO++ library [33], and the N-LUT method [11]. We parallelized both the proposed method and the conventional methods by using OpenMP with four threads. As shown in the table, the proposed method calculates the CGH faster than the conventional methods at all resolutions, with the speed-up over the conventional FFT-based method increasing with the resolution of the CGH.

Table 1. Calculation times for proposed method (“Prop.”) compared with those for conventional FFT-based method (“FFT-based”) and N-LUT method for CGH of 3D model in Fig. 8 (“Simple shapes”) and in Fig. 9 (“Car HUD”). The ratio of times is given in parentheses beside each calculation time.

4.2 Image quality

We examined the image quality by using the angular spectrum method to numerically reconstruct the CGHs at the given distances of the layers. Further, we examined the optically reconstructed image of the proposed method for the simple shapes model at 2 k resolution, which matches the resolution of our SLM. Toward this end, we calculated the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) between the original image and the images numerically reconstructed by the proposed method and by the conventional FFT-based method. Tables 2 and 3 show the results for each layer of the simple shapes 3D model (Fig. 8) and the car HUD 3D model (Fig. 9).
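
For reference, PSNR can be computed as follows (a minimal sketch of the standard metric for 8-bit gray-scale images; SSIM is omitted for brevity, and this is not the authors' evaluation code).

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// PSNR in dB between two 8-bit gray-scale images of equal size (MAX = 255).
double psnr(const std::vector<unsigned char>& a,
            const std::vector<unsigned char>& b) {
  double mse = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) {
    const double d = static_cast<double>(a[i]) - static_cast<double>(b[i]);
    mse += d * d;
  }
  mse /= static_cast<double>(a.size());
  return 10.0 * std::log10(255.0 * 255.0 / mse);  // undefined if mse == 0
}
```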

Table 2. Image quality for CGH of simple shapes 3D model (Fig. 8) as determined by SSIM and PSNR.

Table 3. Image quality for CGH of car HUD 3D model (Fig. 9) as determined by SSIM and PSNR.

Figure 10 shows the numerically and optically reconstructed images of the CGH from the simple shapes 3D model at 2 k resolution. Figure 11 shows the numerically reconstructed images of the CGH from the car HUD 3D model at 16 k resolution. Each figure compares images reconstructed by the proposed method with those reconstructed by the conventional FFT-based method. Note that the figure panels in Fig. 10 are cropped to show only the area where an object is visible in the reconstructed image. In addition, videos of the numerically reconstructed images with varying focal distance are available in Visualization 1 and Visualization 2 for the simple shapes 3D model at 2 k and 16 k resolution, and in Visualization 3 and Visualization 4 for the car HUD 3D model at 2 k and 16 k resolution; an optically reconstructed video is available in Visualization 5 for the simple shapes 3D model at 2 k resolution. These results show that the proposed method adequately reconstructs the images in the layers at different focal distances.

Fig. 10. Numerically reconstructed images at 2 k resolution created by both the conventional FFT-based method (left columns) and the proposed method (middle columns), and optically reconstructed images of the proposed method (right columns) from the simple shapes 3D model (each panel is cropped to show only the area where an object is visible).

Fig. 11. Numerically reconstructed images at 16 k resolution created by both the conventional FFT-based method (2nd and 3rd columns) and the proposed method (4th and 5th columns) from the car HUD 3D model.

The average PSNR for the proposed method is less than 30 dB, which does not satisfy one of the criteria for high-quality two-dimensional images [34]. In addition, the average SSIM is less than 0.5 (the maximum possible value being 1.0). However, in both Figs. 10 and 11, we can recognize the shapes, numerals, and strings in the 3D objects, which indicates that the CGH produced by the proposed method reconstructs these 3D images with sufficient image quality for displaying line-drawn objects.

4.3 Property of intensity distribution of the reconstructed image

We analyzed the properties of the intensity distribution of the reconstructed image with regard to two points: 1) the uniformity of the intensity of the reconstructed images on the line-drawn object, and 2) the thickness of the line. Ideally, the intensity is distributed uniformly and brightly along the line of the reconstructed image, and the line thickness equals the pixel pitch $p$ (8 $\mu$m in this paper).

Figure 12 shows the average (bar graph) and standard deviation (error bar) of the intensity distribution of the reconstructed image on the line-drawn object, which was created to evaluate point 1). Here, the reconstructed image is an 8-bit gray-scale image. As shown in the figure, for both models the average value of the proposed method is lower, and the standard deviation wider, than those of the conventional FFT-based method, and this does not appear to improve as the resolution increases.

Fig. 12. Average and standard deviation of the intensity distribution on the reconstructed image: (a) car HUD, (b) simple shapes.

We also calculated the full width at half maximum (FWHM) of the intensity distribution on the yellow lines in Figs. 8 and 9 in order to evaluate point 2). Note that we applied linear interpolation of the intensity values between the pixel with the peak value and the adjacent pixels. The results are shown in Table 4. Although the thickness of the lines differs from the theoretical values, there are no major visual problems, as can be seen from the reconstructed images in Figs. 10 and 11.
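
The FWHM estimate described above can be sketched as follows (hypothetical code: find the peak of a 1D intensity profile taken across the line, then locate the half-maximum crossings by linear interpolation between adjacent pixels).

```cpp
#include <vector>

// FWHM (in pixels) of a 1D intensity profile across the line.
double fwhmPixels(const std::vector<double>& profile) {
  int peak = 0;
  for (int i = 1; i < static_cast<int>(profile.size()); ++i)
    if (profile[i] > profile[peak]) peak = i;
  const double half = 0.5 * profile[peak];
  double left = peak, right = peak;
  for (int i = peak; i > 0; --i)  // walk left from the peak
    if (profile[i - 1] < half) {
      left = (i - 1) + (half - profile[i - 1]) / (profile[i] - profile[i - 1]);
      break;
    }
  for (int i = peak; i + 1 < static_cast<int>(profile.size()); ++i)  // right
    if (profile[i + 1] < half) {
      right = i + (profile[i] - half) / (profile[i] - profile[i + 1]);
      break;
    }
  return right - left;
}
```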

Table 4. Estimated thickness of the line on the reconstructed plane.

5. Discussion

The results show that the gain in calculation speed offered by the proposed method increases with CGH resolution, with the maximum gain being a factor of 56 for the 16-k-resolution CGH of the simple shapes 3D model, compared with the conventional FFT-based method. This means that we can numerically calculate a 2-k-resolution CGH at 6.6 fps for the simple shapes 3D model and at 3.0 fps for the car HUD 3D model. These results would allow the proposed method to be implemented on embedded systems in, for example, cars or wearable glasses.

The increase in calculation speed of the proposed method over the conventional method is greater for the simple shapes 3D model than for the car HUD 3D model, which is due to the different complexity of the line-drawn objects and the different number of layers. Although the simple shapes model contains more layers than the car HUD 3D model, the complexity of the car HUD model is sufficiently greater than that of the simple shapes 3D model to reduce the gain in calculation time relative to the conventional method. The calculation time of the proposed method is mainly affected by the complexity and size of the line-drawn objects, whereas that of the conventional FFT-based method is affected by the number of layers and the CGH resolution. Thus, the gain in calculation speed with the proposed method is significantly greater for the simple shapes 3D model.

Note that, although the quality of the images reconstructed by the proposed method is not excellent, it suffices to recognize the contents of the 3D objects. We assume that the degradation in SSIM and PSNR for the proposed method is due to

  1. an uneven intensity distribution on the line;
  2. the intensity distribution extending beyond both ends of each line;
  3. the difference in background noise.

According to Fig. 12, problem (1) appears in every layer and at every resolution in both models, and it can be observed most clearly by comparing the curved and straight line objects in lower-resolution CGHs. Figure 13 compares, at several CGH resolutions, part of a reconstructed image from the first layer of the car HUD 3D model reconstructed by the proposed method with the same part reconstructed by the conventional FFT-based method. At 2 k resolution, the curved parts of the lines reconstructed by the proposed method contain significantly less intensity than those from the conventional FFT-based method. However, this difference becomes increasingly less noticeable as the resolution increases and is essentially negligible at 16 k resolution. One mechanism that might lead to this problem is the gaps in the synthesized converged 1D complex-amplitude distribution. As shown in Fig. 4, because the proposed method creates the complex-amplitude distribution of an arbitrary curved-line object by combining straight segments of converged 1D complex-amplitude distributions in the direction normal to the curved line, blank pixels appear within the synthesized converged complex-amplitude distribution. These gaps cause the density of the complex-amplitude distribution to diminish with increasing distance from the curved line, as seen in Fig. 5(c). The gaps decrease the intensity of the reconstructed image because each CGH pixel modulates the incident light, so a “gap” pixel contributes less intensity to the reconstruction of the line-drawn object. However, in reconstructed images at higher resolution, this problem is alleviated because the curved segment comprises more pixels than in a lower-resolution CGH, so the density of the complex-amplitude distribution along the curve increases. As a result, the image quality is quite satisfactory when the features of the line-drawn object are sufficiently large compared with the resolution length of the CGH. In future work, we will resolve this problem by developing a fast way to fill in the gaps.

Fig. 13. Comparison of intensity distribution between curved and straight objects.

Further, to analyze the relation between the curvature radius and the error of CGHs, we created a line-drawn object consisting of a single curve of fixed length and calculated the normalized root mean square error (NRMSE) between the CGHs created by the proposed method and by the PLS-based method as a function of the radius of curvature. The results are shown in Fig. 14(a). The length of the line is set to 1,024 pixels, and the radius of curvature ranges from 164 pixels, where the circumference is closest to 1,024 pixels, to 6,000 pixels, where the line is close to straight. Here, the radius of curvature was increased in increments of 4 for radii from 164 to 600 and in increments of 10 for radii from 600 to 6,000. The depth of the object is set to 0.10 m. When the radius is 164 pixels, the line can be considered a circle, as shown in Fig. 14(b), and when the radius is 324 pixels, the line can be considered a half-circle, as shown in Fig. 14(c). From this, it can be inferred that the error becomes smaller when the curve is close to a closed curve such as a circle; in other cases, the error increases in inverse proportion to the radius of curvature.

Fig. 14. Relation between the curvature radius of the line-drawn object and the NRMSE: (a) result, (b) line-drawn object of radius = 164, (c) line-drawn object of radius = 324.

Problem (2) appears clearly in the characters in the reconstructed images of the fourth layer of the car HUD 3D model. Compared with the image numerically reconstructed by the conventional FFT-based method, extra lines appear at both ends of straight lines and degrade the visibility of the object. Figure 15 compares part of the reconstructed image of the fourth layer of the car HUD 3D model at different resolutions. This problem stems from the proposed method's approximation of the complex-amplitude distribution of the line-drawn object. As explained in Sec. 3 and Fig. 3, the proposed method substitutes the complex-amplitude distribution with a converged distribution; in other words, we ignore any differences in the complex-amplitude distribution between the red and blue dotted lines and in the area outside the red dotted lines in Fig. 3. It is this strategy that causes problem (2).

Fig. 15. Comparison of intensity distribution of string “HIGH” in car HUD 3D model.

However, the ratio of the length of the extended line to the size of the line-drawn object decreases as the CGH resolution increases because the length of the extended line is determined by the wavelength of the incident light, the pixel pitch of the SLM, and the distance between the object and the CGH. Thus, as is the case for problem (1), problem (2) becomes negligible when the size of the line-drawn object is large compared with the resolution length of the CGH.

As shown in the figure, the image quality improves as the resolution increases, which is reflected in the improved PSNR and SSIM at higher resolution (see Tables 2 and 3). To solve problem (2), an effective method must be developed to reproduce the complex-amplitude distribution in the area outside the blue lines in Fig. 3. This work will also be the subject of a future study.

The source of problem (3) is also assumed to be the approximation of the complex-amplitude distribution. However, as mentioned above, the quality of the images of the line-drawn objects reconstructed by the proposed method is sufficient, so we consider this problem not significant.

6. Conclusion

We propose herein a fast method to calculate a CGH from a 3D object composed of multiple layers of line-drawn objects. When combined with a LUT, the proposed method can generate CGHs approximately 56 times faster than the FFT-based method with sufficient image quality. Because the relative gain in speed of the proposed method increases with CGH resolution, and practical holographic displays require extremely-high-resolution CGHs, the proposed method may be a practical route to real-time CGH calculation.

Funding

Kenjiro Takayanagi Foundation; Inoue Foundation for Science; Japan Society for the Promotion of Science (19H01097, 19K21536, 20K19810).

Acknowledgments

The authors declare no conflicts of interest.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. T. Tahara, X. Quan, R. Otani, Y. Takaki, and O. Matoba, “Digital holography and its multidimensional imaging applications: a review,” Microscopy 67(2), 55–67 (2018). [CrossRef]  

2. B. Kemper and G. von Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47(4), A52–A61 (2008). [CrossRef]  

3. N. Padmanaban, Y. Peng, and G. Wetzstein, “Holographic near-eye displays based on overlap-add stereograms,” ACM Trans. Graph. 38(6), 1–13 (2019). [CrossRef]  

4. F. Yaraş, H. Kang, and L. Onural, “State of the art in holographic displays: A survey,” J. Display Technol. 6(10), 443–454 (2010). [CrossRef]  

5. R. Häussler, Y. Gritsai, E. Zschau, R. Missbach, H. Sahm, M. Stock, and H. Stolle, “Large real-time holographic 3D displays: enabling components and results,” Appl. Opt. 56(13), F45–F52 (2017). [CrossRef]  

6. T. Nishitsuji, T. Shimobaba, T. Kakue, D. Arai, and T. Ito, “Simple and fast cosine approximation method for computer-generated hologram calculation,” Opt. Express 23(25), 32465–32470 (2015). [CrossRef]  

7. T. Shimobaba, S. Hishinuma, and T. Ito, “Special-purpose computer for holography HORN-4 with recurrence algorithm,” Comput. Phys. Commun. 148(2), 160–170 (2002). [CrossRef]  

8. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39(35), 6587–6594 (2000). [CrossRef]  

9. H. Yoshikawa, “Fast computation of fresnel holograms employing difference,” Opt. Rev. 8(5), 331–335 (2001). [CrossRef]  

10. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

11. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]  

12. S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009). [CrossRef]  

13. T. Nishitsuji, T. Shimobaba, T. Kakue, N. Masuda, and T. Ito, “Fast calculation of computer-generated hologram using the circular symmetry of zone plates,” Opt. Express 20(25), 27496–27502 (2012). [CrossRef]  

14. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). [CrossRef]  

15. H. Kim, J. Kwon, and J. Hahn, “Accelerated synthesis of wide-viewing angle polygon computer-generated holograms using the interocular affine similarity of three-dimensional scenes,” Opt. Express 26(13), 16853–16874 (2018). [CrossRef]  

16. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]  

17. Y.-P. Zhang, F. Wang, T.-C. Poon, S. Fan, and W. Xu, “Fast generation of full analytical polygon-based computer-generated holograms,” Opt. Express 26(15), 19206–19224 (2018). [CrossRef]  

18. H. G. Kim and Y. Man Ro, “Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object,” Opt. Express 25(24), 30418–30427 (2017). [CrossRef]  

19. J.-S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

20. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

21. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2017).

22. T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express 25(1), 77–87 (2017). [CrossRef]  

23. D. Blinder and P. Schelkens, “Accelerated computer generated holography using sparse bases in the STFT domain,” Opt. Express 26(2), 1461–1473 (2018). [CrossRef]  

24. D. Blinder, “Direct calculation of computer-generated holograms in sparse bases,” Opt. Express 27(16), 23124–23137 (2019). [CrossRef]  

25. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]  

26. P. W. M. Tsang and T.-C. Poon, “Fast generation of digital holograms based on warping of the wavefront recording plane,” Opt. Express 23(6), 7667–7673 (2015). [CrossRef]  

27. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011). [CrossRef]  

28. S. Igarashi, T. Nakamura, and M. Yamaguchi, “Fast method of calculating a photorealistic hologram based on orthographic ray-wavefront conversion,” Opt. Lett. 41(7), 1396–1399 (2016). [CrossRef]  

29. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). [CrossRef]  

30. T. Tommasi and B. Bianco, “Computer-generated holograms of tilted planes by a spatial frequency approach,” J. Opt. Soc. Am. A 10(2), 299–305 (1993). [CrossRef]  

31. S.-C. Kim and E.-S. Kim, “Fast one-step calculation of holographic videos of three-dimensional scenes by combined use of baseline and depth-compensating principal fringe patterns,” Opt. Express 22(19), 22513–22527 (2014). [CrossRef]  

32. T. Latychevskaia, “Lateral and axial resolution criteria in incoherent and coherent optics and holography, near- and far-field regimes,” Appl. Opt. 58(13), 3597–3603 (2019). [CrossRef]  

33. T. Shimobaba, J. Weng, T. Sakurai, N. Okada, T. Nishitsuji, N. Takada, A. Shiraki, N. Masuda, and T. Ito, “Computational wave optics library for c++: CWO++ library,” Comput. Phys. Commun. 183(5), 1124–1138 (2012). [CrossRef]  

34. R. Gomes, W. Junior, E. Cerqueira, and A. Abelem, “A QoE fuzzy routing protocol for wireless mesh networks,” in Future Multimedia Networking, (Springer Berlin Heidelberg, 2010), pp. 1–12.

Supplementary Material (5)

Visualization 1: Video of the numerically reconstructed images with varying focal distance (simple shapes 3D model with 2 k resolution)
Visualization 2: Video of the numerically reconstructed images with varying focal distance (simple shapes 3D model with 16 k resolution)
Visualization 3: Video of the numerically reconstructed images with varying focal distance (car HUD 3D model with 2 k resolution)
Visualization 4: Video of the numerically reconstructed images with varying focal distance (car HUD 3D model with 16 k resolution)
Visualization 5: Video of the optically reconstructed images with varying focal distance (simple shapes 3D model with 2 k resolution)
