
Improving the imbalance of the light intensity of 3D wire-frame projection with electro-holography by superimposing a phase error

Open Access

Abstract

The CG-line method is an algorithm for generating computer-generated holograms (CGHs), a digital recording medium for three-dimensional images in electro-holography. Since the CG-line method is specialized for projecting three-dimensional wireframe objects, it can calculate CGHs with a very low computational load. However, the reconstructed image of the conventional CG-line method suffers from an unintended light-intensity imbalance that depends on the object shape, which degrades the intelligibility of the projected image. Therefore, we propose a method for reducing the light imbalance by imposing a phase error that controls the light intensity according to the line shape. Consequently, we reduced the light imbalance while maintaining a high computational speed.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

D. Gabor invented holography to record and reconstruct three-dimensional (3D) images [1]. Electro-holography is now expected to be a promising 3D display technology. In theory, electro-holography can completely reconstruct the light reflected from real objects; thus, many researchers have studied holography as the ultimate goal of 3D display technology.

However, its practical implementation has yet to be established because of significant issues, such as the considerable computational load, the requirement for an extremely precise display, and the large transmission capacity. In particular, the large computational load required for generating computer-generated holograms (CGHs) is the most severe issue, and researchers have therefore made considerable efforts to develop fast CGH generation algorithms and systems.

So far, various fast generation methods have been developed, which are classified into hardware- and software-based approaches. Hardware-based approaches include the rapid calculation of CGHs with field-programmable gate arrays (FPGAs) [2,3], graphics processing units (GPUs) [4,5], and integrated circuits [6]. In addition, various software-based algorithms for fast CGH generation have been developed, including ray tracing [7], point-based [8], deep learning [9], layer-based [10], and line-based algorithms for 3D wireframe objects [11–16]. There are also methods to accelerate the above algorithms, including look-up tables [17–19] and wavefront recording planes [20] for point-based methods, and the angular spectrum method (ASM) [21] for layer-based methods.

The line-based method was first developed by Nishitsuji et al. [11] as the “computer-graphics (CG)-line method,” which specializes in generating CGHs of 3D wireframe objects, including multilayered ones. As the wavefront obtained from a line-shaped light source has line symmetry, the CG-line method duplicates a one-dimensional elementary wavefront along the 3D wireframe using CG-like drawing techniques; thus, it can accelerate the CGH calculation drastically, as elaborated in the following paragraphs.

The point-based method, which superimposes the point spread function (PSF) of each point light source, can also generate CGHs of 3D wireframe objects. However, as it superimposes a two-dimensional PSF for each point on every line of the wireframe, its computational amount grows quadratically compared with that of the CG-line method. The layer-based method can also generate CGHs of 3D wireframe objects; however, it requires a huge memory space for executing the fast Fourier transforms (FFTs) used to calculate the wave propagation between the layers and the hologram. The computational amount of the layer-based method is given by the product of the CGH resolution and the number of layers, whereas that of the CG-line method is determined by the total length of the 3D wireframe object; thus, its computational amount depends only weakly on the CGH resolution. Therefore, the CG-line method is superior to the other methods at larger CGH resolutions, as investigated in our previous study [12].

Although the CG-line method can generate CGHs of 3D wireframe objects at high speed, several issues still need to be addressed, such as improving the image quality, increasing the computational speed, and improving the expressiveness. The authors have tackled these issues, for example, by imposing gradation [16], enabling continuous change in the depth direction [22], and thickening the lines [23] to improve expressiveness, and by implementing the method on a GPU [12] to increase the computational speed.

In this study, we focused on improving the image quality. The CG-line method modified for GPU implementation [12] causes unintended changes in the light intensity on the wireframe within curves of continuously changing curvature, thus degrading the visibility of the 3D image. Namely, it makes it difficult to correctly recognize the whole shape of the object because we generally tend to focus on the brighter parts of an object. It also degrades the contrast and resolution of the reconstructed images; thus, enhancing the visibility of the 3D image is essential for the practical use of electro-holography. In general, the intensity imbalance that occurs in the reconstructed image of a phase hologram can be solved by a phase-optimization process. The best-known optimization approach for holograms is the iterative method (e.g., the Gerchberg–Saxton (GS) method [24]). This approach improves image quality by repeating forward and backward propagation of the complex amplitude distribution of the hologram, with constraints imposed on the phase and amplitude in each loop. Such a method could therefore solve the issues of the CG-line method by setting the wireframe image as the amplitude constraint in the iteration. However, the computational time required for improving the image quality is problematic because of the repetitive calculations involved; this is especially detrimental to an interactive holographic 3D display system using the CG-line method. Therefore, the iterative optimization method is unsuitable for improving the image quality of the CG-line method. Alternatively, the error-diffusion-based method is a well-known approach to optimizing CGHs [25,26]; however, it is not well suited to parallel implementation on a GPU, and its computational amount depends on the CGH resolution. It thus forfeits major advantages of the CG-line method, namely its suitability for parallel implementation and the independence of its computational time from the resolution, so the time spent improving the image quality would dominate the CGH generation time. Therefore, the error-diffusion-based method is also unsuitable for the CG-line method, and we instead developed an image-quality-improvement method that relies on neither iterative nor error-diffusion-based processing.
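
For illustration, the following is a minimal sketch of one GS-style iteration, assuming a simple Fourier-hologram geometry (the CGHs in this paper are Fresnel holograms) and a hypothetical `target_amplitude` wireframe image used as the amplitude constraint; it shows why the repeated propagations make this approach costly.

```python
# A minimal sketch of one Gerchberg-Saxton-style iteration for a kinoform,
# assuming a Fourier-hologram geometry for simplicity; `target_amplitude`
# is a hypothetical wireframe image (same shape as `phase`).
import numpy as np

def gs_iteration(phase, target_amplitude):
    # Forward propagation: hologram plane -> image plane (unit amplitude).
    field = np.exp(1j * phase)
    image = np.fft.fft2(field)
    # Amplitude constraint: keep the phase, replace amplitude with the target.
    image = target_amplitude * np.exp(1j * np.angle(image))
    # Backward propagation: image plane -> hologram plane.
    field = np.fft.ifft2(image)
    # Phase constraint: a kinoform keeps only the phase.
    return np.angle(field)

# Repeating this loop improves image quality but multiplies the CGH
# generation time by the number of iterations.
```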

Owing to the limitations of spatial light modulators (SLMs), current CGHs are classified as either phase- or amplitude-modulated. Generally, phase-modulated CGHs can modulate light more efficiently than amplitude-modulated CGHs; thus, the reconstructed 3D images can be brighter. Therefore, in this study, as in previous studies, we focus on phase-modulated CGHs, especially the kinoform, which is a method of realizing a phase-modulated hologram [27]. The proposed method cannot be straightforwardly applied to amplitude-modulated CGHs because the algorithm on which it is based operates on the phase distribution; however, we believe the idea can be adapted to amplitude modulation, which will be reported in a future study.

2. CG-Line method

2.1 Overview

For a 3D object comprising self-luminous point light sources, the CGH is calculated by superposing the wavefronts emitted from the point light sources on the hologram plane. The wavefront created on the hologram plane $(x_\alpha,y_\alpha )$ by a point light source at coordinates $(\delta,\epsilon,\zeta )$ is denoted as

$$\phi(x_\alpha,y_\alpha) = \frac{a}{\lambda}\exp{\left[\frac{i\pi}{\lambda \zeta}\left\{(x_\alpha - \delta)^2 + (y_\alpha - \epsilon)^2\right\}\right]},$$
where $\lambda$ is the wavelength of the reference light, $i$ is the imaginary unit, and $a$ is the amplitude of the point light sources.
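
As an illustration, a minimal sketch of this superposition is given below, assuming hypothetical `points` tuples $(\delta, \epsilon, \zeta, a)$ and a square pixel pitch `p`; it is a direct, deliberately unoptimized transcription of Eq. (1).

```python
# A minimal sketch of the point-based superposition of Eq. (1) on an
# N x M hologram sampled at pixel pitch p; `points` is a hypothetical
# list of (delta, epsilon, zeta, a) tuples, all lengths in meters.
import numpy as np

def point_based_cgh(points, N, M, p, wavelength):
    y, x = np.mgrid[0:N, 0:M].astype(np.float64)
    x_a, y_a = x * p, y * p                      # hologram-plane coordinates
    field = np.zeros((N, M), dtype=np.complex128)
    for delta, epsilon, zeta, a in points:
        r2 = (x_a - delta) ** 2 + (y_a - epsilon) ** 2
        field += (a / wavelength) * np.exp(1j * np.pi * r2 / (wavelength * zeta))
    return np.angle(field)                       # kinoform: keep only the phase
```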

Given that a line is composed of a collection of point light sources and is straight with infinite length along the $x$-axis, parallel to the hologram plane, the wavefront created by the superposition of the spherical waves radiated from each point light source has a geometrically symmetric shape, which is formulated as

$$L(x_\alpha,y_\alpha) = \frac{1}{\zeta}\int_{-\infty}^{\infty} a(u)\cdot\exp\left[\frac{i\pi}{\lambda \zeta}\big\{(x_\alpha-u)^{2} + y_\alpha^{2}\big\}\right] du,$$
where $a(u)$ is the amplitude of the point light source at $u$. Given that the amplitudes of the spherical waves emitted from the point light sources are equal (i.e., $a=\mathrm{const.}$), Eq. (2) can be represented as a pure function of $y_\alpha$ using the Fresnel integral:
$$U(y_\alpha) = a\sqrt{\frac{\lambda}{\zeta}}\exp{\left(\frac{i\pi y_\alpha^2}{\lambda \zeta}\right)}.$$
The effective range of $U(y_\alpha )$ is determined by the diffraction limit of the SLM,
$$U(y_\alpha)= \left\{ \begin{array}{ll} \mbox{Eq. (3)} & (|y_\alpha| < R_\zeta)\\ 0 & (\text{otherwise}) \end{array} \right.,$$
where $R_\zeta$ is defined as
$$R_\zeta = \zeta \frac{\lambda}{\sqrt{4p^2-\lambda^2}},$$
where $p$ is the pixel pitch of the SLM. As Eq. (4) is a pure function of $y_\alpha$, the wavefront can be computed as if stretching the one-dimensional wavefront $U(y_\alpha )$ along the direction of the line. In summary, the CG-line method approximates the conventional computation with Eq. (1) by drawing Eq. (4) in the direction normal to the 3D line object, as shown in Fig. 1. The computational amount of the CG-line method for each point on the line is the square root of that of the PSF superposition in the point-based method; thus, the CG-line method can significantly reduce the computational complexity [12].
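
The following minimal sketch illustrates this idea: the 1D kernel of Eqs. (3)-(5) is precomputed once per depth and stamped along the normal of each sampled line point. The names `line_pts` and `normals` are hypothetical, and the actual method [11,12] uses far more efficient CG-style rasterization.

```python
# A minimal sketch of the CG-line idea: precompute the 1D wavefront U(y) of
# Eqs. (3)-(5) and stamp it along the unit normal of each sampled line point
# (given in pixel coordinates). Not the paper's optimized implementation.
import numpy as np

def cg_line_cgh(line_pts, normals, zeta, N, M, p, wavelength, a=1.0):
    R_zeta = zeta * wavelength / np.sqrt(4 * p**2 - wavelength**2)  # Eq. (5)
    n_half = int(R_zeta / p)
    y = np.arange(-n_half, n_half + 1) * p
    U = a * np.sqrt(wavelength / zeta) * np.exp(1j * np.pi * y**2 / (wavelength * zeta))
    field = np.zeros((N, M), dtype=np.complex128)
    for (px_, py_), (nx_, ny_) in zip(line_pts, normals):
        for Uk, yk in zip(U, y):
            # Stamp the 1D kernel along the normal (nearest-pixel rounding).
            col = int(round(px_ + nx_ * yk / p))
            row = int(round(py_ + ny_ * yk / p))
            if 0 <= row < N and 0 <= col < M:
                field[row, col] += Uk
    return np.angle(field)
```

The total work is the line length times the kernel length, consistent with the square-root reduction relative to 2D PSF superposition described above.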

Fig. 1. Overview of the CG-line method: (a) general illustration of the wavefront convergence; superimposing the PSF of Eq. (1) along an infinite straight line generates the converged wavefront, which can be formulated using Eq. (4); (b) CGH of a 3D wireframe object generated using Eq. (1).

2.2 Intensity imbalance of the reconstructed image

The GPU-accelerated version of the CG-line method [12] causes an intensity imbalance in the reconstructed images, as illustrated in Fig. 2; this problem has yet to be solved. Ideally, the intensity distribution along the line of the reconstructed image should be uniform, as shown in Fig. 2(a), because the intensity at all positions on the line is assumed to be equal. However, as shown in Fig. 2(b), an intensity imbalance that depends on the line shape (e.g., the curvature radius) occurs at the edges of each character in an image reconstructed by the GPU-accelerated CG-line method. According to our observations, the intensity decreases at locations with a large curvature radius and increases at locations with a small one. For example, the intensity ratio between points A and B in Fig. 2(b) is 31.5.

Fig. 2. Illustration of the intensity imbalance (obtained through a numerical reconstruction simulation): (a) ideal image and (b) actual image reconstructed using the GPU-accelerated CG-line method.

According to Nishitsuji et al. [12], this phenomenon can be attributed to differences in the area of the composed wavefront per unit length of the line; i.e., a relatively smaller intensity results as the curvature radius increases. As a hologram is an optical element that concentrates the light radiated from its pixels onto focal points, a difference in the number of wavefront pixels contributing to each focal point induces an intensity difference. Furthermore, the total number of wavefront pixels per unit length of the line strictly depends on the curvature radius of the line under certain conditions [12]. Therefore, we can compensate for the intensity imbalance according to the curvature radius of the line.

3. Proposed method

To reduce the intensity imbalance in the image reconstructed using the GPU-accelerated version of the CG-line method (hereafter referred to as the conventional method), the proposed method incorporates into the CGH-calculation process the imposition of a random phase error that attenuates the intensity of the reconstructed image [16]. If the computational cost of imposing the random phase error is small, the computational speed of the proposed method remains sufficiently close to that of the conventional method.

Imposing a random phase error on the hologram can be regarded as placing an optical diffuser near the hologram. The degree of attenuation was determined to be inversely proportional to the amplitude of the random phase error [16]. As the intensity imbalance of the conventional method is assumed to be correlated with the curvature radius of the lines of the wireframe object, the proposed method determines the amplitude of the random phase error as a function of the curvature radius of the line.

Therefore, with the random phase error imposed, Eq. (4) becomes

$$U(y_\alpha) = a\sqrt{\frac{\lambda}{\zeta}}\exp{\left(\frac{i\pi y_\alpha^2}{\lambda \zeta}\pm \rho \pi\right)} ,$$
where $\rho$ is the amplitude of the phase error, and the sign $\pm$ of the phase error is chosen with equal probability. The sign represents the irregularity of the grid, and the amplitude of the phase error represents the grid size of a diffuser [16]. As the amplitude of the phase error is inversely proportional to the attenuation rate of the intensity, and the intensity imbalance is caused by the curvature radius of the line, the goal of the proposed method is to find the appropriate phase-error amplitude for each curvature radius. That is, we try to identify the phase-error function $\rho (R)$ in Eq. (6), where $R$ is the curvature radius of the line. The intensity weakens as the curvature radius increases; it is weakest for a straight line, whose curvature radius tends to infinity. Furthermore, because imposing a random phase error can only attenuate the intensity, not increase it, the intensity on a line with a small curvature radius must be reduced until it approaches that of a straight line to flatten the intensity distribution along the line. Therefore, in the proposed method, we derive $\rho (R)$ from the ratio of the wavefront area per unit length of the line between a curve and a straight line. Note that a more detailed explanation of generating CGHs with the CG-line method with phase-error imposition can be found in [16].
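
A minimal sketch of Eq. (6) is shown below; it perturbs the 1D kernel of Eq. (3) with a random-sign phase error of amplitude $\rho\pi$. The random-generator setup and the name `impose_phase_error` are our assumptions for illustration.

```python
# A minimal sketch of Eq. (6): adding a random-sign phase error of amplitude
# rho*pi to the 1D wavefront kernel `U` of Eq. (3), emulating an optical
# diffuser placed near the hologram [16].
import numpy as np

rng = np.random.default_rng(0)  # assumed reproducible generator

def impose_phase_error(U, rho):
    # Each kernel sample gets +rho*pi or -rho*pi with equal probability.
    signs = rng.choice([-1.0, 1.0], size=U.shape)
    return U * np.exp(1j * signs * rho * np.pi)
```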

First, assuming that the 3D wireframe is represented by polynomials, a small section of a line can be approximated by an arc. According to a previous investigation [12], the areas of the wavefront for a straight line and an arc are

$$S = 2R_\zeta l \quad\quad\quad\quad\quad(R = \infty)$$
$$C(R) = \left\{ \begin{array}{ll} 2R_\zeta l & (R_\zeta < R)\\ \frac{l}{R}(R_\zeta ^2 + R^2) & (R_\zeta \geq R) \end{array} \right.,$$
where $l$ is the length of the line and $S$ and $C(R)$ are the areas of the straight line and curve, respectively.

The ratio of the area of the wavefront is defined as

$$A_r(R) =\frac{S}{C(R)}.$$
In addition, the attenuation effect of the light intensity has been investigated in [16] as
$$\rho(D)=\frac{\arccos(2D-1)}{2\pi},$$
where $D$ is the attenuation ratio determined by
$$D=\frac{\text{Desired (attenuated) intensity}}{\text{Current intensity}}.$$
As the attenuation rate $D$ is assumed to be correlated with the intensity ratio $I_r$ between the straight line and the curve, Eq. (10) can be rewritten as
$$\rho(I_r)=\frac{\arccos(2I_r-1)}{2\pi}.$$
Furthermore, as the intensity ratio $I_r$ is correlated with the area ratio $A_r$, approximating $I_r=A_r$ transforms Eq. (12) into
$$\rho\{A_r(R)\} = \frac{\arccos\{2A_r(R)-1\}}{2\pi}.$$
Therefore, the phase-error amplitude can be calculated from the curvature radius $R$ using Eq. (13). Finally, by combining Eq. (6) with Eq. (13), we obtain the proposed CG-line method, which corrects the intensity imbalance of the reconstructed line.
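
A minimal sketch of Eqs. (7)-(13) follows, mapping a curvature radius $R$ to the phase-error amplitude $\rho$ under the approximation $I_r = A_r$; the clipping of $A_r$ is a numerical safeguard we added, not part of the derivation.

```python
# A minimal sketch of Eqs. (7)-(13): curvature radius R -> phase-error
# amplitude rho, with R_zeta from Eq. (5) and line length l (any unit,
# since it cancels in the ratio A_r = S / C).
import numpy as np

def rho_from_radius(R, R_zeta, l=1.0):
    S = 2.0 * R_zeta * l                                # Eq. (7), straight line
    if R_zeta < R:                                      # Eq. (8), curve
        C = 2.0 * R_zeta * l
    else:
        C = (l / R) * (R_zeta**2 + R**2)
    A_r = np.clip(S / C, 0.0, 1.0)                      # Eq. (9); keep arccos arg in range
    return np.arccos(2.0 * A_r - 1.0) / (2.0 * np.pi)   # Eq. (13)
```

Note that for $R > R_\zeta$ this yields $A_r = 1$ and hence $\rho = 0$: gentle curves behave like straight lines and receive no attenuation, as intended.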

Given that the curve is represented by a polynomial, the curvature radius of the object is formulated as

$$R(t) = \frac{\left(x'(t)^{2}+y'(t)^{2}\right)^{\frac{3}{2}}}{x'(t)\,y''(t)-y'(t)\,x''(t)},$$
where $x(t)$ and $y(t)$ are the coordinates of the curve on the hologram plane; $x'(t)$ and $y'(t)$ are their first-order derivatives with respect to parameter $t$; and $x''(t)$ and $y''(t)$ are the corresponding second-order derivatives. Note that the GPU-accelerated CG-line method assumes that the 3D wireframe object is described using quadratic Bezier curves; therefore, the above process can be employed directly in the method.
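
As an illustration, a minimal sketch of Eq. (14) for a quadratic Bezier curve is given below; taking the absolute value of the denominator is our assumption, since only the magnitude of the curvature radius matters in Eq. (13).

```python
# A minimal sketch of Eq. (14) for a quadratic Bezier curve
# B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2, with 2D control points
# p0, p1, p2 given as NumPy arrays and parameter t in [0, 1].
import numpy as np

def bezier_curvature_radius(p0, p1, p2, t):
    d1 = 2 * (1 - t) * (p1 - p0) + 2 * t * (p2 - p1)   # first derivative
    d2 = 2 * (p2 - 2 * p1 + p0)                        # second derivative (constant)
    num = (d1[0] ** 2 + d1[1] ** 2) ** 1.5
    den = d1[0] * d2[1] - d1[1] * d2[0]                # signed cross product
    return np.inf if den == 0 else abs(num / den)      # straight: R -> infinity
```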

4. Experimental results and discussion

Two types of experiments were conducted to demonstrate the feasibility of the proposed method. Figure 4 shows the 3D line objects used in the experiments. In the basic experiment, to verify the validity of the intensity correction for the curvature radius given by Eq. (13), we examined the intensity ratio between an arc and a straight-line object using the line model depicted in Fig. 4(a), which comprises a half arc and a straight line of the same length on the same depth layer. In the second experiment, we tested the effectiveness of the proposed method in terms of image-quality improvement and computational speed using the object depicted in Fig. 4(b), which comprises 111 quadratic Bezier curves and two straight lines on single or multiple layers. The single layer has a depth of 0.3 m, and the multiple layers have depths of 0.1, 0.2, and 0.3 m, which differ from letter to letter.

The computer environment used in this study was configured as follows: Microsoft Windows 10 Professional operating system, Intel Core i7-12700 2.10 GHz CPU, 64 GB of DDR4-3200 memory, Microsoft Visual C++ 2019 compiler with single floating-point precision, and an NVIDIA GeForce RTX 3080 GPU with CUDA 11.6.

Figure 3 shows the optical system, comprising a phase-modulation-type SLM (Jasper ’JD7714’) with a resolution of $4096 \times 2400$ pixels and a pixel pitch of $p=3.74\,\mu \mathrm{m}$, a green laser with a wavelength of $\lambda =532$ nm (Thorlabs ’CPS532-C2’), a beam expander (Thorlabs ’GBE10-A’), a polarizer (Thorlabs ’WP25M-VIS’), a polarizing beam splitter (Thorlabs ’CCM1-PBS251/M’), a half-wave plate (Thorlabs ’WPH10M-532’), a quarter-wave plate (Thorlabs ’WPQ10M-532’), a plano-convex lens (Thorlabs ’AC254-150-A-ML’), and a hand-crafted $\phi =0.5$ mm circular block filter created with 3M aluminum-coated tape. The reconstructed images were recorded with a Sony ILCE-6000 camera, whose image sensor captured the real images directly.

Fig. 3. Optical system.

4.1 Basic experiment

Using the objects depicted in Fig. 4(a), we examined the intensity ratio between the straight line and the arc of the numerically reconstructed image while varying the curvature radius of the arc. Hereinafter, the numerically reconstructed images were obtained using the ASM. Note that when extracting the intensity, we averaged the intensity values around each point on the line to account for the intensity spread on the reconstructed plane. Figure 5 illustrates the numerically reconstructed images, and Fig. 6 shows the intensity ratios of the straight-line and arc sections according to the curvature radius of the arc. Ideally, the intensity ratio of the straight-line and arc sections should be one for all radii.
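
For reference, a minimal sketch of such an ASM reconstruction is shown below, assuming a kinoform phase pattern, a propagation distance `z`, and suppression of evanescent components as the only filtering.

```python
# A minimal sketch of numerical reconstruction with the angular spectrum
# method: propagate the kinoform field by distance z (meters) and take
# the intensity on the reconstruction plane.
import numpy as np

def asm_reconstruct(phase, z, p, wavelength):
    N, M = phase.shape
    field = np.exp(1j * phase)
    fx = np.fft.fftfreq(M, d=p)            # spatial frequencies [1/m]
    fy = np.fft.fftfreq(N, d=p)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                       # suppress evanescent components
    recon = np.fft.ifft2(np.fft.fft2(field) * H)
    return np.abs(recon) ** 2              # reconstructed intensity
```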

Fig. 4. 3D objects: (a) an object comprising two sections of constant curvature radius (Arc, Line) and (b) an object comprising two straight lines and 111 quadratic Bezier curves (Tokyo).

Fig. 5. Example of numerically reconstructed images of the two constant-curvature-radius objects ($R = 286$ pixels, depth $=0.1$ m).

Fig. 6. Intensity ratios of the straight-line and arc sections at depths of (a) 0.1 m and (b) 0.2 m.

These results show, qualitatively and quantitatively, that the proposed method corrects the intensity imbalance of the reconstructed images; i.e., the intensity ratio is approximately one at all radii, whereas that of the conventional method is very high, especially at small curvature radii. Note that we could not conduct experiments with a longer playback distance owing to a lack of computer memory; however, the result should be the same at any playback distance because the mechanism causing the imbalance is independent of the playback distance.

4.2 General object experiment

To clarify the validity of the proposed method, we examined the image-quality improvement and computational speed compared with the following three reference methods:

  • Point-based method: the most basic method, calculated using Eq. (1).
  • ASM: a method that calculates the diffraction from all layers comprising point light sources using FFTs.
  • Conventional GPU-accelerated CG-line method [12]: the method calculated using Eq. (4), which results in an intensity imbalance.
The resolution of the CGHs was set to $4096 \times 2400$ and $8192 \times 4800$ pixels. All CGH generation was performed on the GPU.

In summary, the image quality was successfully improved, and the computational speed was faster than that of the comparison methods, although slightly lower than that of the conventional CG-line method. These results, detailed below, confirm the versatility of the proposed method for general objects.

4.2.1 Image-quality improvement

Figures 7 and 8 show the numerically and optically reconstructed images of the CGHs created from the 3D model with a single layer and multiple layers, respectively. Here, we conducted the numerical experiment at both resolutions, whereas the optical experiment was conducted only at $4096\times 2400$ owing to the SLM’s resolution. Further, a histogram-based threshold filter was applied to the numerically reconstructed images to improve visibility, whereas the image quality was evaluated on the unfiltered numerically reconstructed images.

Fig. 7. Numerically and optically reconstructed images of the general object on a single layer.

Fig. 8. Numerically and optically reconstructed images of the general object on multiple layers.

From a qualitative perspective, as shown in Fig. 8, the proposed method succeeded in correcting the intensity imbalance of the conventional CG-line method, and its image quality is equal to or better than that of the point-based and ASM methods. The noise around objects in the numerically reconstructed image is considered to be crosstalk from other layers, and the noise around objects in the optically reconstructed image is considered to stem from insufficient optical alignment, which could be improved by adjusting the optical system.

As Fig. 7 shows, the brightness of the character ’o’ in the reconstructed image of the proposed method appears weaker than that of the other characters, which does not occur with the other methods. This is assumed to be due to the approximation error of the proposed method. Namely, the proposed method calculates the curvature radius at each point on the curve and determines the amount of phase error from it; that is, it approximates the curve as a set of minimal arc segments whose curvature radii usually differ between neighboring segments. In contrast, the algorithm expressed in Eq. (13) is derived from a single-arc object in which the curvature radius of neighboring points (segments) on the line is the same; this difference induces the approximation error of the proposed method. In the numerical simulation, the character ’o’, which is composed of curves with small curvature radii, is strongly affected by this error because a large amount of compensation is applied, unlike letters such as ’T’ that are composed mainly of nearly straight curves. Therefore, the overall intensity of ’o’ becomes weaker than that of the other letters in the proposed method. This issue could be solved by, for example, approximating the curve with longer segments, which guarantees consistency of the curvature radius at neighboring points on the curve.

Furthermore, the overall brightness of the optically reconstructed image of the proposed method is low. This is because the proposed method disperses the light concentrated on the line to the surrounding area by partially superimposing a phase error on the hologram. The light is not redistributed to other parts of the line but evenly dispersed to the surrounding area. Moreover, the light in areas where the intensity was not concentrated (the straight sections) is not dispersed; i.e., the intensity on the straight line stays the same. In other words, the hologram is processed so that the intensity is adjusted down to that of the initially weak areas. Therefore, the overall light intensity is reduced compared with the ASM and point-based methods. However, considering that this research targets display applications, the perception of lines and letters should not be a problem unless the intensity is extremely low, because the human eye adapts dynamically to light and dark.

From a quantitative perspective, we measured the mean absolute difference (MAD) between the numerically reconstructed intensity distribution around the desired line position and the desired value. We define the desired value as the strongest intensity; thus, the MAD indicates how close the intensity distribution is to the desired value. The MAD is defined as

$$\text{MAD}=\frac{\sum_j{|I_n(j)-1.0|}}{N}$$
where $N$ is the number of pixels along the line and $I_n(j)$ is the averaged normalized intensity at the $j$-th position on the line; i.e., we obtained $I_n(j)$ by first normalizing the whole intensity distribution between 0 and 1 and then extracting the intensity values around the $j$-th position of the line. These values were then averaged to absorb the intensity spread on the reconstructed plane. Usually, image quality is evaluated using well-known indices such as the peak signal-to-noise ratio (PSNR) or structural similarity (SSIM). However, as these indices evaluate the similarity of each pixel in the whole image, they cannot directly evaluate the effectiveness of correcting the intensity imbalance. Thus, in this study, we evaluated the differences between the desired and realized intensities by directly comparing the intensities around the line object.
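
A minimal sketch of this evaluation is given below; `line_pixels` and the averaging window are hypothetical stand-ins for the actual line-sampling procedure, and the line is assumed to lie away from the image borders.

```python
# A minimal sketch of Eq. (15): MAD between the normalized intensity along
# the line and the desired value 1.0; `line_pixels` is a hypothetical list
# of (row, col) positions sampled along the line.
import numpy as np

def mad_on_line(intensity, line_pixels, window=2):
    I = intensity / intensity.max()                 # normalize to [0, 1]
    values = []
    for r, c in line_pixels:
        # Average a small patch around the j-th line position to absorb
        # the intensity spread on the reconstructed plane.
        patch = I[r - window:r + window + 1, c - window:c + window + 1]
        values.append(patch.mean())
    return np.mean(np.abs(np.array(values) - 1.0))  # Eq. (15)
```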

Figure 9 shows the MAD results for each model and resolution. The MAD of the proposed method is smaller than that of the conventional method under all conditions, indicating that the intensity of the line is close to the ideal; thus, the proposed method is superior to the other methods in terms of image quality.

Fig. 9. MAD results of the different methods: (a) multiple layers, 4K resolution; (b) multiple layers, 8K resolution; and (c) single layer, 4K and 8K resolutions.

Overall, the proposed method succeeded in qualitatively and quantitatively improving the image quality and is therefore applicable even under complex conditions with general objects, confirming its versatility.

4.2.2 Computational speed

Tables 1 and 2 compare the computational times of the methods. Each table includes both the kernel execution time on the GPU and the data-transfer time to and from the GPU. The proposed method is faster than all reference methods except the conventional CG-line method for multiple layers: it is approximately 37–75 times faster than the point-based method and approximately 1.5–1.6 times faster than the ASM. Moreover, in the case of a single layer, the ASM is faster than the proposed method; therefore, the proposed method is most effective for multilayered objects. Compared with the conventional CG-line method, the proposed method was approximately 1.2 times slower. The delay is attributed to the computational burden of imposing the phase errors and calculating the curvature radius of the object; however, this additional cost is small.

Table 1. Computational time for a single layer

Table 2. Computational time for multiple layers

In summary, the proposed method is faster than the ASM and the point-based method and incurs no significant delay compared with the conventional CG-line method.

5. Conclusion

This paper presented a method for improving the intensity imbalance on lines reconstructed using the CG-line method, which can rapidly generate kinoform CGHs of 3D wireframe objects. The proposed method successfully improves the image quality by adding phase errors to the hologram without significantly reducing the computational speed compared with the conventional method. In addition, the proposed method is superior to the ASM in terms of computational speed as the number of layers increases. Thus, the proposed method improves the expressive power of multilayered models composed of line-art objects.

Nevertheless, the proposed method still has an issue to be resolved in future research, namely further improvement of the image quality. Overcorrection of the intensity occurs depending on the object being drawn: for complex objects, the intensity at very small curvature radii was lower than the average intensity. The proposed method compensates for the intensity imbalance based on the composed area of the wavefront; when the curvature varies continuously, this area varies more than when it is constant, which may result in overcorrection. This problem could be solved by adapting the error value with an image-generation method that accounts for the varying area.

If this issue is resolved, further improvements in image quality can be expected, and 3D images of higher quality could be generated at higher speeds.

Funding

Tokyo Metropolitan University (TMU local 5G research support); Japan Society for the Promotion of Science (19H01097, 22H03616).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

2. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, Y. Endo, T. Kakue, T. Shimobaba, and T. Ito, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018). [CrossRef]  

3. Y. Yamamoto, T. Shimobaba, and T. Ito, “HORN-9: Special-purpose computer for electroholography with the Hilbert transform,” Opt. Express 30(21), 38115–38127 (2022). [CrossRef]  

4. H. Niwase, N. Takada, H. Araki, H. Nakayama, A. Sugiyama, T. Kakue, T. Shimobaba, and T. Ito, “Real-time spatiotemporal division multiplexing electroholography with a single graphics processing unit utilizing movie features,” Opt. Express 22(23), 28052–28057 (2014). [CrossRef]  

5. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51(30), 7303–7307 (2012). [CrossRef]  

6. Y.-H. Seo, Y.-H. Lee, and D.-W. Kim, “ASIC chipset design to generate block-based complex holographic video,” Appl. Opt. 56(9), D52–D59 (2017). [CrossRef]  

7. T. Ichikawa, T. Yoneyama, and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express 21(26), 32019–32031 (2013). [CrossRef]  

8. P. W. M. Tsang, T.-C. Poon, and Y. M. Wu, “Review of fast methods for point-based computer-generated holography [Invited],” Photonics Res. 6(9), 837–846 (2018). [CrossRef]  

9. M. H. Eybposh, N. W. Caira, M. Atisa, P. Chakravarthula, and N. C. Pégard, “DeepCGH: 3D computer-generated holography using deep learning,” Opt. Express 28(18), 26636–26650 (2020). [CrossRef]  

10. M. Bayraktar and M. Özcan, “Method to calculate the far field of three-dimensional objects for computer-generated holography,” Appl. Opt. 49(24), 4647–4654 (2010). [CrossRef]  

11. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram of line-drawn objects without FFT,” Opt. Express 28(11), 15907–15924 (2020). [CrossRef]  

12. T. Nishitsuji, D. Blinder, T. Kakue, T. Shimobaba, P. Schelkens, and T. Ito, “GPU-accelerated calculation of computer-generated holograms for line-drawn objects,” Opt. Express 29(9), 12849–12866 (2021). [CrossRef]  

13. T. Nishitsuji, T. Kakue, D. Blinder, T. Shimobaba, and T. Ito, “An interactive holographic projection system that uses a hand-drawn interface with a consumer CPU,” Sci. Rep. 11(1), 147 (2021). [CrossRef]  

14. D. Blinder, T. Nishitsuji, T. Kakue, T. Shimobaba, T. Ito, and P. Schelkens, “Analytic computation of line-drawn objects in computer generated holography,” Opt. Express 28(21), 31226–31240 (2020). [CrossRef]  

15. D. Blinder, T. Nishitsuji, and P. Schelkens, “Real-Time Computation of 3D Wireframes in Computer-Generated Holography,” IEEE Trans. on Image Process. 30, 9418–9428 (2021). [CrossRef]  

16. T. Nishitsuji, N. Shiina, D. Blinder, T. Shimobaba, T. Kakue, P. Schelkens, T. Ito, and T. Asaka, “Variable-intensity line 3D images drawn using kinoform-type electroholography superimposed with phase error,” Opt. Express 30(15), 27884–27902 (2022). [CrossRef]  

17. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]  

18. Z. Yang, Q. Fan, Y. Zhang, J. Liu, and J. Zhou, “A new method for producing computer generated holograms,” J. Opt. 14(9), 095702 (2012). [CrossRef]  

19. T. Nishitsuji, T. Shimobaba, T. Kakue, N. Masuda, and T. Ito, “Fast calculation of computer-generated hologram using the circular symmetry of zone plates,” Opt. Express 20(25), 27496–27502 (2012). [CrossRef]  

20. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]  

21. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

22. D. Blinder, T. Nishitsuji, and P. Schelkens, “Three-dimensional spline-based computer-generated holography,” Opt. Express 31(2), 3072–3082 (2023). [CrossRef]  

23. A. Hayashi, T. Nishitsuji, and T. Asaka, “Thickening the width of holographic three-dimensional line images using an error diffusion technique,” in Proceedings of Information Photonics 2022, (Yokohama, Kanagawa, Japan, 2022), Information Photonics 2022 (IP2022), IP06-04.

24. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

25. T.-C. Poon and P. W. M. Tsang, “Novel method for converting digital Fresnel hologram to phase-only hologram based on bidirectional error diffusion,” Opt. Express 21(20), 23680–23686 (2013). [CrossRef]  

26. K. Liu, Z. He, and L. Cao, “Pattern-adaptive error diffusion algorithm for improved phase-only hologram generation,” Chin. Opt. Lett. 19(5), 050501 (2021). [CrossRef]  

27. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, “The kinoform: A new wavefront reconstruction device,” IBM J. Res. Dev. 13(2), 150–155 (1969). [CrossRef]  
