
Image quality improvement of holographic 3-D images based on a wavefront recording plane method with a limiting diffraction region


Abstract

This study aims to improve the image quality of holographic three-dimensional (3-D) images based on the wavefront recording plane (WRP) method. In this method, we place a WRP close to the 3-D objects to reduce the propagation distance of light from the objects to the WRP. The conventional WRP method has been implemented only under conditions that do not cause aliasing noise. This study proposes a WRP method that limits the diffraction region from the WRP to the hologram so that the WRP method can be performed under any condition. As a result, we succeeded in improving the image quality of the 3-D images based on the WRP method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Among various three-dimensional (3-D) display technologies, electro-holography [1–4] has attracted attention in recent years [5,6]. Electro-holography can reconstruct natural and dynamic 3-D video images by displaying computer-generated holograms (CGHs) on a spatial light modulator. A CGH is calculated through a numerical simulation on a computer and records the 3-D information of 3-D objects as the wavefront pattern of light. Previous studies reported many CGH calculation methods, including point cloud- [7–9], layer- [10,11], ray sampling plane- [12–14], and polygon-based CGHs [15–17]. The computational complexity of a CGH calculation is very large; hence, the calculation must be accelerated to put holographic 3-D displays into practical use.

The wavefront recording plane (WRP) method [18–27] has been reported as one of the candidates for accelerating the point cloud-based CGH calculation. The WRP method comprises two steps: first, we perform a propagation calculation from the 3-D objects to the WRP, and second, we perform a diffraction calculation from the WRP to the CGH. In the first step, we reduce the calculation region of light by placing the WRP near the 3-D objects, which reduces the computational complexity of the CGH calculation. In the conventional WRP method [18], however, the diffraction region in the second step was not limited as it was in the first step. As a result, the light spread behavior differed between the first and second steps, and the light behavior of the conventional point cloud-based method without the WRP could not be reproduced. The conventional WRP method could therefore be performed only under conditions that do not cause aliasing noise. These conditions depend on various factors, such as the CGH pixel pitch, the CGH resolution, and the distance from the 3-D objects to the CGH. For the practical use of electro-holographic displays based on the WRP method, an implementation of the WRP method that does not cause aliasing noise under any condition is required.

To overcome this problem, we propose a WRP method that limits the diffraction region in the second-step diffraction calculation so that the WRP method can be performed under any condition. In the proposed WRP method, we perform the second-step diffraction calculation within the region that does not cause aliasing noise, thereby unifying the light spread behavior between the first and second steps. In this study, we evaluate the image quality of the full-color reconstructed 3-D images [28,29] of the proposed and conventional WRP methods through a comparison with the conventional method without the WRP method, for which we calculate both point cloud- and layer-based CGHs. In addition, we change the region shapes (i.e., square and circular regions) in the first and second steps and evaluate them in terms of image quality.

2. Method

2.1 Full-color CGH calculation

In electro-holography, the color and the depth information of 3-D objects are recorded as CGHs by calculating the propagation and the interference of light on a computer. We created point cloud- and layer-based CGHs from 3-D objects represented by RGB-D images. Figure 1 shows the point cloud- and layer-based CGH calculation methods.

Fig. 1. CGH calculation methods: (a) point cloud- and (b) layer-based CGHs.

In the point cloud-based CGH calculation, as shown in Fig. 1(a), we considered 3-D objects as the point cloud and calculated the CGH by superposing the complex amplitudes of the light waves from the point cloud. The complex amplitude formed on the CGH, ${u_h}({{x_h},{y_h}} )$, can be expressed as follows:

$${u_h}({{x_h},{y_h}} )= \sum\limits_{j = 1}^{{N_o}} {\frac{{{A_j}}}{{{r_{hj}}}}\exp \left( {i\frac{{2\pi }}{\lambda }{r_{hj}}} \right)} ,$$
where $({{x_h},{y_h}} )$ denotes the coordinates on the CGH; $({{x_j},{y_j},{z_j}} )$ denotes the coordinates of the j-th point light source of the 3-D objects; ${r_{hj}} = \sqrt {{{({{x_j} - {x_h}} )}^2} + {{({{y_j} - {y_h}} )}^2} + {z_j}^2} $ denotes the distance from the j-th point light source to the coordinates on the CGH; ${A_j}$ denotes the amplitude value of the j-th point light source; ${N_o}$ denotes the number of object points; i denotes the imaginary unit; and $\lambda $ denotes the light wave wavelength. To create a full-color point cloud-based CGH, the complex amplitudes for each color can be calculated as follows:
$$\left[ {\begin{array}{c} {u_h^R({{x_h},{y_h}} )}\\ {u_h^G({{x_h},{y_h}} )}\\ {u_h^B({{x_h},{y_h}} )} \end{array}} \right] = \sum\limits_{j = 1}^{{N_o}} {\left[ {\begin{array}{c} {\frac{{{R_j}}}{{{r_{hj}}}}\exp \left( {i\frac{{2\pi }}{{{\lambda_R}}}{r_{hj}}} \right)}\\ {\frac{{{G_j}}}{{{r_{hj}}}}\exp \left( {i\frac{{2\pi }}{{{\lambda_G}}}{r_{hj}}} \right)}\\ {\frac{{{B_j}}}{{{r_{hj}}}}\exp \left( {i\frac{{2\pi }}{{{\lambda_B}}}{r_{hj}}} \right)} \end{array}} \right]} ,$$
where ${R_j}$, ${G_j}$, and ${B_j}$ denote the j-th red, green, and blue amplitude values, respectively, and ${\lambda _R}$, ${\lambda _G}$, and ${\lambda _B}$ denote the red, green, and blue light wave wavelengths, respectively. In Eq. (2), the number of red, green, and blue object points is equal to ${N_o}$.
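For illustration, the following is a minimal Python sketch of Eqs. (1) and (2); the sample point cloud and amplitude values are hypothetical, while the pixel pitch and wavelengths follow the conditions later given in Section 3.

```python
import numpy as np

def point_cloud_cgh(points, amps, wavelength, res=1024, pitch=8e-6):
    """Superpose spherical waves from each point source on the CGH plane (Eq. (1))."""
    n = (np.arange(res) - res / 2) * pitch
    x_h, y_h = np.meshgrid(n, n)                        # CGH plane coordinates [m]
    u_h = np.zeros((res, res), dtype=np.complex128)
    for (x_j, y_j, z_j), a_j in zip(points, amps):
        r_hj = np.sqrt((x_j - x_h) ** 2 + (y_j - y_h) ** 2 + z_j ** 2)
        u_h += (a_j / r_hj) * np.exp(1j * 2 * np.pi / wavelength * r_hj)
    return u_h

# Eq. (2): a full-color CGH repeats the superposition once per wavelength.
points = [(0.0, 0.0, 0.503), (1.0e-4, 5.0e-5, 0.510)]   # hypothetical object points [m]
colors = [(0.8, 0.2, 0.1), (0.3, 0.9, 0.4)]             # hypothetical (R, G, B) amplitudes
u_R = point_cloud_cgh(points, [c[0] for c in colors], 650e-9)
u_G = point_cloud_cgh(points, [c[1] for c in colors], 532e-9)
u_B = point_cloud_cgh(points, [c[2] for c in colors], 450e-9)
```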

In the layer-based CGH calculation, as shown in Fig. 1(b), we calculated the CGHs by performing the propagation calculation of light for each depth layer. The complex amplitude formed on the CGH, ${u_h}({{x_h},{y_h}} )$, can be expressed as follows [30]:

$${u_h}({{x_h},{y_h}} )= \sum\limits_{z = 0}^{{N_d}} {\textrm{Pro}{\textrm{p}_z}[{rgb({{x_p},{y_p}} )\exp ({i2\pi n({{x_p},{y_p}} )} ){m_z}({{x_p},{y_p}} )} ]} ,$$
where $({{x_p},{y_p}} )$ denotes the coordinates on the RGB-D image; $\textrm{Pro}{\textrm{p}_z}$ denotes the Fresnel diffraction calculation [31] of the z-th propagation distance; $rgb({{x_p},{y_p}} )$ denotes the amplitude value of the RGB image; $n({{x_p},{y_p}} )$ denotes the pseudo-random number ranging from 0.0 to 1.0; and ${N_d}$ denotes the number of depth layers. The function ${m_z}({{x_p},{y_p}} )$ can be expressed as follows:
$${m_z}({{x_p},{y_p}} )= \left\{ {\begin{array}{ll} {1,}&{\left( {\begin{array}{cc} {\textrm{if}}&{dep({{x_p},{y_p}} )= z} \end{array}} \right)}\\ {0,}&{\left( {\begin{array}{c} {\textrm{otherwise}} \end{array}} \right)} \end{array}} \right.,$$
where $dep({{x_p},{y_p}} )$ denotes the depth image. To create a full-color layer-based CGH, the complex amplitudes for each color can be calculated as follows:
$$\left[ {\begin{array}{c} {u_h^R({{x_h},{y_h}} )}\\ {u_h^G({{x_h},{y_h}} )}\\ {u_h^B({{x_h},{y_h}} )} \end{array}} \right] = \sum\limits_{z = 0}^{{N_d}} {\textrm{Pro}{\textrm{p}_z}\left[ {\begin{array}{c} {R({{x_p},{y_p}} )\exp ({i2\pi n({{x_p},{y_p}} )} ){m_z}({{x_p},{y_p}} )}\\ {G({{x_p},{y_p}} )\exp ({i2\pi n({{x_p},{y_p}} )} ){m_z}({{x_p},{y_p}} )}\\ {B({{x_p},{y_p}} )\exp ({i2\pi n({{x_p},{y_p}} )} ){m_z}({{x_p},{y_p}} )} \end{array}} \right]} ,$$
where $R({{x_p},{y_p}} )$, $G({{x_p},{y_p}} )$, and $B({{x_p},{y_p}} )$ denote the red, green, and blue amplitude values of the RGB image, respectively.
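A minimal sketch of Eqs. (3)–(5) is shown below, assuming `rgb_channel` is a normalized square float image and `dep` an integer depth image quantized into the layers; $\textrm{Prop}_z$ is implemented here as convolution-form Fresnel diffraction via FFT (cf. Eq. (10)) without the band limiting introduced in Section 2.3, and the layer-to-distance mapping `z0 + z * dz` is an assumption.

```python
import numpy as np

def fresnel_prop(u, z, wavelength, pitch):
    """Convolution-form Fresnel diffraction via FFT (cf. Eq. (10)), no band limiting."""
    res = u.shape[0]
    n = (np.arange(res) - res / 2) * pitch
    x, y = np.meshgrid(n, n)
    h = np.exp(1j * np.pi / (wavelength * z) * (x ** 2 + y ** 2))  # impulse response
    conv = np.fft.fftshift(np.fft.ifft2(
        np.fft.fft2(np.fft.ifftshift(u)) * np.fft.fft2(np.fft.ifftshift(h))))
    return np.exp(1j * 2 * np.pi / wavelength * z) / (1j * wavelength * z) * conv * pitch ** 2

def layer_cgh(rgb_channel, dep, n_d, z0, dz, wavelength, pitch=8e-6, seed=0):
    rng = np.random.default_rng(seed)
    diffuser = np.exp(1j * 2 * np.pi * rng.random(rgb_channel.shape))  # exp(i 2 pi n)
    u_h = np.zeros(rgb_channel.shape, dtype=np.complex128)
    for z in range(n_d + 1):                        # Eq. (3): sum over depth layers
        m_z = (dep == z).astype(float)              # Eq. (4): layer mask
        if m_z.any():
            u_h += fresnel_prop(rgb_channel * diffuser * m_z,
                                z0 + z * dz, wavelength, pitch)
    return u_h
```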

2.2 WRP method

The WRP method is a CGH acceleration algorithm for a point cloud-based CGH, in which the calculation region is reduced by placing the WRP close to the 3-D objects. Figure 2 shows the schematic of the WRP method. The WRP method proceeds in two steps. First, we recorded the complex amplitude of the 3-D objects on the WRP placed close to the 3-D objects. The complex amplitude formed on the WRP, ${u_w}({{x_w},{y_w}} )$, can be expressed as follows:

$${u_w}({{x_w},{y_w}} )= \sum\limits_{j = 1}^{{N_o}} {\frac{{{A_j}}}{{{r_{wj}}}}\exp \left( {i\frac{{2\pi }}{\lambda }{r_{wj}}} \right)} ,$$
where $({{x_w},{y_w}} )$ denotes the coordinates on the WRP; ${d_j} = {z_j} - {z_w}$ denotes the perpendicular distance from the j-th point light source to the WRP; and ${r_{wj}} = \sqrt {{{({{x_j} - {x_w}} )}^2} + {{({{y_j} - {y_w}} )}^2} + {d_j}^2} $ denotes the distance from the j-th point light source to the WRP.

Fig. 2. WRP method: (a) outline of the WRP method and (b) calculation region. ${W_x}$ and ${W_y}$ denote the WRP resolution.

Second, we calculated the complex amplitude on the CGH by performing a diffraction calculation from the WRP to the CGH. The WRP had the amplitude and phase information of the 3-D objects; hence, the diffraction calculation was equivalent to directly calculating the complex amplitude on the CGH from the 3-D objects. The complex amplitude on the CGH, ${u_h}({{x_h},{y_h}} )$, can be expressed as follows:

$${u_h}({{x_h},{y_h}} )= \textrm{Pro}{\textrm{p}_{{z_w}}}[{{u_w}({{x_w},{y_w}} )} ].$$
We now explain the calculation region on the WRP. The radius of the circular region for the j-th point light source on the WRP, ${C_j}$, can be expressed as follows:
$${C_j} = |{{d_j}} |\tan (\theta ),$$
where $\theta = {\sin ^{ - 1}}({\lambda /({2p} )} )$ denotes the maximum diffraction angle for reconstructing the 3-D images and p denotes the pixel pitch of the WRP and the CGH. For each point light source, we must judge whether each WRP sample lies inside the circular region. To simplify this judgment, we also set a square region inscribed in the circular region, as shown in Fig. 2(b). The side length of the square region, ${S_j}$, can be expressed as follows:
$${S_j} = \frac{2}{{\sqrt 2 }}{C_j}.$$
We implemented both circular and square regions and compared the image qualities of the reconstructed 3-D images.
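The first step with these limited regions might look as follows in Python; this is a sketch under the paper's definitions (Eqs. (6), (8), and (9)), with a hypothetical point list whose third coordinate is the perpendicular distance ${d_j}$ to the WRP.

```python
import numpy as np

def wrp_record(points, amps, wavelength, res=1024, pitch=8e-6, region="circle"):
    theta = np.arcsin(wavelength / (2 * pitch))        # maximum diffraction angle
    n = (np.arange(res) - res / 2) * pitch
    x_w, y_w = np.meshgrid(n, n)                       # WRP coordinates [m]
    u_w = np.zeros((res, res), dtype=np.complex128)
    for (x_j, y_j, d_j), a_j in zip(points, amps):     # d_j: distance to the WRP
        c_j = abs(d_j) * np.tan(theta)                 # Eq. (8): circular-region radius
        if region == "circle":
            mask = (x_j - x_w) ** 2 + (y_j - y_w) ** 2 <= c_j ** 2
        else:                                          # Eq. (9): inscribed square
            s_j = 2 / np.sqrt(2) * c_j
            mask = (np.abs(x_j - x_w) <= s_j / 2) & (np.abs(y_j - y_w) <= s_j / 2)
        r_wj = np.sqrt((x_j - x_w) ** 2 + (y_j - y_w) ** 2 + d_j ** 2)
        u_w[mask] += (a_j / r_wj[mask]) * np.exp(1j * 2 * np.pi / wavelength * r_wj[mask])
    return u_w

# E.g., for lambda = 532 nm, p = 8 um, d_j = 25 mm: theta is about 1.9 degrees and
# C_j is about 0.83 mm, i.e., a radius of roughly 104 px instead of the full WRP.
```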

2.3 Proposed method: diffraction calculation with a limiting diffraction region

We must prevent aliasing in the diffraction calculation of the WRP method. The Fresnel diffraction of the convolution form used in Eq. (7) can be expressed as follows:

$${u_h}({{x_h},{y_h}} )= \frac{{\exp \left( {i{\textstyle{{2\pi } \over \lambda }}{z_w}} \right)}}{{i\lambda {z_w}}}{{{\mathcal F}}^{ - 1}}[{{{\mathcal F}}[{{u_w}({{x_w},{y_w}} )} ]\cdot {{\mathcal F}}[{h({{x_w},{y_w},{z_w}} )} ]} ],$$
where ${{\mathcal F}}[{\cdot} ]$ and ${{{\mathcal F}}^{ - 1}}[{\cdot} ]$ denote the Fourier transform and the inverse Fourier transform, respectively, and $h({{x_w},{y_w},{z_w}} )= \exp ({i\phi ({{x_w},{y_w},{z_w}} )} )= \exp \left( {i{\textstyle{\pi \over {\lambda {z_w}}}}({{x_w}^2 + {y_w}^2} )} \right)$ denotes the impulse response. The impulse response $h({{x_w},{y_w},{z_w}} )$ is a spatially varying signal on $({{x_w},{y_w}} )$ and may cause aliasing. To avoid this, the signal must be sampled at a rate of at least twice its maximum local frequency (the Nyquist criterion). Therefore, the conditions that do not cause aliasing can be expressed as follows:
$$\frac{1}{p} \ge 2|{{f_x}} |= 2\left|{\frac{1}{{2\pi }}\frac{{\partial \phi ({{x_w},{y_w},{z_w}} )}}{{\partial {x_w}}}} \right|= \left|{\frac{{2{x_w}}}{{\lambda {z_w}}}} \right|,$$
$$\frac{1}{p} \ge 2|{{f_y}} |= 2\left|{\frac{1}{{2\pi }}\frac{{\partial \phi ({{x_w},{y_w},{z_w}} )}}{{\partial {y_w}}}} \right|= \left|{\frac{{2{y_w}}}{{\lambda {z_w}}}} \right|,$$
where ${f_x}$ and ${f_y}$ denote the horizontal and vertical spatial frequencies of $h({{x_w},{y_w},{z_w}} )$, respectively. In Eqs. (11) and (12), the region where the aliasing is not produced can be expressed as follows:
$$|{{x_w}} |\le \frac{{\lambda {z_w}}}{{2p}},$$
$$|{{y_w}} |\le \frac{{\lambda {z_w}}}{{2p}}.$$
To avoid aliasing in the first step of the WRP method, we must perform the diffraction calculation within the region that satisfies Eqs. (13) and (14). Note, however, that in the second step, we need not limit the diffraction region of the impulse response if the aliasing-free region is larger than the WRP size. This condition can be expressed as follows:
$$\left|{\frac{{\lambda {z_w}}}{{2p}}} \right|\ge \frac{{{W_x}}}{2},$$
$$\left|{\frac{{\lambda {z_w}}}{{2p}}} \right|\ge \frac{{{W_y}}}{2},$$
where ${W_x}$ and ${W_y}$ denote the WRP resolution. The conventional WRP method [18] was implemented only under the conditions that satisfied Eqs. (15) and (16).
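As a quick numerical check, Eqs. (15) and (16) can be evaluated under the conditions later given in Section 3 (p = 8 µm, ${z_w} = 500\,\textrm{mm}$, ${W_x} = {W_y} = 4096\,\textrm{px}$):

```python
# Compare the aliasing-free half-width lambda*z_w/(2p) against half the WRP width.
p, z_w, res = 8e-6, 0.5, 4096                       # pixel pitch [m], distance [m], px
half_wrp = res * p / 2                              # physical half-width: 16.38 mm
for name, lam in [("red", 650e-9), ("green", 532e-9), ("blue", 450e-9)]:
    half_alias_free = lam * z_w / (2 * p)
    status = "satisfied" if half_alias_free >= half_wrp else "violated"
    print(f"{name}: {half_alias_free * 1e3:.2f} mm vs {half_wrp * 1e3:.2f} mm -> {status}")
# red: 20.31 mm and green: 16.63 mm satisfy Eq. (15); blue: 14.06 mm violates it,
# which is why only the blue channel aliases at 4096 x 4096 px in Section 3.
```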

In contrast to the conventional WRP method [18], herein we performed the diffraction calculation from the WRP to the CGH with a limiting diffraction region that, like the first step, satisfies Eqs. (13) and (14). The shape of the limiting diffraction region of $h({{x_w},{y_w},{z_w}} )$ was circular or square because we used the circular and square regions in the first step, as in Fig. 2. Figure 3 shows the schematic of the WRP method with a limiting diffraction region. By unifying the light spread behavior between the first and second steps, we prevented the calculation of the complex amplitudes in the unnecessary region (shown in green, Fig. 3). If the complex amplitudes are calculated in the unnecessary region, aliasing is produced because Eqs. (13) and (14) are not satisfied, and the image quality of the reconstructed 3-D images worsens.

Fig. 3. WRP method with a limiting diffraction region.

The radius of the circular region for the diffraction region in the CGH, ${C_{{z_w}}}$, can be expressed as follows:

$${C_{{z_w}}} = |{{z_w}} |\tan (\theta ).$$
The side length of the square region for a diffraction region, ${S_{{z_w}}}$, can be expressed as follows:
$${S_{{z_w}}} = \frac{2}{{\sqrt 2 }}{C_{{z_w}}}.$$
We limited the impulse response $h({{x_w},{y_w},{z_w}} )$ to a circular region of radius ${C_{{z_w}}}$ when the circular region was used in the first step. In contrast, we limited it to a square region of side length ${S_{{z_w}}}$ when the square region was used in the first step.
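A minimal sketch of the proposed second step follows: the impulse response of Eq. (10) is zeroed outside the circular region of Eq. (17) (or the inscribed square of Eq. (18)) before the FFT-based convolution. The discretization and scaling conventions are assumptions.

```python
import numpy as np

def limited_fresnel_prop(u_w, z_w, wavelength, pitch=8e-6, region="circle"):
    """Eq. (10) with the impulse response limited per Eq. (17) or Eq. (18)."""
    res = u_w.shape[0]
    n = (np.arange(res) - res / 2) * pitch
    x, y = np.meshgrid(n, n)
    h = np.exp(1j * np.pi / (wavelength * z_w) * (x ** 2 + y ** 2))  # impulse response
    theta = np.arcsin(wavelength / (2 * pitch))
    c = abs(z_w) * np.tan(theta)                   # Eq. (17): circular-region radius
    if region == "circle":
        h[x ** 2 + y ** 2 > c ** 2] = 0            # zero outside the circle
    else:
        s = 2 / np.sqrt(2) * c                     # Eq. (18): inscribed-square side
        h[(np.abs(x) > s / 2) | (np.abs(y) > s / 2)] = 0
    conv = np.fft.fftshift(np.fft.ifft2(
        np.fft.fft2(np.fft.ifftshift(u_w)) * np.fft.fft2(np.fft.ifftshift(h))))
    return np.exp(1j * 2 * np.pi / wavelength * z_w) / (1j * wavelength * z_w) * conv * pitch ** 2
```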

Figure 4 shows the complex amplitude ${u_w}({{x_w},{y_w}} )$ calculated from a single object point, the impulse responses $h({{x_w},{y_w},{z_w}} )$ with unlimiting and limiting regions, and the corresponding complex amplitudes ${u_h}({{x_h},{y_h}} )$ on the CGH plane. Figure 5 shows the flowcharts of the WRP methods.

Fig. 4. Complex amplitudes ${u_w}({{x_w},{y_w}} )$ and ${u_h}({{x_h},{y_h}} )$ and impulse response $h({{x_w},{y_w},{z_w}} )$ of the WRP method: (a) conventional WRP method, (b) proposed WRP method (circular region), and (c) proposed WRP method (square region).

Fig. 5. Flowcharts of the WRP methods: (a) conventional WRP method, (b) proposed WRP method (circular region), and (c) proposed WRP method (square region).

3. Results

We evaluated the image quality of full-color reconstructed 3-D images by comparing the proposed WRP method with the conventional WRP method. We used Microsoft Windows 10 Professional as the operating system with a central processing unit (Intel Core i7-9700KF, 3.6 GHz) and a graphics processing unit (NVIDIA GeForce RTX 2070). We used Microsoft Visual Studio Community 2019, the compute unified device architecture (CUDA) [32], and cuFFT [33] for the GPU implementation of Eq. (10). We set the CGH resolution ${H_x} \times {H_y}$ equal to the WRP resolution ${W_x} \times {W_y}$ and used resolutions of $1024 \times 1024\,\textrm{px}$, $2048 \times 2048\,\textrm{px}$, and $4096 \times 4096\,\textrm{px}$. We calculated phase-only CGHs from the complex amplitudes ${u_h}({{x_h},{y_h}} )$. In addition, we enhanced the brightness of the reconstructed images by clipping the amplitude within ${\pm} 4\sigma $, where $\sigma $ is the standard deviation [30]. Figure 6 shows the RGB-D images [34] used as the 3-D objects. These RGB-D images were resized to the CGH resolution.
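The post-processing described above might be sketched as follows; the 8-bit phase quantization and the normalization details are assumptions, while the $\pm 4\sigma$ clipping follows [30].

```python
import numpy as np

def phase_only_cgh(u_h, levels=256):
    """Keep only arg(u_h), quantized to the SLM's phase levels (8-bit assumed)."""
    phase = np.angle(u_h)                                  # in [-pi, pi]
    return np.round((phase + np.pi) / (2 * np.pi) * (levels - 1)).astype(np.uint8)

def enhance_brightness(amplitude, k=4.0):
    """Clip the reconstructed amplitude within mean +/- k*sigma, then rescale to [0, 1]."""
    mu, sigma = amplitude.mean(), amplitude.std()
    clipped = np.clip(amplitude, mu - k * sigma, mu + k * sigma)
    return (clipped - clipped.min()) / (clipped.max() - clipped.min() + 1e-12)
```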

Fig. 6. RGB-D images: (a) “Honey” and (b) “Monasroom.”

The calculation conditions are as follows: red, green, and blue light wavelengths are ${\lambda _R} = 650\,\textrm{nm}$, ${\lambda _G} = 532\,\textrm{nm}$, and ${\lambda _B} = 450\,\textrm{nm}$, respectively; pixel pitch, $p = 8\,\mu \textrm{m}$; distance between the 3-D objects and the WRP, $3\,\textrm{mm} \le {d_j} \le 25\,\textrm{mm}$; and distance between the WRP and the CGH, ${z_w} = 500\,\textrm{mm}$. We changed the focus distance ${d_r}$ to reconstruct and compare the 3-D images at different positions.

Table 1 lists the computational time for each CGH calculation method, measured with the RGB-D image, WRP, and CGH resolutions set to $4096 \times 4096\,\textrm{px}$ and with “Honey” shown in Fig. 6(a) as the 3-D object. We calculated the speed-up ratio relative to the conventional method (point cloud-based CGH). From Table 1, we confirmed that the computational time of the second step differed between the conventional and proposed WRP methods because the calculation region of the impulse response differed between them. In addition, the WRP method with the limiting square region had a shorter computational time than that with the limiting circular region.

Table 1. Computational time for each CGH calculation method ($4096 \times 4096\,\textrm{px}$).

Figures 7 and 8 show the reconstructed 3-D images of “Honey” focused on the distances ${d_r} = 505$ and $520\,\textrm{mm}$. We measured the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) [35] to evaluate the image quality, comparing against the reconstructed 3-D images obtained using the conventional point cloud-based CGH [4] without the WRP method. The reconstructed images in Figs. 7 and 8 show the expected difference in focus. We also confirmed that the reconstructed 3-D images of the conventional WRP method exhibited aliasing noise in the blue channel because the blue light did not satisfy the conditions of Eqs. (15) and (16). In contrast, the reconstructed 3-D images of the proposed WRP method exhibited no blue-channel noise, and the proposed WRP method achieved a higher image quality than the conventional WRP method.
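A minimal sketch of this evaluation is given below, assuming `ref` is the reconstruction from the conventional point cloud-based CGH [4] and `test` a reconstruction under a WRP method, both float RGB arrays in [0, 1] (scikit-image ≥ 0.19 is assumed for the `channel_axis` parameter).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(ref, test):
    """PSNR and SSIM [35] of a WRP reconstruction against the non-WRP reference."""
    psnr = peak_signal_noise_ratio(ref, test, data_range=1.0)
    ssim = structural_similarity(ref, test, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```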

Fig. 7. Reconstructed 3-D images of “Honey” focused on the distance of ${d_r} = 505\,\textrm{mm}$: (a) $1024 \times 1024\,\textrm{px}$, (b) $2048 \times 2048\,\textrm{px}$, and (c) $4096 \times 4096\,\textrm{px}$.

Fig. 8. Reconstructed 3-D images of “Honey” focused on the distance of ${d_r} = 520\,\textrm{mm}$: (a) $1024 \times 1024\,\textrm{px}$, (b) $2048 \times 2048\,\textrm{px}$, and (c) $4096 \times 4096\,\textrm{px}$.

We evaluated the image quality by comparing the reconstructed 3-D images of the square limiting region with those of the circular limiting region. By measuring the PSNR and the SSIM, the circular limiting region was found to yield a higher image quality than the square limiting region: because the square region is inscribed in the circular region, it covers only $2/\pi \approx 64\%$ of the circular region's area, and less 3-D information is recorded in the CGH.

Figures 9 and 10 show the reconstructed 3-D images of “Monasroom” focused on the distances of ${d_r} = 505$ and $520\,\textrm{mm}$. The results followed the same trend as in Figs. 7 and 8: the conventional WRP method exhibited blue-channel noise, whereas the proposed WRP method did not and achieved a higher image quality.

Fig. 9. Reconstructed 3-D images of “Monasroom” focused on the distance of ${d_r} = 505\,\textrm{mm}$: (a) $1024 \times 1024\,\textrm{px}$, (b) $2048 \times 2048\,\textrm{px}$, and (c) $4096 \times 4096\,\textrm{px}$.

Fig. 10. Reconstructed 3-D images of “Monasroom” focused on the distance of ${d_r} = 520\,\textrm{mm}$: (a) $1024 \times 1024\,\textrm{px}$, (b) $2048 \times 2048\,\textrm{px}$, and (c) $4096 \times 4096\,\textrm{px}$.

Next, we investigated the combination of the calculation regions in the first and second steps of the WRP method. Table 2 lists the symbols of the combinations of the calculation regions. The calculation conditions are as follows: WRP and CGH resolutions are ${W_x} \times {W_y} = 4096 \times 4096\,\textrm{px}$ and ${H_x} \times {H_y} = 4096 \times 4096\,\textrm{px}$, respectively, and focus distance, ${d_r} = 505\,\textrm{mm}$.

Table 2. Symbols of the combinations of the calculation region in the WRP method.

Figures 11 and 12 show the reconstructed 3-D images when changing the combinations. Panels (a)–(d) of Figs. 11 and 12 correspond to the symbols presented in Table 2. We measured the PSNR and the SSIM by comparison with the reconstructed 3-D images obtained using the conventional point cloud-based CGH. Figures 11 and 12 show that combination (b) gave the best result. In combinations (c) and (d), the image quality of the reconstructed 3-D images deteriorated because the calculation region was not unified between the first and second steps of the WRP method. In addition, the image quality was almost the same in combinations (a), (c), and (d) because each of them used the square limiting region.

Fig. 11. Reconstructed 3-D images of “Honey” when changing the calculation region of the WRP method: (a–d) symbols presented in Table 2.

Fig. 12. Reconstructed 3-D images of “Monasroom” when changing the calculation region of the WRP method: (a–d) symbols presented in Table 2.

4. Conclusion

This study proposed a WRP method with a limiting diffraction region in the second-step diffraction calculation so that the WRP method can be performed under any condition. By limiting the diffraction region, we unified the spread of light between the first and second steps of the WRP method and reproduced the spread of light of the conventional method without the WRP method. The proposed WRP method did not cause aliasing noise in the reconstructed 3-D images and improved the image quality. We also evaluated the image quality of the reconstructed 3-D images when changing the combination of the square and circular regions in the first and second steps. The best result was obtained using the circular limiting region in both the first and second steps.

Funding

Japan Society for the Promotion of Science (19H04132, 19H01097).

Disclosures

The authors declare no conflicts of interest.

References

1. P. St-Hilaire, S. A. Benton, M. E. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa, and J. S. Underkoffler, “Electronic display system for computational holography,” Proc. SPIE 1212, 174–182 (1990).

2. S.-C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

3. D. Blinder and P. Schelkens, “Accelerated computer generated holography using sparse bases in the STFT domain,” Opt. Express 26(2), 1461–1473 (2018).

4. H. Yanagihara, T. Kakue, Y. Yamamoto, T. Shimobaba, and T. Ito, “Real-time three-dimensional video reconstruction of real scenes with deep depth using electro-holographic display system,” Opt. Express 27(11), 15662–15678 (2019).

5. Y. Yamaguchi and Y. Takaki, “See-through integral imaging display with background occlusion capability,” Appl. Opt. 55(3), A144–A149 (2016).

6. R. Hirayama, D. M. Plasencia, N. Masuda, and S. Subramanian, “A volumetric display for visual, tactile and audio presentation using acoustic trapping,” Nature 575(7782), 320–323 (2019).

7. P. W. M. Tsang and T.-C. Poon, “Review on the state-of-the-art technologies for acquisition and display of digital holograms,” IEEE Trans. Ind. Inf. 12(3), 886–901 (2016).

8. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, Y. Endo, T. Kakue, T. Shimobaba, and T. Ito, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018).

9. H. Amano, Y. Ichihashi, T. Kakue, K. Wakunami, H. Hashimoto, R. Miura, T. Shimobaba, and T. Ito, “Reconstruction of a three-dimensional color-video of a point-cloud object using the projection-type holographic display with a holographic optical element,” Opt. Express 28(4), 5692–5705 (2020).

10. J.-S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015).

11. H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56(13), F138–F143 (2017).

12. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011).

13. H. Sato, T. Kakue, Y. Ichihashi, Y. Endo, K. Wakunami, R. Oi, K. Yamamoto, H. Nakayama, T. Shimobaba, and T. Ito, “Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration,” Sci. Rep. 8(1), 1500 (2018).

14. S. Igarashi, T. Nakamura, K. Matsushima, and M. Yamaguchi, “Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion,” Opt. Express 26(8), 10773–10786 (2018).

15. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009).

16. Y.-M. Ji, H. Yeom, and J.-M. Park, “Efficient texture mapping by adaptive mesh division in mesh-based computer generated hologram,” Opt. Express 24(24), 28154–28169 (2016).

17. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56(13), F37–F44 (2017).

18. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

19. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19(16), 15205–15211 (2011).

20. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015).

21. C. Chang, J. Wu, Y. Qi, C. Yuan, S. Nie, and J. Xia, “Simple calculation of a computer-generated hologram for lensless holographic 3D projection using a nonuniform sampled wavefront recording plane,” Appl. Opt. 55(28), 7988–7996 (2016).

22. N. Hasegawa, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of hologram generation by optimizing the arrangement of wavefront recording planes,” Appl. Opt. 56(1), A97–A103 (2017).

23. D. Arai, T. Shimobaba, T. Nishitsuji, T. Kakue, N. Masuda, and T. Ito, “An accelerated hologram calculation using the wavefront recording plane method and wavelet transform,” Opt. Commun. 393(15), 107–112 (2017).

24. Y.-L. Piao, Y. Zhao, H.-Y. Wu, A. Khuderchuluun, E. Dashdavaa, J.-R. Jeong, and N. Kim, “Image quality enhancement for digital holographic display using multiple wavefront recording planes method,” Proc. SPIE 10944, 1094416 (2019).

25. H. Yanagihara, T. Shimobaba, T. Kakue, and T. Ito, “Comparison of wavefront recording plane-based hologram calculations: ray-tracing method versus look-up table method,” Appl. Opt. 59(8), 2400–2408 (2020).

26. M. S. Islam, Y.-L. Piao, Y. Zhao, K.-C. Kwon, E. Cho, and N. Kim, “Max-depth-range technique for faster full-color hologram generation,” Appl. Opt. 59(10), 3156–3164 (2020).

27. D. Pi, J. Liu, Y. Han, S. Yu, and N. Xiang, “Acceleration of computer-generated hologram using wavefront-recording plane and look-up table in three-dimensional holographic display,” Opt. Express 28(7), 9833–9841 (2020).

28. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473–18482 (2014).

29. S. Yamada, T. Shimobaba, T. Kakue, and T. Ito, “Full-color computer-generated hologram using wavelet transform and color space conversion,” Opt. Express 27(6), 8153–8167 (2019).

30. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013).

31. R. P. Muffoletto, “Numerical techniques for Fresnel diffraction in computational holography,” Ph.D. thesis, Louisiana State University and Agricultural and Mechanical College (2006).

32. “CUDA,” https://developer.nvidia.com/cuda-zone.

33. “cuFFT,” https://developer.nvidia.com/cufft.

34. S. Wanner, S. Meister, and B. Goldluecke, “Datasets and benchmarks for densely sampled 4D light fields,” in Vision, Modeling & Visualization (2013), pp. 225–226.

35. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).


