Optica Publishing Group

Convolutional symmetric compressed look-up-table method for 360° dynamic color 3D holographic display

Open Access

Abstract

In this paper, we propose a convolutional symmetric compressed look-up-table (CSC-LUT) method to accelerate computer-generated hologram (CGH) computation based on Fresnel diffraction theory and the LUT. The proposed method can achieve one-time high-quality fast generation of color holograms by utilizing a dynamic convolution operation, and it is divided into three processes. Firstly, the pre-calculated data of the maximum horizontal modulation factor are compressed into a 1D array by coordinate symmetry. Then, the test object is resampled to satisfy convolutional translation invariance. Finally, the dynamic convolution operation is used to simplify the CGH computation process instead of point-by-point computation. Numerical simulation and optical experimental results show that our proposed method achieves a faster computation speed, higher reconstruction quality and wider applicability than the conventional SC-LUT method. A further optimization for parallel acceleration on the GPU framework achieves real-time (>24 fps) color holographic display corresponding to three perspectives of a 3D scene.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Holographic display can provide all the necessary parallax and depth information for the human visual system, and is regarded as an ultimate three-dimensional (3D) display technology [1-3]. Computer-generated hologram (CGH) is a promising method to realize real-time 3D display by recording holograms of 3D scenes digitally [4-6]. However, two pressing issues still constrain the further development of holographic display. One is that the field of view of reconstructed images is limited by the pixel pitch of commercial spatial light modulators (SLMs) [6,7]. The other is the heavy computational load involved in the real-time generation of CGHs [8,9].

So far, various kinds of fast algorithms have been proposed to accelerate the computation of CGHs, such as point-based [10-13], polygon-based [14-17], layer-based [18-20], graphic processing unit (GPU)-based [21,22] and deep learning-based [23,24] methods. Among these, the point-based method is a simple and flexible calculation method that has been widely used. However, because the spherical wavefront of each point of a 3D object must be calculated and accumulated, the calculation process is very time-consuming. In recent years, many research works have been devoted to reducing the computational complexity [25-32]. The look-up-table (LUT) method precomputes a table containing the fringe patterns of all possible points [25]. When calculating the holograms, it only needs to read out the patterns corresponding to the target points and add them up. Although the calculation speed of holograms is greatly improved by simplifying the online operations, the table data occupy a huge amount of memory. The novel look-up-table (N-LUT) method was proposed to reduce the memory usage of the LUT by storing only the fringe patterns of the center object points on each depth plane of a 3D object [26]. The fringe patterns of the other object points can then be obtained by simply shifting these precalculated ones. Nevertheless, the table data still need gigabytes (GBs) of memory, which also limits the computation speed. The split look-up-table (S-LUT) method was proposed to further decrease the memory capacity of the N-LUT by extracting a pair of horizontal and vertical modulation factors from the diffraction fringe pattern [27], which shortens the hologram calculation time. However, as the number of depth layers increases, the memory usage grows from megabytes (MBs) to GBs.
The compressed look-up-table (C-LUT) [28] and accurate compressed look-up-table (AC-LUT) [29] methods were developed from the S-LUT method to further accelerate CGH computation by extracting the longitudinal terms of the Fresnel diffraction equation from the horizontal and vertical modulation factors, so that the memory usage does not increase with the number of depth layers. However, for all the methods mentioned above, the memory usage and computation time triple in color holographic display.

Recently, the symmetric compressed look-up-table (SC-LUT) method has been proposed to reduce the memory usage and computation time in color holographic display by using translational symmetric compression, wavelength separation and a matrix convolution operation [30]. However, owing to the limitation of the static convolutional kernel, information redundancy occurs during the calculation process, which reduces both the computation speed and the reconstruction quality. Moreover, in order to satisfy the translation invariance required by convolution operations, this method can only be applied when the sampling interval of the object is equal to that of the hologram. These problems prevent high-quality real-time color holographic display.

In this paper, to overcome the limitation of the static convolutional kernel and improve the applicability of the algorithm, the convolutional symmetric compressed look-up-table (CSC-LUT) method is proposed, which can achieve one-time high-quality fast generation of color holograms for different sampling interval ratios. Numerical simulations and optical experiments are performed to verify the feasibility of the proposed method. It is confirmed that our proposed method can achieve real-time (>24 fps) color holographic display corresponding to three perspectives of a 3D scene on the GPU framework.

2. Proposed CSC-LUT method

As shown in Fig. 1, we obtain the intensity map and depth map of a 3D object, which is considered as the test scene for color 3D holographic display. The intensity and depth maps provide the amplitude and coordinate information of each object point, respectively. The operational diagram of the proposed method is composed of three processes: 1) pre-calculation of the maximum horizontal modulation factor, 2) resampling optimization of the test 3D scene for different sampling interval ratios, and 3) calculation of color holograms by utilizing a dynamic convolution operation. Finally, the reconstructed images are synthesized according to the RGB channels to achieve color holographic display of the test 3D scene.

Fig. 1. Operational diagram of the three-step process of the proposed CSC-LUT method.

2.1 Pre-calculation of the maximum horizontal modulation factor

In the point-based method (CRT), a 3D object is regarded as a collection of a large number of self-luminous points. The wavefront distribution on the hologram plane after transmission can be described as:

$$E({{x_h},{y_h}} )= \sum\limits_{j = 1}^N {{A_j}\exp \left[ {ik\sqrt {{{({x_h} - {x_j})}^2} + {{({y_h} - {y_j})}^2} + {{(d - {z_j})}^2}} } \right]}$$
where $({x_h},{y_h})$ is the coordinate on hologram plane, $({x_j},{y_j},{z_j})$ and ${A_j}$ are the coordinate and amplitude of object point j, respectively. N is the number of object points. $k = 2\pi /\lambda $ is wave number, and $\lambda $ is wavelength. d is the distance between the object and hologram plane.
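As a concrete reference point, the point-by-point accumulation of Eq. (1) can be sketched in a few lines of Python. This is a minimal illustration with hypothetical parameter names, not the paper's MATLAB code:

```python
import numpy as np

def cgh_point_by_point(points, amps, wavelength, d, pitch, res):
    """Accumulate the spherical wavefront of every object point (Eq. (1)).
    points: (N, 3) array of (x_j, y_j, z_j); amps: (N,) amplitudes A_j;
    d: object-to-hologram distance; pitch: hologram pixel pitch H_p;
    res: (rows, cols) hologram resolution. All names are illustrative."""
    k = 2 * np.pi / wavelength
    rows, cols = res
    yh = (np.arange(rows) - rows / 2) * pitch
    xh = (np.arange(cols) - cols / 2) * pitch
    XH, YH = np.meshgrid(xh, yh)
    E = np.zeros((rows, cols), dtype=np.complex128)
    for (xj, yj, zj), Aj in zip(points, amps):
        # distance from object point j to every hologram pixel
        r = np.sqrt((XH - xj) ** 2 + (YH - yj) ** 2 + (d - zj) ** 2)
        E += Aj * np.exp(1j * k * r)
    return E
```

The O(N·p·q) cost of this loop over all object points is exactly what the LUT family of methods is designed to avoid.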

In Fresnel region, Eq. (1) can be written as:

$$E({{x_h},{y_h}} )= \sum\limits_{j = 1}^N {{A_j}\exp \{ ik[(d - {z_j}) + \frac{{{{({x_h} - {x_j})}^2} + {{({y_h} - {y_j})}^2}}}{{2(d - {z_j})}}]\} }$$

By splitting the horizontal, vertical and wavelength information, Eq. (2) can be simplified as:

$$E\left( {{x_h},{y_h}} \right) = \sum\limits_{j = 1}^N {{A_j}\exp [jk(d - {z_j})] \cdot {{\{ \exp [\frac{{{{({x_h} - {x_j})}^2}}}{2}]\} }^{^{\frac{{jk}}{{d - {z_j}}}}}} \cdot {{\{ \exp [\frac{{{{({y_h} - {y_j})}^2}}}{2}]\} }^{^{\frac{{jk}}{{d - {z_j}}}}}}}$$

For ${N_{xy}}$ object points falling on the same layer of the 3D object, they have the same depth information. Thus, Eq. (3) can be written as:

$$E\left( {{x_h},{y_h}} \right) = \sum\limits_{{j_z} = 1}^{{N_z}} {\{ \sum\limits_{{j_{xy}} = 1}^{{N_{xy}}} {{A_{{j_{xy}}}}} {{\{ \exp [\frac{{{{({x_h} - {x_j})}^2}}}{2}]\} }^{^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}} \cdot {{\{ \exp [\frac{{{{({y_h} - {y_j})}^2}}}{2}]\} }^{^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}}\} } \cdot \exp [jk(d - {z_{{j_z}}})]$$

In conventional SC-LUT method, based on translational symmetry, Eq. (4) can be rewritten into the form of convolution operation as:

$$\begin{array}{l} E\left( {{x_h},{y_h}} \right) = \sum\limits_{{j_z} = 1}^{{N_z}} {\{ \sum\limits_{{j_{xy}} = 1}^{{N_{xy}}} {{A_{{j_{xy}}}}} \otimes {{\{ \exp [\frac{{{{({x_h} - {x_m})}^2}}}{2}]\} }^{^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}} \otimes {{\{ \exp [\frac{{{{({y_h} - {y_m})}^2}}}{2}]\} }^{^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}}\} } \cdot \exp [jk(d - {z_{{j_z}}})]\\ \textrm{} = \sum\limits_{{j_z} = 1}^{{N_z}} {\{ {O_{{j_z}}} \otimes [({H_m}^w \otimes {V_m}^w) \cdot L]\} } \end{array}$$
where ${\otimes} $ denotes the convolution operation, ${O_{{j_z}}}$ is the ${j_z}$th object plane, and ${x_m}$ and ${y_m}$ are the middle points of the row and the column on the object plane, respectively. ${H_m} = \exp[{({x_h} - {x_m})^2}/2]$ and ${V_m} = \exp[{({y_h} - {y_m})^2}/2]$ are defined as the horizontal and vertical modulation factors, respectively. $w = jk/(d - {z_{{j_z}}})$ and $L = \exp[jk(d - {z_{{j_z}}})]$ are defined as the wavelength modulation factors, which contain the wavelength and depth information.
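Under the SC-LUT applicability condition (object and hologram sampled at the same interval), the separable convolution of Eq. (5) for one depth layer can be sketched with plain NumPy. The names below are illustrative, and the row/column one-dimensional convolutions exploit the fact that the 2D kernel $({H_m}^w \otimes {V_m}^w)$ is separable:

```python
import numpy as np

def layer_hologram(obj_amp, wavelength, depth, pitch):
    """Hologram of one square depth layer via Eq. (5): convolve the layered
    amplitude O_{j_z} with the separable kernel (H_m^w (x) V_m^w) . L.
    depth is d - z_{j_z}; a sketch assuming equal object/hologram pitch."""
    k = 2 * np.pi / wavelength
    n = obj_amp.shape[0]
    x = (np.arange(n) - n // 2) * pitch
    w = 1j * k / depth                         # exponent jk/(d - z_{j_z})
    H = np.exp(w * x ** 2 / 2)                 # horizontal factor H_m^w
    # Convolve rows with H, then columns with H^T (the vertical factor):
    tmp = np.apply_along_axis(lambda r: np.convolve(r, H, mode='same'), 1,
                              obj_amp.astype(np.complex128))
    out = np.apply_along_axis(lambda c: np.convolve(c, H, mode='same'), 0, tmp)
    return out * np.exp(1j * k * depth)        # multiply by L
```

Because the kernel phase is purely imaginary, a single unit-amplitude point source produces a unit-magnitude fringe over the layer, which is a convenient sanity check.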

Here, as the convolution kernel, $({H_m}^w \otimes {V_m}^w) \cdot L$ can be regarded as the impulse response of point-source transmission. In matrix convolution operation, the convolution kernel will gradually translate on the layered object plane to ensure that its center can traverse each object point. After each translation, the convolution kernel is multiplied and accumulated with the part of ${O_{{j_z}}}$ to generate one hologram pixel. Therefore, in order to obtain the accurate wavefront distribution on the hologram plane, there are two conditions that need to be satisfied: 1) The size of the convolution kernel needs to be consistent with the region where the object points that contribute to one hologram pixel are located, 2) The calculation process needs to satisfy translation invariance, which means that the expression of the convolution kernel does not change with translation.

Figure 2 illustrates the determination of the sub-object (SO) region. According to diffraction theory, the effective contribution region of an object point on the hologram plane is a circular region whose radius is related to the diffraction angle and the distance from the point to the hologram plane. The diffraction angle can be expressed as follows:

$$\theta = \arcsin \frac{\lambda }{{2{H_p}}}$$
where ${H_p}$ is the pixel pitch of the CGH. The radius of the diffraction region is given by:
$$R = d\tan \theta \approx d\sin \theta = \frac{{d\lambda }}{{2{H_p}}}$$
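Plugging illustrative numbers (not the paper's Table 1 values) into Eqs. (6)-(7) gives a feel for the scale of the diffraction region:

```python
import numpy as np

# Diffraction angle and radius from Eqs. (6)-(7): R = d*lambda/(2*Hp).
# Assumed example values: 532 nm laser, 8 um pixel pitch, 0.1 m distance.
wavelength, Hp, d = 532e-9, 8e-6, 0.1
theta = np.arcsin(wavelength / (2 * Hp))   # diffraction half-angle, Eq. (6)
R = d * wavelength / (2 * Hp)              # small-angle radius, Eq. (7)
print(round(R * 1e3, 3))                   # radius in millimetres -> 3.325
```

So for these assumed parameters the sub-hologram is a square of side 2R ≈ 6.65 mm.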

Fig. 2. Schematic of the sub-object (SO) region. (a) Relationship between the SO and SH region. (b) Determination of the SO region with maximum effective size.

For ease of calculation, the diffraction region is approximated as a square region with a side length of 2R, called the sub-hologram (SH). As shown in Fig. 2(a), the SH regions of the blue and green object points are represented by light blue and light green, respectively. Because the center of the SH corresponds to the position of the object point, it is easy to find the object points that contribute to one hologram pixel, which are also located in a square region with a side length of 2R, called the sub-object (SO). In Fig. 2(a), the SO region of the gray hologram pixel is represented by light gray. Thus, to ensure that the convolution kernel matches the SO region, the horizontal and vertical modulation factors in Eq. (5) need to be adjusted as follows:

$$\left\{ {\begin{array}{{c}} {{H_{SO}} = \exp [\frac{{{{({x_h} - x_m^{so} + {x_{{j_x}}})}^2}}}{2}],\textrm{ }{N_x}/2 - R/{O_p} < {j_x} \le {N_x}/2 + R/{O_p}}\\ {{V_{SO}} = \exp [\frac{{{{({y_h} - y_m^{so} + {y_{{j_y}}})}^2}}}{2}],\textrm{ }{N_y}/2 - R/{O_p} < {j_y} \le {N_y}/2 + R/{O_p}} \end{array}} \right.$$
where $({x_{{j_x}}},{y_{{j_y}}})$ is the coordinate on the ${j_z}$th object plane. $(x_m^{so},y_m^{so})$ is the coordinate of the center point of the SO region. ${O_p}$, ${N_x}$ and ${N_y}$ are the sampling interval, horizontal resolution and vertical resolution of the object plane, respectively. Here, the coordinates of the object points located in the SO region are represented as $(x_m^{so} - {x_{{j_x}}},y_m^{so} - {y_{{j_y}}})$ based on coordinate symmetry.

In order to ensure that the convolution kernel satisfies translation invariance, it is necessary to eliminate the deviation introduced by ${x_h} - x_m^{so}$ and ${y_h} - y_m^{so}$, which varies with translation. According to Eq. (5) and Eq. (8), only when the sampling interval of the object is equal to that of the hologram can the deviation be eliminated without generating information redundancy. Thus, Eq. (5) needs to be adjusted as follows:

$$E({x_h},{y_h}) = \sum\limits_{{j_z} = 1}^{{N_z}} {\{ O_{{j_z}}^{resample} \otimes [({H_{SO}}^w \otimes {V_{SO}}^w) \cdot L]\} }$$
where $O_{{j_z}}^{resample}$ is the ${j_z}$th object plane after resampling optimization according to the pixel pitch of the hologram. $({H_{SO}}^w \otimes {V_{SO}}^w) \cdot L$ is the convolutional kernel, whose size changes dynamically with the SO region on different depth planes.

We pre-calculate the maximum modulation factors corresponding to the SO region with the maximum effective size, which ensures that convolution kernels of any size required for hologram calculation can be constructed. As shown in Fig. 2(b), when the SO region of the hologram pixel at the vertex just contains the entire object plane, the effective size of the SO reaches its maximum value, which can be expressed as follows:

$$l = \max ({N_x}{O_p} + p{H_p},\textrm{ }{N_y}{O_p} + q{H_p})$$
where p and q are the horizontal and vertical resolution of the hologram, respectively. $max({A,B} )$ means selecting the maximum of A and B. Generally, because ${N_x} \ge {N_y}$ and $p \ge q$, $l = {N_x}{O_p} + p{H_p}$. Furthermore, the maximum vertical modulation factor can be represented by the transposition of the maximum horizontal modulation factor ($H_{max}^T$). The resolution of ${H_{max}}$ is given as:
$$\frac{{{N_x}{O_p} + p{H_p}}}{{O_p^{resample}}}$$
where $O_p^{resample}$ is the sampling interval of the $jth$ object plane after resampling optimization.

In offline computation, the table data can be further compressed based on coordinate symmetry by storing only half of ${H_{max}}$. Hence, the resolution of offline LUT is $({N_x}{O_p} + p{H_p})/(2O_p^{resample})$.
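The half-storage idea can be sketched as follows: because the modulation factor depends on the coordinate only through its square, $H(x) = H(-x)$, so the offline table keeps the non-negative half and the full factor is mirrored back online. A sketch with hypothetical names; the real table is indexed on the resampled coordinate grid of Eq. (11):

```python
import numpy as np

def precompute_half_lut(n_half, pitch):
    """Offline: store only the squared-coordinate term for x >= 0.
    The wavelength/depth exponent w is applied online, not here."""
    x = np.arange(n_half) * pitch
    return x ** 2 / 2

def expand_full_factor(half_tab, w):
    """Online: mirror the half table by coordinate symmetry, then apply
    exp(w * .) for a given wavelength and depth (full length 2*n_half - 1)."""
    full_sq = np.concatenate((half_tab[:0:-1], half_tab))
    return np.exp(w * full_sq)
```

Halving the stored table this way directly halves the offline LUT resolution quoted above.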

2.2 Resampling optimization of the 3D object

The analysis in Section 2.1 shows that the convolution kernel satisfies translation invariance only when the sampling interval of the object is equal to that of the hologram. However, different sampling interval ratios always arise, because objects can be sampled more flexibly, whereas holograms must be sampled according to the pixel pitch of the SLM. Thus, we develop a resampling optimization method for different sampling interval ratios to ensure that holograms can be generated quickly and accurately based on Eq. (9).

When $k{O_p} = {H_p}$ (k is an integer greater than 1), although the deviation introduced by ${x_h} - x_m^{so}$ and ${y_h} - y_m^{so}$ can still be eliminated, information redundancy arises during the calculation process. Taking the generation of two horizontally adjacent hologram pixels as an example, the centers of their SO regions are spaced apart by $k{O_p}$, which means that the convolution kernel needs to shift by k steps on the object plane to generate them. Because the default translation step of the matrix convolution operation is 1, k-1 redundant calculations occur during the calculation process, which reduces both the computation speed and the reconstruction quality.

Figure 3 illustrates the process of downsampling optimization of the 3D object in the case of $k{O_p} = {H_p}$. As shown in Fig. 3(a), the SH regions of the blue and green object points are again represented by light blue and light green, respectively. Here, we replace the $k \times k$ object points located in the red square region with a superpixel, whose amplitude is the average of the amplitudes of all replaced object points. In order to compensate for the errors introduced in this process, the diffraction region ($S{H_{EC}}$) of the superpixel is selected as the smallest square region that can contain the SH regions of all replaced object points [32]. According to the positional correspondence between $S{H_{EC}}$ and the superpixels, we can find the region ($S{O_{EC}}$) which contains all superpixels that contribute to one hologram pixel. The $S{O_{EC}}$ region of the gray hologram pixel is represented by light gray in Fig. 3(a). As shown in Fig. 3(b), the object is downsampled by applying the above processing to all object points, which ensures that the resampling interval is consistent with that of the hologram. Then, holograms can be generated quickly by performing the convolution operation with the resampled object planes and the dynamic convolution kernel matching $S{O_{EC}}$.
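The superpixel replacement described above amounts to a k × k block average of the layered amplitude map. A minimal sketch (the $S{H_{EC}}$ error compensation itself is not reproduced here):

```python
import numpy as np

def downsample_superpixels(obj_amp, k):
    """Replace each k x k block of object points with one superpixel whose
    amplitude is the mean of the replaced points (Section 2.2 sketch).
    Assumes both dimensions of obj_amp are divisible by k."""
    r, c = obj_amp.shape
    return obj_amp.reshape(r // k, k, c // k, k).mean(axis=(1, 3))
```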

Fig. 3. Downsampling optimization for the 3D object in the case of $k{O_p} = {H_p}$. (a) Error compensation for the SO region. (b) Downsampling optimization and convolution operation.

When ${O_p} = k{H_p}$ (k is an integer greater than 1), the deviation introduced by ${x_h} - x_m^{so}$ and ${y_h} - y_m^{so}$ cannot be eliminated. Figure 4(a) shows the deviation distribution for $k \times k$ hologram pixels. Here, the SO regions of the green and blue hologram pixels are represented by light green and blue, respectively. Similar to the processing method in Fig. 3(a), we choose the smallest square area that can contain the SO regions of the $k \times k$ hologram pixels as the common sub-object region ($S{O_{EC}}$). Note that after the above processing, although the $k \times k$ hologram pixels have the same $S{O_{EC}}$, the corresponding convolutional kernel expressions are different. Taking the green, red, and blue hologram pixels as examples, their $S{O_{EC}}$ centers are all located at the object point represented by the red square, but only that of the red hologram pixel completely coincides with the object point. In other words, there is no deviation in the convolutional kernel expression corresponding to the red hologram pixel. As shown in Fig. 4(a), the deviation for the other hologram pixels is their coordinate difference with respect to the object point, which is expressed as $SO_m^n$, where m and n represent the multiples of the horizontal and vertical deviations with respect to ${H_p}$, respectively.

Fig. 4. Upsampling optimization for the 3D object in the case of ${O_p} = k{H_p}$. (a) Deviation distribution. (b) Upsampling optimization and convolution operation.

Figure 4(b) illustrates the process of upsampling optimization of the 3D object in the case of ${O_p} = k{H_p}$. Firstly, to ensure that the resampling interval of the object is consistent with that of the hologram, each object point is divided into $k \times k$ pixels. The amplitude of the center pixel is the same as that of the original point, and the amplitudes of the other pixels are 0. Then, since the change in the sampling interval of the resampled object has been taken into account when pre-calculating the offline LUT, the sampling interval of the convolutional kernel constructed from the table data is $O_p^{resample}$. Compared to the convolutional kernel ($SO_0^0$) with the initial sampling interval, the resolution is improved by a factor of k, which is equivalent to dividing one pixel into $k \times k$ pixels, and these pixels follow the deviation distribution shown in Fig. 4(a). Finally, as shown in Fig. 4(b), in the matrix convolution operation the convolutional kernel is first rotated by 180 degrees; then, by utilizing the gradual translation of the convolutional kernel and the selection effect of the non-zero object points located in the $S{O_{EC}}$ region, the deviations can be accurately matched to the hologram pixels, which enables the hologram to be generated accurately.
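The zero-filled upsampling step can be sketched as follows: each original point keeps its amplitude at the centre of a k × k block of new pixels, and all other new pixels are zero (names are illustrative):

```python
import numpy as np

def upsample_zero_fill(obj_amp, k):
    """Divide each object point into k x k pixels: the centre pixel keeps the
    original amplitude and the rest are zero (Section 2.2 sketch)."""
    r, c = obj_amp.shape
    out = np.zeros((r * k, c * k), dtype=obj_amp.dtype)
    out[k // 2::k, k // 2::k] = obj_amp     # place originals at block centres
    return out
```

The zero pixels carry no light but let the convolution select the correct deviation term for each hologram pixel.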

2.3 Generation of color holograms

Figure 5 shows the process of generating color holograms at one time using the proposed CSC-LUT method. In the offline computation, we calculate the maximum horizontal modulation factor (${H_{max}}$) and store half of it in the table as a 1D array. In the online computation, the color 3D object is divided into three monochrome channels (R, G, B). The hologram generation process for each channel is composed of the following four steps.

Fig. 5. One-time color holograms generation process.

Step 1: The 3D object is divided into different depth planes and each layered object plane is resampled based on the actual sampling interval ratio.

Step 2: Construction of convolutional kernels for each depth plane, which can be further divided into four sub-steps: 1) construction of the horizontal modulation factor (${H_{SO}}$) by selecting the parts of the pre-calculated table data that match the SO region on each depth plane, 2) construction of the vertical modulation factor (${V_{SO}}$) by transposing ${H_{SO}}$, 3) generation of the red, green, and blue modulation factors (${H_\lambda }$ and ${V_\lambda }$) by using the corresponding wavelength factor ($w$), and 4) construction of the convolutional kernel by performing the convolution operation on ${H_\lambda }$ and ${V_\lambda }$ and multiplying the result by the wavelength factor (L).

Step 3: Calculation of holograms for each layered object plane by performing the convolution operation with resampled object plane and corresponding convolutional kernel.

Step 4: Generation of the final holograms for each channel by adding up holograms of all layered object planes.

As shown in Fig. 5, during the generation process of color holograms, because ${H_{max}}$ is independent of wavelength, the pre-calculation, storage, and online reading of the offline table data only need to be performed once. The convolutional kernel constructed in Step 2 changes dynamically with the depth of the object plane, and its size is always consistent with the SO region on the object plane. Through the above steps, it is possible to achieve one-time high-quality fast generation of color holograms.
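Steps 1-4 for the three channels reduce to a double loop: for each wavelength, sum the per-layer holograms produced by a depth-dependent kernel. A minimal sketch, with `kernel_fn` standing in (hypothetically) for the dynamic-kernel convolution of Eq. (9):

```python
import numpy as np

def color_holograms(channels, depths, wavelengths, kernel_fn):
    """Steps 1-4 sketched. channels: list of three channel lists (R, G, B),
    each a list of layered 2D amplitude maps; depths: d - z per layer;
    kernel_fn(layer, wavelength, depth) is a caller-supplied stand-in for
    the dynamic-kernel layer convolution. Names are illustrative."""
    holos = []
    for chan, lam in zip(channels, wavelengths):
        h = np.zeros(chan[0].shape, dtype=np.complex128)
        for layer, dz in zip(chan, depths):   # Step 3: per-layer hologram
            h += kernel_fn(layer, lam, dz)    # Step 4: accumulate layers
        holos.append(h)
    return holos
```

Because ${H_{max}}$ is wavelength-independent, the same offline table can back `kernel_fn` for all three channels.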

3. Numerical simulations and optical experiments

3.1 Implementation of the proposed CSC-LUT on the GPU framework

In the experiments, our proposed CSC-LUT method has been implemented on the GPU framework under the compute-unified-device-architecture (CUDA) platform to confirm its potential application to the real-time generation of color holographic images of the test 3D scene.

Currently, the hologram-pixel-based parallel processing method is commonly used to accelerate CGH calculation, which means that one hologram pixel is generated from the valid object-point information in one thread. During the process of filtering valid points, it is necessary to find the points located in the SO region with non-zero amplitude. When the total number of object points is much larger than the number located in the SO region, this process consumes a lot of time. Therefore, we develop an optimization method for parallel acceleration of the proposed method on the GPU framework. As shown in Fig. 6, the size of the SO region of the hologram pixel $(i,j)$ is proportional to the distance from the depth plane to the hologram plane. The sub-object ($S{O_Z}$) located in the farthest depth plane has the largest size, and $S{O_Z}$ contains $S{O_n}(n < Z)$. Therefore, the filtering process in one thread can be accelerated by filtering only the points located in the $S{O_Z}$ region of one hologram pixel.
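A host-side sketch of this precomputation, producing per-pixel index bounds of the $S{O_Z}$ region packed as four 1D arrays (the geometry and all names are illustrative; the hologram-pixel-to-object-index mapping assumes equal sampling intervals):

```python
import numpy as np

def so_bounds(p, q, R_max, Hp, Op, Nx, Ny):
    """Per hologram pixel, compute clipped index bounds of its SO_Z region
    (the sub-object on the farthest depth plane), packed as four 1D arrays.
    p, q: hologram resolution; R_max: diffraction radius at the farthest
    plane; Hp, Op: hologram/object sampling intervals; Nx, Ny: object size."""
    half = int(R_max / Op)                  # SO_Z half-width in object pixels
    j = np.arange(p * q)                    # flattened hologram pixel index
    ix = ((j % p) * Hp / Op).astype(int)    # hologram pixel -> object index
    iy = ((j // p) * Hp / Op).astype(int)
    x_start = np.clip(ix - half, 0, Nx); x_end = np.clip(ix + half, 0, Nx)
    y_start = np.clip(iy - half, 0, Ny); y_end = np.clip(iy + half, 0, Ny)
    return x_start, x_end, y_start, y_end
```

Each GPU thread then only scans the object points inside its own `[x_start, x_end] x [y_start, y_end]` window instead of the whole object plane.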

Fig. 6. Pre-calculation of the location information of the SO region located in the farthest depth plane.

Figure 7 shows the block diagram of the parallel acceleration and memory management of the proposed method on the GPU framework. The calculation process is divided into two parts: the host and the device. In the host, the information of the 3D object and the precomputed table data of the maximum horizontal modulation factor are loaded, and the location information of the $S{O_Z}$ of each hologram pixel is calculated. Then, these data are transferred to the device memories, such as the global, constant and shared memories with faster access speeds. In the device, the calculation task in the kernel function is divided into three steps: 1) filtering of valid object points, 2) construction of the horizontal, vertical and wavelength modulation factors, and 3) generation of one hologram pixel by calculating and accumulating the contribution of each valid object point to this hologram pixel. Finally, the hologram can be generated quickly by executing this kernel function in parallel on a large number of threads. In addition, memory optimization has also been performed to avoid data-access congestion throughout the calculation process. Here, the location information of $S{O_Z}$ is packed into 1D data arrays consisting of an ${X_{start}}$ array, an ${X_{end}}$ array, a ${Y_{start}}$ array and a ${Y_{end}}$ array, and then transferred to the constant memory, which supports faster data access because these data are not modified but only read in the device. The precomputed table data are transferred from the host memory to the global memory first, and then to the shared memory, which is independent among different blocks and also supports faster data access.

Fig. 7. Block-diagram of the parallel acceleration and memory management for the proposed method on the GPU framework.

3.2 Numerical simulations

In order to verify the feasibility of the proposed method, we carry out numerical simulations on the CPU framework to compare the proposed CSC-LUT method with the conventional SC-LUT method. In addition, a feasibility test of the proposed method for the real-time generation of color holographic images is also carried out on the GPU framework. The CPU and GPU programs are implemented in MATLAB 2021 running on a personal computer with an Intel Core i7-12700F CPU @ 2.10 GHz and 16 GB of RAM, and on the CUDA platform running on an RTX 3090 GPU with 24 GB of memory, respectively. The parameters of the numerical simulations are shown in Table 1.

Table 1. Parameters of the numerical simulations

To evaluate the reconstruction quality of the two methods, we calculate the peak signal-to-noise ratio (PSNR) of the reconstructed images. The PSNR is usually used to evaluate the interference of background noise on the reconstructed image, and can be expressed as follows:

$$PSNR = 10 \times {\log _{10}}(\frac{{{{255}^2}}}{{MSE}})$$
where MSE denotes the mean square error of original and reconstructed images, which can be described as:
$$MSE = \frac{1}{{MN}}\sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{[{I_o}(m,n) - {I_r}(m,n)]}^2}} }$$
where M and N are the numbers of rows and columns of the original image, respectively. ${I_o}(m,n)$ and ${I_r}(m,n)$ denote the pixel values of the pixel $(m,n)$ of the original and reconstructed images, respectively.
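The PSNR and MSE definitions above translate directly into code (a small sketch for 8-bit images):

```python
import numpy as np

def psnr(orig, recon):
    """PSNR in dB for 8-bit images, per the definitions above."""
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```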

Figure 8 demonstrates the reconstruction quality of the proposed CSC-LUT and conventional SC-LUT methods for the different sampling interval ratios shown in Table 1. The monochrome vegetable image and color teapot image are used as the original images to generate holograms. As shown in Fig. 8, in order to satisfy the translation invariance required by convolution operations, the SC-LUT method is only applicable when the sampling interval ratio is 1, and even then the PSNR of its reconstructed image is reduced by the information redundancy caused by the static convolution kernel. Compared with the SC-LUT method, our proposed method has higher flexibility and accuracy, and can achieve the same high-quality reconstruction as the point-based method (CRT) under different sampling interval ratios.

Fig. 8. Numerical simulation results for different sampling interval ratios.

In order to further illustrate the advantages of the proposed method, the CSC-LUT method is compared with the SC-LUT method in terms of reconstruction quality and computation time in the numerical simulation, under the condition of different numbers of object points. All methods have been tested for numbers of object points ranging from 16 K to 262 K. As shown in Fig. 9(a)-(b), the reconstruction quality of the SC-LUT method is significantly reduced owing to the mismatch between the static convolution kernel and the SO region. The proposed method solves this problem by constructing a dynamic convolution kernel, which can flexibly adapt to different sampling interval ratios while maintaining high reconstruction quality. When the number of object points reaches 262 K and the sampling interval ratio is 1, the PSNRs of the images reconstructed by the SC-LUT and CSC-LUT methods are 9.3 dB and 18.3 dB, respectively; the reconstruction quality of the proposed method is improved by 96.7%. When the sampling interval ratio is 1/3, 1/2, 2, and 3, the difference between the PSNRs of the CSC-LUT and CRT methods is -0.1 dB, 0.3 dB, 0.2 dB, and -0.2 dB, respectively, which means that both achieve the same high-quality reconstruction.

Fig. 9. Comparison of reconstruction quality and computation time (All values are averaged over 20 test images). (a) Reconstruction quality of CSC-LUT and SC-LUT for different numbers of object points. (b) Reconstruction quality of CSC-LUT and CRT for different sampling ratios. (c) Computation time of CSC-LUT and SC-LUT for color holograms on the CPU framework. (d) Computation time of CSC-LUT for color holograms on the GPU framework.

Figure 9(c) shows the time required by the SC-LUT and CSC-LUT methods to calculate color holograms on the CPU framework. The computation time of the SC-LUT method increases significantly with the number of object points. When the number of object points increases to 262 K, the calculation times of the SC-LUT and CSC-LUT methods for color holograms are 25.13 s and 5.64 s, respectively; the calculation time of the proposed method decreases by 77.6%. Figure 9(c) also shows the variation of the calculation time of the proposed method with the number of object points under different sampling interval ratios. When the sampling interval ratio is 2, the calculation time is 9.6 s for 262 K object points, which is 70.2% more than that when k = 1. This is because, although the number of effective object points does not change after the upsampling processing, the introduction of pixels with zero amplitude increases the computational burden of the convolution operations. When the sampling interval ratio is 1/2, the calculation time is 2.47 s for 262 K object points, which is 56.2% less than that when k = 1, owing to the decrease in the number of effective object points when they are replaced with superpixels during the downsampling processing. In summary, our proposed method can significantly improve the computation speed for color holograms compared with the conventional SC-LUT method for different sampling ratios.

Figure 9(d) shows the effect of the parallel acceleration of the proposed method on the GPU framework. Compared with the traditional parallel processing method, our proposed optimization method improves the acceleration effect on the GPU framework by reducing the time required for filtering valid object points on a single thread. When the number of object points increases to 262 K, the optimized CSC-LUT method takes 29.2 ms to calculate color holograms, about 8.8 times faster than the 258.4 ms before optimization. It is confirmed that our proposed method can achieve real-time (>24 fps) computation of color holograms for numbers of object points ranging from 16 K to 262 K. Furthermore, when the number of object points is 16 K, the optimized CSC-LUT method calculates color holograms in 11 ms (>90 fps), which enables real-time color holographic images corresponding to three perspectives of a 3D scene.

3.3 Optical experiments

To verify the consistency between the numerical simulations and the practical display results, we conduct optical experiments to reconstruct the original images with the proposed method. The implemented display system, shown in Fig. 10(a), has a two-layer structure; Fig. 10(b) further illustrates its details. The bottom layer is the laser light source combining module. A white laser integrating red (638 nm), green (532 nm), and blue (450 nm) light passes through dichroic mirrors DC1 and DC2 to achieve wavelength separation. The collimators and beam splitters (BS) collimate and combine the R, G, and B lasers. The collimated white laser is then reflected by mirror M2 and incident at a specific angle into the color hologram module located in the second layer.

Fig. 10. Experimental setup. (a)-(b) Implemented display system. (c) Alignment of color holograms using three DMDs and TIR prism. (d) The process of achieving 3D color holographic display with 360° horizontal viewing angle.

As shown in Fig. 10(c), the color hologram module consists of a trichroic prism combined with a TIR prism and three digital micromirror devices (DMDs). The DMD is a reflective amplitude modulation device with a pixel pitch of 13.7 µm, a resolution of 1024 × 768, and an operating speed of 22,272 Hz for binary images; it is typically used together with a TIR prism to achieve vertical emission of the imaging beam and eliminate stray light. Here, the color binary amplitude holograms generated by the CSC-LUT method are loaded onto the three DMDs, respectively. We use the trichroic prism combined with the TIR prism to distribute the three collimated mono-color light sources and combine the R, G, and B hologram images. By further adjusting the angle and position of the DMDs, precise alignment of the color hologram is achieved. A single-sideband filter system composed of lens L1 and a filter is then used to remove the direct current term, conjugate noise, and higher-order dispersion from the reconstructed image. Lenses L2 and L3 magnify the reconstructed image and adjust the size and position of the viewing window (VW), which is the image of the filter. A mirror tilted 22.5° with respect to the horizontal plane is placed at the magnified image plane. The mirror can be rotated around the optical axis of the incident light, driven by a stepper motor. When the rotation angle of the mirror is synchronized with the holograms loaded on the DMDs, a wavefront can be reconstructed and transmitted in any direction at 45° to the optical axis.

Figure 10(d) shows the process of achieving 3D color holographic display with a 360° horizontal viewing angle based on this display system. Firstly, based on the theoretical formula of the VW, we determine that at least 812 VWs are necessary to compactly cover the viewing circumference band [33]. Secondly, we rotate the 3D scene around the rotation axis at equal intervals of 0.443°, and the camera captures the corresponding intensity and depth maps for each rotation angle from the upper view. Taking this periodic rotation of the final reconstructed image into account, pre-compensation is applied to the captured images [34]. Then, the CSC-LUT algorithm is used to generate the corresponding color binary amplitude holograms, which are loaded onto the three DMDs, respectively. In order to filter out conjugate images, the single-sideband method is used in the hologram generation process [35]. Finally, we achieve three-dimensional color holographic display with a 360° horizontal viewing angle by increasing the rotation speed of the stepper motor and the refresh rate of the holograms on the DMDs. Viewers can observe a 1.2-inch color 3D reconstructed image with a refresh rate of 24 frames per second from any horizontal viewing angle.
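The 0.443° rotation interval follows directly from the minimum VW count, assuming the VWs tile the full 360° band at equal spacing:

```python
n_vw = 812                   # minimum number of viewing windows [33]
step_deg = 360.0 / n_vw      # rotation interval between adjacent views
print(round(step_deg, 3))    # 0.443, matching the interval quoted above
```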

Figure 11(a) demonstrates the 3D display capability of the proposed method. The 3D scene is composed of a teapot and a teacup located at different distances from the hologram plane: the distances from the teacup and the teapot to the hologram plane are ${Z_1} = 10\,\textrm{mm}$ and ${Z_2} = 50\,\textrm{mm}$, respectively. In the optical experiments, when the camera is focused at distance ${Z_1}$, the teacup is clear and the teapot is blurred. As the focus of the camera moves from ${Z_1}$ to ${Z_2}$, the teapot gradually becomes clear and the teacup gradually becomes blurred. These changes show that the holographic reconstructed scene provides correct depth information. Figure 11(b)-(d) shows the optical reconstruction images of the CSC-LUT and SC-LUT methods for different sampling interval ratios. Our proposed method has higher flexibility and accuracy than the conventional SC-LUT method and achieves the same high-quality optical reconstruction as the point-based method (CRT) for different sampling interval ratios. The optical experimental results match our numerical reconstruction results well.

Fig. 11. Optical experiment results. (a) Optical experimental reconstructed 3D images focused at 10 mm and 50 mm, respectively. (b-d) Optical reconstructed images of SC-LUT, ISC-LUT and CRT methods for different sampling interval ratios.

As shown in Fig. 12, to verify the feasibility of the proposed method for real-time color 3D holographic display, we conduct an optical experiment using the holographic display system with a 360° horizontal viewing angle. The test 3D scene is a video showing water being poured from a teapot into a teacup; the video consists of 48 frames and lasts 2 seconds. Visualization 1 shows the process of using the CSC-LUT method to generate, in real time on the GPU framework, color holograms corresponding to three perspectives (0°, 120°, and 240°) of the 3D scene, as well as the optically reconstructed color holographic images. Figure 12(a)-(c) shows the optical reconstructed images of the 1st, 8th, 16th, 24th, 32nd, and 40th frames from the three viewing angles, respectively. This optical experimental result proves that our proposed method can be applied to 360° dynamic color 3D holographic display.

Fig. 12. Optically-reconstructed 3D scene images of the 1st, 8th, 16th, 24th, 32nd, and 40th frames from three viewing angles. (a-c) 0°, 120° and 240° viewing angles. (Visualization 1)

3.4 Analysis

According to the analysis in Section 2.1, the size of the SO region is directly proportional to the reconstruction distance. In numerical simulation, we find that as the size of the SO increases, the gap in reconstruction quality between the SC-LUT and CSC-LUT methods gradually narrows, as shown in Fig. 13(b). Here, considering the limitation of SC-LUT, we compare the reconstruction quality of the two methods under the condition of a sampling interval ratio of 1. Next, we analyze the reason for this phenomenon.
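The proportionality between SO size and reconstruction distance follows from Eq. (7), R = dλ/(2H_p); a quick numerical check with the DMD pixel pitch and green wavelength used in this system:

```python
wavelength = 532e-9        # green laser wavelength (m)
H_p = 13.7e-6              # DMD pixel pitch (m)
for d in (10e-3, 50e-3):   # reconstruction distances used in Fig. 11 (m)
    R = d * wavelength / (2 * H_p)   # SO half-size from Eq. (7)
    print(f"d = {d*1e3:.0f} mm -> R = {R*1e6:.1f} um")
```

Moving the reconstruction distance from 10 mm to 50 mm scales R by exactly 5, consistent with the linear dependence discussed here.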

 figure: Fig. 13.

Fig. 13. The impact of SO size on reconstruction quality. (a) Color reconstruction images for different size of the SO region. (b) Reconstruction quality of CSC-LUT and SC-LUT for different size of the SO region.

Download Full Size | PDF

The offline horizontal modulation factor resolutions of the two methods are ${N_x} + p$ and $({N_x}{O_p} + p{H_p})/(O_p^{resample})$, respectively, which are identical when the sampling interval ratio is 1. In other words, during hologram computation, the static convolutional kernel constructed in the SC-LUT method matches the SO with the maximum effective size. The mismatch between this static convolutional kernel and the actual SO region leads to information redundancy and aliasing noise in the reconstructed image. As the size of the SO increases, the actual SO gradually approaches the SO with the maximum effective size and the degree of mismatch decreases, which improves the reconstruction quality. When the SO reaches the maximum effective size, the aliasing noise is eliminated completely and the reconstructed image quality reaches its maximum, as shown in Fig. 13(a). Throughout this process, because the dynamic convolutional kernel constructed in the CSC-LUT method always matches the actual SO region, there is no information redundancy or aliasing noise. However, an increasing SO size also means that the effective contribution area of each object point on the hologram plane increases, as shown in Fig. 2(a). At this point, the wavefront distribution of object points far from the center of the object plane is no longer fully recorded by the hologram, resulting in a decrease in reconstruction quality, as shown in Fig. 13(b).
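The idea of a kernel whose support matches the actual SO region can be sketched per layer, in the spirit of Eq. (9). This is a simplified illustration only: the kernel construction and resampling are reduced to essentials, `scipy.signal.fftconvolve` stands in for the authors' dynamic convolution, and the single-sideband factor is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def layer_hologram(layer, d, z, wavelength, pitch, R):
    """Convolve one object layer with a separable Fresnel kernel whose
    support is cropped to the sub-hologram radius R (in pixels), so the
    kernel size tracks the actual SO region of this depth layer."""
    k = 2 * np.pi / wavelength
    n = np.arange(-R, R + 1) * pitch
    # separable horizontal/vertical modulation factors, cf. Eq. (3)
    h = np.exp(1j * k * n**2 / (2 * (d - z)))
    kernel = np.outer(h, h)
    return fftconvolve(layer, kernel, mode="same")

layer = np.zeros((64, 64)); layer[32, 32] = 1.0   # single object point
field = layer_hologram(layer, d=50e-3, z=0.0,
                       wavelength=532e-9, pitch=13.7e-6, R=14)
print(field.shape)   # (64, 64): the point spreads only over the SO support
```

Because R grows with the depth distance d − z, the kernel (and hence the per-layer convolution cost) adapts to each layer instead of always using the maximum effective size.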

In summary, although increasing the size of the SO reduces the gap in reconstruction quality between the two methods, it comes at the cost of the resolution of the reconstructed image. To generate high-quality color holograms using CSC-LUT, it is therefore necessary to choose an appropriate reconstruction distance.

4. Conclusion

We propose a CSC-LUT method based on the Fresnel diffraction theory and the LUT to accelerate the computation of color holograms. The proposed algorithm achieves three significant improvements: faster computation speed, higher reconstruction quality, and wider applicability. In the offline computation, we compress the redundant data by storing only half of the maximum horizontal modulation factor. In the hologram computation, we develop a method for one-time, fast, high-quality generation of color holograms by performing a matrix convolution operation with a dynamic convolution kernel and resampled layered object planes. Experimental results show that the reconstruction quality is improved by 96.7% and the computation time is reduced by 77.6% compared with the conventional SC-LUT method. Moreover, our proposed method achieves the same high-quality reconstruction as the point-based method for different sampling interval ratios while significantly reducing the computation time. In addition, we develop an optimization method for parallel acceleration of the proposed method on the GPU framework, which improves the computing speed by about 8.8 times compared with the traditional parallel acceleration method. It is confirmed that the proposed method can achieve real-time (>24 fps) color holographic display corresponding to three perspectives of a 3D scene. This work may pave a new avenue for 360° dynamic color 3D holographic display, with potential applications in tabletop holographic display devices.

Funding

National Natural Science Foundation of China (61975014, 62035003, U22A2079); Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (Z211100004821012).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

2. C. J. Kuo and M.-H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).

3. T. C. Poon, Digital Holography and Three-dimensional Display (Springer Verlag, 2007).

4. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]  

5. Y. Hayasaki, J. P. Liu, and M. Georges, “Feature issue of digital holography and 3D imaging (DH): introduction,” Appl. Opt. 54(1), DH1–DH2 (2015). [CrossRef]  

6. J. H. Park, “Recent progresses in computer generated holography for three-dimensional scene,” J. Inf. Displ. 18(1), 1–12 (2017). [CrossRef]  

7. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008). [CrossRef]  

8. A. E. Shortt, T. J. Naughton, and B. Javidi, “Histogram approaches for lossy compression of digital holograms of three-dimensional objects,” IEEE Trans. on Image Process. 16(6), 1548–1556 (2007). [CrossRef]  

9. T. Nishitsuji, T. Shimobaba, T. Kakue, N. Masuda, and T. Ito, “Review of fast calculation techniques for computer-generated holograms with the point-light-source-based model,” IEEE Trans. Ind. Inf. 13(5), 2447–2454 (2017). [CrossRef]  

10. A. D. Stein, Z. Wang, and J. J. S. Leigh, “Computer-generated holograms: a simplified ray-tracing approach,” Comput. Phys. 6(4), 389–392 (1992). [CrossRef]  

11. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]  

12. S. C. Kim, J. M. Kim, and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef]  

13. D. Pi, J. Liu, Y. Han, A. Khalid, and S. Yu, “Simple and effective calculation method for computer-generated hologram based on non-uniform sampling using look-up-table,” Opt. Express 27(26), 37337–37348 (2019). [CrossRef]  

14. H. Kim, J. K. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008). [CrossRef]  

15. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). [CrossRef]  

16. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015). [CrossRef]  

17. Y.-M. Ji, H. Yeom, and J. H. Park, “Efficient texture mapping by adaptive mesh division in mesh-based computer generated hologram,” Opt. Express 24(24), 28154–28169 (2016). [CrossRef]  

18. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

19. Y. Zhao, L. C. Cao, H. Zhang, D. Kong, and G. F. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

20. H. Zhang, L. C. Cao, and G. F. Jin, “Three-dimensional computer-generated hologram with Fourier domain segmentation,” Opt. Express 27(8), 11689–11697 (2019). [CrossRef]  

21. D. W. Kim, Y. H. Lee, and Y. H. Seo, “High-speed computer-generated hologram based on resource optimization for block-based parallel processing,” Appl. Opt. 57(13), 3511–3518 (2018). [CrossRef]  

22. T. Nishitsuji, D. Blinder, and T. Kakue, “GPU-accelerated calculation of computer-generated holograms for line-drawn objects,” Opt. Express 29(9), 12849–12866 (2021). [CrossRef]  

23. L. Shi, B. Li, and C. Kim, “Towards real-time photorealistic 3D holography with deep neural networks,” Nature 591(7849), 234–239 (2021). [CrossRef]  

24. J. C. Wu, K. X. Liu, X. M. Sui, and L. C. Cao, “High-speed computer-generated holography using an autoencoder-based deep neural network,” Opt. Lett. 46(12), 2908–2911 (2021). [CrossRef]  

25. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

26. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]  

27. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. Tanjung, C. Tan, and T. C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17(21), 18543–18555 (2009). [CrossRef]  

28. J. Jia, Y. T. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). [CrossRef]  

29. Q. K. Gao, J. Liu, X. Li, G. L. Xue, J. Jia, and Y. T. Wang, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015). [CrossRef]  

30. T. Zhao, J. Liu, Q. K. Gao, P. He, Y. Han, and Y. T. Wang, “Accelerating computation of CGH using symmetric compressed look-up-table in color holographic display,” Opt. Express 26(13), 16063–16073 (2018). [CrossRef]  

31. H. K. Cao, X. Jin, L. Y. Ai, and E. S. Kim, “Faster generation of holographic video of 3-D scenes with a Fourier spectrum-based NLUT method,” Opt. Express 29(24), 39738–39754 (2021). [CrossRef]  

32. H. W. Ma, C. X. Wei, J. H. Wei, and J. Liu, “Superpixel-based sub-hologram method for real-time color three-dimensional holographic display with large size,” Opt. Express 30(17), 31287–31297 (2022). [CrossRef]  

33. Y. J. Lim, K. H. Hong, and H. Kim, “360-degree tabletop electronic holographic display,” Opt. Express 24(22), 24999–25009 (2016). [CrossRef]  

34. Y. Sando, D. Barada, and T. Yatagai, “Optical rotation compensation for a holographic 3D display with a 360 degree horizontal viewing zone,” Appl. Opt. 55(30), 8589–8595 (2016). [CrossRef]  

35. O. Bryngdahl and A. Lohmann, “Single-Sideband Holography,” J. Opt. Soc. Am. 58(5), 620–624 (1968). [CrossRef]  

Supplementary Material (1)

Visualization 1: 24 frames of optically-reconstructed 3D scene images from three viewing angles, compressed into one video file.




Figures (13)

Fig. 1. Operational diagram of the three-step process of the proposed CSC-LUT method.
Fig. 2. Schematic of the sub-object (SO) region. (a) Relationship between the SO and SH region. (b) Determination of the SO region with maximum effective size.
Fig. 3. Downsampling optimization for the 3D object in the case of $k{O_p} = {H_p}$. (a) Error compensation for the SO region. (b) Downsampling optimization and convolution operation.
Fig. 4. Upsampling optimization for the 3D object in the case of ${O_p} = k{H_p}$. (a) Deviation distribution. (b) Upsampling optimization and convolution operation.
Fig. 5. One-time color holograms generation process.
Fig. 6. Pre-calculation of the location information of the SO region located in the farthest depth plane.
Fig. 7. Block-diagram of the parallel acceleration and memory management for the proposed method on the GPU framework.
Fig. 8. Numerical simulation results for different sampling interval ratios.
Fig. 9. Comparison of reconstruction quality and computation time (all values are averaged over 20 test images). (a) Reconstruction quality of CSC-LUT and SC-LUT for different numbers of object points. (b) Reconstruction quality of CSC-LUT and CRT for different sampling ratios. (c) Computation time of CSC-LUT and SC-LUT for color holograms on the CPU framework. (d) Computation time of CSC-LUT for color holograms on the GPU framework.
Fig. 10. Experimental setup. (a)-(b) Implemented display system. (c) Alignment of color holograms using three DMDs and TIR prism. (d) The process of achieving 3D color holographic display with 360° horizontal viewing angle.
Fig. 11. Optical experiment results. (a) Optical experimental reconstructed 3D images focused at 10 mm and 50 mm, respectively. (b-d) Optical reconstructed images of SC-LUT, ISC-LUT and CRT methods for different sampling interval ratios.
Fig. 12. Optically-reconstructed 3D scene images of the 1st, 8th, 16th, 24th, 32nd, and 40th frames from three viewing angles. (a-c) 0°, 120° and 240° viewing angles. (Visualization 1)
Fig. 13. The impact of SO size on reconstruction quality. (a) Color reconstruction images for different sizes of the SO region. (b) Reconstruction quality of CSC-LUT and SC-LUT for different sizes of the SO region.

Tables (1)

Table 1. Parameters of the numerical simulations

Equations (13)

$$E({x_h},{y_h}) = \sum\limits_{j = 1}^N {A_j}\exp \left[ {ik\sqrt {{{({x_h} - {x_j})}^2} + {{({y_h} - {y_j})}^2} + {{(d - {z_j})}^2}} } \right] \tag{1}$$

$$E({x_h},{y_h}) = \sum\limits_{j = 1}^N {A_j}\exp \left\{ {ik\left[ {(d - {z_j}) + \frac{{{{({x_h} - {x_j})}^2} + {{({y_h} - {y_j})}^2}}}{{2(d - {z_j})}}} \right]} \right\} \tag{2}$$

$$E({x_h},{y_h}) = \sum\limits_{j = 1}^N {A_j}\exp [jk(d - {z_j})]{\left\{ {\exp \left[ {\frac{{{{({x_h} - {x_j})}^2}}}{2}} \right]} \right\}^{\frac{{jk}}{{d - {z_j}}}}}{\left\{ {\exp \left[ {\frac{{{{({y_h} - {y_j})}^2}}}{2}} \right]} \right\}^{\frac{{jk}}{{d - {z_j}}}}} \tag{3}$$

$$E({x_h},{y_h}) = \sum\limits_{{j_z} = 1}^{{N_z}} {\left\{ {\sum\limits_{{j_{xy}} = 1}^{{N_{xy}}} {A_{{j_{xy}}}}{{\left\{ {\exp \left[ {\frac{{{{({x_h} - {x_j})}^2}}}{2}} \right]} \right\}}^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}{{\left\{ {\exp \left[ {\frac{{{{({y_h} - {y_j})}^2}}}{2}} \right]} \right\}}^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}} \right\}} \exp [jk(d - {z_{{j_z}}})] \tag{4}$$

$$E({x_h},{y_h}) = \sum\limits_{{j_z} = 1}^{{N_z}} {\left\{ {\sum\limits_{{j_{xy}} = 1}^{{N_{xy}}} {A_{{j_{xy}}}}{{\left\{ {\exp \left[ {\frac{{{{({x_h} - {x_m})}^2}}}{2}} \right]} \right\}}^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}{{\left\{ {\exp \left[ {\frac{{{{({y_h} - {y_m})}^2}}}{2}} \right]} \right\}}^{\frac{{jk}}{{d - {z_{{j_z}}}}}}}} \right\}} \exp [jk(d - {z_{{j_z}}})] = \sum\limits_{{j_z} = 1}^{{N_z}} {\left\{ {{O_{{j_z}}} \otimes [(H_m^w V_m^w)L]} \right\}} \tag{5}$$

$$\theta = \arcsin \frac{\lambda }{{2{H_p}}} \tag{6}$$

$$R = d\tan \theta \approx d\sin \theta = \frac{{d\lambda }}{{2{H_p}}} \tag{7}$$

$$\begin{cases} {H_{SO}} = \exp \left[ {\frac{{{{({x_h} - x_m^{so} + {x_{{j_x}}})}^2}}}{2}} \right], & {N_x}/2 - R/{O_p} < {j_x} \le {N_x}/2 + R/{O_p} \\ {V_{SO}} = \exp \left[ {\frac{{{{({y_h} - y_m^{so} + {y_{{j_y}}})}^2}}}{2}} \right], & {N_y}/2 - R/{O_p} < {j_y} \le {N_y}/2 + R/{O_p} \end{cases} \tag{8}$$

$$E({x_h},{y_h}) = \sum\limits_{{j_z} = 1}^N {\left\{ {O_{{j_z}}^{resample} \otimes [(H_{SO}^w V_{SO}^w)L]} \right\}} \tag{9}$$

$$l = \max ({N_x}{O_p} + p{H_p},\;{N_y}{O_p} + q{H_p}) \tag{10}$$

$$\frac{{{N_x}{O_p} + p{H_p}}}{{O_p^{resample}}} \tag{11}$$

$$PSNR = 10 \times {\log _{10}}\left( {\frac{{{{255}^2}}}{{MSE}}} \right) \tag{12}$$

$$MSE = \frac{1}{{MN}}\sum\limits_{m = 1}^M {\sum\limits_{n = 1}^N {{{[{I_o}(m,n) - {I_r}(m,n)]}^2}} } \tag{13}$$
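The image-quality metrics of Eqs. (12)-(13) translate directly into code; a minimal NumPy implementation for 8-bit images:

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio for 8-bit images, per Eqs. (12)-(13)."""
    o = original.astype(np.float64)
    r = reconstructed.astype(np.float64)
    mse = np.mean((o - r) ** 2)          # Eq. (13)
    if mse == 0:
        return float("inf")              # identical images
    return 10 * np.log10(255**2 / mse)   # Eq. (12)

# example: a uniform error of 5 gray levels gives MSE = 25
a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 5, dtype=np.uint8)
print(round(psnr(a, b), 2))   # 34.15
```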