
Full-scale one-dimensional NLUT method for accelerated generation of holographic videos with the least memory capacity

Open Access

Abstract

A full-scale one-dimensional novel-look-up-table (1-D NLUT) method enabling faster generation of holographic videos with the minimum memory capacity is proposed. Only a pair of half-sized 1-D baseline and depth-compensating principal-fringe-patterns (PFPs) is pre-calculated and stored based on the concentric-symmetry property of the PFP, from which a set of half-sized 1-D PFPs for all depth planes is generated based on the thin-lens property, minimizing the required memory size down to a few KB regardless of the number of depth planes. Moreover, all hologram calculations are performed fully one-dimensionally with this set of half-sized 1-D PFPs based on the shift-invariance property, which also minimizes the overall hologram calculation time. In experiments with test videos, the proposed method has been found to have the shortest hologram calculation time and the least memory capacity in comparison with several modified versions of the conventional NLUT and LUT methods, which confirms its feasibility.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Thus far, as one of the ultimate approaches for realizing a realistic three-dimensional television (3-DTV) broadcasting system, electro-holographic displays based on computer-generated holograms (CGHs) have attracted much attention because the CGH can precisely record and reconstruct the light wave of the input 3-D scene [1–3]. With recent active research on holographic technology, the electro-holographic 3-DTV is expected to appear on the market in the near future [4,5].

In fact, a practical holographic 3-DTV broadcasting system inevitably faces the huge amount of holographic data to be transmitted over the network. To deal with this issue, there are two kinds of approaches: the hologram-data and image-data transmission systems [6,7]. In the case of hologram-data transmission, holographic videos for the input 3-D scenes are computed and compressed at the transmitter, and then transmitted to the recipient over the network. At the receiver, those holographic videos are reconstructed into 3-D video images and displayed on the holographic 3-DTV. Basically, hologram images behave like random data due to the weak correlation among the hologram image pixels, so they cannot be compressed effectively, unlike conventional object images with strong correlation among the image pixels, even though several holographic data compression algorithms have been proposed [8,9]. Thus, holographic video data remain almost uncompressed, and turn out to be much larger than the original 3-D video data at the transmitter. This means that such a system requires a massive transmission bandwidth for those huge holographic video data [8–10].

On the other hand, in the image-data transmission system, the input 3-D video images are compressed with a high ratio at the transmitter and then directly transmitted to the recipient. At the receiver, holographic videos for those 3-D video data are generated and displayed on the holographic 3-DTV [11]. This system can be regarded as a much more reasonable approach for the future holographic 3-DTV since it does not require a great increase of the transmission bandwidth of the current communication network [12]. However, to make this system realizable, holographic videos for those 3-D video data must be generated in real time at the receiver.

For this purpose, many kinds of CGH algorithms have been proposed, including the classical ray-tracing (RT) method [13], and the look-up table (LUT) [14], novel look-up table (NLUT) [15], wave-front recording plane (WRP)-based [16], polygon-based [17], image hologram-based [18], recurrence relation-based [19], double-step Fresnel diffraction (DSF)-based [20], GPU-based [21–25], FPGA-based [26,27] and sparse-based [28–30] methods. Among them, the NLUT was proposed as one of the accelerated CGH algorithms [15]. In this method, only the two-dimensional principal fringe patterns (2-D PFPs) for the center object points of each depth layer, which act like Fresnel zone plates (FZPs), are pre-calculated and stored, which reduces the required memory capacity. In addition, only three simple operations of shifting, multiplication and addition are involved in the calculation of the CGH patterns, which also shortens the overall calculation time. However, for practical applications, its memory capacity and calculation time need to be reduced and shortened further, respectively.

Thus, many modified versions of the NLUT have been proposed for enhancing its performance in terms of memory capacity and calculation time, including the temporal redundancy-based [22,31], spatial redundancy-based [32], motion compensation-based [21,33–36], circular symmetry-based [37–39], trigonometric decomposition-based [40] and depth compensation-based [41,42] methods, in addition to several other LUT-based methods such as the Mini-LUT [43], split LUT (S-LUT) [44], compressed LUT (C-LUT) [45] and accurate-compressed LUT (AC-LUT) [46].

In the redundancy and motion compensation-based methods [21,22,31,33–36,47], redundant object data between two consecutive video frames are removed with motion estimation and compensation algorithms, which results in great reductions of the overall calculation time [33–36]. In the circular symmetry and trigonometric decomposition-based methods [37–40], 1-D forms of the PFPs, such as the line-type PFP and Sub-PFP, are stored, which results in great reductions of the required memory capacity. In the depth compensation-based method [41,42], only two 2-D baseline and depth-compensating PFPs (2-D B-PFP and DC-PFP) are pre-calculated and stored, from which a set of 2-D PFPs for each depth plane is generated based on the thin-lens property of the PFP, resulting in a massive reduction of the required memory.

Moreover, in the Mini-LUT, the data redundancy of the circular symmetry-based LUT is further removed with the principal component analysis method, which also results in a great reduction of its memory capacity [43]. Its memory size, however, increases as the number of depth layers increases. In the S-LUT [44], the 2-D PFPs on each depth plane are split into pairs of 1-D light-modulation factors corresponding to the horizontal and vertical components of the PFPs, with which the CGH patterns are calculated. This reduces both the overall hologram calculation time and the memory capacity; however, its memory size still grows with the number of depth planes. Meanwhile, in the C-LUT [45], the memory size can be further reduced by separating the longitudinal light-modulation factors from both the horizontal and vertical components under the condition that the depth of the object is kept much smaller than the hologram recording distance. However, this approximation causes the reconstructed object image to be distorted when the condition is not satisfied. For this reason, the AC-LUT [46] was also proposed, in which the horizontal and vertical light-modulation factors are made independent of the longitudinal light-modulation factors and free from the approximation condition of the C-LUT, and 1-D forms of the horizontal and vertical light-modulation factors are stored without any relationship to depth information. However, the AC-LUT needs a few MB of memory since those horizontal and vertical light-modulation factors for each x and y coordinate must be stored.

Thus, in this paper, we propose a new full-scale one-dimensional novel look-up table (1-D NLUT) method enabling accelerated generation of holographic videos with the minimum memory. Each 2-D PFP is decomposed into a pair of 1-D PFPs, and only halves of those 1-D PFPs are stored based on the concentric-symmetry property of the PFP. Since a pair of half-sized 1-D PFPs (HS 1-D PFPs) is derived from the 2-D PFP, they also have the same concentric-symmetry, shift-invariance and thin-lens properties as the 2-D PFPs. This allows a new 1-D depth-compensation (1-D DC) scheme in which a set of HS 1-D PFPs for each depth plane can be generated only with a pair of pre-stored HS 1-D B-PFP and DC-PFP. Therefore, with this 1-D DC method, the memory capacity of the proposed method can be minimized down to a few KB regardless of the number of depth planes. Furthermore, its overall hologram calculation time can also be minimized since the three simple operations of shifting, multiplication and adding are fully one-dimensionally carried out with those HS 1-D PFPs. To confirm the feasibility of the proposed 1-D NLUT method, experiments with a test 3-D video are carried out and the results are compared with those of the conventional methods in terms of the overall calculation time and required memory capacity.

2. Operational principle of the NLUT method

In fact, a 3-D object can be treated as a set of 2-D images discretely-sliced along the z-direction, in which each 2-D image with a fixed depth is regarded as a collection of self-luminous object points of light. In the NLUT method, only the Fresnel-zone-plates (FZPs) of the object points located at the centers of each image plane, which are called principal-fringe-patterns (PFPs), are pre-calculated and stored [15]. Therefore, as seen in Fig. 1(a), the complex amplitude of the PFP for the center object point O(x0, y0, zq) located on the image plane with a depth of zq, which is denoted as Tq(x, y, zq), can be defined as Eq. (1) [41].

$$T_q(x, y, z_q) \propto \exp\left[\frac{ik}{2z_q}\left\{(x - x_0)^2 + (y - y_0)^2\right\}\right] \tag{1}$$
where k denotes the wave number given by k = 2π/λ, and λ is the free-space wavelength of the light. Then, the fringe patterns for other object points on that image plane can be obtained simply by shifting this PFP according to the relative displacements from the center point to those object points, which is the so-called shift-invariance property of the PFP.
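As a simple illustration (not part of the original work), the PFP of Eq. (1) can be sampled numerically as in the following sketch, where the function name and the default pixel pitch and wavelength are only assumptions made for the example.

```python
import numpy as np

# Minimal sketch of Eq. (1): the complex amplitude of the 2-D PFP for the
# center object point of the q-th depth plane, sampled on the hologram grid.
# Function name and default pitch/wavelength values are illustrative assumptions.
def pfp_2d(nx, ny, z_q, pitch=8.1e-6, wavelength=532e-9):
    k = 2.0 * np.pi / wavelength                    # wave number k = 2*pi/lambda
    x = (np.arange(nx) - nx // 2) * pitch           # x coordinates (center point at origin)
    y = (np.arange(ny) - ny // 2) * pitch           # y coordinates
    X, Y = np.meshgrid(x, y)
    return np.exp(1j * k / (2.0 * z_q) * (X**2 + Y**2))
```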


Fig. 1 (a) Geometric configuration for generating the PFP for the center object point of O(x0, y0, zq) on the qth depth plane, and (b) Operational diagram of the NLUT-based CGH generation process for the three object points of A(x1, y1, zq), B(x2, y2, zq) and C(-x3, -y3, zq) located on the depth layer of zq with the PFP for the center object point based on the shift-invariance property of the PFP.


For instance, Fig. 1(b) shows a NLUT-based CGH generation process for three object points of A(x1, y1, zq), B(x2, y2, zq) and C(-x3, -y3, zq) located on the depth layer of zq. Here, the center object point O(x0, y0, zq) is assumed to be on the origin, then x0 and y0 become zeros. In case of the object point A(x1, y1, zq), which is displaced by (x1, y1) from the center point on the image plane of zq, the CGH pattern for this object point can be obtained just by shifting the PFP for the center object point of O(0, 0, zq) with the displacements of x1 and y1 along the x and y directions, respectively. Following the same procedure, the CGH patterns for the object points of B(x2, y2, zq) and C(-x3, -y3, zq) located on the same depth layer can be also obtained just by shifting the same PFP for the center object point with the displacements of (x2, y2) and (-x3, -y3) along the x and y directions, respectively. Then, the CGH pattern for those three object points can be obtained by adding those three shifted PFPs together.

Accordingly, this process is to be performed for all object points on each depth plane and all those shifted versions of PFPs are then added up together to obtain the final CGH pattern for an input 3-D object. In order to guarantee that the PFP can fill the predetermined size of the CGH pattern for all possible object points, the minimum resolution of the PFP must satisfy the following conditions of Eq. (2) [15].

$$\text{Horizontal resolution of the PFP: } h_x + (disc \times o_x), \qquad \text{Vertical resolution of the PFP: } h_y + (disc \times o_y) \tag{2}$$
where hx and hy represent the horizontal and vertical resolutions of the hologram pattern, while disc, ox and oy mean the discretization step and the numbers of sampled object points in the horizontal and vertical directions, respectively.

Accordingly, in the NLUT method, the final CGH pattern of a 3-D object, denoted as I(x, y), can be expressed as the superposition of the shifted versions of the PFPs for all object points on each depth plane based on the shift-invariance property of the PFP, as shown in Eq. (3) [41].

$$I(x, y) = \sum_{q=1}^{Q}\sum_{p=1}^{P} a_p T_q(x - x_p,\, y - y_p) \tag{3}$$
where P and Q denote the total number of object points on the qth depth layer and the total number of depth layers, while xp, yp and ap represent the x and y coordinates and the intensity of the pth object point, respectively. Moreover, the PFP of the NLUT also has the concentric-symmetry, Fourier-transform and achromatic thin-lens properties in addition to the shift-invariance property [41]. Based on these unique properties of the PFP, many modified versions of the NLUT have been proposed for enhancing its performance in terms of memory capacity and calculation time [21,22,31–36,41,42].
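To make the shift-and-add operation of Eq. (3) concrete, the sketch below accumulates intensity-weighted, shifted windows of pre-computed 2-D PFPs; shifting the PFP is realized by moving the crop window across an oversized PFP. All names and sizes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Sketch of Eq. (3): accumulate intensity-weighted, shifted PFPs over all
# object points. 'pfps[q]' is assumed to be an oversized 2-D PFP (Eq. (2)) so
# that every crop window stays inside it; 'disc' is the pixel shift per
# object-point step. Illustrative only.
def nlut_cgh(points, pfps, h_res=(1200, 1920), disc=3):
    hy, hx = h_res
    cgh = np.zeros(h_res, dtype=complex)
    for px, py, q, a in points:                     # (x, y, depth index, intensity)
        pfp = pfps[q]
        oy = (pfp.shape[0] - hy) // 2 - disc * py   # shifting = moving the crop window
        ox = (pfp.shape[1] - hx) // 2 - disc * px
        cgh += a * pfp[oy:oy + hy, ox:ox + hx]
    return cgh
```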

3. Proposed method

In this paper, we propose a full-scale one-dimensional NLUT (1-D NLUT) method enabling accelerated generation of the holographic video of the input 3-D scene even with the least memory capacity of a few KB regardless of the number of depth planes. As seen in Fig. 2, the proposed method is composed of a three-step process.


Fig. 2 Overall block-diagram of the proposed full-scale 1-D NLUT method composed of a three-step process.


It includes 1) pre-calculation and storing of a pair of half-sized 1-D baseline-PFP (HS 1-D B-PFP) and depth-compensating PFP (HS 1-D DC-PFP) based on the concentric-symmetry property of the PFP, 2) generation of a set of HS 1-D PFPs for all depth planes by combined use of the pre-stored HS 1-D B-PFP and DC-PFP based on the thin-lens property of the PFP, and 3) calculation of the holographic patterns for all object points on each depth plane with the corresponding HS 1-D PFPs based on the shift-invariance property of the PFP, and adding them all up to generate the final 3-D holographic pattern for each frame.

3.1 Decomposition of the 2-D PFP into a pair of 1-D PFPs

As seen in Eq. (1), each pixel value of the 2-D PFP is determined by both the x and y-directional coordinate values. Here, the x and y coordinates affect the PFP independently, so the 2-D PFP of Eq. (1) can be divided into a pair of x and y-directional 1-D PFP components as seen in Eq. (4).

$$T_q(x, y, z_q) \propto \exp\left[\frac{ik}{2z_q}(x - x_0)^2\right]\exp\left[\frac{ik}{2z_q}(y - y_0)^2\right] \tag{4}$$
Here, a pair of 1-D PFPs being decomposed from the 2-D PFP are designated as the x and y-dimensional 1-D PFPs and given by Eqs. (5) and (6), respectively.
$$T_{qx}(x, z_q) \propto \exp\left[\frac{jk}{2z_q}(x - x_0)^2\right] \tag{5}$$
$$T_{qy}(y, z_q) \propto \exp\left[\frac{jk}{2z_q}(y - y_0)^2\right] \tag{6}$$
where the 1-D PFPs Tqx(x, zq) and Tqy(y, zq) denote the x and y-dimensional components of the 2-D PFP, respectively, on the qth depth plane. Thus, the 2-D PFP Tq(x, y, zq) can be restored just by multiplying the two 1-D PFPs Tqx(x, zq) and Tqy(y, zq), which become identical under the condition of the same sampling rates of the object points in both the x and y directions.

Here, a 2-D PFP is divided into a pair of 1-D PFPs corresponding to its x and y-directional components, so these 1-D PFPs also have the same concentric-symmetry, shift-invariance and thin-lens properties as the 2-D PFP. Thus, this decomposition from the 2-D to the 1-D form reduces the required memory size of the NLUT, since only the 1-D forms of the PFPs are stored based on the concentric-symmetry property, as well as the overall calculation time, since the CGH patterns are fully one-dimensionally calculated with those 1-D PFPs based on the shift-invariance property.
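A brief sketch of this separability, under the assumption of equal x and y sampling, is given below: the 1-D PFPs of Eqs. (5) and (6) are single complex vectors, and the 2-D PFP of Eq. (4) is recovered as their outer product. Function names and sizes are illustrative.

```python
import numpy as np

# Sketch of Eqs. (4)-(6): a 1-D PFP is a single chirp vector, and the 2-D PFP
# is restored as the outer product of the y- and x-directional 1-D PFPs.
def pfp_1d(n, z_q, pitch=8.1e-6, wavelength=532e-9):
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    return np.exp(1j * k / (2.0 * z_q) * x**2)      # T_qx(x, z_q) of Eq. (5)

t_x = pfp_1d(2881, 0.5)                              # horizontal 1-D PFP (example size)
t_y = pfp_1d(1921, 0.5)                              # vertical 1-D PFP
t_2d = np.outer(t_y, t_x)                            # restored 2-D PFP of Eq. (4)
```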

3.2. Pre-calculation of a pair of half-sized 1-D B-PFPs and DC-PFPs

Since the 2-D PFP, which acts just like an FZP, has a concentric-symmetry structure, it can be divided into four identical parts as seen in Fig. 3(a). Thus, instead of storing the whole 2-D PFP, only one quarter of the 2-D PFP, as seen in Fig. 3(b), can be stored for reducing the required memory size of the NLUT [48]. In addition, for the case of a pair of 1-D PFPs decomposed from the 2-D PFP, each of the x and y-directional 1-D PFPs can be further divided into two identical halves due to its concentric-symmetry property as seen in Fig. 3(c), which are called half-sized 1-D PFPs (HS 1-D PFPs) here. Thus, only one HS 1-D PFP for either the x or y direction needs to be stored in the proposed method, which allows a further reduction of the required memory size of the NLUT.


Fig. 3 A 2-D PFP and its quarter and half-sized 1-D versions based on its concentric-symmetry property: (a) 2-D PFP, (b) 1/4 part of the 2-D PFP, and (c) Half-sized 1-D PFP.


Those HS 1-D PFPs are decomposed from the 2-D PFP, so they also have the same concentric-symmetry, thin-lens and shift-invariance properties as the 2-D PFP. Thus, in the proposed method, a new 1-D depth compensation (1-D DC) method is employed, in which only a pair of HS 1-D B-PFP and DC-PFP is stored. Then, just by use of the HS 1-D B-PFP and DC-PFP, a set of HS 1-D PFPs corresponding to all depth planes can be generated based on the thin-lens property, where the HS 1-D B-PFP corresponds to the 1st depth plane and the HS 1-D DC-PFP compensates for the depth differences between the baseline and the other depth planes. That is, the HS 1-D B-PFP is transformed into the PFPs of the other depth planes just by being multiplied with the corresponding HS 1-D DC-PFPs, which act just like thin lenses with focal lengths corresponding to the depth differences between the baseline and the other depth planes.

Therefore, this 1-D DC method allows us to minimize the required memory capacity of the NLUT down to a few KB regardless of the number of depth planes. In addition, the CGH calculation time can also be further reduced since all the shifting, multiplication and adding operations involved in generation of the CGHs are fully one-dimensionally carried out with the HS 1-D PFPs in the proposed method.
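The half-sized storage and its restoration can be sketched as below, assuming an odd-length 1-D PFP so that the symmetry axis falls on a sample; only the right half (including the center sample) is kept, and the full vector is mirrored back when needed. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

# Concentric-symmetry storage of Fig. 3(c): keep only half of the 1-D PFP and
# mirror it back on demand. An odd-length PFP centered on a sample is assumed.
def to_half(t_1d):
    return t_1d[t_1d.size // 2:]                    # center sample plus right half

def from_half(half):
    return np.concatenate([half[:0:-1], half])      # mirrored left half + stored half
```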

3.3. Generation of a set of half-sized 1-D PFPs for each depth plane

In fact, the total memory size required for storing the HS 1-D PFPs of every depth plane increases as the number of depth planes increases, even up to the order of MB. The 1-D DC scheme employed in the proposed method, however, reduces the required memory capacity down to a few KB, since only two kinds of PFPs, the HS 1-D B-PFP and DC-PFP, are pre-calculated and stored. Then, a set of HS 1-D PFPs for each depth plane can be generated by combined use of these HS 1-D B-PFP and DC-PFP.

Figure 4 shows a conceptual diagram of the 1-D DC method, where a set of HS 1-D PFPs for each depth plane can be generated from those HS 1-D B-PFP and DC-PFP based on a thin-lens property of the PFP.


Fig. 4 (a) HS 1-D B-PFP and (b) HS 1-D DC-PFP with their focal lengths of z1 and zc, respectively, (c) HS 1-D PFP2 with a new focal length of z2 generated with multiplication of those PFPs of (a) and (b).


As seen in Figs. 4(a) and 4(b), the 1-D B-PFP and 1-D DC-PFP, acting as thin lenses, are focused at point A and point B with focal lengths of z1 and zc, respectively. Thus, by multiplying the 1-D B-PFP by the 1-D DC-PFP, a new 1-D PFP with the focal length of z2, which is focused at point C, can be obtained as shown in Fig. 4(c). Here, the depth-compensation operation can be analyzed based on ray optics. If a thin lens having the focal length zc is attached to a thin lens having the focal length z1, the resultant focal length of the combined lens system becomes z2 according to the ABCD matrix, as seen in Eqs. (7) and (8).

$$\begin{bmatrix} 1 & 0 \\ 1/z_2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1/z_1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 1/z_c & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ (1/z_1 + 1/z_c) & 1 \end{bmatrix} \tag{7}$$
$$1/z_2 = 1/z_1 + 1/z_c \tag{8}$$
Just like attaching a thin lens, if the 1-D B-PFP (TBx) having the depth distance z1 is multiplied by the 1-D DC-PFP (TDCx) having the depth distance zc, the resultant 1-D PFP (T2x) is given by TBxTDCx as seen in Eq. (9), which corresponds to the PFP for the 2nd depth layer.
$$T_{2x} = T_{Bx}T_{DCx} = \exp\left[\frac{jk}{2z_1}(x - x_0)^2\right]\exp\left[\frac{jk}{2z_c}(x - x_0)^2\right] = \exp\left[jk\left(\frac{1}{2z_1} + \frac{1}{2z_c}\right)(x - x_0)^2\right] = \exp\left[\frac{jk}{2z_2}(x - x_0)^2\right] \tag{9}$$
Then, the 1-D PFP for the nth depth layer, Tnx, can be given by Eq. (10).
$$T_{nx} = T_{Bx}(T_{DCx})^{n-1} = T_{(n-1)x}T_{DCx} \tag{10}$$
Equation (10) shows that the 1-D PFP for the current depth plane can be generated just by multiplying the 1-D PFP of the previous depth plane by the 1-D DC-PFP. Furthermore, the depth compensation is carried out just by a one-time multiplication of two HS 1-D PFPs, which greatly decreases the depth-compensation time in comparison with the conventional 2-D depth compensation method.
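A minimal sketch of Eq. (10) is shown below: only the HS 1-D B-PFP and DC-PFP are ever stored, and the HS 1-D PFP of every further depth plane is obtained by repeated element-wise multiplication with the DC-PFP. The helpers pfp_1d() and to_half() are the illustrative functions from the earlier sketches, not the authors' code.

```python
# Sketch of the 1-D depth compensation of Eq. (10): T_nx = T_(n-1)x * T_DCx.
# Only hs_b_pfp and hs_dc_pfp are ever stored; the rest are generated on the fly.
def generate_depth_pfps(hs_b_pfp, hs_dc_pfp, n_planes):
    pfps = [hs_b_pfp]                               # plane 1: the baseline PFP
    for _ in range(1, n_planes):
        pfps.append(pfps[-1] * hs_dc_pfp)           # one multiplication per new plane
    return pfps

# Illustrative usage (distances in metres):
# hs_b_pfp  = to_half(pfp_1d(2881, 0.5))            # baseline plane at 500 mm
# hs_dc_pfp = to_half(pfp_1d(2881, -617.0))         # depth-compensating PFP
# hs_pfps   = generate_depth_pfps(hs_b_pfp, hs_dc_pfp, 256)
```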

3.4. 1-D calculation of the CGH pattern with the half-sized 1-D PFPs and 1-D object point arrays

The CGH pattern of the whole 3-D object can be one-dimensionally calculated just by combined use of the HS 1-D PFPs and 1-D object point arrays (1-D OPAs). Here, the 1-D CGH calculation process of the proposed method is composed of three steps: re-arrangement of the 3-D object points into a set of 1-D OPAs, calculation of the CGH patterns for each 1-D OPA, and addition of all those calculated CGH patterns to generate the final CGH pattern for the 3-D object.

3.4.1 Re-arrangement of the 3-D object points into a set of 1-D object point arrays

A 3-D object can be modeled as a set of 2-D images discretely sliced along the z-direction, where each image plane with its own depth is treated as a collection of self-luminous object points of light. Here, for effective 1-D calculation of the CGH patterns for each object point, each depth-sliced 2-D image is re-arranged into a set of 1-D OPAs, and four kinds of object data, the x and y coordinates, depth and intensity information, are stored.

As seen in the example of Fig. 5(a), a 3-D object of the ‘Airplane’ is sliced into 256 depth planes, where each depth plane is composed of 240 1-D OPAs along the x-direction. This means that the 3-D object space of the ‘Airplane’ is decomposed into 61,440 (= 256 × 240) 1-D OPAs. Object points of the 3-D object, however, may not be located in all those 1-D OPAs, which means that many 1-D OPAs may contain no object points at all. In other words, the 3-D object space of the ‘Airplane’ happens to be sparse, so lookups of each object point over the whole 3-D space take time and affect the calculation speed of the CGH patterns.


Fig. 5 Re-arrangement process of the 3-D object space of an ‘Airplane’ into a 2-D array of sets of 1-D OPAs: (a) A 3-D object space modeled as 256 depth planes where each plane is composed of 240 1-D OPAs, (b) Packaged sets of 1-D OPAs for each depth plane.


Thus, as seen in Fig. 5(b), the 1-D OPAs on each depth plane can be densified just by storing only those 1-D OPAs that contain object points. That is, four kinds of object data, the x and y coordinates, depth and intensity information, are packaged into the data structure of the 1-D OPA and stored. The 1-D OPAs on each depth plane and the data for each element of the 1-D OPAs are ordered according to the y and x-coordinates, respectively, as seen in Fig. 5(b).
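Under the assumption of intensity and depth maps of 320 × 240 pixels, the re-arrangement described above can be sketched as follows: every occupied row of every occupied depth plane becomes one 1-D OPA holding the x coordinates and intensities of its object points, and empty rows are skipped entirely. Names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

# Sketch of Sec. 3.4.1: package the sparse 3-D object space into dense 1-D
# OPAs, one per occupied (depth plane, image row) pair, keeping only the
# x, y, depth and intensity data of actual object points.
def build_opas(intensity, depth):
    opas = []
    occupied = intensity > 0
    for q in np.unique(depth[occupied]):            # occupied depth planes only
        ys, xs = np.nonzero(occupied & (depth == q))
        for y in np.unique(ys):                     # one 1-D OPA per occupied row
            row = xs[ys == y]
            opas.append({"q": int(q), "y": int(y),
                         "x": row, "a": intensity[y, row]})
    return opas
```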

3.4.2 1-D calculation of the CGH patterns for the 1-D object point arrays

1-D calculation of the CGH pattern for each 1-D OPA is carried out based on a two-step process: 1) calculation of the 1-D CGHs for each 1-D OPA with the full-sized 1-D PFPs obtained by duplicating the stored half-sized 1-D PFP, and 2) transformation of the calculated 1-D CGHs into their 2-D forms by multiplying them with the corresponding y-factors of the 1-D PFPs. Figure 6 shows the two-step calculation process of the CGH for the 1st 1-D OPA on the 1st depth plane. In the 1st step, which is shown in the red-colored rectangle, the 1-D CGH pattern for the 1st 1-D OPA (Array-1) composed of m object points is calculated. That is, the 1-D PFP of the center object point is shifted according to the displacement of each object point from the center, and the shifted PFPs are multiplied with their corresponding object points’ intensities and added up together.


Fig. 6 Two-step processes of the 2-D CGH calculation for the Array-1 on the 1st depth layer.


As seen in Fig. 6, CGH1, CGH2, …, CGHm, representing the shifted 1-D PFPs for each object point O1, O2, …, Om, are multiplied with their respective intensities IO1, IO2, IO3, …, IOm and then added up together to generate the 1-D CGH pattern for the Array-1, which is denoted as $CGH_1^{1\text{-}D}$ and given by Eq. (11). For the case of the Array-1, the 1-D CGH pattern calculated in the 1st step, $CGH_1^{1\text{-}D}$, is then multiplied with a transposed version of CGH1, and the resulting 2-D CGH pattern for the Array-1, designated as $CGH_1^{2\text{-}D}$, is defined as Eq. (12).

$$CGH_1^{1\text{-}D} = I_{O1}CGH_1 + I_{O2}CGH_2 + I_{O3}CGH_3 + \cdots + I_{Om}CGH_m \tag{11}$$
$$CGH_1^{2\text{-}D} = CGH_1^{T}\, CGH_1^{1\text{-}D} \tag{12}$$

Since the CGH calculation processes for each 1-D OPA are identical, the CGHs for all object point arrays on each depth plane can be generated just by following the same procedure described above for the Array-1.
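The two-step calculation of Eqs. (11) and (12) for a single OPA can be sketched as below: the 1-D CGH is accumulated from intensity-weighted, laterally shifted copies of the full-sized 1-D PFP of the OPA's depth plane, and the 2-D CGH is then obtained as the outer product with the shifted vertical component. The shift of 'disc' pixels per object-point step, the center coordinates and all function and variable names are illustrative assumptions, and the 1-D PFP is assumed long enough that every crop stays inside it.

```python
import numpy as np

# Sketch of Eqs. (11)-(12) for one OPA. 'pfp_x' is the full-sized 1-D PFP of
# the OPA's depth plane (mirrored back from the stored half-sized copy);
# (cx, cy) is the center object point and 'disc' the pixel shift per step.
def cgh_for_opa(opa, pfp_x, h_res=(1200, 1920), disc=3, cx=160, cy=120):
    hy, hx = h_res
    off_x = (pfp_x.size - hx) // 2
    cgh_1d = np.zeros(hx, dtype=complex)
    for x, a in zip(opa["x"], opa["a"]):
        s = off_x - disc * (x - cx)                 # lateral shift of the 1-D PFP
        cgh_1d += a * pfp_x[s:s + hx]               # Eq. (11)
    off_y = (pfp_x.size - hy) // 2
    t = off_y - disc * (opa["y"] - cy)              # common vertical shift of the OPA
    return np.outer(pfp_x[t:t + hy], cgh_1d)        # Eq. (12): 2-D CGH of this OPA

# Final hologram (see Sec. 3.4.3): sum the per-OPA 2-D CGHs, e.g. (illustrative)
# cgh = sum(cgh_for_opa(o, from_half(hs_pfps[o["q"]])) for o in opas)
```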

3.4.3 Generation of the CGH pattern for the whole 3-D object

The calculated CGH patterns for every OPA on all depth planes are added together to generate the final 2-D CGH pattern of the whole 3-D object, which is designated as $CGH_{obj}^{2\text{-}D}$ and given by Eq. (13).

$$CGH_{obj}^{2\text{-}D} = \sum_{i}^{N_{OPA}} CGH_i^{2\text{-}D} \tag{13}$$
where NOPA means the total number of 1-D OPAs over all depth planes of the 3-D object space, and $CGH_i^{2\text{-}D}$ represents the 2-D CGH pattern for the ith 1-D OPA. It is noted here that the addition operations can be implemented in 1-D form in a digital system for acceleration of the computational speed, and the CGH calculations for each OPA can be executed in parallel, which results in a massive reduction of the number of program iterations.

4. Experiments and the results

Figure 7 shows the overall experimental setup of the proposed system, which consists of digital and optical processes. In the digital process of Fig. 7(a), 3-D video data composed of 30 frames of intensity and depth images of the input 3-D scene, denoted as F1, F2, …, F30, are rearranged into a set of 1-D object point arrays (1-D OPAs) and stored in the computer memory. In addition, a pair of HS 1-D B-PFP and DC-PFP is stored in the computer memory. CGH patterns for each video frame are then generated with the proposed method. In the optical process of Fig. 7(b), the holographic videos are reconstructed into 3-D video images on a 4-f lens system with two dichromatic convex lenses (Model: 63-564, Edmund Optics) whose focal lengths are 15 cm.


Fig. 7 Experimental setup of the proposed system: (a) Digital and (b) Optical processes.


In the optical system, a green laser (Stradus 532, Vortran Laser Technology) is used as the light source, which is collimated and expanded with a laser collimator (LC) and beam expander (BE) (Model: HB-4XAR.14, Newport) and then illuminated onto the SLM where the calculated holographic video is loaded. In the experiments, a reflection-type amplitude-modulation mode SLM (Model: HOLOEYE LC-R-1080) with a resolution of 1920 × 1200 pixels and a pixel pitch of 8.1 µm is employed.

As a test scene, an input 3-D scene with 30 frames of video images, in which a ‘Car’ object moves around the fixed ‘House’ along a curved path, is generated with 3DS MAX as seen in Fig. 8. Each 3-D video image has 256 depth planes, each composed of 320 × 240 pixels, the sampling rates in the x and y directions are set to 0.1 mm, and the number of 1-D OPAs for each video frame is 1,637 on average. In the z-direction, the input 3-D scene is nonlinearly quantized to be well matched with the human visual system (HVS) [43].


Fig. 8 Configuration of the test input 3-D video scenario where a ‘Car’ moves around the fixed ‘House’. (a) Top view, (b) Front view.


As seen in the test video scenario of Fig. 8, a ‘Car’ with a size of 4.7 × 1.2 × 1.0 mm3 moves around the fixed ‘House’, whose size is 10.7 × 9.4 × 4.0 mm3 and which is located at around (16 mm, 4 mm) in the x-z coordinates. The ‘Car’ object is located at P1 = (3.4 mm, 8.0 mm) in the 1st frame, and moves to P9 = (11.5 mm, 10.0 mm) and P30 = (29.7 mm, 7.9 mm) in the 9th and 30th frames, respectively. Over the 30 frames, the ‘Car’ object has a moving trajectory of about 30 mm (= 300 × 0.1 mm) and 3 mm (= 30 × 0.1 mm) along the x and z-directions, respectively.

4.1 Calculation of a pair of half-sized 1-D B-PFP and DC-PFP

A pair of the 2-D B-PFP and DC-PFP is generated with Eq. (1), where the resolution of each PFP and the pixel size are given by 2,880 × 1,920 pixels and 8.1 µm, respectively. Moreover, the discretization steps along the horizontal and vertical directions are set to 24.3 µm, which means that the pixel shifts are equally given by 3 pixels in both directions [19]. Thus, to fully cover the hologram patterns for all those object points, the PFP must be shifted by up to 960 (= 320 × 3) and 720 (= 240 × 3) pixels along the horizontal and vertical directions, respectively. The total resolution of the PFP then becomes 2,880 (= 1,920 + 960) × 1,920 (= 1,200 + 720) pixels according to Eq. (2).

In the proposed method, the 2-D PFP pattern is decomposed into the 1-D form of the PFP, which can be calculated with Eqs. (5) and (6) since Tqx(x, zq) and Tqy(y, zq) become identical under the condition of the same object-point sampling rates in both the x and y-directions. Then, the resolution of the 1-D PFP becomes 2,880 (= 1,920 + 960) pixels, which corresponds to 1/1,920 of the size of the 2-D PFP pattern. Here, the calculation times of the 2-D and 1-D forms of the B-PFP and DC-PFP have been found to be around 0.4 s and 1.45 ms, respectively, which means the 1-D forms of the B and DC-PFPs obtain a 276-fold reduction of the calculation time. In addition, the times taken for the direct calculation of the 2-D and 1-D PFPs for every depth plane without using the depth compensation method have been found to be about 51 s and 186 ms, respectively, which also shows that the 1-D forms of the PFPs obtain a 274-fold improvement in the calculation time. Furthermore, based on the nonlinear quantization scheme, the recording distance of the B-PFP, representing the 1st depth plane of the object, is set to 500 mm, whereas the depth-compensation distance is set to −617,000 mm, which is calculated by the non-uniform function considering the HVS [43]. Thus, the 1-D B-PFP and DC-PFP can be calculated with Eq. (5), and the 1-D PFPs for each depth plane can then be obtained with them.

4.2 Generation of a set of half-sized 1-D PFPs for each depth plane

Based on the proposed 1-D depth compensation scheme, a set of 1-D PFPs for all depth planes is generated. The 1-D PFP for the 2nd depth plane (2nd depth 1-D PFP) is generated by multiplying the 1-D B-PFP corresponding to the 1st depth plane by the 1-D DC-PFP, and the 3rd depth 1-D PFP is likewise generated from the 2nd depth 1-D PFP by multiplication with the 1-D DC-PFP. Thus, the mth depth 1-D PFP can be generated from the (m-1)th depth 1-D PFP by multiplication with the 1-D DC-PFP according to Eq. (10). For each of these calculation steps, only 2,880 multiplication operations need to be executed in the computer.

The depth distances for each depth plane can be calculated with Eq. (8), where the depth distance of the 2nd depth layer, z2, is calculated to be about 500.40 mm under the condition of z1 = 500 mm and zc = −617,000 mm. Following the same equation, when z1 is taken as the depth distance of the 2nd depth layer, the distance of the 3rd depth layer is calculated to be 500.81 mm. Figure 9 shows the reconstruction results of the 2-D PFPs of the 1st, 2nd and 3rd depth planes, where the 2-D PFPs are restored from their corresponding 1-D PFPs generated with the 1-D B-PFP and 1-D DC-PFP based on the 1-D DC method. As seen in Fig. 9(a), the three focused point images of those 2-D PFPs have been reconstructed right on their depth planes of 500 mm, 500.40 mm and 500.81 mm, respectively, whereas their defocused counterparts have been reconstructed on the depth plane of 500.50 mm. This clearly shows that object points are reconstructed in focus right at their depth distances, which confirms the feasibility of the proposed 1-D DC method.
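The depth distances quoted above follow directly from Eq. (8); the short numerical check below, a sketch only, reproduces them with z1 = 500 mm and zc = −617,000 mm.

```python
# Numerical check of Eq. (8) with the experimental values of Sec. 4.2.
z1, zc = 500.0, -617_000.0                 # baseline and compensation distances (mm)
z2 = 1.0 / (1.0 / z1 + 1.0 / zc)           # ~500.4 mm, the 2nd depth plane
z3 = 1.0 / (1.0 / z2 + 1.0 / zc)           # ~500.8 mm, the 3rd depth plane
print(round(z2, 2), round(z3, 2))
```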


Fig. 9 Reconstruction results of the 1st, 2nd and 3rd 2-D PFPs generated with the 1-D DC scheme: (a) three focused point images reconstructed on their depth distances of 500mm, 500.40mm and 500.81mm, (b) defocused ones reconstructed on the different depth distance of 500.50mm, respectively.


4.3 1-D calculation of the CGH pattern of the test scene with the half-sized 1-D PFPs and 1-D object point arrays

Initially, the 30 frames of intensity and depth images representing the input 3-D scene are rearranged into a set of 1-D object point arrays (OPAs) for calculating the CGH patterns of the test scene, as seen in Fig. 10. Figures 10(a) and 10(b) show the 30 frames of 2-D intensity and depth images of the input test scenario and the memory structures of the rearranged OPAs for each depth plane, where each object element of the 1-D OPA is composed of four-dimensional information: the x and y coordinates, intensity and depth values.


Fig. 10 Input data of the test scene: (a) 2-D intensity and depth images of 30 frames, (b) sets of rearranged 1-D OPAs with their x, y coordinates, intensity and depth values of the test scene.


Since only the object point arrays containing object points are stored, the required memory capacity is reduced down to 1,320,000 (= 4 × 11,000 × 30) bytes, where 4 means four bytes for each element of the 1-D OPA, and 11,000 and 30 represent the average number of object points in each video frame and the number of video frames, respectively. On the other hand, 4,608,000 (= 240 × 320 × 2 × 30) bytes of memory would be needed for directly storing those intensity and depth images of the input 3-D scene. This means that a 3.5-fold reduction of the memory capacity has been achieved by rearranging the input scene into a set of OPAs in the proposed method. Furthermore, the average number of iterations traversing each object point for one frame becomes only 11,000 in the proposed method, whereas 19,660,800 (= 240 × 320 × 256) iterations would be required in the conventional method where CGH patterns are directly calculated with those intensity and depth images of the input 3-D scene. This results in a 1,787-fold reduction of the number of computational iterations and a correspondingly massive shortening of the CGH calculation time in the proposed method.

The CGH pattern of the test 3-D scene can be obtained by calculating the CGH patterns for each 1-D OPA and then adding them all together. The CGH pattern for each 1-D OPA is calculated with a set of 1-D PFPs obtained by combined use of the 1-D B-PFP and DC-PFP based on the proposed 1-D depth compensation scheme, where the CGH calculation process for one OPA is composed of two steps: 1) generation of the 1-D CGHs and 2) transformation of them into the 2-D forms of the CGHs.

In the first step, the intensity of each object point in the OPA is multiplied with its corresponding 1-D PFP, which has been laterally shifted depending on the point’s displacement from the center. For instance, the first element of the Array-1 on the 59th depth plane has the four-dimensional data of 30, 201, 198 and 59, which means that the object point with the intensity value of 198 is located at the x and y-coordinates of (30, 201) on the 59th depth plane. Thus, the 1-D PFP for the 59th depth plane is shifted along the x-coordinate by −390 (= (30 − 160) × 3) pixels, where 160 denotes the x-coordinate of the center object point, and multiplied with the intensity value of 198 to generate the 1-D CGH of this object point. The 1-D CGH for the Array-1 is then obtained just by adding up the 1-D CGHs calculated for every object point in the Array-1 following the same procedure.

In the second step, the generated 1-D CGH is multiplied with its vertical component and transformed into the 2-D form of the CGH. All object points in the Array-1 have the same y-coordinate value of 201, so the vertical components for all those object points are also the same. The vertical component of the Array-1 can be obtained just by shifting the 1-D PFP by 243 (= (201 − 120) × 3) pixels and transposing it; it is then multiplied with the 1-D CGH generated in the first step to produce the 2-D CGH as seen in Fig. 11(a). Hence, the final CGH pattern for the whole test 3-D scene can be obtained by adding together the CGH patterns calculated for all the 1-D OPAs, whose result is shown in Fig. 11(b).
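The pixel shifts worked out above can be checked with the small sketch below (center object point at (160, 120), disc = 3); the values are taken from the example in the text, not from the authors' code.

```python
# Pixel-shift check for the first element of Array-1 on the 59th depth plane.
disc, cx, cy = 3, 160, 120
x_shift = (30 - cx) * disc      # = -390 pixels along the x-direction
y_shift = (201 - cy) * disc     # = +243 pixels for the common vertical component
```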


Fig. 11 2-D CGH patterns calculated for (a) the Array-1 and (b) all 1-D OPAs of the test scene.


5. Performance analysis of the proposed method

5.1 Analysis of the memory capacity and computational complexity

Table 1 shows the comparison results of the conventional NLUT, S-LUT, 2-D DC-based, Sub-PFP-based, AC-LUT and Mini-LUT methods with the proposed 1-D NLUT in terms of memory capacity and computational complexity. In the experiments, the width and length of the hologram and object images have been set to Wh = 1920 and Lh = 1200, and Wo = 320 and Lo = 240, respectively. In addition, the number of depth planes of the input 3-D scene has been set to Q = 256. Experiments with the same test scenario are carried out for each of those conventional and proposed methods. The memory capacities of the conventional NLUT, S-LUT, 2-D DC-based, Sub-PFP-based, AC-LUT and Mini-LUT methods have been found to be about 6,300 MB, 1,200 MB, 49.22 MB, 8.75 MB, 4.69 MB and 117.65 KB, respectively, whereas the memory size of the proposed method has been found to be only 17.5 KB, which is the minimum value among them. These results show that only the Mini-LUT and the proposed method have KB-order memory sizes, whereas the other methods have MB-order memory sizes. That is, the memory capacity of the proposed method has been reduced by 368,640, 70,217, 5,734, 512, 274 and 6.72-fold, respectively, when compared with those of the original NLUT, S-LUT, 2-D DC-based, Sub-PFP-based, AC-LUT and Mini-LUT.


Table 1. Comparison results of the memory capacity and computational complexity among the conventional and proposed methods.

Here, in the original NLUT, a set of 2-D PFPs for all depth planes is stored, so its memory capacity is given by (Wh + Wo) × (Lh + Lo) × Q as seen in Table 1. In the 2-D DC-based method, only a pair of 2-D B-PFP and DC-PFP is stored, so its memory capacity becomes (Wh + Wo) × (Lh + Lo) × 2. In the S-LUT, a pair of horizontal and vertical 1-D light-modulation factors for each depth plane is stored, so its required memory size increases as the number of depth planes increases. Thus, as seen in Table 1, its memory size becomes (WhWo + LhLo) × Q, where Wh and Lh mean the lengths of the horizontal and vertical 1-D light-modulation factors, and Wo and Lo denote the resolution of the object image in the horizontal and vertical directions, respectively. Moreover, in the AC-LUT, the depth data are extracted from the x and y-directional light-modulation factors, and only a pair of light-modulation factors for one depth plane is stored, so its memory capacity can be greatly reduced. In the Sub-PFP-based method, each 2-D PFP is decomposed into a pair of x and y-directional 1-D Sub-PFPs, where each 1-D Sub-PFP has the length Wh + Wo, so its memory capacity becomes (Wh + Wo) × 2 × Q. Furthermore, in the Mini-LUT, the memory capacity of the conventional line-type PFP can be reduced down to Q × R/r, where R means the length of the 1-D line-type PFP, by using the principal component analysis method.

On the other hand, in the proposed method, each 2-D PFP is decomposed into a pair of 1-D PFPs and only halves of those 1-D PFPs are stored. In addition, by employing the 1-D DC scheme, only a pair of HS 1-D B-PFP and DC-PFP is stored. Thus, the memory capacity of the proposed method is given by (Lh + Lo)/2, which can be minimized down to a few KB regardless of the number of depth planes, as seen in Table 1.
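For reference, the sketch below evaluates the memory expressions of Table 1 with the experimental parameters; it counts stored samples only, since the byte size per sample depends on the stored precision, which Table 1 leaves implicit, so the absolute values are indicative rather than the exact figures quoted above.

```python
# Stored-sample counts following the Table 1 expressions (Wh = 1920, Lh = 1200,
# Wo = 320, Lo = 240, Q = 256); precision/byte-per-sample factors are omitted.
Wh, Lh, Wo, Lo, Q = 1920, 1200, 320, 240, 256
mem_nlut     = (Wh + Wo) * (Lh + Lo) * Q    # one 2-D PFP per depth plane
mem_2d_dc    = (Wh + Wo) * (Lh + Lo) * 2    # 2-D B-PFP and DC-PFP only
mem_s_lut    = (Wh * Wo + Lh * Lo) * Q      # 1-D factors for every depth plane
mem_sub_pfp  = (Wh + Wo) * 2 * Q            # two 1-D Sub-PFPs per depth plane
mem_proposed = (Lh + Lo) // 2               # proposed method, per Table 1
```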

Hence, for the case of the computational complexity, the overall CGH calculation times of the S-LUT and AC-LUT as well as the proposed method are given by O(NLh + WhLh), whereas those of the original NLUT, 2-D DC-based, Sub-PFP-based and Mini-LUT methods become O(WhLhN). In the experiments with the same test scenario, the overall CGH calculation times of the conventional original NLUT, S-LUT, 2-D DC-based, Sub-PFP-based, AC-LUT and Mini-LUT have been found to be about 246.87 s, 31 s, 248.61 s, 247.16 s, 26.06 s and 247.59 s, respectively, whereas the calculation time of the proposed method has been found to be 24.09 s. These results show that the overall calculation time of the proposed method has been reduced by 10.25, 1.29, 10.32, 10.28, 1.06 and 10.28-fold, respectively, when compared with those of the original NLUT, S-LUT, 2-D DC-based, Sub-PFP-based, AC-LUT and Mini-LUT.

Here, the original NLUT, 2-D DC-based, Sub-PFP-based and Mini-LUT methods calculate their CGH patterns in 2-D form, which means that the intensity of each object point needs to be multiplied with its 2-D PFP and that the addition processes for those 2-D PFPs are also implemented in 2-D form; thus their computational complexities become O(WhLhN), and their resultant overall calculation times turn out to be much longer than those of the other methods with computational complexities of O(NLh + WhLh). On the other hand, in the conventional S-LUT, AC-LUT and proposed 1-D NLUT, the CGHs are calculated in 1-D form, so the multiplication and addition operations in the CGH calculation for each object point are implemented in 1-D form and then transformed into the 2-D form, and the computational complexity can be separated into two parts, generation of the 1-D CGHs and transformation of them into the 2-D form, giving O(NLh + WhLh).

It must be noted here that the proposed method uses only the smallest HS 1-D PFPs, which greatly reduces their loading time in comparison with those of the S-LUT and AC-LUT. In addition, in the 1-D NLUT the CGH calculation can be carried out directly with the HS 1-D PFPs, unlike in the S-LUT and AC-LUT, where longitudinal components need to be calculated and synthesized with the horizontal and vertical components during the CGH calculation process, which makes their calculation times longer than that of the proposed method.

Figure 12 shows the required memory capacities of the conventional and proposed methods depending on the number of depth layers in the range of Q = 100 to 1,000. As seen in Fig. 12(a), where the memory capacity is drawn in the range of 0 MB to 10^4 MB with a scale of 10^3 MB, the memory sizes of the original NLUT and S-LUT have been found to be around 2,500 MB and 500 MB even for the case of Q = 100, and increase sharply as the number of depth layers increases, whereas those of the other methods are much smaller than those of the original NLUT and S-LUT. Thus, in Figs. 12(b) and 12(c), the memory capacity dependences on the number of depth layers have been re-drawn in the range of 0 KB to 80,000 KB with a scale of 10^4 KB, and 0 KB to 200 KB with a scale of 20 KB, respectively. As seen in Fig. 12(b), the required memory sizes of the Sub-PFP-based and Mini-LUT methods have also been found to depend on the number of depth planes, even though the absolute value of the Mini-LUT has been found to be very small. Between Q = 100 and 1,000, the memory capacity of the Sub-PFP-based method increases from 3.4 MB to around 34 MB, while that of the Mini-LUT increases from 46 KB to around 460 KB. In Fig. 12(c), the differences in memory capacity between the Mini-LUT and 1-D NLUT and their dependences on the number of depth layers can be viewed more clearly. That is, the memory capacity of the Mini-LUT depends on the number of depth layers, whereas that of the proposed method remains fixed at 17.5 KB regardless of the number of depth layers.


Fig. 12 Memory capacity dependence on the number of depth layers of the conventional and proposed methods in the memory ranges of (a) 0MB to 3,000MB with a scale of 1000MB, (b) 0KB to 10,000KB with a scale of 1000KB and (c) 0KB to 200KB with a scale of 20KB.


Hence, as seen in Figs. 12(b) and 12(c), the memory capacities of the conventional 2-D DC-based and AC-LUT methods have also been found to be fixed at certain values irrespective of the number of depth layers, just like the proposed method. In the 2-D DC-based method, only a pair of 2-D PFPs is stored, requiring a memory of 49.22 MB, whereas in the AC-LUT, pairs of 1-D light-modulation factors for each of the x and y coordinates are stored, so its memory size, although independent of the number of depth layers, amounts to a few MB. On the other hand, in the proposed method, only a pair of HS 1-D B-PFP and DC-PFP is stored with the smallest memory of 17.5 KB regardless of the number of depth layers, as mentioned above.

In addition, Fig. 13 shows the overall calculation times of the conventional and proposed methods. CGH patterns for the 30-frame test video are calculated with each of those conventional and proposed methods under Matlab 2017 running on a personal computer with a 3.00 GHz CPU and 64.0 GB of memory. As seen in Fig. 13(a), which is drawn in the time range of 0 to 350 s with a scale of 50 s, the calculation times of the original NLUT, 2-D DC-based, Sub-PFP-based and Mini-LUT methods have been estimated to be 246.87 s, 248.61 s, 247.16 s and 247.59 s, respectively, whereas the other methods have been shown to take much shorter times, down to around 25 s. For a more detailed analysis of the S-LUT, AC-LUT and proposed methods, Fig. 13(a) has been re-drawn into Fig. 13(b) in the time range of 0 to 50 s with a scale of 5 s. As seen in Fig. 13(b), among these three the S-LUT has been found to take the longest CGH calculation time, due to the extra time needed for reading the LUT of each depth layer, where the average calculation time (ACT) for each video frame and the total calculation time (TCT) for the whole 30 video frames have been estimated to be about 31 s and 930 s, respectively. Moreover, the ACT and TCT of the AC-LUT have been estimated to be 26 s and 810 s, respectively, which shows that the AC-LUT is faster than the S-LUT in calculating the CGH patterns.


Fig. 13 Comparison results of the ACT and TCT for each of the conventional and proposed methods for the test scenario of 30 frames in the time ranges of (a) 0 to 350s with a scale of 50s, (b) 0 to 50s with a scale of 5s.


In the AC-LUT, the longitudinal terms are extracted as complex numbers for each depth layer, so no extra time is needed for calculating the longitudinal terms; however, the horizontal and vertical terms require exponential calculations related to the longitudinal terms during the CGH calculations, which increases the in-line calculation time. On the other hand, in the proposed method, the ACT and TCT have been found to be the minimum values among them, around 24 s and 720 s, respectively, because the CGH calculations are carried out fully one-dimensionally based on the three simple operations of shifting, multiplication and adding with the HS 1-D PFPs.

5.2 Reconstruction of the holographic 3-D video

Figure 14 shows the computationally and optically-reconstructed images of the 1st, 11th, 21st and 30th frames from the CGH patterns generated with the proposed method for the test scenario. The 30 frames of computationally and optically reconstructed 3-D scenes are also compressed into the two-second video files of Visualization 1 and Visualization 2, respectively, and included in Fig. 14. Here, the reconstruction distance has been set to 50 cm, which is equal to the depth distance of the ‘House’ object. As seen in the computationally-reconstructed object images of Figs. 14(a1)-14(a4), the ‘Car’ object looks a little blurred in both the 11th and 21st frames, shown in Figs. 14(a2) and 14(a3), respectively, since it is reconstructed in front of the focused ‘House’ object at the depth distance of 50 cm, whereas the ‘Car’ objects of the 1st and 30th frames shown in Figs. 14(a1) and 14(a4) are reconstructed in focus since they are located close to the ‘House’ object.


Fig. 14 Computationally and optically-reconstructed input scene images of the 1st, 11th, 21st and 30th frames from the CGHs generated with the proposed method for the test video scenario (Visualization 1, Visualization 2): (a) computationally-reconstructed input scene images, (b) optically-reconstructed input scene images.


In addition, Figs. 14(b1)-14(b4) show the optically-reconstructed results of the CGH patterns of the 1st, 11th, 21st and 30th frames of the test scenario, respectively, where the object images have been captured with a CCD camera and compressed into a video. In the optical reconstruction, the ‘Car’ object of the 11th and 21st frames has also been found to be slightly out of focus when compared with the ‘House’ object, while those of the 1st and 30th frames have been found to be in focus, since the 3-D scenes have been optically reconstructed by focusing on the fixed ‘House’ image at the depth distance of 50 cm.

6. Conclusions

In the proposed 1-D NLUT method, only a pair of HS 1-D B-PFP and DC-PFP needs to be stored for the CGH calculation, which enables the memory size of the proposed method to be minimized down to a few KB regardless of the number of depth planes. In addition, all calculations are fully one-dimensionally carried out with a set of HS 1-D PFPs, which also allows minimization of the overall CGH calculation time. Experimental results show that the proposed method has the shortest calculation time and the least memory size in comparison with several conventional methods, which confirms its feasibility for practical applications.

Funding

This work was partially supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC support program (IITP-2017-01629) supervised by the IITP, Basic Science Research Program through the NRF of Korea funded by the Ministry of Education (No. 2018R1A6A1A03025242) and Research Grant of Kwangwoon University in 2019.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]   [PubMed]  

2. C. J. Kuo and M. H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).

3. T. C. Poon, Digital Holography and Three-dimensional Display (Springer Verlag, 2007).

4. X. Xu, Y. Pan, P. P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in Proceeding of IEEE Conference on Information Photonics and Optical Communications (IEEE, 2011), pp. 1–4. [CrossRef]  

5. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009). [CrossRef]   [PubMed]  

6. H. Yoshikawa and J. Tamai, “Holographic image compression by motion picture coding,” Proc. SPIE 2652, 2–10 (1996). [CrossRef]  

7. T. J. Naughton, Y. Frauel, B. Javidi, and E. Tajahuerce, “Compression of digital holograms for three-dimensional object reconstruction and recognition,” Appl. Opt. 41(20), 4124–4132 (2002). [CrossRef]   [PubMed]  

8. A. Shortt, T. J. Naughton, and B. Javidi, “Compression of digital holograms of three-dimensional objects using wavelets,” Opt. Express 14(7), 2625–2630 (2006). [CrossRef]   [PubMed]  

9. A. E. Shortt, T. J. Naughton, and B. Javidi, “Histogram approaches for lossy compression of digital holograms of three-dimensional objects,” IEEE Trans. Image Process. 16(6), 1548–1556 (2007). [CrossRef]   [PubMed]  

10. E. Darakis and T. J. Naughton, “Compression of digital hologram sequences using MPEG-4,” Proc. SPIE 7358, 735811 (2009). [CrossRef]  

11. T. Senoh, K. Wakunami, Y. Ichihashi, H. Sasaki, O. Ryutaro, and K. Yamamoto, “Multiview image and depth map coding for holographic TV system,” Opt. Eng. 53(11), 112302 (2014). [CrossRef]  

12. M. W. Kwon, S. C. Kim, S. E. Yoon, Y. S. Ho, and E. S. Kim, “Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes,” Opt. Express 23(3), 2101–2120 (2015). [CrossRef]   [PubMed]  

13. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. 52(1), A201–A209 (2013). [CrossRef]   [PubMed]  

14. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

15. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]   [PubMed]  

16. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]   [PubMed]  

17. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015). [CrossRef]   [PubMed]  

18. H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” in Digital Holography and Three-Dimensional Imaging, 2009 OSA Technical Digest Series (Optical Society of America, 2009), pp. DWC4.

19. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast Computation of Fresnel Holograms employing Difference,” Proc. SPIE 3956, 48–55 (2000). [CrossRef]  

20. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013). [CrossRef]   [PubMed]  

21. M. W. Kwon, S. C. Kim, and E. S. Kim, “Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes,” Appl. Opt. 55(3), A22–A31 (2016). [CrossRef]   [PubMed]  

22. M. W. Kwon, S. C. Kim, S. E. Yoon, Y. S. Ho, and E. S. Kim, “Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes,” Opt. Express 23(3), 2101–2120 (2015). [CrossRef]   [PubMed]  

23. H. Sato, T. Kakue, Y. Ichihashi, Y. Endo, K. Wakunami, R. Oi, K. Yamamoto, H. Nakayama, T. Shimobaba, and T. Ito, “Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration,” Sci. Rep. 8(1), 1500 (2018). [CrossRef]   [PubMed]  

24. H. Niwase, T. Naoki, A. Hiromitsu, M. Yuki, F. Masato, N. Hirotaka, K. Takashi, S. Tomoyoshi, and I. Tomoyoshi, “Real-time electro holography using a multiple-graphics processing unit cluster system with a single spatial light modulator and the InfiniBand network,” Opt. Eng. 55(9), 093108 (2016). [CrossRef]  

25. D. W. Kim, Y. H. Lee, and Y. H. Seo, “High-speed computer-generated hologram based on resource optimization for block-based parallel processing,” Appl. Opt. 57(13), 3511–3518 (2018). [CrossRef]   [PubMed]  

26. Y. Kimura, R. Kawaguchi, T. Sugie, T. Kakue, T. Shimobaba, and T. Ito, “Circuit design of special-purpose computer for holography HORN-8 using eight Virtex-5 FPGAs,” in Proc. 3D Syst. Appl. S3–2 (2015).

27. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, Y. Endo, T. Kakue, T. Shimobaba, and T. Ito, “High-performance parallel computing for next-generation holographic imaging,” Nature Electronics 1(4), 254–259 (2018). [CrossRef]  

28. H. G. Kim and Y. M. Ro, “Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object,” Opt. Express 25(24), 30418–30427 (2017). [CrossRef]   [PubMed]  

29. T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express 25(1), 77–87 (2017). [CrossRef]   [PubMed]  

30. D. Blinder and P. Schelkens, “Accelerated computer generated holography using sparse bases in the STFT domain,” Opt. Express 26(2), 1461–1473 (2018). [CrossRef]   [PubMed]  

31. S. C. Kim, J. H. Yoon, and E. S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. 47(32), 5986–5995 (2008). [CrossRef]   [PubMed]  

32. S. C. Kim and E. S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009). [CrossRef]   [PubMed]  

33. S. C. Kim, X. B. Dong, M. W. Kwon, and E. S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express 21(9), 11568–11584 (2013). [CrossRef]   [PubMed]  

34. X. B. Dong, S. C. Kim, and E. S. Kim, “MPEG-based novel look-up table for rapid generation of video holograms of fast-moving three-dimensional objects,” Opt. Express 22(7), 8047–8067 (2014). [CrossRef]   [PubMed]  

35. X. B. Dong, S. C. Kim, and E. S. Kim, “Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space,” Opt. Express 22(14), 16925–16944 (2014). [CrossRef]   [PubMed]  

36. H. K. Cao, S. F. Lin, and E. S. Kim, “Accelerated generation of holographic videos of 3-D objects in rotational motion using a curved hologram-based rotational-motion compensation method,” Opt. Express 26(16), 21279–21300 (2018). [CrossRef]   [PubMed]  

37. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). [CrossRef]   [PubMed]  

38. S. Lee, H. Wey, D. Nam, D. Park, and C. Kim, “59.3: Fast hologram pattern generation by removing concentric redundancy,” SID Symp. Dig. Tech. Pap. 43(1), 800–803 (2012). [CrossRef]  

39. T. Nishitsuji, T. Shimobaba, T. Kakue, N. Masuda, and T. Ito, “Fast calculation of computer-generated hologram using the circular symmetry of zone plates,” Opt. Express 20(25), 27496–27502 (2012). [CrossRef]   [PubMed]  

40. S. C. Kim, J. M. Kim, and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef]   [PubMed]  

41. S. C. Kim and E. S. Kim, “Fast one-step calculation of holographic videos of three-dimensional scenes by combined use of baseline and depth-compensating principal fringe patterns,” Opt. Express 22(19), 22513–22527 (2014). [CrossRef]   [PubMed]  

42. S. C. Kim, X. B. Dong, and E. S. Kim, “Accelerated one-step generation of full-color holographic videos using a color-tunable novel-look-up-table method for holographic three-dimensional television broadcasting,” Sci. Rep. 5(1), 14056 (2015). [CrossRef]   [PubMed]  

43. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). [CrossRef]   [PubMed]  

44. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. Tanjung, C. Tan, and T. C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17(21), 18543–18555 (2009). [CrossRef]   [PubMed]  

45. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). [CrossRef]   [PubMed]  

46. C. Gao, J. Liu, X. Li, G. Xue, J. Jia, and Y. Wang, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015). [CrossRef]   [PubMed]  

47. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Review of fast calculation techniques for computer-generated holograms with the point-light-source-based model,” IEEE Trans. Industr. Inform. 13(5), 2447–2454 (2017). [CrossRef]  

48. D. W. Kwon, S. C. Kim, and E. S. Kim, “Efficient digital hologram generation using reflection symmetry of principle fringe pattern,” in Proceedings of IEEE Conference on Information and Communication Technology Convergence (ICTC) (IEEE, 2010), pp. 197–198.

Supplementary Material (2)

Name             Description
Visualization 1  Optically reconstructed results of the test video scenario.
Visualization 2  Computationally reconstructed results of the test video scenario.


Figures (14)

Fig. 1 (a) Geometric configuration for generating the PFP for the center object point O(x0, y0, zq) on the qth depth plane, and (b) operational diagram of the NLUT-based CGH generation process for the three object points A(x1, y1, zq), B(x2, y2, zq) and C(-x3, -y3, zq) located on the depth layer zq, using the PFP of the center object point based on the shift-invariance property of the PFP.
Fig. 2 Overall block diagram of the proposed full-scale 1-D NLUT method composed of a three-step process.
Fig. 3 A 2-D PFP and its quarter- and half-sized 1-D versions based on its concentric-symmetry property: (a) 2-D PFP, (b) quarter part of the 2-D PFP, and (c) half-sized 1-D PFP.
Fig. 4 (a) HS 1-D B-PFP and (b) HS 1-D DC-PFP with focal lengths of z1 and zc, respectively, and (c) HS 1-D PFP2 with a new focal length of z2 generated by multiplying the PFPs of (a) and (b).
Fig. 5 Rearrangement process of the 3-D object space of an ‘Airplane’ into a 2-D array of sets of 1-D OPAs: (a) a 3-D object space modeled as 256 depth planes, where each plane is composed of 240 1-D OPAs, and (b) packaged sets of 1-D OPAs for each depth plane.
Fig. 6 Two-step process of the 2-D CGH calculation for Array-1 on the 1st depth layer.
Fig. 7 Experimental setup of the proposed system: (a) digital and (b) optical processes.
Fig. 8 Configuration of the test input 3-D video scenario in which a ‘Car’ moves around the fixed ‘House’: (a) top view, (b) front view.
Fig. 9 Reconstruction results of the 1st, 2nd and 3rd 2-D PFPs generated with the 1-D DC scheme: (a) three focused point images reconstructed at their respective depth distances of 500 mm, 500.40 mm and 500.81 mm, and (b) defocused point images reconstructed at the common depth distance of 500.50 mm (a numerical consistency check follows this figure list).
Fig. 10 Input data of the test scene: (a) 2-D intensity and depth images of 30 frames, and (b) sets of rearranged 1-D OPAs with their x, y coordinates, intensity and depth values of the test scene.
Fig. 11 2-D CGH patterns calculated for (a) Array-1 and (b) all 1-D OPAs of the test scene.
Fig. 12 Memory-capacity dependence on the number of depth layers for the conventional and proposed methods in the memory ranges of (a) 0 MB to 3,000 MB with a scale of 1,000 MB, (b) 0 KB to 10,000 KB with a scale of 1,000 KB, and (c) 0 KB to 200 KB with a scale of 20 KB.
Fig. 13 Comparison of the ACT and TCT for each of the conventional and proposed methods for the 30-frame test scenario in the time ranges of (a) 0 to 350 s with a scale of 50 s, and (b) 0 to 50 s with a scale of 5 s.
Fig. 14 Computationally and optically reconstructed input scene images of the 1st, 11th, 21st and 30th frames from the CGHs generated with the proposed method for the test video scenario (Visualization 1, Visualization 2): (a) computationally reconstructed input scene images, and (b) optically reconstructed input scene images.
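As a rough consistency check on the 1-D depth-compensation scheme illustrated in Fig. 9, one can assume the nth depth plane follows the thin-lens relation $1/z_n = 1/z_1 + (n-1)/z_c$ (see the Equations section below); the value of $z_c$ below is back-computed from the first two quoted reconstruction distances and is an inference, not a parameter stated in this listing: $1/z_c = 1/z_2 - 1/z_1 = 1/500.40\,\mathrm{mm} - 1/500\,\mathrm{mm} \approx -1.60\times10^{-6}\,\mathrm{mm}^{-1}$, so $z_c \approx -6.26\times10^{5}\,\mathrm{mm}$; then $1/z_3 = 1/z_1 + 2/z_c$ gives $z_3 \approx 500.80\,\mathrm{mm}$, in agreement with the roughly 500.81 mm reconstruction distance quoted for the third PFP.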

Tables (1)

Table 1 Comparison results of the memory capacity and computational complexity among the conventional and proposed methods.

Equations (13)

(1)   $T_q(x, y, z_q) \propto \exp\!\left[\frac{ik}{2z_q}\left\{(x - x_0)^2 + (y - y_0)^2\right\}\right]$

(2)   Horizontal resolution of the PFP: $[\,h_x + (\mathrm{disc} \times o_x)\,]$, Vertical resolution of the PFP: $[\,h_y + (\mathrm{disc} \times o_y)\,]$

(3)   $I(x, y) = \sum_{q=1}^{Q}\sum_{p=1}^{P} a_p\, T_q(x - x_p,\, y - y_p)$

(4)   $T_q(x, y, z_q) \propto \exp\!\left[\frac{ik}{2z_q}(x - x_0)^2\right]\exp\!\left[\frac{ik}{2z_q}(y - y_0)^2\right]$

(5)   $T_q^{x}(x, z_q) \propto \exp\!\left[\frac{jk}{2z_q}(x - x_0)^2\right]$

(6)   $T_q^{y}(y, z_q) \propto \exp\!\left[\frac{jk}{2z_q}(y - y_0)^2\right]$

(7)   $\begin{bmatrix} 1 & 0 \\ -1/z_2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -1/z_1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ -1/z_c & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -(1/z_1 + 1/z_c) & 1 \end{bmatrix}$

(8)   $1/z_2 = 1/z_1 + 1/z_c$

(9)   $T_2^{x} = T_B^{x}\, T_{DC}^{x} = \exp\!\left[\frac{jk}{2z_1}(x - x_0)^2\right]\exp\!\left[\frac{jk}{2z_c}(x - x_0)^2\right] = \exp\!\left[jk\left(\frac{1}{2z_1} + \frac{1}{2z_c}\right)(x - x_0)^2\right] = \exp\!\left[\frac{jk}{2z_2}(x - x_0)^2\right]$

(10)  $T_n^{x} = T_B^{x}\left(T_{DC}^{x}\right)^{n-1} = T_{n-1}^{x}\, T_{DC}^{x}$

(11)  $CGH_1^{1D} = I_{O1}\, CGH_1 + I_{O2}\, CGH_2 + I_{O3}\, CGH_3 + \cdots + I_{Om}\, CGH_m$

(12)  $CGH_1^{2D} = CGH_1^{T}\, CGH_1^{1D}$

(13)  $CGH_{obj}^{2D} = \sum_{i=1}^{N_{OPA}} CGH_i^{2D}$
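To illustrate how Eqs. (5)–(13) fit together, the following minimal NumPy sketch generates a set of 1-D PFPs from a baseline and a depth-compensating PFP, accumulates a 1-D CGH for one object point array (OPA), and expands it to a 2-D CGH by an outer product. All parameters (wavelength, pixel pitch, resolution, z_1, z_c) and helper names (pfp_for_plane, cgh_1d, cgh_2d) are illustrative assumptions rather than the values or code used in the paper, and full-width arrays stand in for the half-sized symmetric storage described in the text.

import numpy as np

# --- Assumed parameters (not the paper's experimental values) --------------
wavelength = 532e-9                  # laser wavelength [m]
k = 2.0 * np.pi / wavelength         # wavenumber
pitch = 8e-6                         # hologram pixel pitch [m]
hx = 1024                            # 1-D hologram resolution
z1 = 0.500                           # baseline depth plane z_1 [m]
zc = -625.5                          # depth-compensating focal length z_c [m]

x = (np.arange(hx) - hx // 2) * pitch   # full-width 1-D coordinate axis

# The paper stores only half-sized 1-D PFPs by symmetry; full arrays are kept here.
T_B  = np.exp(1j * k / (2.0 * z1) * x**2)   # 1-D baseline PFP, Eq. (5) form
T_DC = np.exp(1j * k / (2.0 * zc) * x**2)   # 1-D depth-compensating PFP

def pfp_for_plane(n):
    """Thin-lens property, Eq. (10): T_n = T_B * (T_DC)^(n-1)."""
    return T_B * T_DC ** (n - 1)

def cgh_1d(points, n):
    """1-D CGH of one OPA on depth plane n, Eq. (11): intensity-weighted
    sum of shifted 1-D PFPs (shift-invariance property)."""
    T = pfp_for_plane(n)
    cgh = np.zeros(hx, dtype=complex)
    for x_idx, intensity in points:
        cgh += intensity * np.roll(T, x_idx)
    return cgh

def cgh_2d(points, n, y_idx):
    """2-D CGH of one OPA, Eq. (12): outer product of the (shifted) vertical
    1-D PFP with the horizontal 1-D CGH of that OPA."""
    Ty = np.roll(pfp_for_plane(n), y_idx)
    return np.outer(Ty, cgh_1d(points, n))

# Eq. (13): sum the 2-D CGHs of all OPAs of the object.
opas = [
    (1, 0, [(10, 0.8), (40, 0.5)]),   # (depth plane n, y shift, [(x shift, intensity), ...])
    (3, 5, [(-20, 1.0)]),
]
hologram = sum(cgh_2d(pts, n, y) for n, y, pts in opas)
print(hologram.shape)   # (1024, 1024)

Real-time operation would replace these Python loops with GPU kernels and the half-sized PFP storage described in the text; the sketch only mirrors the algebraic structure of the equations.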