
Implementation of full-color holographic system using non-uniformly sampled 2D images and compressed point cloud gridding


Abstract

A multiple-camera holographic system using non-uniformly sampled 2D images and compressed point cloud gridding (C-PCG) is proposed. High-quality digital single-lens reflex cameras are used to acquire the depth and color information of real scenes, which are then virtually reconstructed as uniform point clouds using a non-uniform sampling method. The C-PCG method generates efficient depth grids by classifying groups of object points with the same depth values in the red, green, and blue channels. Holograms are obtained by applying fast Fourier transform diffraction calculations to the grids. Compared to wave-front recording plane methods, the quality of the reconstructed images is substantially better, and the computational complexity is dramatically reduced. The feasibility of our method is confirmed both numerically and optically.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holography has been considered a perfect three-dimensional (3D) display technique because it can record and reproduce all 3D information, such as motion parallax, convergence, occlusion, and accommodation. Due to the rapid development of computing technology over the past few decades, optical holography now features digital processing. Lohmann and Paris introduced the computer-generated hologram (CGH) in the mid-1960s [1]. In a CGH, digitally generated holographic interference patterns are created from a 3D object, and an image is reconstructed using optical techniques. CGH has a significant advantage [2]: a hologram reflecting physical reality can be created quickly from real objects. Many techniques have been developed for generating CGHs [3–5]. For example, the depth and color information of a real scene can be acquired using a depth camera, a point cloud model reconstructed virtually, and the resulting hologram displayed using a spatial light modulator (SLM).

Real object-based methods have become very popular over the past decade because they are simple and afford efficient, natural object visualization. Lee et al. proposed a method of generating a digital hologram of a real 3D object using a depth camera [6] that acquired both depth and color information via a time-of-flight method. Li et al. showed a simplified monochromatic CGH pickup method employing a Kinect depth camera [7]; both depth and color information was obtained, and a hologram was then generated from the 3D data using a point cloud approach. Kim et al. [8] applied a monochromatic CGH method to stereoscopic images of a real 3D object: depth was calculated from the stereoscopic images, and holograms were then computationally generated. Some CGHs based on real objects have been constructed by employing 2D color and depth images (e.g., Kinect [9,10] and Axi-Vision [11]). A scanning vertical camera array is another promising approach for capturing full-parallax light fields [12,13], as it can densely sample light-ray information.

When developing a full-color display technique, it is important to consider the approach used to derive CGHs from real objects. There are various approaches, including point source-, polygon-, and depth plane-based methods. The polygon-based approach affords a greater sensation of depth and efficiently images large 3D scenes [14–16]. However, because sub-holograms must be calculated for all object triangles, full-color polygon-based reconstruction is very slow. Another approach is the depth plane-based method [17,18]; however, unless the plane density is high, the image may appear layered rather than continuously 3D. Among the various approaches, the point cloud approach is the most popular. In this method, spherical waves from point sources are sampled and then superimposed onto a continuous surface representing the holographic plane. The point cloud method is simple, flexible, accurate, and appropriate for 3D object modeling using a depth camera or camera scans. However, the hologram construction time is long because sub-holograms must be calculated from each point of the cloud; the computational burden is thus very high. Many methods [19–26] have been proposed to reduce this computational complexity. Look-up table (LUT) methods reduce the calculation time but are memory-intensive [19,20]. The wave-front recording plane (WRP) method [21–24] overcomes the drawbacks of the point cloud method. An angular-spectrum layer-oriented method was proposed by Zhao et al. [25] to generate CGHs of 3D scenes. Su et al. [26] then proposed a novel layer-based CGH calculation using layer classification and occlusion culling for the 3D object. However, these approaches are limited to displaying computer-synthesized images of virtual 3D objects.

A recent algorithm applied a point cloud gridding (PCG) method to accelerate CGH generation from real objects [27]. A relocated PCG (R-PCG) method was then developed to accelerate the calculation of chromatic CGHs for full-color holographic systems [28]: a depth camera was employed to acquire the depth and color information of real objects, and a color point cloud model was then extracted. To meet modern expectations, many large-viewing-angle holographic 3D displays [29] have been developed. However, the low resolution of the depth camera limits the quality of the reconstructed image. In this paper, rather than generating a virtual object using a depth camera, we capture 2D images using high-quality digital single-lens reflex (DSLR) cameras to reconstruct real 3D objects. A non-uniform sampling method is used to generate a more accurate point cloud model. CGHs are computed from a model generated using the 2D images acquired by a scanning vertical camera array. The quality of real 3D objects reconstructed in full-color CGHs is thus enhanced. To achieve both good reconstruction quality and rapid generation, a compressed point cloud gridding (C-PCG) method is proposed to increase the calculation speed on a GPU. Our experiments demonstrate the feasibility of reconstructing high-quality 3D images.

2. Full-color holographic system

To meet ever-increasing expectations, herein we use the point cloud-based system employed in our previous studies to obtain more accurate information on real objects using multiple, appropriately positioned cameras. Figure 1 shows the acquisition process using multiple DSLR cameras. The five principal steps are: real object data acquisition, point cloud model generation, C-PCG processing, hologram generation, and reconstruction. In the first step, depth and color information from real 3D objects is acquired simultaneously by three DSLR cameras connected to a personal computer (PC); the cameras scan horizontally to collect color information. In the second step, point cloud models are generated from this information. The generation of full-color holograms from a real 3D object can be summarized as follows:


Fig. 1. Schematic of the system: (a) data acquisition; (b) point cloud generation; (c) depth grid generation; (d) hologram creation; and (e) reconstruction.


During real object data acquisition, several high-quality cameras scan horizontally, as shown in Fig. 2(a). In the experiment described below, we used a vertical array of three cameras controlled by a single PC. As only a small number of cameras are employed, high-quality point cloud data can be collected relatively easily in a compact environment. High-density horizontal sampling is simple, although the number of cameras that can be mounted vertically is limited. Vertical and horizontal parallax information for the point cloud model can be generated from the captured 2D data.


Fig. 2. (a) The scanning vertical camera system used to acquire depth and color information. (b) The geometry of the parallel capture method and (c) the convergent capture method. (d) The point cloud models generated using the parallel capture method and (e) the convergent capture method.


As shown in Fig. 2, there are two main approaches to capturing the real object: the convergent and parallel capture methods. In the parallel method, images are captured by cameras arranged parallel to the camera motion axis [Fig. 2(b)]. In the convergent method, the vertical camera array is rotated during horizontal motion so that the cameras always point at the object [Fig. 2(c)]. In our system, the convergent capture method is used to increase the angular range of image acquisition; this allows the generation of a more accurate point cloud model than that afforded by the parallel capture method [Figs. 2(d) and 2(e)], as illustrated by the sketch below.
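As a toy illustration of the convergent geometry, the following sketch computes the yaw a camera needs at each horizontal position so that it stays aimed at the object; the scan offsets and the 950 mm object distance are illustrative values, not the exact scan parameters of our setup.

```python
import math

def convergent_yaw(x_mm, object_distance_mm):
    """Yaw angle (degrees) that keeps a translating camera pointed at an
    object lying on the perpendicular of the motion axis.

    This is a toy model of the convergent geometry in Fig. 2(c); the
    parallel method of Fig. 2(b) corresponds to a constant yaw of 0 degrees.
    """
    return math.degrees(math.atan2(x_mm, object_distance_mm))

# Example (illustrative): a scan from -350 mm to +350 mm at 950 mm from the
# object spans roughly -20 to +20 degrees, comparable to the ~20 degree view
# range reported in the experiments.
for x in (-350, 0, 350):
    print(x, round(convergent_yaw(x, 950.0), 1))
```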

2.1 Non-uniform sampling of 2D images

During point cloud model generation, 3D models are generated from the captured 2D data. As shown in Fig. 3(a), three DSLR cameras and one moving/rotating stage are used to capture the real object. A structure-from-motion (SFM) algorithm is used to generate the point cloud model from the 2D images [30]. The 2D images are stored as pixels in a matrix arrangement; each matrix contains red, green, and blue (RGB) values that depend on the intensity of each component. Because the pixels store all of this color data, matrix operations can be applied to them to generate the 3D point cloud. SFM reconstructs 3D structure from a series of images collected from different viewpoints, using the projections of the object in each image; it commences with feature extraction and matching, followed by geometric verification. However, the conventional SFM algorithm selects uniformly sampled 2D images, so more points are generated in the central part of the point cloud model and fewer in the side parts. Therefore, to achieve a uniform point cloud model, a non-uniform sampling method is used to acquire uniform point cloud data from the high-quality 2D light-field images.


Fig. 3. (a) Outline of the capturing setup and (b) the non-uniform 2D image sampling method.


In the non-uniform sampling method shown in Fig. 3(b), more 2D images of side views and fewer head-on 2D images are used to generate the point cloud. The number of 2D images Np at a given capture angle is:

$${N_p} = \left\{ \begin{array}{ll} 1 & |{\theta_i}| = 0\\ \frac{|{\theta_i}|}{|{\theta_{Max}}|} \times {X_A} \times |{N_{max}} - 1| & |{\theta_i}| \ne 0 \end{array} \right.$$
where $N_{max}$ is the total number of 2D images from each camera, ${\theta_i}$ is the corresponding angle of the 2D images, $|{\theta_i}|$ is defined as a positive integer, and ${\theta_{Max}}$ is the maximum angle in the capturing process. ${X_A}$ is calculated as:
$${X_A} = \frac{|{\theta_{Max}}|}{|{\theta_{-i}}| + |{\theta_{-i+1}}| + |{\theta_{-i+2}}| + \cdots + |{\theta_{i-1}}| + |{\theta_i}|},$$
After generating the point cloud model, the region of interest (ROI) is identified, expressed as follows:
$${N_{\textrm{cv}}} = ROI\left( {\sum\nolimits_{i = 1}^T {{P_{\textrm{cv}}}} } \right),$$
where ${P_{cv}}$ is the coordinate value of a point in the point cloud model, T is the number of points in the cloud, and ${N_{\textrm{cv}}}$ is the coordinate value of the region of interest.
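To make the sampling rule concrete, the following is a minimal NumPy sketch of Eqs. (1) and (2), assuming integer capture angles symmetric about the head-on view; the function name and the example values are illustrative, not taken from our setup.

```python
import numpy as np

def images_per_angle(angles_deg, n_max):
    """Number of 2D images N_p selected at each capture angle (Eqs. 1-2).

    angles_deg: symmetric integer capture angles, e.g. [-20, ..., 0, ..., 20]
    n_max: total number of 2D images available from each camera
    """
    angles = np.asarray(angles_deg, dtype=float)
    theta_max = np.abs(angles).max()
    # Normalization X_A: the maximum angle over the sum of all |theta_i| (Eq. 2)
    x_a = theta_max / np.abs(angles).sum()
    n_p = np.where(
        angles == 0,
        1.0,                                                # head-on view: a single image
        np.abs(angles) / theta_max * x_a * abs(n_max - 1),  # side views: more images (Eq. 1)
    )
    return np.rint(n_p).astype(int)

# Example (illustrative): 5-degree steps from -20 to 20 degrees, 100 images per camera
print(images_per_angle(range(-20, 21, 5), n_max=100))
```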

To create smoother point cloud models, hole-filling is performed using the Catmull–Clark subdivision method [31], which employs surfaces interpolating corner vertices and boundary curves to fill holes. In the proposed multi-camera system, the holograms are generated from synthesized real object point clouds, so occlusion must be considered: a recent hidden point removal (HPR) operator [32] that identifies the points visible from a given viewpoint can be used to address this problem. The point cloud model of the object is spherically flipped, as follows:

$${\hat{P}_i} = {P_i} + 2({{r_f} - \|{P_i}\|})\frac{{P_i}}{\|{P_i}\|},$$
where ${r_f}$ is the radius of the sphere, which is chosen to include all the object points ${P_i}$.
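Below is a minimal sketch of the spherical flip of Eq. (4), assuming the viewpoint is taken as the origin of the flip; the full HPR operator [32] additionally takes the convex hull of the flipped points plus the viewpoint to decide visibility, a step omitted here for brevity.

```python
import numpy as np

def spherical_flip(points, viewpoint, radius):
    """Spherically flip a point cloud about a sphere of radius r_f (Eq. 4).

    points: (N, 3) array of object points P_i, expressed relative to the
    viewpoint before flipping. The sphere radius must enclose all points.
    """
    p = np.asarray(points, dtype=float) - viewpoint    # work relative to viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    return p + 2.0 * (radius - norms) * p / norms      # P_i + 2(r_f - ||P_i||) P_i/||P_i||

# Illustrative usage: radius chosen so the sphere encloses a toy cloud
pts = np.random.rand(1000, 3) * 60 - 30                # toy cloud in [-30, 30] mm
r_f = np.linalg.norm(pts, axis=1).max() * 10.0
flipped = spherical_flip(pts, viewpoint=np.zeros(3), radius=r_f)
```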

2.2 Compressed point cloud gridding

Groups of object points are classified and compressed into grids representing the RGB channels. The CGH calculation time is reduced by computing the holograms from 2D “depth grids” rather than from individual points of the 3D point cloud [28]. The point cloud model includes exact coordinates and accurate color data. In the previous depth camera-based holographic system, real 3D objects could be reconstructed clearly with the relocated PCG method by shifting repeated values once. In our proposed multiple-camera holographic system, however, the quality of full-color images reconstructed with the R-PCG method is not satisfactory because accurate RGB values are lost: the high-resolution cameras produce many more repeated values at each coordinate of the depth grids. This motivated us to develop an improved method for high-resolution 3D point cloud data. In this manuscript, we propose a compressed point cloud gridding (C-PCG) method to accelerate full-color CGH generation for real objects. The C-PCG method features the four steps outlined in Fig. 4: it compresses the depth grids of the point cloud model and then relocates the depth coordinates of repeated values.


Fig. 4. The principle of compressed point cloud gridding (C-PCG). (a) A side view of the point cloud model generated using data from several DSLR cameras. Side views of (b) the compression, stretching, and rounding process of the sub-layers and (c) the relocated layers. (d) RGB depth grid generation.


In the first step, the point clouds are stretched to match the required hologram resolution (Nx × Ny). The stretching factor A is chosen by reference to the exact coordinates of the object points and the required resolution. Then, to reduce the amount of calculation, the number of depth layers in the stretched point cloud model is compressed by a factor K.

In the second step, because a graphics processing unit (GPU) is used to calculate the hologram, a round function is applied to ensure that all the point cloud coordinates are unique positive integers. As shown in Fig. 4(b), after stretching and rounding, several repeated point cloud values appear; these have different RGB values but the same XYZ coordinates. The C-PCG method is then applied to reduce the number of depth layers and to ensure that the 3D object points have accurate RGB values and unique XYZ coordinates before these values are assigned to the GPU array.

In the third step, as shown in Fig. 4(c), the z coordinates of the repeated values are shifted by a small constant b, forming new depth layers. The sub-layers acquired from the HPR process are separated by non-uniform distances dz, so the relocated depth grids lie close to the original depths. Compared to depth camera-based systems, our system generates more points from the real object; each depth grid coordinate therefore hosts more repeated values, and the efficient depth grid generation of C-PCG solves this problem. As shown in Fig. 4(d), repeated points with the same zi values are shifted to generate new depth layers zi + Nb, where N is the order of the repeated points. The points are relocated as follows:

$${O_r} = \left\{ \begin{array}{ll} \sum\nolimits_{i = 1}^{{N_p}} {[{O({A{x_i},A{y_i},A{z_i},{r_i},{g_i},{b_i}})}],} & \textrm{if } O \textrm{ is a unique value}\\ \sum\nolimits_{i = 1}^{{N_p}} {[{O({A{x_i},A{y_i},A({{z_i} + Nb}),{r_i},{g_i},{b_i}})}],} & \textrm{if } O \textrm{ is a repeated value} \end{array} \right.$$
where Np is the number of points in the point cloud, Or is the new point cloud, and O is the original point cloud.

In the fourth step, the point clouds of the relocated layers are classified in terms of depth information, assigned to depth grids, and separated into RGB channels.
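The following sketch illustrates the stretch, round, compress, and relocate steps on CPU-side arrays, under assumed values of A, K, and b (illustrative only, not the paper's settings); the relocation of the Nth duplicate of a coordinate to depth z + Nb follows Eq. (5).

```python
import numpy as np

def c_pcg(points, rgb, A=1000.0, K=4, b=1):
    """Sketch of the C-PCG steps: stretch, compress depth layers, round,
    and relocate repeated XYZ coordinates to new sub-layers (Eq. 5).

    points: (N, 3) float array (x, y, z); rgb: (N, 3) color values.
    A (stretch factor), K (depth-compression factor), and b (sub-layer
    shift) are illustrative assumptions.
    """
    p = np.rint(points * A).astype(np.int64)   # stretch and round to an integer grid
    p[:, 2] //= K                              # compress the number of depth layers
    # Relocate repeated coordinates: the Nth duplicate of an (x, y, z) cell
    # is shifted to depth z + N*b, forming a new sub-layer, so every point
    # keeps its own accurate RGB value.
    seen = {}
    for i, key in enumerate(map(tuple, p)):
        n = seen.get(key, 0)
        seen[key] = n + 1
        if n > 0:
            p[i, 2] += n * b
    return p, rgb

# Afterwards, points are grouped by their (now unique) z value into depth
# grids and split into R, G, and B channels for the diffraction step.
```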

2.3 Full-color hologram generation

For hologram creation, the RGB CGHs are generated by performing diffraction calculations on the depth grids rather than on individual points, which reduces the overall calculation time. From each depth grid, a sub-hologram is generated using the angular spectrum method (ASM) [33]:

$${H_{Depth\; grid\; {N_d}}} = {F^{ - 1}}[{F[{{U_X}({x},{y})}]H({{f_X},{f_Y}})}],$$
where ${H_{Depth\; grid\; {N_d}}}$ denotes the hologram obtained from a depth grid in channel X (X = R, G, or B), and ${U_X}$ is the input optical field of that grid. The diffraction is calculated using 2D fast Fourier transforms (FFTs), with the Fresnel transfer function given as follows:
$$H({{f_X},{f_Y}}) = {e^{jkz}}\exp [ - j\pi \lambda z({f_X^2 + f_Y^2})],$$
where z is the distance between the object plane and the hologram plane, λ is the wavelength, and $H({{f_X},{f_Y}})$ is the transfer function.
$${H_{X\_sub}} = {H_{Depth\; grid\; 1}} + {H_{Depth\; grid\; 2}} + \cdots + {H_{Depth\; grid\; {N_d}}},$$
Equation (8) shows the combined hologram ${H_{X\_sub}}$ for channel X. The principles of sub-hologram generation and RGB hologram combination are shown in Fig. 5. The combined holograms are 24-bit (8-bit × 3) bitmap images.


Fig. 5. The principles of sub-hologram generation and RGB hologram combination.

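As an illustration, a minimal NumPy sketch of Eqs. (6)–(8) for one color channel is given below; the grid contents, depths, and helper names are placeholders, while the resolution, pixel pitch, and red wavelength match the experimental parameters reported in Section 3.

```python
import numpy as np

def fresnel_transfer(nx, ny, dx, wavelength, z):
    """Fresnel transfer function H(f_X, f_Y) of Eq. (7)."""
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))

def channel_hologram(depth_grids, depths, dx, wavelength):
    """Sum of sub-holograms over all depth grids in one channel (Eqs. 6 and 8).

    depth_grids: list of (Ny, Nx) amplitude grids U_X; depths: z of each grid.
    """
    ny, nx = depth_grids[0].shape
    h = np.zeros((ny, nx), dtype=complex)
    for u, z in zip(depth_grids, depths):
        H = fresnel_transfer(nx, ny, dx, wavelength, z)
        h += np.fft.ifft2(np.fft.fft2(u) * H)   # Eq. (6): F^-1[ F[U_X] H ]
    return h

# Placeholder grids; 1,080 x 1,080 pixels, 7.4 um pitch, red channel at 633 nm
grids = [np.random.rand(1080, 1080) for _ in range(3)]
h_red = channel_hologram(grids, depths=[0.95, 0.96, 0.97], dx=7.4e-6, wavelength=633e-9)
```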

3. Verification and results

3.1 Numerical simulation experiments

Numerical simulations and optical experiments are employed to evaluate the performance of our method. The numerical simulations are implemented in MATLAB 2017b and run on a Windows 10 64-bit PC with 8 GB RAM, an NVIDIA GTX 660 GPU, and a 3.2 GHz Intel Xeon W3670 CPU. High-quality DSLR cameras (Sony α6000, resolution 6,000 × 4,000 pixels) are used to capture real objects. Over 1 million points are generated; the view angles are 22° and 21° in the horizontal and vertical directions, respectively. The depth grids and RGB holograms all contain 1,080 × 1,080 pixels. The wavelengths of the red, green, and blue reference beams and the pixel size, dx, of the hologram are set to 633 nm, 532 nm, 473 nm, and 7.4 µm, respectively. The point clouds generated using the conventional method and our proposed non-uniform sampling method are compared in Figs. 6(a) and 6(b). The black points on the bear and pears are holes caused by occlusion of the real object; compared with the conventional method, fewer holes appear when our proposed method is used.


Fig. 6. Comparison between (a) the conventional method and (b) the non-uniform sampling method.


Figure 7(a) shows images of the real objects, three models (two magic cubes and a snowman) whose centers are 950 mm from the cameras; the parameters are the same as those used in the simulation. Two-dimensional images created using the non-uniform sampling method are employed to generate the point cloud model. The point cloud of Fig. 7(b), generated from the real objects, is composed of 1,266,074 object points and 9,000 depth grids; the hole-filling process was employed [Fig. 7(c)]. In Figs. 7(d) and 7(e), the target 3D objects, a bottle and two pears, are reconstructed; the centers of these objects are located 950 and 900 mm from the focal plane. Reconstructed images of the real objects taken at horizontal angles of −20, −10, 0, 10, and 20° relative to the viewpoint are shown.


Fig. 7. (a) Real 3D objects. Results obtained using the non-uniform sampling method (b) without and (c) with hole-filling. Images reconstructed using the C-PCG method with the centers located (d) 980 mm and (e) 920 mm from the cameras.


Figure 8(a) shows the 2D images captured by the cameras. The size of the real object is limited by restricting it to a range from −30 to 30 mm on each axis. Figure 8(b) shows the accuracy of the models after application of the non-uniform sampling method. The point cloud model generated from the real object contains 1,162,890 object points and 12,000 depth grids; the distance from the cameras to the center of the point cloud is around 1,000 mm. Figures 8(c)–8(f) show holograms generated by the WRP method, the M-WRP method, the R-PCG method, and the proposed C-PCG method, respectively. The brightness and contrast values of all images are adjusted to optimize visualization; the peak signal-to-noise ratios (PSNRs) of the images reconstructed with the WRP [21] and M-WRP [24] methods are shown in Figs. 8(c) and 8(d). The PSNRs of the R-PCG method, shown in Fig. 8(e), are 23.1, 22.7, and 21.9 dB, respectively. As shown in Fig. 8(f), the PSNRs of the images reconstructed using the proposed C-PCG method are 26.1, 25.7, and 25.6 dB, respectively. When evaluating the quality of the holograms, the intensity information of the point cloud model served as the reference image; the reference image is a perspective view of the point cloud model with the same orientation as the reconstructed image. We calculate the PSNR for the R, G, and B components separately and then take their average as the overall evaluation for the color image. The results show that the C-PCG reconstructions are of higher quality than those generated by the WRP, M-WRP, and R-PCG methods. Figure 8(g) shows the red, green, and blue components of the reconstructed images, i.e., the wavelengths from the red, green, and blue channels. The proposed method successfully reconstructed real 3D objects in the numerical simulations.


Fig. 8. Different views of (a) the real 3D object and (b) the point cloud model. Images reconstructed using the (c) wave-front recording plane (WRP) method, (d) multiple wave-front recording plane (M-WRP) method, (e) relocated point cloud gridding (R-PCG) method, and (f) C-PCG method, with the centers located 1,000 mm from the cameras. (g) Numerically reconstructed images at wavelengths of 633, 532, and 473 nm.

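We do not list our exact PSNR code; the following is a minimal sketch of the stated procedure of computing the PSNR per channel and averaging, with the function name and the 8-bit peak value as assumptions.

```python
import numpy as np

def color_psnr(reference, reconstructed, peak=255.0):
    """Average of the per-channel PSNRs (R, G, B), as used to score the
    reconstructed color images against the point cloud reference view."""
    psnrs = []
    for c in range(3):
        mse = np.mean((reference[..., c].astype(float)
                       - reconstructed[..., c].astype(float)) ** 2)
        psnrs.append(10 * np.log10(peak**2 / mse))
    return np.mean(psnrs)
```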

Figure 9(a) shows 2D images of the real objects used in our experiments; the parameters are the same as those used in the simulation. Figure 9(b) shows the point cloud of a doll and a football, the centers of which are located 1,000 mm from the camera. Color and depth data are acquired from both images to generate the virtual 3D representation using point cloud models. The point cloud of Fig. 9(b), generated from the real object shown in Fig. 9(a), is composed of 1,236,601 object points and 9,000 depth grids, and the distance from the center of the point cloud to the camera is 940 mm; the objects are thus 900 and 980 mm from the rear focal plane. The full-color reconstructed images of the doll and football derived using the R-PCG and C-PCG methods are shown in Figs. 9(c) and 9(d).


Fig. 9. (a) Different views of the real 3D object and (b) the point cloud model. Images reconstructed using the (c) relocated point cloud gridding (R-PCG) method and (d) C-PCG method, with the centers located 1,000 mm from the cameras.


3.2 Optical experiments

Optical reconstruction is performed using a liquid crystal display HD kit (1,920 × 1,080 pixels; 7.4 µm pixel pitch), a transmission-type SLM with 255 intensity levels, and RGB lasers. The RGB holograms are loaded onto an SLM adapter board, which then displays them on the RGB SLM panels.

Three lasers served as reference beams. The wavelengths and output powers are as follows: red laser, 633 nm and 75 mW; green laser, 532 nm and 100 mW; and blue laser, 473 nm and 50 mW. The hologram resolution is 1,080 × 1,080 pixels at a pixel size of 7.4 µm. The full-color holographic display system is shown in Fig. 10. The optically reconstructed images of a bear and a snowman are shown in Fig. 11. The reconstructions from the red, green, and blue channels are shown in Figs. 11(a)–11(c) and Figs. 11(e)–11(g). As shown in Figs. 11(d) and 11(h), the full-color images are obtained by combining the three channels. The real 3D objects are correctly reconstructed.


Fig. 10. The full-color holographic display system.



Fig. 11. Optically reconstructed images of a bear and a snowman using reference beams from the (a),(e) red, (b),(f) green, and (c),(g) blue lasers; (d),(h) full-color reconstructed images with the centers located 1,000 mm from the cameras.


3.3 Generation speed enhancement

The computational amount of the proposed C-PCG method can be calculated as

$$(N + \varphi)\,3\beta {N_x}{N_y}{\log_2}({{N_x}{N_y}}),$$
where N is the number of depth grids, $\beta$ is the number of arithmetic operations in the FFT, and $\varphi$ is the number of grids added when eliminating the repeated values. Note that Eq. (9) shows that the computational amount of the C-PCG method does not depend on the number of 3D object points but on the number of depth grids. As a result, the generation time increases linearly as the number of depth grids increases. To achieve both good reconstruction quality and rapid generation, it is therefore necessary to analyze the effect of the number of depth grids on the quality of the reconstructed object.
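A small sketch of Eq. (9) as an operation-count estimator is given below; the values of β and φ are illustrative assumptions, since they depend on the FFT implementation and on the point cloud at hand.

```python
import math

def cpcg_operations(n_grids, n_extra, nx=1080, ny=1080, beta=5):
    """Rough operation count of Eq. (9): (N + phi) * 3 * beta * NxNy * log2(NxNy).

    beta (arithmetic operations per FFT element) and n_extra (grids phi added
    while eliminating repeated values) are illustrative assumptions.
    """
    return (n_grids + n_extra) * 3 * beta * nx * ny * math.log2(nx * ny)

# The count grows linearly in the number of depth grids, independent of
# the number of object points:
print(f"{cpcg_operations(6000, 500):.3e} operations for 6,000 depth grids")
```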

A comparison of the execution times with different numbers of depth grids is shown in Table 1. The acceleration capability of the C-PCG method is related to the number of depth grids. In addition, the numerically reconstructed images with 3,000, 6,000, and 12,000 depth grids are shown in Fig. 12, together with the PSNR values between the reconstructed objects and the original object. To achieve both good reconstruction quality and rapid generation, the number of depth grids is set to 6,000.


Fig. 12. Numerically reconstructed images of a bear with (a) 3,000, (b) 6,000, and (c) 12,000 depth grids and reconstructed images of a snowman with (d) 3,000, (e) 6,000, and (f) 12,000 depth grids.



Table 1. Comparison of the execution times with different numbers of depth grids from real objects.

Our system was verified experimentally. Table 2 shows the calculation time for the bear, which featured 1,162,890 object points and 6,000 depth grids; we used MATLAB 2017b running on a CPU for this calculation. The processing time depends principally on the time required for hologram, wave-front recording plane, and depth grid generation. The calculation time required by the C-PCG method is considerably less than that required by the conventional WRP method: using our C-PCG method, the entire process required 366.838 s, i.e., over 50-fold less time than the WRP method. Most of this time is spent generating wave-fronts from the depth grids to create the hologram. Although the R-PCG method is a little faster than the previous PCG interpolation method, the C-PCG method yields higher quality results than the R-PCG method.


Table 2. Calculation time of WRP and C-PCG methods.

As evident in Table 3, greater computational efficiency is achieved when GPU parallel processing is employed rather than CPU processing. The diffraction calculation from the depth grids to the hologram plane is performed with GPU parallel processing. The calculation times of the C-PCG method on a CPU, the C-PCG method on a GPU, the WRP method, and the multiple WRP (M-WRP) method are compared in Table 3. The correlation efficiency of the WRP position is analyzed to select an acceptable WRP position [23]; the WRP position is set at a distance equal to the total depth of the object. The same parameters are used for all CGH calculations.


Table 3. Calculation time of the multiple DSLR camera holographic system. Running times are in seconds.

The C-PCG method on a GPU generates holograms 4.3∼4.6 times faster than on a CPU. Furthermore, applying the C-PCG method with a GPU enhances the speed of hologram generation by a factor of 1.9∼2.3 with respect to the WRP and M-WRP methods when the number of depth grids is 6,000. Even with 9,000 depth grids, the proposed C-PCG method is 1.5∼1.8 times faster than the WRP and M-WRP methods.

4. Summary

In this paper, a full-color holographic system featuring non-uniformly sampled 2D images and C-PCG is developed to represent real 3D objects. The depth and color data of real scenes are acquired simultaneously by multiple DSLR cameras, and a uniform point cloud model is extracted using the non-uniform sampling method. Compared to a depth camera-based holographic system, the quality of the reconstructed images is substantially better. Using the C-PCG method, real 3D objects can be easily encoded into CGHs. Compared to the WRP and M-WRP methods, the quality of the reconstructed images is substantially better, and the computational complexity is dramatically reduced. We demonstrated that our system reconstructs real 3D objects clearly, both numerically and optically. This system will be useful for creating CGHs for high-quality full-color holographic displays.

Funding

This work was conducted during the research year of Chungbuk National University in 2018 and this work was also supported by the National Research Foundation of Korea grant funded by the Korea government (NRF-2017R1A2B4012096, NRF-2018R1D1A3B07044041).

Acknowledgment

This work was conducted during the research year of Chungbuk National University in 2018.

References

1. A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6(10), 1739–1748 (1967). [CrossRef]  

2. J. H. Park, “Recent progress in computer-generated holography for three-dimensional scenes,” J. Inf. Disp. 18(1), 1–12 (2017). [CrossRef]  

3. D. Wang, C. Liu, F. Chu, and Q. H. Wang, “Full color holographic display system based on intensity matching of reconstructed image,” Opt. Express 27(12), 16599–16612 (2019). [CrossRef]  

4. N. Chen, Z. Ren, and E. Y. Lam, “High resolution Fourier hologram synthesis from photographic images through computing the light field,” Appl. Opt. 55(7), 1751–1756 (2016). [CrossRef]  

5. S. F. Lin and E. S. Kim, “Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods,” Opt. Express 25(10), 11389–11404 (2017). [CrossRef]  

6. S. H. Lee, S. C. Kwon, H. B. Chae, J. Y. Park, H. J. Kang, and J. D. K. Kim, “Digital hologram generation for a real 3D object using by a depth camera,” J. Phys.: Conf. Ser. 415(1), 012049 (2013). [CrossRef]  

7. G. Li, K. Hong, J. Yeom, N. Chen, J. H. Park, N. Kim, and B. Lee, “Acceleration method for computer generated spherical hologram calculation of real objects using graphics processing unit,” Chin. Opt. Lett. 12(6), 060016 (2014).

8. S. C. Kim, D. C. Hwang, D. H. Lee, and E. S. Kim, “Computer generated holograms of a real three-dimensional object based on stereoscopic video images,” Appl. Opt. 45(22), 5669–5676 (2006). [CrossRef]  

9. E. Y. Chang, J. Choi, S. Lee, S. Kwon, J. Yoo, H. G. Choo, and J. Kim, “360-degree color hologram generation for real 3-D Object,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2017), paper W2A.28.

10. E. Y. Chang, J. Choi, S. Lee, S. Kwon, J. Yoo, M. Park, and J. Kim, “360-degree color hologram generation for real 3-D objects,” Appl. Opt. 57(1), A91–A100 (2018). [CrossRef]  

11. K. Nomura, R. Oi, T. Kurita, and T. Hamamoto, “Electronic hologram generation using high quality color and depth information of natural scene,” in Proceedings of the Picture Coding Symposium (PCS) Conference. (Academic, 2010), pp. 46–49.

12. A. Gilles, P. Gioia, R. Cozot, and L. Morin, “Computer generated hologram from multiview-plus-depth data considering specular reflections,” In 2016 IEEE International Conference on Multimedia & Expo Workshops, (Academic, 2016), pp. 1–6.

13. M. Yamaguchi, K. Wakunami, and M. Inaniwa, “Computer generated hologram from full-parallax 3D image data captured by scanning vertical camera array,” Chin. Opt. Lett. 12(6), 060018 (2014). [CrossRef]  

14. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008). [CrossRef]  

15. D. Im, E. Moon, Y. Park, D. Lee, J. Hahn, and H. Kim, “Phase-regularized polygon computer-generated holograms,” Opt. Lett. 39(12), 3642–3645 (2014). [CrossRef]  

16. Y. Zhao, K. C. Kwon, Y. L. Piao, S. H. Jeon, and N. Kim, “Depth-layer weighted prediction method for a full-color polygon-based holographic system with real objects,” Opt. Lett. 42(13), 2599–2602 (2017). [CrossRef]  

17. H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56(13), F138–F143 (2017). [CrossRef]  

18. Y. Ohsawa, K. Yamaguchi, T. Ichikawa, and Y. Sakamoto, “Computer-generated holograms using multiview images captured by a small number of sparsely arranged cameras,” Appl. Opt. 52(1), A167–A176 (2013). [CrossRef]  

19. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). [CrossRef]  

20. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of fresnel computer-generated-hologram using look-up table and wavefront recording method,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef]  

21. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]  

22. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20(4), 4018–4023 (2012). [CrossRef]  

23. A. H. Phan, M. L. Piao, S. K. Gil, and N. Kim, “Generation speed and reconstructed image quality enhancement of a long-depth object using double wavefront recording planes and a GPU,” Appl. Opt. 53(22), 4817–4824 (2014). [CrossRef]  

24. N. Hasegawa, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of hologram generation by optimizing the arrangement of wavefront recording planes,” Appl. Opt. 56(1), A97–A103 (2017). [CrossRef]  

25. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

26. Y. Zhao, L. Cao, H. Zhang, W. Tan, S. Wu, Z. Wang, and G. Jin, “Time-division multiplexing holographic display using angular-spectrum layer-oriented method,” Chin. Opt. Lett. 14(1), 010005 (2016). [CrossRef]  

27. Y. Zhao, C. X. Shi, K. C. Kwon, Y. L. Piao, M. L. Piao, and N. Kim, “Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding,” Opt. Commun. 411, 166–169 (2018). [CrossRef]  

28. Y. Zhao, K. C. Kwon, M. U. Erdenebat, M. S. Islam, S. H. Jeon, and N. Kim, “Quality enhancement and GPU acceleration for a full-color holographic system using a relocated point cloud gridding method,” Appl. Opt. 57(15), 4253–4262 (2018). [CrossRef]  

29. Y. Zhao, M. U. Erdenebat, M. L. Piao, M. S. Alam, S. H. Jeon, and N. Kim, “Multiple-camera holographic system featuring efficient depth grids for representation of real 3D objects,” Appl. Opt. 58(5), A242–A250 (2019). [CrossRef]  

30. J. L. Schonberger and J. M. Frahm, “Structure-from-motion revisited,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (Academic, 2016), pp. 4104–4113.

31. C. Deng and X. Yang, “A simple method for interpolating meshes of arbitrary topology by Catmull–Clark surfaces,” Vis. Comput. 26(2), 137–146 (2010). [CrossRef]  

32. S. Katz, A. Tal, and R. Basri, “Direct Visibility of Point Sets,” ACM Trans. Graph. 26(3), 24 (2007). [CrossRef]  

33. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
