Optica Publishing Group

Compact full-color holographic 3-D display based on undersampled computer-generated holograms and oblique projection imaging

Open Access

Abstract

A compact full-color electro-holographic three-dimensional (3-D) display with undersampled computer-generated holograms (US-CGHs) and oblique projection imaging (OPI) is proposed. For its realization, undersampling conditions of the CGH enabling the complete recovery of image information are derived, and an OPI-based longitudinal-to-lateral depth conversion (LTL-DC) scheme allowing the simple reconstruction of full-color images is also proposed. Three-color off-axis US-CGHs are generated with the center-shifted principal fringe patterns (CS-PFPs) of the novel look-up table (NLUT) method, where the center-shifts are calculated from the derived undersampling conditions of the CGH based on the generalized sampling theorem, and then multiplexed into a color-multiplexed hologram (CMH). The CMH is loaded on a spatial light modulator (SLM) and reconstructed by illumination with a multi-wavelength light source. With the OPI-based LTL-DC process, the original full-color image is reconstructed spatially separated from the other color-dispersed images on the projected image plane, which enables us to view the original full-color image with just a simple filter mask. Performance analysis and successful experiments with test 3-D objects in motion confirm the feasibility of the proposed system.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

3 December 2020: A typographical correction was made to the author affiliations.

1. Introduction

Electro-holography has been considered one of the ultimate techniques for realizing a lifelike three-dimensional television (3-DTV) broadcasting system [1–4]. In the electro-holographic technique, three-color holographic patterns of an input 3-D scene, designated as the red, green and blue-color (R, G and B-color) holograms, are digitally generated and then optically reconstructed into a duplicate of the input 3-D scene with a spatial light modulator (SLM) [5].

There exist, however, two critical problems that hinder the practical deployment of the electro-holographic technique in various application fields [4–8]. One is the unavailability of an SLM with a relatively large display area and narrow pixel pitch enabling the optical reconstruction of holographic data into a large-scale, high-resolution 3-D image with a wide viewing angle [5–8]. The other is the computational complexity involved in real-time generation of holographic videos of the input 3-D scene [4].

Actually, the conventional full-color electro-holographic display requires three SLMs, one for each of the R, G and B-color hologram channels, since those three-color holograms of the input 3-D scene are individually generated and reconstructed based on the optical interference and diffraction equations [9]. Thus, the whole holographic display turns out to be very bulky, complex and costly. To solve these problems, the time-division multiplexing (TDM) [10,11] and spatial-division multiplexing (SDM) [12] methods have been proposed for realizing a single SLM-based full-color holographic 3-D display. But, for the simultaneous display of three-color holograms on a single SLM, the TDM method requires an SLM with a very high frame rate together with a sophisticated synchronization system, while the SDM method suffers from color dispersion and resolution loss since the SLM must be divided into three parts corresponding to each of those three colors [12].

As an alternative, the color-encoding method was suggested, and many types of single-SLM color holographic displays based on the color-encoding scheme have been proposed [13–19]. In this method, three-color holograms are multiplexed into a single hologram, which is called a color-multiplexed hologram (CMH). This scheme, however, undergoes two types of color dispersion (CD) problems, the CMH and SLM-based CDs, which result from the optical diffractions of the CMH pattern and the pixelated panel structure of the SLM, respectively [13]. Until now, several color-encoding methods to effectively deal with those CD problems have been proposed, which include the space-division multiplexing [14], depth-division multiplexing [15], frequency-division multiplexing [16], multiplexing encoding [17], sampling & selective frequency-filtering multiplexing [13], image & frequency-shift multiplexing [18], off-axis multiplexing [19] and Gerchberg–Saxton (GS)-based [20] methods.

Here it must be noted that when the calculated CMH is loaded on an SLM and directly reconstructed by illumination with the multi-wavelength light source, not only the three original color images but also six kinds of color-dispersed images are reconstructed along the longitudinal direction at their respective depths due to the CDs. All those longitudinal images, which differ in depth and color, would be viewed by the observer as a noise-like mixture of color images at the output plane. Thus, most CMH-based displays employ the 4-f lens system, with which only the original color components are spatially separated from the other color-dispersed components on the Fourier plane and then reconstructed into a full-color image on the output plane [13–18]. Thus, the conventional CMH-based display also becomes bulky and complex even though a single SLM is employed.

In fact, the allowable highest spatial frequency of an off-axis hologram is restricted by the pixel pitch of the employed SLM according to the Nyquist sampling theorem [9]. Here, the maximum spatial frequency of the hologram, which is called the carrier spatial frequency, is determined by the angle between the object and reference beams that interfere with each other.

For the practical electro-holographic display, the angle between the object and reference beams must be kept very small to meet the oversampling conditions of the commercially-available SLMs, whose pixel pitches are in the range of several micrometers [9,19], which means that all the CMHs generated for the conventional electro-holographic display are virtually on-axis holograms. Thus, it is impossible to laterally separate the original full-color image from the other color-dispersed images on the output plane since they are all reconstructed along the longitudinal direction with a very small diffraction angle.
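As a quick numerical check of this constraint (not part of the original text), the Nyquist-limited off-axis angle can be computed directly from the pixel pitch; here the 8.1-µm pitch of the SLM used in Section 3 and the 532-nm green line are assumed:

```python
import math

def max_offaxis_angle(wavelength_m: float, pixel_pitch_m: float) -> float:
    """Largest object-reference angle (rad) an SLM can sample without
    aliasing: the carrier frequency sin(theta)/lambda must stay below
    the Nyquist limit 1/(2p)."""
    return math.asin(wavelength_m / (2.0 * pixel_pitch_m))

# Assumed values: 8.1-um pixel pitch (the SLM of Section 3), 532-nm green light.
theta = max_offaxis_angle(532e-9, 8.1e-6)
print(round(math.degrees(theta), 2))  # ~1.88 degrees: virtually on-axis
```

An angle below two degrees illustrates why such CMHs behave like on-axis holograms.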

If the angle between the object and reference beams increases beyond the Nyquist limit of the commercial SLM, the CMH is generated under the so-called undersampling condition, which means information loss definitely occurs in the reconstructed images due to the aliasing error, even though they can be reconstructed with a higher diffraction angle [21]. Meanwhile, from this undersampled CMH, a set of original full-color and color-dispersed images can be reconstructed at their depths along an optic axis that is laterally shifted from its origin depending on the angle of the reference beam from the object.

Thus, if the undersampled CMH is reconstructed with the oblique projection imaging (OPI) scheme, all the longitudinal depth images reconstructed along the shifted optic axis can be converted into laterally-projected depth images with their corresponding lateral gaps, which is called a longitudinal-to-lateral depth conversion (LTL-DC) process [9]. Then, there is a possibility to separate the original full-color image from the other color-dispersed images on the laterally-projected image plane and view it directly only with a simple spatial-filter mask. However, as mentioned above, undersampled holograms cannot avoid the critical problem of information loss in their reconstructed images due to aliasing error.

Meanwhile, undersampled off-axis digital holography has been proposed [22–25]. In off-axis digital holography, hologram images of the objects are acquired with a charge-coupled device (CCD) camera and processed with the computer, where the input signal bandwidth is set by the spatial-carrier frequency while the hologram reconstruction is constrained by the pixel pitch of the CCD camera. In practical applications such as vibration analysis, microscopic imaging, and real-time monitoring of fast-moving objects [26–28], sampling conditions cannot be maintained due to the constraints of optical alignment or due to the changes of the sampling conditions caused by the fast movements of the target objects. To alleviate these issues, undersampled off-axis digital holography has been researched, and several theoretical and experimental results confirm the possibility of complete recovery of image data under specific conditions [22–25]. In other words, acceptable signal retrieval of the band-pass signals from the digital holograms could be achieved even under the undersampling condition.

Accordingly, in this paper, a novel type of compact full-color electro-holographic 3-D display is proposed based on undersampled computer-generated holograms (US-CGHs) and the oblique projection imaging (OPI)-based longitudinal-to-lateral depth conversion (LTL-DC) scheme. For its implementation, undersampling conditions of the US-CGH that enable the complete recovery of image information are newly derived. In addition, for the simple reconstruction of full-color images, a new LTL-DC process is also proposed by employing the OPI scheme in the optical reconstruction process. Three-color off-axis US-CGHs for an input 3-D scene are calculated with their corresponding center-shifted principal fringe patterns (CS-PFPs) of the novel look-up table (NLUT) method [29–32] and then multiplexed into the CMH. Here, those center-shifts are calculated with the undersampling conditions of the CGH based on the generalized sampling theorem.

The CMH is loaded on the SLM and reconstructed into a set of full-color and color-dispersed images with their depths just by being illuminated with a multi-wavelength light source. Here, the original full-color image can be separated from the other color-dispersed images on the projected image plane with the OPI-based LTL-DC process, which enables us to view the original full-color image directly with a simple spatial-filter mask without any use of additional optical devices. To confirm the feasibility of the proposed system, its operational performances are analyzed and experiments with the test 3-D objects in motion are carried out, and the results are discussed.

2. Proposed method

2.1 CS-PFPs for the generation of US-CGHs

In fact, a 3-D object can be treated as a set of discretely-sliced 2-D images along the z-direction, where each 2-D image with its depth is regarded as a collection of self-luminous object points of light. In the NLUT method [29–32], only the Fresnel zone plates (FZPs) of the center object point Oc(x0, y0, z) on each image plane, which are called centered principal fringe patterns (C-PFPs) as defined in Eq. (1), are pre-calculated and stored [15], as seen in Fig. 1(a).

$$\begin{aligned} Tc(x,\textrm{ }y,\textrm{ }z) &\equiv \textrm{exp} \left[ {i\frac{k}{{2z}}\{ {{(x - {x_0})}^2} + {{(y - {y_0})}^2}\} } \right] \\ &\equiv \textrm{exp} \left[ {i\frac{k}{{2z}}{{(x - {x_0})}^2}} \right]\textrm{exp} \left[ {i\frac{k}{{2z}}{{(y - {y_0})}^2}} \right] \equiv Tc(x,\textrm{ }z)Tc(y,\textrm{ }z).\textrm{ } \end{aligned}$$


Fig. 1. Generation of the US-CGHs with CS-PFPs: (a) C-PFP and its reconstructed point image, (b) CS-PFP and its reconstructed point image, (c) US-CGH generated with the CS-PFPs.


In Eq. (1), Tc(x, y, z) and Tc(x, z), Tc(y, z) represent the 2-D and 1-D forms of the C-PFP, respectively, while k and z denote the wave number, given by k = 2π/λ, and the distance from the object to the hologram, respectively.

Figure 1(a) shows the C-PFP of the center point object Oc(x0, y0, z) and its reconstructed point image O′c(x0, y0, z) at the center of the output plane. In addition, Fig. 1(b) shows the center-shifted PFP (CS-PFP) in the Fresnel plane and its consequence on the output plane. That is, the CS-PFP is generated just by shifting the C-PFP in the x and y-directions by the specific values Sx and Sy, respectively.

Here, the CS-PFP whose center is moved to the new location of (x0 +Sx, y0+Sy, z) can be represented by Eq. (2), where Sx and Sy represent the center-shifts of the C-PFP in the x and y-directions, respectively.

$$\begin{aligned} &Tcs(x,\textrm{ }y,\textrm{ }z) \equiv \textrm{exp} \left[ {i\frac{k}{{2z}}\{ {{(x - {x_0} - {S_x})}^2} + {{(y - {y_0} - {S_y})}^2}\} } \right]\\ &\qquad \equiv \textrm{exp} \left[ {i\frac{k}{{2z}}{{(x - {x_0} - {S_x})}^2}} \right]\textrm{exp} \left[ {i\frac{k}{{2z}}{{(y - {y_0} - {S_y})}^2}} \right] \equiv Tcs(x,\textrm{ }z)Tcs(y,\textrm{ }z).\textrm{ } \end{aligned}$$
As shown in Fig. 1(b), the reconstructed point image of O′cs(x0 +Sx, y0+Sy, z) from this CS-PFP is also moved to the new location of (x0 +Sx, y0+Sy, z) with the same displacements of Sx and Sy along the x and y-directions, respectively, on the output plane. Here, it must be noted that the displacements of the reconstructed image along the x and y-directions, are equal to the center-shifts of the C-PFP, which enables the NLUT method to be easily used for translating the object image to any locations with any displacements on each depth plane.
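The separable form and shift property of Eq. (2) can be sketched numerically. The following Python snippet uses illustrative values (the aperture, shifts and distance are assumptions, not the paper's parameters) to build a 2-D CS-PFP as the outer product of its two 1-D factors:

```python
import numpy as np

def cs_pfp_1d(x: np.ndarray, wavelength: float, z: float, shift: float) -> np.ndarray:
    """1-D center-shifted PFP of Eq. (2), Tcs(x, z) = exp[i k/(2z) (x - x0 - Sx)^2],
    with the center point x0 taken as 0 for simplicity."""
    k = 2.0 * np.pi / wavelength
    return np.exp(1j * k / (2.0 * z) * (x - shift) ** 2)

x = np.linspace(-2e-3, 2e-3, 512)        # 4-mm aperture (assumed)
Tx = cs_pfp_1d(x, 532e-9, 0.6, 1e-3)     # Sx = 1 mm (assumed)
Ty = cs_pfp_1d(x, 532e-9, 0.6, 0.5e-3)   # Sy = 0.5 mm (assumed)
Tcs = np.outer(Ty, Tx)                   # Tcs(x, y, z) = Tcs(y, z) Tcs(x, z)
```

The fringe center (zero local frequency) now sits at (Sx, Sy), which is exactly the displacement that the reconstructed point image acquires on the output plane.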

In fact, the spatial frequency of the CGH in the x-direction can be determined by its corresponding PFP whose spatial frequency is linearly related to the spatial coordinates as defined by Eq. (3),

$${f_{PFP}} = \frac{1}{{2\pi }}\frac{d}{{dx}}\left( {\frac{k}{{2z}}{x^2}} \right) = \frac{x}{{\lambda z}}\textrm{, }$$
Since the center-shift enlarges the spatial interval of the PFP, it is directly correlated with the spatial fringe frequency of the PFP. For instance, the spatial interval of the C-PFP without center-shift is [-(Wh+Wo)/2, (Wh+Wo)/2], and its maximum spatial fringe frequency is (Wh+Wo)/(2λz), where Wh and Wo denote the widths of the hologram and object, respectively. For the CS-PFP, the maximum fringe frequency becomes (Wh+Wo+2Sx)/(2λz), where Sx/(λz) is the bandwidth added by the off-axis reference beam.

According to the Nyquist sampling theorem, the sampling rate must be at least twice the bandwidth of the signal. This limits the highest spatial frequency of the CGH to half the sampling frequency of the SLM, as expressed in Eq. (4), where p denotes the pixel pitch of the SLM.

$$\frac{{({W_h} + {W_o} + 2{S_x})}}{{2\lambda z}} \le \frac{1}{{2p}}\textrm{. }$$
Here it is noted that Sx must be larger than (Wh+Wo)/2 to guarantee that the reconstructed object image (ROI) does not overlap with the direct current (DC) component and is separated from its conjugate term. Thus, the center-shift of the PFP along the x-direction for the oversampled off-axis CGH, $S_x^{os}$, is bounded by Eq. (5), where Wo is constrained by Eq. (6).
$$\frac{{{W_h} + {W_o}}}{2} \le S_x^{os} \le \frac{{\lambda z}}{{2p}} - \frac{{{W_h} + {W_o}}}{2}\textrm{, }$$
$${W_o} \le \frac{{\lambda z}}{{2p}} - {W_h}\textrm{.}$$
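For a concrete sense of Eq. (5), the snippet below evaluates the allowed Sx range with assumed values (10-mm hologram width, 3-mm object, z = 60 cm, the blue 472-nm line, and the 8.1-µm pixel pitch of Section 3):

```python
def sx_oversampled_bounds(Wh, Wo, wavelength, z, p):
    """Allowed center-shift range for an oversampled off-axis CGH, Eq. (5):
    (Wh+Wo)/2 <= Sx_os <= lambda*z/(2p) - (Wh+Wo)/2."""
    lo = (Wh + Wo) / 2.0
    hi = wavelength * z / (2.0 * p) - (Wh + Wo) / 2.0
    return lo, hi

# Assumed geometry: Wh = 10 mm, Wo = 3 mm, z = 60 cm, blue light, p = 8.1 um.
lo, hi = sx_oversampled_bounds(10e-3, 3e-3, 472e-9, 0.6, 8.1e-6)
print(lo, hi)  # ~6.5 mm to ~11.0 mm
```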

Hence, as seen in Fig. 1(c), US-CGHs for each color can be generated with their corresponding CS-PFPs under the undersampling condition of Sx>Sxos. That is, the R-color US-CGH, denoted as HRUS, is generated by multiplying the R-color CS-PFPs with the intensities of every object point in the 3-D object space and summing them all up, as expressed by Eq. (7).

$$H_{US}^R = \sum\limits_z^Z {\left( {\sum\limits_y^Y {\left( {\sum\limits_x^X {{I_{x,\textrm{ }y,\textrm{ }z}} \cdot Tcs(x,\textrm{ }z)} } \right)} \cdot Tcs(y,\textrm{ }z)} \right)} \textrm{ }\textrm{. }$$

Here x, y and z denote the indexes in the horizontal, vertical and longitudinal directions of the 3-D object space, and Ix, y, z indicates the intensities of the object points indexed by (x, y, z). The full-color US-CGH, which is designated as HFCUS, can be then obtained just by adding all of the US-CGHs for three-color channels, which is defined as Eq. (8).

$$H_{US}^{FC} = H_{US}^R + H_{US}^G + H_{US}^B\textrm{.}$$
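Equations (7) and (8) amount to accumulating intensity-weighted CS-PFPs over all object points and then summing the three color channels. A minimal Python sketch with a toy two-point scene (all geometry values here are assumptions for illustration, not the paper's experimental parameters):

```python
import numpy as np

def us_cgh(points, x, y, wavelength, sx, sy):
    """Single-color US-CGH of Eq. (7): H = sum over points of
    I(x0,y0,z) * Tcs(x,z) * Tcs(y,z), with center-shifts (sx, sy)."""
    k = 2.0 * np.pi / wavelength
    H = np.zeros((y.size, x.size), dtype=complex)
    for (x0, y0, z, intensity) in points:
        Tx = np.exp(1j * k / (2.0 * z) * (x - x0 - sx) ** 2)
        Ty = np.exp(1j * k / (2.0 * z) * (y - y0 - sy) ** 2)
        H += intensity * np.outer(Ty, Tx)
    return H

x = np.linspace(-2e-3, 2e-3, 256)
y = np.linspace(-2e-3, 2e-3, 256)
pts = [(0.0, 0.0, 0.60, 1.0), (0.5e-3, 0.0, 0.62, 0.8)]  # toy two-point scene
H_R = us_cgh(pts, x, y, 633e-9, 1e-3, 0.5e-3)
H_G = us_cgh(pts, x, y, 532e-9, 1e-3, 0.5e-3)
H_B = us_cgh(pts, x, y, 472e-9, 1e-3, 0.5e-3)
H_FC = H_R + H_G + H_B   # Eq. (8): the color-multiplexed hologram
```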

2.2 Undersampled computer-generated holograms (US-CGHs)

In fact, a sampling theory that supports a lowered sampling rate while keeping all information of the signal is critical in digital holography [22–25]. According to the generalized sampling theorem for bandpass signals, the sampling condition of a Fresnel hologram does not depend on its full bandwidth but on its maximum local spatial frequency, which allows the sampling rate of the CGH to be much lower than that strictly determined by the Nyquist sampling theorem.

Here in this paper, the generalized sampling theorem is extended to the scope of CGH-based electro-holography. In the Wigner domain, unlike the Nyquist sampling theorem, the sampling rate can be defined with the generalized sampling theorem as shown in Eq. (9) [24,25].

$$\frac{1}{p} \ge B_{MLB}\textrm{ }\textrm{. }$$
Here, BMLB means the maximum local bandwidth and can be defined by Eq. (10).
$$B_{MLB} = \left\{ \begin{array}{l} {B_o}, \quad \frac{1}{{\lambda z}} \ge \frac{{{B_o}}}{{{W_o}}}\\ \frac{1}{{\lambda z}}{W_o}, \quad \frac{1}{{\lambda z}} < \frac{{{B_o}}}{{{W_o}}} \end{array} \right.\textrm{. }$$
In Eq. (10), Bo and Wo indicate the bandwidth and spatial extent of the object, respectively. In the electro-holographic display, z is generally larger than Wo/(λBo) for a wide viewing area, so BMLB equals Wo/(λz). Thus, the sampling frequency can be defined by Eq. (11).
$$\frac{1}{p} = \frac{{{W_o}}}{{\lambda z}}\textrm{. }$$
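The piecewise rule of Eq. (10) and the resulting sampling frequency of Eq. (11) can be expressed directly; the object extent and distance below are illustrative assumptions:

```python
def max_local_bandwidth(Bo, Wo, wavelength, z):
    """Maximum local bandwidth B_MLB of a Fresnel hologram, Eq. (10)."""
    if 1.0 / (wavelength * z) >= Bo / Wo:
        return Bo                      # near-field branch
    return Wo / (wavelength * z)       # display branch, z > Wo/(lambda*Bo)

# Assumed display case: 3-mm object, blue 472-nm light, z = 60 cm.
Wo, lam, z = 3e-3, 472e-9, 0.6
B_MLB = max_local_bandwidth(1e6, Wo, lam, z)
print(B_MLB, 1.0 / 8.1e-6)  # required 1/p per Eq. (11) vs. the SLM's actual 1/p
```

For these values the required sampling rate is far below the SLM's 1/p, which is exactly the slack the undersampled CGH exploits.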

Even though the on-axis Fresnel hologram is considered here to prove the generalized sampling theorem with the Wigner chart [21,25], the sampling frequency of the off-axis Fresnel hologram can also be defined under the same condition, since the off-axis reference beam just shifts the Wigner chart of a Fresnel transform laterally without any change of its shape. Hence, when the object forms an angle with the reference beam and Eq. (11) is satisfied, the object information can be entirely reconstructed without aliasing, even though the reference beam adds extra bandwidth in the Fourier domain and the Nyquist sampling condition is not satisfied.

In the Nyquist sampling theorem, the sampling rate is mainly determined by the spatial extents of the object and CGH, as well as the angle of the reference beam. In the generalized sampling theorem, on the other hand, the sampling rate depends only on the spatial extent of the object when the wavelength and the distance between the object and hologram are fixed, which indicates that the spatial extent of the object can be enlarged and the angle of the reference beam need not be limited to a small value for full recovery of the original object data.

In the Fourier domain, the sampling operation causes the frequency spectrum to repeat with a period of 1/p, as seen in Fig. 2. Intrinsically, the carrier frequency of the reference light determines the location of the frequency spectrum of the CGH. Normally, only the first-interval spectrum is taken for oversampled CGHs while all of the remaining intervals are cut off, as indicated by the red rectangle in Fig. 2, which represents the situation described by Eq. (4).


Fig. 2. Periodic frequency spectra of the US-CGH.


As the carrier frequency increases, the Nyquist sampling condition expressed in Eq. (4) can no longer be met, which causes the CGH to be undersampled. However, if we take the second-interval spectrum of the CGH, as shown in the blue rectangle in Fig. 2, the term 1/(2p) on the right side of Eq. (4) becomes 1/p. Thus, a larger carrier frequency fc can be obtained, and the maximum lateral-shift distance Sxus along the x-direction for the undersampled off-axis CGH can be defined by Eq. (12).

$$S_x^{us} \le \frac{{\lambda z}}{p} - \frac{{{W_h} + W_o^{}}}{2}\textrm{.}$$
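Comparing the oversampled bound of Eq. (5) with the undersampled bound of Eq. (12) shows exactly how much extra lateral shift the second spectral interval buys; the geometry values below are assumptions:

```python
def sx_limit(Wh, Wo, wavelength, z, p, undersampled):
    """Maximum center-shift Sx.  Oversampled (first interval): Eq. (5),
    lambda*z/(2p) - (Wh+Wo)/2.  Undersampled (second interval): Eq. (12),
    lambda*z/p - (Wh+Wo)/2."""
    factor = 1.0 if undersampled else 0.5
    return factor * wavelength * z / p - (Wh + Wo) / 2.0

lam, z, p, Wh, Wo = 472e-9, 0.6, 8.1e-6, 10e-3, 3e-3   # assumed values
sx_os = sx_limit(Wh, Wo, lam, z, p, False)
sx_us = sx_limit(Wh, Wo, lam, z, p, True)
print(sx_os, sx_us)   # the undersampled bound gains exactly lambda*z/(2p)
```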

However, the lateral-shift distance of the reconstructed object is still limited by the effective reconstruction zone of the SLM. The grating structure of the SLM diffracts the incident light into different orders, as described by the grating equation below.

$$m\lambda = p\sin (\alpha \pm \beta ). $$
Here, m and p denote the diffraction order and the distance between two adjacent slits, while α and β represent the incident and diffraction angles, respectively. For the holographic display, the illuminating light is set to be perpendicular to the SLM panel, and only the 1st-order diffraction zone is utilized, since most of the light energy is distributed in the 1st-order diffraction zone while the reconstructed object images (ROIs) in the 2nd-order diffraction zone tend to be too dark to be well observed. Thus, the 1st-order diffraction angle, designated as β, can be defined as Eq. (14) under the paraxial approximation [9].
$$\beta = \arcsin \left( {\frac{\lambda }{p}} \right) \approx \frac{\lambda }{p}\textrm{. }$$
Thus, to keep the ROIs located within the 1st-order diffraction area of the SLM, the lateral-shift distance under the undersampling condition is bounded by Eq. (15), in which SxA1 denotes the distance from the center of the 0th-order diffraction to the center of the 1st-order diffraction of the SLM, for the ROIs to be located within 'Area 1' in Fig. 3. Here, Sxus should be less than SxA1 because the ROIs must lie within the boundary of SxA1.
$$S_x^{us} < S_x^{A1} = z\tan (\beta ) - {B_x} \approx \frac{{\lambda z}}{p} - {B_x}\textrm{.}$$
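Equations (14) and (15) can be evaluated for the blue line, which sets the tightest bound; since the exact value of Bx depends on the object, a 3-mm extent is assumed here for illustration:

```python
import math

def first_order_angle(wavelength, p):
    """1st-order diffraction angle of the SLM, Eq. (14)."""
    return math.asin(wavelength / p)

def sx_area1(wavelength, p, z, Bx):
    """Bound S_x^A1 = z*tan(beta) - Bx of Eq. (15)."""
    return z * math.tan(first_order_angle(wavelength, p)) - Bx

beta_B = first_order_angle(472e-9, 8.1e-6)
print(math.degrees(beta_B))                  # ~3.34 degrees for blue light
print(sx_area1(472e-9, 8.1e-6, 0.6, 3e-3))   # ~32 mm with Bx = 3 mm (assumed)
```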


Fig. 3. The 1st-order diffractions of the SLM for each of the R, G and B-color beams and locations of the ROIs along the (a) x-axis and (b) y-axis.


As seen in Fig. 3, the ROIs have to be located in the reconstruction areas boxed by the dotted rectangles. They are restricted by the 1st-order diffraction angles of the blue light along the x and y-directions, designated as βBx and βBy, respectively, because the 1st-order diffraction angles of the green and red lights, denoted as βGx, βGy and βRx, βRy, respectively, are much larger than those of the blue light. In fact, only the x-directional separation of the wanted full-color image from the other color-dispersed images can be made, as shown in Fig. 3(a), whereas in the y-direction the separation between them is smaller than the object size, so that they look overlapped, as shown in Fig. 3(b). However, the y-directional shift is also needed for the ROIs to be diagonally located in 'Area 1', since speckle noises of interference exist along the x and y-directional axes due to the rectangular shape of the SLM.

Thus, based on our experimental experience, the reconstruction quality is much improved when the ROIs are distributed along the diagonal direction of the SLM, as seen in Fig. 3. Here, the lateral shift along the y-axis of the US-CGH, Syus, can be defined by Eq. (16), where Whx and Why denote the x and y-directional spatial extents of the CGH, respectively.

$$S_y^{us} = S_x^{us}{W_{hy}}/{W_{hx}}\textrm{. }$$
Here, the value of Sxus depends on the wavelength λ and the distance z from the object to the CGH. Since three-color CGHs with different wavelengths but the same distance z are multiplexed into the CMH, the ultimate Sxus must be given by Eq. (17).
$$S_x^{us} = min\{{S_{xR}^{us}, S_{xG}^{us}, S_{xB}^{us}} \}= S_{xB}^{us}.$$
Here R, G and B denote the red, green and blue color light, respectively.
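Since the Eq. (12) bound scales with λ, evaluating it per color and taking the minimum, as Eq. (17) prescribes, always selects the blue channel. A short check with assumed geometry:

```python
def sx_us(wavelength, z, p, Wh, Wo):
    """Undersampled center-shift bound of Eq. (12)."""
    return wavelength * z / p - (Wh + Wo) / 2.0

z, p, Wh, Wo = 0.6, 8.1e-6, 10e-3, 3e-3           # assumed geometry
shifts = {lam: sx_us(lam, z, p, Wh, Wo)
          for lam in (633e-9, 532e-9, 472e-9)}     # R, G, B lines
common = min(shifts.values())                      # Eq. (17)
print(common == shifts[472e-9])                    # True: blue sets the limit
```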

2.3 Longitudinal-to-lateral depth conversion (LTL-DC)

Each of the R, G and B-holograms can be calculated based on the Fresnel diffraction integral of Eq. (18).

$$U(x,y) = \iint\limits_{\xi ,\eta } {u(\xi ,\eta )} \exp \left\{ {\frac{{i\pi }}{{\lambda z}}\left[ {{{(x - \xi )}^2} + {{(y - \eta )}^2}} \right]} \right\}d\xi \,d\eta .$$
In Eq. (18), λ and z denote the wavelength and reconstruction distance, while U(x, y) and u(ξ, η) represent the amplitudes of the hologram and object image, respectively. As shown in Eq. (18), the R, G and B-hologram patterns are computed identically except for their differences in wavelength, i.e., λR, λG and λB.

In the hologram reconstruction process, three-color lights with their wavelengths of λR, λG and λB are illuminated onto the SLM where the CMH synthesized with three-color CGHs is loaded. Then, three original color images and six color-dispersed images are simultaneously reconstructed along the optic-axis.

That is, each color light reconstructs its original color image from the matched hologram, as well as two other color-dispersed images from the unmatched holograms. In fact, these color dispersions occur due to the wavelength deviations, but they can be compensated by their corresponding depth deviations in the reconstruction process when the values of λz in Eq. (18) are kept equal for the three color lights in the hologram generation process.

For instance, when the R-hologram with the value of λRz0 is illuminated with the G-light, the G-color image is to be reconstructed at the depth of z1 under the relation of λRz0=λGz1, which is designated as the ROI-RG (ROI from the R-color hologram with the G-light). Thus, color dispersions can be regarded as their corresponding depth dispersions in the reconstruction process.

When an on-axis CMH loaded on the SLM is illuminated with the reference beam with an incident angle of α=0 as shown in Fig. 4, nine kinds of color object images are reconstructed along the longitudinal direction with their depths. They include the original full-color image composed of three-color images of the ROI-RR, ROI-GG and ROI-BB (ROIs from each of the R, G and B-holograms with their corresponding lights, respectively) reconstructed at the same depth plane of z, as well as six kinds of color-dispersed images reconstructed at their different depth planes, which are designated as the ROI-RG & ROI-RB (ROIs from the R-hologram with each of the G and B-lights), ROI-GR & ROI-GB (ROIs from the G-hologram with each of the R and B-lights) and ROI-BR & ROI-BG (ROIs from the B-hologram with each of the R and G-lights), respectively.
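The nine reconstruction depths follow from the compensation relation λhz0 = λrz: a hologram computed for wavelength λh and replayed with λr forms its image at z = λhz0/λr. A short sketch using the paper's wavelengths and z0 = 60 cm:

```python
z0 = 0.60                                   # common hologram distance (60 cm)
lams = {"R": 633e-9, "G": 532e-9, "B": 472e-9}

# depth of ROI-<hologram><light>: z = lam_hologram * z0 / lam_light
depths = {f"ROI-{h}{r}": lams[h] * z0 / lams[r]
          for h in lams for r in lams}

# Matched pairs (RR, GG, BB) all land at z0 and overlap as the full-color image:
print(depths["ROI-RR"], depths["ROI-GG"], depths["ROI-BB"])
# The two nearest color-dispersed images straddle z0:
print(depths["ROI-BG"], depths["ROI-GB"])
```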


Fig. 4. Schematic of the ROIs from the on-axis CMH along the z-direction with their depths.


Here, all those longitudinal ROIs, which are different in depth and color, might be viewed to the observer as a noise-like mixture of color images, which means it is impossible to separate the original full-color image from the other color-dispersed images on the output plane.

In fact, the depth gaps of the original full-color image from each of the color-dispersed images along the longitudinal direction appear to be relatively large, where the color-dispersed image closest to the original full-color image is the ROI-BG. As seen in Fig. 4, for the case of z=60 cm, λB=0.472 µm and λG=0.532 µm, the depth gap between the full-color image and the ROI-BG is estimated to be z(1 − λB/λG) ≈ 6.77 cm, which shows that the original full-color image can be marginally separated from the color-dispersed ROIs along the longitudinal direction, even though all the longitudinal color images are viewed by the observer as a noise-like mixture on the output plane.

Here a new approach to effectively separate the original full-color image from the other color-dispersed images based on a ray-optical reconstruction process of the CMH, which is synthesized with three-color off-axis US-CGHs, is proposed. From the undersampled CMH (US-CMH), the same set of the original full-color and six other color-dispersed images are also reconstructed along the longitudinal direction like the case of the on-axis CMH of Fig. 4. However, those color images are reconstructed along the z-axis whose axis is laterally shifted from its origin due to the angle of the reference beam from the object as seen in Fig. 5.


Fig. 5. Schematic diagrams of the OPI-based LTL-DC process: (a) Projected longitudinal depths on the image plane by the OPI, (b) Geometric relationship between the longitudinal and lateral depths of the ROIs with the OPI, (c) Triangular relations between the original full-color and color-dispersed images projected on the display plane, (d) Lateral shift-dependent expansion of the lateral gap distances among the ROIs.


Now, the ROIs from the US-CMH can be made separable along the laterally-shifted z-axis if each of the reconstructed images is regarded as an independent light beam. That is, each light beam of the reconstructed images is emitted from the same origin but propagates in a different direction due to the off-axis geometry, forming a projected image at a specific depth plane, where all the projected images can be separated from each other with their relative separation gaps, as seen in Fig. 5(a).

Thus, an oblique projection imaging (OPI) scheme is proposed here. In fact, the light beams of the ROIs from each of the three-color US-CGHs generated with their corresponding CS-PFPs are emitted in mutually different directions. As mentioned above, a set of ROIs is distributed along a new z′-axis that is parallel to the original z-axis but laterally shifted by the amount of the center-shift of the CS-PFP, as shown in Fig. 5(a). Thus, at a specific longitudinal distance, a set of obliquely-projected light beams of the ROIs is distributed along the new x′-axis, which is parallel to the original x-axis. This OPI process transforms the longitudinal depth gaps between the ROIs into their corresponding lateral gaps on the projected reconstruction plane, which is called here the longitudinal-to-lateral depth conversion (LTL-DC) process. The LTL-DC process makes it possible for only the wanted full-color image to be spatially separated from the other color-dispersed images on the projected display plane and viewed directly with a simple window-filter mask without any additional optical devices.

Here it must be noted that the depth gap between the wanted full-color image and the nearest color-dispersed image, the ROI-GB, determines whether the wanted full-color image can be completely separated from the ROI-GB on the projected image plane. Of course, the depth-gap distances of the reconstructed images along the longitudinal direction are shrunk into their corresponding lateral gap distances during the LTL-DC process according to the rules of ray geometry. The key factor to expand the gap between the wanted full-color image and the ROI-GB is the tilt angle of the wanted full-color image from the original z-axis, as shown in Fig. 5(b), which is directly related to the lateral-shift distance of OA1.

For instance, as seen in Fig. 5(c), the projected gap distances x2 and x3 resulting from the LTL-DC process are linearly proportional to the lateral-shift distance x1 with the ratios |z2−z1|/z2 and |z1−z3|/z3, as given by Eqs. (19) and (20), respectively.

$$\frac{{{z_2}}}{{|{{z_2} - {z_1}} |}} = \frac{{{x_1}}}{{{x_2}}}, \frac{{{z_3}}}{{|{{z_1} - {z_3}} |}} = \frac{{{x_1}}}{{{x_3}}},$$
$${x_2} = {x_1}\frac{{|{{z_2} - {z_1}} |}}{{{z_2}}},{x_3} = {x_1}\frac{{|{{z_3} - {z_1}} |}}{{{z_3}}}.\textrm{ }$$

Here z1, z2 and z3 denote the longitudinal distances of the wanted full-color image, ROI-GB and ROI-BG, respectively, where the ROI-GB and ROI-BG are the two color-dispersed images closest to the original full-color image.
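Equations (19) and (20) can be checked numerically. With z1 = 60 cm and x1 = 25,687 µm (the values from the example discussed in this section), the projected gap to the ROI-GB comes out near 2.9 mm, consistent with the text:

```python
def projected_gap(x1, z_roi, z1):
    """Lateral gap on the projection plane, Eq. (20): x = x1*|z_roi - z1|/z_roi."""
    return x1 * abs(z_roi - z1) / z_roi

lam_G, lam_B = 532e-9, 472e-9
z1 = 0.60                          # depth of the full-color image
z2 = lam_G / lam_B * z1            # ROI-GB (G hologram replayed with B light)
z3 = lam_B / lam_G * z1            # ROI-BG (B hologram replayed with G light)
x1 = 25.687e-3                     # lateral shift from the paper's example

x2 = projected_gap(x1, z2, z1)
x3 = projected_gap(x1, z3, z1)
print(x2, x3)  # x2 ~ 2.897 mm, consistent with the 2,898-um value in the text
```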

As seen in Fig. 5(d), the separation distances between the original full-color image and the dispersed images become wider as the lateral shift distance increases from the boundary distance Sxos of the oversampled CGH to Sxus of the undersampled CGH. For instance, when the lateral spatial extent Wo of the object is set to 2,800 µm in one case and 3,000 µm in the other, the corresponding values of Sxus are 25,787 µm and 25,687 µm, respectively, according to Eq. (15) under the conditions of z1 = 60 cm and λ = λB = 0.472 µm. As a result, the separation distance between the original full-color image and the ROI-GB is calculated to be x2 = 2,909 µm for Wo = 2,800 µm and x2 = 2,898 µm for Wo = 3,000 µm, which confirms that the full-color image can be fully separated from the other color-dispersed images when the spatial extent of the object is no larger than 2,800 µm, whereas some overlap may occur when its spatial extent reaches 3,000 µm.
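These two cases can be checked numerically. The sketch below evaluates the maximum undersampling center-shift of Eq. (15) and the projected gap of Eq. (20) to the ROI-GB; the function and variable names are our own, and the ROI-GB distance z2 = 67.63 cm is taken from Table 1.

```python
# Numerical check of Eq. (15) (maximum undersampling center-shift) and
# Eq. (20) (projected lateral gap to the ROI-GB). Names are our own;
# parameter values follow the text.
LAM_B = 0.472        # B-color wavelength [um]
P = 8.1              # SLM pixel pitch [um]
Z1 = 60e4            # distance of the full-color image [um] (60 cm)
Z2 = 67.63e4         # distance of the nearest ROI-GB [um] (67.63 cm)
W_H = 1920 * P       # hologram width [um]

def s_x_us(w_o):
    """Maximum lateral center-shift of the undersampled CGH, Eq. (15)."""
    return LAM_B * Z1 / P - (W_H + w_o) / 2

def gap_to_roi_gb(x1):
    """Projected lateral gap x2 on the display plane, Eq. (20)."""
    return x1 * abs(Z2 - Z1) / Z2

for w_o in (2800.0, 3000.0):
    s = s_x_us(w_o)
    print(f"Wo = {w_o:.0f} um: Sxus = {s:.0f} um, x2 = {gap_to_roi_gb(s):.0f} um")
```

Running it reproduces the quoted Sxus values of 25,787 µm and 25,687 µm and the corresponding gaps of 2,909 µm and 2,898 µm.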

3. Experiments and results

3.1 Overall experimental setup of the proposed system

Figure 6 shows an experimental setup of the proposed system which is composed of the digital and optical processes. In the digital process of Fig. 6(a), three-color off-axis US-CGHs for the input 3-D scenes are generated with their corresponding CS-PFPs of the NLUT method and then multiplexed into the CMHs.


Fig. 6. Overall experimental setup: (a) Digital generation of the US-CMH, (b) Optical setup for reconstruction and display, (c) Conceptual diagram of the proposed full-color holographic 3-D display system.


In the optical process of Fig. 6(b), three R, G and B-color lasers (Stradus 635, Stradus 530 and Stradus 470, VORTRAN Laser Technology) with wavelengths of 633 nm, 532 nm and 472 nm are combined and used as a multi-wavelength light source. This light source is collimated and expanded by the beam collimator (BC) and beam expander (BE) (Model: HB-4XAR.14, Newport), and then illuminates the SLM on which the undersampled CMHs are loaded. As the SLM, a reflection-type amplitude/phase-modulation mode SLM (Model: HOLOEYE LC-R-1080) with a resolution of 1,920×1,200 pixels and a pixel pitch of 8.1 µm is employed.

In the display process, one original full-color image and six kinds of color-dispersed images are reconstructed on the projected image plane with the OPI scheme. The lateral gap between the wanted full-color image and the nearest color-dispersed image, the ROI-GB, must be kept larger than the spatial extent of the object to guarantee that the original full-color image is completely separated from the other color-dispersed images and can be viewed with a simple windowed filter mask. Thus, with the combined use of the US-CGHs and the OPI scheme, a simple single-SLM full-color holographic 3-D display can be implemented. This significantly decreases the complexity of the whole electro-holographic display system, since no optical devices other than a simple filter mask are used in the display process, which offers great convenience in many practical application fields.

3.2 Generation of US-CGHs and US-CMH with the CS-PFPs

Three-color US-CGHs are generated with the CS-PFPs of the NLUT method. That is, three R, G and B-color CS-PFPs are pre-calculated and stored, where the center shift for each color is set to 25,000 µm and the wavelengths are 0.633 µm, 0.532 µm and 0.472 µm, respectively. In addition, the PFP size is set to 2,880×1,920 pixels, determined by adding the resolutions of the input image and the CGH in the x and y-directions, where the resolutions of the input object image and the CGH are set to 960×720 and 1,920×1,200 pixels, respectively. For the test scenarios, two sets of 30 and 36 frames of 3-D scenes for the full-color ‘Cube’ and ‘Airplane’ objects in motion are generated with 3DS MAX.
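The quoted PFP extent follows directly from the stated sum rule; a trivial check (variable names are ours):

```python
# PFP size as the element-wise sum of the input-image and CGH resolutions,
# as stated in the text (variable names are ours, not from the paper).
img_res = (960, 720)     # input object image resolution [pixels]
cgh_res = (1920, 1200)   # CGH / SLM resolution [pixels]
pfp_res = tuple(a + b for a, b in zip(img_res, cgh_res))
print(pfp_res)  # (2880, 1920)
```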

In the experiment, as mentioned above, two input 3-D video scenarios are used as test cases: a full-color ‘Cube’ object whose shape continuously varies, and a full-color ‘Airplane’ rotating in the clockwise direction. Every input 3-D image has 256 depth planes, where each depth plane is composed of 320×240 pixels. In addition, the sampling rates on the x-y plane and along the z-direction are set to 0.025 mm and 0.1 mm, respectively. Figures 7(a) and 7(b) show six ‘Cube’ scenes of the 1st, 6th, 12th, 18th, 24th and 30th frames among the 30 frames, and six ‘Airplane’ scenes of the 1st, 8th, 15th, 22nd, 29th and 36th frames among the 36 frames, respectively.


Fig. 7. (a1)-(a6) Six ‘Cube’ scenes of the 1st, 6th, 12th, 18th, 24th and 30th frames, and (b1)-(b6) six ‘Airplane’ scenes of the 1st, 8th, 15th, 22nd, 29th and 36th frames. (see Visualization 1 and Visualization 2)


Figure 8(a) shows the CS-PFP patterns for each of the R, G and B-color channels. They look very similar, but their spatial frequencies are much higher than those of the C-PFP patterns, since no fringes with relatively wide pitches representing slow amplitude variations appear in the CS-PFP patterns. Figure 8(b) shows the three R, G and B-color US-CGHs calculated with their corresponding CS-PFP patterns for the input object. As seen in Fig. 8(b), all the calculated US-CGHs look very smooth without any significant fringes, because they were generated with CS-PFP patterns having very high spatial frequencies.


Fig. 8. (a) Pre-calculated CS-PFPR, CS-PFPG and CS-PFPB patterns for each of the R, G and B colors, (b) US-CGHR, US-CGHG, and US-CGHB generated with each of them in (a) for the input object.


Now, the complex full-color US-CMH can be synthesized just by adding the R, G and B-color US-CGH patterns of Fig. 8(b) together. Then, only the real-part of the full-color US-CMH is taken and loaded on the SLM for the optical reconstruction.
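The multiplexing step can be sketched as follows, using random stand-in arrays for the three complex US-CGHs; only the structure (the sum of Eq. (8) followed by real-part extraction) follows the text.

```python
import numpy as np

# Color-multiplexing of the three complex US-CGHs into the US-CMH (Eq. (8))
# and extraction of the real part for display on the SLM. The holograms
# here are random stand-ins, not actual CGH data.
rng = np.random.default_rng(0)
shape = (1200, 1920)  # SLM resolution (rows, columns)
h_r = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
h_g = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
h_b = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

h_cmh = h_r + h_g + h_b        # complex full-color US-CMH
slm_pattern = h_cmh.real       # only the real part is loaded on the SLM
print(slm_pattern.shape)       # (1200, 1920)
```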

Here, in the generation of the three-color US-CGHs with their corresponding CS-PFP patterns, the resolutions of the hologram and each test image are set to 1,920×1,200 and 320×240, while their pixel pitches are set to 8.1 µm and 25 µm, respectively. The critical lateral center-shift Sx is set to 25,000 µm, which is much larger than the Sxos of 8,306 µm but less than the maximum Sxus of 25,687 µm. The vertical center-shift Syus is also set to 15,625 µm (= 25,000 µm × 1,200/1,920). With all these parameter values, the three-color US-CGHs are generated for each test object and then multiplexed into their corresponding US-CMHs according to Eq. (8).
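The vertical center-shift quoted above is just the horizontal shift scaled by the SLM aspect ratio; a one-line check (names are ours):

```python
# Vertical center-shift obtained by scaling the horizontal one by the
# SLM's row/column ratio, as in the text (variable names are ours).
s_x = 25_000.0               # horizontal center-shift [um]
s_y = s_x * 1200 / 1920      # scale by 1,200/1,920
print(s_y)  # 15625.0
```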

3.3 Direct reconstruction of the US-CMH

Figure 9 shows the ROIs captured with the CCD camera for the ‘Airplane’ test scenario, where nine kinds of ROIs, namely the original full-color ROI (= ROI-RR + ROI-GG + ROI-BB) and six color-dispersed ROIs (ROI-RB, ROI-RG, ROI-GB, ROI-BG, ROI-GR, ROI-BR), are reconstructed and diagonally projected on the display plane. As shown in Fig. 9, we can visually confirm the marginal lateral separation of the original full-color image from the other color-dispersed images, particularly from the two nearest color-dispersed images, the ROI-GB and ROI-BG.


Fig. 9. Captured ROIs mutually separated with their lateral gaps on the projected output plane.


Here, the horizontal and vertical extents of the object image are estimated to be about 3,000 µm and 2,600 µm, and the horizontal separation gap between the original full-color image (ROI-RR + ROI-GG + ROI-BB) and the ROI-GB is evaluated to be about 2,800 µm when the lateral shift in the x-direction, Sx, is set to 25,000 µm. Correspondingly, the vertical separation gap is about 1,750 µm when the vertical shift in the y-direction, Sy, is set to 15,625 µm. Finally, by the Pythagorean theorem, the diagonal separation is 3,302 µm, so the full-color airplane image can be completely separated from the other color-dispersed images.
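The diagonal value follows from the two measured gaps; a quick check (names are ours):

```python
import math

# Diagonal separation from the measured horizontal and vertical gaps,
# via the Pythagorean theorem (gap values taken from the text).
gap_x = 2800.0   # horizontal separation gap [um]
gap_y = 1750.0   # vertical separation gap [um]
gap_diag = math.hypot(gap_x, gap_y)
print(round(gap_diag))  # 3302
```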

In addition, the ROIs for the ‘Cube’ test scenario are also successfully reconstructed, where nine kinds of ROIs, composed of one original full-color ROI and six other color-dispersed ROIs, are diagonally projected on the display plane.

Figure 10 shows the reconstructed ROIs of the 1st, 6th, 12th, 18th, 24th and 30th frames, and of the 1st, 8th, 15th, 22nd, 29th and 36th frames, on the projected image planes with and without spatial filtering for the ‘Cube’ and ‘Airplane’ test scenarios, respectively. To validate the proposed OPI-based LTL-DC scheme, both the calculated and measured lateral gap distances between the original full-color image and the other color-dispersed images on the reconstruction plane are listed in Table 1, where all the values are calculated according to Eqs. (19) and (20) with x1 = 25,000 µm. Here, the longitudinal distances z of the ROI-RB, ROI-RG, ROI-GB, the full-color image (ROI-RR + ROI-GG + ROI-BB), ROI-BG, ROI-GR and ROI-BR are calculated to be 80.47 cm, 71.39 cm, 67.63 cm, 60 cm, 53.23 cm, 50.43 cm and 44.74 cm, respectively, which indicates that the lateral separation gaps of the six color-dispersed images from the wanted full-color image can be estimated to be 6.36 mm, 3.99 mm, 2.82 mm, 3.18 mm, 4.74 mm and 8.53 mm, respectively. In addition, these values are directly measured from the ROIs on the display plane, as shown in Fig. 9, and compared with the calculated ones.
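The calculated gaps in Table 1 can be reproduced directly from Eq. (20) with the longitudinal distances quoted above and the full-color image at z1 = 60 cm; the dict and its key names below are our own.

```python
# Reproducing the calculated lateral gaps of Table 1 from Eq. (20):
# x = x1 * |z - z1| / z, with x1 = 25 mm and z1 = 60 cm.
X1 = 25.0   # lateral center-shift [mm]
Z1 = 60.0   # longitudinal distance of the full-color image [cm]

roi_z = {   # longitudinal distances of the color-dispersed ROIs [cm]
    "ROI-RB": 80.47, "ROI-RG": 71.39, "ROI-GB": 67.63,
    "ROI-BG": 53.23, "ROI-GR": 50.43, "ROI-BR": 44.74,
}
for name, z in roi_z.items():
    gap = X1 * abs(z - Z1) / z   # lateral gap on the display plane [mm]
    print(f"{name}: {gap:.2f} mm")
```

The printed gaps (6.36, 3.99, 2.82, 3.18, 4.74 and 8.53 mm) match the calculated column of Table 1.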


Fig. 10. (a1)-(a6) and (a7)-(a12) Reconstructed original full-color images of the 1st, 6th, 12th, 18th, 24th and 30th frames of the ‘Cube’ video scenario before and after the spatial filtering processes, (b1)-(b6) and (b7)-(b12) Reconstructed original full-color images of the 1st, 8th, 15th, 22nd, 29th and 36th frames for the ‘Airplane’ video scenario before and after the spatial filtering processes, respectively.



Table 1. Comparison results of the calculated and measured gap separations between the full-color image and other color-dispersed images.

As shown in Table 1, all the measured values are found to match the calculated ones well, with very small differences, which confirms the feasibility of the proposed OPI-based LTL-DC scheme. The errors for the ROI-GR and ROI-BR cases might result from a wavelength shift of the R-color laser employed in the experiments due to its physical aging, but this does not affect the lateral separation of the full-color image from the other color-dispersed images, because those ROIs are located far from the wanted image. Nevertheless, a wavelength correction needs to be considered to make the ROI-RR match the ROI-GG and ROI-BB well, simply by slightly shifting the R-color US-CGH.

As seen in Fig. 10, the two test scenarios are optically reconstructed, and the full-color images of the 1st, 6th, 12th, 18th, 24th and 30th frames for the ‘Cube’ test scenario and the 1st, 8th, 15th, 22nd, 29th and 36th frames for the ‘Airplane’ test scenario are shown in Figs. 10(a1)-(a12) and 10(b1)-(b12), respectively. From the reconstructed images of both scenarios, we can see that the full-color images are fully separated from the other color-dispersed images, especially for the ‘Cube’ test scenario, since the ‘Cube’ object is much smaller than the separation gap between the ROI-GB and ROI-BG in any lateral direction. For the ‘Airplane’ test scenario, the lateral separation from the ROI-GB and ROI-BG is not much wider than the wing width of the ‘Airplane’, but the full-color images can still be separated from them without any problem.

3.4 Performance analysis of the proposed system

First, the successful experiments with the test video scenarios on the implemented prototype confirm that the proposed method can fully reconstruct the original full-color 3-D images without any color-dispersion effect. In addition, all the reconstructed full-color 3-D images can be viewed simply with a spatial filter mask, which also confirms that the proposed display system can be implemented as a simple structure without any additional optical devices or systems. As seen in Fig. 10, the full-color object images have been successfully reconstructed, but their color contrast does not match that of the original test images exactly; this could, however, be improved by balancing the three-color laser sources more carefully.

Second, due to the short wavelength of the practical B-color laser and the limited reconstruction distance and hologram size, the full-color images reconstructed by the proposed system could not be made as large as the size defined by Eq. (6). To increase the size of the ROI, the distance from the object to the hologram, as well as the wavelength of the laser, needs to be increased. However, even with this approach, the ROI size remains very limited in practice. Thus, for practical magnification of the ROI, a specially-designed optical lens system seems to be needed.

Third, for video demonstrations of the two kinds of reconstructed full-color 3-D object images in motion, 30- and 36-frame videos for the ‘Cube’ and ‘Airplane’ test scenarios are generated, named ‘Visualization 1’ and ‘Visualization 2’, respectively, and attached to Fig. 10 as video files. These two video demonstrations, showing the separated reconstruction of the full-color object images in motion without any color-dispersion effect, clearly reveal the feasibility of the proposed system for practical full-color holographic 3-D displays.

4. Conclusions

In this paper, a compact full-color holographic 3-D display system based on the new concepts of US-CGHs and OPI-based LTL-DC has been proposed and implemented. Three-color US-CGHs are generated with their corresponding CS-PFPs and multiplexed into the US-CMHs. From these US-CMHs, original full-color 3-D object images have been completely reconstructed, spatially separated from the other color-dispersed images on the projected image plane by the proposed OPI-based LTL-DC scheme, and directly viewed with a simple spatial-filter mask on the display plane. Successful experiments with the test video scenarios on the implemented prototype confirm the feasibility of the proposed system in practical application fields.

Funding

Ministry of Science and ICT, Korea, under the ITRC Support Program (IITP-2017-01629); Ministry of Education, Korea, under the Basic Science Research Program (2018R1A6A1A03025242); National Natural Science Foundation of China (61827804, 61991450).

Disclosures

The authors declare no conflicts of interest.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).

2. C. J. Kuo and M.-H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).

3. T.-C. Poon, Digital Holography and Three-Dimensional Display (Springer, 2007).

4. X. Xu, Y. Pan, P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in Proceedings of IEEE Conference on Information Photonics and Optical Communications (IEEE, 2011), pp. 1–4.

5. Y. Zhao, K. C. Kwon, M. U. Erdenebat, M. S. Islam, S. H. Jeon, and N. Kim, “Quality enhancement and GPU acceleration for a full-color holographic system using a relocated point cloud gridding method,” Appl. Opt. 57(15), 4253–4262 (2018).

6. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4(1), 6177 (2015).

7. H. Zhang, L. C. Cao, and G. F. Jin, “Three-dimensional computer-generated hologram with Fourier domain segmentation,” Opt. Express 27(8), 11689–11697 (2019).

8. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008).

9. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company, 2005).

10. T. Shimobaba and T. Ito, “A color holographic reconstruction system by time division multiplexing with reference lights of laser,” Opt. Rev. 10(5), 339–341 (2003).

11. M. Oikawa, T. Shimobaba, T. Yoda, and H. Nakayama, “Time-division color electroholography using one-chip RGB LED and synchronizing controller,” Opt. Express 19(13), 12008–12013 (2011).

12. W. Zaperty, T. Kozacki, and M. Kujawińska, “Native frame rate single SLM color holographic 3D display,” Photonics Lett. Pol. 6(3), 93–95 (2014).

13. S. F. Lin and E. S. Kim, “Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods,” Opt. Express 25(10), 11389–11404 (2017).

14. T. Shimobaba, T. Takahashi, N. Masuda, and T. Ito, “Numerical study of color holographic projection using space-division method,” Opt. Express 19(11), 10287–10292 (2011).

15. M. Makowski, M. Sypek, and A. Kolodziejczyk, “Colorful reconstructions from a thin multi-plane phase hologram,” Opt. Express 16(15), 11618–11623 (2008).

16. T. Kozacki and M. Chlipala, “Color holographic display with white light LED source and single phase only SLM,” Opt. Express 24(3), 2189–2199 (2016).

17. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473–18482 (2014).

18. S. F. Lin, H. K. Cao, and E. S. Kim, “Single SLM full-color holographic three-dimensional video display based on image and frequency-shift multiplexing,” Opt. Express 27(11), 15926–15942 (2019).

19. S. F. Lin, D. Wang, Q. H. Wang, and E. S. Kim, “Full-color holographic 3D display system using off-axis color-multiplexed-hologram on single SLM,” Opt. Lasers Eng. 126, 105895 (2020).

20. Z. Han, B. Yan, Y. Qi, Y. Wang, and Y. Wang, “Color holographic display using single chip LCOS,” Appl. Opt. 58(1), 69–75 (2019).

21. M. Testorf, B. Hennelly, and J. Ojeda-Castañeda, Phase-Space Optics: Fundamentals and Applications (McGraw-Hill, 2010).

22. N. Demoli, H. Halaq, K. Šariri, M. Torzynski, and D. Vukicevic, “Undersampled digital holography,” Opt. Express 17(18), 15842–15852 (2009).

23. K. Grebenyuk, A. Grebenyuk, and V. Ryabukho, “Digital off-axis holography: reconstruction from undersampled pattern,” AIP Conf. Proc. 1537(1), 102–106 (2013).

24. A. Stern and B. Javidi, “General sampling theorem and application in digital holography,” Proc. SPIE 5557, 110–123 (2004).

25. A. Stern and B. Javidi, “Improved-resolution digital holography using the generalized sampling theorem for locally band-limited fields,” J. Opt. Soc. Am. A 23(5), 1227–1235 (2006).

26. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018).

27. Z. Zhong, H. J. Zhao, L. C. Cao, and M. G. Shan, “Automatic cross filtering for off-axis digital holographic microscopy,” Results Phys. 16, 102910 (2020).

28. B. Dong, W. Xian, F. Pan, and L. Che, “High-precision spherical subaperture stitching interferometry based on digital holography,” Opt. Lasers Eng. 110, 132–140 (2018).

29. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel lookup table method,” Appl. Opt. 47(19), D55–D62 (2008).

30. H. K. Cao and E. S. Kim, “Full-scale one-dimensional NLUT method for accelerated generation of holographic videos with the least memory capacity,” Opt. Express 27(9), 12673–12691 (2019).

31. S.-C. Kim, J.-M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012).

32. H. K. Cao, S. F. Lin, and E. S. Kim, “Accelerated generation of holographic videos of 3-D objects in rotational motion using a curved hologram-based rotational-motion compensation method,” Opt. Express 26(16), 21279–21300 (2018).

Supplementary Material (2)

Visualization 1       30-frame video of the ‘Cube’ test scenario, confirming the feasibility of the proposed method for separating the full-color object image from the color-dispersed images.
Visualization 2       36-frame video of the ‘Airplane’ test scenario, confirming the feasibility of the proposed method for separating the full-color object image from the color-dispersed images.



