Optica Publishing Group

Miniaturization and image optimization of a full-color holographic display system using a vibrating light guide

Open Access

Abstract

In this study, a miniaturized holographic reconstruction system that uses a single spatial light modulator to achieve full-color image reconstruction was developed. The system combines light with a single light guide and is therefore less voluminous than conventional reconstruction systems. The experimental results demonstrated that the system achieved full-color display, correct light combination, and elimination of zero-order light. Vibrating the light guide disrupted the temporal coherence of the laser beams, rendering the speckle in the reconstructed image almost imperceptible to the human eye.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Holographic displays employ 3D display technology to generate realistic imagery [1]. Holographic technologies can reconstruct 3D information and provide full depth accommodation. Therefore, using holographic technologies in 3D displays, virtual reality, and augmented reality can help address the vergence–accommodation conflict of conventional stereoscopic displays [2]. The advantage of computer-generated holography over conventional recording processes lies in its use of computer-simulated interference fringes between the light reflected from real objects and the reference beam. This reduces the complexity of the holographic display and increases the quality of the reconstructed images [3]. The main objectives of current research on computer-generated holography include improving the quality of reconstructed images, achieving full-color holographic reconstruction, and reducing the effects of laser speckle [4]. Full-color holographic displays require three light sources (red, green, and blue [RGB]) as well as spatial light modulators (SLMs) and the corresponding information for reconstruction [5]. Color mixing methods can be classified as time-multiplexing, spatial multiplexing, frequency division, frequency multiplexing, and angular division methods. Time-multiplexing methods exploit persistence of vision and switch images rapidly to achieve a full-color effect. However, they rely on high-frame-rate SLMs to overcome flickering and are complicated by problems of synchronous communication; they therefore often use ferroelectric liquid crystal materials. For example, if the switching speed of a nematic liquid crystal SLM is 60 Hz, the maximum switching speed for each color in a full-color display is reduced to 20 Hz, which greatly degrades the color combination and display quality.
By contrast, the switching speeds of ferroelectric liquid crystal SLMs can reach several hundred hertz [6]. Spatial multiplexing methods divide the RGB beams into separate SLM channels and use the corresponding light sources to reconstruct, mix, and produce full-color displays [7]. Spatial multiplexing methods are easier to understand and use than time-multiplexing methods, and they do not produce flickering images or color breakup [8]. Consequently, spatial multiplexing methods do not require the switching speed of the light sources to be matched to the refresh rates of the displays and can effectively minimize variability under experimental conditions. However, spatial multiplexing methods require more space and optical devices and are associated with problems such as color dispersion and loss of resolution [9]. The frequency division method is a multiplexing technique that encodes the RGB component holograms into distinct Fourier spaces of a single synthesized RGB complex hologram [10]. When the SLM is illuminated by a white light beam, the holographic display decodes the RGB complex wavefields of the full-color synthesized hologram. Based on the concept of the frequency division method, several color-encoding methods have been proposed, including frequency multiplexing [11] and angular division [12] methods. Although color dispersion and loss of resolution have been improved, laser speckle still causes the resulting images to have poor quality.

The SLM acts as a holographic diffraction grating during the image reconstruction process. According to diffraction theory, images of different colors are reconstructed at different locations. Therefore, the image location must be corrected during color mixing [13]. During reconstruction, zero-order light and laser speckle often cause the resulting composite color images to have poor quality [14]. In this study, a full-color holographic 3D display system was developed, and algorithms and optical systems were used for image optimization. A single SLM and specially designed optical components were integrated to reduce the complexity and volume of the proposed system. The system successfully superposed holographic images and minimized the effects of zero-order light and laser speckle. The image quality was evaluated to assess the feasibility of using the proposed system in real-life applications.

2. Methodology

2.1 Operation principle

In this study, the 3D modified Gerchberg–Saxton algorithm (MGSA) proposed by the research team in 2017 was used to compute the holographic information [15]. The holographic reconstruction formula is used herein to explain why the resolution and diffraction offset of the reconstructed images of the three primary colors (RGB) differed during reconstruction of the target image [8]. The resolution of a reconstructed image can be calculated through substitution and the Fresnel transform (FrT), as in Eq. (1):

$${E^{\prime}}(\xi ,\eta ) = \frac{j}{{\lambda z}}{e^{ - j\frac{{2\pi }}{\lambda }z}}{e^{ - j\frac{\pi }{{\lambda z}}({\xi ^2} + {\eta ^2})}}\int\!\!\int_{ - \infty }^\infty {[E(x,y){e^{ - j\frac{\pi }{{\lambda z}}({x^2} + {y^2})}}]{e^{j\frac{{2\pi }}{{\lambda z}}(x\xi + y\eta )}}\,dx\,dy} $$
$$\triangle \xi = \frac{{\lambda z}}{{M\triangle x}};\triangle \eta = \frac{{\lambda z}}{{M\triangle y}}$$

In Eq. (2), $\triangle x$ represents the pixel size of the holographic display or recorder, M represents the number of pixels of the holographic display or recorder, $\triangle \xi$ represents the pixel size of the reconstructed image, $\lambda$ represents the optical wavelength of the laser, and z represents the reconstructed distance. The pixel size $\triangle x$ and number of pixels M of the holographic display or recorder and the reconstructed distance z are fixed values. Equation (3) presents the pixel size of images reconstructed using RGB lasers with different optical wavelengths $\lambda$ as the reference light sources:

$$\triangle {\xi _R} = \frac{{{\lambda _R}z}}{{M\triangle x}};\,\triangle {\xi _G} = \frac{{{\lambda _G}z}}{{M\triangle x}};\,\triangle {\xi _B} = \frac{{{\lambda _B}z}}{{M\triangle x}}$$

According to Eq. (3), when RGB lasers with different optical wavelengths $\lambda$ are used as the reference light sources, different reconstructed pixel sizes $\triangle \xi$ are obtained, causing the sizes of the reconstructed images to differ. The wavelengths of the red laser ${\lambda _R}$, green laser ${\lambda _G}$, and blue laser ${\lambda _B}$ used in this study were 632, 532, and 473 nm, respectively. The ratio of $\triangle {\xi _R}$, $\triangle {\xi _G}$, and $\triangle {\xi _B}$ of the reconstructed images was calculated as follows:

$$\begin{aligned} \triangle {\xi _R} :\triangle {\xi _G}:\triangle {\xi _B} &= \frac{{{\lambda _R}z}}{{M\triangle x}}:\frac{{{\lambda _G}z}}{{M\triangle x}}:\frac{{{\lambda _B}z}}{{M\triangle x}}\\& = {\lambda _R}:{\lambda _G}:{\lambda _B}\\& = 632:532:473 \end{aligned}$$

According to Eq. (4), the ratio of the pixel sizes of the RGB reconstructed images was 632:532:473. Because the sizes of the images differed, the images could not be directly superposed. To overcome this problem, the 3D MGSA was used for image correction. Because the red wavelength > green wavelength > blue wavelength, the blue wavelength was used as the standard, and the sizes of the red and green reconstructed images were corrected accordingly. Regarding input image correction, zero padding was not used to enhance the image resolution because of the limitations of the SLM resolution. Instead, a downscaling algorithm was used to correct the image resolution (R:G:B = 473/632:473/532:473/473 ≈ 0.748:0.889:1).
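As a quick check of the correction above, the downscaling factors follow directly from the wavelength ratios; a minimal sketch in Python, using the wavelengths stated in the paper:

```python
# Wavelength-dependent pixel-size correction: the blue (shortest)
# wavelength is the reference, so each channel is downscaled by
# lambda_B / lambda, giving R:G:B = 473/632 : 473/532 : 473/473.
wavelengths_nm = {"R": 632.0, "G": 532.0, "B": 473.0}

ref = wavelengths_nm["B"]                       # blue is the standard
scale = {c: ref / lam for c, lam in wavelengths_nm.items()}

print({c: round(s, 3) for c, s in scale.items()})
# R ≈ 0.748, G ≈ 0.889, B = 1.0, matching the ratio quoted above
```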

In addition to image resolution correction, the problem of reconstructed diffraction location offsetting due to the variation among the diffraction angles of different optical wavelengths was also considered. Equation (5) represents the relationship between the diffraction angle and the optical wavelength:

$${\theta _D} = \frac{{n\lambda }}{d}$$

In Eq. (5), ${\theta _D}$ represents the nth-order diffraction angle, n represents the diffraction order, $\lambda$ represents the optical wavelength, and d represents the pixel period. According to Eq. (5), under the same diffraction conditions, a longer optical wavelength $\lambda$ results in a larger diffraction angle ${\theta _D}$. A larger diffraction angle ${\theta _D}$ causes the reconstructed image to have a larger offset, which affects the full-color superposition and color combination performance. In this study, the offset parameter in MGSA spatial multiplexing was used to correct the offset, and different offsets were used for different optical wavelengths to ensure the complete superposition of the final images. The MGSA used in this study can use the optical wavelength or imaging position as encryption parameters for phase-only functions (POFs) or phase-only masks (POMs). Figure 1 shows a graphical representation of the point-based POM retrieval. The proposed MGSA is used to retrieve the POF for each object point. The POFs for all the object points are summed and modulated to obtain the final POM. A POM consists of multiple POFs; the POFs are combined using different encryption conditions on the FrT plane to achieve wavelength multiplexing or spatial multiplexing of images. Spatial multiplexing technology is an extension of the MGSA. The MGSA spatial multiplexing algorithm is illustrated in Fig. 2. The algorithm first assigns each of the N target images ${g_n}(x,y),n = 1\sim N$ to a different corresponding location and distance ${z_n}$ and then uses the MGSA to obtain the individual phase information ${\psi _{{z_n}}}({x_0},{y_0})$, as expressed in Eq. (6):

$$FrT\{{exp[j{\psi_{{z_n}}}({x_0},{y_0}) ];\lambda ;{z_n}} \}= {\hat{g}_{{z_n}}}(x,y)exp[j{\psi _{{{\hat{g}}_{{z_n}}} }}(x,y)]$$

Fig. 1. The graphical representation of the point-based POM retrieved.

Fig. 2. Flowchart of spatial multiplexing program.

In Eq. (6), ${\psi _{{{\hat{g}}_{{z_n}}} }}(x,y)$ represents the individual phase information after the FrT. To produce more image depth information, this study assigned different depth information ${z_n}$ to each target image ${g_n}({x_0},{y_0}),n = 1\sim N$. Next, the target images were transformed to different depth locations, and a fixed wavelength $\lambda$ was used. Finally, the phase information was integrated into a POM. This POM was used to decrypt each target image ${g_n}({x_1},{y_1})$ and form an approximate target image ${\hat{g}_{{z_n}}}({x_1},{y_1})$, as in Eq. (7):

$$FrT\{{exp[j{\psi^\mathrm{^{\prime}}}_{{z_n}} ({x_0},{y_0}) ];\lambda ;{z_n}} \}= {\hat{g}_{{z_n}}} (x - {\mu _n},y - {\upsilon _n})exp[j\phi (x,y)]$$

In Eq. (7), $\phi (x,y)$ represents the individual phase information after the FrT of ${\psi ^{\prime}}_{{z_n}}({x_n},{y_n}),(n = 1\sim S)$, and ${\mu _n}$ and ${\nu _n}$ represent the default offsets along the x-axis and y-axis, respectively, of the individual image ${\hat{g}_{{z_n}}}(x,y)$ in the diffraction field. The individual phases are integrated into a single POM, as in Eq. (8). During image reconstruction, the RGB reconstructed images could not be superposed because of the differing diffraction angles; therefore, image displacement was necessary, as shown in Fig. 3. The offset parameter $({\mu _n},{\upsilon _n})$ in MGSA spatial multiplexing was used to correct the offset, and different offsets were used for different optical wavelengths to ensure the complete superposition of the final images, as shown in Fig. 4.

$${H^{\prime}}({x_1},{y_1}) = exp\left\{ {j\sum\limits_{n = 1}^S {{\psi^\mathrm{^{\prime}}}_{{z_n}}({x_n},{y_n})} } \right\}$$
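Eq. (8) amounts to summing the individual phase functions and re-exponentiating them into one phase-only mask. A minimal sketch, with random phases standing in for the MGSA-retrieved POFs ${\psi'}_{z_n}$ (the $256 \times 256$ size is illustrative):

```python
import numpy as np

# Sketch of Eq. (8): sum the S individual phase functions psi'_{z_n}
# and wrap them into a single phase-only mask H' = exp(j * sum(psi')).
rng = np.random.default_rng(0)
S = 3                                        # e.g. one POF per target image
pofs = rng.uniform(-np.pi, np.pi, size=(S, 256, 256))  # stand-ins for MGSA output

pom = np.exp(1j * pofs.sum(axis=0))          # H'(x1, y1) of Eq. (8)
print(pom.shape, bool(np.allclose(np.abs(pom), 1.0)))  # phase-only: unit amplitude
```

Because the mask carries only phase (unit amplitude everywhere), it can be loaded directly onto a phase-only SLM.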

Fig. 3. Single SLM full-color holographic display: (a) before and (b) after spatial multiplexing modification.

Fig. 4. Single SLM full-color holographic display with spatial multiplexing modification.

2.2 System design

The research team observed that conventional full-color light-mixing systems are bulky. Spatial multiplexing and optical systems can easily superpose mixed colors and eliminate zero-order light. However, although spatial multiplexing can move zero-order light far from the center of the reconstruction panel, it limits the size and spatial distribution of image information within the panel. Therefore, in this study, an optical mechanism was used to achieve a full-color display, increase the reconstruction distance, and eliminate zero-order light without compromising the resolution of the panel. A miniaturized holographic reconstruction system that uses a single SLM instead of three SLMs to achieve a full-color holographic display was developed. To reduce the volume of the system and eliminate zero-order light, a light guide was used for image superposition and system optimization.

Different optical wavelengths have different diffraction angles in computer-generated holography. Therefore, problems such as different image sizes and offsets may occur during the light combination process required for producing full-color images. The diffraction angles of the RGB laser lights (Table 1) were obtained using the diffraction angle formula in Eq. (5).


Table 1. Diffraction angles of RGB lasers
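The first-order diffraction angles in Table 1 follow from Eq. (5). The sketch below assumes an 8-µm pixel pitch, a typical SLM value that is not stated here, so the absolute angles are illustrative; the wavelength ordering, however, is general:

```python
import math

# First-order diffraction angles from Eq. (5), theta_D = n*lambda/d,
# in the small-angle form of the grating equation.
d = 8e-6   # ASSUMED pixel pitch [m]; the paper does not state it here
n = 1      # diffraction order

angles_deg = {
    name: math.degrees(n * lam / d)
    for name, lam in [("R", 632e-9), ("G", 532e-9), ("B", 473e-9)]
}
for name, theta in angles_deg.items():
    print(f"{name}: {theta:.2f} deg")
# Longer wavelengths diffract at larger angles, which is why the RGB
# images land at different offsets and must be corrected.
```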

According to the calculations, the RGB lasers superpose at 500 mm after diffraction and reconstruct the full-color image. However, this distance is too long for a miniaturized system. Therefore, the proposed system incorporated a light guide, inside which the image was transmitted to reduce the volume of the system. The light guide was made of BK7 glass and had a refractive index of 1.52. To ensure that the light intensity of the reconstructed image was not attenuated, total internal reflection was used for transmission in the light guide. Snell's law was used to calculate the critical angle of total internal reflection:

$${n_1}\sin {\theta _1} = {n_2}\sin {\theta _2}$$

The critical angle of total internal reflection in the light guide ${\theta _C}$ was 41.1°. Therefore, the light guide was tilted at 42° to ensure that all the RGB images were totally reflected inside it. According to the simulation results, the optimal length of the light guide was 155 mm, and its cross-section was 16 × 14 mm². The simulation is illustrated in Fig. 5.
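The 41.1° critical angle follows from Eq. (9) with air ($n_2 = 1$) outside the guide; a one-line check:

```python
import math

# Critical angle for total internal reflection in the BK7 guide:
# n1*sin(theta_c) = n2*sin(90 deg)  =>  theta_c = arcsin(n2/n1).
n_glass = 1.52   # BK7 refractive index (from the paper)
n_air = 1.0

theta_c = math.degrees(math.asin(n_air / n_glass))
print(round(theta_c, 1))   # ~41.1 deg, so the 42 deg tilt guarantees TIR
```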

Fig. 5. Light guide simulation.

The light sources used for image reconstruction were lasers. Lasers are highly coherent and are therefore easily affected by the environment, forming constructive or destructive interference that produces speckle in reconstructed images. The wavefront of a laser can be effectively disrupted by perturbing the phase relationship between two points, thus reducing the spatial coherence of the laser. According to the Huygens–Fresnel principle and the superposition principle, wavefronts are superpositions of secondary (spherical) waves; that is, they are superposed scattered light. The spatial coherence of a laser is directly proportional to the square of the scattering angle [16,17]; therefore, smaller scattering angles produce lasers with higher directivity. Regarding total internal reflection in the static light guide, when the reconstructed image was reflected into the light guide, different light beams formed different internal reflection angles at the inner wall of the light guide. Equation (10) defines the limit of the divergence angle for which the laser remains totally reflected in the light guide:

$${90^ \circ } - {\sin ^{ - 1}}\left[ {\frac{1}{n}\sin (\frac{\theta }{2})} \right] > {\theta _C}$$

In Eq. (10), $\theta$ represents the angle of divergence, ${\theta _C}$ represents the critical angle of the light guide, and $n$ represents the refractive index of the light guide.
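Eq. (10) can be checked numerically; the 1° divergence used below is an illustrative value, not a measured system parameter:

```python
import math

# Check of Eq. (10): the beam's half-divergence, refracted into the
# guide, must leave the internal incidence angle (90 deg minus the
# refracted half-angle) above the critical angle theta_C.
def tir_holds(divergence_deg, n=1.52, theta_c_deg=41.1):
    half = math.radians(divergence_deg / 2)
    refracted = math.degrees(math.asin(math.sin(half) / n))
    return 90.0 - refracted > theta_c_deg

print(tir_holds(1.0))   # a well-collimated beam satisfies Eq. (10)
```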

Laser speckle was visible because the scattering angle of the reconstructed image was small. Previous studies conducted by the research team have revealed that vibrating the light guide disrupts the temporal coherence of the laser light and effectively eliminates laser speckle [18]. When the light guide vibrated, the displacement of the wall of the light guide changed the angle of the incident light. Under different angles of incidence, the wavefront of the reconstructed image interfered with itself as the light guide vibrated, equalizing the light intensity of the reconstructed image and thereby suppressing the laser speckle. The ratio of the standard deviation to the average of the light intensity (Eq. (11)) was defined as the speckle contrast. The variance of the light intensity was defined as the time average of the autocorrelation function of the speckle after equalization, where the autocorrelation function is derived from the probability density function of the light intensity (Eq. (12)).

$$C = \frac{{{\sigma _S}}}{{\left\langle I \right\rangle }}$$
$${\sigma _S}^2(T) = \frac{1}{T}\int {_0^T{C_\tau }(\tau )} d\tau$$
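Eq. (11) can be evaluated on a synthetic intensity field. For fully developed speckle, intensity follows exponential statistics and the contrast is close to 1, which corresponds to the unvibrated (worst-case) baseline; the vibration described above drives C toward 0. The synthetic field below is illustrative:

```python
import numpy as np

# Speckle contrast C = sigma_S / <I> from Eq. (11). Fully developed
# speckle has exponential intensity statistics, for which std = mean
# and hence C ~ 1.
rng = np.random.default_rng(0)
speckle = rng.exponential(scale=1.0, size=(512, 512))  # synthetic intensity

def speckle_contrast(intensity):
    return intensity.std() / intensity.mean()

c = speckle_contrast(speckle)
print(round(c, 2))   # close to 1.0 for raw speckle
```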

3. Experiments and results

3.1 Experimental setup

The proposed system is illustrated in Figs. 6 and 7. The light sources were a 632-nm red He–Ne laser, a 50-mW 532-nm green DPSS laser, and a 30-mW 473-nm blue DPSS laser. Each laser beam passed through a spatial filter (SF), consisting of an objective lens and a pinhole, to filter out high-frequency components. After passing through the SF, each beam became divergent. Therefore, a biconvex lens with a focal length of 150 mm was placed behind the SF to collimate the divergent light, and apertures were used to control the sizes of the light beams. The RGB lasers passed through the reflective elements and were reflected toward the SLM. The reflected red light passed through the green and blue beam paths, and the reflected green light passed through the blue beam path. A mirror was used as the reflective element for the red light, whereas two beam splitters (BSs), which both transmit and reflect light, were used as the reflective elements for the green and blue light. The SLM carried the RGB POFs. Therefore, a color filter, which only allowed light from a source corresponding to a specific region to pass through, was placed in front of the SLM to ensure that each RGB laser produced reconstructed diffraction only for its corresponding POF. After the light left the SLM, a BS reflected the reconstructed image into the light guide, which was tilted at 42°; the RGB image was thus incident obliquely into the light guide. To avoid affecting the full-color image reconstruction, a linear vibration motor was used to vibrate the light guide up and down at a frequency of 65 Hz. After the light left the light guide, light combination and reconstruction were performed, and a biconvex lens with a focal length of 200 mm was used to magnify the image by 1.5×.
Finally, a Nikon D50 camera with an 85-mm f/1.8–16 lens was placed on a tripod to simulate human vision, and the hologram was recorded. The proposed reconstruction system required a space of 40 × 30 cm², which was 86.6% less than the 100 × 90 cm² required for conventional reconstruction systems that use three SLMs for light combination.

Fig. 6. Miniaturized full-color holographic optical reconstruction system.

Fig. 7. Miniaturized full-color holographic optical reconstruction system (actual photo).

3.2 Verification and analysis of the proposed system

2D and 3D images were optically reconstructed. The reconstructed images were a 2D color image of a Rubik's cube with 258 × 263 pixels [19] and a 3D human figure with 478,314 information points (Fig. 8). Because the target images were full-color images with different RGB ratios, the chromatic components of the images were analyzed. Color separation was performed on the images (Fig. 8) to obtain the RGB grayscale images. For the MGSA process, the reconstruction distance and number of iterations were set to 0.5 m and 20, respectively. Thereafter, the MGSA was used to calculate the results and integrate the three POFs into a single POM. The new POM was loaded into the SLM, and the images were reconstructed using the proposed system. Visualization 1 demonstrates the reconstruction of 3D full-color dynamic content.

Fig. 8. Original target images: (a) 2D [19] and (b) 3D. (“Rubiks cube” by RTCNCA, CC BY 3.0. All rights reserved.)

In this study, the relative diffraction efficiency (RDE) [20], root mean square error (RMSE) [21,22], speckle contrast (SC) [23,24], and 3D structural similarity index (SSIM) [25] were used as standards to objectively evaluate the quality of the reconstructed images. The formulae are presented in Eqs. (13)–(16). Figure 9 illustrates the results of image reconstruction. Tables 2 and 3 present the analysis results for the 2D and 3D images, respectively.

Fig. 9. Reconstructed images (a) without light guide vibration and (b) with light guide vibration.


Table 2. Analysis of 2D image reconstruction with and without light guide vibration

$${I_{RDE}} = \frac{{\sum {{I_S}} }}{{\sum {({I_S} + {I_N})} }} \times 100\%$$
$$RMSE = \sqrt {\sum {\frac{{{I_N}^2}}{{\textrm{XY}}}} }$$
$$\textrm{SC} = \frac{{\sqrt {\left\langle {{I^2}} \right\rangle - {{\left\langle I \right\rangle }^2}} }}{{\left\langle I \right\rangle }}$$
$$SSIM(x,y) = {[{l(x,y)} ]^\alpha }{[{c(x,y)} ]^\beta }{[{s(x,y)} ]^\gamma }$$
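Eqs. (13)–(15) can be sketched directly on synthetic arrays. The signal and noise fields below are illustrative, not measured data, and the SSIM of Eq. (16) is omitted since in practice an existing implementation (e.g. scikit-image's) would be used:

```python
import numpy as np

# Hedged sketch of the metrics in Eqs. (13)-(15): I_S is the signal
# intensity, I_N the noise intensity, and X*Y the number of pixels.
def rde(signal, noise):                      # relative diffraction efficiency [%]
    return signal.sum() / (signal + noise).sum() * 100.0

def rmse(noise):                             # root mean square error, Eq. (14)
    return np.sqrt((noise ** 2).sum() / noise.size)

def sc(intensity):                           # speckle contrast, Eq. (15)
    return np.sqrt((intensity ** 2).mean() - intensity.mean() ** 2) / intensity.mean()

rng = np.random.default_rng(1)
I_S = np.full((64, 64), 0.9)                 # illustrative uniform signal
I_N = rng.uniform(0.0, 0.2, size=(64, 64))   # illustrative noise floor

print(round(rde(I_S, I_N), 1), round(rmse(I_N), 3), round(sc(I_S + I_N), 3))
```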

4. Discussion

According to the analysis results presented in Tables 2 and 3, the full-color images had to be reconstructed with all three RGB lasers as reference beams. Consequently, the noise produced was greater than that produced in the reconstruction of monochromatic images. The RDEs in the 2D and 3D image reconstruction processes were both higher than 80%. The RMSE distributions of both reconstructed images were similar to those of the original images, indicating that the reconstructed images had minimal distortion. An excess of information points often produced laser speckle in the images and degraded the SC. The laser speckle also interfered with some of the information points in the images and affected the image quality. The analysis results indicated that the reconstructed images had poor SC and clearly visible laser speckle. Therefore, the light guide was vibrated to disrupt the temporal coherence of the laser, minimize the laser speckle, and induce the time-averaging effect. When the light guide vibrated, the displacement of the inner wall of the light guide caused the angle of incidence to change. The incident light was scattered at different angles from the fixed-point light source and, together with the vibrating light guide, interfered with the wavefront of the laser speckle; the time-averaging effect accumulated as the vibrations continued. The concept of the autocorrelation function involves letting the original waveform superpose itself and equalize the light intensity of the laser speckle. In this study, to avoid affecting the full-color image reconstruction, a linear vibration motor was used to vibrate the light guide up and down at a frequency of 65 Hz. As indicated in Tables 2 and 3, the vibration caused the inner wall of the light guide to produce small, rapid vibrations and slight displacements.
These displacements equalized the laser beams that were totally reflected at different angles in the light guide, and over time the speckle fields superposed, thereby equalizing the laser speckle. This reduced the SC to 5.30% and improved the RDE, RMSE, and SSIM. The results indicate that the vibration disrupted the temporal coherence of the laser in the light guide; the laser speckle was thus reduced, and the quality of the reconstructed image was improved. In addition, as indicated in Tables 2 and 3, the quality of the 2D reconstructed image was superior to that of the 3D reconstructed image. The 3D image showed greater background noise and image distortion, possibly because of defocused information, which caused some information points to be misinterpreted as background stray light, resulting in image distortion or weakened reconstruction signals.

When lasers with different wavelengths underwent reconstructed diffraction in the developed system, they had different diffraction angles and superposed after a certain distance. This characteristic served as the basis of light combination for full-color images. The integration of a single light guide minimized the volume of the system and enabled full-color light combination without compromising the resolution of the panel. In addition, the light guide used multiple total internal reflections to lengthen the optical path of the reconstructed diffraction, thereby eliminating the zero-order light and improving the image quality. The light guide system was compared with an x-cube BS light combination system, in which the RGB POFs were separately input into three SLMs and an x-cube BS was used for light mixing and reconstruction. During image reconstruction, the RGB reconstructed images could not be superposed because of the differing diffraction angles; therefore, image displacement was necessary. To eliminate the zero-order light and superpose the images in the displayed region, the RGB images were moved 232, 275, and 310 pixels along the x-axis, respectively. Consequently, the usable image resolution of the x-cube BS system was limited (Table 4) [26]. By contrast, the light guide system enabled image superposition and reconstruction as soon as the light exited and did not require such correction. The optical path of the reconstructed diffraction was lengthened to eliminate the zero-order light. The simulation results indicated that when accurate angles of incidence were used, the RGB images could be superposed without displacement. However, during the optical reconstruction process, launching the light into the light guide at an exact angle of incidence was difficult. Therefore, slight image modifications were still necessary; the necessary corrections are listed in Table 5.
The loss of panel resolution in the system developed in this study was less than that in the x-cube BS light combination system.


Table 3. Analysis of 3D image reconstruction with and without light guide vibration


Table 4. Image offset in the x-cube BS light combination system [26].


Table 5. Reconstructed image offset in the light guide system

5. Conclusions

Considerable research has been conducted on full-color holographic displays. In this study, a spatial light combination system designed to address problems such as color separation and SLM response time was developed. Multiple problems were encountered during the system design process, the greatest of which were the different image sizes and displacement errors caused by wavelength differences. To overcome these problems, the proposed system incorporated a single light guide for light mixing and reconstruction to minimize the size and complexity of the system. The volume of the system was reduced by 86.6%, and the system achieved full-color light combination without compromising the image resolution. The system used multiple total internal reflections to lengthen the optical paths and eliminate zero-order light, thereby improving the image quality. To overcome the problem of laser speckle, the system integrated a vibrating light guide, which reduced the speckle contrast to 5.30%, further improving the image quality. Overall, this study verifies the principle and practicability of the method; moreover, the image quality evaluation confirms that the proposed system, with a single SLM and a simple optical structure, achieves high-quality dynamic full-color holographic 3D displays.

Funding

National Science and Technology Council, R.O.C. (110-2221-E-011-149, 111-2218-E-011-013-MBK).

Disclosures

The authors declare no conflicts of interest.

Data availability

Because of a pending patent application, the NTUST simulation and experimental data are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Lucente, “Interactive holographic displays: the first 10 years,” in Holography: The First 50 Years (Springer, 2003).

2. J. Xiong, E. L. Hsiang, Z. He, T. Zhan, and S. T. Wu, “Augmented reality and virtual reality displays: emerging technologies and future perspectives,” Light Sci. Appl. 10(1), 1–30 (2021). [CrossRef]  

3. B. Brown and A. Lohmann, “Computer-generated binary holograms,” IBM J Res Dev 13(2), 160–168 (1969). [CrossRef]  

4. D. P. Pi, J. Liu, and Y. T. Wang, “Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display,” Light Sci. Appl 11(1), 1–17 (2022). [CrossRef]  

5. K. Yamamoto, Y. Ichihashi, T. Senoh, R. Oi, and T. Kurita, “Calculating the Fresnel diffraction of light from a shifted and tilted plane,” Opt. Express 20(12), 12949–12958 (2012). [CrossRef]  

6. Y. Matsumoto and Y. Takaki, “Time-multiplexed color image generation by viewing-zone scanning holographic display employing MEMS-SLM,” J. Inf. Disp 25(8), 515–523 (2017). [CrossRef]  

7. M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Color image projection based on Fourier holograms,” Opt. Lett 35(8), 1227–1229 (2010). [CrossRef]  

8. S. F. Lin, D. Wang, Q. H. Wang, and E. S. Kim, “Full-color holographic 3D display system using off-axis color-multiplexed-hologram on single SLM,” Opt. Lasers Eng. 126, 105895 (2020). [CrossRef]  

9. S. A. Benton, “Hologram reconstruction with extended incoherent sources,” J. Opt. Soc. Am. 59, 1545 (1969).

10. T. Kozacki and M. Chlipala, “Color holographic display with white light LED source and single phase only SLM,” Opt. Express 24(3), 2189 (2016). [CrossRef]  

11. S. F. Lin, H. K. Cao, and E. S. Kim, “Single SLM full-color holographic three dimensional video display based on image and frequency-shift multiplexing,” Opt. Express 27(11), 15926 (2019). [CrossRef]  

12. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473 (2014). [CrossRef]  

13. W. Song, X. Li, Y. Zheng, Y. Liu, and Y. Wang, “Full-color retinal-projection near-eye display using a multiplexing-encoding holographic method,” Opt. Express 29(6), 8098–8107 (2021). [CrossRef]  

14. E. Robert, Holographic Nondestructive Testing (Elsevier, 2012).

15. C. Y. Chen, W. C. Li, H. T. Chang, C. H. Chuang, and T. J. Chang, “3-D modified Gerchberg–Saxton algorithm developed for panoramic computer-generated phase-only holographic display,” J. Opt. Soc. Am. B 34(5), B42–B48 (2017). [CrossRef]  

16. V. Yurlov, A. Lapchuk, S. Yun, J. Song, and H. Yang, “Speckle suppression in scanning laser display,” Appl. Opt. 47(2), 179–187 (2008). [CrossRef]

17. V. Yurlov, A. Lapchuk, S. Yun, J. Song, I. Yeo, H. Yang, and S. An, “Speckle suppression in scanning laser displays: aberration and defocusing of the projection system,” Appl. Opt. 48(1), 80–90 (2009). [CrossRef]

18. Q. L. Deng, B. S. Lin, P. J. Wu, K. Y. Chiu, P. L. Fan, and C. Y. Chen, “A hybrid temporal and spatial speckle-suppression method for laser displays,” Opt. Express 21(25), 31062–31071 (2013). [CrossRef]  

19. RTCNCA, “Rubiks cube,” Wikipedia (2007), https://zh.m.wikipedia.org/wiki/File:Rubiks_cube.svg.

20. K. Choi, H. Kim, and B. Lee, “Synthetic phase holograms for auto-stereoscopic image displays using a modified IFTA,” Opt. Express 12(11), 2454–2462 (2004). [CrossRef]  

21. J. P. Liu, W. Y. Hsieh, T. C. Poon, and P. Tsang, “Complex Fresnel hologram display using a single SLM,” Appl. Opt. 50(34), H128–H135 (2011). [CrossRef]

22. C. H. Chuang, C. Y. Chen, H. T. Chang, H. Y. Lin, and C. F. Kuo, “Reducing defocused-information crosstalk to multi-view holography by using multichannel encryption of random phase distribution,” Appl. Sci. 12(3), 1413 (2022). [CrossRef]

23. J. C. Dainty, Laser Speckle and Related Phenomena (Springer Science & Business Media, 2013).

24. F. Riechert, G. Bastian, and U. Lemmer, “Laser speckle reduction via colloidal-dispersion-filled projection screens,” Appl. Opt. 48(19), 3742–3749 (2009). [CrossRef]

25. C. Y. Chen, C. H. Chuang, H. Y. Lin, and D. Y. Zhuo, “Imaging evaluation of computer-generated hologram by using three-dimensional modified structural similarity index,” J. Opt. 24(5), 055702 (2022). [CrossRef]

26. C. Y. Chen, H. T. Chang, T. J. Chang, and C. H. Chuang, “Full-color and less-speckled modified Gerchberg–Saxton algorithm computer-generated hologram floating in a dual-parabolic projection system,” Chin. Opt. Lett. 13(11), 110901 (2015). [CrossRef]

Supplementary Material (1)

Visualization 1: 3D full-color dynamic content

Data availability

Owing to a pending patent application, the simulation and experimental data from NTUST are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (9)

Fig. 1.
Fig. 1. Graphical representation of the retrieved point-based POM.
Fig. 2.
Fig. 2. Flowchart of spatial multiplexing program.
Fig. 3.
Fig. 3. Single SLM full-color holographic display: (a) before and (b) after spatial multiplexing modification.
Fig. 4.
Fig. 4. Single SLM full-color holographic display with spatial multiplexing modification.
Fig. 5.
Fig. 5. Light guide simulation.
Fig. 6.
Fig. 6. Miniaturized full-color holographic optical reconstruction system.
Fig. 7.
Fig. 7. Miniaturized full-color holographic optical reconstruction system (actual photo).
Fig. 8.
Fig. 8. Original target images: (a) 2D [16] and (b) 3D. (“Rubiks cube” by RTCNCA, CC BY 3.0. All rights reserved.)
Fig. 9.
Fig. 9. Reconstructed images (a) without light guide vibration and (b) with light guide vibration.

Tables (5)


Table 1. Diffraction angles of RGB lasers


Table 2. Analysis of 2D image reconstruction with and without light guide vibration


Table 3. Analysis of 3D image reconstruction with and without light guide vibration


Table 4. Image offset in the x-cube BS light combination system [26].


Table 5. Reconstructed image offset in the light guide system

Equations (16)


$$E(\xi,\eta)=\frac{e^{j\frac{2\pi}{\lambda}z}}{j\lambda z}\,e^{j\frac{\pi}{\lambda z}(\xi^{2}+\eta^{2})}\iint\left[E(x,y)\,e^{j\frac{\pi}{\lambda z}(x^{2}+y^{2})}\right]e^{-j\frac{2\pi}{\lambda z}(x\xi+y\eta)}\,dx\,dy\tag{1}$$
$$\xi=\frac{\lambda z}{M_{x}};\quad \eta=\frac{\lambda z}{M_{y}}\tag{2}$$
$$\xi_{R}=\frac{\lambda_{R}z}{M_{x}};\quad \xi_{G}=\frac{\lambda_{G}z}{M_{x}};\quad \xi_{B}=\frac{\lambda_{B}z}{M_{x}}\tag{3}$$
$$\xi_{R}:\xi_{G}:\xi_{B}=\frac{\lambda_{R}z}{M_{x}}:\frac{\lambda_{G}z}{M_{x}}:\frac{\lambda_{B}z}{M_{x}}=\lambda_{R}:\lambda_{G}:\lambda_{B}=632:532:473\tag{4}$$
$$\theta_{D}=\frac{n\lambda}{d}\tag{5}$$
$$\mathrm{FrT}\{\exp[j\psi_{z_{n}}(x_{0},y_{0})];\lambda;z_{n}\}=\hat{g}_{z_{n}}(x,y)\exp[j\psi_{\hat{g}_{z_{n}}}(x,y)]\tag{6}$$
$$\mathrm{FrT}\{\exp[j\psi_{z_{n}}(x_{0},y_{0})];\lambda;z_{n}\}=\hat{g}_{z_{n}}(x-\mu_{n},y-\upsilon_{n})\exp[j\phi(x,y)]\tag{7}$$
$$H(x_{1},y_{1})=\exp\left\{j\sum_{n=1}^{S}\psi_{z_{n}}(x_{n},y_{n})\right\}\tag{8}$$
$$n_{1}\sin\theta_{1}=n_{2}\sin\theta_{2}\tag{9}$$
$$90^{\circ}-\sin^{-1}\left[\frac{1}{n}\sin(\theta_{2})\right]>\theta_{C}\tag{10}$$
$$C=\frac{\sigma_{S}}{\langle I\rangle}\tag{11}$$
$$\sigma_{S}^{2}(T)=\frac{1}{T}\int_{0}^{T}C_{\tau}(\tau)\,d\tau\tag{12}$$
$$\mathrm{IRDE}=\frac{I_{S}}{I_{S}+I_{N}}\times 100\%\tag{13}$$
$$\mathrm{RMSE}=\sqrt{\frac{\sum I_{N}^{2}}{XY}}\tag{14}$$
$$\mathrm{SC}=\frac{\sqrt{\langle I^{2}\rangle-\langle I\rangle^{2}}}{\langle I\rangle}\tag{15}$$
$$\mathrm{SSIM}(x,y)=[l(x,y)]^{\alpha}\,[c(x,y)]^{\beta}\,[s(x,y)]^{\gamma}\tag{16}$$