
Grayscale performance enhancement for time-multiplexing light field rendering


Abstract

One of the common approaches to compensating for the grayscale limitation of time-multiplexing light field displays is to employ a halftone technique. We propose an ordered-dithering halftone algorithm based on a 3-dimensional super-mask to increase the number of gray levels of a time-multiplexing light field display. Our method makes full use of the overlapping perceived pixels caused by the time-multiplexing design, effectively trading off spatial resolution against color performance. A real-time rendering time-multiplexing display prototype is built to validate the proposed halftone algorithm. We conducted a user study to evaluate the quality of display scenes dithered by different super-mask configurations, and the results are consistent with the pre-calculated parameters. The 3D ordered-dithering algorithm presents better visual perception than conventional halftone algorithms with respect to grayscale representation, and is flexible enough to be applied in different time-multiplexing light field display systems.

© 2015 Optical Society of America

1. Introduction

Three-dimensional (3D) display has been an attractive research field for years. Numerous techniques have been developed to create displays with an ideal 3D effect, including autostereoscopic [1], integral [2], volumetric [3], light field [4] and holographic [5] displays. Among these, light field displays are promising for achieving occlusion-correct, dynamic and panoramically viewable 3D performance [6,7]. However, even after decades of development, the performance of current light field displays is far from sufficient for providing a natural 3D visual effect. The key challenge is the limited data bandwidth of current display hardware.

Researchers have proposed alternative techniques to construct light field displays under the current hardware limitations, using either temporal interlacing or spatial interlacing. With respect to spatial interlacing, or space-multiplexing, Lee [8], Zhong [9] and Jones [10] have built large-scale 3D display systems by assembling multiple projectors. These systems present vivid 3D scenes with a large viewing angle, but the multi-projector configuration makes the prototypes complex and requires considerable calibration effort [11]. With respect to temporal interlacing, or time-multiplexing, Jones [6], Yan [12], Xia [7] and Takaki [13] have built 3D display systems by combining a high-speed projector with a revolving directional diffuser screen. The crucial components in these systems, the high-speed projectors, are usually powered by digital micromirror devices (DMDs) used as high-speed spatial light modulators. Because state-of-the-art DMDs can display only low-bit-depth grayscale images or purely binary images, the grayscale reconstruction performance of existing methods is limited.

Consequently, processing of the projector light source and processing of the binary images are the two main routes considered for improving grayscale representation. On the one hand, Takaki [14] modulated the illumination intensity of the light sources to improve grayscale performance, at the cost of a reduced average illumination. On the other hand, conventional halftone algorithms, originally used to represent grayscale images with 2-dimensional high-density binary images, have been widely applied to the binarization of 3D displays [15], including computer-generated holograms fabricated by laser techniques [16] and time-multiplexing light field displays configured with monochrome DMDs [17]. Jones used the Ostromoukhov error-diffusion [18] and ordered-dithering halftone algorithms [19] to create the binary images [6,20], and Matsumoto [21] introduced the Floyd-Steinberg error-diffusion halftone technique [22] to increase the grayscale of the reconstructed images. However, these methods focus on processing the individual projection image, thereby ignoring the overlapping pixels of the time-multiplexing system caused by the extended scanning line, which could be exploited to improve the grayscale representation more naturally.

In this study, inspired by conventional halftone algorithms and the multiplexing principle, a 3-dimensional super-mask is proposed to ordered-dither the consecutive projection images both spatially and temporally, which further improves the grayscale performance of the 3D display compared with state-of-the-art solutions [6,20,21]. Unlike error diffusion, the ordered-dithering algorithm preserves edge information and can easily be parallelized in GPU shaders for acceleration. Our main technical contributions are as follows: 1) We theoretically analyze the spatial light field reconstruction of time-multiplexing techniques, in particular the definitions of planar resolution and angular resolution. 2) We propose an ordered-dithering framework employing a 3D super-mask for general time-multiplexing light field displays. 3) We build a real-time rendering time-multiplexing light field display prototype to validate the proposed ordered-dithering algorithm.

2. Spatial light field in time-multiplexing light field display

2.1 Time-multiplexing light field display principle

With respect to temporal interlacing, or time-multiplexing, a horizontal-parallax-only light field display is achieved by a horizontal scanning technique. Derived from several reported light field display prototypes using time-multiplexing methods [6,7,12–14,17,20,21,23], a general system schematic is summarized in Fig. 1, which mainly comprises a high-speed projector and a revolving anisotropic diffuser screen. The high-speed projector is assembled from a spatial light modulator (SLM) and an LED or laser-diode light source. The anisotropic screen diffuses light horizontally within a very small angular range while diffusing it vertically over a large angle. The screen, which can be either reflective [Fig. 1(a)] or transmissive [Fig. 1(b)], is usually mounted at a tilted angle [Fig. 1(a)] or designed with an extra optical structure [Fig. 1(b)] so that the diffused light rays are directed toward the observers. With respect to a viewpoint at a fixed position, the revolving anisotropic screen presents a narrow scanning light strip sweeping across the screen.

Fig. 1 Schematic of time-multiplexing light field displays: (a) with a tilted reflective screen; (b) with a flat microstructured transmissive screen.

The main process of a scanning light field display can be described as follows: the scanning line rebuilds the directional (angular) information of the light field rays by deflecting and diffusing the light rays directionally, while the corresponding pixel information rebuilds the color information of the light field rays by projecting images onto the anisotropic screen. By projecting synthesized images generated via the mapping relationship described in [6,9], the light field is rendered with correct perspective. To achieve a stable light field, all projection images need to be synchronized with the high-speed revolving screen so as to satisfy the persistence of vision.

Although the DMD enables high-speed updating of the projection images, its inherently binary characteristic results in a loss of grayscale information. Using a conventional halftone algorithm, a 2D binary image can be perceived as a grey image. The challenge lies in how to effectively generate such grey images for DMD display under the constraint of limited data bandwidth. We therefore consider it worthwhile to examine the spatial resolution of the time-multiplexing light field display in order to enhance the displayed gray levels.

2.2 Analysis on spatial light field resolution

The perceived spatial resolution of a time-multiplexing light field display can be decomposed into the planar resolution and the angular resolution, which are detailed as follows.

2.2.1 Planar resolution

The planar resolution is defined with respect to the static projection image, as the number of perceived pixels per millimeter. As illustrated in Fig. 2, assume that an image with a resolution of w_r × h_r is projected onto a screen area of size w_s × h_s (in mm), so that w_r/w_s pixels are displayed per millimeter. Further considering the resolution limit of the human eye at a viewing distance l, the minimum distance between two distinguishable points is l × δ, where δ denotes the eye's minimum angle of resolution. If w_r/w_s is large enough, not all displayed pixels can be distinguished by the viewer, and some of them are perceived as appearing at the same location. In other words, the displayed pixels overlap perceptually, and the number of overlapped pixels (denoted by p) can be expressed as Eq. (1).

$$ p = \frac{w_r\, l\, \delta}{w_s} \tag{1} $$
Since all the reconstructed light field rays are emitted from the screen, the distinguishable pixels still produce a sharp display scene. We therefore refer to this process as the “beneficial overlapping of perceived pixels”, because it introduces no perceptible blur, and the overlapping pixels can further be exploited to enhance the gray levels using the reported conventional halftone methods [6,20,21].
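As a quick illustration of Eq. (1), the sketch below evaluates p for a hypothetical configuration; all parameter values here are placeholders chosen for demonstration and are not taken from any particular prototype.

```python
import math

def overlapped_pixels(w_r, w_s_mm, l_mm, delta_rad):
    """Eq. (1): number of displayed pixels perceived at one location.

    w_r       -- horizontal image resolution in pixels
    w_s_mm    -- horizontal size of the projected screen area in mm
    l_mm      -- viewing distance in mm
    delta_rad -- minimum angle of resolution of the eye in radians
    """
    # pixels per millimetre on the screen, times the smallest
    # resolvable spacing (l * delta) at the viewing distance
    return (w_r / w_s_mm) * l_mm * delta_rad

# Hypothetical demonstration values (not the paper's prototype parameters).
delta = math.radians(60.0 / 3600.0)   # assume a 60-arcsecond eye resolution
p = overlapped_pixels(w_r=1920, w_s_mm=300.0, l_mm=1500.0, delta_rad=delta)
print(f"p = {p:.2f}")                 # rounding p gives the spatial mask size m (Eq. (4))
```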

Fig. 2 Schematic diagram of the planar overlapping of perceived pixels; some displayed pixels are perceived at the same location because of the human eye's resolution limit.

2.2.2 Angular resolution

The angular resolution is defined with respect to the time-multiplexing process, as the number of perspectives (or projection images) per radian. As illustrated in Fig. 3, when the revolving screen is illuminated by the projector, a lit scanning line is perceived sweeping across the screen.

Fig. 3 Schematic diagram of the temporal overlapping of perceived pixels. With a thin scanning line, as in (a) and (b), the consecutive projection images contain the perspective information and the reconstructed light field rays are directed to the viewers without crosstalk. With an extended scanning line, as in (c) and (d), the light field rays that originally represent V_{n-1} and V_{n+1} are directed to V_n; the temporal overlapping thus introduces crosstalk and decreases the angular resolution.

Ideally, the scanning line is expected to be thinner than one pixel, so that each screen point observed from the viewpoint carries the correct value. As illustrated in Figs. 3(a) and 3(b), for a viewpoint located at V_n, once a point on the screen is swept by the scanning line and its corresponding image information is loaded synchronously, the light ray targeting the relevant perspective is reconstructed. Benefiting from the screen's narrow horizontal diffusing angle, a fixed screen point A illuminated by perspective-correct images can represent valid angular information through the scanning method. Assuming that the number of projection images per revolution is N, a viewer perceives N different perspectives without crosstalk when observing the reconstructed 3D scene over 360 degrees. The angular resolution is therefore N/2π.

However, the real scanning line suffers from an extension in width, as illustrated in Figs. 3(c) and 3(d), which depends on the horizontal diffusing angle of the screen and the pupil size of the projector lens. Consequently, the light field information that originally represents the neighboring perspectives (V_{n-1} and V_{n+1}) is improperly directed to the viewpoint V_n, and the angular resolution decreases. In other words, when a fixed screen point is swept by the extended scanning line, the perceived light field results from the overlap of several consecutive projection images.

Mathematically, the extension of the scanning line can be described by a distribution function G(x), e.g., a Gaussian function, and the perceived value equals the intensity summation accumulated while a point is swept by the scanning line. As illustrated in Figs. 3(c) and 3(d), assume that pixel A's information is correctly reconstructed at the moment k = 0 and that the corresponding image is changed each time the scanning line sweeps over a distance d. The extended scanning line leads to the overlapping of several pixels' information; therefore, the perceived information for point A is no longer I_0^A but the intensity summation I_A expressed in Eq. (2), where I_A and I_k^A denote the summation and the k-th image information for point A, respectively.

$$ I_A = \sum_{k=-N/2}^{N/2} \left( I_k^A \int_{kd-0.5d}^{kd+0.5d} G(x)\, \mathrm{d}x \right) \tag{2} $$
According to Eq. (2), the summation deviates from the expected result whenever I_k^A differs from I_0^A. We therefore refer to this process as the “error overlapping of perceived pixels”, because it inevitably introduces blur into the 3D display. According to the mapping algorithm, reconstructed pixels located farther from the screen suffer a larger deviation and therefore exhibit lower angular resolution and more blur. Although the extended scanning line inevitably reduces sharpness, we believe this overlapping property can be used to enhance the grayscale performance.
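To make the overlap model concrete, the following sketch numerically evaluates Eq. (2) for a single screen point. The Gaussian width and the per-frame values I_k^A used here are illustrative assumptions, not measured data, and the function names are ours.

```python
import numpy as np
from scipy import integrate

def perceived_intensity(I_k, d, G, k_range):
    """Eq. (2): Gaussian-weighted sum of the per-frame values of one point.

    I_k     -- dict mapping frame offset k to that frame's value I_k^A
    d       -- scanning-line travel between consecutive frames (mm)
    G       -- line-spread function of the extended scanning line
    k_range -- iterable of frame offsets to sum over
    """
    total = 0.0
    for k in k_range:
        # weight of frame k: integral of G over the strip it illuminates
        w, _ = integrate.quad(G, k * d - 0.5 * d, k * d + 0.5 * d)
        total += I_k.get(k, 0.0) * w
    return total

# Illustrative numbers only: a unit-height Gaussian line spread and a point
# whose value differs between neighbouring frames (hence "error overlapping").
G = lambda x: np.exp(-x**2 / 8.0)      # assumed width, not a measurement
I_k = {-1: 0.0, 0: 1.0, 1: 1.0}        # hypothetical per-frame values
print(perceived_intensity(I_k, d=2.0, G=G, k_range=range(-2, 3)))
```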

3. Grayscale performance enhancement via spatial and temporal dithering

3.1 3D super-mask

For an ordered-dithering halftone algorithm, the capability of grayscale representation is related to the size of the dithering mask. To enhance grayscale while avoiding noticeable artifacts, conventional methods need to both increase the image resolution and enlarge the dithering mask. However, a practical display system is confined by its display resolution, so a large conventional 2D dithering mask introduces severe artifacts while contributing little to the display performance. Fortunately, the time-multiplexing light field display inherently provides an extra dithering dimension in time. Based on the analysis of the planar and angular resolution, a 3D super-mask of size m × m × t is proposed to ordered-dither a series of t consecutive projected binary images, achieving an enhanced grayscale of m × m × t + 1 gray levels without artifacts. The super-mask dithering combines dithering in the spatial and temporal domains, making full use of the “beneficial overlapping of perceived pixels” and the “error overlapping of perceived pixels” discussed above. This method achieves better performance in a time-multiplexing display system even though it is limited by the projection image number and resolution.

We further define the super-mask as a 3-dimensional matrix M[m,m,t] = [M_1[m,m], …, M_t[m,m]]. The values in the matrix are assigned randomly from 1 to m × m × t to avoid artificial textures, and the threshold values are then multiplied by 1/(m × m × t + 1) for normalization. Therefore, m × m × t + 1 gray levels are obtained. This process is expressed as Eq. (3), where Ξ_raw(i, j, k) and Ξ_dithered(i, j, k) indicate the grayscale of the pixel located at the i-th row, j-th column and k-th frame of the original normalized grey images and of the ordered-dithered binary projection images, respectively.

$$ \Xi_{\mathrm{dithered}}(i,j,k) = \begin{cases} 0, & \Xi_{\mathrm{raw}}(i,j,k) < M(i \,\%\, m,\ j \,\%\, m,\ k \,\%\, t) \\ 1, & \Xi_{\mathrm{raw}}(i,j,k) \ge M(i \,\%\, m,\ j \,\%\, m,\ k \,\%\, t) \end{cases} \tag{3} $$
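As a minimal sketch of this procedure, the NumPy snippet below builds a randomized m × m × t super-mask and applies the thresholding of Eq. (3) to a stack of normalized gray frames. The prototype performs such processing in GPU shaders (Section 4.1); this CPU version and its function names are ours, for illustration only.

```python
import numpy as np

def make_super_mask(m, t, seed=0):
    """Build an m x m x t super-mask with thresholds in (0, 1).

    The values 1..m*m*t are placed in random order (to avoid artificial
    textures) and normalized by 1/(m*m*t + 1), as described in Section 3.1.
    """
    rng = np.random.default_rng(seed)
    levels = rng.permutation(m * m * t) + 1          # 1 .. m*m*t, shuffled
    return levels.reshape(m, m, t) / (m * m * t + 1)

def dither_sequence(frames, mask):
    """Eq. (3): binarize consecutive normalized gray frames.

    frames -- array of shape (T, H, W) with values in [0, 1]
    mask   -- super-mask of shape (m, m, t)
    """
    m, _, t = mask.shape
    T, H, W = frames.shape
    i = np.arange(H)[:, None] % m                    # row index modulo m
    j = np.arange(W)[None, :] % m                    # column index modulo m
    out = np.empty_like(frames, dtype=np.uint8)
    for k in range(T):
        threshold = mask[i, j, k % t]                # tile the mask in x, y and t
        out[k] = (frames[k] >= threshold).astype(np.uint8)
    return out

# Example: dither 8 random gray frames with a 3 x 3 x 4 super-mask.
mask = make_super_mask(m=3, t=4)
frames = np.random.default_rng(1).random((8, 768, 1024))
binary = dither_sequence(frames, mask)
```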
The crucial procedure for achieving a natural grayscale representation is to set the proper size of the 3D super-mask, i.e., the values of m and t, which is detailed as follows.

3.2 Dithering in the spatial domain

When an m × m ordered-dithering mask traverses a grayscale image, each pixel is compared with the corresponding value in the mask. Pixels with larger values are set to white and the others to black, so that m × m + 1 gray levels are achieved. As discussed under the “beneficial overlapping of perceived pixels” and indicated in Eq. (1), the dithering does not lead to additional artifacts as long as m is configured smaller than p. The acceptable value of the parameter m can be expressed as Eq. (4).

$$ m = \mathrm{round}(p) = \mathrm{round}\!\left(\frac{w_r\, l\, \delta}{w_s}\right) \tag{4} $$
Note that an overly large m results in severe granulation and artifacts; we demonstrate this point in the experimental results.

3.3 Dithering in the temporal domain

Similarly, assume that t consecutive projection images are used for dithering. As discussed under the “error overlapping of perceived pixels” and indicated in Eq. (2), the overlapped result for point A, expressed as I_A(t), is adapted as Eq. (5). Since the pixels are binary, I_k^A is either 0 or 1, and it can be set to 1 in general.

$$ I_A(t) = \sum_{k=-\mathrm{ceil}\left(\frac{t-1}{2}\right)}^{\mathrm{ceil}\left(\frac{t-1}{2}\right)} \left( I_k^A \int_{kd-0.5d}^{kd+0.5d} G(x)\, \mathrm{d}x \right) = \begin{cases} \displaystyle\int_{-0.5td}^{0.5td} G(x)\, \mathrm{d}x, & t \text{ is odd} \\[2ex] \displaystyle\int_{-0.5(t-1)d}^{0.5(t+1)d} G(x)\, \mathrm{d}x, & t \text{ is even} \end{cases} \tag{5} $$
The dithering result is thus determined by the integral of the illumination intensity. If the illumination contributed by an overlapping image is not large enough, its pixel information will not be rendered distinctly, which may cause an obvious light-and-shade effect in the 3D scene. In other words, among the t consecutive projection images used for dithering, the t-th overlapping image may contribute little to the final temporal dithering. Its illumination intensity needs to make up a certain fraction of the total overlapping intensity, and the value of t is chosen as the acceptable trade-off. This criterion is expressed as Eq. (6), where the threshold ε is set according to visual characteristics. Generally, a smaller ε leads to a larger t, which may cause unnatural overlapping and color non-uniformity.
$$ \frac{I_A(t) - I_A(t-1)}{I_A(t)} > \varepsilon \tag{6} $$
Given the specific parameters (as in the experiment in Section 4.2) and the equations above, the acceptable values of m and t, and hence the configuration of the super-mask, are obtained; a numerical sketch of this selection is given below. Super-masks with different configurations are also applied in the experiment to validate the 3D dithering algorithm.
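To make the choice of t concrete, a minimal numerical sketch of Eqs. (5) and (6) follows. It assumes SciPy is available and, for demonstration only, uses a Gaussian comparable to the fit reported in Section 4.2 (G(x) = exp(−0.0277x²)) with d = 2 mm and ε = 0.2; the helper names are ours, and the resulting cutoff depends on the measured line spread and the chosen ε.

```python
import numpy as np
from scipy import integrate

def I_A(t, G, d):
    """Eq. (5): total overlapped intensity when t binary frames contribute."""
    if t <= 0:
        return 0.0
    if t % 2 == 1:                        # odd t: symmetric integration window
        lo, hi = -0.5 * t * d, 0.5 * t * d
    else:                                 # even t: window shifted by half a step
        lo, hi = -0.5 * (t - 1) * d, 0.5 * (t + 1) * d
    value, _ = integrate.quad(G, lo, hi)
    return value

# Gaussian comparable to the fit in Section 4.2, with d = 2 mm and eps = 0.2.
G = lambda x: np.exp(-0.0277 * x**2)
d, eps = 2.0, 0.2

# Eq. (6): the acceptable t is the largest value whose marginal contribution
# (I_A(t) - I_A(t-1)) / I_A(t) still exceeds eps.
for t in range(2, 7):
    ratio = (I_A(t, G, d) - I_A(t - 1, G, d)) / I_A(t, G, d)
    print(f"t = {t}: marginal contribution = {ratio:.3f}")
```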

4. Experimental validation and discussion

4.1 Prototype specification

A real-time rendering display prototype is developed to validate the proposed dithering framework. The prototype configuration is illustrated in Fig. 4(a), and its schematic diagram is as illustrated in Fig. 1(b). A three-chip DMD-based color high-speed projector with RGB LEDs is implemented as the light field rendering engine. The SLM used in the high-speed projector is a Discovery 4100 (Texas Instruments), which displays up to 20,000 single-bit frames per second at a resolution of 1024 × 768. In the projector, three DMD chips display the three color channels synchronously, and the projection images are ordered-dithered by the super-mask. The customized transmissive screen consists of an off-axis linear Fresnel lens and a directional diffuser screen. The off-axis linear Fresnel lens deflects the incident light toward the surrounding observers, and the directional diffuser screen diffuses light over 60 degrees vertically and 1 degree horizontally. The screen (R = 80 mm) is mounted at the top of a hollow column, and the projector is located at the bottom center of the column. The equivalent optical distance between the high-speed projector and the screen, which is also the height of the hollow column, is 200 mm.

Fig. 4 (a) Photograph and configuration of the experimental system. (b) Captured picture and Gaussian fit of the extended scanning line.

Driven by a servo motor, the column revolves together with the screen. We use three graphics cards (NVIDIA GeForce GTX 760) with a total of six DVI outputs. All drawing and rendering operations are performed on the GPU in real time. The computed 3D model is transformed into a series of projection images using the mapping relationship described in [6,9]. The series of projection images is encoded into a high-resolution image that is conveniently rendered on the graphics cards. For acceleration, the processing is implemented in GPU vertex and fragment shaders, after which the three-channel image data are transmitted to the corresponding FPGAs over dual-link DVI cables. In the FPGA circuits, the encoded image is decoded back into the image series and distributed to the D4100 control boards that drive the DMD chips. Considering the trade-off between angular resolution and the limited data bandwidth of the display configuration, the number of projection images (1024 × 768) per revolution is chosen as 600 (N = 600), and the rotation speed of the screen is 1200 rpm, giving a refresh frequency of 20 Hz. For dynamic scenes, the frame rate is about 20 fps, depending on the complexity of the 3D model.
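The exact packing used by the prototype's encoder is not specified here; as a rough sketch of the idea, the snippet below tiles a sequence of binary projection frames into one large image and recovers them again, mimicking the encode/decode round trip between the GPU and the FPGA. The grid layout, frame count and function names are assumptions for illustration only.

```python
import numpy as np

def encode_atlas(frames, cols):
    """Tile (T, H, W) binary frames row-major into one large image."""
    T, H, W = frames.shape
    rows = int(np.ceil(T / cols))
    atlas = np.zeros((rows * H, cols * W), dtype=frames.dtype)
    for k in range(T):
        r, c = divmod(k, cols)
        atlas[r * H:(r + 1) * H, c * W:(c + 1) * W] = frames[k]
    return atlas

def decode_atlas(atlas, T, H, W, cols):
    """Inverse of encode_atlas: recover the T individual frames (FPGA side)."""
    frames = np.empty((T, H, W), dtype=atlas.dtype)
    for k in range(T):
        r, c = divmod(k, cols)
        frames[k] = atlas[r * H:(r + 1) * H, c * W:(c + 1) * W]
    return frames

# Illustrative only: 16 binary frames of 768 x 1024 packed 4-wide.
frames = (np.random.default_rng(0).random((16, 768, 1024)) > 0.5).astype(np.uint8)
atlas = encode_atlas(frames, cols=4)
assert np.array_equal(decode_atlas(atlas, 16, 768, 1024, cols=4), frames)
```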

4.2 Display results

Given the system parameters and the 3D ordered-dithering algorithm described above, the size of the super-mask matrix can be determined. Since an image with a resolution of 1024 × 768 is projected onto a screen with a radius of 80 mm, we have w_r = 768 pixels and w_s = 160 mm. Because the system supports only horizontal parallax, the projection images are designed for viewers at fixed locations, with the viewpoint's horizontal distance (from the spinning axis) and height (from the screen) set to 700 mm and 700 mm, respectively. Note that the viewpoint position is flexible, subject to different optimal super-mask configurations; we assume a viewing distance l of 990 mm in the experiment. Considering the optical characteristics of the projector lens and the defocusing error in the experimental system, and assuming the human eye's minimum angle of resolution δ = 90″, we set the acceptable value of m to 3 according to Eq. (4).

The optical performance of the customized screen used in our prototype is measured, and the extended scanning line is illustrated in Fig. 4(b). Its horizontal illumination intensity distribution is approximately Gaussian and is fitted as G(x) = exp(−0.0277x²). According to the system parameters and the viewpoint location, the corresponding image is changed each time the scanning line sweeps over 2 mm, so d = 2. With ε set to 20%, the acceptable value of t is 4 according to Eq. (6).

Through these approximations, in particular the estimation of δ and ε, the super-mask parameters are obtained as m = 3 and t = 4. The elements in the super-mask should be arranged randomly, as discussed in Section 3.1; otherwise, a super-mask with regularly aligned elements causes obvious flicker.

As some parameters are set subjectively to some extent, it is essential to construct a test 3D model and compare super-masks with different configurations to validate whether the assumptions are reasonable. The comparison between the target scene and the real reconstructed scene is illustrated in Fig. 5. We observed that a large value of m led to severe granulation, while a large value of t led to non-uniform grayscale performance. To objectively evaluate the predicted 3D super-mask configuration, a proper evaluation method is indispensable.

Fig. 5 Comparisons between the target scenes and the reconstructed scenes dithered by different super-masks.

4.3 Evaluation via user study

Conventional 2D image evaluation metrics such as PSNR and SSIM [24] fail to quantitatively evaluate the proposed 3D display, because 1) a regular snapshot captured from the time-multiplexing display is inaccurate owing to unexpected misalignment effects, and 2) the quality perception of the human visual system is complicated and subjective, especially for virtual volume pixels reconstructed in the air [25]. Accordingly, we conducted a small-scale user study to evaluate the display performance intuitively. Twenty volunteers participated and evaluated 16 displayed scenes dithered by different super-masks. The users were asked to rank the displayed scenes from 1 (the worst) to 16 (the best). The ranking criteria included gray uniformity, fidelity of colors and details, and granulation. The evaluation results for super-masks of different sizes, with error bars, are illustrated in Fig. 6. The configuration m = 3, t = 4 obtained the highest average score and the smallest standard deviation, meaning that most participants gave it the highest score, and it matched the pre-calculated parameters. In fact, most users ranked the configurations with m = 3 or 4 and t = 3 or 4 as the top four, balancing grayscale representation against granulation. The feedback also showed that increasing the mask size m beyond 3 increased the granulation, and the resulting sharp noisy points significantly degraded the grayscale performance, leading to low ratings. Meanwhile, applying dithering in the temporal domain increased the score at first, yet severely decreased gray uniformity when t exceeded 4, so the participants gave low scores to all configurations with t = 5.

Fig. 6 User study evaluation result of the reconstructed 3D scene with different super-masks’ parameters (the higher score indicates better performance).

4.4 Comparisons with state-of-the-art halftone algorithms

Figure 7 illustrates the reconstructed 3D scenes using different halftone methods: error diffusion, 2D ordered dithering with the Bayer mask, and the proposed 3D ordered dithering. The error-diffusion method produced a natural grayscale gradient, but it was not robust enough to maintain sharp edge information, resulting in a blurry reconstructed scene; in addition, the accumulated error tended to produce unnatural noisy points [Fig. 7(a)]. The conventional 2D ordered dithering with the Bayer mask [26] displayed a delicate 3D scene, but the grayscale representation was still limited and the granulation was severe in some areas [Fig. 7(b)]. By contrast, the proposed 3D ordered-dithering method achieved good overall quality in terms of both 3D display resolution and grayscale representation [Fig. 7(c)].

Fig. 7 Comparison of the reconstructed 3D scenes with different state-of-the-art halftone algorithms.
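For reference, a minimal sketch of the 2D ordered-dithering baseline with the Bayer mask [26] is shown below; it uses the standard 4 × 4 Bayer index matrix and, unlike the proposed 3D super-mask, dithers each projection image independently with no temporal component.

```python
import numpy as np

# Standard 4 x 4 Bayer index matrix; thresholds are (index + 0.5) / 16.
BAYER_4x4 = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]])
THRESHOLD = (BAYER_4x4 + 0.5) / 16.0

def bayer_dither(frame):
    """2D ordered dithering of one normalized gray frame (values in [0, 1])."""
    H, W = frame.shape
    i = np.arange(H)[:, None] % 4
    j = np.arange(W)[None, :] % 4
    return (frame >= THRESHOLD[i, j]).astype(np.uint8)

# Each projection image is dithered independently; no gray levels are gained
# from the temporal overlap of consecutive frames, unlike the 3D super-mask.
binary = bayer_dither(np.random.default_rng(2).random((768, 1024)))
```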

As discussed above, the major obstacle to a high-quality light field display is the limited data bandwidth of the time-multiplexing display hardware, including the projection image number and resolution. With regard to grayscale-enhanced light field reconstruction, although a widely extended scanning line allows a larger value of t, it simultaneously deteriorates the clarity of the reconstructed 3D scene, especially for content reconstructed far from the screen. Specifically, dithering in the temporal domain reconstructs a 3D scene with more gray levels, but it is a trade-off that exploits the “error overlapping of perceived pixels” and cannot improve the spatial resolution. As a result, to enhance both the grayscale and the spatial resolution of a time-multiplexing display, it is important to achieve a thin scanning line and to improve the projection image resolution so as to strengthen the “beneficial overlapping of perceived pixels”. We consider this an interesting direction for future work.

5. Conclusions

In this paper, we investigated an ordered-dithering halftone algorithm based on a 3-dimensional super-mask to enhance the grayscale performance of general time-multiplexing light field displays. Specifically, we detailed the design and configuration of dithering in the spatial and temporal domains. The extension from a 2D dithering mask to a 3D super-mask provides more flexibility to trade off spatial visible resolution against grayscale performance. A real-time rendering prototype was designed to validate the proposed halftone algorithm, and a user study was conducted to evaluate the quality of display scenes dithered by different super-mask configurations. The comparison with selected state-of-the-art algorithms also supports our claim that the 3D ordered-dithering algorithm performs better in grayscale presentation than conventional halftone strategies, and is promising for application in various time-multiplexing light field display prototypes.

Acknowledgments

This work is supported by the National Basic Research Program of China (973 Program) (No. 2013CB328802), the National High Technology Research and Development Program of China (863 Program) (No. 2012AA011902) and the National Natural Science Foundation of China (No. 61177015).

References and links

1. N. A. Dodgson, “Autostereoscopic 3D displays,” Computer 38(8), 31–36 (2005).

2. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).

3. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38(8), 37–44 (2005).

4. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 31–42.

5. T. Kreis, P. Aswendt, and R. Hofling, “Hologram reconstruction using a digital micromirror device,” Opt. Eng. 40(6), 926–933 (2001).

6. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Trans. Graph. 26(3), 40 (2007).

7. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, “A 360-degree floating 3D display based on light field regeneration,” Opt. Express 21(9), 11237–11247 (2013).

8. J. Lee, J. Park, D. Nam, S. Choi, D. Park, and C. Kim, “32.1: Optimal projector configuration design for 300-Mpixel light-field 3D display,” SID Int. Symp. Dig. Tec. 44(1), 400–403 (2013).

9. Q. Zhong, Y. Peng, H. Li, C. Su, W. Shen, and X. Liu, “Multiview and light-field reconstruction algorithms for 360 degree multiple-projector-type 3D display,” Appl. Opt. 52(19), 4419–4425 (2013).

10. A. Jones, K. Nagano, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, “Interpolating vertical parallax for an autostereoscopic three-dimensional projector array,” J. Electron. Imaging 23(1), 011005 (2014).

11. B. Chen, Q. Zhong, H. Li, X. Liu, and H. Xu, “Automatic geometrical calibration for multiprojector-type light field three-dimensional display,” Opt. Eng. 53(7), 073107 (2014).

12. C. Yan, X. Liu, H. Li, X. Xia, H. Lu, and W. Zheng, “Color three-dimensional display with omnidirectional view based on a light-emitting diode projector,” Appl. Opt. 48(22), 4490–4495 (2009).

13. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20(8), 8848–8861 (2012).

14. Y. Takaki, M. Yokouchi, and N. Okada, “Improvement of grayscale representation of the horizontally scanning holographic display,” Opt. Express 18(24), 24926–24936 (2010).

15. O. Veryovka and J. Buchanan, “Comprehensive halftoning of 3D scenes,” Comput. Graph. Forum 18(3), 13–22 (1999).

16. S. Weissbach, F. Wyrowski, and O. Bryngdahl, “Quantization noise in pulse density modulated holograms,” Opt. Commun. 67(3), 167–171 (1988).

17. C. Su, X. Xia, H. Li, X. Liu, C. Kuang, J. Xia, and B. Wang, “A penetrable interactive 3D display based on motion recognition,” Chin. Opt. Lett. 12(6), 060007 (2014).

18. V. Ostromoukhov, “A simple and efficient error-diffusion algorithm,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (ACM, 2001), pp. 567–572.

19. C. N. Judice, J. F. Jarvis, and W. H. Ninke, “Using ordered dither to display continuous tone pictures on an AC plasma panel,” SID Int. Symp. Dig. Tec. 15(4), 161–169 (1974).

20. A. Jones, M. Lang, G. Fyffe, X. Yu, J. Busch, I. McDowall, M. Bolas, and P. Debevec, “Achieving eye contact in a one-to-many 3D video teleconferencing system,” ACM Trans. Graph. 28(3), 64 (2009).

21. Y. Matsumoto and Y. Takaki, “Improvement of gray-scale representation of horizontally scanning holographic display using error diffusion,” Opt. Lett. 39(12), 3433–3436 (2014).

22. R. W. Floyd and L. Steinberg, “An adaptive algorithm for spatial grey scale,” SID Int. Symp. Dig. Tec. 17, 75–77 (1976).

23. T. Inoue and Y. Takaki, “Table screen 360-degree holographic display using circular viewing-zone scanning,” Opt. Express 23(5), 6533–6542 (2015).

24. A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2366–2369.

25. P. Campisi, P. Le Callet, and E. Marini, “Stereoscopic images quality assessment,” in Proceedings of the 15th European Signal Processing Conference (IEEE, 2007), pp. 2110–2114.

26. B. E. Bayer, “An optimum method for two-level rendition of continuous-tone pictures,” Proc. SPIE 154, 139–143 (1999).
