
Motion parallax and lossless resolution autostereoscopic 3D display based on a binocular viewpoint tracking liquid crystal dynamic grating adaptive screen

Open Access

Abstract

The autostereoscopic 3D display has two important indicators: the number of viewpoints and the display resolution. However, improving both at once is a challenge. Here, we develop a fixed-position multiview, lossless-resolution autostereoscopic 3D display system built around a dynamic liquid crystal (LC) grating screen. The system consists of an LC display panel and an LC grating screen. Synchronizing the frame switching of the LC display panel with the LC grating screen shutter preserves the full display resolution. The “eye space” design makes the viewpoints sufficiently dense and determines the LC grating screen's parameters. On this basis, we use binocular viewpoint tracking technology to realize adaptive control of the LC grating screen. Different binocular views are rendered in real time according to the position of a single pair of stereoscopic viewpoints within the eye space, making motion parallax possible. We present the working principle and a mathematical analysis, and implement a prototype to verify the principle. Analysis of the experimental results shows that the prototype achieves viewpoint tracking and motion parallax with lossless resolution and sufficiently dense viewpoints.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A commercially prospective glassless (autostereoscopic) three-dimensional (3D) display, free of limitations on viewer seating locations and viewer number, is required for next-generation display techniques. There are roughly three branches of the 3D display family: ray (light-field) [1,2], point (volumetric) [3–5], and wave (holographic) [6–8]. They are all based on parallax 3D display technology, and each has its advantages in aspects such as accommodation, occlusion, view angle, and virtual image formation [9]. Holographic display is a superior true-3D technology that can reproduce 3D objects using the amplitude and phase information of light waves. However, given its wavefront-regulation principle, recording and reproducing the image information is relatively complex, and the view angle is greatly limited [10,11]. The volumetric display is limited to a small volume, and parallax occlusion is difficult [12]. Although there are some difficulties with convergence adjustment, the light-field display represented by the parallax barrier and the lenticular lens grating has a good commercial prospect: both of these 3D display technologies are based on binocular parallax, their principles are simple and easy to implement, and they are well compatible with traditional 2D display technology. The principle of the lenticular lens grating, however, has a shortcoming in resolution loss. Because the number of pixels, N, behind each lenticular lens cell is fixed at the time of manufacture [13], the resolution of the 3D scene is inevitably lost. For the naked-eye 3D scene to achieve the same resolution as a glasses-based display, the pixel and lenticular lens cell sizes would have to be reduced, significantly increasing the processing difficulty. As for dense viewpoints, the number of viewpoints is determined by the N pixels covered by each lenticular lens element, and viewers are restricted to viewable regions with a limited viewing angle and limited view depth. Current multiview display technologies therefore always exhibit these drawbacks.

A hardware-based method described in this paper may overcome some of these drawbacks. Our previous work proposed a parallax-barrier technology based on dynamic control [14–16]. This method provides an alternative technical scheme for autostereoscopic 3D display with multiple viewpoints or motion parallax, which can achieve the same image quality (resolution) as a glasses-based 3D display without requiring a higher-resolution display screen for most commercial applications. We define the viewpoint-redundant region as the “eye space” or “eye zone”, so that a single viewer can be anywhere in the whole space while the resolution and brightness of the 3D display system are preserved without loss [17,18]. For multiple observers, the design provides a group of sufficiently dense, relatively fixed positions; that is, the positions between viewpoints are relatively fixed, rather than fixed at a certain position in space [19]. If multiple viewers occupy such a group of relatively fixed positions at the same time, they all receive correct 3D view information; although the viewers' positions differ, they see identical images. If an observer is not at one of those positions, view-inversion conflicts occur. For one free-moving viewer, we can obtain the positioning coordinates through binocular viewpoint tracking, feed the coordinates back to the control circuit, automatically adjust the size of the dynamic screen's shutter pupil windows, and thereby realize the adaptive-screen function. Simultaneously, synchronous rendering of the view for the corresponding coordinate position realizes motion parallax of the 3D scene. Previous studies include the dynamic parallax barrier [2] and the dynamic directional backlight [11], which can provide full-resolution views, but mostly only to a single observer. The difference in this work is that full-resolution images can be obtained simultaneously from multiple viewpoints through the eye-space design, and, owing to appropriate space-time multiplexing, the display brightness is not reduced. Moreover, compared with blind-style shutters, the LCD dynamic parallax barrier offers better adaptive addressing-circuit control and better signal synchronization between the two LCD layers. Compared with directional backlight technology, the parallax adaptive screen is clearer in principle, simpler in structure, and easier to implement in terms of equipment configuration and processing technology.

2. System configuration and working pipeline of dynamic control adaptive screen

In the modeling example (a 70-inch TV), the single view zone extends from 2 m to 5 m from the screen (a view-zone depth of 3 m), with a 45-degree view angle from each edge of the screen, which corresponds to a 5-row capacity (row separation 0.75 m). There are 63 eye pairs available in each row, and thus at least 63 × 5 = 315 eye pairs (i.e., 315 possible viewing locations) in this view zone. Therefore, a viewer can sit at any location in this zone to watch TV without moving seats. The only thing required is adjusting the sitting posture (which is in any case natural during long sitting sessions) so that the eye positions switch between two neighboring eye zones (6.6 cm, or about 2.6 inches, apart). This effectively alleviates the restriction on viewers' seat locations [20–22]. Time multiplexing also needs to be reduced to a proper level [23–26].
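To make the capacity estimate concrete, the following minimal Python sketch (our illustration, not code from the paper) computes the row count and a rough eye-pair count per row from assumed geometric parameters; the exact figure of 63 pairs per row depends on the paper's full geometric model.

```python
import math

SCREEN_W = 1.55           # m, approximate width of a 70-inch 16:9 panel (assumed)
VIEW_ANGLE = math.radians(45)
Z_NEAR, Z_FAR = 2.0, 5.0  # m, depth range of the single view zone
ROW_PITCH = 0.75          # m, separation between viewer rows
EYE_ZONE = 0.066          # m, neighboring eye-zone spacing (6.6 cm)
PAIR_PITCH = 2 * EYE_ZONE # one eye pair spans two eye zones (assumed)

def row_depths():
    """Viewer-row positions from the near to the far edge of the zone."""
    n_rows = int(round((Z_FAR - Z_NEAR) / ROW_PITCH)) + 1  # -> 5 rows
    return [Z_NEAR + i * ROW_PITCH for i in range(n_rows)]

def pairs_in_row(z):
    """Eye pairs fitting across the viewable width at depth z."""
    width = SCREEN_W + 2 * z * math.tan(VIEW_ANGLE)
    return int(width / PAIR_PITCH)

rows = row_depths()
print(len(rows), [pairs_in_row(z) for z in rows])
# The paper quotes 63 pairs per row (315 in total); the counts here are
# rough estimates that depend on the exact geometric model.
```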

In building the experimental prototype, because the LC grating slicing unit is small, a splicing scheme is needed to achieve a large-size 3D display such as 70 inches. The proof-of-principle prototype is based on the LC slicing cell size, with the grating screen placed some distance in front of the flat-panel display, as shown at the top of Fig. 1, which presents the system configuration and working flow chart of the prototype together with overall and partial magnifications of the grating. The system configuration parameters are listed in Table 1. In the 1D grating adaptive-screen design, we thoroughly considered the shutter pupil window edge offset and the diffraction effect discussed in Section 4. Owing to the grating's beam-splitting effect, the left and right eyes see the transmitted green and red images, respectively; without the grating, the display appears yellow, a mixture of green and red. According to the LC response-time requirements in Section 4.4, we customized the grating, a rapid-response twisted-nematic LCD with a size of 310 mm × 285 mm, as shown in the middle of Fig. 1. A partial magnification of the grating is shown at the bottom of Fig. 1.

Fig. 1. Schematic diagram of prototype system configuration and workflow, including the overall and partial magnification of the grating.

Table 1. System configuration parameters

A depth camera obtains binocular viewpoint tracking and positioning within the view zone. The binocular viewpoint coordinates are fed back to an FPGA, which produces two output signals. One controls the dynamic adaptive screen through the addressing circuit; the other controls the display screen to render the image information for the corresponding viewing angle, as shown in Fig. 2. The dynamically controlled adaptive screen adjusts the pupil window size according to the viewer's distance from the screen and assigns left- or right-eye pupil windows according to the viewer's angular information. The display screen renders the corresponding viewing-angle image according to the viewer's angular position relative to the screen.
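The following Python sketch illustrates this feedback pipeline in pseudocode form; all object and method names (get_eye_positions, set_pupil_windows, render_views) are hypothetical placeholders, not interfaces from the actual prototype.

```python
def tracking_loop(depth_camera, fpga, renderer):
    """One pass per tracked frame: camera -> FPGA -> shutter + renderer."""
    while True:
        # Depth camera: binocular viewpoint coordinates (x, y, z).
        left_eye, right_eye = depth_camera.get_eye_positions()

        # FPGA output 1: the addressing circuit sets the pupil window size
        # from the viewing distance and the left/right assignment from the
        # angular position.
        fpga.set_pupil_windows(left_eye, right_eye)

        # FPGA output 2: render the view pair matching the viewer's
        # angular position relative to the screen.
        renderer.render_views(left_eye, right_eye)
```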

Fig. 2. Schematic diagram of binocular viewpoint tracking feedback for adaptive screen and rendering.

The view depth is the viewable range along the viewing direction, such as the range 2 m to 5 m in the modeling example; beyond this range, the 3D scene disappears. However, the proposed method enables the view depth to be extended by using multiple address drivers. The total view zone can be divided into M view zones (M = 1, 2, 3, …) for the large view depths of theaters, cinemas, exhibitions, or squares (for family use, M = 1 is usually sufficient). One address driver is used for each view zone and contains two address matrices, for row and column: 2D adaptive screens use both row and column addressing matrices, whereas 1D adaptive screens use only a column addressing matrix. The pictures for each zone are displayed by time multiplexing (but not excessive time multiplexing), i.e., by rotating the display among the zones. More details on multiple address drivers will be given in a separate paper owing to limited space.
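As a rough illustration of the zone rotation, here is a hedged Python sketch; the driver, display, and frame-source interfaces are hypothetical, since the full multi-driver design is deferred to a separate paper.

```python
from itertools import cycle

def drive_zones(display, zone_drivers, frames_for_zone, n_frames):
    """Rotate through M = len(zone_drivers) view zones by time multiplexing."""
    schedule = cycle(range(len(zone_drivers)))
    for _ in range(n_frames):
        zone = next(schedule)
        frame, shutter_pattern = frames_for_zone(zone)
        # Each zone has its own address driver (row/column matrices for a
        # 2D screen, a single column matrix for a 1D screen).
        zone_drivers[zone].load_addresses(shutter_pattern)
        display.show(frame)  # panel frame shown while this zone's shutters are open
```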

3. Principle and methodology of pupil window and eye space redundancy

In Fig. 3, the adaptive screen “C” is located in front of the display screen “A” with a separation. The sizes and edges of the shutter pupils (the pupil of the left or right eye, abbreviated “PL” or “PR”) for a given display pixel are determined by the two crossing points of two lines with the adaptive screen “C”. Plane “B” is a virtual plane where the two edge-crossing lines of one pixel and one eye space intersect. The adaptive screen “C” is usually in the vicinity of plane “B” and, in general, on the opposite side of plane “A”. Therefore, for each given display pixel, there is a group of shutter pupil windows. As shown in Fig. 3, the pupils for all left and right eyes are marked PL and PR, respectively. Each pupil window may contain m (m ≥ 1) shutter units of the adaptive screen; m determines the width of one shutter pupil window, determines the resolution required for the adaptive screen, and also affects its position distribution on the adaptive screen. In summary: (1) the concept of a pupil, or pupil window, of the adaptive screen differs from that of a shutter pixel; a pupil may contain m (m ≥ 1) shutter pixels or shutter slots; (2) pupil sizes (i.e., m) and densities have a distribution over the adaptive screen. Figure 4 shows the distribution of eye-space redundancy in different rows. The beam in a farther row has a larger projection size, and vice versa. The tolerance is maximum at a given middle position and decreases correspondingly toward the left and right edges of the front and back rows.
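A minimal sketch of this similar-triangles construction follows, assuming a pixel plane at z = 0, an eye plane at z = Z, and the adaptive screen at z = d; the function and variable names are ours, not the paper's.

```python
# Hedged geometric sketch of Fig. 3: for a display pixel on plane A and
# one eye zone at viewing distance Z, the shutter pupil on the adaptive
# screen C (at separation d from A) spans the two crossing points of the
# pixel-to-eye-zone-edge lines with plane C. Units: meters.
def pupil_window(x_pixel, x_eye, eye_zone_w, Z, d):
    """Return (left, right) pupil edges on plane C by similar triangles."""
    edges = []
    for x_edge in (x_eye - eye_zone_w / 2, x_eye + eye_zone_w / 2):
        # Linear interpolation from the pixel (z = 0) toward the eye-zone
        # edge (z = Z), evaluated at the adaptive-screen plane z = d.
        edges.append(x_pixel + (x_edge - x_pixel) * d / Z)
    return min(edges), max(edges)

# Example: center pixel, eye 3 m away and 0.1 m to the right, 6.6 cm eye
# zone, adaptive screen 60 mm in front of the panel (illustrative values).
print(pupil_window(0.0, 0.1, 0.066, 3.0, 0.06))
```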

Fig. 3. Demonstrative drawing of the determination of eye pairs in eye space and of shutter pupil sizes and locations for a given display pixel in the horizontal (x) direction, for all left eyes (solid lines and pupils for right eyes) and all right eyes (dashed lines and pupils for left eyes); the same applies in the vertical (y) direction.

Fig. 4. The distribution of eye-space redundancy in different rows. In all rows of viewers, the eye separations are the same; however, the rear row's projected beam sizes are larger than those of the front row.

4. Image calibration and parameters analysis

The definition of the eye space ensures the number of viewpoints that can be viewed simultaneously, and binocular viewpoint tracking provides both accurate binocular parallax and smooth motion parallax. The adaptive modulation function of the dynamic grating screen is the key to realizing spatial light modulation. Below, the calibration of the image and the dynamic grating screen is described, and the influence of the relevant parameters is analyzed.

4.1 Pupil window edge offset

As described in Section 3, a shutter pupil window's size is determined by m, that is, it is an integer multiple of the shutter-unit width. Therefore, there is always an offset between the ideal edge location and the pupil window's actual edge location. The spatial distribution of the edge offset along a row of viewers (or eye pairs), from right to left, is given in Fig. 5. The offset of the adjacent edges of a pair of eyes lies between the solid and dashed curves. The offset varies periodically, and the left and right edges shift together relative to their ideal locations, so the pupil size itself varies much less than the edge offset. This feature is shown in Fig. 6, which shows selected pupil windows (opened for a given display pixel). Not all pupils are switched on for each pixel; only those within the angle subtended by the pixel from the left side to the right side of the eye space are switched on. In Fig. 6, R1–R3 denote right-edge pixels, M1–M3 denote center pixels, and L1–L3 denote left-edge pixels. To show the uniformity of the pupil size, the shutter pupils belonging to three groups of display pixels are visualized; the three groups are selected from 11 pixels near the right edge of the screen, 11 pixels near the left edge, and 11 pixels at the middle. For each pixel there are two groups of shutter pupils, the first for all left eyes and the second for all right eyes; for example, in each of R1, M2, and L3, the first two lines (one for left eyes, one for right eyes) correspond to the first of the 11 pixels, the second two lines to the second pixel, and so on, with the last two lines corresponding to the last of the 11 pixels.
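To illustrate how quantization to m shutter units produces a periodic edge offset, here is a small Python sketch; it is an assumption-laden illustration of the effect, not the authors' algorithm.

```python
# Snap the ideal pupil edges to the shutter-unit grid (unit width u): the
# realized window is an integer number m of units, and both edges carry
# an offset relative to their ideal positions, as in Fig. 5.
def quantize_pupil(left_ideal, right_ideal, u):
    """Snap ideal edges to the shutter grid; return edges, m, and offsets."""
    left = round(left_ideal / u) * u
    right = round(right_ideal / u) * u
    m = max(1, round((right - left) / u))  # a pupil holds m >= 1 shutter units
    right = left + m * u                   # keep an integral window width
    return (left, right), m, (left - left_ideal, right - right_ideal)

# Example with the prototype's shutter unit ds = 0.485 mm (positions in mm).
print(quantize_pupil(1.34, 2.66, 0.485))
# Both edges shift together, so the pupil width varies much less than the
# individual edge offsets, matching the behavior described in the text.
```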

Fig. 5. Offset between actual edge location and ideal edge location of shutter pupil.

Fig. 6. Top chart: selected groups, out of 13 pupils, of the shutter pupils switched on for a given pixel; top row: 11 display pixels near the right edge, middle row: 11 center display pixels, bottom row: 11 display pixels near the left edge. Bottom chart: further zoom into pieces R1, M2, and L3 only. In each of R1, M2, and L3, the first two lines correspond to the first of the 11 pixels, the second two lines to the second pixel, and so on; the last two lines correspond to the last of the 11 pixels.

4.2 Beam overlap and boundary error

Because of the pupil edge deviation, there are overlaps at the beam boundaries, even for the center row. Figure 7 shows the spatial distribution of light-beam overlap: the solid line represents the overlap between eyes of the same pair, and the dotted line represents the overlap between eyes of two adjacent pairs. Whether the pixels are in the middle or at the left and right edges, there are different degrees of overlap. Figure 7 covers the middle pixel as well as the left- and right-edge pixels.

Fig. 7. Spatial distribution of beam overlaps; solid line: overlap for eyes in the same eye pair, dashed line: overlap for eyes belonging to neighboring eye pairs. Top left: middle pixel of the rear row; bottom left: middle pixel of the front row; top right: left-edge pixel of the rear row; bottom right: left-edge pixel of the front row.

Figure 8 shows the beam-overlap statistical distribution only for pixels on the left edge (similar but mirrored for the right edge), which represents the worst case because the edge pixel has the maximum overlap. One can see that the maximum overlap is only about 17% of the eye spacing [18].

Fig. 8. Statistical distribution of beam overlaps over all eyes and all rows, for the left-edge pixel only; solid line: overlap for eyes in the same eye pair, dashed line: overlap for eyes belonging to neighboring eye pairs.

Besides the beam sizes and beam overlaps mentioned above, the eligibility of the 3D display method can also be evaluated by the beam boundary error, as shown in Fig. 9. Here, the boundary errors of the projected beams are calculated for the center row only. Boundary errors for the rear and front rows can also be obtained, either by taking the difference between beam sizes and eye spacing and normalizing it to the eye spacing, or by projecting the beams onto the rear and front rows and normalizing the resulting errors to the eye spacing; however, neither yields new information. The maxima of the absolute values of the beam overlaps, the boundary errors, and the offsets of pupil and beam sizes are the critical measures in system design.
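The overlap and boundary-error measures can be written compactly; the following sketch uses illustrative numbers, not the paper's measured data.

```python
# Sketch of the evaluation measures in Section 4.2.
def boundary_error(beam_size, eye_spacing):
    """Relative beam boundary error; 0 means a perfect fit to one eye zone."""
    return (beam_size - eye_spacing) / eye_spacing

def overlap_fraction(beam_right, neighbor_left, eye_spacing):
    """Overlap of two adjacent beams as a fraction of the eye spacing."""
    overlap = max(0.0, beam_right - neighbor_left)
    return overlap / eye_spacing

# Example: a 70 mm beam on a 64 mm eye pitch, with the neighboring beam
# starting 6 mm before this one ends (units: meters, values illustrative).
print(boundary_error(0.070, 0.064))          # ~ +0.094 (9.4% oversize)
print(overlap_fraction(0.070, 0.064, 0.064)) # ~ 0.094 (9.4% of eye spacing)
# The paper reports a worst-case overlap of about 17% of the eye spacing.
```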

Fig. 9. Spatial distribution of boundary errors of projected beams on the center row. The top left image corresponds to the middle pixel and the right eye, the bottom left image corresponds to the left edge pixel and the right eye, the top right image corresponds to the middle pixel and the left eye, the bottom right image corresponds to the left edge pixel and the left eye.

4.3 Diffraction effect

The diffraction of light passing through the edges of the shutter pupil window is inevitable, and it increases crosstalk between the right and left eyes. Therefore, the diffraction effect must be considered when designing the geometric model of the eye space and the grating pupil. We calculated the light-intensity distribution at the optimal viewing distance through an Advanced System Analysis Program (ASAP) simulation of the 1D shutter grating screen, shown without the diffraction effect in Fig. 10 and with it in Fig. 11.

Fig. 10. Geometric simulation without diffraction effects.

Fig. 11. Diffraction effect distribution of image brightness intensity.

ASAP is optical-analysis software developed by the Breault Research Organization (BRO) to simulate the performance of optical systems through non-sequential ray tracing in 3D space [27]. It has been widely used in lighting design, stray-light analysis, backlight design, and polarized-light analysis, and is particularly known for its precision illumination and stray-light capabilities. According to the simulation results, the shutter pupil window size is adjusted so that the eye-space size matches the light-intensity distribution with the diffraction effect taken into account. Crosstalk between the eyes is limited to 17%, as shown in Fig. 12. In the grating design below, we set the slit width to ds = 0.485 mm, so that after passing through the shutter pupil window the beam reaches the eye space with a transverse extent of about dex = 64 mm, which is just the distance between the two eyes.
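As a sanity check on the scale of the diffraction effect (separate from the ASAP simulation), a single-slit Fraunhofer estimate gives the spreading of the central lobe; the wavelength is an assumed mid-visible value.

```python
# Order-of-magnitude estimate: the half-width of the central diffraction
# lobe after a slit of width ds, observed at distance Z, is roughly
# lambda * Z / ds in the Fraunhofer approximation.
wavelength = 550e-9  # m, mid-visible wavelength (assumed)
Z = 2.0              # m, optimal viewing distance
ds = 0.485e-3        # m, slit width from the grating design

spread = wavelength * Z / ds
print(f"diffractive spread ~ {spread * 1e3:.2f} mm per side")  # ~2.27 mm

# This is small compared with the 64 mm eye spacing (dex), which is why
# a modest adjustment of the pupil window can keep eye-to-eye crosstalk
# within the ~17% bound quoted in the text.
```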

Fig. 12. Schematic diagram of crosstalk.

4.4 Liquid crystal response time requirements

We use liquid crystal material to make the grating and realize dynamic switching control of the shutter pupil window, exploiting the light-transmission characteristics of LC material at different voltages to achieve a light-switching effect. Since the shutter pupil window's switching frequency must match the display screen's refresh frequency, the LC material's response time must be fast enough for flicker-free viewing. We calculated that the response time of the LC material should be less than 2 ms. Details of the response time of the LC adaptive screen are given in Fig. 13: the rise time is Ton (1), the fall time is Toff (2), the switch time is t1 (3), and the duration time is t2 (4).

$$T_{on} = T_{3} - T_{2}$$
$$T_{off} = T_{6} - T_{5}$$
$$t_{1} = T_{on} + T_{off}$$
$$t_{2} = T_{5} - T_{3}$$

Fig. 13. Liquid crystal response time curve.

Ideally, the response time $\varDelta T_{resp}$ (5) should equal the rectangular voltage width $\varDelta T_{u}$ (6), but in practice they differ. If we set the LC adaptive screen's target response width $\varDelta T_{tar}$, the normalized rectangular voltage width is $T_{u}$ (7). Because the rectangular voltage has a lag $\varDelta T_{d}$ (8), the adaptive screen's control voltage should be applied $\varDelta T_{d}$ ahead of the frame start time.

$$\varDelta T_{resp} = \frac{1}{2}t_{1} + t_{2} = T_{5} - T_{3} + \frac{1}{2}(T_{on} + T_{off})$$
$$\varDelta T_{u} = T_{4} - T_{1}$$
$$T_{u} = \varDelta T_{tar}\ast \frac{{\varDelta T_{u}}}{{\varDelta T_{resp}}}$$
$$\varDelta T_{d} = T_{2} - T_{1} + \frac{1}{2}T_{on}$$
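The following numerical sketch evaluates Eqs. (1)–(8) with illustrative timestamps T1…T6 (not measured values from the prototype) to show how the drive-voltage lead time ΔT_d is obtained.

```python
# Evaluate Eqs. (1)-(8) with assumed timestamps in milliseconds.
T1, T2, T3, T4, T5, T6 = 0.0, 0.3, 1.0, 8.0, 8.3, 9.2  # ms (illustrative)

T_on = T3 - T2                 # Eq. (1): rise time
T_off = T6 - T5                # Eq. (2): fall time
t1 = T_on + T_off              # Eq. (3): switch time
t2 = T5 - T3                   # Eq. (4): duration time
dT_resp = 0.5 * t1 + t2        # Eq. (5): effective response width
dT_u = T4 - T1                 # Eq. (6): rectangular voltage width
dT_tar = 8.3                   # ms, target response width (assumed)
T_u = dT_tar * dT_u / dT_resp  # Eq. (7): normalized voltage width
dT_d = T2 - T1 + 0.5 * T_on    # Eq. (8): lag; drive leads frame start by dT_d

print(f"Ton={T_on} Toff={T_off} dT_resp={dT_resp} T_u={T_u:.2f} dT_d={dT_d}")
```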

5. Discussion on the manufacture of 1D shutter parallax barrier adaptive screen

Owing to the limitation of the natural response time of liquid crystal and the inability of the processing technology to realize more complex requirements, we compromised on a 1D dynamic grating screen to verify feasibility. According to the geometric model shown in Fig. 14, we set the following parameters: the horizontal pitch of the eye space is dex, the horizontal pixel size is dpx, the distance from the grating to the eye space is Z, the distance between the grating and the display screen is d, the width of the grating slit is ds, and the grating unit size Ds is four times ds. Since the grating slit has a finite width, d′ is introduced to compensate for the calculation error. Their geometric relationships are given in Eqs. (9)–(13).

$$dex = \frac{Z}{d}dpx$$
$$ds = \frac{{d^{\prime}}}{{d - d^{\prime}}}dpx$$
$$d^{\prime} = (1 - \frac{1}{n})d,(n \ge 1)$$
$$ds = (n - 1)dpx,(n \ge 1)$$
$$ds = \frac{1}{4}dpx \approx \frac{1}{4}Ds,(n = \frac{5}{4})$$
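Plugging the prototype values into Eqs. (9)–(13) gives the remaining geometry; dpx and d below are inferred from the stated dex, ds, Z, and n, so they should be read as a consistency check rather than reported specifications.

```python
# Numerical sketch of Eqs. (9)-(13) using the prototype values
# (dex = 64 mm, ds = 0.485 mm, Z = 2 m, n = 5/4).
n = 5 / 4
ds = 0.485          # mm, slit width (from the design in Section 5)
dpx = ds / (n - 1)  # Eq. (12) inverted: effective pixel pitch, 1.94 mm
Ds = 4 * ds         # grating unit = four slit widths, 1.94 mm
dex = 64.0          # mm, eye spacing in eye space
Z = 2000.0          # mm, grating-to-eye-space distance (OVD)

d = Z * dpx / dex           # Eq. (9) inverted: grating-to-display gap, ~60.6 mm
d_prime = (1 - 1 / n) * d   # Eq. (11): compensation distance, ~12.1 mm

print(f"dpx={dpx:.2f} mm, Ds={Ds:.2f} mm, d={d:.1f} mm, d'={d_prime:.1f} mm")
```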

Fig. 14. Geometric light path diagram.

The image information on the display screen is a reorganization of the left- and right-eye views. We use green and red pictures to distinguish the left eye's view from the right's: green for left, red for right, as shown in Fig. 15(a). Because of the inherent limit on the number of pixels of the 2D display, two frames of reorganized views are looped, and the switching of the two frames is synchronized with the dynamic grating. When the first frame (RLRL) is loaded onto the screen, the even columns of the shutter pupil windows open; when the second frame (LRLR) is loaded, the odd columns of the grating open. The dynamic grating parameters are marked for fabrication, as shown in Fig. 15(b): Ds = 1.94 mm and ds = 0.485 mm, where “Ds” is a unit consisting of two fixed occluded “ds” strips and two dynamically controlled “ds” strips. The two controlled “ds” strips, corresponding to the even and odd columns, are switched cyclically by the frame-frequency signal of the 2D display.
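A compact sketch of this frame-to-shutter synchronization follows; the display and shutter interfaces are hypothetical placeholders, not the prototype's drive electronics.

```python
# Two-frame synchronization of Fig. 15: even shutter columns open with
# the RLRL frame, odd columns with the LRLR frame.
def run_sync(display, shutter, rlrl_frame, lrlr_frame, n_frames):
    for k in range(n_frames):
        if k % 2 == 0:
            display.show(rlrl_frame)        # frame 1: R L R L ...
            shutter.open_columns(parity=0)  # even dynamic columns transmit
        else:
            display.show(lrlr_frame)        # frame 2: L R L R ...
            shutter.open_columns(parity=1)  # odd dynamic columns transmit
        shutter.wait_vsync()                # switch on the panel's frame clock
```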

Fig. 15. Schematic diagram of view recombination and grating synchronization principle.

The beam-splitting effect of the grating forms the parallax that is the basis of 3D stereo perception. We verified experimentally that the grating separates the green and red view images well at the optimum viewing distance (OVD). To find the OVD, we took pictures in front of the grating screen with two cameras kept at the binocular distance, as shown in Fig. 16. Experiments show that the OVD is about 2 m from the grating, consistent with the theoretical model above. Away from the OVD, crosstalk appears because of overlapping beams, as shown in Fig. 16(a); beam boundary errors can also cause crosstalk, as shown in Fig. 16(b). As described in Section 4.2, crosstalk is kept within 17% by controlling the beam overlap and correcting the beam boundary error. At the OVD, the left- and right-eye views are well separated and form parallax, as shown in Fig. 16(c), giving a 3D experience. Within the 90-degree horizontal view angle, correct view images are obtained at the boundary, as shown in Fig. 16(d) for the left-boundary view images; the same holds for the right boundary. Experiments show that almost 132 viewpoints are densely arranged along the horizontal direction at the OVD within the visual angle range. Binocular view images can be obtained from random viewpoints at any position within the effective angle range, as shown in Fig. 16(e).

Fig. 16. Binocular view images in different locations, (a) out of the OVD, (b) at the boundary of the beam, (c) at the OVD, (d) at the left boundary of the 90-degree horizontal view angle, (e) random viewpoints at any position within the effective angle range.

We reorganize the left- and right-eye view images and display them on the 2D display screen; placing the grating in front of the screen then yields a stereo car logo. By appropriately increasing the camera's exposure time, all the screen pixels can be captured from any viewpoint without losing resolution, as shown in Fig. 17(a). Because of persistence of vision, each eye sees all the pixels of the entire 2D display, so we consider the resolution almost loss-free. With a static grating, by contrast, the image resolution corresponds to the grating resolution: only half of the screen pixels can be seen from a given perspective, and the resolution is significantly reduced, as shown in Fig. 17(b).

Fig. 17. Schematic diagram of car logo stereo display. (a) resolution loss-free, (b) half of the full resolution.

The experimental results for the left-eye view at different depths are shown in Fig. 18, at 2 m, 3 m, 4 m, 5 m, and 6 m from top to bottom; the corresponding right-eye views can be obtained similarly. The photos are same-size views taken at different depths, so the display system occupies a smaller and smaller proportion of the photo from top to bottom. When the viewpoint is within the 2 m to 5 m range, the depth camera obtains the observer's depth through viewpoint tracking, and the FPGA then sets the grating pupil size of the adaptive screen so that the view information for the corresponding depth is projected to the observer's position, yielding 3D perception. However, when the viewpoint is beyond 5 m, the size limit of the adaptive grating pupil unit produces a view with severe crosstalk, as shown at the bottom of Fig. 18.

Fig. 18. Schematic diagram of results at different depths.

6. Binocular viewpoints tracking and motion parallax

To sum up, the display system can simultaneously provide multiple binocular viewpoints at fixed positions. Each observer obtains a binocular-parallax image of the same content, producing stereoscopic perception. The viewpoints are sufficiently dense within a certain observation range. At the same time, each observer sees the 3D image with no loss of resolution, because the grating and the binocular views are refreshed dynamically and simultaneously.

However, there can be a mismatch between the actual left and right eye positions and the discrete positions of the dense eye space. Viewpoint tracking solves this problem effectively; furthermore, motion parallax can be realized by tracking a single binocular viewpoint. The observer's position information is obtained and fed back to the grating screen, which adaptively controls the grating pupil window and renders the left- and right-eye view information more accurately. Meanwhile, the binocular view from the corresponding angle is rendered according to the observer's position, realizing motion parallax for a single binocular viewpoint.

The depth camera obtains the distance from the binocular viewpoint to the display screen, and an algorithm calculates the grating pupil window's size to ensure crosstalk-free binocular views at different depths. As described in Section 3, the adaptive adjustment of the grating pupil window size, composed of m (m ≥ 1) grating shutter pixel units, is realized by the automatic addressing control circuit. At the same time, viewpoint tracking solves the problem of viewing-angle localization in the eye space. Within a 120-degree range, we render the binocular view for each viewing angle and cache it in memory for real-time invocation, as sketched below. As shown in Fig. 19, the images are taken with a monocular camera at the left, middle, and right positions of the screen. The background of each image is the left-eye view, which is green. We rendered a three-dimensional head model; the display system obtains the viewer's position through binocular viewpoint tracking and renders the image for the corresponding view angle. Figure 19(a) is taken 60 degrees to the left of the screen, (b) in the middle, and (c) 60 degrees to the right.
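A minimal sketch of such an angle-indexed view cache follows, with a hypothetical render_pair function and an assumed 1-degree angular step (the paper does not state the step size).

```python
# Pre-render binocular views across the 120-degree range, then fetch the
# nearest cached pair for the tracked viewing angle in real time.
def build_view_cache(render_pair, step_deg=1.0):
    """Pre-render left/right view pairs for angles in [-60, +60] degrees."""
    angles = [round(-60 + i * step_deg, 1) for i in range(int(120 / step_deg) + 1)]
    return {a: render_pair(a) for a in angles}

def fetch_views(cache, tracked_angle_deg, step_deg=1.0):
    """Snap the tracked angle to the cache grid and return the view pair."""
    key = round(round(tracked_angle_deg / step_deg) * step_deg, 1)
    key = max(-60.0, min(60.0, key))  # clamp to the supported range
    return cache[key]
```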

Fig. 19. Schematic diagram of motion parallax. The picture at 60 degrees to the (a) left of the screen, (b) middle, and (c) right.

7. Discussion and conclusion

Parallax-barrier 3D stereoscopic display is a very mature technology, but the addressable active control of switchable gratings still needs to be perfected. Under ideal conditions, a 2D grating can satisfy the viewing needs of multiple rows of dense viewpoints while maintaining resolution without loss. The design, which resembles a light-field display, works well with traditional flat-panel displays and has good compatibility. In practice, however, liquid crystal materials have inherent defects, and grating processing requires high precision, so we had to use a 1D grating, which does not fully capture the method's advantages. Several aspects can also be improved, such as image-rendering efficiency and rapid LC modulation. In future work, the experimental design should be optimized, and the number of viewpoints and the resolution should be measured quantitatively. The design and processing of 2D gratings remains a research hotspot.

Figures 16–19 show the inevitable appearance of Moiré fringes. The prototype system in this paper is mainly intended to verify the principle, and further optimization is needed regarding Moiré fringes, the splicing scheme, and other details. Moiré fringes can be eliminated by adjusting the tilt angle of the 1D adaptive screen; this elimination was considered in the design but, owing to processing costs, was not included in the present experiments.

For the case of dense multiple simultaneous viewpoints, we designed a 1D adaptive-screen experimental prototype for principle verification. On this basis, we make full use of the breadth of the eye space to achieve motion parallax based on single binocular-viewpoint tracking. The core hardware of the system is an adaptive LC grating screen, which adjusts the grating pattern according to the tracked binocular view position; the system also includes the addressing-circuit design and a time-division multiplexing method to realize lossless resolution. We also analyzed the edge-compensation and diffraction effects on the 3D display. From binocular viewpoint tracking and positioning, to adaptive-screen adjustment, to motion-parallax image rendering, the real-time performance of the whole system still needs further optimization.

Funding

Beijing Municipal Natural Science Foundation (7212202); National Natural Science Foundation of China (81771940, 82027807).

Acknowledgment

The authors acknowledge support from the National Natural Science Foundation of China (82027807, 81771940) and the Beijing Municipal Natural Science Foundation (7212202).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. X. Xia, Z. Zheng, and X. Liu, “Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen,” Appl. Opt. 49(26), 4915–4920 (2010). [CrossRef]  

2. T. A. Peterka, R. L. Kooima, D. J. Sandin, A. Johnson, J. Leigh, and T. A. DeFanti, “Advances in the Dynallax Solid-State Dynamic Parallax Barrier Autostereoscopic Visualization Display System,” IEEE Trans. Visual. Comput. Graphics 14(3), 487–499 (2008). [CrossRef]  

3. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, K. Costner, A. Monk, M. Pearson, B. Haymore, and J. Peatross, “A photophoretic-trap volumetric display,” Nature 553(7689), 486–490 (2018). [CrossRef]  

4. H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography,” IEEE Trans. Visual. Comput. Graphics 17(11), 1690–1701 (2011). [CrossRef]  

5. T. Ando, K. Mashitani, M. Higashino, and K. Koyamada, “Multiview image integration system for glassless 3D display,” Proc. SPIE 5664, 158–166 (2005). [CrossRef]  

6. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23(4), 3901–3913 (2015). [CrossRef]  

7. T. C. Poon, Digital Holography and Three-Dimensional Display: Principles and Applications, 1st ed. (Springer, 2010).

8. B. Javidi, E. Tajahuerce, and P. Andres, Ray-based and Wavefront-based 3D Representations for Holographic Display, (Wiley-IEEE Press, 2014).

9. D. E. Smalley, OSA Display Technical Group Illumiconclave I, (Optical Society of America, 2015), http://holography.byu.edu/Illumiconclave1.html.

10. S. Yang, X. Sang, X. Yu, X. Gao, L. Liu, B. Liu, and L. Yang, “162-inch 3D light field display based on aspheric lens array and holographic functional screen,” Opt. Express 26(25), 33013–33021 (2018). [CrossRef]  

11. D. Fattal, Z. Peng, T. Tran, S. Vo, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free 3D display,” Proc. IEEE 6656348, 24–25 (2013). [CrossRef]  

12. W. Xie, Y. Wang, H. Deng, and Q. Wang, “Viewing angle-enhanced integral imaging system using three lens arrays,” Chin. Opt. Lett. 12(1), 30–33 (2014). [CrossRef]  

13. D. Zhang and X. Sang, “Comparative Visual Tolerance to Vertical Disparity on 3D Projector versus Lenticular Autostereoscopic TV,” J. Disp. Technol. 12(2), 178–184 (2016). [CrossRef]  

14. L. L. Chen, L. D. Chen, J. Gou, and M. M. Chen, “Method of Glassless 3D Display,” U. S. patent 13,987,705 (9 May 2017).

15. L. L. Chen, Y. Meng, and Z. Yu, “Methods of building dense multiview autostereoscopic display and its hardware requirements,” Proc. SPIE 10459B, 1–9 (2017). [CrossRef]  

16. Y. Meng, Z. Yu, and L. L. Chen, “Analysis of the diffraction effects for a multiview autostereoscopic three-dimensional display system based on shutter parallax barriers with full resolution,” Proc. SPIE 104590P, 1–7 (2017). [CrossRef]  

17. Y. Meng, L. L. Chen, and Z. Yu, “Dense Multi-view Autostereoscopic Three-dimensional Display System Based on Shutter Parallax Barriers with Dynamic Control,” in Imaging and Applied Optics Conference (2018), paper 3W3G.5.

18. C. Zhang, Y. Meng, L. L. Chen, and Z. Yu, “The Image Processing and Data Analysis of Dense Multiview Autostereoscopic 3D Display System Based on Dynamic Parallax Barriers,” in Imaging and Applied Optics Conference (2018), paper 3W3G.6.

19. Y. Meng, Z. Yu, C. Zhang, Y. Wang, Y. Liu, H. Ye, and L. L. Chen, “Numerical simulation and experimental verification of a dense multi-view full-resolution autostereoscopic 3D-display-based dynamic shutter parallax barrier,” Appl. Opt. 58(5), A228–A233 (2019). [CrossRef]  

20. G. Lv, J. Wang, W. Zhao, and Q. Wang, “Three-dimensional display based on dual parallax barriers with uniform resolution,” Appl. Opt. 52(24), 6011–6015 (2013). [CrossRef]  

21. R. Raskar, “Content-Adaptive Parallax Barriers and Six-Dimensional Displays: new ideas from MIT Media Lab,” Proc. SPIE 7863, 7863–7932 (2011).

22. G. Lv, Q. Wang, W. Zhao, and J. Wang, “3D display based on parallax barrier with multiview zones,” Appl. Opt. 53(7), 1339–1342 (2014). [CrossRef]  

23. H. Urey and E. Erden, “State of the Art in Stereoscopic and Autostereoscopic Displays,” Proc. IEEE 99(4), 540–555 (2011). [CrossRef]  

24. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]  

25. A. M. Marson and A. Stern, “Horizontal Resolution Enhancement of Autostereoscopy Three-Dimensional Displayed Image by Chroma Subpixel Downsampling,” J. Disp. Technol. 11(10), 800–806 (2015). [CrossRef]  

26. A. Zhang, J. Wang, and J. Zhou, “Illumination Optics in Emerging Naked-Eye 3D Display,” Prog. Electromagn. Res. 159, 93–124 (2017). [CrossRef]  

27. Breault Research Organization, “ASAP optical software,” https://www.breault.com/software/asap-nextgen.
