
Multiview three-dimensional display with continuous motion parallax through planar aligned OLED microdisplays

Open Access

Abstract

Existing multiview three-dimensional (3D) display technologies suffer from discontinuous motion parallax, because only a limited number of stereo-images are presented to the corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that obtains continuous motion parallax by using a group of planar aligned OLED microdisplays. By blocking part of the light-rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image assembled from two spatially complementary segments of adjacent stereo-images is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, resulting in continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, combining the merit of full display resolution with a thin optical structure, and thus offers great potential for portable or mobile 3D display applications. Experimentally, a prototype display system with nine OLED microdisplays is demonstrated.

© 2015 Optical Society of America

1. Introduction

Through simultaneously presenting multiple perspective views (stereo-images) of a three-dimensional (3D) object, the glasses-free multiview 3D display creates multiple sub-viewing zones (SVZs), i.e. spatial regions around the viewpoints of the stereo-images. A pupil arriving at an SVZ perceives the corresponding stereo-image, so the viewer perceives different stereo-image pairs as his/her left and right eyes move into the corresponding SVZs. Thus, the multiview 3D display evokes both the stereo parallax and motion parallax depth cues of the viewer. Because this technology is compatible with existing flat two-dimensional (2D) display panels, glasses-free multiview 3D display has developed rapidly in recent years and now occupies a prominent position in the 3D display field. According to the mechanism adopted for delivering each stereo-image to its corresponding SVZ, multiview 3D display techniques can be generally categorized as contact-type or projection-type [1, 2]. The contact-type system relies on thin optical plates, such as lenticular [3–5] and parallax-barrier [6, 7] plates, to form the SVZs, which makes it suitable for portable or mobile 3D display products. But the pixels of the flat display panel are shared by all the stereo-images, resulting in a rapid decrease of display resolution when more SVZs are pursued. The projection-type system usually employs multiple projectors [8] or one fast-switching projector mated with a screen that provides the optical functions for the sequential stereo-images [9, 10]. Although it can provide full display resolution, i.e. equal to that of the adopted projector, it is difficult to package into a small-size structure due to the necessary optical adjustment mechanisms.

Inherently, both contact-type and projection-type multiview 3D display systems provide only a limited number of SVZs. So, the stereo-image viewed by a pupil does not change until the pupil moves into the adjacent SVZ. The motion parallax thus appears in a stepwise fashion, degrading the effectiveness of 3D displays. Overlapping adjacent SVZs to some extent may alleviate this discontinuity, as mentioned in Ref. [11], but such overlap increases the blurriness of the displayed images [12], especially when the overlapping percentage reaches 100% [13]. Furthermore, the light intensities presented in the overlapped zone are difficult to control accurately, resulting in an obvious light intensity fluctuation. The obtained continuity of motion parallax is thus only at a coarse level. At present, the super-multiview display technique and the viewer-tracking technique have been proposed to bypass the discontinuous motion parallax issue. The super-multiview display, featured by a great number of SVZs with an interval smaller than the pupil diameter [14, 15], demands super-high-resolution or an ultra-large number of 2D display panels. The viewer-tracking technique [16–18] accommodates only a limited number of viewers and also encounters difficulties in outdoor applications due to complex environments.

In this paper, we propose a novel multiview 3D display system that realizes continuous motion parallax by using only a moderate number of planar aligned OLED microdisplays, without a viewer-tracking unit. By appropriately offsetting multiple planar aligned lenses, the stereo-images from different microdisplays overlap in a virtual display plane and share a common spatial-spectrum plane, which serves as the observation plane Pobserv. By blocking part of the microdisplay light-rays with baffles, a complementary fusing zone (CFZ) is generated between adjacent SVZs. When a pupil moves across the CFZ, continuously changing transitional stereo-images tiled from two segments of the adjacent stereo-images are presented, so that the observed content evolves spatially and gradually from one stereo-image to the adjacent one. Discontinuous motion parallax is thus overcome. Taking full advantage of the inherently large divergence angle of OLED pixels, the light intensity fluctuation on the observation plane is kept very small, guaranteeing successful continuous motion parallax. The proposed system not only presents stereo-images with full display resolution, but also has a thin optical structure, thus offering great potential for portable or mobile 3D display applications.

The rest of this paper is organized as follows. In Section 2, the theory of the proposed multiview 3D display technology for continuous motion parallax is explained. The experimental setup and results are shown in Section 3. Section 4 analyzes the spatial viewing zone along the z-direction that the proposed display system provides to the viewer. Section 5 provides conclusions.

2. Theory of the multiview 3D display system with continuous motion parallax

Figure 1 shows the optical structure of the proposed multiview 3D display system in the horizontal x-z plane. An array of planar aligned OLED microdisplays, each with a display area dx × dy, is imaged by an array of planar aligned lenses of focal length f. For simplicity, only two OLED microdisplays (OLED microdisplays k and k + 1) and two corresponding lenses (Lenses k and k + 1) are drawn in the figure. The horizontal interval between adjacent microdisplays is denoted as dx + 2Δ. By designing the specific offset of each lens's optical axis with respect to the corresponding microdisplay (δk for OLED microdisplay k), the magnified virtual images of the contents loaded on the different OLED microdisplays coincide on the Pdisp as stereo-images. The magnification is determined by β = v/u, where u represents the object distance and −v denotes the (virtual) image distance. Points E and F are the common marginal points of the stereo-images along the horizontal x-direction; in other words, the EF zone represents the overlapping distribution area of the stereo-images projected from each microdisplay. The lenses are processed into a rectangular shape with a horizontal size of dx + 2Δ for seamless alignment along the x-direction. All the lenses and microdisplays are perpendicular to the z-direction. Mk-1, Mk and Mk+1 are the joint points of adjacent lenses. The common spatial-spectrum plane, i.e. the common focal plane of all the lenses, serves as the observation plane Pobserv. A group of baffles is inserted between adjacent microdisplay-lens combination units to block the OLED microdisplay light-rays which exceed the corresponding lens apertures. Along the z-axis, the baffles are confined between the lenses and the microdisplays.
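To make the imaging geometry concrete, the following minimal sketch (the helper names are ours, not from the paper) evaluates the virtual image distance v and the magnification β implied by the thin-lens relation for a microdisplay placed inside the focal length, using the prototype values quoted later in Section 3.

```python
# A minimal sketch (our own helper names) of the thin-lens relations used above:
# a microdisplay at object distance u < f is imaged into a magnified virtual
# image at distance v behind the lens, with magnification beta = v / u.

def virtual_image_distance(f_mm: float, u_mm: float) -> float:
    """Thin-lens relation for a virtual image: 1/f = 1/u - 1/v  =>  v = u*f/(f - u)."""
    assert u_mm < f_mm, "a magnified virtual image requires the object inside the focal length"
    return u_mm * f_mm / (f_mm - u_mm)

def magnification(f_mm: float, u_mm: float) -> float:
    """beta = v / u = f / (f - u)."""
    return virtual_image_distance(f_mm, u_mm) / u_mm

if __name__ == "__main__":
    f, u = 60.0, 160.0 / 3.0             # prototype values quoted in Section 3
    print(virtual_image_distance(f, u))  # -> 480.0 mm
    print(magnification(f, u))           # -> 9.0
```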

Fig. 1 Optical structure of the proposed multiview display system. Two OLED microdisplays are drawn here to demonstrate the proposed ideas.

When only OLED microdisplay k is activated, the virtual stereo-image k is projected to the EF zone, as shown in Fig. 2(a). Due to the baffles, the light-rays of microdisplay k which exceed Lens k are blocked. This implies that the aperture of Lens k plays the role of an optical pupil for stereo-image k: the light-rays from stereo-image k pass through the optical pupil freely, but are blocked once they exceed it. According to geometric optics, the passing light-rays form two kinds of zones on the Pobserv: the V(k)1V(k)2 zone and the V(k)2V(k+1)1 zone. Here V(k)1, V(k)2, and V(k+1)1 are the intersection points of the lines EMk-1, FMk-1, and EMk with the Pobserv. For each observation point in the V(k)1V(k)2 zone, the whole stereo-image k is visible. This zone is denoted as sub-viewing zone k (SVZk). But for an observation point in the V(k)2V(k+1)1 zone, e.g. point A, only a segment of stereo-image k, i.e. the ED segment, is visible. That is to say, for any point in this zone, only a partial stereo-image k is visible. So, the V(k)2V(k+1)1 zone is denoted here as partial-stereo-image viewing zone k (PVZk). Actually, there exist two PVZk zones adjoining the SVZk zone on both sides.

Fig. 2 Optical structures of the proposed multiview 3D display system with only two microdisplay-lens combination units for simplicity: (a) Only OLED microdisplay k being activated by stereo-image k; (b) Only OLED microdisplay k + 1 being activated by stereo-image k + 1.

Similarly, when only OLED microdisplay k + 1 is activated by stereo-image k + 1, an SVZk+1 zone, where the whole stereo-image k + 1 is visible, and two PVZk+1 zones, where only a partial stereo-image k + 1 is visible, are generated on the Pobserv, as shown in Fig. 2(b). Due to the seamless planar alignment of Lenses k and k + 1, a PVZk zone and a PVZk+1 zone between SVZk and SVZk+1 overlap with each other completely. This overlapping zone is defined in this paper as the complementary fusing zone k~k + 1 (CFZk~k+1). For an observation point A in the CFZk~k+1, the DF segment of stereo-image k + 1 becomes visible.

Based on the above discussion, when microdisplays k and k + 1 are activated simultaneously by stereo-images k and k + 1, respectively, the ED segment of stereo-image k and the DF segment of stereo-image k + 1 are presented to point A simultaneously, as shown in the left part of Fig. 3. The two segments link up seamlessly at point D, spatially tiling up a transitional stereo-image. The name CFZ derives from the spatially complementary character of the two segments. Under this condition, two SVZs and one CFZ constitute a horizontal viewing zone in a spatially end-to-end manner. The center point of each SVZ, i.e. VPk and VPk+1 in Fig. 2, is taken as the viewpoint of the corresponding stereo-image. For example, stereo-image k is the projection view of the target object converging to VPk. As shown in Fig. 1, the target object is virtually located between points E and F around the Pdisp. The horizontal sizes of the SVZ and CFZ zones are calculated geometrically as:

Fig. 3 The spatially changing transitional stereo-images observed by a moving observation point at different positions of a CFZ. The observation points A and A' are as denoted in Fig. 2(b).

$$L_{NF} = 2(f/u)\Delta, \qquad L_F = (f/u)d_x, \tag{1}$$
where $L_{NF}$ and $L_F$ denote the horizontal widths of an SVZ and a CFZ, respectively.
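As a quick check of Eq. (1), the sketch below (function names are ours) uses the prototype parameters given in Section 3: f = 60 mm, u = 160/3 mm, dx = 10.08 mm and a horizontal interval dx + 2Δ = 13 mm, i.e. Δ = 1.46 mm.

```python
# Numerical check of Eq. (1) with the Section-3 prototype parameters (names are ours).

def svz_width(f_mm: float, u_mm: float, delta_mm: float) -> float:
    """L_NF = 2*(f/u)*Delta, the horizontal width of a sub-viewing zone."""
    return 2.0 * (f_mm / u_mm) * delta_mm

def cfz_width(f_mm: float, u_mm: float, dx_mm: float) -> float:
    """L_F = (f/u)*d_x, the horizontal width of a complementary fusing zone."""
    return (f_mm / u_mm) * dx_mm

print(svz_width(60.0, 160.0 / 3.0, 1.46))   # -> 3.285 mm
print(cfz_width(60.0, 160.0 / 3.0, 10.08))  # -> 11.34 mm
```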

For a point in an SVZ, the light-rays passing through the point come from all displayed pixels of the corresponding stereo-image. For a point in a CFZ, the light-rays passing through the point come from all displayed pixels of the corresponding transitional stereo-image. Because the transitional stereo-image is tiled from two complementary segments of two adjacent stereo-images, it has the same resolution as a single stereo-image, and the number of light-rays passing through these two kinds of points is identical. The light emission from each OLED pixel inherently has a very large divergence angle. After transmitting through a lens with a limited aperture, each light-ray presents an approximately homogeneous light intensity distribution on the Pobserv. Thus, for any point in the horizontal viewing zone, a uniform light intensity distribution is guaranteed.

The joint point D of the transitional stereo-image is in fact the intersection point of the line AMk with the Pdisp, which moves in linkage with the observation point but in the opposite direction. For example, when the observation point A moves toward A' along the negative direction of the x-axis, the joint point D shifts toward D' along the positive direction, as shown in Fig. 2(b). In the observed transitional stereo-image, the segment from stereo-image k shrinks to the ED' zone and the segment from stereo-image k + 1 expands to the D'F zone, as shown in Fig. 3. That is to say, for an observation point moving along the CFZ, the spatial ratio of the two observed complementary segments from different stereo-images changes gradually from 1:0 to 0:1. Consequently, a continuously changing transitional stereo-image is realized for a moving observation point. With the help of the continuously changing transitional stereo-image, the observed image evolves from the complete stereo-image k to the complete stereo-image k + 1 in a spatial "point by point" manner when the observation point moves from SVZk to SVZk+1. As discussed above, the light intensity distribution in the horizontal viewing zone is approximately homogeneous, so the observed image shows no obvious light intensity fluctuation for a moving observation point.
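This linkage can be sketched numerically. The coordinate convention below is an assumption made for illustration (lens plane at z = 0, Pobserv at z = +f, Pdisp at z = −v); only the geometric relation stated above, namely that D is the intersection of the line AMk with Pdisp, is taken from the text.

```python
# Illustrative sketch of the linkage between observation point A and joint point D.
# Assumed coordinates (not stated explicitly in the text): lens plane at z = 0,
# observation plane P_observ at z = +f, virtual display plane P_disp at z = -v;
# x_mk is the x-coordinate of the joint point M_k of Lenses k and k+1.

def joint_point_x(x_a: float, x_mk: float, f_mm: float, v_mm: float) -> float:
    """Intersection of the line A-M_k with P_disp, obtained by extending A->M_k by (v+f)/f."""
    return x_a + (x_mk - x_a) * (v_mm + f_mm) / f_mm

# With f = 60 mm and v = 480 mm, D moves opposite to A and v/f = 8 times faster,
# so the split ratio of the transitional stereo-image evolves gradually from 1:0 to 0:1.
for x_a in (-1.0, 0.0, 1.0):
    print(x_a, joint_point_x(x_a, x_mk=0.0, f_mm=60.0, v_mm=480.0))
```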

Actually, the pupil of a viewer is an aperture, not a point as discussed above. Replacing the observation point by a pupil of diameter 2d, the joint point D grows into a region, denoted in this paper as the blending region (BR). As shown in Fig. 4, for a pupil in the CFZk~k+1 with marginal points B1 and B2 along the horizontal direction, the marginal points (D1 and D2) of the BR lie on the extension lines of B1Mk and B2Mk. Outside the BR, each point in the ED2 or D1F segment presents to the pupil the image content of stereo-image k or k + 1, respectively. Within the BR, however, the presented content of each point is a hybrid of the two stereo-images' contents with a spatially varying weight. Specifically, the content presented by a point Q in the BR is

$$I_Q = \alpha_{Qk} I_Q^k + \alpha_{Q(k+1)} I_Q^{k+1}, \tag{2}$$
where $I_Q^k$ and $I_Q^{k+1}$ represent the contents at point Q coming from stereo-images k and k + 1, respectively. The weight factors $\alpha_{Qk}$ and $\alpha_{Q(k+1)}$ depend on the position of point Q, evolving spatially and continuously:
$$\alpha_{Qk} = \overline{QD_1}/\overline{D_2D_1}, \qquad \alpha_{Q(k+1)} = \overline{D_2Q}/\overline{D_2D_1}, \qquad \overline{D_2D_1} = 2(\beta-1)d. \tag{3}$$
So, from point D2 to D1, the image content presented to the pupil changes spatially and gradually from the content of stereo-image k to that of stereo-image k + 1. Obviously, for a pupil, the number of segments building up the transitional stereo-image increases from two to three compared with the situation of a point-like observation point.
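The blending of Eqs. (2) and (3) amounts to a simple linear mixing along the BR. The sketch below uses our own names and a 1-D coordinate convention in which D2 adjoins the segment showing stereo-image k, as stated above; it is illustrative only.

```python
# Illustrative sketch of Eqs. (2)-(3) (names are ours).  Positions are 1-D
# x-coordinates on P_disp; D2 adjoins the segment showing stereo-image k and
# D1 the segment showing stereo-image k+1, and the BR width is 2*(beta-1)*d.

def blend_weights(x_q: float, x_d2: float, x_d1: float) -> tuple[float, float]:
    """Return (alpha_k, alpha_k1) for a point Q inside the blending region."""
    width = abs(x_d1 - x_d2)              # = 2*(beta - 1)*d for a pupil of radius d
    alpha_k1 = abs(x_q - x_d2) / width    # grows with the distance from the k-side edge D2
    return 1.0 - alpha_k1, alpha_k1       # alpha_k = |QD1| / |D2D1|

def blended_intensity(i_k: float, i_k1: float, x_q: float, x_d2: float, x_d1: float) -> float:
    """Eq. (2): I_Q = alpha_k * I_Q^k + alpha_k1 * I_Q^{k+1}."""
    a_k, a_k1 = blend_weights(x_q, x_d2, x_d1)
    return a_k * i_k + a_k1 * i_k1

# Example: beta = 9 and pupil radius d = 1.5 mm give a BR width of 24 mm.
print(blend_weights(x_q=6.0, x_d2=0.0, x_d1=24.0))  # -> (0.75, 0.25)
```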

Fig. 4 Optical diagram showing the observed transitional stereo-image when the pupil is covered by a CFZ.

In fact, when a pupil moves from viewpoint k to its adjacent viewpoint k + 1, there are three kinds of situations. In situation I, the pupil is completely covered by the SVZk and the whole stereo-image k is observed. In situation II, the pupil straddles SVZk and CFZk~k+1, as shown in Fig. 5. The intersection point of the line B2Mk with Pdisp is denoted as D3; here, B2 is the marginal point of the pupil lying in the CFZk~k+1. The transitional stereo-image is built up from two segments. One segment, denoted ED3, presents the content of stereo-image k. The complementary segment, D3F, works as the BR. For a point Q' in the BR, the presented content is expressed as:

$$I_{Q'} = \alpha_{Q'k} I_{Q'}^k + \alpha_{Q'(k+1)} I_{Q'}^{k+1}, \tag{4}$$
where the weight factors are calculated by
$$\alpha_{Q'(k+1)} = \overline{Q'D_3}/\bigl(2(\beta-1)d\bigr), \qquad \alpha_{Q'k} = 1 - \alpha_{Q'(k+1)}. \tag{5}$$
Situation III is as given in Fig. 4. The above processes can be generalized to the pupil's movement from one viewpoint to its adjacent viewpoint along the +x or −x direction.
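For completeness, a corresponding sketch of the situation-II weights of Eq. (5) (again with hypothetical names; the clamp outside the BR is our own guard, not part of the derivation):

```python
# Illustrative sketch of the situation-II weights, Eq. (5) (names are ours);
# the BR starts at D3 on the stereo-image-k side.

def straddle_weights(dist_q_to_d3: float, beta: float, d: float) -> tuple[float, float]:
    """Return (alpha_k, alpha_k1) for a point Q' at a given distance from D3."""
    alpha_k1 = dist_q_to_d3 / (2.0 * (beta - 1.0) * d)
    alpha_k1 = min(max(alpha_k1, 0.0), 1.0)   # outside the BR the content is pure
    return 1.0 - alpha_k1, alpha_k1

print(straddle_weights(dist_q_to_d3=12.0, beta=9.0, d=1.5))  # -> (0.5, 0.5)
```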

Fig. 5 Optical diagram showing the observed transitional stereo-image when the pupil straddles the adjacent SVZ and CFZ.

Equations (2)–(5) mathematically indicate that the content of each display point presented to a moving pupil experiences a temporally gradual transition as the BR sweeps across it. For example, the point D3 in Fig. 5 is at the upper edge of the BR and presents the content of stereo-image k. When the pupil translates by a distance of 2d (the diameter of the pupil) along the negative direction of the x-axis, the point D3 becomes the lower edge of the BR, i.e. "D1" in Fig. 4 (situation III), and the presented content changes to that of stereo-image k + 1. With the pupil's movement, the content presented by point D3 thus changes gradually in time from the content of stereo-image k to that of stereo-image k + 1.

In summary, all the above analysis clearly shows that the observed image changes continuously from stereo-image k to stereo-image k + 1 when the pupil moves from viewpoint k to the adjacent viewpoint k + 1. This evolution takes place not only in a spatial "point by point" way for the whole observed image, but also in a temporally gradual way for the content presented by each display point of the observed image.

Introducing more OLED microdisplays into the proposed system, more SVZs and CFZs are formed. Through the continuously changing transitional stereo-images, a multiview 3D display with continuous motion parallax is implemented, provided that the linked SVZs and CFZs cover the two pupils of a viewer. In the case of 2m + 1 microdisplays, with (2m + 1) + 1 baffles inserted and the microdisplays numbered as −m, −(m − 1), ···, 0, ···, (m − 1), m, there are 2m + 1 SVZs and 2m CFZs constructing the horizontal viewing zone (VZ) of the system. For symmetry, let the offset δ0 = 0. As shown in Fig. 6, the mid-point of microdisplay 0, C0, and the mid-point of the line segment EF, C, are on the optical axis of Lens 0. To guarantee the overlapping of the images of all OLED microdisplays, the mid-point of the marginal microdisplay m, Cm, should be imaged to the point C by the corresponding Lens m. According to the geometric relationship, the line CCm passes through the optical center of Lens m. The maximum offset δm of Lens m with respect to the corresponding microdisplay can be expressed geometrically as:

$$\beta\delta_m = \delta_m + m(d_x + 2\Delta) \;\;\Rightarrow\;\; \delta_m = \frac{m(d_x + 2\Delta)}{\beta - 1}. \tag{6}$$
Equation (6) can also be used to calculate the offset of any other lens's optical axis with respect to its corresponding microdisplay by replacing m with the subscript number of that microdisplay.
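Equation (6) can thus be evaluated for all lenses at once; the following sketch (function name is ours) reproduces the lens-cutting offsets listed later in Section 3.

```python
# Evaluation of Eq. (6) for all lenses at once (function name is ours).
def lens_offsets(m: int, dx_mm: float, two_delta_mm: float, beta: float) -> list[float]:
    """delta_k = k*(d_x + 2*Delta)/(beta - 1) for k = -m, ..., 0, ..., m."""
    pitch = dx_mm + two_delta_mm
    return [k * pitch / (beta - 1.0) for k in range(-m, m + 1)]

# Prototype of Section 3: 2m+1 = 9, horizontal pitch 13 mm, beta = 9
# -> offsets 0, ±1.625, ±3.25, ±4.875, ±6.5 mm, matching the lens-cutting positions.
print(lens_offsets(m=4, dx_mm=10.08, two_delta_mm=2.92, beta=9.0))
```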

Fig. 6 Optical diagram showing the offsetting of the optical axis of the Lens m with respect to the corresponding microdisplay.

The rectangular lenses are cut from a group of mother lenses with identical characteristics. To collect the light-rays between adjacent baffles, the horizontal aperture size of the mother lens, which is set by the marginal Lens m with the maximum offset δm, should not be less than:

$$A_x = 2\left(\delta_m + \frac{d_x + 2\Delta}{2}\right) = \frac{(2m + \beta - 1)(d_x + 2\Delta)}{\beta - 1}. \tag{7}$$
Combining Eq. (7) with the numerical-aperture condition N.A. > Ax/f, the number of microdisplays accommodated in the proposed system is limited by:

$$2m + 1 < \frac{f \cdot \mathrm{N.A.}\,(\beta - 1)}{d_x + 2\Delta} - (\beta - 2). \tag{8}$$

To obtain a thin optical structure, a small f is preferred. The parameters 2Δ and β have only a small influence on the number 2m + 1 mathematically. So, the parameters N.A. and dx play the key roles in determining the value of 2m + 1.
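Equations (7) and (8) can be checked numerically. The sketch below (our own helper names) assumes the single-row pitch dx + 2Δ = 22 mm quoted in Section 3 and recovers the nine-microdisplay limit of the prototype.

```python
# Numerical check of Eqs. (7)-(8) (helper names are ours).
import math

def required_aperture(m: int, dx_mm: float, two_delta_mm: float, beta: float) -> float:
    """Eq. (7): A_x = (2m + beta - 1)*(d_x + 2*Delta)/(beta - 1)."""
    return (2 * m + beta - 1.0) * (dx_mm + two_delta_mm) / (beta - 1.0)

def max_display_count(f_mm: float, na: float, dx_mm: float, two_delta_mm: float, beta: float) -> int:
    """Eq. (8): largest odd 2m+1 with 2m+1 < f*N.A.*(beta-1)/(d_x+2*Delta) - (beta-2)."""
    bound = f_mm * na * (beta - 1.0) / (dx_mm + two_delta_mm) - (beta - 2.0)
    count = int(math.floor(bound))
    if count % 2 == 0:
        count -= 1                       # keep the odd 2m+1 form used in the text
    return max(count, 1)

# Assuming the single-row pitch d_x + 2*Delta = 22 mm quoted in Section 3:
print(max_display_count(f_mm=60.0, na=0.75, dx_mm=10.08, two_delta_mm=11.92, beta=9.0))  # -> 9
```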

As discussed above, 2m + 1 SVZs and 2m CFZs construct a horizontal VZ with a size of (2m + 1)LNF + 2mLF. Thus, the viewing angle of the 3D object displayed by the proposed system, i.e. the angle subtended by the VZ with respect to the EF zone, can be calculated approximately as:

$$\text{Viewing Angle} \approx \arcsin\!\left[\frac{(2m + 1)L_{NF} + 2mL_F}{v + f}\right]. \tag{9}$$
Obviously, more microdisplays bring a wider horizontal viewing angle. Equations (8) and (9) together lead to the conclusion that the viewing angle of the 3D object displayed by the proposed system is determined by the N.A. of the lenses used when dx is a constant value.
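A quick check of Eq. (9) with the prototype values of Section 3 (a sketch, not the authors' code):

```python
# Numerical check of Eq. (9) with the prototype values of Section 3 (names are ours).
import math

def viewing_angle_deg(m: int, l_nf_mm: float, l_f_mm: float, v_mm: float, f_mm: float) -> float:
    """Viewing angle ~ arcsin[((2m+1)*L_NF + 2m*L_F) / (v + f)], in degrees."""
    vz_width = (2 * m + 1) * l_nf_mm + 2 * m * l_f_mm
    return math.degrees(math.asin(vz_width / (v_mm + f_mm)))

print(viewing_angle_deg(m=4, l_nf_mm=3.285, l_f_mm=11.34, v_mm=480.0, f_mm=60.0))  # -> about 12.9 deg
```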

3. Experiments and results

A display system is set up to implement the idea described above, as shown in Fig. 7. Nine white OLED microdisplays from OLEiD of China, with a display area dx × dy = 10.08 × 7.56 mm2 and a resolution of 800 × 600, are used. To show the inner structure more clearly, four of the ten baffles are removed for the photograph. Since each OLED microdisplay device is individually packaged with a driving board, its mechanical size is 22 × 17 mm2. With the microdisplays aligned side by side in a single row, the value of 2Δ reaches 11.92 mm. According to Eq. (1), the width LNF of the sub-viewing zone would then be much larger than the pupil diameter, which is about 3 mm in indoor environments. The pupil would receive an invariant stereo-image until it moved out of an SVZ, which would deteriorate the continuity of the motion parallax. In order to reduce the width of the SVZ, the nine OLED microdisplays are arranged alternately into two parallel rows. The vertical interval between the two rows is set as dy + 2Δ = 17.6 mm, a little larger than the vertical mechanical size of the microdisplay, which ensures that adjacent microdisplays (one in the upper row and one in the lower row) have no spatial conflict along the horizontal direction. Thus, the horizontal interval between adjacent microdisplays is reduced to 13 mm in favor of a smaller SVZ width.

Fig. 7 Photograph of the experimental display system.

To guarantee the overlapping of all stereo-images, the optical axes of the two rows of lenses should be arranged with an offset δy = ±(8.8/(β − 1)) mm with respect to the vertical geometrical centers of the corresponding microdisplays. The vertical viewing zones (VZu and VZl) of the microdisplays belonging to different rows are mutually displaced. Their overlapped region defines the vertical range (VZ) of the viewing zone, where the whole stereo-image or transitional stereo-image is visible, as shown in Fig. 8.

Fig. 8 Schematic diagram showing the vertical viewing zone of the display system.

Achromatic lenses with an effective aperture of 45 mm, i.e. N.A. = 0.75, are chosen as mother lenses. Nine kinds of rectangular parts, each with a size of 13 × 36 mm2 to satisfy Eq. (7), are cut out of these mother lenses and processed into the rectangular lenses used in our experiment. Their geometrical centers are 0, ±(dx + 2Δ)/(β − 1), ±2(dx + 2Δ)/(β − 1), ±3(dx + 2Δ)/(β − 1) and ±4(dx + 2Δ)/(β − 1) away from the geometric centers of the mother lenses along the horizontal direction, respectively.

Other system parameters include f = 60 mm and u = 160/3 mm. Under this condition, β = 9, v = 480 mm, LNF = 3.285 mm, LF = 11.34 mm, the vertical range of the VZ is 9.72 mm, and the available size of the stereo-image is 90.72 × 68.04 mm2. The thickness of the optical structure is as small as 65 mm. For symmetry, the displayed 3D object is set as 68 × 68 × 68 mm3. The horizontal range of the VZ is 120.285 mm, which is about 1.9 times the average interocular distance (64 mm) of a viewer. According to Eq. (9), a viewing angle of about 13° is reached by the proposed system.

A uniform light intensity distribution over the viewing zone on the Pobserv is a necessary condition for the proposed system. The inherent light emission characteristic of OLEDs, i.e. a large divergence angle, guarantees this point. With all the microdisplays active and all the pixels set to the maximum intensity value, the light intensity distribution in the viewing zone is measured at a step interval of 3 mm along the x-direction by a Konica Minolta CS-2000A luminance meter. Figure 9 shows the measured values, and a small light intensity fluctuation (<2%) is confirmed. So, the intensities of the different observed stereo-images or transitional stereo-images are nearly uniform, and the image perceived by a moving pupil is free from intensity fluctuation.

Fig. 9 Measured light intensities along the horizontal center line of the viewing zone.

To exhibit how a transitional stereo-image is tiled spatially, a verification experiment is performed. A research-type CCD (SenSys 1602E) is placed in the CFZ0~1 to capture the presented image. For a transitional stereo-image from OLED microdisplays k and k + 1, if only one of them is activated at a time, the presented images are fragments of the transitional stereo-image. Figures 10(a) and 10(b) show the captured images when microdisplay 0 or microdisplay 1, respectively, is activated by its corresponding stereo-image. Figure 10(c) shows the captured image when both of them are activated. Obviously, Fig. 10(c) is the spatial tiling of the former two. In our experiment, the diaphragm of the CCD is always set to 3 mm, which is the average pupil diameter in indoor environments.

Fig. 10 Captured transitional stereo-images at an observation position in the CFZ0~1 when (a) only microdisplay 0 is active, (b) only microdisplay 1 is active, and (c) both of them are active.

Two pyramids are displayed to demonstrate the proposed idea and system. With the CCD placed at ten equally spaced positions along the horizontal direction in the VZ, the captured images are shown in Fig. 11. A more intuitive impression can be obtained from the online video, recorded by a camera translating at a constant speed along the horizontal direction (Media 1). When the BR sweeps across the SVZs, the rate of change of the observed transitional stereo-image becomes slow, as can be seen in the video; this is due to the blank parts (i.e. no content) in the display zone of the stereo-images and to the existence of the SVZs. This phenomenon can be alleviated by adopting fully filled stereo-images and SVZs of smaller size. Nevertheless, the change remains continuous. Five subjects observed the displayed 3D image in the lab environment and no obvious discontinuity of motion parallax was perceived.

Fig. 11 Captured images with a spatial interval of 13 mm along the horizontal direction when the proposed display system works. The labels on each image denote the shooting position, with 0 mm representing the midpoint of the viewing zone. A more intuitive impression can be found in the online video (Media 1).

4. Discussions on the spatial range of the VZ along the z-direction

In the above paragraphs, the discussed SVZs and CFZs are confined to the observation plane Pobserv. Actually, the SVZs and CFZs also occupy some space along the z-direction around the Pobserv, as shown in Fig. 12. Here, only three lenses are drawn for simplicity. When the pupil deviates from the Pobserv along the negative z-direction, the SVZs are enlarged while the CFZs shrink. As a result, the rate of change of the observed transitional stereo-image becomes sometimes faster and sometimes slower. This phenomenon can be seen more clearly in the online video (Media 1). Conversely, if the pupil deviates from the Pobserv along the positive z-direction, this uneven rate of change is remedied to some extent, but crosstalk noise from adjacent stereo-images appears even when the pupil is centered exactly at the viewpoints, i.e. the center points of the SVZs.

Once the pupil moves away from the plane determined by the points M'k-1, M'k, and M'k+1, a new kind of CFZ is generated; the CFZk-1~k~k+1 in Fig. 12 is shown as an example. For an observation point in the CFZk-1~k~k+1, the observed transitional stereo-image is constituted by three segments from the adjacent stereo-images k − 1, k and k + 1, respectively. Although the presented transitional stereo-image still changes in a temporally continuous manner under this condition, the difference between the segments from stereo-images k − 1 and k + 1 becomes even more obvious, resulting in a deterioration of the accuracy of the displayed 3D image. Therefore, in order to display the 3D image more accurately, the region between the Pobserv and the plane determined by the points M'k-1, M'k, and M'k+1 is taken as the preferred observing zone along the z-direction; it is about 20.3 mm deep in our prototype system.

Fig. 12 Schematic diagram showing the spatial expansion of the SVZs and CFZs along the z-direction.

Another problem that needs to be addressed is the aberration introduced by the processed lenses, especially when the lenses have larger offsets with respect to their corresponding microdisplays. Experimentally, anti-distortion through a correction table [19] is performed to alleviate the resulting image distortion for the lenses with larger offsets. The procedure of the anti-distortion correction is illustrated in Fig. 13. A dot pattern, as shown in Fig. 13(a), functions as the target image and is loaded onto one microdisplay while all other microdisplays are shut down. The projected virtual image, as shown in Fig. 13(b), is captured at the corresponding SVZ. By analyzing the distortion of the captured image, a correction table is generated. An anti-distorted image is then obtained based on the correction table, as shown in Fig. 13(c). Loading the anti-distorted image onto this microdisplay, a corrected image is displayed, as shown in Fig. 13(d). This process is performed for each microdisplay-lens pair, and all stereo-images are anti-distorted through their respective correction tables.
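The paper refers to Ref. [19] for the correction-table method and does not give the implementation. The following is only a rough sketch of one way such a table could be built and applied, assuming hypothetical dot-centre correspondences (ideal_pts, captured_pts) extracted from Figs. 13(a) and 13(b) and a captured image registered back to display-pixel coordinates; it is not the authors' code.

```python
# Rough sketch of a dot-grid based anti-distortion table (not the authors' code).
# "ideal_pts" and "captured_pts" are assumed (N, 2) arrays of (x, y) dot centres
# from the target pattern and from the captured, distorted image registered back
# to display-pixel coordinates.
import numpy as np
import cv2
from scipy.interpolate import griddata

def build_antidistortion_maps(ideal_pts: np.ndarray, captured_pts: np.ndarray,
                              height: int, width: int):
    """Densify the sparse 'ideal -> captured' correspondence over the display grid."""
    grid_y, grid_x = np.mgrid[0:height, 0:width]
    # For every ideal display pixel, interpolate where the distortion would send it;
    # sampling the source image there pre-compensates the distortion.
    map_x = griddata(ideal_pts, captured_pts[:, 0], (grid_x, grid_y),
                     method="cubic", fill_value=0.0).astype(np.float32)
    map_y = griddata(ideal_pts, captured_pts[:, 1], (grid_x, grid_y),
                     method="cubic", fill_value=0.0).astype(np.float32)
    return map_x, map_y

def antidistort(stereo_image: np.ndarray, map_x: np.ndarray, map_y: np.ndarray) -> np.ndarray:
    """Pre-warp one stereo-image before loading it onto the microdisplay."""
    return cv2.remap(stereo_image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```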

Fig. 13 Correction of the image distortion by the electronic method.

5. Conclusions

In conclusion, a novel multiview 3D display with continuous motion parallax and full display resolution is realized through controllable spatial-spectrum fusing of the light-rays from planar aligned OLED microdisplays. The proposed system needs only a thin optical structure (65 mm), offering great potential for portable or mobile 3D display applications, provided that the related driving and control systems can be integrated into a monolithic chip. Compared with the existing technologies for bypassing the discontinuous motion parallax issue in multiview 3D display, i.e. the super-multiview display technique and the viewer-tracking technique, the technology proposed in this paper can be implemented with a moderate number of 2D display panels and without a viewer-tracking unit, which endows our system with higher practicability.

Limited by the N.A. = 0.75 of the lenses used and by the mechanical size of the OLED microdisplays, nine OLED microdisplays are employed in the prototype system of this manuscript, and a viewing angle of 13° is reached experimentally. If OLED microdisplays with a smaller size dx become available and lenses with a larger N.A. are used, more natural motion parallax and wider viewing angles can be obtained with the proposed system. This is the focus of our future work.

Acknowledgments

The authors gratefully acknowledge support from the Natural Science Foundation of China (Grant No. U1201254) and the National High Technology Research and Development Program of China (Nos. 2013AA03A106 and 2015AA03A101).

References and links

1. J. Y. Son and B. Javidi, “Three-dimensional imaging methods based on multi-view images,” J. Disp. Technol. 1(1), 125–140 (2005).

2. J. Y. Son, V. V. Saveljev, J. S. Kim, S. S. Kim, and B. Javidi, “Viewing zones in three-dimensional imaging systems based on lenticular, parallax-barrier, and microlens-array plates,” Appl. Opt. 43(26), 4985–4992 (2004).

3. D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29(6), 163–172 (2010).

4. Y. Takaki, O. Yokoyama, and G. Hamagishi, “Flat panel display with slanted pixel arrangement for 16-view display,” in Stereoscopic Displays and Applications XX, A. Woods, N. Holliman, and J. Merritt, eds., Proc. SPIE-IS&T Electronic Imaging 7237, 723708 (2009).

5. L. Bogaert, Y. Meuret, S. Roelandt, A. Avci, H. De Smet, and H. Thienpont, “Demonstration of a multiview projection display using decentered microlens arrays,” Opt. Express 18(25), 26092–26106 (2010).

6. C. van Berkel, D. W. Parker, and A. R. Franklin, “Multiview 3D LCD,” Proc. SPIE 2653, 32–39 (1996).

7. K. H. Yoon, H. K. Ju, I. K. Park, and S. K. Kim, “Determination of the optimum viewing distance for a multi-view auto-stereoscopic 3D display,” Opt. Express 22(19), 22616–22631 (2014).

8. S. Iwasawa, M. Kawakita, S. Yano, and H. Ando, “Implementation of autostereoscopic HD projection display with dense horizontal parallax,” Proc. SPIE 7863, 78630T (2011).

9. N. A. Dodgson, J. R. Moore, S. R. Lang, G. Martin, and P. Canepa, “A time-sequential multi-projector autostereoscopic 3D display,” J. Soc. Inf. Disp. 8(2), 169–176 (2000).

10. J. Reitterer, F. Fidler, G. Schmid, T. Riel, C. Hambeck, F. Saint Julien-Wallsee, W. Leeb, and U. Schmid, “Design and evaluation of a large-scale autostereoscopic multi-view laser display for outdoor applications,” Opt. Express 22(22), 27063–27068 (2014).

11. S. P. Hines, “Autostereoscopic video display with motion parallax,” Proc. SPIE 3012, 208–219 (1997).

12. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photon. 5(4), 456–535 (2013).

13. C. van Berkel and J. A. Clarke, “Characterization and optimization of 3D-LCD module design,” Proc. SPIE 3012, 179–186 (1997).

14. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010).

15. Y. Kajiki, H. Yoshikawa, and T. Honda, “Hologram-like video images by 45-view stereoscopic display,” Proc. SPIE 3012, 154–166 (1997).

16. S. H. Ju, M. D. Kim, M. S. Mark, K. T. Kim, J. H. Park, and K. M. Lim, “Viewer’s eye position estimation using single camera,” SID Symposium Dig. Tech. Pap. 44(1), 671–674 (2012).

17. P. Surman, R. S. Brar, I. Sexton, and K. Hopf, “MUTED and HELIUM3D autostereoscopic displays,” in IEEE International Conference on Multimedia and Expo (ICME), 1594–1599 (2010).

18. O. Eldes, K. Akşit, and H. Urey, “Multi-view autostereoscopic projection display using rotating screen,” Opt. Express 21(23), 29043–29054 (2013).

19. Y. Takaki and H. Nakanuma, “Improvement of multiple imaging system used for natural 3D display which generates high-density directional images,” Proc. SPIE 5243, 42–49 (2003).

Supplementary Material (1)

Media 1: AVI (4997 KB)     



