Optica Publishing Group

Light field display with near virtual-image mode

Open Access

Abstract

This study proposes a near virtual-image method for a flat-panel light field display that uses both lens and aperture arrays. The pixels of the flat-panel display are located between the lens array and its focal plane, which increases the viewing zone. Enlarged virtual images of the pixels are generated, and a single enlarged virtual pixel image is transmitted by each aperture of the aperture array. The aperture array also reduces the aberrations of the lens array. Because the apertures are wider than the pixel pitch, the proposed display achieves a higher light efficiency than conventional aperture-array-type light field displays. The effectiveness of the proposed technique was verified using constructed light field displays with only horizontal parallax.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In light field displays [1–6], the rays emitted from three-dimensional (3D) objects are reconstructed to create 3D images that can be viewed without 3D glasses. Glasses-free 3D displays can be categorized into light field displays, based on ray reconstruction, and holographic displays, based on wavefront reconstruction [7]. The former is easier to commercialize than the latter. The recent increase in the resolution of flat-panel displays has accelerated the development of light field displays. This study proposes a new light field display configuration that uses a flat-panel display to increase both the viewing-zone angle and the brightness of 3D images. Light field displays using a single flat-panel display were originally developed as integral imaging displays [2,3,6]; this study uses the term “light field display” based on the categorization of 3D displays described above.

Several different configurations have been proposed for light field displays. The flat-panel type [8,9] comprises a flat-panel display and an optical device array, e.g., a lens or aperture array [10,11]. The projector-array type comprises multiple projectors and a common screen [12,13], while the multilayer type comprises a stack of flat-panel displays aligned in the depth direction [14,15]. Recently, a head-mounted type was developed to mitigate the visual fatigue caused by the vergence–accommodation conflict [16,17]. As flat-panel type displays have the simplest structure of all configurations, this type is the starting point for the widespread use of light field displays. As described above, the flat-panel type uses two configurations: the lens array and the aperture array. The lens-array type has a higher light efficiency than the aperture-array type, although the viewing zone of the latter is wider than that of the former. Because the flat-panel type provides 3D images with a lower resolution than the projector-array and multilayer types, a time-multiplexing technique has been proposed to increase the 3D image resolution [18].

Three imaging modes have been developed for the lens-array type light field displays; these are identified by the length of the gap between the lens array and the pixels of the flat-panel display. When the gap length is equal to the focal length of the lens array, the pixels are imaged at infinity, i.e., the infinite-image mode [19]. This is the original setup used for integral imaging displays and produces 3D images in the vicinity of the display screen. When the gap exceeds the focal length, real images of the pixels are produced at a finite distance in front of the screen, i.e., the real-image mode [20,21]. This mode is particularly effective for the generation of aerial 3D images [22]. When the gap is shorter than the focal length, virtual images of the pixels are produced at a finite distance behind the screen, i.e., the virtual-image mode [20,23]. This mode has been used to reduce the thickness of head-mounted displays and produces virtual images several meters behind the lens array. This study uses the term “far virtual-image mode” to differentiate the conventional virtual-image mode from the technique proposed in this paper.

This research proposes a new configuration for flat-panel light field displays that uses both lens and aperture arrays to provide a wider viewing zone than that of the lens-array type and a higher light efficiency than that of the aperture-array type. Section 2 explains the proposed method, Section 3 describes the constructed prototype displays, and Section 4 presents the experimental results. The resolution of the prototype display is evaluated in Section 5, and the study’s conclusions are presented in Section 6.

2. Theory

Figures 1(a) and (b) illustrate conventional flat-panel light field displays with a lens array and an aperture array, respectively. The gap between the lens or aperture array and the pixels of the flat-panel display is denoted by g, and the pitch of the lenses or apertures is denoted by p. The focal length of the lens array is denoted by f. The elementary images corresponding to the lenses or apertures are displayed on the flat-panel display. The pitch of the elementary images is denoted by pe. To maximize the width of the viewing zone at a distance (l) from the lens or aperture array, the elementary image pitch should be pe = (1 + g/l) p. When the pixel pitch of the flat-panel display is denoted by q, the resolution of the elementary images is (pe/q) × (pe/q), and the number of rays emitted from each lens or aperture is R = (pe/q)2.
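The relations above can be checked numerically. The following sketch uses purely illustrative values for p, g, l, and q (assumptions for the example, not the prototype parameters of Section 3):

```python
# Elementary image pitch and ray count from Section 2 (illustrative values;
# p, g, l, q below are assumptions, not the prototype parameters).
p = 0.7     # lens/aperture pitch [mm]
g = 1.0     # gap between the array and the pixels [mm]
l = 1200.0  # viewing distance [mm]
q = 0.03    # pixel pitch of the flat-panel display [mm]

# Elementary image pitch that maximizes the viewing-zone width at distance l
pe = (1 + g / l) * p

# Number of rays emitted from each lens or aperture (full 2D parallax)
R = (pe / q) ** 2

print(f"pe = {pe:.4f} mm, R = {R:.0f} rays")
```

Because l is much larger than g, the elementary image pitch pe is only slightly larger than the lens pitch p.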

Fig. 1. Conventional flat-panel light field displays: (a) lens-array type and (b) aperture-array type.

Because of the superior light efficiency of the lens-array type, it is more commonly used for the construction of light field displays than the aperture-array type. As described in Section 1, for the infinite-image mode, the gap is equal to the focal length, i.e., g = f. For the real- and virtual-image modes, the distance from the lens array to the real- and virtual-image planes (denoted by g′) is usually much longer than the gap; consequently, the gap is approximated by the focal length, i.e., g ≃ f. Therefore, the viewing-zone angle of the lens-array type is given by Φ ≃ 2 tan−1(pe/2f). To enlarge this angle, the focal length should be reduced. However, it is difficult to make the focal length shorter than the lens pitch owing to the increase in lens aberration; consequently, the viewing-zone angle cannot be increased effectively using the lens-array type.

In contrast, the viewing-zone angle of the aperture-array type can be enlarged easily by reducing the gap, provided that rays are emitted from the pixels at a sufficiently wide angle. The viewing-zone angle is given by Φ = 2 tan−1(pe/2g). However, to prevent crosstalk among the rays emitted from different pixels and emanating in different directions, the aperture size should equal the pixel pitch of the flat-panel display. Therefore, the light efficiency decreases as the number of ray directions increases. The light efficiency, denoted by η, is given by η = 1/R = (q/pe)2.

Figure 2 presents a schematic diagram of this study’s proposed light field display, which uses both lens and aperture arrays. Instead of reducing the focal length of the lens array, the gap is reduced to enlarge the viewing-zone angle. The viewing-zone angle is given by Φ = 2 tan−1(pe/2g). To increase the viewing-zone angle sufficiently, a substantial reduction in the gap is required. Therefore, the virtual images of the elementary images are produced close to the lens array, i.e., g′ = (1/g − 1/f)−1, and the magnification of the virtual images (M = g′/g) is relatively small. Because of this low magnification, multiple magnified pixel images are observed through each lens. The width of the apertures of the aperture array is therefore made equal to that of one magnified pixel image, so that only one magnified pixel image is observed through each lens. Because the width of the virtual pixel images is Mq, the light efficiency of the proposed technique is η = (Mq/pe)2 = M2/R, which exceeds that of the aperture-array type. The aperture array also blocks rays passing through the peripheral lens areas, where the lens aberration is high. Because the proposed technique produces virtual pixel images close to the lens array, i.e., several millimeters behind it, the proposed imaging mode is referred to in this study as the “near virtual-image mode.”
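As a worked example, the quantities above can be computed for parameter values similar to the 0.3-mm-spacer prototype of Section 3 (the numbers below are illustrative approximations, not authoritative parameters):

```python
# Worked example of the near virtual-image mode quantities (f, g, q, pe are
# illustrative approximations of the 0.3-mm-spacer prototype in Section 3).
f = 0.953    # focal length of the lens array [mm]
g = 0.883    # gap between the lens array and the pixels [mm]
q = 0.0303   # subpixel pitch of the flat-panel display [mm]
pe = 0.710   # elementary image pitch [mm]

g_prime = 1.0 / (1.0 / g - 1.0 / f)  # g' = (1/g - 1/f)^-1, virtual-image distance
M = g_prime / g                      # magnification of the virtual pixel images
R = (pe / q) ** 2                    # ray count (full 2D-parallax formula)

eta_aperture = 1.0 / R               # light efficiency, aperture-array type
eta_proposed = (M * q / pe) ** 2     # light efficiency, proposed technique (= M^2/R)

print(f"g' = {g_prime:.1f} mm, M = {M:.1f}")
print(f"efficiency: {eta_aperture:.4f} (aperture array) vs {eta_proposed:.3f} (proposed)")
```

With these values the virtual-image plane lies roughly 12 mm behind the lens array, and the efficiency gain over the aperture-array type is the factor M2.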

Fig. 2. Proposed light field display using both lens and aperture arrays based on the near virtual-image mode.

Table 1 compares the proposed technique with conventional approaches. The proposed technique has a wider viewing-zone angle than the lens-array type and a higher light efficiency than the aperture-array type. Several previous works have combined an aperture array with the lens-array type to reduce lens aberration and stray light. Reducing the lens aberration allows the focal length of the lens array to be shortened, which enlarges the viewing zone. In contrast, the technique proposed in this study enlarges the viewing zone by reducing the gap between the lens array and the flat-panel display without reducing the focal length of the lens array; this is achieved by utilizing the virtual-image formation of the lens array.

Table 1. Viewing-zone angle and light efficiency for proposed and existing techniques

Although the viewing-zone angle can be enlarged using the near virtual-image mode, the divergence of the rays increases compared with the infinite-image mode. Figure 3(a) depicts a ray generated using the near virtual-image mode, neglecting the diffraction of light. Light from the virtual pixel image passes through the aperture, whose width is equal to that of the virtual pixel image. The width of the central region of constant light intensity, denoted by w1nv, does not change as the light propagates through space. The width of the region where light exists, denoted by w2nv, increases as the light propagates. When the distance measured from the lens array is denoted by z, the two widths are given by the following:

$$w_1^{nv} = Mq, $$
$$w_2^{nv} = Mq + 2\frac{{Mq}}{{g^{\prime}}}z. $$

Fig. 3. Divergence of rays for (a) near virtual-image mode and (b) infinite-image mode.

Figure 3(b) illustrates a ray generated by the infinite-image mode. As the light propagates, the width of the central region of constant intensity, denoted by w1inf, increases. The width of the region where light exists, denoted by w2inf, is obtained by adding 2p to w1inf when z ≥ pf/q. The two widths are given by the following equations:

$$w_1^{inf} = \left|{\frac{q}{f}z - p} \right|, $$
$$w_2^{inf} = \frac{q}{f}z + p. $$

Figures 4(a) and (b) show the ray widths for the near virtual-image and infinite-image modes, respectively. The total ray extent in the near virtual-image mode is larger than that in the infinite-image mode. However, with the near virtual image, there is no change in the width of the central region of constant intensity, although it increases in the infinite-image mode.
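The behavior compared above follows directly from Eqs. (1)–(4); a short sketch with assumed parameter values makes the difference explicit:

```python
# Ray widths of Eqs. (1)-(4) versus propagation distance z (illustrative
# parameter values assumed; all lengths in mm).
M, q, g_prime = 13.6, 0.0303, 12.0  # near virtual-image mode
f, p = 0.953, 0.710                 # infinite-image mode

def widths_near_virtual(z):
    w1 = M * q                              # Eq. (1): constant central width
    w2 = M * q + 2 * (M * q / g_prime) * z  # Eq. (2): total extent grows with z
    return w1, w2

def widths_infinite(z):
    w1 = abs(q / f * z - p)  # Eq. (3): central width changes with z
    w2 = q / f * z + p       # Eq. (4): total extent
    return w1, w2

for z in (0.0, 100.0, 500.0):
    print(z, widths_near_virtual(z), widths_infinite(z))
```

The printout shows the trade-off: the near virtual-image mode has a larger total extent, but its central constant-intensity width stays fixed at Mq.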

Fig. 4. Divergence of rays for (a) near virtual-image mode and (b) infinite-image mode.

3. Experimental system

The effectiveness of the proposed technique was verified using light field displays constructed with only horizontal parallax to prioritize the resolution of 3D images. Thus, a lenticular lens was used as the lens array, and a slit array was used as the aperture array.

The flat-panel display comprised a liquid-crystal display (LCD) with a resolution of 7,680 × 4,320, a screen size of 31.5 in., a pixel pitch of 0.0909 mm, and a subpixel pitch of q = 0.0303 mm (UP3218K, Dell Inc.). The viewing distance at which the viewing-zone width was maximized was l = 1.2 m (∼3 H). A commercial lenticular lens with a focal length of f = 0.953 mm and a lens pitch of 0.699 mm was used. To balance the horizontal and vertical resolutions of the 3D images, the lenticular lens was slanted [24] at an angle of tan−1(1/6) = 9.46° to provide an elementary image with a height of 6 subpixels. The horizontal pitch of the lenses was p = 0.710 mm. A photograph of the lens array, which is made of acrylic resin adhered to a glass plate, is shown in Fig. 5(a). The lens surface was placed toward the flat-panel display.

Fig. 5. Components used to build experimental systems: (a) lens array, and (b) slit array.

To satisfy the near virtual-image mode and increase the viewing-zone angle, the gap between the pixels and the lens array should be decreased appropriately. In this experiment, the gap was adjusted using acrylic sheets as spacers. Three acrylic sheets with thicknesses of 0.2, 0.3, and 0.4 mm were used. The spacers were glued to the LCD using optical adhesive (#NOA146, Norland Products Inc.). For the slit array, a photomask with a thickness of 0.185 mm was used. The optical length between the front surface of the LCD and the pixels was measured, as was the distance between the front surface of the lenticular lens and its principal plane. The gaps obtained with the three spacers were then calculated (see Table 2). With the 0.4-mm spacer, the virtual-image distance was large (g′ = 318 mm), so the conventional (far) virtual-image mode was approximately achieved. With the 0.2- and 0.3-mm spacers, the virtual-image plane was located near the lens array (5.67 and 12.0 mm, respectively), so the near virtual-image mode was achieved. Table 2 also presents the calculated viewing-zone angle (Φ) and the pitch of the virtual subpixel images. Because the pitches of the elementary images were nearly equal for all three spacers, the width of each elementary image was 7.81 subpixels for each color in all three cases. Therefore, there were R = 46.9 ray directions, and the resolution of the 3D images for the three displays was 983 × 720.

Table 2. Parameters of constructed light field displays

The pitch of the slits was equal to the pitch of the lenses. Because the width of a subpixel is generally smaller than its pitch, variations in the widths of the virtual subpixel images were observed. Moreover, the virtual images were distorted around the valleys of the lenses owing to lens aberration. Although the pitches of the virtual subpixel images differed for the 0.2- and 0.3-mm spacers, a common slit array with a slit width of 0.350 mm was used, which was half the slit pitch. With the 0.2-mm spacer, the slit width was wider than the pitch of the virtual subpixel images, whereas with the 0.3- and 0.4-mm spacers, it was narrower. A photograph of the fabricated slit array is shown in Fig. 5(b).

Figures 6(a)–(c) contain photographs of the virtual subpixel images using the 0.2-, 0.3-, and 0.4-mm spacers, respectively. The horizontally enlarged subpixels were observed through the slits; as expected, the widths of the virtual subpixel images were larger than the slit widths with the 0.3- and 0.4-mm spacers. When the 0.2-mm spacer was used, more than one virtual subpixel image was observed, and the average pitch of the virtual subpixels was 0.262 mm. Because of distortion and the deviation of the lenses, the measured result is higher than the calculated value. Figure 6(d) presents a photograph of the virtual images that were obtained using the 0.3-mm spacer when the slit array was removed, and Table 2 contains the parameters for this case. The measured pitch of the virtual subpixels was 0.151 mm. It can be observed in Fig. 6(d) that the virtual pixel images were significantly distorted around the lens valleys.

Fig. 6. Captured virtual images of subpixels using (a) 0.2-, (b) 0.3-, and (c) 0.4-mm spacers with slit array, and (d) 0.3-mm spacer without slit array.

For the real- and virtual-image modes, the central depth plane (CDP) specifies the center of the depth range of 3D images [20,21]. For the near virtual-image mode, the CDP is located on the virtual-image plane, which is located at a distance of g′ from the lens array: 5.67 and 12.0 mm behind the lens array for the experimental systems using 0.2- and 0.3-mm spacers, respectively. The overall image size of the near virtual-image mode is equal to that of other modes, i.e., the screen size of the flat-panel display, which was 31.5 in. for the experimental systems. The ray density (pixels per degree) for the near virtual-image mode, which is the number of pixels in each elementary image divided by the viewing-zone angle, was 0.998 and 1.07 pixels/degree for each color for the systems using 0.2- and 0.3-mm spacers, respectively.

4. Experimental results

The 3D images were generated using computer graphics software. Forty-six parallax images with a resolution of 983 × 720 were rendered using 46 cameras aligned in the horizontal direction at a distance of 1.2 m from the rendering screen. This study considered a ray emitted from every subpixel and refracted by the corresponding lens. The nearest and second-nearest cameras from the ray were determined to interpolate two images, which were rendered using the two cameras to obtain the value of the subpixel.
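A minimal sketch of this interpolation step is given below; the function names and the simplified geometry are illustrative assumptions, not the authors' rendering code:

```python
import bisect

# Sketch of the rendering step described above: for the ray traced from a
# subpixel through its lens, take the nearest and second-nearest camera
# positions and linearly interpolate the two rendered parallax images.
def subpixel_value(ray_x, camera_xs, images, u, v):
    """ray_x: horizontal position where the ray crosses the camera line;
    camera_xs: sorted camera positions; images: one 2D list per camera;
    (u, v): pixel coordinates in the parallax images."""
    i = bisect.bisect_left(camera_xs, ray_x)
    i0 = max(0, min(i - 1, len(camera_xs) - 2))  # nearest pair of cameras
    x0, x1 = camera_xs[i0], camera_xs[i0 + 1]
    t = min(max((ray_x - x0) / (x1 - x0), 0.0), 1.0)  # interpolation weight
    return (1 - t) * images[i0][v][u] + t * images[i0 + 1][v][u]

# Toy usage: three flat single-value "images"; a ray landing midway between
# cameras 0 and 1 yields the average of their values.
cams = [0.0, 10.0, 20.0]
imgs = [[[c] * 20 for _ in range(20)] for c in (0.0, 100.0, 200.0)]
print(subpixel_value(5.0, cams, imgs, 10, 10))  # midway -> 50.0
```

In the actual system this weight is evaluated per subpixel, with the ray direction determined by the subpixel position and the corresponding slanted lens.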

Figure 7 contains photographs of 3D images produced by the constructed displays. These were captured from different horizontal directions (as indicated in the figure), and the viewing-zone angles were wider for the thinner spacers. The measured horizontal viewing-zone angles were 46°, 42°, and 39° when using the 0.2-, 0.3-, and 0.4-mm spacers, respectively. These experimentally obtained angles agreed well with the calculated angles shown in Table 2, being only 1–2° smaller than the calculated values. With the infinite-image mode, the calculated viewing-zone angle was 40.9°. Therefore, the use of the near virtual-image mode increased the viewing-zone angle.

Fig. 7. 3D images captured from different horizontal directions using 0.2-, 0.3-, and 0.4-mm spacers: (a) violin, (b) globe, and (c) lion.

Figure 8 presents some enlargements of the images captured from the central position. In Fig. 8(a), because the violin was located in the range of ±40 mm from the display, no blurs were observed for any of the three displays. However, in Fig. 8(b), a larger blur was observed around the pillar for the display with the thinner spacers. In Fig. 8(c), the letters were more blurred for the displays using thinner spacers. The pillar and the letters were located about 90 and 60 mm, respectively, in front of the display screen. These results confirm that the use of the near virtual-image mode increased blurs in the 3D images.

Fig. 8. Magnified 3D images: (a) violin, (b) globe, and (c) lion.

For comparison, an aperture-array-type display was constructed using a slit array with a slit width of 0.0303 mm (i.e., the subpixel pitch). The lenticular lens was not used, and the 0.3-mm spacer was applied. The 3D images generated by the constructed aperture-array-type display are shown in Fig. 9(a) for the image “globe.” The camera capture conditions were the same as those for the images shown in Fig. 7. The 3D images were dark: when white images were displayed, the screen brightness of the aperture-array system was 9.40 cd/m2, whereas that of the system based on the near virtual-image mode with the 0.3-mm spacer was 109 cd/m2. Thus, the increase in the light efficiency of the proposed technique was confirmed.

Fig. 9. 3D images produced by (a) aperture array type, (b) lens array type, and (c) proposed technique.

Although Section 3 confirmed that the slit array allows only one subpixel image to be observed through each lens, the reduction of lens aberration by the slit array was also verified. For the experimental system with the 0.3-mm spacer, the slit array was replaced with a transparent film of the same thickness to construct a lens-array-type system. The 3D images produced by the lens-array system are shown in Fig. 9(b). For comparison, the 3D images produced by the proposed display system are shown in Fig. 9(c); these are the same images shown in Fig. 7(b) for a spacer thickness of 0.3 mm, with magnified images added. Without the aperture array, image degradation was more clearly observed around the pillar because of the poorer control of ray directions. Therefore, the effectiveness of the slit array in reducing the lens aberration was verified. The screen brightness of the system without the slit array was 194 cd/m2, which was brighter than that of the system with the slit array.

5. Resolution evaluation

This section evaluates the resolution of the 3D images generated by the near virtual-image mode. The blurriness of an image (as perceived by the human eye) depends on the focusing positions of the eyes. First, the formation of retinal images according to the focusing position of the eyes is discussed. Then, retinal images are obtained experimentally to evaluate the resolution. The display system with a 0.3-mm spacer was selected for the evaluation.

5.1 Analysis of retinal image formation

The focusing position of the eyes when observing 3D images is a controversial issue. Generally, it is recognized that visual fatigue results from the vergence–accommodation conflict because the eyes focus on the display screens instead of the 3D images [25]. However, several studies have reported that the eyes can focus on 3D images generated by light field displays [26,27], and super multi-view displays that generate high-density rays have been developed to enable the eyes to focus on 3D images [28,29]. The present study analyzes the resolution of 3D images using two cases; in the first, the eyes focus on the display screen, while in the second, they focus on the 3D images.

Figure 10 illustrates the formation of a retinal image when the eyes focus on the display screen. The center of the eye lens is assumed to be located on the optical axis of one lens of the lens array, and a point object is produced on this axis. Several rays emitted from the flat-panel display are directed by the corresponding lenses to pass through the point object. A line connecting the center of the eye lens with each lens specifies the position on the flat-panel display that the eye observes through that lens. The difference between the light-emission point and the eye-observation point is denoted by Δxm for the m-th lens counted from the central lens, and is given by the following:

$$\Delta {x_m} = mpg\left( {\frac{1}{z} - \frac{1}{l}} \right). $$

Fig. 10. Observation of rays when the eye focuses on a display screen.

If Δxm is larger than half the subpixel pitch (q/2), the eye perceives no light through the m-th lens. Conversely, when Δxm is equal to or smaller than q/2, the eye perceives light through the m-th lens. The perceived light intensity depends on the ratio Δxm/(q/2). The normalized light intensity (Qm) observed through the m-th lens is given by Eq. (6):

$$\left\{ \begin{array}{ll} {Q_m} = 0 & \Delta {x_m} > \frac{q}{2}\\ {Q_m} = 1 - \frac{{2\Delta {x_m}}}{q} = 1 - \frac{{2mpg}}{q}\left( {\frac{1}{z} - \frac{1}{l}} \right) & \Delta {x_m} \le \frac{q}{2} \end{array} \right. $$

If Q0 = 1 and Qm = 0 (m ≠ 0), the object point is imaged as one point on the retina. Otherwise, the object point is imaged as multiple points on the retina, and blur is perceived. Figure 11 shows the calculated Qm for the prototype display. When 3D images are displayed at a distance of ±40 mm from the screen, no blur is observed. When the distance is ±60 mm, because Q±1 is less than 0.5, the highest spatial frequency patterns can still be resolved.
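Equations (5) and (6) can be evaluated for prototype-like parameters (p, g, q, and l below are approximations from Section 3; z is the distance of the 3D point from the lens array):

```python
# Normalized intensity Qm of Eq. (6) for prototype-like parameters
# (p, g, q, l are approximations from Section 3; z is the distance of the
# 3D point from the lens array in mm).
def Q(m, z, p=0.710, g=0.883, q=0.0303, l=1200.0):
    dx = abs(m * p * g * (1.0 / z - 1.0 / l))  # Eq. (5)
    return 0.0 if dx > q / 2 else 1.0 - 2.0 * dx / q  # Eq. (6)

for z in (40.0, 60.0):
    print(f"z = {z:.0f} mm: Q0 = {Q(0, z):.2f}, Q1 = {Q(1, z):.2f}")
```

With these values, Q1 vanishes near z = 40 mm (only the central lens contributes, so no blur) and stays below 0.5 at z = 60 mm, consistent with the resolvability argument above.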

Fig. 11. Intensities of rays observed through lenses when the eye focuses on a display screen.

When the eyes focus on 3D images, the width of the rays at the focused depth position determines the blur of those 3D images. Here, it is assumed that the blur can be obtained by w2nv in Eq. (2). Figure 12 shows the calculated blur for the prototype display. The w2nv/p value was calculated to normalize the blur by the lens pitch. Because w2nv contains the aperture width, the highest spatial frequency patterns can be resolved when w2nv/p < 2. Therefore, the prototype display can produce the highest spatial frequency patterns when 3D images are displayed at a distance of −27.8 to +16.7 mm from the lens array.

Fig. 12. Blur of rays normalized by lens pitch when the eye focuses on a 3D image.

5.2 Experimental evaluation

The resolution evaluation used a test image comprising five different line patterns. The line widths were equal to the widths of one, two, three, four, and five lenses. The line patterns were slanted at the same angle as the lens array to enable a direct evaluation of line blurring. Retinal images were captured using a USB camera with a 5-mm pupil diameter (the average pupil diameter of a human eye) instead of a human eye. The distance between the display screen and the camera was 1,200 mm, and the test image was displayed within the range of −100 mm (behind the screen) to +100 mm (in front of the screen) at 20-mm intervals.

Figure 13 presents the retinal images when the camera was focused on the display screen, while Fig. 14 shows those obtained when the camera was focused at the depth positions where the test images were displayed. The intensity profiles of the vertical line patterns with the highest spatial frequency are provided in Fig. 15.

Fig. 13. Retinal images when the focus was on the display screen: (a) vertical and (b) horizontal lines.

Fig. 14. Retinal images when the focus was on the 3D images: (a) vertical and (b) horizontal lines.

Fig. 15. Intensity profiles of vertical lines with the highest spatial frequency when the focus was on (a) the display screen and (b) the 3D images.

First, the horizontal resolution is considered. When the camera was focused on the display screen, Fig. 15(a) shows that the vertical line patterns with the highest spatial frequency were resolved in the depth range from −60 to +60 mm. This result agrees well with the analysis in Fig. 11 described in Section 5.1, which predicts that the line patterns displayed at −60, −40, −20, 0, +20, +40, and +60 mm could be resolved. When the camera was focused at the depth positions where the test images were displayed (Fig. 15(b)), the vertical line patterns with the highest spatial frequency were resolved in the depth range from −20 to +40 mm. Except for the line pattern at +40 mm, these results correspond well with the analysis in Fig. 12 described in Section 5.1, which predicts that the line patterns displayed at −20, 0, and +20 mm could be resolved.

Next, the vertical resolution is discussed. Because the prototype display had only horizontal parallax and rays were diffused in the vertical direction, the vertical image blur depended on the camera’s pupil diameter. As the camera’s focal position moves away from the display screen, the vertical blur increases. When the camera was focused at a depth position of +100 mm, the maximum blur was 0.42 mm. Because this maximum blur was smaller than the lens pitch (the vertical pixel pitch of the parallax images), the horizontal lines were resolved at all depths.
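The quoted maximum blur is consistent with a simple similar-triangles estimate using the 5-mm pupil (a sketch under that geometric assumption):

```python
# Similar-triangles estimate of the vertical blur for a horizontal-parallax-only
# display (a sketch; the pupil acts as the limiting aperture).
pupil = 5.0   # camera aperture diameter [mm]
l = 1200.0    # distance from the camera to the display screen [mm]
dz = 100.0    # camera focused 100 mm in front of the screen [mm]

blur = pupil * dz / l  # vertical blur width at the screen plane [mm]
print(f"maximum vertical blur = {blur:.2f} mm")
```

This evaluates to about 0.42 mm, below the 0.710-mm lens pitch, matching the conclusion above.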

As mentioned above, because the horizontal resolution of the 3D images depends on the focusing position of the eyes, a subjective evaluation was conducted with eight male subjects with visual acuities of 0.7, 0.8, 0.8, 0.8, 0.9, 0.9, 0.9, and 1.0. A test image was displayed at a distance of −40 mm from the display screen because, at this distance, the vertical line pattern with the highest spatial frequency was resolved when the eyes focused on the display screen but not when they focused on the 3D image. Of the eight subjects, five could resolve the highest spatial frequency vertical line pattern, while three could not. The five subjects may have focused on the display screen, and the three subjects may have focused on the 3D image; thus, the focusing position of the eyes for the line pattern might be specific to each person. As shown in Fig. 15, the contrast of the line pattern when the eyes focused on the screen was higher than when they focused on the 3D image. However, even when the eyes focused on the screen, the contrast was not high, and some subjects reported observing the line pattern as two lines; this case was categorized as “not resolved.” Therefore, there might be variance in the subjects’ answers. Additionally, a subjective evaluation was conducted for the line pattern with the highest spatial frequency displayed at a distance of −20 mm; all subjects could resolve this line pattern, possibly because its contrast was high both when the eyes focused on the screen and when they focused on the line pattern, as shown in Fig. 15. Because the number of subjects in these subjective evaluations was small, detailed evaluations with more subjects should be conducted to analyze the perceived resolution.

6. Conclusions

This study proposed a flat-panel-type light field display using the near virtual-image mode. Using both lens and aperture arrays, the proposed display provides a wider viewing zone compared with the lens-array-type light field display and brighter 3D images compared with the slit-array type.

Three light field displays with different viewing-zone angles were constructed based on the proposed technique. They provided only horizontal parallax and emitted rays in 46 horizontal directions. Their horizontal resolution was 1.41 pixels/mm, which was the number of lenses per mm. The vertical resolution was 1.83 pixels/mm, which was the vertical resolution of the parallax images used to generate the 3D images. The measured viewing-zone angles were 39°, 42°, and 46°, and the transmittance of the slit array was 50%. The resolution was evaluated for the display with a 42° viewing-zone angle. The highest spatial frequency (where the line and lens pitches were equal) was resolved in the depth range from −60 to +60 mm when the camera was focused on the display screen and in the depth range from −20 to +40 mm when the camera was focused at the depth position where the line pattern was displayed.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. 7(1), 821–825 (1908).

2. B. Javidi and S. H. Hong, “Three-dimensional holographic image sensing and integral imaging display,” J. Disp. Technol. 1(2), 341–346 (2005).

3. J. H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).

4. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87–H115 (2011).

5. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013).

6. X. Xiao, B. Javidi, M. Martínez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: Sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013).

7. F. Yaras, H. Kang, and L. Onural, “State of the art in holographic displays: a survey,” J. Disp. Technol. 6(10), 443–454 (2010).

8. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37(11), 2034–2045 (1998).

9. T. Naemura, T. Yoshida, and H. Harashima, “3-D computer graphics based on integral photography,” Opt. Express 8(4), 255–262 (2001).

10. B. Lee, S. Jung, S. W. Min, and J. H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26(19), 1481–1482 (2001).

11. S. K. Kim, K. H. Yoon, S. K. Yoon, and H. Ju, “Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display,” Opt. Express 23(10), 13230–13244 (2015).

12. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004).

13. J.-S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29(11), 1230–1232 (2004).

14. D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, and R. Raskar, “Polarization fields: dynamic light field display using multi-layer LCDs,” ACM Trans. Graph. 30(6), 1–10 (2011).

15. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays: Realization of augmented reality with holographic optical elements,” ACM Trans. Graph. 35(4), 1 (2016).

16. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field display with focus cues,” ACM Trans. Graph. 34(4), 1 (2015).

17. K. Akşit, J. Kautz, and D. Luebke, “Slim near-eye display using pinhole aperture arrays,” Appl. Opt. 54(11), 3422–3427 (2015).

18. A. Schwarz, J. Wang, A. Shemer, Z. Zalevsky, and B. Javidi, “Lensless three-dimensional integral imaging using variable and time multiplexed pinhole array,” Opt. Lett. 40(8), 1814–1817 (2015).

19. T. Okoshi, Three-Dimensional Imaging Techniques (Academic Press, New York, 1976).

20. J. S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. 28(16), 1421–1423 (2003).

21. F. Jin, J. S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29(12), 1345–1347 (2004).

22. Y. Momonoi, K. Yamamoto, Y. Yokote, A. Sato, and Y. Takaki, “Light Field Mirage using multiple flat-panel light field displays,” Opt. Express 29(7), 10406–10423 (2021).

23. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 220 (2013).

24. C. van Berkel and J. A. Clarke, “Characterization and optimization of 3D-LCD module design,” Proc. SPIE 3012, 179 (1997).

25. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008).

26. Y. Kim, J. Kim, K. Hong, H. K. Yang, J.-H. Jung, H. Choi, S.-W. Min, J.-M. Seo, J.-M. Hwang, and B. Lee, “Accommodative response of integral imaging in near distance,” J. Disp. Technol. 8(2), 70–78 (2012).

27. H. Hiura, K. Komine, J. Arai, and T. Mishina, “Measurement of static convergence and accommodation responses to images of integral photography and binocular stereoscopy,” Opt. Express 25(4), 3454–3468 (2017).

28. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006).

29. Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011).

Figures (15)

Fig. 1. Conventional flat-panel light field displays: (a) lens-array type and (b) aperture-array type.
Fig. 2. Proposed light field display using both lens and aperture arrays based on the near virtual-image mode.
Fig. 3. Divergence of rays for (a) near virtual-image mode and (b) infinite-image mode.
Fig. 4. Divergence of rays for (a) near virtual-image mode and (b) infinite-image mode.
Fig. 5. Components used to build experimental systems: (a) lens array, and (b) slit array.
Fig. 6. Captured virtual images of subpixels using (a) 0.2-, (b) 0.3-, and (c) 0.4-mm spacers with slit array, and (d) 0.3-mm spacer without slit array.
Fig. 7. 3D images captured from different horizontal directions using 0.2-, 0.3-, and 0.4-mm spacers: (a) violin, (b) globe, and (c) lion.
Fig. 8. Magnified 3D images: (a) violin, (b) globe, and (c) lion.
Fig. 9. 3D images produced by (a) aperture-array type, (b) lens-array type, and (c) proposed technique.
Fig. 10. Observation of rays when the eye focuses on a display screen.
Fig. 11. Intensities of rays observed through lenses when the eye focuses on a display screen.
Fig. 12. Blur of rays normalized by lens pitch when the eye focuses on a 3D image.
Fig. 13. Retinal images when the focus was on the display screen: (a) vertical and (b) horizontal lines.
Fig. 14. Retinal images when the focus was on the 3D images: (a) vertical and (b) horizontal lines.
Fig. 15. Intensity profiles of vertical lines with the highest spatial frequency when the focus was on (a) the display screen and (b) the 3D images.

Tables (2)

Table 1. Viewing-zone angle and light efficiency for proposed and existing techniques
Table 2. Parameters of constructed light field displays

Equations (6)

$$w_1^{nv} = Mq,$$
$$w_2^{nv} = Mq + \frac{2Mqg}{z}.$$
$$w_1^{inf} = \left|\frac{qf}{z} - p\right|,$$
$$w_2^{inf} = \frac{qf}{z} + p.$$
$$\Delta x_m = mpg\left(\frac{1}{z} - \frac{1}{l}\right).$$
$$\begin{cases} Q_m = 0, & \Delta x_m > \dfrac{q}{2} \\ Q_m = 1 - \dfrac{2\Delta x_m}{q} = 1 - \dfrac{2mpg}{q}\left(\dfrac{1}{z} - \dfrac{1}{l}\right), & \Delta x_m \le \dfrac{q}{2} \end{cases}$$