
Integral imaging based light field display with holographic diffusor: principles, potentials and restrictions


Abstract

Under the framework of the light field, the diffusing angle and the spatial location of the holographic diffusor are investigated for the integral imaging based light field display. These two parameters are considered in terms of continuous light field reconstruction of the object point and the elimination of blind visual spots for the viewer. The concept of joint view reconstruction, as well as the essence of this view interpolation, is analyzed theoretically, and a new phenomenon caused by the additional holographic plane, called double window violation, is also deduced. To the best of our knowledge, this is the first report of a quantitative analysis of these two key parameters of the holographic diffusor, deduced theoretically and verified experimentally.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field display (LFD) is a newly emerging and promising 3D display technology, which can provide true-color, full-parallax, motion-parallax 3D vision without any visual aids [1–3]. The integral imaging based light field display system (IILFD), which is usually constructed by placing a holographic diffusor at a certain position in front of a conventional integral imaging system, inherits the manifold merits of integral imaging display, can reconstruct a continuous light field, and provides a better 3D experience [4,5]. Although viewing parameters such as the resolution limitation, viewing angle, depth of field and design parameters of the conventional integral imaging (InIm) system have been investigated by several research groups [6–16], the analysis of the IILFD system, especially the diffusing angle and the location of the holographic diffusor, has not yet been fully addressed.

Previous research concentrated on the analysis of InIm's performance in terms of parameters such as the lateral resolution, the depth resolution, the viewing angle and the trade-off relationships among them. Burckhardt, from the point of view of diffraction, presented a theory of the lateral resolution of the conventional InIm system to determine the optimum diameter of the lens array; however, his analysis introduced an approximation to solve the problem analytically and considered the point light source only [6]. Hoshino et al. analyzed the resolution limitation of InIm by considering the maximum spatial frequency measured at the viewpoint, and also discussed the implementability of InIm as a practical 3D display [7]. Park et al. described and compared the viewing parameters of InIm in both the real and the virtual mode, analyzing the relationship between the lateral resolution of InIm and the resolution of the display panel. They found that InIm in the real mode has a better lateral resolution, while InIm in the virtual mode is superior in viewing angle; the depth resolution of the two modes is almost the same [8]. Jang et al. discussed the relation between the resolution and the depth of a 3D integral image and proposed the use of the product of depth and resolution squared (PDRS) as a figure of merit for InIm systems [17], which reveals that a PDRS of more than 1/λ cannot be achieved with a single ideal diffraction-limited InIm system without introducing additional system devices (λ here refers to the illumination wavelength). However, that analysis assumes the resolution of the elemental images is sufficiently high, whereas a practical display panel or 2D image sensor usually has a pixelated structure and only a limited resolution with a finite number of pixels; the effects of device resolution on InIm should be taken into consideration to obtain a more precise result. Jin et al. evaluated the performance of InIm as the number of pixels changes and deduced the minimum number of pixels required in each elemental image to avoid depth-of-focus degradation [9]. Martínez-Corral et al. investigated the resolution degradation caused by the multifaceted structure of the elemental fields of view (FOVs) [18]; however, their analysis concentrated on the optimization of the viewing distance of the InIm system rather than the optimization of the system architecture (e.g., the parameters or the position of the lens array). Min et al. developed the characteristic equation of the integral imaging system, which enables the total viewing quality of InIm to be evaluated quantitatively [10]; however, the analysis is based on rough ray optics, and the result can only give the variation tendency of the viewing parameters. Tavakoli et al., for the first time, quantitatively analyzed the effects of sensor position uncertainty on 3D reconstruction in synthetic aperture integral imaging systems [12], but their analysis disregarded the lenslet position uncertainty in the reconstruction process, which has a huge impact on optical reconstruction. Kavehvash et al. evaluated this factor and exploited an irregular lens array (lenslets rotated or displaced from their original positions in the conventional flat lens array) to partly resolve the depth-of-field limitation [13]; however, the system is complicated and needs meticulous optimization. Cho et al. utilized fixed resource constraints to obtain optimized system parameters such as the lens numerical aperture, the pitch between image sensors, the number of image sensors, the pixel size, and the number of pixels [14], but a brute-force search algorithm is needed to find the optimized parameters, which is inefficient. Luo et al. focused on the analysis of the depth of field of the InIm system, which is beneficial for better understanding and designing InIm displays [15]. Yang et al. quantitatively evaluated the effects of both aperture and aberration on InIm performance [16], fully modeling the human eye to give a subjective assessment.

The above introduction focuses on the history and advances of IILFD in the optics community, but one cannot omit the significant parallel contributions of the computer graphics and rendering community. Early works by Levoy et al. [19] and Gortler et al. [20] laid the foundation of the 4D light field parameterization we describe. Chai et al. [21] introduced the frequency analysis of light fields, also known as plenoptic sampling theory. Isaksen et al. [22] applied the frequency analysis of ray-space signals to study aliasing on 3D autostereoscopic displays, thus making the connection between light fields and autostereoscopic displays. Later work by Zwicker et al. [23] extended the frequency analysis of light fields and derived the bandwidth of 3D displays. Since the setup of a display panel and a lens array, offset from a diffusing element, is optically equivalent to a projector array with a holographic diffusor, the former setup can be substituted by the latter to alleviate the restriction imposed by the limited bandwidth of the display panel. Kawakita et al. [24] proposed an autostereoscopic display system using multiple projectors and an optical screen consisting of a special diffuser film with a large condenser lens; the relationships between the diffusion characteristics of this film and the 3D image quality were analyzed. Jones et al. [25,26] observed how holographic diffusers fill the gaps between discrete emitters for different screen geometries (flat, curved, transmissive, and reflective), but the analysis was presented only qualitatively. To the best of our knowledge, there is still no systematic and quantitative analysis of the role of the holographic screen in the IILFD.

Since the IILFD is inherited from the traditional InIm, all these creative ideas provide salutary lessons for the analysis of the IILFD system, and we can mainly focus on the differences between the two systems. Two obvious distinctions between the conventional InIm and the IILFD are the substitution of the micro lens array with a macro lens array (MaLA) and the use of a holographic diffusor; besides, the macro lenses are often sparsely packed to obtain an enlarged FOV. These changes bring some new characteristics to the IILFD system.

Under the framework of the light field, the reconstruction characteristics and the optimal design of the integral imaging based light field display system are investigated. The rest of the paper is organized as follows. The mechanism for continuous light field reconstruction is analyzed from the point of view of the object point in subsection 2.1; from the point of view of the observer, the joint reconstruction mechanism of adjacent view interpolation is revealed in subsection 2.2; the double window violation phenomenon introduced by the additional holographic diffusor plane is illustrated in subsection 2.3; and the diffusing angle and location of the holographic diffusor are analyzed in subsection 2.4. Section 3 is devoted to experiments validating the diffusing angle selection strategy, the location of the holographic diffusor, and the double window violation phenomenon. Finally, in section 4, we summarize the main achievements of this research.

2. Theory and modeling

2.1. Continuous light field reconstruction using MaLA assisted with holographic diffusor

The light field can be considered as the set of light rays flowing in every direction through every point in space; it is omnidirectional and continuous. If the light field of an object is captured and reconstructed, a 3D scene of the object can be perceived from any viewing point, for the viewer observes light rays of the same direction and intensity as in the original scene [27]. Such a system can be thought of as an ideal 3D imaging system and is intuitively described in Fig. 1, where the spherical boundary surface divides the 3D space into two parts, i.e. the viewing space where the viewer is located and the visual space where the object is located. However, recording and reconstructing the continuous light field is impractical or unattainable for all but the most trivial cases; in practice, we must record and reconstruct it with finite representations in the form of discretely sampled parallax images. Also, the boundary is generally not closed: the rays are often collected by a planar recording device over a finite area, which is the case in conventional photography and InIm systems.

Fig. 1 A heuristic description of an ideal 3D imaging system, where all the rays emerging from the 3D object and flowing through a sphere boundary surface S need to be captured and reconstructed.

Following the terminology for light ray distributions used by Adelson and Bergen, we also refer to the function describing the distribution of light radiance in space as the plenoptic function [28,29]. The original definition of the plenoptic function also includes parameters such as time and light wavelength; however, these two parameters are considered implicitly here to simplify the notation. The radiance at a spatial point a(x, y, z) with propagation direction d(ϕ, θ) = (sin ϕ cos θ, sin ϕ sin θ, cos ϕ) can be denoted as a five-dimensional function p(x, y, z, ϕ, θ), where ϕ ∈ [0, π] and θ ∈ [0, 2π) correspond to the variables of the standard spherical coordinate system. This representation of the plenoptic function has a certain amount of redundancy when the light propagates in a transparent medium and the radiance is conserved, i.e.

$$p(x,y,z,\phi,\theta)=p(x+r\sin\phi\cos\theta,\ y+r\sin\phi\sin\theta,\ z+r\cos\phi,\ \phi,\ \theta),\qquad(1)$$
where r is an arbitrary value that corresponds to unoccluded points in a transparent medium.
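To make Eq. (1) concrete, the following minimal sketch (with a hypothetical boundary radiance function; none of these names come from the paper) traces a ray back to the z = 0 plane and checks that the radiance is the same anywhere along the ray:

```python
import numpy as np

def direction(phi, theta):
    """Unit propagation vector for spherical angles (phi, theta)."""
    return np.array([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)])

def plenoptic(x, y, z, phi, theta, radiance_at_z0):
    """Radiance at (x, y, z) along (phi, theta) in a transparent,
    unoccluded medium: trace the ray back to the z = 0 plane (Eq. (1))."""
    d = direction(phi, theta)
    r = -z / d[2]                       # parameter that lands the ray on z = 0
    x0, y0 = x + r * d[0], y + r * d[1]
    return radiance_at_z0(x0, y0, phi, theta)

# Hypothetical radiance on the z = 0 plane, standing in for a display.
p0 = lambda x, y, phi, theta: np.exp(-(x**2 + y**2)) * np.cos(theta)

# Radiance conservation: the same value is observed anywhere along the ray.
phi, theta = 0.3, 1.0
point = np.array([0.1, 0.2, 0.5])
v1 = plenoptic(*point, phi, theta, p0)
v2 = plenoptic(*(point + 2.0 * direction(phi, theta)), phi, theta, p0)
assert np.isclose(v1, v2)
```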

The apparatus studied in this work is meant to recreate the plenoptic function on a surface by using a planar display, so we can simplify the display's plenoptic function by assuming that the display system is at the z = 0 plane:

$$p_d(x,y,\phi,\theta)=p^{*}(x,y,0,\phi,\theta),\qquad(2)$$
where the asterisk * means that the radiance values are equal to the values of the plenoptic function in unoccluded parts of the display; the radiance of the other parts is deduced from values extrapolated using Eq. (1). This parameterization of the plenoptic function is consistent with the dual-plane parameterization established and described by Levoy et al. [19] and Gortler et al. [20].

Consider the light intensity distribution of the vertex point of the cube within the viewing angle interval [θ1, θ2], as specified in Fig. 2(a); the corresponding light intensity variation along the angular dimension is depicted in Fig. 2(b). In traditional photography, the angular information is missing, for the sensor collects all the radiance of a point source passing through the lens without distinguishing its incident angle. The final intensity is an integral of the radiance over all effective incident angles, which are closely related to the aperture boundary of the lens, as depicted in Fig. 2(c).
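For contrast, a brief numerical sketch of the photographic case (hypothetical radiance function and aperture cone; purely illustrative): the pixel value integrates the plenoptic function over the solid angle admitted by the aperture, discarding the angular information.

```python
import numpy as np

# Hypothetical display-plane radiance with a mild angular variation.
p_d = lambda x, y, phi, theta: 1.0 + 0.5 * np.sin(phi) * np.cos(theta)

def pixel_intensity(x, y, phi_max, n=256):
    """Integrate p_d over the cone of incidence angles [0, phi_max]
    admitted by the lens aperture; the result is one scalar per pixel,
    so the direction of each ray is irretrievably lost."""
    phi = np.linspace(0.0, phi_max, n)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    P, T = np.meshgrid(phi, theta)
    d_omega = np.sin(P) * (phi_max / n) * (2.0 * np.pi / n)  # solid-angle element
    return float(np.sum(p_d(x, y, P, T) * d_omega))

print(pixel_intensity(0.0, 0.0, np.radians(5.0)))  # a single intensity value
```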

Fig. 2 (a) Sketch map of the viewing angle interval of the sampled light field, (b) the continuous light intensity distribution of the light field specified in (a), (c) the light capture scheme of the traditional photography.

InIm works in a different mode: it can record and reconstruct light rays with both their intensity and direction. Traditionally, in an InIm display system, the recorded elemental images are loaded onto a 2D display panel covered by a micro-lens array (MiLA). Generally, the lenslets used in InIm for both capturing and displaying the light field are relatively small, with a lens pitch of less than 1 or 2 mm, some even on the order of several hundred microns. The densely distributed lenslets sufficiently sample (see Fig. 3(a)) and reconstruct (see Fig. 3(b)) the light field in the angular dimension. The restored light field of the traditional InIm system with a MiLA is illustrated in Fig. 3(c). Note that the light field captured and reconstructed here is discrete. The grid-like structure of the lens array is noisy and troublesome for a good viewing experience, but it can be negligible and acceptable if the lenses are small enough and the viewing distance is far enough. The most severe drawback of the traditional InIm display is its relatively small viewing angle, which restricts the viewing zone in which the observer can obtain the 3D experience.

Fig. 3 (a) The light field acquisition, (b) reconstruction scheme and (c) the restored light field of the traditional InIm system with MiLA, (d), (e), (f) are the counterparts for the IILFD with MaLA, respectively.

However, in the IILFD demonstrated in [4,5,30,31], the lenses used in the lens array are relatively large, with a lens diameter bigger than 2 mm. To distinguish it from the conventional MiLA, we refer to it as a macro-lens array (MaLA) [32]. In the light field capture process, as shown in Fig. 3(d), the object is sparsely sampled in the angular dimension compared with the conventional InIm case. A similar phenomenon can be seen in the reconstructed light field (see Fig. 3(e)) and the restored light distribution (see Fig. 3(f)). Unfortunately, the angularly down-sampled light field cannot be sufficiently reproduced without other devices or components, for two reasons: first, the large size of the lenses in the MaLA makes the grid-like lens-array pattern noticeably observable, and second, the image the viewer perceives is low in resolution because of the reduced number of lenses. Thus, a holographic diffusor is used to obtain a continuous light field, and our previous work has proved its effectiveness from the perspective of signal processing [30]. Here, we study this problem from the point of view of the observer and of the angular-spectrum extension of the holographic diffusor, to address the two key parameters of the IILFD, i.e. the diffusing angle and the spatial location of the holographic diffusor.

2.2. Joint reconstruction mechanism based on adjacent view interpolation

In an IILFD, as depicted in Fig. 4(a), all the information contained in the plenoptic function can be replicated without any redundancy by using a two-dimensional flat display panel with two spatial dimensions (x, y) and two angular dimensions (φ, θ). This conforms to our previous study, which indicates that, in terms of light field reconstruction, the IILFD system can be thought of as a spatial superposition of a series of parallel light fields from a certain range of directions. Since dual planes are used in the InIm system, it is more convenient to use the Cartesian angular dimensions (i, j): the LA plane can be considered as the spatial (x, y) plane, and the display panel (DP) can be considered as the angular (i, j) plane; note that the (i, j) coordinate is closely related to the location of the lens center.
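As a rough 1-D sketch of this dual-plane indexing (assumed pinhole geometry; the pitches and gap are the prototype values quoted later in Section 3):

```python
# Minimal 1-D sketch of the dual-plane parameterization (assumed pinhole
# geometry): a ray is indexed by the lens it leaves on the LA plane
# (spatial coordinate) and the pixel it comes from on the DP (angular
# coordinate i, counted from the lens centre).
p_l = 12.6    # lens pitch on the LA plane (mm), prototype value
p_px = 0.072  # pixel pitch on the DP (mm), prototype value
g = 10.7      # DP-to-LA gap (mm), prototype value

def ray(m, i):
    """Ray leaving lens m with angular index i -> (x on LA, slope dx/dz)."""
    x_lens = m * p_l                 # lens centre on the spatial plane
    x_pix = x_lens + i * p_px        # emitting pixel on the angular plane
    return x_lens, (x_lens - x_pix) / g

# The central pixel (i = 0) yields the lens's axial ray; off-centre
# pixels tilt the outgoing ray.
print(ray(3, 0))   # -> (37.8, 0.0)
print(ray(3, 50))  # tilted ray from an off-centre pixel
```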

Fig. 4 (a) Typical structure of the IILFD, (b) Light ray distribution of the InIm system without using holographic diffusor.

For an object field o(x, y), the sampled and reconstructed light fields satisfy

$$\tilde{o}_r(x,y)=\tilde{o}_s(x,y)=\sum_{m,n}o(x,y)\,\delta(x-m\Delta x)\,\delta(y-n\Delta y),\qquad(3)$$
where $\tilde{o}_r(x,y)$ is the reconstructed light field, $\tilde{o}_s(x,y)$ is the sampled light field, and the sampling/reconstruction intervals Δx and Δy are defined as the lens pitches between two adjacent columns and two adjacent rows, respectively. Here, the angular dimension of the object field is considered implicitly.

Assume that the original resolution of the DP is I × J, its physical size is a × b, the pitches of the lenses in the horizontal and vertical directions are both pl, i.e. Δx = Δy = pl, and the number of lenses is M × N. Then the resolution of the reconstructed orthographic image, with consideration of the scale factor, is (I/s, J/s), where s is a scale factor determined by the Gaussian lens law. In our implementation, zero-valued points are padded between two adjacent lenses; these correspond to the black zones in the views, since the lenses are sparsely arranged in space. The number of zero-valued points nz is given by the following expression.

$$n_z=\left\lfloor\frac{p_l\times I}{s\times a}\right\rfloor-1.\qquad(4)$$

Here, ⌊x⌋ denotes the floor operation, which rounds each element of x down to the nearest integer less than or equal to that element.
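A one-line check of Eq. (4) (the scale factor s below is an assumed, illustrative value of the order implied by the prototype geometry, not a figure reported in the paper):

```python
import math

def zero_padding_count(p_l, I, s, a):
    """Eq. (4): number of zero-valued (black) points padded between two
    adjacent lenses, n_z = floor(p_l * I / (s * a)) - 1."""
    return math.floor(p_l * I / (s * a)) - 1

# Illustrative numbers: lens pitch p_l = 12.6 mm, I = 3840 pixels across a
# panel of width a = 277 mm (Section 3), and an assumed scale factor s = 14.2.
print(zero_padding_count(12.6, 3840, 14.2, 277.0))  # -> 11
```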

As depicted in Fig. 5(a), in the central orthographic view reconstructed by the InIm, only five pixels (shown in gray) are sparsely rebuilt; black zones (see the dark gray parts) appear when the pitch between the lenses becomes larger, which greatly harms the viewing experience. So using a proper diffusor to interpolate the adjacent views is necessary for acceptable image reconstruction. The view after appropriate interpolation is shown in Fig. 5(b), where pixels from adjacent views are interpolated to jointly form the central orthographic view; we therefore refer to this kind of view interpolation as joint reconstruction.

Fig. 5 View interpolation of the holographic diffusor in terms of orthographic projection. (a) without diffusor, (b) with proper diffusor.

The essence of joint reconstruction with a holographic diffusor is to use adjacent points to approximate the real point. The reconstruction of a view is the joint action of many adjacent views rather than an isolated action restricted to the local view, even though, for a single view, the system is sparsely sampled and reconstructed in space. We propose the concept of joint reconstruction, i.e. a reconstructed view is not solely the result of the pixels of that view, but also of the pixels from adjacent views within a certain range; this range depends on the extending angle of the holographic diffusor.

Take the 3D scene of a curved line for instance (see Fig. 6), which consists of dots A-G, and consider the ideal case in which the scene is uniformly sampled at an interval of half the lenslet pitch and reconstructed; then points A, C, D, E, G should be reproduced. Here in Fig. 6, the envisioned reconstructing light rays are drawn as dotted arrows, while the real ones are solid arrows. The intersections of the envisioned light rays with the scene are denoted by hollow circles, while the intersections of the real light rays with the scene are denoted by solid circles. Actually, in a practical case, the scene is sparsely sampled and reconstructed: only A, D, G are precisely sampled and reconstructed, and points C and E are substituted by the points B and F sampled by the adjacent views. The accuracy of these substitutions is closely related to the difference between adjacent views. As one can intuitively see, a small pixel pitch is important for precise object point approximation.

Fig. 6 The essence of joint reconstruction.

There are two key requirements for this mechanism to work well. First, perspective coherence [33], which means the similarity between images of a static scene as viewed from different locations. Second, the difference between adjacent views, which we refer to as the parallel light field difference ratio [34], should be small enough; otherwise, the effectiveness of the joint reconstruction is greatly hampered. Besides, the diffusing angle should be proper to avoid severe aliasing and distortion. As one can intuitively see, the parallax is also greatly suppressed if a holographic diffusor with a large diffusing angle is used: under such a circumstance, many adjacent views are interpolated to form one view, and the perspective disparity between adjacent reconstructed views decreases.
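A minimal 1-D sketch of joint reconstruction (toy scene and illustrative pitch; not the authors' implementation): gaps in the sparsely reconstructed view are filled with the nearest available samples, i.e. points borrowed from adjacent views, mirroring the substitution of C and E by B and F in Fig. 6.

```python
import numpy as np

# Toy 1-D "curved line" profile standing in for the scene of Fig. 6.
scene = lambda x: np.sin(2 * np.pi * x / 60.0)

pitch = 12.6                                  # illustrative lens pitch (mm)
x_lens = np.arange(0.0, 126.1, pitch)         # sparse lens positions
x_view = np.arange(0.0, 126.1, pitch / 4)     # dense grid of a target view

# Without a diffusor, only the sparse lens positions carry signal; the
# diffusor fills each gap with the nearest available sample, i.e. a point
# borrowed from an adjacent view.
nearest = x_lens[np.argmin(np.abs(x_view[:, None] - x_lens[None, :]), axis=1)]
joint_view = scene(nearest)

# The substitution error shrinks with the sampling interval (lens pitch).
print(np.max(np.abs(joint_view - scene(x_view))))
```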

2.3. The double window violation phenomenon

The introduction of the holographic diffusor inevitably causes some side effects in the display system. One of the most obvious side effects is the additional display window. In any 3D display with a 2D display panel, the window violation problem cannot be neglected; it is caused by the so-called projection constraint discussed in [35]. Since we cannot force photons to change their propagation direction in the absence of an optical medium, a display medium or element must always lie along the line of sight between the viewer and every part of a spatial image. For a traditional autostereoscopic display, with the common feature of a screen generating the original light and a light modulation component controlling the light direction, this phenomenon can be visualized as in Fig. 7(a): the reconstructed aerial 3D object is partially invisible to both eyes of the viewer because it falls off the edge of the display. For our display with a holographic diffusor, a similar phenomenon exists but presents in a different way. The holographic diffusor serves as a second display window and further restricts the effective display area. This situation is similar to the volumetric display case, as demonstrated in Fig. 7(b), despite the fact that the display area is not restricted to a certain display volume. When the viewer is directly in front of the display, the case is the same as for the traditional autostereoscopic display, as depicted in Fig. 7(c). However, when the viewer is at the left or right side of the display, the effective area in which the aerial image can be presented is restricted by both the rear display unit and the front diffusor, see Fig. 7(d). This phenomenon is referred to as the double window violation, where the display unit serves as the original light-source window, and the diffusor serves as the second window, which determines the amount of light that can pass through.

Fig. 7 The window violation of (a) the traditional autostereoscopic display, (b) the volumetric display. The double window violation of the holographic based LFD for (c) the front viewer, (d) the side viewer.

Actually, there are two kinds of double window violation. The first one is front window truncation. In most cases the size of the holographic diffusor is the same as that of the original display screen; however, for a 3D display, the displayed object is not necessarily smaller than the screen. When the 3D object volume displayed by the system is larger than the screen, the image is truncated by the size of the holographic diffusor. The second kind is the back-window restriction, which often happens when we observe the scene from the left or right side of the display at a certain angle within the FOV of the system, and part of the image disappears because of the limited size of the back window. In both cases, the double window violation further restricts the effective expressible area.
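Both truncation mechanisms reduce to a simple geometric visibility test; a minimal 2-D sketch (all window sizes and positions are illustrative assumptions):

```python
def visible(x_obj, z_obj, x_eye, z_eye, half_w1, half_w2, z_diff):
    """True if the ray from an aerial image point (x_obj, z_obj) to the eye
    (x_eye, z_eye) passes both windows: the display unit at z = 0 with
    half-width half_w1, and the diffusor at z = z_diff with half-width
    half_w2 (the double window condition)."""
    def x_at(z):  # point where the ray crosses the plane at depth z
        t = (z - z_obj) / (z_eye - z_obj)
        return x_obj + t * (x_eye - x_obj)
    return abs(x_at(0.0)) <= half_w1 and abs(x_at(z_diff)) <= half_w2

# A frontal viewer sees the point; a side viewer loses it at the back window.
print(visible(100.0, 250.0, 0.0, 1500.0, 140.0, 140.0, 150.0))     # True
print(visible(100.0, 250.0, -400.0, 1500.0, 140.0, 140.0, 150.0))  # False
```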

The following three methods help to alleviate the problem. (1) Extending the second window to fit the FOV of the system; if the FOV of the system is not taken into consideration, the second window would have to be infinite to obtain the same expressible area as in the single-window case. (2) Extending the first window by using boundary mirrors such as side mirrors; this not only improves the homogeneity of the light field but also extends the first window to fit the second one, as can be seen in the work presented by Hahn et al. [36], and the Holografika display described by Rodriguez et al. [37] also includes side mirrors to mitigate the window violation. (3) Using a holographic diffusor with a large diffusing angle; however, this method is not ideal because of its side effects of parallax suppression and worsened distortion.

2.4. The diffusing angle selection strategy and the proper location of the holographic diffusor

From the above analysis we know that the holographic diffusor is one of the key components of the IILFD, so the question lies in the selection of the diffusing angle and the location of the holographic diffusor. The previous analysis of the light field reconstruction process was based on the ideal pinhole assumption for the lenses for simplicity; however, a real lens is paraxial imaging optics with a finite aperture. The aperture size is often not negligible, especially in the case of the IILFD, where light rays emitted from a single pixel of the elemental image are shaped by the lens, generating conical light ray bundles with their summits in contact at the central depth plane (CDP) (see the gray shadow in Fig. 8). The spreading angle of the ray bundle is closely related to the aperture size of the lens and can be expressed by the following equation when aberration is neglected.

$$\theta_{\mathrm{in}}=2\arctan\!\left(\frac{a}{2d}\right),\qquad(5)$$
Here, a is the effective aperture size of the lens, and d is the distance between the LA and the diffusor, which is usually located at the CDP of the system. Traditionally, the effective aperture size a is close to or equal to the lens pitch p; however, this value is often smaller than p when an aperture mask array is applied to extend the expressible depth range of the system, or when the lenses are sparsely arranged to enhance the viewing angle, which is often the case in the IILFD.

Fig. 8 The light emitted from a pixel is modulated by the lens, forming conical light ray bundles with their summits in contact at the CDP; the conical ray bundles are re-modulated by the diffusor, and homogeneous image spots are reconstructed with an enlarged output angle.

The holographic diffusor, or holographic functional screen, is a kind of holographically recorded light-shaping diffusor with randomized surface structures that enable high transmission efficiency and homogenized beam shaping. The diffusing angle is specified as the full width at half maximum (FWHM), and the effective angular output of the system illustrated in Fig. 8 is expressed as follows:

$$\theta_{\mathrm{out}}=\sqrt{\theta_{\mathrm{in}}^{2}+\theta_{d}^{2}},\qquad(6)$$
where θout is the angular output, θin is the incident light source angle and θd is the diffusing angle. Intuitively speaking, a holographic diffusor with an appropriate diffusing angle bridges the angular gap between the rays reconstructed by two adjacent lenses; see θtar in Fig. 8. This angle can be obtained approximately by simple geometrical calculation, as
$$\theta_{\mathrm{tar}}=2\arctan\!\left(\frac{p}{2d}\right),\qquad(7)$$
Here, θtar is the ideal spread angle after diffusion, at which the principal rays of two adjacent lenses are merged together. This angle is deduced simply for the sake of a good visual experience and blind spot elimination, without considering the accuracy of the reconstructed images in terms of the view interpolation essence presented in the previous section. A schematic diagram of continuous light field reconstruction using the holographic diffusor is shown in Fig. 9.
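Equations (5)-(7) condense the diffusing angle selection strategy into a few lines; the following sketch reproduces the numbers used for the prototype in Section 3:

```python
import math

def theta_in(a, d):
    """Incident cone angle at the diffusor, Eq. (5) (degrees)."""
    return math.degrees(2 * math.atan(a / (2 * d)))

def theta_tar(p, d):
    """Target spread angle that merges adjacent principal rays, Eq. (7)."""
    return math.degrees(2 * math.atan(p / (2 * d)))

def diffusing_angle(a, p, d):
    """Required FWHM diffusing angle: solve Eq. (6) for theta_d with
    theta_out = theta_tar."""
    return math.sqrt(theta_tar(p, d) ** 2 - theta_in(a, d) ** 2)

# Prototype of Section 3: 5 mm aperture mask, 12.6 mm lens pitch, and the
# diffusor at the CDP, d = 152 mm.
print(theta_in(5, 152), theta_tar(12.6, 152), diffusing_angle(5, 12.6, 152))
# -> approx. 1.88, 4.75 and 4.36 degrees, matching the 1.88/4.74/4.35
#    values used in the experiments.
```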

Fig. 9 Schematic diagram of continuous light field reconstruction by using holographic diffusor.

The above analysis assumes that the reconstructed image point lies exactly in the CDP, which is not always true, since the displayed 3D scene has a certain extent along the depth direction. Also, the selection of the diffusing angle is coupled with d, the distance between the holographic diffusor and the LA, since the ideal spread angle θtar after diffusion is a function of d. From the point of view of object point reconstruction, the holographic diffusor should be located exactly where the image point is recovered. Originally, in an InIm display, an object point reconstructed out of the CDP is degraded in resolution because of the light rays' diffusion, as illustrated in Fig. 10(a). The reconstructed O1 and O2, which are integrated from image points A, B, C and D, E, F in the CDP respectively, become two spots rather than two points. Also, although the distances between the reconstructed object points O1, O2 and the CDP are both d1, O1 can be reconstructed more clearly, for there is distinct light ray aliasing at O2, which undoubtedly deteriorates the image quality. A similar fact exists in the IILFD: as depicted in Fig. 10(b), we make the pinhole assumption for the lens to emphasize the light-diffusing function of the diffusor, and reveal that the holographic diffusor is also useful for the reconstruction of out-of-CDP points, despite the fact that it further worsens the reconstructed resolution. Under the pinhole assumption, one can intuitively see that the plane in which the final reconstructed object lies is the best plane for the holographic diffusor to obtain a continuous light field. For the real case, we validate this experimentally. The above analysis indicates that the location of the holographic diffusor is not necessarily coupled with the location of the CDP.

Fig. 10 Image reconstructed out of the CDP, (a) considering the finite aperture of the lens, (b) under the pinhole assumption and considering the light-diffusing function of the holographic diffusor.

The above analysis indicated that, in terms of object point reconstruction, the holographic diffusor should be located at the exact spot where the 3D image is integrated. Let us now consider the location of the holographic diffusor from the point of view of joint reconstruction. The concept of view interpolation mentioned in subsection 2.2 does not explicitly involve the location of the holographic diffusor. It may seem that the holographic diffusor can be placed at any location in front of the LA, since its presence does not change the spatial location of the image points; it only affects the starting point from which the light rays diffuse. As shown in Fig. 11, consider a 3D scene of a curve consisting of points A-K. In an ideal case, the scene is densely and uniformly sampled and reconstructed by micro lenses, so the reconstructed front view of the scene is a series of discrete points with equal vertical pitch, illustrated in Fig. 11 as points A, C, D, F, H, I, K and labeled the ideal case. Actually, when a MaLA assisted with a holographic diffusor is used for light field reconstruction, the reconstructed front view is diffusor-location-dependent. More specifically, when the holographic diffusor is located at L1 or L2, the object points C, D, H, and I are correctly substituted by their adjacent ones B, E, G, and J, respectively. However, when the holographic diffusor is moved to L3, which is farther away from the depth center L1 of the displayed 3D scene, the results become different, for the object points C, D, H, I are wrongly substituted with points E, B, J, G in order, which obviously increases the error of joint view reconstruction, causing image blur and confusing aliasing. The actual reconstructed orthographic views under the different diffusor locations (L1, L2 and L3) are illustrated in the actual case of Fig. 11. It is also obvious that L1 is a better candidate location than L2 for the holographic diffusor, since the former provides a more accurate and uniform overall point approximation.

Fig. 11 Three candidate holographic diffusor locations and their reconstructed orthographic views.

The holographic diffusor has two major functions: (1) to alleviate the disturbance of the noticeably observable, inherent grid-like lens array pattern, and (2) to re-modulate the discretely reproduced light field into a continuous one. Together with the above analysis, we can conclude that the key role of the holographic diffusor lies in merging the principal rays of two adjacent lenses, so the diffusing angle is closely related to the distance between the LA and the diffusor itself. The location is uncoupled from the CDP; it should lie at the depth center of the given 3D scene.

3. Experiments and discussions

A prototype of the IILFD with full parallax was constructed to demonstrate the feasibility of the above analysis. The system setup and the mechanical structure of the prototype are illustrated in Fig. 12(a). The main components of the system are the LCD panel, the macro lens array, and the holographic diffusor. The LCD is 12.5 inches in diagonal, with an effective display area of approximately 277 × 156 mm, a resolution of 3840 × 2160 and a pixel pitch of 72 μm. The lens array is made up of compound lenses; each compound lens consists of two single lenses to suppress aberration and obtain better imaging performance for non-paraxial light rays. The optical FOV of the compound lens is approximately 40°, its diameter (referred to as D in Fig. 12(b)) is 10 mm, and its equivalent focal length is also 10 mm. The compound lenses are sparsely assembled in a hexagonal grid packaging with a lens pitch (referred to as p in Fig. 12(b)) of 12.6 mm. The merit of such hexagonal packaging is that the distance between any two adjacent lens centers equals the lens pitch, which is useful for uniform light field reconstruction. The lens array is made up of 312 circular macro compound lenses, i.e. 12 lenses in a column and a total of 26 columns. The gap between the macro lens array and the LCD panel is g = 10.7 mm, so the central depth plane (CDP) of the imaging system is located about 152 mm away from the lens array plane. The original location of the holographic diffusor is right at the CDP plane.
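A quick consistency check of the quoted CDP distance, assuming the thin-lens (Gaussian) model implied above:

```python
# Thin-lens check of the CDP location: 1/g + 1/l = 1/f with f = 10 mm and
# g = 10.7 mm places the central depth plane at distance l from the lens array.
f, g = 10.0, 10.7
l = 1.0 / (1.0 / f - 1.0 / g)
print(round(l, 1))  # -> 152.9 mm, consistent with the ~152 mm stated above
```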

Fig. 12 (a) The structure of the display system, (b) the arrangement of the lens array.

3.1. The selection of the diffusing angle of the holographic diffusor

First, we validate the diffusing angle selection strategy. Inside the LA, an aperture array with a diameter of 5 mm is used as a mask to block non-paraxial rays and to thin the ray bundles, improving the DOF of the system; the actual incident cone angle at the holographic diffusor given by Eq. (5) is therefore approximately 1.88°. According to the arrangement of the lens array, the target spread angle for the horizontal and vertical directions given by Eq. (7) is 4.74°; thus the ideal diffusing angle of the holographic diffusor obtained from Eq. (6) is 4.35°. Since commercially available holographic diffusors are not continuous in diffusing angle, we chose holographic diffusors with diffusing angles of 3.5°, 5° and 10° for comparison. Two 3D scenes are used in the experiments. One is a rectangular pyramid scene with a chessboard background; the summit of the pyramid together with its four faces separates the viewing zone into four parts, corresponding to the setup of the diffusor. The setup of the basketball scene is almost the same. The CDP is sectioned into four parts, as illustrated in Fig. 13. Part I is without a holographic diffusor; the image can hardly be distinguished because of the harmful disturbance of the noticeably observable, inherent grid-like lens array pattern. Part II has a holographic diffusor with a diffusing angle of 3.5°; the image can be integrated, but the dark zones are still distinct, so the visual quality is not yet ideal. Part III has a holographic diffusor with a diffusing angle of 5°; the dark-zone disturbance is effectively alleviated and the visual quality of the image is acceptable. Part IV has a holographic diffusor with a diffusing angle of 10°; although the dark zones are totally removed, the image is overly interpolated, causing image blur and distortion, especially in the image sections far away from the CDP. Although the two scenes reveal that the diffusor with a diffusing angle of 5° is the best candidate for the system, two phenomena are worth mentioning: comparing part III of Fig. 13(b) with that of Fig. 13(a), we find that when the background is very bright, the structural grid disturbance can be distinct; and from parts III and IV of both Fig. 13(a) and 13(b), we can clearly see the worsening of the expressible DOF when the diffusing angle is enlarged.

Fig. 13 (a) The display result of the rectangular pyramid scene with chessboard background, (b) the display result of a basketball scene. For both scenes, the viewing zone is uniformly divided into 4 parts: part I without diffusor, part II with a diffusing angle of 3.5°, part III with a diffusing angle of 5° and part IV with a diffusing angle of 10°.

The complete views of the scenes with the 5° diffusor, from the left to the right of the viewing zone, are illustrated in Fig. 14. From the complete views we can perceive that the displayed 3D scenes are vivid, with correct parallaxes and occlusion relationships, which verifies the effectiveness of continuous light field reconstruction using the selected holographic diffusor.

Fig. 14 The left (a), (d), middle (b), (e), and right (c), (f) views of the rectangular pyramid and the basketball scene.

3.2. The location of the holographic diffusor

Another key parameter of the InIm based LFD is the location of the holographic diffusor. In the above experiments, the frontier of the object is located at the CDP of the system, so the holographic diffusor is placed right at this location. However, this setup is not always the optimal configuration and does not always achieve the best visual experience, especially when the object is not located in the area directly in front of and behind the CDP. The previous analysis indicates that the holographic diffusor should be located where the 3D image is integrated. To verify this assumption, a 3D scene of a dog set 200 mm away from the lens array is used. Figure 15 shows the front view of the display result when a holographic diffusor with a diffusing angle of 5° is placed at different locations. When it is located at the CDP (150 mm away from the lens array), the boundary of the dog is blurred (see Fig. 15(a)); when it is located exactly where the 3D image lies, i.e. 200 mm away from the lens array, a sharp and clear image can be perceived (see Fig. 15(b)); and when the holographic diffusor moves farther away from the lens array, i.e. to 250 mm, the image blurs again (see Fig. 15(c)), even worse than in the first condition.

Fig. 15 Front view of the dog scene with the holographic diffusor (a) 150 mm (approx. at the CDP) away from the lens array, (b) 200 mm away from the lens array, (c) 250 mm away from the lens array.

The location of the holographic diffusor is very important, since the selection of the diffusing angle is closely related to the distance between the lens array plane and the holographic diffusor plane. From the point of view of object point reconstruction, the continuous light field is re-modulated from the discrete light field at the intersection points of the light rays, which in most cases are not in the CDP. Therefore, as verified by the above experiments, the location of the holographic diffusor is not necessarily coupled with the location of the CDP. But one can also intuitively see that the CDP is a privileged place for the holographic diffusor when the displayed 3D scene lies within a finite depth range around it. That is why most previous research insisted on putting the holographic diffusor at the CDP.

3.3. The validation of the double window violation phenomenon

The double window violation phenomenon is quite obvious and easily observed; here we present it to show that it is also closely related to the distance between the two windows. The tank scene is displayed at different locations in the system. Figure 16(a) shows the displayed result of the tank at 150 mm from the lens array, viewed at 10° to the left. The tail part of the tank can be clearly visualized (area I in Fig. 16(a)); however, when the tank is located 250 mm from the lens array, the tail part is truncated by the limited size of the display panel, as shown in area II of Fig. 16(b). In both cases, a holographic diffusor of 5° is used, located 150 mm away from the LA in the former case and 250 mm away in the latter. These two pictures clearly show that the longer the distance between the two windows, the more severe the double window violation problem.

Fig. 16 Display result of the tank scene at different locations, viewed at 10° to the left: (a) the tank located 150 mm from the lens array, (b) the tank located 250 mm from the lens array.

Since the double window violation problem worsens with increasing distance d, the ultimate solution is to reduce the distance between the two windows. A question many researchers are interested in is whether a compact system (one with a small gap between the holographic diffusor and the lens array) is possible for a practical 3D display with sparsely arranged macro lenses and a holographic diffusor of proper diffusing angle. Since the diffusing angle is closely related to the distance between the lens array and the holographic diffusor, locating the holographic diffusor close to the lens array inevitably requires a large diffusing angle to eliminate the dark zones and visual blind spots, which in turn greatly deteriorates the expressible depth range. Besides, the parallax effect between adjacent views is eroded because the views are overly interpolated. To sum up, we believe this configuration cannot yield a practical 3D system that is both compact in size and acceptable in viewing quality, which we think may be the major bottleneck for the IILFD with holographic diffusor. This trade-off can be quantified with the sketch below.
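Reusing Eqs. (5)-(7) with the prototype apertures (a = 5 mm, p = 12.6 mm), the required diffusing angle grows rapidly as the diffusor approaches the lens array:

```python
import math

a, p = 5.0, 12.6  # aperture mask diameter and lens pitch (mm), prototype values
for d in (152.0, 100.0, 50.0, 25.0):
    t_in = math.degrees(2 * math.atan(a / (2 * d)))    # Eq. (5)
    t_tar = math.degrees(2 * math.atan(p / (2 * d)))   # Eq. (7)
    t_d = math.sqrt(t_tar**2 - t_in**2)                # Eq. (6) solved for theta_d
    print(f"d = {d:5.1f} mm -> required diffusing angle ~ {t_d:4.1f} deg")
# d = 152 mm needs ~4.4 deg, but d = 50 mm already needs ~13 deg and d = 25 mm
# ~26 deg, far beyond the 10 deg that Fig. 13 shows to be over-diffusing.
```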

4. Conclusion

The mechanism for continuous light field reconstruction in the IILFD is analyzed in terms of object point reconstruction and the observer's view interpolation, the essence of joint view reconstruction is presented, and two key parameters of the system, i.e. the diffusing angle and the location of the holographic diffusor, are deduced in the process. The additional double window violation phenomenon introduced by the dual-plane structure of the IILFD is also stated briefly. Optical experiments verified the validity of the analysis. We believe that our work helps to reveal the imaging essence of the IILFD and is beneficial for its optimization.

Funding

National Key Research and Development Program of China (2017YFB1104500); National Natural Science Foundation of China (61775240); Foundation for the Author of National Excellent Doctoral Dissertation of the People’s Republic of China (FANEDD) (201432).

References

1. J.-Y. Son, H. Lee, B.-R. Lee, and K.-H. Lee, "Holographic and Light-Field Imaging as Future 3-D Displays," Proc. IEEE 105(5), 789–804 (2017).

2. M. Yamaguchi, "Full-Parallax Holographic Light-Field 3-D Displays and Interactive 3-D Touch," Proc. IEEE 105(5), 947–959 (2017).

3. T. Iwane, "Light field display and 3D image reconstruction," Proc. SPIE 9867, 98670S (2016).

4. F. C. Fan, S. Choi, and C. Jiang, "Demonstration of perfect holographic display on commercial 4K plane displayer," in Digital Holography & 3-D Imaging Meeting, OSA Technical Digest (Optical Society of America, 2015), paper DW3A.4.

5. X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, "Demonstration of a large-size real-time full-color three-dimensional display," Opt. Lett. 34(24), 3803–3805 (2009).

6. C. B. Burckhardt, "Optimum Parameters and Resolution Limitation of Integral Photography," J. Opt. Soc. Am. 58(1), 71–76 (1968).

7. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, "Analysis of resolution limitation of integral photography," J. Opt. Soc. Am. A 15(8), 2059–2065 (1998).

8. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, "Analysis of viewing parameters for two display methods based on integral photography," Appl. Opt. 40(29), 5217–5232 (2001).

9. F. Jin, J.-S. Jang, and B. Javidi, "Effects of device resolution on three-dimensional integral imaging," Opt. Lett. 29(12), 1345–1347 (2004).

10. S.-W. Min, J. Kim, and B. Lee, "New Characteristic Equation of Three-Dimensional Integral Imaging System and its Applications," Jpn. J. Appl. Phys. 44(2), L71–L74 (2005).

11. F. Okano, J. Arai, and M. Kawakita, "Wave optical analysis of integral method for three-dimensional images," Opt. Lett. 32(4), 364–366 (2007).

12. B. Tavakoli, M. Daneshpanah, B. Javidi, and E. Watson, "Performance of 3D integral imaging with position uncertainty," Opt. Express 15(19), 11889–11902 (2007).

13. Z. Kavehvash, K. Mehrany, and S. Bagheri, "Optimization of the lens-array structure for performance improvement of integral imaging," Opt. Lett. 36(20), 3993–3995 (2011).

14. M. Cho and B. Javidi, "Optimization of 3D Integral Imaging System Parameters," J. Disp. Technol. 8(6), 357–360 (2012).

15. C. G. Luo, X. Xiao, M. Martínez-Corral, C. W. Chen, B. Javidi, and Q. H. Wang, "Analysis of the depth of field of integral imaging displays based on wave optics," Opt. Express 21(25), 31263–31273 (2013).

16. F. Yang, L. Dong, and A. Wang, "Study on the optimum design of an integral imaging system based on view analysis," Opt. Commun. 343, 66–72 (2015).

17. J.-S. Jang, F. Jin, and B. Javidi, "Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields," Opt. Lett. 28(16), 1421–1423 (2003).

18. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Multifacet structure of observed reconstructed integral images," J. Opt. Soc. Am. A 22(4), 597–603 (2005).

19. M. Levoy and P. Hanrahan, "Light field rendering," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM SIGGRAPH, New Orleans, 1996), pp. 31–42.

20. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, "The lumigraph," in Computer Graphics Proceedings, Annual Conference Series (ACM SIGGRAPH, New Orleans, 1996), pp. 43–54.

21. J.-X. Chai, X. Tong, S.-C. Chan, and H.-Y. Shum, "Plenoptic sampling," in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00) (2000), pp. 307–318.

22. A. Isaksen, L. McMillan, and S. J. Gortler, "Dynamically reparameterized light fields," in Computer Graphics SIGGRAPH 2000 Proceedings (Los Angeles, 2000), pp. 297–306.

23. M. Zwicker, W. Matusik, F. Durand, H. Pfister, and C. Forlines, "Antialiasing for automultiscopic 3D displays," in ACM SIGGRAPH 2006 Sketches (SIGGRAPH '06) (2006), pp. 73–82.

24. M. Kawakita, S. Iwasawa, M. Sakai, Y. Haino, M. Sato, and N. Inoue, "3D image quality of 200-inch glasses-free 3D display system," Proc. SPIE 8288, 82880B (2012).

25. K. Nagano, A. Jones, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, "An autostereoscopic projector array optimized for 3D facial display," in ACM SIGGRAPH 2013 Emerging Technologies (SIGGRAPH '13) (2013).

26. A. Jones, K. Nagano, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, "Interpolating vertical parallax for an autostereoscopic three-dimensional projector array," J. Electron. Imaging 23(1), 011005 (2014).

27. A. Stern and B. Javidi, "Three-Dimensional Image Sensing, Visualization, and Processing Using Integral Imaging," Proc. IEEE 94(3), 591–607 (2006).

28. E. H. Adelson and J. R. Bergen, "The Plenoptic Function and the Elements of Early Vision," in Computational Models of Visual Processing, M. S. Landy and A. J. Movshon, eds. (MIT Press, Cambridge, MA, 1991), pp. 3–20.

29. A. J. Woods, A. Said, N. S. Holliman, E.-V. Talvala, and J. O. Merritt, "Spatial-angular analysis of displays for reproduction of light fields," Proc. SPIE 7237, 723707 (2009).

30. Z. Yan, X. Yan, X. Jiang, H. Gao, and J. Wen, "Integral imaging based light field display with enhanced viewing resolution using holographic diffuser," Opt. Commun. 402, 437–441 (2017).

31. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, "Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing," Opt. Express 26(7), 8883–8889 (2018).

32. X. Yan, J. Wen, Z. Yan, T. Zhang, and X. Jiang, "Post-calibration compensation method for integral imaging system with macrolens array," Opt. Express 27(4), 4834–4844 (2019).

33. M. Halle, "Multiple viewpoint rendering," in Conference on Computer Graphics and Interactive Techniques (1998), pp. 243–254.

34. J. Wen, X. Yan, X. Jiang, Z. Yan, Y. Wang, and J. Wang, "Nonlinear mapping method for the generation of an elemental image array in a photorealistic pseudoscopic free 3D display," Appl. Opt. 57(22), 6375–6382 (2018).

35. M. Halle, "Autostereoscopic displays and computer graphics," Comput. Graph. 31(2), 58–62 (1997).

36. J. Hahn, Y. Kim, and B. Lee, "Uniform angular resolution integral imaging display with boundary folding mirrors," Appl. Opt. 48(3), 504–511 (2009).

37. T. Rodriguez, A. C. d. Leon, B. Uzzan, N. Livet, E. Boyer, F. Geffray, T. Balogh, Z. Megyesi, and A. Barsi, "Holographic and action capture techniques," in ACM SIGGRAPH 2007 Emerging Technologies (SIGGRAPH '07) (New York, NY, USA, 2007), paper 11.



