
Integral imaging reconstruction system based on the human eye viewing mechanism

Open Access

Abstract

For integral stereo imaging systems based on lens arrays, the cross-mixing of erroneous light rays between adjacent lenses seriously degrades the quality of the reconstructed light field. In this paper, we propose a light field reconstruction method based on the human eye viewing mechanism, which incorporates a simplified model of human eye imaging into the integral imaging system. First, the light field model for a specified viewpoint is established, and the light source distribution for each viewpoint is accurately calculated for the fixed-viewpoint EIA generation algorithm. Second, according to the ray tracing algorithm in this paper, a non-overlapping EIA based on the human eye viewing mechanism is designed to fundamentally suppress crosstalk rays. The actual viewing clarity is improved at the same reconstructed resolution. Experimental results verify the effectiveness of the proposed method: the SSIM value remains above 0.93, and the viewing angle range is increased to 62°.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Natural light is diffusely reflected from the surface of an object, forming light rays in multiple directions. The two rays received by the left and right eyes produce a disparity between the retinal images, which is converted into distance information from the object to the viewer; this process is the fundamental reason why people can perceive three-dimensional scenes. Integral imaging is a naked-eye three-dimensional (3D) display technology that reconstructs a real three-dimensional image, so the reconstructed light field should conform to the principle by which the human eye perceives a real three-dimensional scene.

An elemental image array (EIA) consists of elemental images (EIs). The integral imaging system uses the lens array as the key device for modulating light direction: light emitted from each EI is refracted by the corresponding lens and superimposed in space to form a three-dimensional light field. The EIA is therefore a 2D image containing multiple disparity views matched to the lens array. The elemental image panel (EIP) is the display panel loaded with the EIA. Each point source on the EIP is essentially a pixel containing viewpoint information. According to the principle of human stereoscopic perception, in theory at most one of the many rays emitted by each pixel can contribute to the correct light field for a specified viewpoint. This ray must satisfy two conditions: it must exit through its correct corresponding lens (so that it is not crosstalk), and it must enter the human eye. For a specified viewing position, these intersecting rays constitute the optimal reconstructed light field.

At present, the common display panels are LEDs and LCDs, whose high resolution is the basis for imaging. However, every point source is divergent by nature. The standard arrangement of EIs is shown in Fig. 1. The combination of the two readily causes light crosstalk problems that cannot be ignored. The defects appear in three ways. First, there is a large amount of crosstalk light and invalid light in the imaging stage: light refracted by adjacent lenses that enters the human eye forms periodic crosstalk images (point A in Fig. 1), which the eye perceives as partial truncation or ghosting. Second, light emitted through the corresponding lens that does not enter the human eye is invalid light (point D in Fig. 1); this redundancy prevents the light field resolution from matching the actual number of useful rays. Third, light emitted by other pixels through the corresponding lens may be received, but it carries adjacent-viewpoint information and reduces the perceived resolution of the human eye, so it is also crosstalk (point B in Fig. 1).

Fig. 1. Crosstalk analysis of reconstructed light field.

Excessive crosstalk light obscures the correct light, so the light field seen by the human eye no longer matches the theoretical analysis, and ultimately neither the actual viewing angle nor the resolution is ideal. There have been many related studies in recent years [1–7]. Wang et al. [8] used the similarity between adjacent EIs to perform registration and stitching, reducing crosstalk between adjacent EIs, but the method is mainly used for computational reconstruction. Park et al. [9] used a tracking system to locate the observer's position, dynamically changing the EIA and expanding the viewing angle. Watanabe et al. [10] proposed a display method based on parallel projection of EIs, which improved display performance at a pixel density of about 63.5 ppi. Under the premise of avoiding crosstalk, some researchers changed the system structure to preserve the viewing angle. For example, Lee et al. [11] proposed a method using a curved lens array; Baasantseren et al. [13] proposed an occlusion method using two elemental image masks; and Shin et al. [12] proposed using a high-refractive-index medium.

Most related studies analyze the light field under the implicit assumption of ideal conditions, ignoring the stage in which the human eye receives the light. Instead of simplified pinhole imaging, this paper takes thin-lens imaging as the theoretical basis and accurately analyzes the actual light path from point source to human eye in the reconstructed light field. Finally, the EIA pixels are rearranged to establish a crosstalk-suppressed light field based on the human eye viewing mechanism (Fig. 2). The ultimate objective is to increase the viewing angle and the density of correct rays.

Fig. 2. Flowchart of the algorithm based on the human eye viewing mechanism.

2. Light field model of the specified viewpoint

The structure of the human eye is equivalent to a convex lens, and external objects are imaged on the retina. In this paper, the position of the pupil is defined as the viewpoint, and the picture received by the human eye at a specified position is called the “viewpoint content”, which is essentially a retinal image; the pixels on it are called “view pixels”. This section analyzes the process by which light emitted by the EIP is converged by the lens array and received by the human eye when the optical system (display screen, micro-lens array, and EIA) is fixed. From the position of a single specified viewpoint, the corresponding light source positions on the EIP can then be calculated backward.

2.1 Light field of the front view

The coordinate system is established as shown in Fig. 3. The $x - y$ plane is perpendicular to the lens axis, and the upper left corner of the EIP is taken as the origin (0,0) of the $x - y$ plane. The z-axis is parallel to the lens axis, and the origin of the z-axis lies in the plane containing the optical centers of the lenses. The pixel diameter ${P_D}$ on the EIP is used as the unit of length. The light field is essentially a three-dimensional image, which can be analyzed as a virtual finite surface.

Fig. 3. Schematic diagram of single-viewpoint optical path.

Any optical system exhibits diffraction, so a point in the light field is actually a superposition of Airy spots rather than an ideal point. The Fresnel diffraction formula shows that the circular-aperture diffraction spot on the focal plane of a thin lens is the smallest and gives the highest imaging resolution, and the light intensity distribution is the sum of the intensities of all imaging points [14]. The integral imaging system defines this focal plane as the central depth plane (CDP). The depth v of the CDP (its distance from the lens array) is given by Eq. (1), so once the distance between the lens array and the screen is determined, the position of the CDP is unique and fixed [15]. The CDP has two functions [16]: first, the CDP is the focal plane, so the stereoscopic image is centered on the CDP; second, the image on any other plane is a larger light spot, so the pixel resolution at the CDP is the evaluation index for the spatial resolution of the integral imaging system.

$S(a,b,c)$ is a specified viewpoint position. Taking the imaging of a single lens as an example, point A is a pixel on the EIP, and point C is the projection of point A on the CDP. All rays from point A therefore intersect at point C and then diverge, but only one of these rays enters the viewpoint S. Point E is the optical center of the lens, and point B is the intersection of the lens axis and the EIP (that is, point B has the same x-y coordinates as the optical center of the lens). Point O is likewise the intersection of the lens axis and the CDP. u is the distance between the EIP display and the lens array, and f is the focal length of the lens; u and v are both unsigned. Integral imaging can be classified into depth-priority integral imaging (u = f) and resolution-priority integral imaging (u > f). In this paper, the resolution-priority mode is adopted, so a real image is displayed.

$$v = u \cdot f/(u - f)$$
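As a quick check of Eq. (1), the CDP depth can be computed directly from u and f. The sketch below is ours, with illustrative values rather than the parameters of Table 1.

```python
# Minimal sketch of Eq. (1): distance v from the lens array to the CDP.
def cdp_depth(u: float, f: float) -> float:
    if u <= f:
        raise ValueError("resolution-priority integral imaging requires u > f")
    return u * f / (u - f)

# Illustrative values (not the paper's Table 1): f = 3 mm, u = 3.2 mm.
print(cdp_depth(u=3.2, f=3.0))  # 48.0 mm
```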

The CDP is a deterministic plane in the light field. The lens coordinates and the EIP and CDP positions are all known once the optical system is fixed, so the ray AC emitted by a given pixel A on the EIP is determined (though AC is not necessarily the ray entering the viewpoint S) in Fig. 3. Let line DS be the ray emitted from point A to the viewpoint S through the lens (D is the refraction point on the lens). The coordinates of point D can be determined from the known point C. Then the unique correspondence between viewpoint S (pupil position) and point A (light source) is calculated. The parametric formulae are derived as follows:

Since $\Delta DCT \sim \Delta DSG$, we have $\frac{{s - p}}{{a - p}} = \frac{v}{c}$ and $\frac{{t - q}}{{b - q}} = \frac{v}{c}$, that is:

$$\left\{ \begin{array}{l} s = \frac{{va + cp - vp}}{c}\\ t = \frac{{vb + cq - vq}}{c} \end{array} \right.$$
$\Delta ABE \sim \Delta COE$ and Eq. (2) are used to deduce Eq. (3).
$$\left\{ {\begin{array}{c} {x - e = \frac{u}{v}(s - e) = \frac{{u(va + cp - vp)}}{{vc}} - \frac{u}{v}e = \frac{{u(c - v)}}{{vc}}p + \frac{u}{c}a - \frac{u}{v}e}\\ {y - h = \frac{u}{v}(t - h) = \frac{{u(vb + cq - vq)}}{{vc}} - \frac{u}{v}h = \frac{{u(c - v)}}{{vc}}q + \frac{u}{c}b - \frac{u}{v}h} \end{array}} \right.$$

Rearranging Eq. (3) gives Eq. (4):

$$\left( {\begin{array}{c} x\\ y \end{array}} \right) = \frac{{u(c - v)}}{{vc}}\left( {\begin{array}{c} p\\ q \end{array}} \right) + \frac{u}{c}\left( {\begin{array}{c} a\\ b \end{array}} \right) + \left( {1 - \frac{u}{v}} \right)\left( {\begin{array}{c} e\\ h \end{array}} \right)$$
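For readers implementing the mapping, the following minimal sketch (our code, not the authors') evaluates Eq. (4) directly: given the refraction point D = (p, q), the viewpoint S = (a, b, c), the lens optical center E = (e, h), and the distances u and v, it returns the EIP pixel (x, y). All numeric values below are illustrative.

```python
import numpy as np

def eip_pixel_front_view(p, q, a, b, c, e, h, u, v):
    """Eq. (4): EIP pixel (x, y) whose ray leaves D toward viewpoint S."""
    D = np.array([p, q])
    S = np.array([a, b])
    E = np.array([e, h])
    return (u * (c - v) / (v * c)) * D + (u / c) * S + (1.0 - u / v) * E

# Illustrative call; units follow the paper's pixel-diameter convention.
x, y = eip_pixel_front_view(p=10.0, q=5.0, a=0.0, b=0.0, c=500.0,
                            e=10.0, h=5.0, u=3.2, v=48.0)
```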

2.2 Light field of the lateral view

The observer does not necessarily face the display screen when watching the stereoscopic light field, especially in the edge region of the light field: the optical axis of the pupil is no longer perpendicular to the EIP when the head rotates. Section 2.2 therefore extends the front-view algorithm of Section 2.1 to the imaging of a lateral view. The light field model of the lateral view is shown in Fig. 4, taking horizontal rotation as an example. The red plane, referred to as the “initial retina”, is the retina when facing the light field, and the blue plane is the “rotated retina” of the lateral view. The axle center is point E, representing the pupil, and the rotation axis is the straight line IE perpendicular to the EIP. When the head rotates counterclockwise by $\theta$ ($\theta > 0$), the retina also rotates by $\theta$. We need to calculate the change in position of the same view pixel on the retina from the front view to the lateral view.

Fig. 4. Simplified model of retinal imaging.

(1) Two-dimensional light field

We first analyze the two-dimensional case, as shown in Fig. 5. The light ray AE is coplanar with the two optical axes before and after the rotation; ray AE is imaged at A1 on the initial retina and at A2 on the rotated retina. Two coordinate systems, used only for the viewpoint content, are established on the retina; their extent is the diameter of the imaged spot on the retina. For the initial retina, OA1 is the x-axis, O is the origin, and OE⊥OA1. For the rotated retina, FA2 is the x-axis, F is the origin, and FE⊥FA2.

Fig. 5. Schematic diagram of two-dimensional lateral view imaging.

For convenience, the top view (Fig. 5) is used to analyze the relationship between OA1 and FA2, which represents the change in position of a view pixel between the initial retina and the rotated retina. We set OA1 = x1 and FA2 = x2. $OE = FE = R$ is the distance from the pupil to the retina (imaging plane), and the rotation angle is $\angle FEO = \theta$. We set $\angle \textrm{OE}{\textrm{A}_1} = \phi ,\textrm{O}{\textrm{A}_1} = \textrm{R}\tan \phi = {x_1}$; when $\phi > 0$, $\textrm{O}{\textrm{A}_1} > 0$, and when $\phi < 0$, $\textrm{O}{\textrm{A}_1} < 0$. Then $\textrm{F}{\textrm{A}_2} = \textrm{R}\tan (\phi + \theta ) = {x_2}$. By the tangent addition formula, ${x_1}$ and ${x_2}$ are related as in Eq. (5).

$${x_2} = \frac{{\tan \phi + \tan \theta }}{{1 - \tan \theta \tan \phi }}R = \frac{{{x_1} + R\tan \theta }}{{R - {x_1}\tan \theta }}R$$

(2) Three-dimensional light field

The two-dimensional analysis covers the case in which the light entering the human eye is coplanar with the two optical axes. Combining Fig. 4 and Fig. 6, we now extend the analysis to any ray coming from the lens array. Points H and A lie on the lens array surface; point A is still coplanar with the two optical axes, and line HA is perpendicular to the plane containing the two optical axes. Point H on the lens array surface can therefore be located through the position of point A and the length of HA, and ray HE can represent any ray in the three-dimensional light field.

Fig. 6. Schematic diagram of three-dimensional lateral view imaging.

Point ${\textrm{H}_1}({x_1},{y_1})$ is the image of point H on the initial retina, and point ${\textrm{H}_2}({x_2},{y_2})$ is the image of point H on the rotated retina, with ${\textrm{H}_1}{\textrm{A}_1} = {y_1},{\textrm{H}_2}{\textrm{A}_2} = {y_2}$, ${\textrm{H}_1}{\textrm{A}_1} \bot \textrm{E}{\textrm{A}_2}$, and ${\textrm{H}_2}{\textrm{A}_2}/{/}{\textrm{H}_1}{\textrm{A}_1}$. Since line HA is perpendicular to the plane containing the two optical axes, ${x_1}$ and ${x_2}$ again satisfy Eq. (5). We determine ${y_1}$ and ${y_2}$ from the spatial relationship between point A and point H.

Equation (6) can be obtained by $\textrm{E}{\textrm{A}_1} = \textrm{R}/\cos \phi$, $\textrm{E}{\textrm{A}_2} = \textrm{R}/\cos (\phi - \theta )$:

$${y_2}/{y_1} = \textrm{E}{\textrm{A}_2}/\textrm{E}{\textrm{A}_1} = 1/(\cos \theta + \tan \phi \sin \theta )$$
$${y_2} = \frac{{{y_1}}}{{\cos \theta + \frac{{{x_1}}}{R}\sin \theta }} = \frac{{R{y_1}}}{{R\cos \theta + {x_1}\sin \theta }}$$

Equations (5) and (7) are combined into Eq. (8), which represents the relationship between the positions of the same point on the initial retina and on the rotated retina:

$$\left( {\begin{array}{c} {{x_2}}\\ {{y_2}} \end{array}} \right) = \left( {\begin{array}{c} {\frac{{{x_1} + R\tan \theta }}{{R - {x_1}\tan \theta }}R}\\ {\frac{{R{y_1}}}{{R\cos \theta + {x_1}\sin \theta }}} \end{array}} \right)$$
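A direct transcription of Eq. (8) into code (our sketch; R and theta are illustrative, with theta in radians):

```python
import numpy as np

def rotate_view_pixel(x1, y1, R, theta):
    """Eq. (8): view pixel (x1, y1) on the initial retina maps to (x2, y2)
    on the retina rotated by theta."""
    t = np.tan(theta)
    x2 = (x1 + R * t) / (R - x1 * t) * R
    y2 = R * y1 / (R * np.cos(theta) + x1 * np.sin(theta))
    return x2, y2
```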

Head rotation rotates the retina, which changes the retinal image. The EIA must therefore be deduced in reverse, so that the image on the rotated retina remains the same as that obtained in the front view.

Equation (9), the inverse mapping, is obtained from Eq. (8).

$$\left( {\begin{array}{c} {{x_1}}\\ {{y_1}} \end{array}} \right) = \left( {\begin{array}{c} {\frac{{{x_2} - R\tan \theta }}{{R + {x_2}\tan \theta }}R}\\ {\frac{{R\cos \theta - R\tan \theta + {x_2}\sin \theta + {x_2}}}{{R + {x_2}\tan \theta }}{y_2}} \end{array}} \right)$$

To obtain the same result as the front view, we substitute ${x_2}: = {x_1}$ and ${y_2}: = {y_1}$ into Eq. (9), giving the pre-compensated coordinates of Eq. (10).

$$\left( {\begin{array}{c} {{x_{new}}}\\ {{y_{new}}} \end{array}} \right) = \left( {\begin{array}{c} {\frac{{{x_1} - R\tan \theta }}{{R + {x_1}\tan \theta }}R}\\ {\frac{{R\cos \theta - R\tan \theta + {x_1}\sin \theta + {x_1}}}{{R + {x_1}\tan \theta }}{y_1}} \end{array}} \right)$$

Point A1 in Fig. 6 is replaced by the point $({x_{new}},{y_{new}})$, and the new point A on the EIA can then be found by combining this with Eq. (4) to update the EIA.
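The pre-compensation of Eq. (10), transcribed directly as published (our sketch):

```python
import numpy as np

def precompensate_view_pixel(x1, y1, R, theta):
    """Eq. (10): pre-compensated retina coordinates so that the rotated
    retina still receives the front-view content."""
    t = np.tan(theta)
    x_new = (x1 - R * t) / (R + x1 * t) * R
    y_new = (R * np.cos(theta) - R * t
             + x1 * np.sin(theta) + x1) / (R + x1 * t) * y1
    return x_new, y_new
```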

3. EIA design for the specified viewpoint

To fundamentally reduce the generation of crosstalk light, an optimal EIA that conforms to the human eye viewing mechanism should be designed for the specified viewpoint. In this paper, the physical location of each pixel on the EIA no longer corresponds strictly to its lens; instead, the pixels on the EIA are rearranged according to the specific light field information. The key is to calculate, for each point in the light field, the corresponding pixel position on the EIA. The calculation proceeds as follows.

On the basis of Section 2, the position coordinates of the eyes are defined as the left viewpoint ${E_L}({{a_L},{b_L},{c_L}} )$ and the right viewpoint ${E_R}({{a_R},{b_R},{c_R}} )$. The viewpoint must lie outside the CDP, so ${c_L} > {Z_{\max }},{c_R} > {Z_{\max }}$. The pixel location in the light field is then introduced into Eq. (4). All light received by a viewpoint comes directly from the lens array; therefore, taking the viewpoint position as the starting point, we calculate the coordinates of its intersection with the lens plane along the line of sight, and then locate the pixel coordinates on the EIA, laying the foundation for rearranging the EIA.

Taking the EIP serving the left viewpoint as an example, $P({{x_\textrm{0}},{y_\textrm{0}},{z_\textrm{0}}} )$ is a point in the reconstructed light field, and $D({{p_\textrm{L}},{q_\textrm{L}},0} )$ is the intersection of the extension line ${E_L}P$ with the lens array plane (Fig. 7). Equation (11) follows from the geometric light path model.

$$\left( {\begin{array}{c} {{p_L}}\\ {{q_L}} \end{array}} \right) = \frac{{{c_L}}}{{{c_L} - {z_0}}}\left( {\begin{array}{c} {{x_0}}\\ {{y_0}} \end{array}} \right) + \frac{{{z_0}}}{{{z_0} - {c_L}}}\left( {\begin{array}{c} {{a_L}}\\ {{b_L}} \end{array}} \right)$$

Fig. 7. Schematic diagram of the optical path analysis of the left viewpoint.

To calculate the corresponding pixel point $A({{x_\textrm{L}},{y_\textrm{L}},u} )$ on the EIP, the optical center $E({{e_\textrm{L}},{h_\textrm{L}},0} )$ of the lens containing $({{p_\textrm{L}},{q_\textrm{L}}} )$ in the lens array plane is obtained, and then combined with Eq. (4) to give Eq. (12), the mapping from the lens array plane to the EIP. The light entering the viewpoint is $P{E_L}$, and its only light source on the EIP is $A({{x_\textrm{L}},{y_\textrm{L}},u} )$. Combining Eq. (11) and Eq. (12) gives Eq. (13), the mapping from the light field to the EIP:

$$\left( {\begin{array}{c} {{x_L}}\\ {{y_L}} \end{array}} \right) = \frac{{u({{c_L} - v} )}}{{v{c_L}}}\left( {\begin{array}{c} {{p_L}}\\ {{q_L}} \end{array}} \right) + \frac{u}{{{c_L}}}\left( {\begin{array}{c} {{a_L}}\\ {{b_L}} \end{array}} \right) + \left( {1 - \frac{u}{v}} \right)\left( {\begin{array}{c} {{e_L}}\\ {{h_L}} \end{array}} \right)$$
$$\left( {\begin{array}{c} {{x_L}}\\ {{y_L}} \end{array}} \right) = \frac{{u({c_L} - v)}}{{v({{c_L} - {z_0}} )}}\left( {\begin{array}{c} {{x_0}}\\ {{y_0}} \end{array}} \right) + \frac{{u({{z_0} - v} )}}{{v({{z_0} - {c_L}} )}}\left( {\begin{array}{c} {{a_L}}\\ {{b_L}} \end{array}} \right) + \left( {1 - \frac{u}{v}} \right)\left( {\begin{array}{c} {{e_L}}\\ {{h_L}} \end{array}} \right)$$

The pixel value of point $P({{x_\textrm{0}},{y_\textrm{0}},{z_\textrm{0}}} )$ in the light field is thus the pixel value at coordinate $({{x_\textrm{L}},{y_\textrm{L}}} )$ on the EIP. Note that the mappings in Eq. (12) and Eq. (13) are not linear (${z_\textrm{0}} = f({x_\textrm{0}},{y_\textrm{0}})$ is the surface function of the light field). Similarly, the EIP serving the right viewpoint is obtained from Eq. (14):

$$\left( {\begin{array}{c} {{x_R}}\\ {{y_R}} \end{array}} \right) = \frac{{u({c_R} - v)}}{{v({c_R} - {z_\textrm{0}})}}\left( {\begin{array}{c} {{x_\textrm{0}}}\\ {{y_\textrm{0}}} \end{array}} \right) + \frac{{u({z_\textrm{0}} - v)}}{{v({z_\textrm{0}} - {c_R})}}\left( {\begin{array}{c} {{a_R}}\\ {{b_R}} \end{array}} \right) + (1 - \frac{u}{v})\left( {\begin{array}{c} {{e_R}}\\ {{h_R}} \end{array}} \right)$$
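An end-to-end sketch of Eqs. (11)–(13) (our code): it maps a light field point P to its EIP pixel for one viewpoint. The helper `lens_center`, which snaps a lens-plane hit point to the optical center of the lens containing it, is our assumption of a square lens grid of pitch `pitch`; the paper does not specify this lookup.

```python
import numpy as np

def lens_center(p, q, pitch):
    # Assumed square lens grid: optical center of the lens containing (p, q).
    return ((np.floor(p / pitch) + 0.5) * pitch,
            (np.floor(q / pitch) + 0.5) * pitch)

def lightfield_to_eip(x0, y0, z0, a, b, c, u, v, pitch):
    # Eq. (11): intersection D = (p, q) of the line viewpoint->P with the lens plane.
    p = c / (c - z0) * x0 + z0 / (z0 - c) * a
    q = c / (c - z0) * y0 + z0 / (z0 - c) * b
    e, h = lens_center(p, q, pitch)
    # Eq. (12): mapping from the lens array plane to the EIP (cf. Eq. (4)).
    x = u * (c - v) / (v * c) * p + u / c * a + (1 - u / v) * e
    y = u * (c - v) / (v * c) * q + u / c * b + (1 - u / v) * h
    return x, y
```

Running this for every surface point of the light field, with the left (or right) viewpoint as (a, b, c), fills the corresponding EIP pixels; Eq. (13) (and Eq. (14) for the right eye) expresses the same mapping in closed form.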

4. Multi-view optimization

Ideally, according to the principle of binocular stereo perception, the left and right eyes should each collect their corresponding viewpoint content, including disparity information (Fig. 8). This section therefore describes optimal EIAs that serve the left and right eyes independently and simultaneously, without overlapping. For convenience, the EI serving the left view is abbreviated as “left EI”, and the EI serving the right view as “right EI”.

Fig. 8. EIA layout diagram for dual-viewpoints.

Equation (12) can be rearranged as $\left( {\begin{array}{c} {{x_L} - {e_L}}\\ {{y_L} - {h_L}} \end{array}} \right) - \frac{u}{{{c_L}}}\left( {\begin{array}{c} {{a_L}}\\ {{b_L}} \end{array}} \right) + \frac{u}{{{c_L}}}\left( {\begin{array}{c} {{e_L}}\\ {{h_L}} \end{array}} \right) = \frac{{u({c_L} - v)}}{{v{c_L}}}\left( {\begin{array}{c} {{p_L} - {e_L}}\\ {{q_L} - {h_L}} \end{array}} \right)$. The compression ratio $\frac{{u(c - v)}}{{vc}}$ shows that only a few pixels on the EIP serve a specified viewpoint; the rest are invisible to that viewpoint, which provides an important premise for multi-view analysis and for expanding the viewing angle. We first analyze dual viewpoints for an observer facing the reconstructed light field, so ${b_L} = {b_R} = b,{c_L} = {c_R} = c$, and ${a_R} - {a_L}$ is the interocular distance. The condition $\frac{{u(c - v)}}{{vc}} < 1$ ensures that adjacent EIs do not intersect.

$$\left( {\begin{array}{c} {{x_{LO}}}\\ {{y_{LO}}} \end{array}} \right) = \frac{u}{{{c_L}}}\left( {\begin{array}{c} {{a_L}}\\ {{b_L}} \end{array}} \right) + \left( {1 - \frac{u}{{{c_L}}}} \right)\left( {\begin{array}{c} {{e_L}}\\ {{h_L}} \end{array}} \right)$$

Moreover, the left EI and the right EI must not overlap. The center point $({x_{LO}},{y_{LO}})$ of each left EI is obtained from Eq. (15), and the center point $({x_{RO}},{y_{RO}})$ of each right EI in the same way. If the lens radius is r, the radius of the corresponding EI display area is $\frac{u}{v}\left( {1 - \frac{v}{c}} \right)r$, and the coordinate difference between the centers corresponding to two center points $({x_{O1}},{y_{O1}})$ and $({x_{O2}},{y_{O2}})$ on the EIP is given by Eq. (16).

$$\left( {\begin{array}{c} {{x_{O1}} - {x_{O2}}}\\ {{y_{O1}} - {y_{O2}}} \end{array}} \right) = \left( {1 - \frac{u}{c}} \right)\left( {\begin{array}{c} {{e_1} - {e_2}}\\ {{h_1} - {h_2}} \end{array}} \right) > \frac{u}{v}\left( {1 - \frac{v}{c}} \right)\left( {\begin{array}{c} {{e_1} - {e_2}}\\ {{h_1} - {h_2}} \end{array}} \right)$$

The left EI and the right EI must satisfy two conditions to avoid overlap: (1) the side length of the EI is smaller than the lens radius, so that each EI has enough layout space, that is, $\frac{{u(c - v)}}{{vc}} \le \frac{1}{2}$; (2) $\max ({|{{e_1} - {e_2}} |,|{{h_1} - {h_2}} |} )\ge \textrm{2}r$, so that $\max ({|{{x_{\textrm{O}1}} - {x_{\textrm{O}2}}} |,|{{y_{\textrm{O}1}} - {y_{\textrm{O}2}}} |} )\ge \left( {1 - \frac{u}{c}} \right)r > \frac{u}{v}\left( {1 - \frac{v}{c}} \right)r$. The minimum distance between the center points of the left EI and the right EI must be greater than the side length of the EI so that the display areas do not overlap, as shown in Eq. (17):

$$\left|{\left( {\begin{array}{c} {{x_{LO}} - {x_{RO}}}\\ {{y_{LO}} - {y_{RO}}} \end{array}} \right)} \right|= \left|{\frac{u}{c}({{a_L} - {a_R}} )+ \left( {1 - \frac{u}{c}} \right)({{e_L} - {e_R}} )} \right|> {P_L}$$

For a given viewer, the interocular distance ${a_L} - {a_R}$ is fixed (65 mm is used in the experiment). ${e_L} - {e_R}$ is the distance between the optical centers of two lenses; if each row of the lens array has $N$ lenses, then ${e_L} - {e_R}$ takes $N + 1$ fixed values. We substitute these values into $\left|{\frac{u}{c}({{a_L} - {a_R}} )+ \left( {1 - \frac{u}{c}} \right)({{e_L} - {e_R}} )} \right|$ to obtain the minimum value. The most effective accommodative factors are $u$ and $c$, which depend on the actual system characteristics and application requirements. The overlap analysis for lateral viewing (${\textrm{b}_L} \ne {\textrm{b}_\textrm{R}}$ or ${c_L} \ne {c_R}$) follows the same method and should satisfy Eq. (18) or Eq. (19).

$$\frac{{{c_L} - u}}{{{c_L}}}{e_L} + \frac{{u{a_L}}}{{{c_L}}} - \frac{{{c_R} - u}}{{{c_R}}}{e_R} - \frac{{u{a_R}}}{{{c_R}}} > \min (\frac{u}{v}(1 - \frac{v}{{{c_L}}})r,\frac{u}{v}(1 - \frac{v}{{{c_R}}})r)$$
$$\frac{{{c_L} - u}}{{{c_L}}}{h_L} + \frac{{u{b_L}}}{{{c_L}}} - \frac{{{c_R} - u}}{{{c_R}}}{h_R} - \frac{{u{b_R}}}{{{c_R}}} > \min (\frac{u}{v}(1 - \frac{v}{{{c_L}}})r,\frac{u}{v}(1 - \frac{v}{{{c_R}}})r)$$
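A small sketch of the minimization described above (our code; `pitch`, `N`, and the viewing distance are illustrative, and `ei_side` stands for the EI side length $P_L$ of Eq. (17)):

```python
import numpy as np

def left_right_eis_separated(u, c, eye_sep, pitch, N, ei_side):
    """Check Eq. (17) over the candidate lens-center differences e_L - e_R."""
    diffs = np.arange(-N, N + 1) * pitch        # candidate e_L - e_R values
    min_dist = np.min(np.abs(u / c * eye_sep + (1 - u / c) * diffs))
    return min_dist > ei_side

# Illustrative check: u = 3.2 mm, c = 500 mm, 65 mm interocular distance.
print(left_right_eis_separated(3.2, 500.0, eye_sep=-65.0,
                               pitch=1.0, N=60, ei_side=0.4))
```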

After setting aside the EIAs serving the left and right viewpoints, at least half of the remaining area on the EIP is free, so we use the regions without viewpoint information to serve more viewpoints and expand the viewing angle; Eq. (18) or Eq. (19) can then be used to calculate the EIA for more than two viewpoints. Taking the left and right viewpoints as centers and one point as the step, the radius of the viewpoint area is expanded, and EIAs for the newly added viewpoints are generated sequentially to fill all free areas of the EIP. In expanding the multi-viewpoint EIA, the existing EIA (the main-viewpoint EIA) must not be damaged. After obtaining the binocular-viewpoint EIA, the remaining EIA points are filled point by point in a non-covering way, in order of increasing distance from the main viewpoints, as sketched below. This improves both the brightness of the EIA and the anti-shake range of the viewpoint. Our method can minimize crosstalk while expanding the viewing angle, but the generation speed of the EIA is reduced.
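A high-level sketch of that fill order, as we read it from the text (the authors' implementation is not published; `render_eia` is a hypothetical routine returning the pixel assignments of one viewpoint's EIA):

```python
def fill_multi_view(eip, main_viewpoints, candidate_viewpoints, render_eia):
    """Fill free EIP pixels with auxiliary-viewpoint EIAs, nearest viewpoints
    first, never overwriting pixels already assigned (non-covering fill)."""
    def dist_to_main(vp):
        return min(sum((a - b) ** 2 for a, b in zip(vp, m)) ** 0.5
                   for m in main_viewpoints)
    for vp in sorted(candidate_viewpoints, key=dist_to_main):
        for pixel, value in render_eia(vp).items():
            eip.setdefault(pixel, value)  # keep main-viewpoint pixels intact
    return eip
```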

5. Experimental results

To demonstrate the soundness of the proposed algorithm, the quality of the reconstructed light field was verified in two respects: light field display density and viewing angle. Three models produced in 3ds Max (3D modeling software) were selected as the original scene (Fig. 9(a)), and a color image and corresponding depth image captured by a virtual depth camera serve as the source data for generating the EIA. Our algorithm does not use a traditional camera array to obtain the EIA; in essence, it is a simplified EIA generation method combining depth information and pixel mapping. The algorithm was implemented in Delphi, and the compiled executable was run on a PC with an Intel Core i7-7500U@2.70 GHz. The experimental platform for optical reconstruction is shown in Fig. 9(f), and the optical reconstruction parameters are listed in Table 1.

Fig. 9. Configuration of light field reconstruction experiment (a) 3Dsmax modeling process (b) Model A (rabbit) (c) Depth map of Model A (d) EI corresponding to Model A (e) Model C (yellow octopus) (f) Optical experimental equipment (g) Model B (skull) (h) Depth map of model B (i) EI corresponding to model B (j) EI corresponding to Model C.

Table 1. Optical experimental parameters

5.1 Light field display density

The resolution of the reconstructed light field is mainly determined by the EIP resolution and the lens size. Therefore, once the EIP resolution and the lens array distribution parameters are fixed, the spatial resolution of the light field cannot be improved. Our algorithm substantially reduces ray crosstalk in the light field and thus greatly improves the clarity and visual smoothness of the view at the same field resolution. According to Eq. (13) and Eq. (14), ${\left[ {\frac{{v(c - {z_\textrm{0}})}}{{u(c - v)}}} \right]^2}$ points in the original scene are mapped to the same point (x, y) on the EIP. Taking the experimental parameters in this paper as an example, the resolution of each EI is 17 × 17 and the resolution of the original scene (Model B) is 800 × 800, so EIA calculation is a resolution-compression process. The depth variation of the light field surface influences the value of ${\left[ {\frac{{v(c - {z_\textrm{0}})}}{{u(c - v)}}} \right]^2}$; therefore, under this precondition, the higher the sampling density of the scene, the higher the display density of the reconstructed light field. The density coefficient α = $\frac{{u(c - v)}}{{v(c - {z_0})}}$ (${c_L}\textrm{ = }{c_R} = c$) is the key to the display density. α is proportional to $u$, but u cannot be too large, being limited by the condition that the binocular EIs cannot overlap ($\frac{{u(c - v)}}{{vc}} \le \frac{1}{2}$). According to the parameters of the experimental equipment, the optimal value of u is 3.2 mm. In selecting sampling points, points in the light field are chosen and mapped to the EIA according to the specific viewpoint positions. Compared with the indiscriminate mapping of traditional algorithms, this sampling suppresses crosstalk rays and improves the visual clarity of the reconstructed light field.
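For concreteness, a small sketch of this trade-off (our code and our algebra: substituting v = uf/(u − f) into the non-overlap bound u(c − v)/(vc) ≤ 1/2 gives u ≤ 3fc/(2(c − f)); the f and c below are placeholders, not Table 1 values):

```python
def density_coefficient(u, v, c, z0):
    """alpha = u(c - v) / (v(c - z0)), the display-density coefficient."""
    return u * (c - v) / (v * (c - z0))

def max_u_without_overlap(f, c):
    """Largest u satisfying u(c - v)/(v c) <= 1/2 with v = u f/(u - f)."""
    return 1.5 * f * c / (c - f)

print(max_u_without_overlap(f=3.0, c=500.0))  # ~4.53, same length unit as f, c
```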

We compared the actual performance of the proposed algorithm with several EIA generation algorithms [17–19] that do not consider the position of the human eye, using Model A as the test target. The pixel size and resolution of the light fields reconstructed by the different algorithms are the same, but the quality differs, as shown in Fig. 10. For example, the CFB method clearly reconstructs objects at smaller depths (the leaves in Fig. 10(d)), but the images of objects at larger depths (the rabbit) cannot maintain the same clarity. In contrast, the light correlation between the depth layers of our reconstructed light field is strong, and objects at every depth layer are displayed clearly, which better matches the actual viewing experience of human eyes. The imaging of the edge region also shows that our algorithm yields higher image smoothness, stronger cohesion between pixels, and smooth transitions across the multiple depth layers of the reconstruction.

Fig. 10. Display effect of optical reconstruction (a) Our method (b) UIS algorithm [17] (c) MDII algorithm [18] (d) CFB method [19].

Fig. 11. Viewing result of light field constructed by UIS algorithm at ${\pm}$20° viewpoint.

5.2 Viewing angle of the light field

Combining the light field reconstruction model for a specific viewpoint with the corresponding EIA rearrangement increases the viewing range. Model B was adopted as the test target for the viewing angle, and Model C as the test target for viewpoint continuity. The viewing angle of the UIS algorithm is 40°, with obvious crosstalk at ${\pm}$20° (Fig. 11). This is a periodic truncation problem caused by parts of the image from the correct viewing area and from the adjacent viewing area entering the human eye simultaneously.

In contrast, with our algorithm a viewer standing at the edge of the viewing angle receives exactly the light field information serving that viewpoint. We photographed the light field of our algorithm from multiple perspectives (Fig. 12); the photographs display a true and complete 3D light field within a viewing angle of 62°, which also verifies that the integral imaging system exhibits both horizontal and vertical disparity. The many edge details of the stereo image show that our algorithm has strong crosstalk resistance: edge artifacts are reduced, and the effective viewing angle range is expanded.

Fig. 12. Viewing result of light field constructed by our algorithm at ${\pm}$31° viewpoint.

Fig. 13. Structure similarity (SSIM) comparison of reconstructed images of different algorithms.

To further evaluate the quality of the reconstructed light field based on the human eye viewing mechanism, the optically reconstructed stereo images from the different algorithms are compared with the ideal computationally reconstructed images (Model B). We use the structural similarity index (SSIM) [20] as the objective evaluation metric; SSIM is an evaluation method that conforms to the characteristics of the human visual system. An SSIM value greater than 0.88 indicates that the reconstructed image has a clear outline, and values close to 1 indicate a similar structure. As shown in Fig. 13, the SSIM values of 10 viewpoints are compared. The SSIM of the traditional algorithm ranges from 0.6 to 0.8, indicating that the reconstructed light field is distorted. Our algorithm keeps the SSIM between 0.93 and 0.96 over a wide viewing angle range, and the structure is much closer to the model, which objectively verifies the crosstalk resistance of our algorithm.
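As a reproducibility aid, SSIM can be computed with scikit-image's `structural_similarity`; the paper does not state which SSIM implementation it used, so this is only an equivalent sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(reconstructed: np.ndarray, ideal: np.ndarray) -> float:
    """SSIM between a photographed reconstruction and the ideal image,
    both H x W x 3 uint8 arrays (channel_axis needs scikit-image >= 0.19)."""
    return structural_similarity(reconstructed, ideal,
                                 channel_axis=2, data_range=255)
```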

Fig. 14. Viewing result of light field constructed by our algorithm between 0°∼40° viewpoint.

Section 1 mentioned the partial truncation and ghost problems caused by crosstalk rays; the ghost problem means that a discontinuous light field is observed as the observer moves. Model C was therefore used to verify the viewpoint continuity and color retention of our algorithm within the viewing angle range. We photographed the light field of Model C over the range 0°∼40°, obtaining five images at 10° intervals between adjacent viewpoints, to check that the three-dimensional images seen from closely adjacent viewpoints transition smoothly. From the overall appearance of the images and the marked detail positions (Fig. 14), the reconstructed light field exhibits no ghosting.

6. Summary

To address the narrow viewing angle and the light field quality degradation caused by the large amount of crosstalk light in integral imaging systems, this paper regards the human eye as a simplified optical system connected to the display stage, analyzes the close relationship between the crosstalk phenomenon and the viewing position of the human eye, and proposes an integral imaging reconstruction system based on the human eye viewing mechanism. First, we analyze which light sources in the reconstructed light field are actually visible from different viewing positions, for the two common postures (front view and lateral view). Then, the optimal EIA based on the viewing position of the human eye is designed in reverse to fundamentally suppress crosstalk. Finally, multi-view optimization is performed. The experimental results show that our method increases the number of correct rays in the light field, greatly improving visual clarity and expanding the viewing angle of the full 3D view. The display quality of the reconstructed light field is comprehensively improved, showing a three-dimensional effect close to the real scene.

Funding

Project of Jilin Scientific and Technological Development Program (20220201062GX); Scientific Research of Education Department of Jilin Province (JJKH20220770KJ).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Ren, Q.-H. Wang, Y. Xing, M. Zhao, L. Luo, and H. Deng, “Super-multiview integral imaging scheme based on sparse camera array and CNN super-resolution,” Appl. Opt. 58(5), A190–A196 (2019). [CrossRef]  

2. K. C. Kwon, K. H. Kwon, M. U. Erdenebat, Y. L. Piao, and N. Kim, “Resolution-Enhancement for an Integral Imaging Microscopy Using Deep Learning,” IEEE Photonics J. 11(1), 1–12 (2019). [CrossRef]  

3. X. L. Ma, R. Y. Yuan, L. B. Zhang, M. Y. He, H. L. Zhang, Y. Xing, and Q. H. Wang, “Augmented reality autostereoscopic 3D display based on sparse reflection array,” Opt. Commun. 510, 127913 (2022). [CrossRef]  

4. R. Li, H.-L. Zhang, F. Chu, and Q.-H. Wang, “Compact integral imaging 2D/3D compatible dis-play based on liquid crystal micro-lens array,” Liq. Cryst. 49(4), 512–522 (2022). [CrossRef]  

5. J. Yim, Y. M. Kim, and S. W. Min, “Real object pickup method for real and virtual modes of integral imaging,” Opt. Eng. 53(07), 1 (2014). [CrossRef]  

6. X. Li, L. Lei, and Q.-H. Wang, “Wavelet-based iterative perfect reconstruction in computational integral imaging,” J. Opt. Soc. Am. A 35(7), 1212–1220 (2018). [CrossRef]

7. Y. Terashima, S. Suyama, and H. Yamamoto, “Aerial depth-fused 3D image formed with aerial imaging by retro-reflection (AIRR),” Opt. Rev. 26(1), 179–186 (2019). [CrossRef]  

8. Yu Wang, Jinxiao Yang, Le Liu, and Yan Piao, “Computational Reconstruction of Integral Imaging Based on Elemental Images Stitching,” Acta Opt. Sin. 39(11), 1110001 (2019). [CrossRef]

9. G. Park, J.-H. Jung, K. Hong, Y. Kim, Y.-H. Kim, S.-W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009). [CrossRef]  

10. H. Watanabe, N. Okaichi, H. Sasaki, and M. Kawakita, “Pixel-density and viewing-angle enhanced integral 3D display with parallel projection of multiple UHD elemental images,” Opt. Express 28(17), 24731–24746 (2020). [CrossRef]  

11. Y. Kim, J.-H. Park, H. Choi, S. Jung, S.-W. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421–429 (2004). [CrossRef]  

12. J. Y. Jang, H. S. Lee, S. Cha, and S. H. Shin, “Viewing angle enhanced integral imaging display by using a high refractive index medium,” Appl. Opt. 50(7), B71–B76 (2011). [CrossRef]  

13. G. Baasantseren, J.-H. Park, K.-C. Kwon, and N. Kim, “Viewing angle enhanced integral imaging display using two elemental image masks,” Opt. Express 17(16), 14405–14417 (2009). [CrossRef]  

14. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

15. F. Guangfei, C. Linsen, W. Guojun, and G. Xinyu, “Computational Reconstruction Algorithm for Integral Imaging Based on Diffraction Tracing,” Acta Opt. Sin. 36(5), 0511003 (2016). [CrossRef]  

16. B. Javidi, A. Carnicer, J. Arai, T. Fujii, H. Hua, H. Liao, M. Martínez-Corral, F. Pla, A. Stern, L. Waller, Q.-H. Wang, G. Wetzstein, M. Yamaguchi, and H. Yamamoto, “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28(22), 32266–32293 (2020). [CrossRef]  

17. L. Deng, P. Yan, and Y. Gu, “Naked-eye 3D display with undistorted imaging system based on human visual system,” Jpn. J. Appl. Phys. 59(9), 092006 (2020). [CrossRef]  

18. M. U. Erdenebat, B. J. Kim, Y. L. Piao, S. Y. Park, and N. Kim, “Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging,” Appl. Opt. 56(28), 7796–7802 (2017). [CrossRef]  

19. S. Yang, X. Sang, X. Yu, X. Gao, and B. Yan, “Analysis of the depth of field for integral imaging with consideration of facet braiding,” Appl. Opt. 57(7), 1534–1540 (2018). [CrossRef]  

20. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]


