
Depth plane adaptive integral imaging system using a vari-focal liquid lens array for realizing augmented reality

Open Access

Abstract

Augmented reality (AR) is an interactive experience of a real-world environment in which the objects that reside in the real world are enhanced by computer-generated perceptual information. Despite its attractive features, AR has not become popular because of the visual fatigue that many people experience when using it. Many methods have been introduced to solve this visual fatigue problem, one of which is an integral imaging system that provides images with almost continuous viewpoints and full parallax. However, an integral imaging system that uses a lens array with a fixed focal length has a limited depth of focus (DOF) range, so images that lie outside the DOF range become distorted. In this paper, a vari-focal liquid lens array was fabricated and its optical characteristics were evaluated. Using the vari-focal liquid lens array, the DOF range was extended and high-resolution images were realized without restriction of the depth range in an AR system.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Augmented reality (AR) [1–3] has recently attracted attention as a system for sharing and extending human experiences. AR is a technology that expands our physical world by adding virtual content onto it such that the content appears to be actually present in the real world. AR has shown huge potential for various applications, such as education and training, games, medicine, exhibitions, and tourism, but some limitations impede its popularization. The main limitation is the fatigue that users feel when they experience AR. This fatigue is caused by the vergence-accommodation conflict (VAC), a well-known problem in the stereoscopic 3D display field that arises from the mismatch between the binocular disparity of a stereoscopic image and the optical focus cue [4,5]. It has been reported that 50–80% of those who have experienced AR show symptoms of fatigue [6,7].

Many studies have been conducted to solve the VAC problem through holographic displays [8–10], volumetric displays [11–13], multi-focal displays [14–16], and light field displays [17–19]. Among them, the integral imaging based light field, which provides almost continuous viewpoints and full parallax, is attracting attention as a way to solve the VAC problem [20–24]. In addition, because the hardware complexity of an integral imaging based light field is low and its optical configuration is compact compared to other methods, it is well suited to a head mounted display (HMD).

There are two display modes for integral imaging based light field displays: depth priority integral imaging (DPII) and resolution priority integral imaging (RPII) [25]. DPII is achieved by matching the distance between the elemental images (EIs) and the lens to the focal length of the lens. With this method, the lateral resolution is quite low but the depth of focus (DOF) is very wide, so virtual objects can be observed over a wide depth range. RPII, on the other hand, sets the distance between the EIs and the lens to be longer than the focal length. This method has high lateral resolution but a limited depth range due to the narrow DOF. Many studies have therefore been conducted to extend the DOF of the RPII method [26–28]. However, these methods require mechanical movement [26], more than one display [27], or multiple planes [28], which makes the system bulky and impedes its application to HMDs. For this reason, researchers have recently applied a single liquid lens to the HMD [29,30]. In this case, however, the image size of the projected elemental image must be adjusted and space is required for the projection; in addition, because a single liquid lens is relatively large, its response is slow.

In this paper, a vari-focal liquid lens array was fabricated to extend the DOF in an integral imaging based light field display system. To increase the dioptric power and fill factor of the vari-focal liquid lens array, a hybrid lens with an added solid lens array structure was used. The optical characteristics of the hybrid lens were evaluated and the lens was applied to an AR system. As a result, high resolution images over a wide depth range were confirmed. Finally, a time-multiplexing method as well as an eye-tracking method were realized. The vari-focal liquid lens array is expected to make optical systems more compact and suitable for use in HMDs.

2. Concept of depth plane adaptive integral imaging system

Integral imaging is an autostereoscopic 3D imaging technique that captures and reproduces a light field by using a display and a lens array. By displaying elemental images generated with a software program (3ds Max) and placing the lens array in front of the display, 3D images can be observed [31]. Because the set of rays emitted by each point of the 3D object is reproduced, the user's eyes focus on the perceived distance of the image, which eliminates the VAC problem. In general, a lens array with a fixed focal length is used, and a plane called the focused image plane, or central depth plane (CDP), is determined by the simple lens law [32]. Because the CDP is fixed, the limited depth range of integral imaging becomes an issue. This can be overcome by using a vari-focal optical element. In the present study, an electro-wetting liquid lens array that can control its focal length was fabricated and introduced. The vari-focal lens array was attached in front of the micro-display and an eyepiece lens was placed after the vari-focal lens array. The focal length of the liquid lens array can be changed by applying a voltage, so the CDP can be moved back and forth. Therefore, the expressible depth range of an image can be extended, as shown in Fig. 1.

Fig. 1. Concept of the depth plane adaptive integral imaging system.

For example, when expressing the letter ‘A’ at a near position, the focal length of the lens array is adjusted to match the position of the CDP with the letter ‘A’, and the same method is applied to express the letter ‘C’ at a far position. There are two ways to realize the proposed system: a time-multiplexing method and an eye-tracking method. The time-multiplexing method expresses several depth planes simultaneously by time division. The eye-tracking method controls the focal length of the lens array to match the CDP with an object after obtaining the position of the object that the eye focuses upon.

3. Design and specifications of the proposed system

The design of the proposed system is shown in Fig. 2. The liquid lens array is attached in front of the micro-display (Sony, ECX335), and an eyepiece lens is placed after the liquid lens array. Elemental images from the micro-display are integrated and an intermediate image is formed through the liquid lens array; the intermediate image is then magnified through the eyepiece lens, forming the final object image. There are many considerations when designing the proposed system, such as the expressible depth range, field of view, resolution, and form factor. We determine the specifications of the proposed system so as to obtain the target expressible depth range, which was set from 30cm, the minimum distance over which the human eye can focus, to 100cm; objects near 100cm present accommodation similar to optical infinity. The final object image distance (do) can be calculated by the lens law, which is given by Eq. (1):

$$\frac{1}{d_{ep}} - \frac{1}{d_o - d_e} = \frac{1}{f_{ep}} \;\Leftrightarrow\; d_o = \frac{f_{ep}\,d_{ep}}{f_{ep} - d_{ep}} + d_e, \quad \textrm{for}\; 0 < d_{ep} < f_{ep}$$
where de is the distance between the eye and the eyepiece lens, fep is the focal length of the eyepiece lens, dep is the distance between the eyepiece lens and the intermediate image, and do is the distance between the eye and the final object image. In an AR system, an optical component such as a beam splitter is required to combine the real world with the virtual images. This optical combiner is placed between the eyes and the eyepiece lens and requires some space; here, we set the eye distance (de) to 40mm. Because the values of do and de are fixed, we can infer the relationship between fep and dep. For example, assuming the optical power range of the vari-focal lens array is constant and the variation of the intermediate image plane (Δdep) is 1.8mm, if fep is 30mm, the expressible depth range becomes 30cm to 100cm, corresponding to our target depth range, and in this case dep is 27.3mm. However, if fep is 50mm, the expressible depth range is only 30cm to 41.8cm, and in this case dep is 42.9mm. Therefore, the longer the focal length of the eyepiece lens, the narrower the expressible depth range becomes, and the distance between the eyepiece lens and the intermediate image (dep) becomes larger, which results in a bulky system. We therefore need an eyepiece lens with a short focal length. However, there is a constraint in reducing the focal length of the eyepiece lens: because it is impractical to manufacture a lens with a focal length significantly less than its diameter, the lens must be small in order to have a short focal length. Because the eyepiece lens acts as the aperture of the entire system, if its size is decreased, the field of view becomes smaller (a numerical sketch of Eqs. (1)–(3) follows Eq. (3)). The field of view is represented by Eq. (2):
$$\alpha = 2\arctan\left[\min\left(\frac{w_{ep}}{2d_e}, \frac{M_o w_d}{2 d_o}\right)\right], \quad M_o = \frac{d_o - d_e}{d_{ep}}$$
where α is the field of view, wep is the width of the eyepiece lens, wd is the width of the micro-display, and Mo is the magnification by the eyepiece lens. In consideration of this correlation, an eyepiece lens with a diameter of 25.4mm, which can cover the micro-display, was selected. The eyepiece lens is a plastic aspherical lens with a focal length of 30mm. We also selected a high resolution micro-display with a small pixel size: the resolution is 1920×1080, the active area is 15.36mm×8.64mm, and the pixel size is 8µm. Since an eyepiece lens with a 30mm focal length is used, the change of the intermediate image plane (Δdep) should be 1.8mm to obtain the expressible depth range from 30cm to 100cm, and dep should be 27.3mm. To obtain this target variation of the intermediate image plane (1.8mm), the specifications required for the liquid lens array should be examined. First, the size of the unit lens should be considered. In an AR system, the size of a unit lens of the lens array (wL) has little effect on the field of view or the resolution of the 3D images, which is represented by Eq. (3):
$$N = 2\arctan\left(\frac{p M_I M_o}{2 d_o}\right), \quad M_I = \frac{d_I}{d_L}$$
where N is the spatial resolution of the 3D images, p is the pixel size of the micro-display, dL is the distance between the liquid lens array and the display, dI is the distance between the liquid lens array and the intermediate image, and MI is the magnification by the liquid lens array. The spatial resolution is calculated as the angular separation of a single pixel in visual space. From the point of view of the liquid lens, however, a smaller unit lens increases the optical power range, which can lead to a wider expressible depth range [33], and reduces the response time of the lens [34]. Lastly, as the size of the lens is reduced, more bundles of rays emitted from one point of the 3D image enter the pupil, so users experience more natural images. A smaller unit lens is accordingly more advantageous, but there are restrictions because the size of the liquid lens is constrained by the size of the solid lens.
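To make these design relations concrete, the following minimal Python sketch evaluates Eqs. (1)–(3) under thin-lens assumptions with the specifications quoted above (fep = 30mm, de = 40mm, wep = 25.4mm, wd = 15.36mm, p = 8µm). The dep sweep and the magnification MI are illustrative assumptions of ours; small differences from the quoted values (e.g., dep = 27.3mm) may stem from rounding or from where distances are referenced.

```python
import math

# Illustrative check of Eqs. (1)-(3) under thin-lens assumptions; the
# d_ep sweep and M_I value are our own choices, not the paper's data.

F_EP, D_E = 30.0, 40.0        # eyepiece focal length, eye distance (mm)
W_EP, W_D = 25.4, 15.36       # eyepiece and micro-display widths (mm)
P_PIX = 0.008                 # 8 um pixel pitch (mm)

def d_o(d_ep):
    """Eq. (1): eye-to-image distance for 0 < d_ep < f_ep (mm)."""
    return F_EP * d_ep / (F_EP - d_ep) + D_E

def fov_deg(d_ep):
    """Eq. (2): FOV limited by the eyepiece aperture or the display."""
    m_o = (d_o(d_ep) - D_E) / d_ep
    return 2 * math.degrees(math.atan(min(W_EP / (2 * D_E),
                                          m_o * W_D / (2 * d_o(d_ep)))))

def resolution_arcmin(d_ep, m_i=2.0):
    """Eq. (3): angular size of one magnified pixel (M_I assumed)."""
    m_o = (d_o(d_ep) - D_E) / d_ep
    return 2 * 60 * math.degrees(math.atan(P_PIX * m_i * m_o / (2 * d_o(d_ep))))

for d_ep in (27.0, 28.0, 29.1):   # ~2 mm sweep of the intermediate image plane
    print(f"d_ep = {d_ep:4.1f} mm: d_o = {d_o(d_ep) / 10:5.1f} cm, "
          f"FOV = {fov_deg(d_ep):4.1f} deg, "
          f"N = {resolution_arcmin(d_ep):4.2f} arcmin")
```

The sweep shows a roughly 2mm shift of the intermediate image plane carrying the final image across approximately the 30cm to 100cm target range, while the field of view remains display-limited at about 28 degrees.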

Fig. 2. Design of the depth plane adaptive integral imaging system.

Figure 3(a) shows the design of the unit lens of the liquid lens array. To fabricate the liquid lens array, there must be hole arrays that can contain the liquids. Each unit lens is separated by a partition wall, which disturbs users viewing integral imaging. A solid lens array is therefore attached to the liquid lens array to make the partition walls invisible; this type of lens array is called a hybrid lens. When selecting a solid lens, the size of the unit lens and the fill factor should be considered. The fill factor of a hexagonal arrangement is higher than that of a rectangular arrangement [35], although it is difficult to fabricate a hexagonal solid lens with a small lens size. Here, a hexagonal solid lens with a unit lens size of 1mm was selected (Fresnel Technologies); its focal length is 3mm and the thickness of the solid lens array is 1.6mm. From the bottom, the glass, the gasket, the chamber, the top gasket, the ITO glass, and the solid lens are stacked. The thickness of each layer is designed as 0.3mm for the glasses, 0.1mm for the gaskets, and 0.3mm for the chamber; glass thinner than 0.3mm breaks easily, so 0.3mm glass was selected. The total size of the electro-wetting lens array is 17mm×11mm to align with the micro-display, whose total size is 18mm×12mm including the bezel. Second, the distance between the liquid lens and the solid lens should be considered. The factors affecting the variation of the intermediate image are the change of the contact angle between the two liquids, the focal length of the solid lens, and the distance between the liquid lens array and the solid lens array. Among them, we controlled the intermediate image plane range by adjusting the distance between the conductive liquid meniscus and the solid lens array, i.e., the thickness of the conductive liquid. The position and range of the intermediate image plane were calculated with the ray transfer (ABCD) matrix method and the measured contact angle, whose range is from 146 degrees to 98 degrees. Figure 3(b) is a graph showing the theoretical intermediate image plane values for varying thicknesses of the conductive liquid; the intermediate plane distance is measured from the flat surface of the solid lens array. As the voltage becomes higher, the intermediate image plane moves away from the hybrid lens array, and as the thickness of the conductive liquid increases, the initial focal plane forms farther from the hybrid lens array. The initial states of the 0.55mm to 1.15mm cases are not plotted because these states were concave. In the 1.15mm case, the intermediate image variation is 2.1mm, which is enough to realize the target depth range, so the thickness of the conductive liquid was selected as 1.15mm. Third, the response time of the liquid lens array should be considered. To realize the time-multiplexing and eye-tracking methods, the response of the liquid lens must be rapid. For the time-multiplexing method, since the frame rate of the micro-display is 60Hz, a response time of 16.6ms or less is required to realize images with different depths. For the eye-tracking method, eye movements are detected by an eye-tracking device. It typically takes approximately 80ms for a person to adjust their focus cue and recognize a new object; this can be as short as 50ms depending on the type of object. Since a commercial eye-tracking sensor responds in 16.6ms (60Hz), the lens operation should be completed within 33.4ms for a stable system. The specifications for the proposed system are presented in Table 1.
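The following is a minimal sketch of such a ray transfer matrix calculation, under thin-lens assumptions. The meniscus radius model R = a/cos(θ), the display distance, and the liquid-to-solid-lens gap used here are our own simplifications for illustration; the paper's full model (thick elements, liquid immersion) will give different absolute numbers.

```python
import math

# Sketch: trace an axial ray from the display through the liquid
# meniscus and the solid lens to locate the intermediate image plane.
# Indices and contact angles are from the text; separations are assumed.

N_COND, N_NONCOND = 1.435, 1.633   # refractive indices of the two liquids
A_HOLE = 0.45                      # hole radius in mm (0.9 mm diameter)
F_SOLID = 3.0                      # solid lens focal length in mm

def meniscus_power(theta_deg):
    """Thin-lens power (1/mm) of the liquid-liquid interface."""
    r = A_HOLE / math.cos(math.radians(theta_deg))   # signed meniscus radius
    return (N_NONCOND - N_COND) / r

def intermediate_image_mm(theta_deg, d_display=4.0, gap=1.5):
    """Image distance behind the solid lens (negative = virtual image)."""
    y, u = 0.0, 0.1                      # axial display point, ray angle 0.1 rad
    y += d_display * u                   # propagate display -> liquid lens
    u -= meniscus_power(theta_deg) * y   # refract at the liquid meniscus
    y += gap * u                         # propagate liquid -> solid lens
    u -= y / F_SOLID                     # refract at the solid lens
    return -y / u                        # where the ray recrosses the axis

for theta in (146, 120, 98):             # measured contact-angle range
    print(f"theta = {theta:3d} deg -> image plane at "
          f"{intermediate_image_mm(theta):7.2f} mm")
```

With these assumed distances, the meniscus power at θ = 146° comes out near −365D, within the −542D to −89D range measured in Section 4.2. Note, however, that with these arbitrary separations the image plane moves toward the lens as the contact angle falls, whereas the fabricated stack (Fig. 3(b)) shows the plane moving away as voltage rises; the sketch illustrates the method, not the exact device geometry.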

Fig. 3. (a) Design of the liquid lens array combined with the solid lens. (b) The theoretical position and range of the intermediate image plane of the hybrid lens array according to the thickness of the conductive liquid.

Table 1. Specifications for the proposed integral imaging system

4. Electro-wetting lens array

4.1. Electro-wetting liquid lens array fabrication

The operating principle of the vari-focal liquid lens array is electro-wetting, which dynamically modifies the contact angle by changing the voltage applied to the conductive liquid; the meniscus between two immiscible liquids is used as an optical lens [36,37]. Figures 4(a)–4(h) show the entire fabrication process of the electro-wetting liquid lens array. To minimize reactions between the liquids and the chamber, photosensitive glass was selected as the chamber material. The overall chamber glass size is 17mm×11mm×0.3mm. To make hole arrays in the chamber, a patterned mask was placed on the glass and it was illuminated with light of a particular wavelength (355nm). The chamber was then baked in an oven at high temperature, and the hole arrays were formed in the photosensitive glass by wet etching. The diameter of each hole is 0.9mm and the width of the sidewall is 0.1mm; the total number of holes is 160, as shown in Fig. 5(a). A 300nm copper electrode was deposited by sputtering, and an insulating layer was deposited by chemical vapor deposition. Sputtering is a good method for depositing the electrode on the sidewall of a hole with a high aspect ratio. For the electro-wetting effect, electrical separation (a dielectric) and a hydrophobic surface are required on the electrode, as provided by a parylene-C layer. The insulating layer plays an important role in preventing current from flowing between the electrode and the conductive liquid; when the insulating layer breaks down, current passes through it and leads to bubbles. Here, a 2µm thick parylene-C layer was deposited to prevent breakdown. Another important feature of the liquid lens array is the uniformity of each unit lens. To maintain uniformity, a 0.1mm gasket was attached to the bottom of the chamber to connect all the lenses through one channel. To seal the liquids, housings were made with a 3D printer. The chamber was inserted into the bottom housing and the non-conductive liquid, whose refractive index is 1.633, was injected; thanks to the one-channel structure, the level of the non-conductive liquid is constant. After immersing the chamber in the conductive liquid, whose refractive index is 1.435, the chamber was covered with the upper housing, and the upper and lower housings were joined with silicone. Since the refractive indices of the two immiscible liquids are not equal, a refracting surface is formed. Both liquids have the same density of 1.2g/cm3; this density matching makes the device insensitive to gravity. The final prototype of the liquid lens array has a rectangular shape, as shown in Fig. 5(b). The overall size is 26mm×17.6mm×2.6mm and the total weight is 2.1g.
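The voltage dependence of the contact angle is commonly described by the Young–Lippmann equation. The sketch below evaluates it with the paper's initial angle (146°) and parylene-C thickness (2µm); the dielectric constant of parylene-C (~3.1) and the liquid–liquid interfacial tension are assumed values, so the curve is indicative only.

```python
import math

# Sketch of the Young-Lippmann relation that governs electro-wetting:
# cos(theta_V) = cos(theta_0) + eps_r * eps_0 * V^2 / (2 * gamma * t).
# theta_0 and t come from the text; eps_r and gamma are assumptions.

EPS_0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R = 3.1            # parylene-C dielectric constant, approximate
T_DIELECTRIC = 2e-6    # m, deposited parylene-C thickness
GAMMA = 0.04           # N/m, assumed liquid-liquid interfacial tension
THETA_0 = 146.0        # deg, contact angle at 0 V

def contact_angle_deg(voltage):
    c = math.cos(math.radians(THETA_0)) + \
        EPS_R * EPS_0 * voltage**2 / (2 * GAMMA * T_DIELECTRIC)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))  # clamp domain

for v in (0, 20, 40, 60):
    print(f"{v:2d} V -> contact angle ~ {contact_angle_deg(v):5.1f} deg")
```

With these assumptions the angle falls from 146° at 0V to roughly 102° at 60V, close to the measured 98° endpoint quoted in Section 3.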

Fig. 4. The fabrication process of the electro-wetting liquid lens array. (a) Exposure. (b) Baking. (c) Wet etching. (d) Electrode and insulation layer deposition. (e) Glass and gasket bonding. (f) Assembling and dosing. (g) Immersion. (h) Sealing.

Fig. 5. (a) The photosensitive glass with hole arrays. (b) The final prototype of the liquid lens array.

4.2. Evaluation of liquid lens array and hybrid lens array characteristics

To measure the focal length, the optical setup was arranged as shown in Fig. 6(a). A red laser (632.8nm) illuminates an objective lens, and a micro-pinhole is placed after the objective lens to remove higher diffraction orders. The light passing through the micro-pinhole has a constant divergence angle, and parallel light is obtained by placing a convex lens at a distance corresponding to its focal length. When the parallel light passes through the liquid lens, it spreads, because the initial state of the liquid lens is concave. When the spread light passes through another objective lens, a spot can be observed on the CCD; the distance between this objective lens and the CCD is kept constant. The liquid lens array was first placed at a position where the focus was on the lens surface, as shown in Fig. 6(b). The CCD conjoined with the objective lens was then moved toward the laser to find the minimum spot, as shown in Fig. 6(c). The distance moved by the CCD and objective lens is then the focal length of the liquid lens array.

Fig. 6. (a) Optical setup for measuring the focal length of the liquid lens array on the table top. (b) The beam spot measured when the CCD camera was focused on the surface of the liquid lens array. (c) After moving the objective lens and the CCD camera toward the liquid lens array, the minimum beam spot was found. The moved distance is the focal length of the liquid lens array.

The measurement was carried out five times and an average value was obtained. The dioptric power of the liquid lens array was measured from −542D to −89D, corresponding to focal lengths from about −2mm to −12mm. As the voltage was increased, the focal length of the concave lens became longer, and when 60V is applied, the surface of the liquid lens becomes almost flat. The liquid lens array alone thus never reaches a convex state. This diopter range cannot be used in an HMD, because a convex lens is needed to enlarge the image. The problem is solved by using the optical power of the solid lens array. A microscopic image and a schematic diagram of the hybrid lens are presented in Fig. 7(a). By combining the liquid lens array with the solid lens array, the dioptric power was measured from 43D to 235D and the focal length from 23mm to 4mm, as shown in Fig. 7(b). The dioptric power was increased by combining the solid lens array, as shown in Fig. 7(c).
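A rough way to see why attaching the solid lens array shifts the whole range positive is the two-thin-lens combination formula P = P1 + P2 − d·P1·P2. In the sketch below, the separation d is an assumed value and the lenses are treated as thin and in air, so the result only brackets the measured 43D to 235D hybrid range approximately.

```python
# Sketch of combining the measured liquid-lens power with the solid
# lens array via the two-thin-lens formula. The separation d between
# the liquid meniscus and the solid lens vertex is an assumption.

F_SOLID_M = 0.003          # 3 mm solid lens focal length -> ~333 D
P_SOLID = 1 / F_SOLID_M

def hybrid_power(p_liquid_d, d_sep_m=0.0015):
    """Combined power (diopters) of the liquid + solid lens pair."""
    return p_liquid_d + P_SOLID - d_sep_m * p_liquid_d * P_SOLID

for p_liquid in (-542, -300, -89):   # measured liquid-lens power range
    print(f"P_liquid = {p_liquid:5d} D -> P_hybrid ~ {hybrid_power(p_liquid):6.1f} D")
```

Even this crude model shows the strongly negative liquid lens being pulled into a positive range of tens to hundreds of diopters by the roughly 333D solid lens.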

Fig. 7. (a) Microscopic images and schematic diagram of the lens array. The top is the liquid lens array and the bottom is the hybrid lens array. (b) The dioptric power and focal length of the hybrid lens array depending on the voltage. (c) Graph comparing the dioptric power of the hybrid lens array with that of the liquid lens array.

As discussed in Section 3, the solid lens array also makes the sidewalls invisible, as shown in Fig. 7(a): the solid lens magnifies the lens aperture to fill the whole area. The aberration of the liquid lens array was measured using a Shack–Hartmann wavefront sensor, which measures the wavefront shape of incident light [38]. The optical setup is shown in Fig. 8(a). Before the liquid micro lens array was measured, the wavefront sensor was calibrated without the lens array to subtract the systematic aberration from the measurement, as shown in Fig. 8(b). The liquid lens array was then inserted. The collimated beam generated by the objective lens, the micro-pinhole, and convex lens 1 is irradiated onto the liquid lens array; the image passing through the liquid lens is enlarged and captured by the CCD, as shown in Fig. 8(c).

Fig. 8. (a) Optical setup for measuring the aberration of the liquid lens array on the table top. (b) Schematic diagram of the optical setup for calibration and (c) for lens aberration measurement.

The wavefront shape of the liquid lens was analyzed while increasing the voltage, as shown in Fig. 9(a). The liquid lens is a concave lens, and as more voltage is applied, its shape becomes flat. Figure 9(b) shows the wavefront of the hybrid lens: the initial state is convex, and as the voltage is increased it becomes more and more convex. The wavefront error includes typical aberrations such as astigmatism, coma, and spherical aberration. When only the liquid lens array is measured, spherical aberration is the largest contributor. Although the coma and astigmatism worsen as voltage is applied, the total wavefront error is reduced, because the improvement in spherical aberration is larger than the degradation of the other aberrations. In the case of the hybrid lens array, the wavefront error is further reduced because the solid lens (convex) receives light from the center of the liquid lens (concave) rather than from its edge, as shown in Fig. 9(c). At high voltage, the liquid interface becomes almost flat; the influence of the liquid lens is then reduced and the wavefront error becomes small, meaning that the hybrid lens array works almost like a solid lens array. In the hybrid lens structure, the solid lens becomes more influential as the voltage is applied. Because the wavefront error decreased significantly above 40V, we decided to use voltages above 40V.
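The trade-off described above, in which worsening coma and astigmatism are outweighed by improving spherical aberration, follows directly from how a Shack–Hartmann measurement is summarized: with an orthonormal Zernike expansion, the total RMS wavefront error is the root-sum-square of the coefficients. The coefficient values in this sketch are purely illustrative, not the measured data.

```python
import math

# Sketch: total RMS wavefront error from orthonormal Zernike
# coefficients is the root-sum-square of the coefficients.

def rms_wavefront_error(zernike_coeffs_um):
    """RMS wavefront error (um) from orthonormal Zernike coefficients."""
    return math.sqrt(sum(c * c for c in zernike_coeffs_um.values()))

# Hypothetical decompositions before and after applying voltage:
low_voltage  = {"astigmatism": 0.05, "coma": 0.04, "spherical": 0.60}
high_voltage = {"astigmatism": 0.09, "coma": 0.07, "spherical": 0.10}

for name, coeffs in (("low V", low_voltage), ("high V", high_voltage)):
    print(f"{name}: RMS = {rms_wavefront_error(coeffs):.3f} um")
```

With these example numbers, the large drop in the spherical term dominates the small rises in coma and astigmatism, so the total RMS error falls as voltage is applied, mirroring the measured trend.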

Fig. 9. The wavefront shape of the (a) liquid lens and (b) hybrid lens depending on the voltage. (c) Wavefront error of the liquid and hybrid lens arrays depending on the voltage.

The response time of the fabricated lens array was measured with a high-speed camera. Lens operation video clips were taken at 1200fps, i.e., one frame per 0.83ms. A total of 14 frames was needed for the focal length to change completely, corresponding to a response time of 11.6ms. Figure 10 shows the magnification change as the frames pass. This is faster than the 16.6ms we targeted; however, for applications that require higher frame rates, the response time should be reduced further.

Fig. 10. Response time measurement with a high speed camera (1200fps).

5. Experimental results

5.1. Optimization of the hybrid lens array configuration

As discussed in Section 3, the thickness of the conductive liquid should be 1.15mm to obtain the target expressible depth range. However, the experimental results differed slightly from the theoretical ones. The optimization process of the hybrid lens array is shown in Fig. 11(a). The experimental intermediate image plane formed farther away than the theoretical value, as shown in Fig. 11(b), because the fabricated samples were slightly thicker than expected; the tendency, however, is similar to the theory. As mentioned before, since the wavefront error becomes small above 40V, we used voltages above 40V. The change of the focal plane was then 1.1mm, 1.9mm, and 4mm for the 0.55mm, 0.85mm, and 1.15mm cases, respectively. We realized virtual objects located at 30cm, 50cm, and 100cm for each sample, as shown in Figs. 12(a)–12(c). In the 0.55mm case, the virtual object could be located only up to 55cm because the intermediate image plane variation of 1.1mm is insufficient, as shown in Fig. 12(a). In the 0.85mm case, it was possible to represent the image from 30cm to 100cm, as shown in Fig. 12(b), and the 1.15mm case reached 100cm with relatively low voltage, as shown in Fig. 12(c). However, in the 1.15mm case the image quality is poor, because the magnification becomes larger as the initial intermediate image plane moves farther away. In conclusion, if the thickness of the hybrid lens is increased, the intermediate image plane variation becomes larger but the resolution is lowered, as shown in Fig. 12(d); an appropriate thickness is therefore required. Here, the 0.85mm case was determined to be optimal for realizing high quality images within the target depth range.

Fig. 11. (a) The optimization process of the hybrid lens array. (b) Experimental position and range of the intermediate image plane of the hybrid lens array according to the thickness of the conductive liquid.

Fig. 12. Captured virtual images when different voltages were applied to each sample having (a) 0.55 mm, (b) 0.85 mm, and (c) 1.15 mm conductive liquid thickness. (d) Magnified images for each letter.

Fig. 13. (a) Optical setup for the AR system. (b)–(d) Captured images of both real and virtual objects after applying (b) 40 V, (c) 50 V, and (d) 60 V while the camera focuses on (b) 30 cm, (c) 50 cm, and (d) 100 cm. (e) Captured image of the object located at 100 cm when the CDP is set at 30 cm. (f) Captured image of the object located at 30 cm when the CDP is set at 100 cm.

5.2. Image test in augmented reality system

Figure 13(a) shows that the virtual letters ‘A’, ‘B’, and ‘C’ are placed at 30cm, 50cm, and 100cm, and that the virtual images and the external environment can be seen simultaneously through the beam splitter (1 inch). Correspondingly, papers with the real letters ‘A’, ‘B’, and ‘C’ are placed at 30cm, 50cm, and 100cm. Figure 13(b) shows that when 40V is applied to the liquid lens array, the CDP is formed at 30cm: the virtual object ‘A’ and the real object ‘A’ are both seen clearly at the same position when the camera focuses at 30cm. Similarly, when 50V or 60V is applied, the CDP is formed at 50cm or 100cm and the virtual letters ‘B’ and ‘C’ are clearly seen, as shown in Figs. 13(c)–13(d). Figure 13(e) shows the case where the CDP is formed at 30cm while the camera focuses on the real letter ‘C’; compared to Fig. 13(d), the virtual object ‘C’ becomes blurrier. In Fig. 13(f), the CDP is moved to 100cm and the camera focuses on the real letter ‘A’ located at 30cm; here, too, the virtual object ‘A’ becomes blurrier than in Fig. 13(b). As a result, the CDP can be dynamically adjusted through the liquid lens array, so the expressible depth range can be extended with high quality images.

5.3. Time-multiplexing method

Figure 14(a) shows images taken by applying different voltages after displaying a rectangular image located at 30cm; the camera focus is adjusted according to the voltage. Figure 14(b) shows the line contrast at the rectangular boundary. The CDP moves farther away as the voltage is increased, so the boundary of the image becomes blurry because the gap between the image position and the CDP grows. The line contrast is best at 40V and degrades as the voltage is increased. When the CDP is at 30cm, the DOF range corresponds approximately to the distances covered from 40V to 45V. The rectangular image was then placed 100cm from the camera and the CDP was adjusted with different voltages, as shown in Fig. 14(c). When 60V is applied, the CDP is located at 100cm and a clear image is obtained; the line contrast is then the best, as shown in Fig. 14(d). When the CDP is at 100cm, the DOF range corresponds approximately to the distances covered from 50V to 60V. The DOF range becomes longer as the voltage increases. Based on these results, it is possible to cover the DOF from 30cm to 100cm using only two voltage levels, 40V and 60V: applying 40V for one frame and 60V for the next frame alternately covers the whole range. Of course, the separately rendered elemental images must be displayed alternately, corresponding to the voltage. Since the refresh rate of the OLED micro-display is 60Hz, each image operates at a refresh rate of 30Hz.
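A minimal sketch of the resulting drive scheme is shown below: two lens voltages and the matching pre-rendered elemental image sets alternate frame by frame. The display and lens-driver interfaces are hypothetical stand-ins, not an actual API.

```python
import time

# Sketch of the time-multiplexing drive: alternate two lens voltages
# and the matching elemental images at 60 Hz, so each CDP refreshes at
# 30 Hz. `display` and `lens_driver` are hypothetical device objects.

FRAME_PERIOD = 1 / 60          # 16.6 ms display frame (Sony ECX335)
PLANES = [                     # (lens voltage, elemental image set)
    (40.0, "elemental_images_cdp_30cm"),
    (60.0, "elemental_images_cdp_100cm"),
]

def run_time_multiplexing(display, lens_driver, n_frames=600):
    for frame in range(n_frames):
        voltage, images = PLANES[frame % len(PLANES)]
        lens_driver.set_voltage(voltage)   # lens settles in ~11.6 ms < 16.6 ms
        display.show(images)               # image rendered for this CDP
        time.sleep(FRAME_PERIOD)           # crude pacing; real systems use vsync
```

The scheme works only because the measured 11.6ms lens response fits inside one 16.6ms display frame; a slower lens would force a lower per-plane refresh rate and worse flicker.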

Fig. 14. (a) Captured rectangular images located at 30 cm and (b) line contrast at the rectangular boundary when applying voltages from 40 V to 60 V. (c) Captured rectangular images located at 100 cm and (d) line contrast at the rectangular boundary when applying voltages from 40 V to 60 V.

Figure 15(a) shows the rendered letter ‘C’, located at 100cm, displayed when 60V is applied, and Fig. 15(b) shows the rendered letter ‘A’, located at 30cm, displayed when 40V is applied. Switching the voltages and rendered images quickly extends the DOF from 30cm to 100cm, and the real letters ‘A’ and ‘C’ can be seen simultaneously through this time-multiplexing method. In this case the brightness is halved, but the virtual objects ‘A’ and ‘C’ are clearly displayed at the same time, as shown in Fig. 15(c). This method, however, has the disadvantage of flicker due to the low refresh rate; the problem could be solved if the refresh rate of the micro-display were higher.

Fig. 15. (a) Frame 1 image when applying 60 V while the camera focuses on 100 cm. (b) Frame 2 image when applying 40 V while the camera focuses on 30 cm. (c) Captured image with the time-multiplexing method. In order to focus on both depths, the DOF of the camera was increased intentionally.

5.4. Eye-tracking method

An eye-tracking method can be realized in which the corresponding voltage is applied to match the CDP with an object after obtaining the position of the object that the eye focuses upon. An eye-tracking camera (Pupil Labs) was used, and different voltages were applied according to the position where the eye focuses. When the eye looks at the real object ‘A’, the eye-tracking camera calculates the gaze position and 40V is applied; when the real object ‘C’ is viewed, 60V is applied, as shown in Figs. 16(a)–16(c). High quality images can therefore be obtained continuously without restriction of the depth range. Here we used only one eye-tracking camera, but if two eye-tracking cameras were used, the correct depth could be obtained and the appropriate voltage applied.
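The control logic reduces to mapping the estimated fixation depth to the voltage whose CDP is nearest, as in the following sketch; the gaze-depth input and the lens-driver interface are hypothetical stand-ins for the Pupil Labs pipeline and the drive electronics.

```python
# Sketch of the eye-tracking drive logic: pick the voltage whose CDP
# is closest to the estimated fixation depth. The depth input and
# `lens_driver` are hypothetical stand-ins, not an actual API.

CDP_VOLTAGES = {0.30: 40.0, 0.50: 50.0, 1.00: 60.0}   # CDP depth (m) -> volts

def voltage_for_gaze_depth(depth_m: float) -> float:
    """Return the voltage whose CDP is nearest the fixation depth."""
    nearest = min(CDP_VOLTAGES, key=lambda d: abs(d - depth_m))
    return CDP_VOLTAGES[nearest]

def on_gaze_sample(depth_m: float, lens_driver) -> None:
    """Callback per gaze sample: retune the lens array to that depth."""
    lens_driver.set_voltage(voltage_for_gaze_depth(depth_m))

# e.g. a fixation estimated at 0.42 m would drive the lens at 50 V.
```

Because the lens settles in about 11.6ms and human refocusing takes 50 to 80ms, this per-sample retuning comfortably meets the 33.4ms budget discussed in Section 3.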

Fig. 16. Captured images of both real and virtual letters with the eye-tracking method. When the eye focuses on (a) letter ‘A’, (b) letter ‘B’, and (c) letter ‘C’, 40 V, 50 V, and 60 V were applied, respectively (see Visualization 1).

5.5. Image resolution evaluation

The USAF resolution chart was used to measure the resolution at each depth; the charts were captured by a camera at 30cm, 50cm, and 100cm. The pattern in the red box represents the smallest pattern that can be resolved by the eye. The captured images in Figs. 17(a)–17(c) resolve patterns down to 4.58 arcmin, 6.87 arcmin, and 13.75 arcmin at 30cm, 50cm, and 100cm, respectively. The theoretical and experimental resolutions differ because the pixel light passing through each lens does not converge exactly on the same point; eliminating the aberrations and instability of the liquid lens array is expected to give a higher resolution.
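For reference, the conversion from a resolved USAF-1951 pattern to an angular value works as in the sketch below: the line-pair frequency is 2^(group + (element − 1)/6) lp/mm, and the angle subtended by one line pair at the viewing distance gives the resolution in arcminutes. The group/element input is illustrative, not the measured chart reading.

```python
import math

# Sketch: convert a resolved USAF-1951 group/element at a given viewing
# distance into angular resolution in arcminutes.

def usaf_arcmin(group: int, element: int, distance_mm: float) -> float:
    lp_per_mm = 2 ** (group + (element - 1) / 6)   # USAF-1951 definition
    line_pair_mm = 1 / lp_per_mm                   # one black + white pair
    return math.degrees(math.atan(line_pair_mm / distance_mm)) * 60

for d in (300, 500, 1000):                         # chart distances in mm
    print(f"group 1, element 3 at {d:4d} mm -> "
          f"{usaf_arcmin(1, 3, d):5.2f} arcmin")
```

For example, a pattern resolved at group 1, element 3 viewed from 30cm subtends about 4.6 arcmin, on the order of the value measured here at that depth.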

Fig. 17. Captured USAF chart images at (a) 30 cm, (b) 50 cm, and (c) 100 cm with the camera focusing on 30 cm, 50 cm, and 100 cm, respectively.

6. Conclusion

To realize a depth plane adaptive integral imaging system, we designed the proposed system and fabricated a vari-focal lens array that changes its focal length by electro-wetting. A hybrid lens combining a liquid lens array and a solid lens array was introduced to make the sidewalls invisible and to increase the dioptric power; the dioptric power of the hybrid lens array was measured from 43D to 235D and the focal length from 23mm to 4mm. In addition, the wavefront error was reduced by attaching the solid lens array. By adjusting the thickness of the conductive liquid, we found the optimum configuration of the liquid lens array and solid lens array that enables high quality images within the depth range from 30cm to 100cm, and applied it to the AR system. Using the vari-focal liquid lens array, the CDP can be moved appropriately, resulting in high resolution without restriction of the depth range. Furthermore, we extended the DOF to the whole range by using the time-multiplexing method, and the CDP was changed in real time according to where the eye focuses by using the eye-tracking method. Vari-focal liquid lens arrays are expected to make optical systems more compact and suitable for use in AR HMDs.

Funding

Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government, Development of Fundamental Technology of Core Components for Augmented and Virtual Reality Devices (2017-0-01803).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, “Recent advances in augmented reality,” IEEE Comput. Grap. Appl. 21(6), 34–47 (2001).

2. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented reality technologies, systems and applications,” Multimed. Tools Appl. 51(1), 341–377 (2011).

3. H. Hua, “Augmented virtual environments,” Optics and Photonics News, 26–33 (2006).

4. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 33 (2008).

5. C. M. Schor and T. K. Tsuetaki, “Fatigue of accommodation and vergence modifies their mutual interactions,” Invest. Ophthalmol. Visual Sci. 28(8), 1250–1259 (1987).

6. K. M. Stanney and K. S. Hale, Handbook of Virtual Environments: Design, Implementation, and Applications (CRC Press, 2014).

7. K. M. Stanney, K. S. Hale, I. Nahmens, and R. S. Kennedy, “What to expect from immersive virtual environment exposure: Influences of gender, body mass index, and past experience,” Hum. Factors 45(3), 504–520 (2003).

8. J. F. Heanue, M. C. Bashaw, and L. Hesselink, “Volume holographic storage and retrieval of digital data,” Science 265(5173), 749–752 (1994).

9. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).

10. H. J. Yeom, H. J. Kim, S. B. Kim, H. Zhang, B. Li, Y. M. Ji, and J. H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23(25), 32025–32034 (2015).

11. B. G. Blundell and A. J. Schwarz, “The classification of volumetric display systems: characteristics and predictability of the image space,” IEEE Trans. Visual. Comput. Graphics 8(1), 66–75 (2002).

12. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. Giovinco, M. J. Richmond, and W. S. Chun, “100-million-voxel volumetric display,” in Cockpit Displays IX: Displays for Defense Applications, Proc. SPIE 4712, 300–313 (2002).

13. A. Sullivan, “58.3: A solid-state multi-planar volumetric display,” SID Symp. Dig. Tech. Papers 34(1), 1531–1533 (2003).

14. J. P. Rolland, M. W. Krueger, and A. Goon, “Multifocal planes head-mounted displays,” Appl. Opt. 39(19), 3209–3215 (2000).

15. S. C. McQuaide, E. J. Seibel, J. P. Kelly, B. T. Schowengerdt, and T. A. Furness III, “A retinal scanning display system that produces multiple focal planes with a deformable membrane mirror,” Displays 24(2), 65–72 (2003).

16. S. Liu, Y. Li, P. Zhou, X. Li, N. Rong, S. Huang, and Y. Su, “A multi-plane optical see-through head mounted display design for augmented reality applications,” J. Soc. Inf. Disp. 24(4), 246–251 (2016).

17. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546–560 (2013).

18. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 220 (2013).

19. F. C. Huang, D. P. Luebke, and G. Wetzstein, “The light field stereoscope,” in ACM SIGGRAPH Emerging Technologies (2015).

20. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7(1), 821–825 (1908).

21. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001).

22. Y. Taguchi, T. Koike, K. Takahashi, and T. Naemura, “TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters,” IEEE Trans. Visual. Comput. Graphics 15(5), 841–852 (2009).

23. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22(11), 13484–13491 (2014).

24. W. Song, Y. Wang, D. Cheng, and Y. Liu, “Light field head-mounted display with correct focus cue using micro structure array,” Chin. Opt. Lett. 12(6), 060010 (2014).

25. F. Jin, J. S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29(12), 1345–1347 (2004).

26. B. Lee, S. Jung, S. W. Min, and J. H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26(19), 1481–1482 (2001).

27. S. W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42(20), 4186–4195 (2003).

28. D. Q. Pham, N. Kim, K. C. Kwon, J. H. Jung, K. Hong, B. Lee, and J. H. Park, “Depth enhancement of integral imaging by using polymer-dispersed liquid-crystal films and a dual-depth configuration,” Opt. Lett. 35(18), 3135–3137 (2010).

29. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018).

30. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26(13), 17578–17590 (2018).

31. H. Deng, Q. H. Wang, and D. H. Li, “The realization of computer generated integral imaging based on two step pickup method,” in 2010 Symposium on Photonics and Optoelectronics (IEEE, 2010), pp. 1–3.

32. C. J. Kim, M. Chang, M. Lee, J. Kim, and Y. H. Won, “Depth plane adaptive integral imaging using a varifocal liquid lens array,” Appl. Opt. 54(10), 2565–2571 (2015).

33. D. Shin, J. Kim, C. Kim, J. Lee, G. H. Koo, J. H. Sim, and Y. H. Won, “3-D image crosstalk reduction by controlling the width of the electrode in a liquid lenticular lens,” IEEE Photonics J. 10(4), 1–12 (2018).

34. J. Hong, Y. K. Kim, K. H. Kang, J. M. Oh, and I. S. Kang, “Effects of drop size and viscosity on spreading dynamics in DC electrowetting,” Langmuir 29(29), 9118–9125 (2013).

35. P. Nussbaum, R. Voelkel, H. P. Herzig, M. Eisner, and S. Haselbeck, “Design, fabrication and testing of microlens arrays for sensors and microsystems,” Pure Appl. Opt. 6(6), 617–636 (1997).

36. B. H. W. Hendriks, S. Kuiper, M. V. As, C. A. Renders, and T. W. Tukker, “Electrowetting-based variable-focus lens for miniature systems,” Opt. Rev. 12(3), 255–259 (2005).

37. N. R. Smith, L. Hou, J. Zhang, and J. Heikenfeld, “Fabrication and demonstration of electrowetting liquid lens arrays,” J. Disp. Technol. 5(11), 411–413 (2009).

38. B. C. Platt and R. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17(5), S573–S577 (2001).

Supplementary Material (1)

Visualization 1: eye-tracking method demonstration.


