
LC-based lightfield camera prototype for rapidly creating target images optimized by finely adjusting several key coefficients and an LC-guided refocusing-rendering

Open Access

Abstract

A lightfield camera prototype is constructed by directly coupling a liquid-crystal (LC) microlens array with an arrayed photosensitive sensor, performing an LC-guided refocusing-rendering imaging supplemented by disparity-map computation and extraction of featured target contours. The proposed camera prototype can efficiently select the imaging clarity value of the electronic targets of interest. Two coefficients, the calibration coefficient k and the rendering coefficient C, are defined for quantitatively adjusting the LC-guided refocusing-rendering operations on the acquired images. A parameter Dp is also introduced for exactly expressing the local disparity of the selected electronic patterns. A parallel computing architecture based on a common GPU through the OpenCL platform is adopted to improve the real-time performance of the proposed imaging algorithms, which can effectively extract the pixel-level disparity and the featured target contours. In the proposed lightfield imaging strategy, the focusing plane can be easily selected and further adjusted by loading and varying the signal voltage applied over the LC microlenses, enabling rapid or even intelligent autofocusing. The research lays a solid foundation for continuously developing or upgrading current lightfield imaging approaches.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Lightfield cameras, also named plenoptic cameras or compound-eye cameras, have already become a type of high-performance imaging tool owing to their obvious merits in flexibly identifying and recording the imaging viewing-angle information of scenes or targets of interest. So far, lightfield cameras have been successfully utilized in many fields, for instance, four-dimensional (4D) imaging, refocused computational imaging in a single exposure [1,2], and so on. Compared with conventional cameras, the primary functional structure variance is the insertion of a built-in arrayed diffractive or refractive microlens between the photosensitive chip and the main lens, for effectively capturing images involving abundant viewing-angle clues of imaging microbeams. In general, traditional planar (two-dimensional or 2D) imaging is based only on capturing the light intensity or amplitude and then forming an image. Lightfield cameras, however, demonstrate significant advantages, for instance, presenting viewing-angle information with a defined angular resolution according to the sub-aperture configuration, and thereby a remarkable extension of the imaging depth far beyond the several-meter scale of the 2D approach. At present, lightfield cameras already present broad applications such as digital refocusing imaging, objective distance or target depth estimation, three-dimensional (3D) target reconstruction, multi-viewing-angle compound imaging, and dual-mode imaging based on wavefront measurement and correction [3–8].

Currently, two types of lightfield cameras are distinguished according to the microlens configuration. The first is the standard lightfield camera, also called lightfield camera 1.0, where the inserted diffractive or refractive microlens array with a convex or concave surface profile is placed at the focal plane of the main lens, and an arrayed photosensitive detector, such as a typical complementary-metal-oxide-semiconductor (CMOS) sensor, follows the microlenses at their focal length. Adelson et al. developed a theoretical lightfield camera to facilitate lightfield information collection in 1991 [9]. A prototype lightfield camera was then produced by Adelson and Wang after continuous efforts, a historic advance in lightfield imaging [10]. The lightfield imaging architecture was continuously improved by Ng et al., and consequently the first hand-held lightfield camera was built in 2005 by inserting a refractive microlens array between the main lens and the sensors [11]. Unfortunately, lightfield camera 1.0 suffers a significant drawback: it renders electronic images with a disappointingly low resolution. In 2009, Lumsdaine and Georgiev proposed a focused lightfield camera with a higher spatial resolution than the standard lightfield cameras [12], also called lightfield camera 2.0, where the distance from the microlens array to the main lens is less than the focal length of the main lens. Owing to their practicality and compactness, focused lightfield cameras are more widely employed in scientific research and industrial production surveillance.

As demonstrated, liquid crystal (LC) materials were first discovered by the Austrian botanist F. Reinitzer in 1888 [13], and further conceptualized by the German physicist O. Lehmann in 1900. In general, polar LC molecules, roughly viewed as electric dipoles, oscillate according to the spatial electric-field component of incident lightwaves, which means an excited lightwave re-emission from an LC molecular antenna. Furthermore, polar LC molecules tend to re-orient according to the applied spatial electric-field distribution, so as to re-shape a new refractive-index morphology. The above physical characteristics enable a type of LC-based microbeam adjustment for nearly real-time measuring or tuning of imaging lightwave parameters such as the amplitude, the wave-vector, the polarization, the spectrum, or the wavefront [14–18]. The Japanese scientist Sato proposed the first LC lens with a plano-convex or planar-concave appearance in 1979 [19]. Nose and Sato continued these efforts on LC lenses, developing LC microlenses with a basic micro-hole electrode and an indium-tin-oxide coating as an electrode couple for generating a non-uniform spatial electric field in the LC film sandwiched between them [20]. So far, various types of LC materials have been used in designing LC lenses because of the typical LC characteristics of pure phase modulation [21–28]. As shown, LC microlenses already present some obvious features, for instance, nearly continuous adjustment of wavefronts with customizable aberrations, tunable Zernike coefficients, electrically adjustable focus or phase of imaging beams, µW-scale power consumption, and the capability of controlling the optical axis of LC lenses [29–39]. Furthermore, LC lenses can function as spherical or aspherical lenses, freeform optical elements, and wavefront correctors for various optical applications [21,24,34,40–43].

Recently, lightfield cameras based on LC microlenses, which also allow electrical switching between electrically tunable lightfield imaging and conventional 2D imaging with a high spatial resolution of more than $10^8$ photosensitive pixels, have attracted much interest [44–47]. In 2015, Kwon et al. proposed a lightfield camera based on typical refractive microlenses with a gradient index profile from LC materials [48]. In the same year, Y. Lei et al. demonstrated an improved lightfield camera using an arrayed planar LC microlens with a relatively easy fabrication, which attracted extensive interest [49]. But the image rendering algorithm in Lei's work is still straightforward, directly stitching sub-aperture images according to the raw data of LC-based microbeam patterns, resulting in a relatively poor visual rendering. Z. Xin subsequently proposed several improved approaches [37,50] based on LC microlenses with electrically adjustable polarization of imaging microbeams. Furthermore, compound imaging schemes coupling the amplitude (used alone in traditional 2D imaging) with the wave-vector (for lightfield imaging) or the oscillating electric-field orientation (for polarization imaging) have been proposed and continuously optimized, resulting in a remarkably enhanced imaging efficiency in complicated circumstances. M. Chen further proposed an arrayed LC microlens with an adjustable sub-aperture for obviously extending the depth of field [47]. Additionally, an all-in-focus imaging algorithm based on focus stacking, also aimed at remarkably extending the depth of field, was proposed by M. Chen [36]. Currently, LC-based lightfield camera technology has advanced along the following lines: focus-tunable lightfield imaging, electrically selected multi-sub-aperture compound-eye imaging, and electrically switched dual-mode (2D@3D) imaging. However, the lightfield pattern rendering is still achieved by ordinary stitching algorithms with a relatively low efficiency and long time consumption, which cannot be utilized in a real-time process with a high frame frequency or a very large photosensitive pixel scale.

In this paper, a new LC-based lightfield camera prototype (LC-LCP), supplemented by rapid visual disparity recognition and post-processing, is proposed, which allows flexible adjustment of the imaging microbeam configuration of LC microlenses with basic circular sub-apertures. The key algorithmic steps, combined with LC-based microbeam adjustment and leading to high-quality lightfield images with abundant depth information, are as follows: position calibration, disparity recognition, featured contour extraction, and almost real-time refocusing-rendering. It should be noted that the proposed disparity recognition algorithm utilizes the redundant information from multi-view imaging to calculate pixel-level disparity maps and then efficiently extract the target contours based on the disparity features in a very short time. Compared with traditional stitching rendering algorithms, the proposed method can not only achieve a high-resolution refocusing-rendering of target images, but also realize a high-quality real-time rendering based on a traditional GPU deployment, with a typical rate of approximately 1 megapixel per millisecond for a common RGB image.

2. LC device and imaging architecture

The essential functional configuration and the imaging optical path of the LC-LCP are shown in Fig. 1. A partial layout of the key patterned aluminum electrode, formed by densely arranging circular micro-holes with an aperture of 100 µm and a period (center-to-center distance) of 125 µm in a basic hexagonal morphology, which leads to electrically controlled sandwiched LC microlenses, is illustrated in Fig. 1(a). As indicated, two silica wafers of ∼500 µm thickness are closely coupled to form a micro-cavity of ∼20 µm depth for an LC microlens array with the needed geometry. At first, a flat indium-tin-oxide film of ∼185 nm thickness is pre-coated over the surface of one silica wafer to shape a planar electrode. A layer of polyimide of ∼1.2 µm thickness is further deposited over the surface of the indium-tin-oxide electrode, and then rubbed using velvet or nylon along anti-parallel directions, so as to shape V-shaped grooves of ∼10 nm depth and ∼15 nm width with a distribution period of ∼50 nm for homogeneously aligning the LC molecules. An aluminum film of ∼100 nm thickness is further fabricated over the surface of the other silica wafer through common magnetron sputtering, and then patterned by conventional ultraviolet photolithography and wet etching, so as to shape the patterned aluminum electrode, which is also covered by a polyimide layer of ∼1.2 µm thickness over its surface. A patterning operation leading to a similar V-shaped groove arrangement as above is also applied to the polyimide layer over the aluminum electrode. An arrayed LC microlens can thus be formed by fully filling LC materials into the micro-cavity; the same initial LC anchoring orientation is obtained by setting the top and bottom electrode plates in parallel according to the polyimide groove orientation. In the micro-cavity, the LC materials are in direct contact with the shaped polyimide layers.

Fig. 1. Typical features of the LC-based lightfield camera prototype proposed. (a) Main micro-structure and key parameter configuration of a patterned aluminum electrode for shaping an arrayed LC microlens. (b) LC-based lightfield camera prototype constructed according to typical Galilean imaging mode. (c) Imaging optical path based on a main lens and an arrayed LC microlens.

In order to shape an ideal homogeneous alignment of the LC molecules sandwiched between the top and bottom electrodes, two common polyimide masks are introduced as a type of initial orientation stamper for strongly anchoring LC molecules according to their V-groove distribution. The basic technological process is as follows: the polyimide material (ZKPI-440 of POME Technology Co., Ltd., Beijing, China) is spin-coated over the surface of the electrode plate, prebaked on a hot plate for 10 min at 80°C, cured for 30 min at 230°C, and finally rubbed anti-parallel to shape the patterned surface V-groove fashion. To shape a micro-cavity with a defined depth, glass microsphere spacers of 20 µm diameter mixed with an adhesive are further deposited along the short sides of one electrode plate with a surface polyimide mask, effectively separating the two electrode plates. Then, a long rod-shaped nematic LC material (E44 of Merck) is fully filled into the micro-cavity through the capillary effect. After the fabrication of the LC architecture is finished, the long sides of the LC micro-cavity are adhesively sealed. The main parameters of the LC material include $n_e = 1.7904$, $n_o = 1.5277$, $\varepsilon_\perp = 5.2$, and $\varepsilon_{\parallel} = 22$.
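A quick back-of-the-envelope check (our own estimate, assuming the full ∼20 µm LC layer contributes under ideal homogeneous alignment): the maximum optical path difference available for lens shaping, versus the parabolic phase needed over the 100 µm aperture at the shortest quoted focal length, is

$$\Delta n\, d = ({n_e} - {n_o})\, d \approx 0.2627 \times 20\ \mu\mathrm{m} \approx 5.3\ \mu\mathrm{m}, \qquad \mathrm{OPD}_{\mathrm{lens}} = \frac{r^2}{2f} \approx \frac{(50\ \mu\mathrm{m})^2}{2 \times 0.7\ \mathrm{mm}} \approx 1.8\ \mu\mathrm{m},$$

so the birefringence budget comfortably covers the focal-length range reported below.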

The schematic diagram for constructing the prototyped LC-based lightfield camera according to the typical Galilean imaging mode is displayed in Fig. 1(b). As shown, a linear polarizer is firstly attached onto an H-shaped microlens frame and then combined with the LC microlens array to shape an entire LC component. Note that even though a polarizer attached to the LC microlenses for remarkably removing stray light means a lower light efficiency in the imaging optical path, a higher imaging clarity can be achieved by electrically adjusting the focal length of the LC microlens array. For several special application scenarios, such as dim ambient light or night vision, a direct method of increasing the exposure time of the imaging micro-system to enhance image brightness can be utilized to improve the imaging performance. In addition, the optical efficiency of the LC-LCP can be enhanced by directly inserting a twisted nematic LC device over the LC microlens array. By turning the twisted nematic LC cell on or off simply by applying an electrical signal over it, the two eigen-polarizations of an incident lightfield can be captured to compensate the light efficiency, which is validated by the experiments given in section 1 of Supplement 1. For more polarizer-free LC phase modulation methods, including the significant advances made by Y.-H. Lin, V. Reshetnyak, et al., see [22,24–27,29–31,35,51]. A chip of CMOS sensors is then coupled with the LC component according to the focal length of the LC microlenses. The LC component is located near the focal plane of the main lens using a lens connector, forming the final LC-based lightfield imaging architecture. An actual LC-LCP has been constructed using a Nikon lens with a fixed focus of 25 mm, and a Sony IMX342 photosensitive chip with a pixel scale of 6464×4852 and a pixel size of 3.45 µm is selected as the arrayed CMOS sensor. The focal length of the LC microlenses can be electrically adjusted within a range from ∼0.7 mm to ∼4.5 mm, and an optimized spacing between the LC microlenses and the CMOS sensors is set at 1.1 mm, which can be varied slightly according to the tuning characteristics of the focal length of the developed LC microlenses.

As shown in Fig. 1(c), the LC microlenses coupled with the chip of CMOS sensors are inserted into the imaging optical path according to the Galilean imaging mode. The initial real image points of the main lens are therefore transferred into virtual objective points of the LC microlenses in a relay imaging operation. Target points Q and P are firstly transferred into two virtual objective points Q′ and P′, and then imaged by two sets of LC microlenses to form two sequences of imaging points Q′i and P′j (i, j = 1, 2, …, N), where N denotes the maximum number of LC microlenses in a set of sub-imaging viewing-angles $\varphi_{Q_i}$ ($i$ = 1, 2, …, N), which are covered by imaging microbeams from the target point Q with the same imaging viewing-angle $\varphi_Q$ relative to the main optical axis of the main lens. For example, N = 3 means that three vertically arranged LC microlenses are covered by the imaging microbeams with slightly different sub-imaging viewing angles corresponding to the target point Q colored green. It should be noted that a smaller optical aperture of a single LC microlens means a larger imaging viewing-angle resolution, but results in a decrease of the imaging quality.
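As a brief sanity check using plain thin-lens optics (our own gloss, not the authors' derivation): in the Galilean mode, the real image of the main lens lies behind the microlens array and acts as a virtual object at a distance $|s_o|$, so each LC microlens of focal length $f_{\mathrm{LC}}$ forms a real image at

$$\frac{1}{s_i} = \frac{1}{f_{\mathrm{LC}}} + \frac{1}{|s_o|}, \qquad s_i = \frac{f_{\mathrm{LC}}\,|s_o|}{f_{\mathrm{LC}} + |s_o|} < f_{\mathrm{LC}},$$

which always lands within one focal length of the array; with the sensor fixed at ∼1.1 mm, refocusing thus amounts to electrically tuning $f_{\mathrm{LC}}$ until the desired virtual-object plane satisfies this relation.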

To quantitatively evaluate the performance of the constructed LC microlenses, two basic parameters of the developed devices, the point spread function (PSF) and the focal length, are measured with a common measurement system, as shown in Fig. 2. The main experimental set-up is demonstrated in Fig. 2(a). A collimated white beam passes through an aperture stop first, then the linear polarizer, and finally the LC microlens array cell. The transmitted light fields are then amplified by a microscope objective of ×40 magnification and 0.65 numerical aperture, and captured by a laser beam profiler (WinCamD of DataRay, Inc., Redding, CA, USA). To finely locate the shaped focal planes, the distance between the LC microlens array and the microscope objective is adjusted precisely during the experiments. According to our experimental configuration, a relatively accurate focal length equals the sum of the thickness of the glass substrate and the distance between the beam-exiting end of the LC microlens array and the incident surface of the microscope objective.

Fig. 2. The common testing configuration for measuring the basic parameters of the PSF and the focal length of the LC microlens array. (a) The experimental set-up. (b) Relationship between the focal length of the LC microlens array and the RMS value of the signal voltage applied over it.

The relationship between the focal length of the LC microlens array and the RMS value of the signal voltage applied over the LC device is demonstrated in Fig. 2(b). According to our experiments, the current LC microlens array can be driven normally in the voltage range from ∼0.9 Vrms to ∼12.0 Vrms, and the realized focal lengths vary in a range from ∼0.7 mm to ∼4.5 mm; they can be tuned easily, presenting excellent tunability at a low signal-voltage level. In general, the dependence of the focal length of an LC microlens on the applied signal voltage exhibits a non-monotonic behavior: it first decreases and then slightly increases as the applied voltage increases. The reasons are explained in detail in previous research [50].

3. Rapid imaging process

3.1 Electrically adjustable lightfield imaging

A typical mode of electrically adjustable lightfield imaging is carried out using the LC-LCP constructed by us, as shown in Fig. 3. A traditional 2D imaging configuration with the same main lens and CMOS sensors for the same scene is also inserted at the bottom left of the figure. As indicated, a model truck with an objective distance of ∼26 cm and a model bus with an objective distance of ∼10 cm are located behind and in front of the green focusing plane, respectively, which is indicated by a red square plate with a white pattern located at a distance of ∼20 cm from the window of the main lens. In general, the images of the two models already demonstrate a slightly fuzzy trend even though their positions remain within the depth of field of the common 2D imaging mode, which traditionally extends up to ∼2 m before and after the focusing plane. The raw data of the two yellow models are captured using the camera prototype by applying a signal voltage of ∼2.5 Vrms over the utilized LC microlenses. As shown, the lightfield imaging is effectively executed through a sub-array of the LC microlenses for each selected target. The bus and truck models are marked by red and green dotted square frames, respectively, within which the lightfield imaging data will be checked and analyzed carefully.

Fig. 3. A typical lightfield imaging executed using the proposed LC-based lightfield camera prototype; the raw data of the target models are rapidly captured with electrically adjusted focusing of the LC microlenses, compared with a traditional 2D imaging configuration of the same scene.

Several typical localized lightfield image sets obtained by gradually varying the signal voltage applied over the same LC microlenses are shown in Fig. 4. A common Sobel-based clarity operator is used to quantitatively evaluate the imaging quality through its value. As shown, the imaging definition and brightness of the bus model are obviously better than those of the truck model. As shown in Fig. 4(a), the clearest bus image, with a maximum clarity value of ∼47.53, is obtained at a signal voltage of ∼2.25 Vrms; the clarity then gradually decreases to a lowest value of ∼36.52 at ∼5.5 Vrms over the range from ∼1.75 Vrms to ∼5.5 Vrms. As shown in Fig. 4(b), the clearest truck model image, with a maximum clarity of ∼26.04, is obtained at ∼1.75 Vrms, and the clarity then gradually increases to a largest value of ∼29.08 at ∼4.25 Vrms, demonstrating a slightly different trend from that of the bus model.
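The paper does not spell out the exact normalization of the Sobel clarity value, so the following is a minimal sketch under the common convention of averaging the Sobel gradient magnitude over a region of interest; the function name and normalization are our own illustrative choices.

```python
# A minimal sketch of a Sobel-based clarity (sharpness) metric, assuming the
# metric is the mean gradient magnitude over the region of interest.
import cv2
import numpy as np

def sobel_clarity(gray_roi: np.ndarray) -> float:
    """Mean Sobel gradient magnitude over a grayscale region of interest."""
    gx = cv2.Sobel(gray_roi, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradients
    gy = cv2.Sobel(gray_roi, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradients
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

# Usage: score the same ROI across frames captured at different LC voltages,
# then drive the LC microlenses toward the voltage giving the highest score.
```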

Fig. 4. Typical local lightfield image sets obtained by gradually varying the signal voltage applied over the same LC microlenses. (a) Comparison of the local imaging clarity corresponding to a far bus model and (b) a near truck model. (c) Relationship between the imaging clarity of the near and far targets and the signal voltage applied.

A set of relationship curves between the imaging clarity of the near and far targets and the signal voltage applied over the LC microlenses is further given in Fig. 4(c). As shown, the clarity of the bus model first presents a nearly smooth state with an average value of ∼47.21 in a range of ∼1.75 Vrms to ∼3.5 Vrms, and then rapidly descends with gradually rising signal voltage after exceeding turning point B in the defined signal-voltage region. A typical localized image acquired at ∼2.25 Vrms is also given to display the pattern clearness. For the truck model, the imaging clarity first improves slowly as the applied signal voltage rises, and then stabilizes at a mean value of ∼28.83 after exceeding turning point A. It should be noted that the imaging clarity of the far target is approximately twice that of the near target. Considering that traditional 2D imaging presents a basically uniform clarity within a typical depth of field (∼2 m), the property above provides an opportunity to select the imaging clarity value through electrically controlled LC-based lightfield imaging, for example, acquiring a relatively high image clarity at a relatively low signal voltage, as demonstrated by the yellow bus model in Fig. 4(a). Conversely, low imaging clarity corresponds to a signal-voltage range from ∼3.4 Vrms to ∼5.5 Vrms.

3.2 Rapid imaging algorithms

As a set of sequential sub-images with almost continuously varied viewing angles according to the LC microlens configuration, the raw imaging data acquired through electrically tuning the LC microlenses present a practical visual discontinuity. The relevant post-processing algorithms must therefore be re-constructed based on a compound sub-aperture imaging strategy. Several key factors should be considered carefully, including the focusing-plane calibration mainly by electrically adjusting the microlenses, the pattern disparity recognition and featured contour extraction, and a real-time imaging rendering guided by refocusing the LC microlenses. Essentially, the data post-processing extracts the target clues of interest from the pixel-level patterns. More details about the rapid imaging algorithms are described in section 2 of Supplement 1.

4. Imaging analysis

A practical LC-LCP has been constructed using a photosensitive array of the Sony IMX342 CMOS sensor with the following main performances: a sensor scale of 6464×4852, an imaging rate of 3.9 frames per second with 3 RGB channels, and a single pixel size of 3.45 µm × 3.45 µm. An F-mount LF2528M-F main lens with a 25 mm fixed focus is used.

Following the entire calibration process proposed in section 2 of Supplement 1, the results of an initial parameter calibration are illustrated in Fig. 5. At first, a white planar target in front of the prototyped camera at a distance of ∼5 cm is introduced for deducing the key transformation relationship between the LC and the pixel coordinates; the experimental configuration is also given in the lower left corner of Fig. 5(a). According to the enlarged picture, a uniform aperture array of the LC microlenses with several defects, such as a typical white spot approximately occupying 4 sub-apertures and a dark black shadow almost covering a single microlens, can be viewed. These defects do not affect the parameter calibration. A binarization image for enhancing the aperture contours of the microlenses with a basic hexagonal cell is illustrated in Fig. 5(b). A circular converging microbeam pattern is depicted in Fig. 5(c). The center-to-center spacing D is ∼37 pixels and the origin point of the LC coordinate system is (-17.223, -21.867) at the pixel level. In addition, the rotation matrix $M = \left[ {\begin{array}{ll} {37.411}&{ - 18.806}\\ {0.116}&{32.341} \end{array}} \right]$ and the installation deviation angle θ = 0.178005° can be calculated. Finally, a validation chart is generated to demonstrate the consistency of the calibration results with the actual apertures of the microlenses, as shown in Fig. 5(d). It should be noted that the calibration process is performed only once after the prototyped camera is assembled, and the important parameters given above are saved as a YAML file.
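As a hedged illustration of how these calibration outputs are typically consumed, the sketch below maps integer lens indices to pixel centers; we assume (as the hexagonal layout and the matrix shape suggest) a mapping of the form p = M·(u, v) + o, while the exact convention is defined in Supplement 1.

```python
# A minimal sketch of the lens-index -> pixel-center mapping implied by the
# calibration output; the linear-mapping convention here is our assumption.
import numpy as np

M = np.array([[37.411, -18.806],
              [0.116,   32.341]])    # calibrated lattice basis (pixels)
o = np.array([-17.223, -21.867])     # LC-coordinate origin (pixels)

def lens_center_px(u: int, v: int) -> np.ndarray:
    """Pixel coordinates of the microlens at hexagonal index (u, v)."""
    return M @ np.array([u, v], dtype=float) + o
```

Note that both columns of M have a length of ∼37 pixels, consistent with the measured center-to-center spacing D.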

Fig. 5. Key parameter calibration. (a) A white planar target image with a typical white spot approximately occupying 4 sub-apertures and a dark black shadow almost covering a single microlens. (b) Image binarization. (c) Aperture contour recognition. (d) Drawing a validation chart to demonstrate the effectiveness of the obtained parameters.

To perform both the disparity recognition and the contour extraction, three main steps are followed. Firstly, the raw lightfield images of the target are captured. Next, a common GPU is utilized to calculate the pixel-level disparity of the selected patterns, and the results are converted into a color presentation for convenient viewing. Finally, a continuous visible disparity map in color space is synthesized through the proposed refocusing algorithm, and a target contour image based on the disparity map can then be acquired by converting the obtained color disparity into grayscale. Meanwhile, a traditional contour extraction by the Sobel operator is used as a comparison group to indicate the effectiveness of the lightfield-based contour extraction. Several indoor and outdoor measurements are then conducted according to four group configurations. The indoor targets are located at different objective distances or depths: three targets within 55 cm, multiple targets within 80 cm, and a single target within 10 cm, as follows.
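As a rough sketch of the visualization steps just described (the specific colormap is our assumption; the paper only states that the disparity is color-coded for viewing and then converted back to grayscale for contours):

```python
# A hedged sketch: color-code a raw disparity map for viewing, then collapse
# the color map to grayscale to expose target contours. The JET colormap and
# the d_max clipping convention are our own illustrative choices.
import cv2
import numpy as np

def disparity_to_contour_views(disp: np.ndarray, d_max: float = 30.0):
    disp8 = np.clip(disp / d_max * 255.0, 0, 255).astype(np.uint8)
    color = cv2.applyColorMap(disp8, cv2.COLORMAP_JET)  # color disparity map
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)      # grayscale contour view
    return color, gray
```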

4.1 Indoor disparity recognition of three targets in 55 cm

The lightfield imaging configuration for the first group of indoor measurements is shown in Fig. 6. A typical lightfield image acquired by applying a signal voltage of ∼4 Vrms over the LC microlenses is shown in Fig. 6(a). Two locally enlarged images marked by green and red dashed frames reveal more details, featuring several overlapped letters and digits, which means that the LC-based lightfield imaging has significantly extended the depth of field compared to conventional 2D imaging. Three targets have been imaged with remarkably increased clarity and precision. As shown in Fig. 6(b), the three targets, a green crane model, an orange wood cube, and a yellow bus model, are placed at objective distances or spatial depths of ∼13 cm, ∼32 cm, and ∼52 cm, respectively. The green focusing plane is located at the orange wood cube at ∼32 cm objective distance or spatial depth.

Fig. 6. Raw lightfield images and experimental set-ups for performing the first group of indoor measurements. (a) Target images acquired by applying a signal voltage of ∼4 Vrms over the LC microlenses, with two locally enlarged sub-figures also provided. (b) Experimental set-ups with the featured distance arrangements.

Considering the pixel-level disparity computation algorithm given in section 2 of Supplement 1, the key operation for obtaining the disparity value between two microlenses is disparity matching. A sum-of-absolute-differences operator, commonly used in 3D measurements, is adopted as follows

$$\mathrm{SAD} = \sum\limits_{i,j} \left| I_1(x_i, y_j) - I_2(x_i, y_j) \right|$$
where SAD denotes the sum of absolute differences between two pixel blocks, and I1 and I2 refer to the pixel blocks of the raw lightfield images compared for similarity. The similarity between the two pixel blocks thus increases as the SAD operator returns a smaller value. An example describing the disparity calculation with a pixel block of optimal 7×7 size is given in Fig. 7. Points O1 and O2 are the centers of two adjacent sub-apertures, and point P′1 is one of the imaging points of the virtual objective point P′ selected for disparity calculation, which is also the center of pixel block A. The disparity calculation direction along the red dashed line is determined by the Sobel vector (Gx, Gy), which is composed of the convolution results of the horizontal and vertical Sobel operators applied to block A. If a pixel block B of the same size, initially at distance D from the selected pixel in an adjacent microlens, moves a distance $\Delta e$ in the direction opposite to the disparity calculation orientation, the smallest SAD can be obtained. After block B arrives at position B′, centered at another imaging point P′2, $\Delta e$ is recorded as the disparity value for the selected pixel at point P′1, with a range from 0 to 30, because the distance D already covers ∼37 pixels.
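A minimal NumPy sketch of this matching step is given below, reduced to a 1-D search toward the neighboring sub-aperture for brevity; the actual algorithm (Supplement 1) searches along the direction set by the Sobel vector (Gx, Gy), and all names here are illustrative.

```python
# A hedged sketch of SAD block matching: slide a 7x7 block from the adjacent
# sub-aperture toward the reference pixel and keep the offset (disparity)
# with the smallest SAD cost.
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    return int(np.sum(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))))

def match_disparity(img, p1, D=37, half=3, d_max=30):
    """Disparity at pixel p1=(row, col) against the sub-aperture D pixels away."""
    r, c = p1
    a = img[r - half:r + half + 1, c - half:c + half + 1]  # 7x7 block A
    best_d, best_cost = 0, np.inf
    for d in range(d_max + 1):
        cb = c + D - d                                      # candidate center B'
        b = img[r - half:r + half + 1, cb - half:cb + half + 1]
        cost = sad(a, b)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```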

The typical characters of the disparity recognition and the contour extraction utilizing the proposed algorithm given in section 2 of Supplement 1 are shown in Fig. 8. The detailed target locations are given in Fig. 8(a). A featured disparity map is shaped from the raw lightfield images and indicated by a color bar from the deep red (low-value) end to the pink (high-value) end, as shown in Fig. 8(b). Here, a parameter Dp is defined for quantitatively expressing the local disparity, which is closely related to several factors, including the aperture of each microlens and their arrangement period D, the calibration coefficient k, and the rendering coefficient C. Currently, Dp is roughly determined by the mean value in a small white dashed frame and is restricted to the range (0, 30). The mean value for the crane model located at ∼13 cm is ∼8.5, roughly indicated by light green, which covers almost the entire contour of the crane model with some small yellow patches. The Dp values of the wood cube at ∼32 cm and the bus model at ∼52 cm are ∼9.4 and ∼12.3, respectively. In addition, the images rendered from the raw imaging data according to the coefficients k = 2.9 and C = 1 are given in Fig. 8(c). A grayscale presentation of the contours extracted from the quantized disparity map, which exhibits an almost identical appearance to the targets at different objective distances, is given in Fig. 8(d). As demonstrated in Fig. 8(e), the grayscale contours obtained from the rendered images using a typical traditional contour extraction method present a relatively high contour contrast compared with those shown in Fig. 8(d). Generally, the contour characters of targets located at different positions, or over the same focusing plane, can be expressed more effectively by the disparity recognition, as shown in Fig. 8(b). In addition, many fine serrated stripes or dotted lines, a type of pattern noise covering the grayscale targets processed by the traditional contour extraction method, can be seen in Fig. 8(e), because the electronic contour extraction operations are performed in a relatively complex and time-consuming course. As shown in this figure, a serrated roof of the bus model can be observed, which causes a negative impact during pattern recognition. Thus, the contour extraction based on disparity recognition with pseudo-color expression presents some obvious merits, such as less pattern noise and higher edge definition.

Fig. 7. Typical disparity matching process.

Fig. 8. Disparity recognition and contour extraction for the first group indoor measurement. (a) Target position configuration. (b) Contour extraction based on the disparity recognition in color space. (c) Rendered images with respect to raw lightfield viewings. (d) Grayscale presentation of the contour extracted according to the disparity recognition. (e) Contour extraction from the rendered images.

4.2 Indoor disparity recognition of five targets in 80 cm

Both the lightfield imaging and the measurement configuration for indoor disparity recognition of five targets located within 80 cm are shown in Fig. 9. The typical raw images captured using a signal voltage of ∼4.25 Vrms are shown in Fig. 9(a). To demonstrate more details, two enlarged local patterns in green and red dashed boxes are also given. As shown, the targets at different objective distances or spatial depths exhibit a relatively clear appearance, with better clarity in the aperture of each microlens compared with the first group of indoor measurements shown in Fig. 6, since a closer focusing plane is selected. The five targets are sequentially placed within 80 cm, as indicated in Fig. 9(b), where the focusing plane of the main lens is set at ∼15 cm, on the front surface of the truck model. Moreover, the objective distances of the letter-E model and a school bus model are ∼35 cm and ∼50 cm, respectively.

Fig. 9. Lightfield imaging and the measurement configuration for indoor disparity recognition of five targets located within 80 cm. (a) Target images acquired by applying a signal voltage of ∼4.25 Vrms over the LC microlenses, with two enlarged local patterns also given. (b) Measurement configuration, where the focusing plane of the main lens is set at ∼15 cm objective distance or spatial depth.

The typical characters of the disparity recognition and contour extraction for the second group of indoor measurements, obtained by varying the focusing plane from an objective distance of ∼32 cm to ∼15 cm, are shown in Fig. 10. The measurement configuration is depicted in Fig. 10(a). The colored disparity map is illustrated in Fig. 10(b), which demonstrates appearance features similar to those shown in Fig. 8. As indicated, the Dp values of the truck model located at ∼15 cm and the school bus model at ∼50 cm are 10.4 and 14.5, respectively. The images rendered according to the coefficients k = 2.8 and C = 1.0 are given in Fig. 10(c). The grayscale presentations of the contours extracted through disparity recognition and from the rendered images are shown in Fig. 10(d) and (e), which also demonstrate appearance and distribution characters similar to those shown in Fig. 8. Thus, the contour extraction based on disparity recognition with pseudo-color expression again presents obvious superiority in terms of depth visualization and edge definition.

Fig. 10. Typical disparity recognition and contour extraction from the second group indoor measurements by varying the focusing plane from an objective distance of 32 cm to 15 cm. (a) Experimental configuration. (b) Contour extraction based on the quantized disparity recognition. (c) Images rendered directly from raw lightfield viewings. (d) Grayscale presentation of the contours extracted from the disparity recognition map. (e) Contour extraction from the rendered images.

4.3 Indoor disparity recognition of a single target in 10 cm

Both the raw lightfield images and the measurement configuration under the condition of adjusting the focusing plane to an objective distance of ∼6 cm are shown in Fig. 11. To demonstrate the capability of the proposed algorithm in extracting depth information from a relatively complex appearance, the measurement is conducted using a single bulldozer model positioned at ∼6 cm, as shown in Fig. 11(a). The raw imaging data, shown in Fig. 11(b), are obtained using a signal voltage of ∼4.0 Vrms, with two enlarged local patterns marked by green and red dashed boxes attached.

Fig. 11. Both the raw lightfield images and the measurement configuration after adjusting the focusing plane to the objective distance of ∼6 cm. (a) Typical experimental set-ups. (b) Raw lightfield images acquired by applying a signal voltage of ∼4.0 Vrms over the LC microlenses.

The typical characters of the disparity recognition and contour extraction under the condition of adjusting the focusing plane to an objective distance of ∼6 cm are shown in Fig. 12. The position configuration is depicted in Fig. 12(a). The colored disparity map and the images rendered according to the coefficients k = 2.8 and C = 1.0 are presented in Fig. 12(b) and (c), respectively. The grayscale presentations of the contours extracted by the disparity recognition and from the rendered images are shown in Fig. 12(d) and (e). As shown, they all exhibit appearance features and variation trends similar to those in the figures above. The Dp value of the bulldozer model ranges from ∼10.3 at the nearer end to ∼14.4 at the farther end.

Fig. 12. Disparity recognition and contour extraction under the condition of adjusting the focusing plane at the objective distance of ∼6 cm. (a) Position configuration. (b) Contour extraction based on the quantized disparity recognition. (c) Rendered images from the raw lightfield data. (d) and (e) Grayscale presentation of the contours extracted by disparity recognition and the rendered images.

4.4 Outdoor disparity recognition of two targets in 4.6 m

The lightfield imaging using the developed LC-LCP for achieving disparity recognition and contour extraction in an outdoor environment is illustrated in Fig. 13. The typical lightfield images, with two enlarged local patterns attached to fully demonstrate details, are shown in Fig. 13(a). The experimental set-up configuration is shown in Fig. 13(b), where two targets are placed at ∼2.3 m and ∼4.6 m, respectively. The focusing plane is located at ∼2.3 m.

Fig. 13. Lightfield imaging using the developed LC-based lightfield camera prototype in outdoor environment. (a) The typical lightfield images attached by two enlarged local patterns for fully demonstrating details. (b) The experimental set-up configuration.

The typical characters of the disparity recognition and contour extraction according to the outdoor measurement configuration are illustrated in Fig. 14. A layout of the imaging targets is shown in Fig. 14(a). The colored disparity map and the images rendered according to the coefficients k = 3.0 and C = 1.0 are illustrated in Fig. 14(b) and (c), respectively. As shown, the target positioned at ∼4.6 m exhibits an average Dp value of 9.8, which is almost identical across the shaped disparity map, while the target situated at ∼2.3 m demonstrates a slightly varied Dp value with a local average of 9.6, marked by colors from light green to dark green. The grayscale presentation of the contours extracted from the disparity recognition is clearer than that from the rendered images, according to a comparison between Fig. 14(d) and (e), which differs obviously from the indoor measurements mentioned above. The reason is that the depth resolution declines as the objective distance increases. It should be noted that the wall corner demonstrates a slightly larger average Dp value of 13.3.
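This decline follows from the generic triangulation relation (our gloss; the prototype's effective baseline is set by the microlens pitch): with baseline $b$, focal length $f$, and disparity $d$, the depth is $Z = bf/d$, so a fixed disparity-quantization error $\delta d$ maps to a depth error

$$\delta Z \approx \left| \frac{\partial Z}{\partial d} \right| \delta d = \frac{bf}{d^{2}}\,\delta d = \frac{Z^{2}}{bf}\,\delta d,$$

which grows quadratically with distance and explains why disparity-based contours degrade outdoors relative to the indoor cases.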

Fig. 14. Typical characters for the disparity recognition and the contour extraction according to the outdoor measurement configuration. (a) Target position allocation. (b) Contour extraction from quantized disparity recognition. (c) Rendered images from the raw lightfield viewings. (d) and (e) Grayscale presentation of the contour extraction by the disparity recognition and the rendered images, respectively.

4.5 LC-guided refocusing-rendering

The LC-guided refocusing-rendering operations corresponding to both the indoor and outdoor measurements are further conducted to evaluate the efficiency of the image reconstruction algorithms proposed in Supplement 1. Generally, traditional lightfield imaging is performed using common refractive or diffractive microlenses according to a selected focusing plane of the main lens, which implies a relatively complicated calculation for electronically refocusing and rendering high-definition electronic target images. In the lightfield imaging strategy proposed by us, the refocusing plane can be easily re-selected or scanned simply by loading and adjusting the signal voltage applied over the LC microlenses, resulting in a very rapid imaging process. Therefore, a continuous or hopping variation of the focusing plane with respect to dynamic or static targets can be realized only by shifting the applied signal voltage, which also means that a continuous focusing scan over targets, as well as electrical switching between the proposed LC-based lightfield imaging and traditional 2D imaging, can be achieved. The indoor imaging measurements based on both the proposed LC-guided refocusing-rendering and the conventional digital refocusing-stitching are illustrated in Fig. 15. The measurement set-up arrangement is shown in Fig. 15(a), where three models (school bus, truck, bulldozer) located before the developed LC-LCP, loaded with a signal voltage of ∼4.0 Vrms, are at ∼11 cm, ∼32 cm, and ∼55 cm, respectively. The images rendered from the raw lightfield images by the two imaging algorithms above are shown in Fig. 15(b), labeled gray for the raw lightfield images, orange for the conventional digital refocusing-stitching images, and blue for the LC-guided refocusing-rendering images.

Fig. 15. Refocusing and rendering results from the indoor measurements. (a) Target arrangement diagram. (b) Imaging comparison between the proposed LC-guided refocusing-rendering and the conventional digital refocusing-stitching.

As demonstrated, the conventional digital refocusing-stitching method is strictly restricted to a basic R×R pixel block determined by each microlens to capture a sub-image of the target; these sub-images are then directly stitched together to form the final rendered image. In the orange-labeled row, the rendering results of the three targets at different objective distances are obtained through the conventional stitching method with R = 10, 13, and 15, respectively. It can be observed that the school bus model exhibits a noticeably imperfect reconstruction with several slight fringes and ghosts. As R increases, the rendering of the truck and bulldozer models is apparently improved. It is evident that the electronic targets rendered by the proposed LC-guided algorithm are remarkably better, with the fringes and ghosts almost removed. Here the coefficient k is 2.1 for the farthest target, 2.7 for the intermediate target, and 3.5 for the nearest target, and the coefficient C = 1.
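For reference, a minimal sketch of this conventional stitching baseline is shown below; the square-grid iteration is a simplification of the actual hexagonal layout, and the layout of the `centers` array is our own convention.

```python
# A hedged sketch of R x R patch stitching: tile the central patch behind
# each microlens into one output image. centers has shape (nv, nu, 2) and
# holds (row, col) pixel centers of the microlenses.
import numpy as np

def stitch_render(raw: np.ndarray, centers: np.ndarray, R: int) -> np.ndarray:
    nv, nu = centers.shape[:2]
    out = np.zeros((nv * R, nu * R) + raw.shape[2:], dtype=raw.dtype)
    h = R // 2
    for v in range(nv):
        for u in range(nu):
            cy, cx = np.round(centers[v, u]).astype(int)
            out[v * R:(v + 1) * R, u * R:(u + 1) * R] = \
                raw[cy - h:cy - h + R, cx - h:cx - h + R]
    return out
```

The fringes and ghosts noted above arise because a single fixed R cannot match the true magnification of every depth at once, which is exactly what the LC-guided per-target coefficients avoid.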

The outdoor imaging measurements for performing the refocusing-rendering imaging, the disparity recognition, and the pattern contour extraction based on the LC-LCP are illustrated in Fig. 16. The imaging measurement configuration is shown in Fig. 16(a), where three targets (trees, a dartboard, and an annular board) located before the camera prototype, loaded with a signal voltage of ∼4.5 Vrms, are at ∼3.0 m, ∼5.2 m, and more than 10 m, respectively. The targets and the camera prototype are sequentially arranged, as shown in Fig. 16(b). The lightfield images are further rendered by both the conventional digital refocusing-stitching algorithm and the proposed LC-guided refocusing-rendering algorithm, as indicated in Fig. 16(c) and (d), respectively. The images rendered by the proposed algorithm are obviously better than those from the conventional method, because the slight fringes and ghosts existing in the images acquired by the conventional method are almost eliminated, as demonstrated in the indoor measurements. During the image processing, the coefficients k and C are 3.0 and 1, respectively, according to the outdoor illumination conditions.

Fig. 16. Outdoor measurements for performing the refocusing-rendering imaging, the disparity recognition, and the pattern contour extraction. (a) Outdoor imaging measurement configuration. (b) Sequential arrangement of the targets and the LC-based lightfield camera prototype. (c) and (d) Images rendered by the conventional digital refocusing-stitching algorithm and the proposed LC-guided refocusing-rendering algorithm, respectively. (e) Contour extraction from the quantized disparity recognition. (f) and (g) Grayscale presentations of the contours extracted from the disparity recognition map and the rendered images.

The contour extraction from the LC-guided refocusing-rendering images based on the quantized disparity map is illustrated in Fig. 16(e). It can be observed that the disparity difference between the dartboard and the annular board is much smaller than their depth separation in objective space would suggest. The calculated average disparity difference is therefore quite minor, yet basically consistent with those obtained in the indoor measurements. As the objective distance increases, a given depth change produces a less noticeable disparity change, because the disparity resolution decreases with increasing objective distance. Several typical grayscale presentations of the contours extracted from the disparity recognition maps and the rendered images are given in Fig. 16(f) and (g), respectively. As shown, the featured contours from the disparity maps reveal an impressive result, which further demonstrates the effectiveness of the proposed algorithm.

Since the proposed LC-guided refocusing-rendering algorithm operates at the pixel level, the calculation without GPU support would be a time-consuming and inefficient course. To achieve a cost-effective and real-time process, GPU parallel acceleration is implemented using OpenCL. The major advantage of deploying parallel computing on the OpenCL platform is its high compatibility, which enables the system to function seamlessly on both Intel's integrated graphics cards and independent graphics cards from vendors such as NVIDIA, making it more accessible and affordable. More information on the OpenCL platform is introduced in detail in section 3 of Supplement 1. The proposed algorithm is evaluated on a computation environment of an RTX 2060 Super GPU with an AMD Ryzen R5 3600X CPU and 16 GB DDR4 RAM. The test results are presented in Table 1. When the output rendering image has the same size as the input, it takes only 34.48 ms to fully render a frame of 6464×4852, i.e., approximately 1 megapixel per millisecond. Compared with the ∼20 ms per megapixel reported for rendering by S. Pratapa, Raytrix, et al. [52,53], the GPU-accelerated rendering demonstrates an obvious superiority for real-time rendering. As the rendering scale decreases from 6464×4852 to 202×151, the rendering rate increases by about two orders of magnitude, from 29 to 3754 frames per second, as shown in Table 1, and the time per frame is remarkably reduced from 34.48 ms to 0.266 ms, also crossing two orders of magnitude. These performances far exceed the output frame rate of the fastest cameras currently available, which means that a picture can be rendered in an almost real-time imaging process.
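As an illustration of this deployment style rather than the authors' actual kernel, the pyopencl sketch below dispatches a per-pixel kernel over a full 6464×4852 frame; the kernel body is a trivial copy placeholder standing in for the refocusing-rendering arithmetic of Supplement 1.

```python
# A hedged pyopencl sketch: one work-item per output pixel, portable across
# Intel integrated and NVIDIA discrete GPUs. The kernel is a placeholder.
import numpy as np
import pyopencl as cl

src = """
__kernel void render(__global const uchar *in_img,
                     __global uchar *out_img,
                     const int width)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    // Placeholder: the real kernel gathers refocused samples per pixel.
    out_img[y * width + x] = in_img[y * width + x];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, src).build()

h, w = 4852, 6464                     # full sensor frame (grayscale here)
img = np.random.randint(0, 256, (h, w), dtype=np.uint8)
mf = cl.mem_flags
d_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=img)
d_out = cl.Buffer(ctx, mf.WRITE_ONLY, img.nbytes)
prg.render(queue, (w, h), None, d_in, d_out, np.int32(w))
out = np.empty_like(img)
cl.enqueue_copy(queue, out, d_out).wait()
```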

Table 1. Refocusing-rendering features under different output resolutions

5. Conclusion

In this paper, a lightfield camera prototype is constructed by directly coupling an arrayed LC microlens in a basic hexagonal arrangement with an arrayed photosensitive sensor, performing a kind of LC-guided refocusing-rendering lightfield imaging supplemented by disparity-map computation and featured-contour extraction of the targets of interest. Compared with traditional 2D imaging, which generally presents a basically uniform clarity within a typical depth of field (∼2 m), the LC-based lightfield imaging strategy offers the ability to select a suitable imaging clarity value based on a suitable focusing plane, which can be easily selected or scanned simply by loading and adjusting the signal voltage applied over the LC microlenses, resulting in a very rapid imaging process. Two coefficients, the calibration coefficient k and the rendering coefficient C, are defined for exactly characterizing the imaging process. A parameter Dp is also introduced for quantitatively expressing the local disparity of the electronic patterns. A parallel computing architecture based on the GPU through the OpenCL platform is adopted to improve the real-time performance of the proposed algorithms. It can be expected that the proposed method will provide key support for lightfield imaging applications such as objective distance or target depth estimation, 3D modelling and target reconstruction, and rapid, intelligent autofocusing. The research lays a solid foundation for continuously developing lightfield imaging technology by providing an attractive LC-based scheme.

Funding

National Natural Science Foundation of China (No. 61176052).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data and code underlying the results presented in this paper are available on GitHub [54].

Supplemental document

See Supplement 1 for supporting content.

References

1. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM Press, 1996), Vol. 1, pp. 31–42.

2. M. Levoy, “Light fields and computational imaging,” Computer 39(8), 46–55 (2006). [CrossRef]  

3. T. Georgiev, Z. Yu, A. Lumsdaine, et al., “Lytro camera technology: theory, algorithms, performance analysis,” Proc. SPIE 8667, 86671J (2013). [CrossRef]  

4. O. Johannsen, C. Heinze, B. Goldluecke, et al., “On the calibration of focused plenoptic cameras,” in Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications: Dagstuhl 2012 Seminar on Time-of-Flight Imaging and GCPR 2013 Workshop on Imaging New Modalities, M. Grzegorzek, C. Theobalt, R. Koch, A. Kolb, eds. (Springer Berlin Heidelberg, 2013), Vol. 8200, pp. 302–317.

5. T. Georgiev and A. Lumsdaine, “Depth of field in plenoptic cameras,” Eurographics Short Pap. 11814, 1 (2009). [CrossRef]  

6. B. Lee, “3D depth capture and imaging for microscopy,” in Digital Holography and Three-Dimensional Imaging (Optica Publishing Group, 2015), p. DW1A.1.

7. M. Yu. Loktev, V. N. Belopukhov, F. L. Vladimirov, et al., “Wave front control systems based on modal liquid crystal lenses,” Rev. Sci. Instrum. 71(9), 3290–3297 (2000). [CrossRef]  

8. S. P. Kotova, M. Yu. Kvashnin, M. A. Rakhmatulin, et al., “Modal liquid crystal wavefront corrector,” Opt. Express 10(22), 1258 (2002). [CrossRef]  

9. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. S. Landy and J. A. Movshon, eds. (MIT Press, 1991).

10. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

11. R. Ng, M. Levoy, M. Brédif, et al., Light Field Photography with a Hand-Held Plenoptic Camera, Tech. Rep. CSTR 2005-02 (Stanford University, 2005).

12. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010). [CrossRef]  

13. F. Reinitzer, “Contributions to the knowledge of cholesterol,” Liq. Cryst. 5(1), 7–18 (1989). [CrossRef]  

14. G. Si, Y. Zhao, E. S. P. Leong, et al., “Liquid-crystal-enabled active plasmonics: a review,” Materials 7(2), 1296–1317 (2014). [CrossRef]  

15. L. B. Wolff, T. A. Mancini, P. Pouliquen, et al., “Liquid crystal polarization camera,” IEEE Trans. Robot. Autom. 13(2), 195–203 (1997). [CrossRef]  

16. Y. Shimoda, M. Ozaki, and K. Yoshino, “Electric field tuning of a stop band in a reflection spectrum of synthetic opal infiltrated with nematic liquid crystal,” Appl. Phys. Lett. 79(22), 3627–3629 (2001). [CrossRef]  

17. Q. Tong, M. Chen, Z. Xin, et al., “Depth of field extension and objective space depth measurement based on wavefront imaging,” Opt. Express 26(14), 18368–18385 (2018). [CrossRef]  

18. M. Chen, W. Dai, Q. Shao, et al., “Optical properties of electrically controlled arc-electrode liquid-crystal microlens array for wavefront measurement and adjustment,” Appl. Opt. 58(24), 6611 (2019). [CrossRef]  

19. S. Sato, “Liquid-crystal lens-cells with variable focal length,” Jpn. J. Appl. Phys. 18(9), 1679–1684 (1979). [CrossRef]  

20. T. Nose and S. Sato, “A liquid crystal microlens obtained with a non-uniform electric field,” Liq. Cryst. 5(5), 1425–1433 (1989). [CrossRef]  

21. Y.-H. Lin, W.-C. Cheng, V. Reshetnyak, et al., “Electrically tunable gradient-index lenses via liquid crystals: beyond the power law,” Opt. Express 31(23), 37843 (2023). [CrossRef]  

22. Y.-H. Lin, M.-S. Chen, W.-C. Lin, et al., “A polarization-independent liquid crystal phase modulation using polymer-network liquid crystals in a 90° twisted cell,” J. Appl. Phys. 112(2), 024505 (2012). [CrossRef]  

23. H.-C. Lin, M.-S. Chen, and Y.-H. Lin, “A Review of Electrically Tunable Focusing Liquid Crystal Lenses,” Trans. Electr. Electron. Mater. 12(6), 234–240 (2011). [CrossRef]  

24. Y.-H. Lin, H. Ren, Y.-H. Fan, et al., “Polarization-independent and fast-response phase modulation using a normal-mode polymer-stabilized cholesteric texture,” J. Appl. Phys. 98(4), 043112 (2005). [CrossRef]  

25. Y.-H. Lin, H. Ren, Y.-H. Wu, et al., “Polarization-independent liquid crystal phase modulator using a thin polymer-separated double-layered structure,” Opt. Express 13(22), 8746 (2005). [CrossRef]  

26. H. Ren, Y.-H. Lin, Y.-H. Fan, et al., “Polarization-independent phase modulation using a polymer-dispersed liquid crystal,” Appl. Phys. Lett. 86(14), 141110 (2005). [CrossRef]  

27. Y.-H. Lin, H.-S. Chen, H.-C. Lin, et al., “Polarizer-free and fast response microlens arrays using polymer-stabilized blue phase liquid crystals,” Appl. Phys. Lett. 96(11), 113505 (2010). [CrossRef]  

28. H. Ren, Y.-H. Fan, Y.-H. Lin, et al., “Tunable-focus microlens arrays using nanosized polymer-dispersed liquid crystal droplets,” Opt. Commun. 247(1-3), 101–106 (2005). [CrossRef]  

29. H.-S. Chen, Y.-J. Wang, C.-M. Chang, et al., “A Polarizer-Free Liquid Crystal Lens Exploiting an Embedded-Multilayered Structure,” IEEE Photonics Technol. Lett. 27(8), 899–902 (2015). [CrossRef]  

30. J. H. Yu, H.-S. Chen, P.-J. Chen, et al., “Electrically tunable microlens arrays based on polarization-independent optical phase of nano liquid crystal droplets dispersed in polymer matrix,” Opt. Express 23(13), 17337 (2015). [CrossRef]  

31. Y.-H. Lin, Y.-J. Wang, G.-L. Hu, et al., “Electrically tunable polarization independent liquid crystal lenses based on orthogonally anisotropic orientations on adjacent micro-domains,” Opt. Express 29(18), 29215 (2021). [CrossRef]  

32. Y.-H. Lin and H.-S. Chen, “Electrically tunable-focusing and polarizer-free liquid crystal lenses for ophthalmic applications,” Opt. Express 21(8), 9428 (2013). [CrossRef]  

33. Y.-H. Wu, C.-C. Chang, Y.-S. Tsou, et al., “Enhancing virtual reality with high-resolution light field liquid crystal display technology,” J. Opt. Microsyst. 3(01), 041202 (2023). [CrossRef]  

34. Y.-J. Wang, Y.-H. Lin, V. Reshetnyak, et al., “Origin of oblique optical axis of electrically tunable focusing lenses arising from initial anisotropic molecular tilts under a symmetric electric field. I,” AIP Adv. 10(9), 095024 (2020). [CrossRef]  

35. Y.-J. Wang, Y.-H. Lin, O. Cakmakci, et al., “Phase modulators with tunability in wavefronts and optical axes originating from anisotropic molecular tilts under symmetric electric field II: experiments,” Opt. Express 28(6), 8985 (2020). [CrossRef]  

36. M. Chen, M. Ye, Z. Wang, et al., “Electrically addressed focal stack plenoptic camera based on a liquid-crystal microlens array for all-in-focus imaging,” Opt. Express 30(19), 34938–34955 (2022). [CrossRef]  

37. Z. Xin, D. Wei, X. Xie, et al., “Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation,” Opt. Express 26(4), 4035 (2018). [CrossRef]  

38. Y. Lei, Q. Tong, X. Zhang, et al., “Plenoptic camera based on a liquid crystal microlens array,” Proc. SPIE 9579, 95790T (2015). [CrossRef]  

39. Y. Lei, Q. Tong, Z. Xin, et al., “Three dimensional measurement with an electrically tunable focused plenoptic camera,” Rev. Sci. Instrum. 88(3), 033111 (2017). [CrossRef]  

40. H. Ren, J. R. Wu, Y.-H. Fan, et al., “Hermaphroditic liquid-crystal microlens,” Opt. Lett. 30(4), 1–3 (2005). [CrossRef]  

41. H.-C. Lin and Y.-H. Lin, “An electrically tunable focusing liquid crystal lens with a built-in planar polymeric lens,” Appl. Phys. Lett. 98(8), 083503 (2011). [CrossRef]  

42. H.-C. Lin and Y.-H. Lin, “An electrically tunable-focusing liquid crystal lens with a low voltage and simple electrodes,” Opt. Express 20(3), 2045 (2012). [CrossRef]  

43. Y.-H. Lin, Y.-J. Wang, and V. Reshetnyak, “Liquid crystal lenses with tunable focal length,” Liq. Cryst. Rev. 5(2), 111–143 (2017). [CrossRef]  

44. J. F. Algorri, V. Urruchi, N. Bennis, et al., “Integral imaging capture system with tunable field of view based on liquid crystal microlenses,” IEEE Photonics Technol. Lett. 28(17), 1854–1857 (2016). [CrossRef]  

45. J. F. Algorri, N. Bennis, V. Urruchi, et al., “Tunable liquid crystal multifocal microlens array,” Sci. Rep. 7(1), 17318 (2017). [CrossRef]  

46. Z. Xin, B. Deng, D. Wei, et al., “Macroscale single crystal graphene templated directional alignment of liquid-crystal microlens array for light field imaging,” Appl. Phys. Lett. 115(7), 071903 (2019). [CrossRef]  

47. M. Chen, W. He, D. Wei, et al., “Depth-of-field-extended plenoptic camera based on tunable multi-focus liquid-crystal microlens array,” Sensors 20(15), 4142 (2020). [CrossRef]  

48. H. Kwon, Y. Kizu, Y. Kizaki, et al., “A Gradient Index Liquid Crystal Microlens Array for Light-Field Camera Applications,” IEEE Photonics Technol. Lett. 27(8), 836–839 (2015). [CrossRef]  

49. Y. Lei, Q. Tong, X. Zhang, et al., “An electrically tunable plenoptic camera using a liquid crystal microlens array,” Rev. Sci. Instrum. 86(5), 053101 (2015). [CrossRef]  

50. Z. Xin, Q. Tong, Y. Lei, et al., “An electrically tunable polarization and polarization-independent liquid-crystal microlens array for imaging applications,” J. Opt. 19(9), 095602 (2017). [CrossRef]  

51. Y.-H. Lin and Y.-S. Tsou, “A polarization independent liquid crystal phase modulation adopting surface pinning effect of polymer dispersed liquid crystals,” J. Appl. Phys. 110(11), 114516 (2011). [CrossRef]  

52. S. Pratapa and D. Manocha, “HMLFC: Hierarchical motion-compensated light field compression for interactive rendering,” Comput. Graph. Forum 38(8), 1–12 (2019).

53. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” in Human Vision and Electronic Imaging XVII (SPIE, 2012), Vol. 8291, pp. 45–59.

54. “Lightfield_imgs_processing,” GitHub (2023), https://github.com/luckyemao/Lightfield_imgs_processing

Supplementary Material (1)

Supplement 1: Section 1, optical efficiency enhancement experiment; Section 2, rapid imaging algorithms; Section 3, introduction to the GPU parallel-computing implementation.

Data availability

Data and code underlying the results presented in this paper are available on GitHub in Ref. [54].



Figures (16)

Fig. 1. Typical features of the proposed LC-based lightfield camera prototype. (a) Main micro-structure and key parameter configuration of the patterned aluminum electrode shaping the arrayed LC microlenses. (b) LC-based lightfield camera prototype constructed in the typical Galilean imaging mode. (c) Imaging optical path based on the main lens and the arrayed LC microlenses.
Fig. 2. Common testing configuration for measuring the PSF and the focal length of the LC microlens array. (a) Experimental set-up. (b) Focal length of the LC microlens array versus the RMS value of the signal voltage applied over it.
Fig. 3. Typical lightfield imaging executed with the proposed LC-based lightfield camera prototype: the raw data of the target models are rapidly captured through electrically adjusted focusing of the LC microlenses, alongside a traditional 2D imaging configuration of the same scene.
Fig. 4. Typical local lightfield image sets obtained by gradually varying the signal voltage applied over the same LC microlenses. (a) Comparison of the local imaging clarity for a far bus model and (b) a near truck model. (c) Relationship between the imaging clarity of the near and far targets and the applied signal voltage.
Fig. 5. Key parameter calibration. (a) A white planar target image with a typical white spot occupying approximately 4 sub-apertures and a dark shadow almost covering a single microlens. (b) Image binarization. (c) Aperture contour recognition. (d) Validation chart demonstrating the effectiveness of the obtained parameters.
Fig. 6. Raw lightfield images and experimental set-up for the first group of indoor measurements. (a) Target images acquired by applying a signal voltage of ∼4 Vrms over the LC microlenses, with two locally enlarged sub-figures. (b) Experimental set-up with several featured distance arrangements.
Fig. 7. Typical disparity matching process.
Fig. 8. Disparity recognition and contour extraction for the first group of indoor measurements. (a) Target position configuration. (b) Contour extraction based on the disparity recognition in color space. (c) Rendered images with respect to the raw lightfield views. (d) Grayscale presentation of the contours extracted according to the disparity recognition. (e) Contour extraction from the rendered images.
Fig. 9. Lightfield imaging and measurement configuration for indoor disparity recognition of five targets located within 80 cm. (a) Target images acquired by applying a signal voltage of ∼4.25 Vrms over the LC microlenses, with two enlarged local patterns. (b) Measurement configuration, where the focusing plane of the main lens is set at an objective distance (spatial depth) of ∼15 cm.
Fig. 10. Typical disparity recognition and contour extraction from the second group of indoor measurements, obtained by varying the focusing plane from an objective distance of 32 cm to 15 cm. (a) Experimental configuration. (b) Contour extraction based on the quantized disparity recognition. (c) Images rendered directly from the raw lightfield views. (d) Grayscale presentation of the contours extracted from the disparity recognition map. (e) Contour extraction from the rendered images.
Fig. 11. Raw lightfield images and measurement configuration after adjusting the focusing plane to an objective distance of ∼6 cm. (a) Typical experimental set-up. (b) Raw lightfield images acquired by applying a signal voltage of ∼4.0 Vrms over the LC microlenses.
Fig. 12. Disparity recognition and contour extraction with the focusing plane adjusted to an objective distance of ∼6 cm. (a) Position configuration. (b) Contour extraction based on the quantized disparity recognition. (c) Rendered images from the raw lightfield data. (d) and (e) Grayscale presentation of the contours extracted by the disparity recognition and from the rendered images, respectively.
Fig. 13. Lightfield imaging using the developed LC-based lightfield camera prototype in an outdoor environment. (a) Typical lightfield images with two enlarged local patterns showing details. (b) Experimental set-up configuration.
Fig. 14. Typical characteristics of the disparity recognition and contour extraction in the outdoor measurement configuration. (a) Target position allocation. (b) Contour extraction from the quantized disparity recognition. (c) Rendered images from the raw lightfield views. (d) and (e) Grayscale presentation of the contours extracted by the disparity recognition and from the rendered images, respectively.
Fig. 15. Refocusing and rendering results from the indoor measurements. (a) Target arrangement diagram. (b) Imaging comparison between the proposed LC-guided refocusing-rendering and the conventional digital refocusing-stitching.
Fig. 16. Outdoor measurements for refocusing-rendering imaging, disparity recognition, and pattern contour extraction. (a) Outdoor imaging measurement configuration. (b) Sequential arrangement of the targets and the LC-based lightfield camera prototype. (c) and (d) Images rendered by the conventional digital refocusing-stitching algorithm and the proposed LC-guided refocusing-rendering algorithm, respectively. (e) Contour extraction from the quantized disparity recognition. (f) and (g) Grayscale presentations of the contours extracted from the disparity recognition map and from the rendered images.
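
As a reading aid for the calibration steps summarized in Fig. 5, the sketch below illustrates one conventional way to realize the binarization and aperture-contour recognition stages with OpenCV. It is a minimal illustration under stated assumptions: the file name, Otsu thresholding, and circle fitting are placeholders, not the authors' released implementation (see Ref. [54] for the actual code).

```python
# Minimal sketch of the binarization and sub-aperture contour recognition
# stages illustrated in Fig. 5(b)-(c). File name, thresholding method, and
# circle fitting are assumptions for illustration, not the authors' code.
import cv2

raw = cv2.imread("white_planar_target.png", cv2.IMREAD_GRAYSCALE)  # Fig. 5(a)

# Fig. 5(b): binarize the raw lightfield image; Otsu selects the threshold.
_, binary = cv2.threshold(raw, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Fig. 5(c): recognize the contour of each sub-aperture behind a microlens.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Fit a minimal enclosing circle to each sub-aperture; the resulting centers
# and radii are the kind of geometric data such a calibration relies on.
apertures = [cv2.minEnclosingCircle(c) for c in contours]  # ((cx, cy), r)
print(f"{len(apertures)} sub-aperture contours detected")
```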

Tables (1)

Table 1. Refocusing-rendering features under different output resolutions

Equations (1)


$$\mathrm{SAD} = \sum_{i,j} \left| I_1(x_i, y_j) - I_2(x_i, y_j) \right|$$
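
Equation (1) is the sum-of-absolute-differences (SAD) matching cost evaluated between two sub-aperture views during the disparity matching of Fig. 7. Below is a minimal NumPy sketch of SAD block matching under stated assumptions: the function name, window size, and search range are illustrative only and do not reproduce the paper's GPU implementation (see Supplement 1, Section 3).

```python
# Minimal sketch of SAD block matching between two sub-aperture views I1, I2
# (2D grayscale arrays). Window size and search range are illustrative, not
# the settings used in the paper; the caller must keep the matching window
# inside both images.
import numpy as np

def sad_disparity(I1, I2, x, y, win=4, max_disp=16):
    """Return the integer disparity at (x, y) minimizing the SAD of Eq. (1)."""
    ref = I1[y - win:y + win + 1, x - win:x + win + 1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(min(max_disp, x - win) + 1):   # keep candidate window in-bounds
        cand = I2[y - win:y + win + 1, x - d - win:x - d + win + 1].astype(np.int32)
        cost = int(np.abs(ref - cand).sum())      # Eq. (1), summed over the window
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Because the cost can be evaluated independently for every pixel, the search is naturally data-parallel, which is what makes it map well onto the GPU parallel-computing implementation introduced in Supplement 1.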