
Depth of field extension and objective space depth measurement based on wavefront imaging

Open Access

Abstract

When all parts of the wavefront imaging system are kept static after wavefront measurement, the target’s images become blurry, because the depth of field (DOF) of the system limits the imaging quality. In this paper, a method for extending the DOF of the wavefront imaging system through an integrated architecture of an electrically driven liquid-crystal microlens array (LCMLA) and a common photosensitive array is presented. The DOF can be extended remarkably simply by stitching together several sub-images formed by the LCMLA. The problem that the wavefronts and imaging results are insensitive to the objective depth is also solved. Optimal driving voltage signals are identified according to the Sobel mean gradient, so that the depth of the objective space can be calibrated efficiently and then measured quantitatively. The approach indicates a viable way to effectively extend the DOF of imaging micro-systems and to measure the geometrical depth of targets at the same time.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Generally, conventional imaging systems consist of a main imaging lens and a key sensor array. The imaging lens compresses the incident beams, and the sensor array then records the 2D light intensity distribution constructed by the imaging lens. The depth of field (DOF), a basic parameter, is defined as the distance between the nearest and farthest objects in a scene that appear acceptably sharp in the field of view (FOV). Usually, only objects within the DOF can be captured clearly by an imaging system with a fixed-focus lens, and objects outside the DOF are blurry. Traditionally, the optical aperture of an imaging system has been reduced to form a relatively large DOF. However, the DOF extension achieved in this way is still limited, and the reduction in aperture also degrades the imaging quality. Recently, Wang et al. efficiently extended the DOF of an endoscope using a large-aperture focal-length-tunable liquid-crystal (LC) lens [1]. However, LC lenses usually present a relatively small aperture, typically 1-2 mm [2]. If an LC lens is made too large, its beam convergence or divergence efficiency decreases greatly, which makes it unsuitable for imaging systems with relatively large optical apertures. Currently, plenoptic cameras have been developed to sample a 4D light-field in a single photographic exposure and then refocus onto targets at different depths in the FOV, which means that the DOF can be extended effectively [3–6]. Hahne et al. investigated the DOF variation of plenoptic cameras [7,8]. However, without any refocusing process, plenoptic cameras with conventional convex or concave microlenses have only a fairly narrow and fixed DOF, and the refocusing can only be realized by complicated data processing.

Wavefront measurement is a very important operation in modern optics technologies. Adaptive optics (AO) based on wavefront measurement and correction has long been used in astronomy to compensate or adjust wavefront distortions introduced by atmospheric turbulence, and thereby achieve much higher spatial and spectral resolution [9–11]. Nowadays, AO has also demonstrated a very promising prospect in biological imaging based on the same principle employed in astronomical observation: by reducing the aberrations caused by relatively thick, strongly scattering specimens, especially when imaging at depth, imaging capabilities are greatly enhanced [12–17]. Generally, the response time of conventional nematic liquid crystals is several seconds or hundreds of milliseconds [18], which is sufficient for astronomical imaging but not for some relatively rapid processes. Some materials respond faster, such as polymer-stabilized liquid crystal for forming microlens arrays with a typical millisecond-scale response time [19], polymer-stabilized blue-phase liquid crystal with a typical sub-millisecond response time [20], and ferroelectric materials with an ultra-fast response on the nanosecond scale [21,22]. We previously demonstrated an approach to obtain both the wavefronts and the corresponding intensity images of objects from the same incident microbeams processed by the main objective [23]. A hybrid photosensitive array, constructed by integrating an electrically driven liquid-crystal microlens array (LCMLA) and a CMOS sensor array, realizes this function by switching the voltage signal loaded over the LCMLA on and off. It should be noted that we can use only a single optical path to achieve these functions, whereas the typical architectures currently proposed in astronomy and biology are dual-path [13–16,24]. What is more, the wavefront imaging system can be combined with the prototype proposed by us for continuously conducting light-field measurement [25], polarization imaging [26], and the DOF extension presented in this article, so as to solve the imaging problem of strong scattering at great depth in biological samples through an AO microscope [17,27,28].

When faced with complex circumstances and targets, extending the DOF of wavefront imaging detection systems becomes very necessary. In our system, distinct images of objects can only be acquired by moving the imaging lens to an appropriate location, owing to the DOF limitation of the imaging system. If the distance between the imaging lens and the photosensitive array always remains unchanged, the wavefronts and blurred images of objects are obtained with the current wavefront imaging architecture. Figure 1 shows typical images formed by turning the LCMLA off and on, and the corresponding wavefronts, for a model dozer located 750 mm and 850 mm away from the wavefront imaging system in the objective space, with the imaging lens kept static. As shown, the images of the dozer become blurry and their structural details cannot be identified effectively as the distance between the target and the wavefront imaging system increases, because the dozer is out of the DOF. It should also be noted that the depth cannot be quantified from these results, because neither the blurred images nor the wavefronts are very sensitive to it.


Fig. 1 Images acquired by turning off and on the LCMLA and the wavefronts of a model dozer at different depths of the objective space. Distance between the dozer and the LCWIS is 750mm in case-(a), and 850mm in case-(b).


In this paper, a method for realizing effective DOF extension and objective space depth measurement based on an LCMLA-based wavefront imaging system (LCWIS) constructed by us is presented. The DOF can be extended remarkably by stitching the clearest sub-images formed, while the object’s wavefronts are measured simultaneously by electrically operating the LCMLA. An attractive possibility of the method is that the objective space can also be addressably layered, so that the layered imaging space can be indexed rapidly according to the voltage signal applied over the LCMLA. Similarly, the depth information of relatively large objects can be measured only by changing the voltage signal applied over the LCMLA, without any movable parts during the whole imaging process. Although LCMLAs have already been applied in LCMLA-based plenoptic cameras [29,30], those cameras cannot tune the focal length of the LC lenses addressably in the LCMLA space or the related sensor space, as the method presented by us does, so as to rapidly refocus onto objects located at different positions. Compared with LCMLA-based plenoptic cameras, our method is real-time and simpler or more convenient for measuring the depth information of objects, because only the incident light-field needs to be adjusted by the LCMLA, whereas LCMLA-based plenoptic cameras must perform complex calculation [25].

2. LCMLA

2.1 Structure of LCMLA

The hybrid photosensitive array consists of an LCMLA and a CMOS sensor array separated by a distance of ~1.3 mm. In recent years, LCMLAs with different functions have been developed, such as an LCMLA with multi-layered patterned electrodes [31], an LCMLA with swinging focus [32,33], and a compound LCMLA with convergent and divergent functions [34]. In this paper, we use an LCMLA with an addressable patterned electrode. The schematic of the local LCMLA with patterned electrodes is shown in Fig. 2. The key functional structures of the LCMLA include two ~500-μm-thick glass substrates with ITO film electrodes, and a thin layer of LC material with a typical thickness on the micron scale. Only the bottom ITO electrode is patterned, by UV photolithography and a common wet-etching process, to shape a densely arrayed set of micro-apertures with a diameter of 112 μm and a pitch of 140 μm. A layer of rubbed polyimide (PI) film is coated over the surface of the ITO film to initially orient the LC molecules in direct contact with it. Both ITO electrodes are coupled face-to-face to shape a microcavity whose depth is determined by the geometry of 20-μm-diameter microsphere spacers. The refractive indices of the LC material (Merck E44) are ne = 1.7904 and no = 1.5277. The resolution of the CMOS sensor array (MVC14KSAC-GE6, Microview) is 4384 × 3288 and the pixel pitch is 1.4 μm. Consequently, each micro-aperture or single LC microlens corresponds to a sub-array of approximately 100 × 100 pixels on the CMOS sensor. The patterned ITO electrode is divided into three independent, electrically driven regions. Each region has an identical sub-aperture array and conductor wire pins on one side of the substrate.


Fig. 2 (a) Basic structure of the LC device with addressably controlled electrode regions for driving LC molecule reorientation according to the electric field generated by the voltage signals applied across three electrode regions (Vc > Vb), and (b) the LC device fabricated.


Upon applying voltage signals with different root-mean-square (rms) values across the patterned electrode regions, the LC molecules adopt the required director orientations according to the electric field generated between both electrodes. As shown in Fig. 2(a), the electrode regions-(a), -(b) and -(c) are subjected to voltage signals with different rms values; for instance, electrode region-(a) is set to 0 Vrms, region-(b) to Vb, and region-(c) to Vc. The applied rms voltages satisfy the relation Vc > Vb. Thus, the maximum tilting angle of the LC molecules in region-(a) is the smallest among the three regions, and the LC molecules in region-(c) exhibit the largest tilting angle. Therefore, region-(a) of the LC device works only as a uniform phase slab, while regions-(b) and -(c) function as LCMLAs with different beam-converging abilities according to the rms voltage value. In other words, we construct three different sub-LCMLAs with the desired focal lengths based on the three electrode regions of the same LC device. A photograph of the LC device is given in Fig. 2(b).

2.2 Focal length of LCMLA

The testing system for measuring the focusing performance of the LCMLA is demonstrated in Fig. 3. As shown in this figure, the collimated beams are first polarized by a polarizer (OptoSigma USP-50C0.4-38) and then pass through the LCMLA. The light-fields constructed by the LCMLA under measurement are magnified by a ×60 microscope objective and finally captured by a laser beam profiler (WinCamD, DataRay, Inc.). To finely locate the focal plane, we carefully adjust the distance between the LCMLA and the microscope objective to obtain the best point spread function (PSF) of the converged light-fields. The focal length of the LCMLA is approximately equal to the distance between the exit end of the LCMLA and the incident surface of the microscope objective.


Fig. 3 Testing system for measuring the focusing performance of the LCMLA.


Figure 4 demonstrates the relationship between the focal length of the LCMLA and the rms value of the applied voltage signal. As shown, the focal length decreases rapidly as the voltage rises in the range below ~4.0 Vrms, and then decreases only slightly above ~4.0 Vrms. The focal length is ~1.3 mm at ~5.0 Vrms. Both inset figures indicate partial 3D and 2D PSFs at ~4.5 Vrms.
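For a rough consistency check, a thin gradient-index lens approximation, f ≈ r²/(2dΔn_eff), can relate the measured focal length to the effective index modulation across each micro-aperture. This approximation is not used in the paper; the sketch below only combines it with the parameters quoted in Section 2.1 (56-μm microlens radius, 20-μm cell gap, Merck E44 birefringence).

```python
# Back-of-envelope estimate (not from the paper): a thin gradient-index LC
# microlens has f ~ r^2 / (2 * d * delta_n_eff), where r is the microlens
# radius, d the LC layer thickness, and delta_n_eff the effective index
# modulation actually achieved across the aperture.

r = 56e-6                        # microlens radius (m), half the 112-um aperture
d = 20e-6                        # LC layer thickness (m), set by the spacers
delta_n_max = 1.7904 - 1.5277    # full birefringence of Merck E44

f_min = r**2 / (2 * d * delta_n_max)     # shortest focal length this cell allows
delta_n_eff = r**2 / (2 * d * 1.3e-3)    # modulation implied by the ~1.3 mm focus

print(f"lower bound on focal length: {f_min*1e3:.2f} mm")   # ~0.30 mm
print(f"effective delta_n at ~5 Vrms: {delta_n_eff:.3f}")   # ~0.060
```

Read this way, the measured ~1.3 mm focal length at ~5.0 Vrms would correspond to an effective index modulation of roughly 0.06, well below the full birefringence of ~0.26, as expected when only part of the LC layer is reoriented into a lens-like profile.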


Fig. 4 Relationship between the focal length of the LCMLA and the rms value of the voltage signal applied over it.


3. Experiments

3.1 DOF extension

Figure 5 shows the typical imaging characteristics acquired using the hybrid photosensitive array for performing DOF extension. Figure 5(a) demonstrates a photograph of a ruler acquired with a conventional CMOS camera. As shown, the scales 32-37 are fully within the DOF of the camera, while the other scales and numbers are already blurry because they lie outside the DOF. When no voltage signal is applied across the electrodes, the LC device is only a uniform phase slab coupled with the imaging sensors and thus works as a conventional phase-retarding component. When focusing on a single target, a sharp image with a limited DOF can be obtained. When a second target is positioned between the first target and the LCWIS but outside the system’s DOF, the image plane of the second target lies behind the photosensitive array, so a blurred image is produced, as shown in Fig. 5(b).


Fig. 5 Typical characteristics of our imaging setup: (a) a photograph of a ruler acquired in a conventional imaging mode corresponding to (b) schematic of the imaging operation with the LC device turned off, and (c) schematic of arrayed beam convergence with the LCMLA.


In our previous articles, the methods of dividing the LCMLA into 2 × 2 sub-regions and of using two overlapped electrode layers were presented [31,35]. When the objects in the FOV are at different depths, we can apply different voltage signals to assigned sub-regions so as to effectively extend the entire DOF of the imaging structure. Similarly, because the two controlling electrodes have different aperture sizes and are fabricated over different structural layers of the device, they can be controlled separately, and thus each microlens can switch its FOV by applying different voltage signals to the corresponding electrodes. To a certain extent, this can also extend the entire DOF of the imaging structure. Both methods can therefore be used to extend the DOF for imaging a scene with objects located at different depths, or even 3D objects at different places. In this paper, we propose an LC device with addressably controlled sub-electrode regions illuminated by preshaped incident microbeams.

When a voltage signal is applied across one electrode region, the LC device corresponding to this region functions as a sub-LCMLA, as shown in Fig. 5(c). The “triggering” of this region effectively shortens the focal length of the main imaging lens, thereby leading to clear imaging of the second target as a set of arrayed sub-images. Via fast stitching of several sub-images, an image of the second target with clear details can be achieved. Meanwhile, the regions of the LC device without any voltage signal still work as phase slabs, and the first target is still imaged normally. Thus, we can acquire sufficiently clear images of targets separated by a relatively large distance, which corresponds to a significant extension of the DOF of the LCWIS. At the same time, the sub-LCMLA-based photosensitive sensors can also be used to measure the wavefronts of the second target. The clearest sub-images can be obtained by tuning the voltage signal applied to the LCMLA. It should be noted that the wavefronts of the second target can be measured effectively whether or not clear sub-images are obtained by the sub-LCMLA-based photosensitive sensors. Generally, the accurate wavefront can be acquired easily when the sub-images of the secondary microlens imaging are clear, according to the typical Shack-Hartmann wavefront measurement architecture used by us.
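As a rough illustration of the Shack-Hartmann principle referred to above, the sketch below estimates local wavefront slopes from the centroid displacement of each sub-image spot relative to the centre of its microlens cell. It is a minimal example, not the paper’s reconstruction code; the cell size, focal length, and pixel-pitch defaults are taken from Sections 2.1 and 2.2, and the function name is ours.

```python
import numpy as np

def local_slopes(frame, pitch_px=100, f_ml=1.3e-3, pixel=1.4e-6):
    """Estimate wavefront slopes behind each LC microlens (Shack-Hartmann style).

    frame    : 2D intensity image recorded with the LCMLA switched on
    pitch_px : sensor pixels per microlens (~100 x 100 here)
    f_ml     : microlens focal length in metres (~1.3 mm at ~5 Vrms)
    pixel    : sensor pixel pitch in metres
    """
    rows = frame.shape[0] // pitch_px
    cols = frame.shape[1] // pitch_px
    sx = np.zeros((rows, cols))
    sy = np.zeros((rows, cols))
    half = (pitch_px - 1) / 2.0                  # reference (on-axis) spot position
    yy, xx = np.mgrid[0:pitch_px, 0:pitch_px]
    for i in range(rows):
        for j in range(cols):
            sub = frame[i*pitch_px:(i+1)*pitch_px, j*pitch_px:(j+1)*pitch_px]
            w = sub.sum()
            if w == 0:
                continue
            cx = (sub * xx).sum() / w            # spot centroid (pixels)
            cy = (sub * yy).sum() / w
            sx[i, j] = (cx - half) * pixel / f_ml   # slope = displacement / focal length
            sy[i, j] = (cy - half) * pixel / f_ml
    return sx, sy
```

The resulting slope maps would then be integrated zonally or fitted modally to reconstruct the wavefront, as in a standard Shack-Hartmann pipeline.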

As shown in Fig. 6, the imaging lens group is positioned in front of the photosensitive array integrated with the LCMLA. Because the LC device is very sensitive to the polarization state of incident beams, we place a polarizer (OptoSigma USP-50C0.4-38) in front of the LCWIS to reduce the influence of polarization on the imaging quality. In our experiments, we set two objects in the FOV. The farther white model car is a static target 1650 mm away from the LCWIS. The yellow dozer is a near target located at several positions corresponding to different depths in the objective space. We use an M3520-MPW2 imaging lens group with a focal length of 35 mm. Considering that a larger sub-aperture increases the crosstalk between adjacent LC microlenses, whereas a smaller sub-aperture results in smaller sub-images and thus relatively low imaging definition, the f-number is set to 5.6 in our experiments. The near limit of the DOF (the near-DOF, DN in Fig. 5), which indicates the distance between the imaging system and the nearest position at which the target is still within the DOF, can be expressed as

\[ D_N = \frac{f^2 F}{f^2 + N \delta F}. \]
Here, f represents the focal length of the imaging lens group, F the distance between the lens group and the focused target, N the f-number, and δ the circle of confusion. Because the focal length is finite, we use the relation
\[ \delta = \frac{f^2}{N (F - f)} \]
to estimate the circle of confusion approximately. In our experiments, the approximate value of DN is ~817.5 mm. In theory, when we focus on the white model car positioned 1650 mm away from the LCWIS, the yellow model dozer positioned in the range between ~1650 mm and ~820 mm should also be imaged clearly. In fact, placing the LC device in front of the photosensitive array seriously affects the value of the near-DOF and thus leads to a remarkable decrease of the DOF, because the substrate and LC materials have larger refractive indices than air.
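As a quick numerical check of the two relations above with the stated parameters (f = 35 mm, F = 1650 mm, N = 5.6), a short calculation reproduces a near-DOF close to the quoted ~817.5 mm:

```python
f = 35.0      # focal length of the imaging lens group (mm)
F = 1650.0    # distance to the focused target (mm)
N = 5.6       # f-number

delta = f**2 / (N * (F - f))                 # circle of confusion
D_N = f**2 * F / (f**2 + N * delta * F)      # near limit of the DOF

print(f"circle of confusion: {delta:.3f} mm")   # ~0.135 mm
print(f"near-DOF D_N:        {D_N:.1f} mm")     # ~816 mm, close to the ~817.5 mm quoted
```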


Fig. 6 Photograph of the LCWIS and model targets used in experiments for DOF extension.


In the measurements, we first set the white model car 1650 mm away from the LCWIS as a static target, and the LCWIS is always focused on the car. Next, the yellow model dozer is employed as a moving target in the depth range from 1550 mm to 250 mm. We record the imaging results at every 100-mm step along the depth direction, which is also the movement direction of the dozer, while applying suitable voltage signals across the electrode regions of the LC device. It can be observed that the images of the car are always clear and sharp, whereas the image definition of the dozer varies slightly with depth. For example, when the dozer is at 1550 mm, as shown in Fig. 7(a), the edges of the letters and the black stripes on the dozer are very clear. When the dozer is at 1350 mm, as shown in Fig. 7(b), the corresponding image is significantly blurred relative to that in Fig. 7(a); however, the letters and stripes are still clearly recognizable. In theory, the near-DOF of the lens group is about 830 mm, and thus when the LCWIS is focused on the car, we can obtain clear images in the depth range from 1650 mm to ~820 mm. It should be noted that the depth of 1150 mm is still within the near-DOF range. However, when the dozer is at the depth of 1150 mm, as shown in Fig. 7(c), the stripes can be distinguished but the letters are too dim to be identified, because of the larger refractive indices of the substrate and LC materials.


Fig. 7 Direct-imaging results and corresponding enlarged views for different depth position of the yellow model dozer. The dozer was placed at depths of (a) 1550 mm, (b) 1350 mm, and (c) 1150 mm, respectively.


In the next experimental stage, we compare the dozer images acquired from the depth of 950 mm inward. The images include those acquired with the LC device turned off and images stitched from several sub-images formed with the LCMLA turned on. Figure 8 shows eight groups of images corresponding to depths from 950 mm to 250 mm. Column-(a) shows the results obtained only through the partial photosensitive array (LC device off). Column-(b) shows the results when “shaping” voltages are applied over the LCMLA. Column-(c) shows the corresponding images stitched from several sub-images acquired through the partial LCMLA in column-(b).


Fig. 8 Direct-imaging results with the LC device off (column a) and on (column b) along with stitched images formed by sub-images (column c), and their enlarged views for the dozer depth positions of (1) 950mm, (2) 850mm, (3) 750mm, (4) 650mm, (5) 550mm, (6) 450mm, (7) 350mm, and (8) 250mm, respectively.


From these results, we can see that all dozer images in column-(a) are strongly blurred; even the stripes can hardly be observed. In this case, by applying a suitable voltage signal across the appropriate electrode region of the LC device, we can obtain sharp sub-images shaped by the incident beams passing through the LCMLA. Consequently, if the useful information in several sub-images can be selected and quickly stitched together, we can obtain extremely clear images, which is how the images in column-(c) are formed. Upon comparing the images in columns-(a) and -(c), we note that the images in column-(c) are significantly clearer than those in column-(a). Since we use a large amount of the edge information of the sub-images, the final results exhibit obvious “stitching borders” along with some deformation. Even with some deformation, as shown in Fig. 8(1c), the image details are still clearer than those in Fig. 8(1a). Figures 8(2c)-8(8c) show significant improvement, leading to sharp images. Thus, the imaging range is effectively extended from ~250 mm to ~950 mm, corresponding to a depth range of ~700 mm, which is unachievable by conventional imaging methods. We also set the static target at positions of 1350 mm, 1050 mm, and 750 mm away from the LCWIS for performing DOF extension; the experiments are similar to that described above, and similar results are obtained. The extended DOF range of the LCWIS is ~600 mm (from ~250 mm to ~850 mm), ~500 mm (from ~250 mm to ~750 mm), and ~300 mm (from ~250 mm to ~550 mm), respectively. So no matter where the static target on which the LCWIS is focused is located, the DOF of the LCWIS can always be extended effectively by our method. It should be noted that in the experiments the selected model dozer images have a pixel resolution of 1973 × 3288, and the images after stitching have different pixel resolutions according to the position of the model dozer: the lowest is 196 × 660 when the dozer is 250 mm away from the imaging system, and the highest is 1212 × 2020 when the dozer is 950 mm away. This means that there is a reduction in pixel resolution after performing regional stitching with the rendering algorithm for image stitching [29].
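The stitching itself follows the rendering algorithm of [29]; as a purely illustrative sketch of the idea, the snippet below crops the usable centre of each microlens sub-image and tiles the crops into a single image. The crop fraction and function name are our assumptions, and depending on the optical configuration each crop may also need to be flipped before tiling.

```python
import numpy as np

def stitch_subimages(frame, pitch_px=100, keep=0.6):
    """Tile the central portion of each microlens sub-image into one image.

    frame    : raw sensor image recorded with the LCMLA switched on
    pitch_px : sensor pixels per LC microlens (~100 here)
    keep     : fraction of each sub-image kept (illustrative value)
    """
    crop = int(pitch_px * keep)
    margin = (pitch_px - crop) // 2
    rows = frame.shape[0] // pitch_px
    cols = frame.shape[1] // pitch_px
    out = np.zeros((rows * crop, cols * crop), dtype=frame.dtype)
    for i in range(rows):
        for j in range(cols):
            sub = frame[i*pitch_px + margin:i*pitch_px + margin + crop,
                        j*pitch_px + margin:j*pitch_px + margin + crop]
            # each crop may require flipping, depending on the sub-image orientation
            out[i*crop:(i+1)*crop, j*crop:(j+1)*crop] = sub
    return out
```

Because only a fraction of each sub-image is retained, the stitched output has a lower pixel resolution than the raw frame, consistent with the resolution reduction reported above.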

The wavefronts of the yellow model dozer at different depths are also given in Fig. 9. In these measurements, we mainly discuss how to extend the DOF of the LCWIS. The model dozer covers only a small number of LC microlenses, so the wavefront resolution, as shown in Fig. 9, is relatively low. From the measured wavefronts, we can see that the differences among the results are not obvious, apart from the structural size of the wavefronts. Although we extend the imaging DOF in this section, the target depth is still unknown.


Fig. 9 Wavefronts of the model dozer located at different positions.


3.2 Objective space depth measurement

In this section, we define the depth as the distance between the target and the LCWIS, which should be distinguished from the ‘depth’ of the DOF. Figure 10 shows the typical imaging characteristics for objective space depth measurement. In experiments, we find that there is a corresponding relationship between the voltage signal applied over the LCMLA and the depth position where the target is set in the objective space. When the target is set at a position D1 away from the LCWIS, a voltage signal V1 should be applied to the LCMLA to obtain the sharpest sub-image array, as shown in Fig. 10(a). If the target is moved to another position, the voltage signal should be adjusted from V1 to V2 accordingly to obtain the sharpest sub-images with the LCWIS, as shown in Fig. 10(b). Thus, the depth information can be calibrated using the voltage signal applied to shape the clearest images.


Fig. 10 Typical imaging characteristics of our setup: (a) When the LCMLA is driven by voltage signal V1, a sharp sub-image array of the target, which is at the position D1 away from the LCWIS, can be obtained by the photosensitive array. (b) After moving the target to D2 away from the imaging system, the voltage signal should be tuned to V2 for the photosensitive array to output sharp results.


The experimental system for measuring the depth of the objective space has the same architecture as that for extending the DOF of the LCWIS. The imaging lens group is also set in front of the photosensitive array integrated with the LCMLA, and the polarizer is also positioned in front of the LCWIS to reduce, as far as possible, the influence of the polarization state of the incident beams.

Generally, a sharper image has a faster gray-level variation at image edges. There are different edge detection algorithms for targets with different characteristics. Sobel edge detection is a simple way to detect the edges of a target image with obvious texture features like those selected in our experiments [36]. The Sobel operator performs a 2D gray-level gradient measurement on the image. It uses a pair of 3 × 3 convolution masks to estimate the gradients in the horizontal and vertical directions, respectively, as shown by relation (3).

\[ G_x(i,j) = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \ast A_{ij}, \qquad G_y(i,j) = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \ast A_{ij} \]
Here, G_x(i,j) and G_y(i,j) express the gray-level gradients in the x-direction and y-direction at point (i, j) of image A. The absolute gradient magnitude is
\[ G_{ij} = \sqrt{G_x(i,j)^2 + G_y(i,j)^2}. \]
The mean gradient of an image of size m × n can then be expressed as
\[ \bar{G} = \frac{1}{m \times n} \sum_{i,j} G_{ij}. \]

For images of the same object with different clarity, the mean gradient of the sharpest image has the maximum value. In the experiments, we use the Sobel mean gradient to estimate the definition of the sub-images obtained when the LCMLA is driven by different voltage signals.
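A direct implementation of relations (3)-(5) can serve as this sharpness metric; the sketch below is a minimal version using standard convolution, with the function name being ours.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_mean_gradient(image):
    """Mean Sobel gradient magnitude of a grayscale image, relations (3)-(5)."""
    A = np.asarray(image, dtype=float)
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=float)
    gx = convolve(A, kx)            # horizontal gray-level gradient, Gx
    gy = convolve(A, ky)            # vertical gray-level gradient, Gy
    g = np.sqrt(gx**2 + gy**2)      # absolute gradient magnitude, Gij
    return g.mean()                 # average over the m x n pixels
```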

We first calibrate the distances between the positions where the target is placed and the LCWIS. In the experiments, the LCWIS without any voltage signal is always focused on an object located 1500 mm away, and the arrangement of all components is kept unchanged during calibration and measurement. Moving another target from 1050 mm to 100 mm, we record the sub-images and the voltage signals at every 50-mm step along the depth direction. By comparing the clarity of the sub-images shaped by the LCMLA loaded with different voltage signals, we can find the rms value of the voltage signal corresponding to the sub-images of the best quality.

Here, we take the situation in which the target is moved to 750 mm away from the LCWIS as an example. Figure 11 shows the imaging results when the LCMLA is driven by different voltage signals, together with the enlarged views. The gray-level variation is also provided in the bottom plot of Fig. 11. As shown in the figure, the gray-level variation depends on the applied voltage signal. When the voltage signal is 1.6 Vrms, both the peak-valley value and the variation ratio of the gray gradient are at their maximum. So, we use the Sobel operator mentioned above to quantify the definition of the images obtained with different voltage signals and to find the rms voltage corresponding to the depth of 750 mm. The results after edge extraction using the Sobel operator are shown in Fig. 12. We also calculate the Sobel mean gradient of the images corresponding to different voltage signals. From Table 1, we can see that the 750-mm depth is calibrated to 1.6 Vrms for the LCMLA. Using the same method, we obtain the Sobel mean gradient results with the target set at different positions and the LCMLA driven by different voltage signals, as shown in Table 2. A corresponding relationship between the depth of the objective space and the voltage signal is thereby established, as shown in Fig. 13. Finally, the calibration of the depth of the objective space is achieved.
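In outline, the calibration is a voltage sweep at each depth step, scored by the Sobel mean gradient. The sketch below summarizes this procedure; the two hardware stubs (set_lcmla_voltage, capture_subimages) are hypothetical placeholders for the LCMLA driver and the CMOS capture, and the 1.0-3.0 Vrms sweep range is an assumption rather than the paper’s exact setting.

```python
import numpy as np
from scipy.ndimage import sobel

def set_lcmla_voltage(v_rms):          # hypothetical placeholder for the LCMLA driver
    pass

def capture_subimages():               # hypothetical placeholder for the CMOS capture
    return np.random.rand(328, 438)    # placeholder frame (real frames are 3288 x 4384)

def sobel_mean_gradient(img):          # sharpness metric, relations (3)-(5)
    return np.hypot(sobel(img, axis=1), sobel(img, axis=0)).mean()

depth_steps_mm = range(1050, 50, -50)                          # target moved in 50-mm steps
voltage_sweep = [round(1.0 + 0.1 * k, 1) for k in range(21)]   # assumed 1.0-3.0 Vrms sweep

calibration = {}                       # depth (mm) -> rms voltage of the sharpest frame
for depth in depth_steps_mm:
    # (the target is physically placed at `depth` before each sweep)
    scores = {}
    for v in voltage_sweep:
        set_lcmla_voltage(v)
        scores[v] = sobel_mean_gradient(capture_subimages())
    calibration[depth] = max(scores, key=scores.get)
```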


Fig. 11 Direct-imaging results and their enlarged views with the LCMLA driven by different voltage signals, and the gray-level variation of the same region under different voltage signals.


Fig. 12 Local results after edge extraction using Sobel operator.


Table 1. Sobel mean gradient for different voltage signals with the target located at a depth of 750 mm.


Table 2. Sobel mean gradient for different voltage signals with the target located at different depths.


Fig. 13 Relationship between the depth of the objective space and the voltage signal applied.


After finishing the calibration of the depth of the objective space against the voltage signal, we can quantitatively measure the depth or position of a target. First, the model dozer is located at an initial position in the FOV. Changing the voltage signal and recording the imaging results, we find the voltage signal giving the best imaging quality, and then obtain the position or depth range of the target based on the rms value of this voltage signal. Figure 14 gives several directly obtained images and enlarged views of the model dozer. The Sobel mean gradients calculated by us are shown in Table 3. The maximum value of the Sobel mean gradient is obtained when the voltage signal is 1.7 Vrms. According to Fig. 13, 1.7 Vrms corresponds to the position or depth range from 500 mm to 600 mm. The actual distance between the model dozer and the LCWIS is 600 mm, which lies within the measured depth range.
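The measurement step then reduces to picking the voltage with the largest Sobel mean gradient and reading the depth interval off the calibration of Fig. 13. The sketch below illustrates this using only the three voltage/depth pairs quoted in the text; the full calibration covers every 50-mm step, and the function name is ours.

```python
# Depth ranges quoted in the text for three calibrated drive voltages (cf. Fig. 13).
depth_range_mm = {
    1.6: (650, 750),
    1.7: (500, 600),
    1.9: (200, 300),
}

def measure_depth(scores, calibration=depth_range_mm):
    """scores: tried rms voltage -> Sobel mean gradient of the recorded frame."""
    best_v = max(scores, key=scores.get)      # e.g. 1.7 Vrms for the dozer (Table 3)
    return best_v, calibration.get(best_v)    # -> (1.7, (500, 600)) in that case
```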


Fig. 14 Direct-imaging results (color) and their enlarged views (gray) of the model dozer corresponding to different voltage signals.


Table 3. Sobel mean gradient for different voltage signals.

3.3 Wavefront imaging with DOF extension and depth measurement

The methods to extend the DOF of the LCWIS and to measure the depth of the objective space are given above. The microstructure of the LCMLA for DOF extension and depth measurement is the same as that for LC-based wavefront imaging. Thus, we can continuously achieve wavefront imaging detection together with DOF extension and depth measurement using the same integrated architecture of the LCMLA and the sensor array. We set two targets in front of the LCWIS: the yellow one is 650 mm away from the LCWIS and the green one is 300 mm away. Figure 15(a) shows the imaging results recorded through the LCWIS without any voltage signal. Because the LCWIS has already been focused onto a target 1500 mm away, both targets are out of the DOF of the LCWIS, which means that their images are blurry. Turning on the LCMLA and then recording the arrayed sub-images, we can perform LC-based wavefront measurement and reconstruction, sub-image stitching, and depth measurement and calibration. Through addressably loading suitable voltage signals over the patterned electrodes of the LCMLA, we can apply the desired voltage signal across the specific electrode region assigned to each target of interest.


Fig. 15 Results of performing wavefront imaging with DOF extension and depth measurement. (a) Image obtained directly by the LCWIS without any voltage signal applied to the LCMLA, (b) sub-image array output from the LCMLA driven by a voltage signal of 1.6 Vrms for the left target and another voltage signal of 1.9 Vrms for the right target, (c) clear right (green) target image reconstructed by stitching the arrayed sub-images, and its composite wavefront, (d) clear left (yellow) target image reconstructed by the same process, and its composite wavefront.


According to the proposed method, we determine the best imaging voltage signals of the yellow target and the green target to be 1.6 Vrms and 1.9 Vrms, respectively. The imaging results are shown in Fig. 15(b). Based on the relationship between the depth of the objective space and the voltage signal, as shown in Fig. 13, we find that the depth ranges of the two targets are 650-750 mm and 200-300 mm, which are finely consistent with the actual position measurements. Figures 15(c) and 15(d) show the stitched images and the corresponding wavefronts of both targets. It should be indicated that the wavefronts here are composite wavefronts, which include many spectral components because a broad-spectrum source is used. We have already presented a practical approach to separate and mix wavefronts in [23]. Thus, we can acquire the desired intensity images through performing DOF extension and depth measurement, together with the corresponding wavefronts.

4. Conclusion

In this paper, we present a method to remarkably extend the DOF of the LCWIS and further quantitatively measure the depth of the objective space, so as to addressably layer it for rapid indexing of the imaging space according to the voltage signal applied over the LCMLA. In the LC-based wavefront imaging presented in our previous article, we can only obtain clear or sharp images of targets located at different positions by moving the imaging lens or adjusting its focal length. If all parts of the LCWIS are fixed, the images of targets distributed over a relatively large depth range of the objective space will be blurry. According to experiments and theoretical prediction, the wavefronts of the same target located at different depths show few differences; in other words, the wavefronts of targets distributed within a relatively limited spatial depth are similar. So, we can obtain the wavefront information of a target from a single measurement instead of measuring continuously at every depth selected for acquiring clear sub-images corresponding to a different voltage signal or objective distance. This is very advantageous for calculating the PSF used in processing the target's image, or for removing the influence of a complex flow field based on wavefront measurement and correction. Our method can also be employed to quantitatively measure the geometric depth of targets based on the hybrid integration of an LCMLA and a common sensor array. We successfully realize wavefront imaging with DOF extension and depth measurement. Compared with a plenoptic camera for DOF extension and depth measurement, our method demonstrates a simple principle and easy operation without any complex calculation or movement of the key parts or components of the LCWIS. We remark that our method can not only be used to "lock" targets located at different positions in the same FOV, but can also obtain the targets' wavefronts for continuously sharpening their intensity images, simultaneously. It is worth mentioning that this kind of fast and continuous acquisition of both the wavefronts and the corresponding intensity images through only a single optical path offers several advantages for time-varying objects, especially for in vivo imaging with microscopes. We are now studying ferroelectric superlattice materials and expect them to bring about a better performance.

However, some problems still exist. The quality of the images formed by stitching the sub-image array is currently unstable; sometimes there are obvious distortions or edge residuals in the final images. The precision of the depth measurement is still poor at the current stage. We think that the main reason is the limitation of our voltage signal source, whose accuracy is only 0.1 Vrms. The secondary reason is that the focal length of the LCMLA is very sensitive to the applied voltage signal, which means that one rms value of the voltage signal generally corresponds to a focus range of variable size. In further work, we will divide the patterned ITO electrode into several independent zones so as to make zoned LC microlenses, each with its own electrical tuning capability, for shaping a gridded imaging field, which should present a performance far exceeding the current imaging structures.

Funding

National Natural Science Foundation of China (NSFC) (61432007, 61176052); Natural Science Foundation of Hubei province (2015BCE054); the Major Technological Innovation Projects in Hubei Province (2016AAA010); and SAST Innovation Foundation (SAST2015081).

References and links

1. Y. J. Wang, X. Shen, Y. H. Lin, and B. Javidi, “Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens,” Opt. Lett. 40(15), 3564–3567 (2015).

2. R. Bao, C. Cui, S. Yu, H. Mai, X. Gong, and M. Ye, “Polarizer-free imaging of liquid crystal lens,” Opt. Express 22(16), 19824–19830 (2014).

3. R. Ng, “Fourier slice photography,” in Proceedings of ACM SIGGRAPH (2005), pp. 735–744.

4. R. Ng, “Digital light field photography,” Ph.D. dissertation (Stanford University, 2006).

5. C. Perwass and L. Wietzke, “Single-lens 3D camera with extended depth-of-field,” Proc. SPIE 8291, 829108 (2012).

6. T. Georgiev and A. Lumsdaine, “The multifocus plenoptic camera,” Proc. SPIE 8299, 829908 (2012).

7. C. Hahne, A. Aggoun, and V. Velisavljevic, “The refocusing distance of a standard plenoptic photograph,” in 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON) (2015).

8. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Refocusing distance of a standard plenoptic camera,” Opt. Express 24(19), 21521–21540 (2016).

9. R. A. Muller and A. Buffington, “Real-time correction of atmospherically degraded telescope images through image sharpening,” J. Opt. Soc. Am. 64(9), 1200–1210 (1974).

10. R. Davies and M. Kasper, “Adaptive optics for astronomy,” Annu. Rev. Astron. Astrophys. 50(1), 305–351 (2012).

11. J. R. Crepp, “Improving planet-finding spectrometers,” Science 346(6211), 809–810 (2014).

12. V. Marx, “Microscopy: hello, adaptive optics,” Nat. Methods 14(12), 1133–1136 (2017).

13. K. Wang, W. Sun, C. T. Richie, B. K. Harvey, E. Betzig, and N. Ji, “Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue,” Nat. Commun. 6(1), 7276 (2015).

14. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017).

15. W. Zheng, Y. Wu, P. Winter, R. Fischer, D. D. Nogare, A. Hong, C. McCormick, R. Christensen, W. P. Dempsey, D. B. Arnold, J. Zimmerberg, A. Chitnis, J. Sellers, C. Waterman, and H. Shroff, “Adaptive optics improves multiphoton super-resolution imaging,” Nat. Methods 14(9), 869–872 (2017).

16. S. A. Rahman and M. J. Booth, “Direct wavefront sensing in adaptive optical microscopy using backscattered light,” Appl. Opt. 52(22), 5523–5532 (2013).

17. N. Ji, “The practical and fundamental limits of optical imaging in mammalian brains,” Neuron 83(6), 1242–1245 (2014).

18. S. Xu, Y. Li, Y. Liu, J. Sun, H. Ren, and S. T. Wu, “Fast-response liquid crystal microlens,” Micromachines (Basel) 5(2), 300–324 (2014).

19. H. Ren, S. Xu, and S.-T. Wu, “Polymer-stabilized liquid crystal microlens array with large dynamic range and fast response time,” Opt. Lett. 38(16), 3144–3147 (2013).

20. Y. Chen, D. Xu, S. T. Wu, S. Yamamoto, and Y. Haseba, “A low voltage and submillisecond-response polymer-stabilized blue phase liquid crystal,” Appl. Phys. Lett. 102(14), 141116 (2013).

21. J. Y. Jo, P. Chen, R. J. Sichel, S. J. Callori, J. Sinsheimer, E. M. Dufresne, M. Dawber, and P. G. Evans, “Nanosecond dynamics of ferroelectric/dielectric superlattices,” Phys. Rev. Lett. 107(5), 055501 (2011).

22. H. Ishii, T. Nakajima, Y. Takahashi, and T. Furukawa, “Ultrafast polarization switching in ferroelectric polymer thin films at extremely high electric fields,” Appl. Phys. Express 4(3), 1027–1032 (2011).

23. Q. Tong, Y. Lei, Z. Xin, X. Zhang, H. Sang, and C. Xie, “Dual-mode photosensitive arrays based on the integration of liquid crystal microlenses and CMOS sensors for obtaining the intensity images and wavefronts of objects,” Opt. Express 24(3), 1903–1923 (2016).

24. A. Finkbeiner, “Laser focus: by firing lasers into the sky, Claire Max has transformed the capabilities of current and future telescopes,” Nature 517(7535), 430–433 (2015).

25. Y. Lei, Q. Tong, Z. Xin, D. Wei, X. Zhang, J. Liao, H. Wang, and C. Xie, “Three dimensional measurement with an electrically tunable focused plenoptic camera,” Rev. Sci. Instrum. 88(3), 033111 (2017).

26. Z. Xin, D. Wei, X. Xie, M. Chen, X. Zhang, J. Liao, H. Wang, and C. Xie, “Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation,” Opt. Express 26(4), 4035–4049 (2018).

27. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9(9), 563–571 (2015).

28. H. B. de Aguiar, S. Gigan, and S. Brasselet, “Polarization recovery through scattering media,” Sci. Adv. 3(9), e1600743 (2017).

29. Y. Lei, Q. Tong, X. Zhang, H. Sang, A. Ji, and C. Xie, “An electrically tunable plenoptic camera using a liquid crystal microlens array,” Rev. Sci. Instrum. 86(5), 053101 (2015).

30. H. Kwon, Y. Kizu, Y. Kizaki, M. Ito, M. Kobayashi, R. Ueno, K. Suzuki, and H. Funaki, “A gradient index liquid crystal microlens array for light-field camera applications,” IEEE Photonics Technol. Lett. 27(8), 836–839 (2015).

31. S. Kang, T. Qing, H. Sang, X. Zhang, and C. Xie, “Ommatidia structure based on double layers of liquid crystal microlens array,” Appl. Opt. 52(33), 7912–7918 (2013).

32. S. Kang, X. Zhang, C. Xie, and T. Zhang, “Liquid-crystal microlens with focus swing and low driving voltage,” Appl. Opt. 52(3), 381–387 (2013).

33. S. Kang and X. Zhang, “Liquid crystal microlens with dual apertures and electrically controlling focus shift,” Appl. Opt. 53(2), 244–248 (2014).

34. S. Kang and X. Zhang, “Compound liquid crystal microlens array with convergent and divergent functions,” Appl. Opt. 55(12), 3333–3338 (2016).

35. J. Lin, Q. Tong, Y. Lei, Z. Xin, X. Zhang, A. Ji, H. Sang, and C. Xie, “An arrayed liquid crystal Fabry–Perot infrared filter for electrically tunable spectral imaging detection,” IEEE Sens. J. 16(8), 2397–2403 (2016).

36. O. Vincent and O. Folorunso, “A descriptive algorithm for Sobel image edge detection,” Proc. Info. Sci. IT Edu. Conf. 40, 97–107 (2009).
