Optica Publishing Group

Dual-mode photosensitive arrays based on the integration of liquid crystal microlenses and CMOS sensors for obtaining the intensity images and wavefronts of objects

Open Access

Abstract

In this paper, we present dual-mode photosensitive arrays (DMPAs) constructed by the hybrid integration of an electrically driven liquid crystal microlens array (LCMLA) and a CMOS sensor array, which can be used to measure both conventional intensity images and the corresponding wavefronts of objects. We utilize liquid crystal materials to shape a microlens array with an electrically tunable focal length. By switching the driving voltage signal on and off, the wavefronts and the intensity images can be acquired sequentially through the DMPAs. We use white light to obtain the object's wavefronts so as to avoid losing important wavefront information. We separate the white-light wavefronts, which contain a large number of spectral components, and experimentally compare them with the single-spectral wavefronts of typical red, green, and blue lasers, respectively. We then mix the red, green, and blue wavefronts into a composite wavefront containing more optical information about the object.

© 2016 Optical Society of America

1. Introduction

Generally, the wavefront is a basic parameter for characterizing the state of lightwave propagation through a given environment. It is closely related to several factors, including the spatial distribution of the refractive index of the surrounding media, the spatial propagation behavior of the lightwave, the energy transport efficiency, and the spatial arrangement of the energy flux of the light fields. For a typical optical imaging system, which acquires the spatial differences in the energy flux transported by the light fields leaving objects, the imaging efficiency is determined by several factors, including the light-field morphology, the beam propagation behavior, the light-gathering capability, and the photoelectric conversion efficiency of the sensors. The wavefront of a lightwave propagating through complicated environments and man-made optical setups evolves gradually according to the factors mentioned above, and thus carries clues about the stretching, compression, or distortion of the lightwave in natural media or artificial optical environments. Based on these physical properties of the wavefront, modern imaging systems are often equipped with a functional module that acquires the corresponding wavefronts, so as to efficiently evaluate and improve the point spread function of the focused light fields and hence the final intensity images.

To realize dual-mode imaging detection, including conventional intensity imaging and corresponding wavefront measurement, two typical optical system architectures have been constructed: one type uses two different optical paths to acquire intensity images and wavefronts independently, while the other shares a main optical aperture but has two sub-optical paths to perform imaging and wavefront measurement, respectively. The latter generally contains a beamsplitter that guides the incident beams into two independent optical paths, and therefore needs two sensor chips: one constructs an imaging unit, and the other forms a wavefront measurement structure by integrating a microlens array with a sensor chip, as exemplified by the typical Shack-Hartmann sensor array (SHSA) [1, 2]. So far, the typical SHSA has been effectively utilized to carry out atmospheric wavefront measurement [3, 4]. It is also used in other fields, such as ophthalmology [5, 6], optical device alignment [7, 8], and semiconductor wafer quality control [9]. The imaging systems mentioned above are generally complex and expensive, and thus exhibit several notable disadvantages, such as large volume and weight, complex driving and control schemes, and relatively slow fusion of image and wavefront information.

To simplify the imaging detection system, we propose a dual-mode photosensitive array (DMPA) that can acquire the conventional intensity image and the corresponding wavefront sequentially, so that only one optical system is needed to perform both imaging and wavefront measurement. The key approach is to use an electrically driven liquid crystal microlens array (LCMLA) to replace the conventional microlenses with fixed surface profiles in a common SHSA. The LCMLA exhibits a dual-function character depending on the electrical signal applied over it, because it is easily changed into a common liquid crystal phase slab once the applied electrical signal is removed. As demonstrated previously, an electrically switched LCMLA can help improve the resolution of intensity images, and similar functions have already been realized in several optical structures [10, 11]. Liquid crystal materials have special electro-optic properties that can be used to build electrically tunable microlenses, because the liquid crystal molecules are easily reoriented following variations of the generated electric field. In 1979, Sato presented the first liquid crystal microlens [12]. Since then, liquid crystal microlenses have developed rapidly, and many typical functional structures, such as the double-layer liquid crystal lens [13], the adaptive liquid crystal lens [14], and the liquid crystal microlens with focus swing [15], have been presented. Many imaging applications based on LCMLAs have also been suggested, for example, an electrically tunable plenoptic camera based on an LCMLA [16] and a gradient-index LCMLA for light-field camera applications [17].

In 2003, L. Seifert et al. proposed an adaptive Shack-Hartmann sensor in which the microlens array was replaced by a liquid crystal display for measuring the wavefront [18]. The LCMLA has also been utilized to compensate wavefront aberrations [19]. However, no article has reported an LCMLA being utilized in an optical system to acquire both conventional high-definition intensity images and the corresponding wavefronts. In 2011, R. S. Cudney replaced the conventional microlenses with electrically controlled ferroelectric zone plates to build a Shack-Hartmann wavefront sensor that can be used for both image acquisition and wavefront sensing [20]. This modified wavefront sensor resembles a DMPA; however, its control voltage is very high, reaching the kV scale. Compared to ferroelectric zone plates, the LCMLA demonstrates the remarkable advantage of a much lower driving voltage.

In this paper, we present a DMPA built by integrating an LCMLA and a CMOS sensor. When a voltage signal is applied over the LCMLA's electrodes, the LCMLA works in a common refractive microlens mode, shaping a liquid-crystal-based SHSA, and the sensors are used to obtain a low-definition intensity image for constructing the wavefront. When the voltage signal is removed, the LCMLA turns into a common liquid crystal phase slab, and the conventional intensity images of objects can be acquired. Therefore, we can switch between the wavefront measurement mode and the imaging mode merely by turning the controlling voltage applied over the liquid crystal microstructures on and off. We also present the concept of the composite wavefront. In section 2, the basic microstructure of the DMPAs and several key approaches for reconstructing the wavefronts of objects are described. In section 3, we show the optical features of the fabricated LCMLA. In section 4, the experiments and measurement results are introduced, the composite wavefronts are explained, and the methods of separating and mixing wavefronts from the object and from spectral lasers are discussed.

2. Key structures of photosensitive arrays

The DMPAs designed and fabricated by us consist of an LCMLA and a CMOS sensor array. Figure 1 shows the key structure of the LCMLA, which contains two ~500-μm-thick glass substrates, each with an indium tin oxide (ITO) electrode, and a layer of liquid crystal material with a typical thickness on the micron scale. To shape the ITO electrodes, a ~50-nm-thick ITO film is first coated over one side of each glass substrate. The top ITO electrode of the LCMLA is patterned by conventional UV photolithography and then etched into an array of circular holes, each with a diameter of 119 μm and a pitch of 140 μm, by a wet chemical etching process. A layer of polyimide (PI) film is subsequently coated over the surface of the ITO film, fully filling each fabricated circular hole, and then rubbed manually to shape initial parallel grooves of ~750 nm width and ~50 nm depth, which lead the LC molecules in direct contact with them into the desired surface arrangement, aligned homogeneously along the groove direction. The two ITO electrodes are arranged face to face with the same groove direction and coupled into a microcavity whose depth is determined by microsphere spacers of 20 μm diameter. The refractive indexes of the liquid crystal material used (Merck E44) are n_e = 1.7904 and n_o = 1.5277. The resolution of the CMOS sensor array (MVC14KSAC-GE6, Microview) is 4384 × 3288, and its pixel pitch is 1.4 μm. Therefore, each circular hole, i.e., a single liquid crystal microlens, corresponds to 100 × 100 pixels of the CMOS sensor array (a so-called sub-arrayed sensor), and thus the resolution of the wavefront measurement is about 43 × 32.
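The pixel bookkeeping above can be verified with a short calculation. This is a sketch in Python using the values stated in the text; the variable names are ours:

```python
# Sub-aperture geometry of the DMPA (values from the text above).
microlens_pitch_um = 140.0   # pitch of the LCMLA circular-hole array
pixel_pitch_um = 1.4         # CMOS pixel pitch (MVC14KSAC-GE6)
sensor_res = (4384, 3288)    # full CMOS resolution (width x height)

# Pixels covered by each microlens, i.e. one sub-arrayed sensor
pixels_per_lens = round(microlens_pitch_um / pixel_pitch_um)  # -> 100

# Effective resolution of the wavefront measurement: one sample per microlens
wavefront_res = (sensor_res[0] // pixels_per_lens,
                 sensor_res[1] // pixels_per_lens)             # -> (43, 32)
```

These two lines reproduce the 100 × 100 sub-array size and the ~43 × 32 wavefront sampling resolution quoted above.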

Fig. 1 Key structure and parameters of the LCMLA designed and fabricated by us.

The micro-structural schematic of the DMPAs developed by us is shown in Fig. 2. As shown, when the voltage signal applied over the electrode pair of the LCMLA is switched off, the device acts only as a common phase retarder, and the conventional intensity images of objects can be acquired. When a square-wave voltage signal with a root-mean-square (rms) value in the range from ~2.0 to ~20 Vrms is applied over the liquid crystal microstructures, the device works as a microlens array, and the incident beams passing through the LCMLA are refocused onto each sub-arrayed sensor. The division wavefront maps are then obtained according to the focus distribution over each sub-arrayed sensor. By switching the voltage signal off and on, the DMPA works in either the imaging mode or the wavefront measurement mode.

Fig. 2 Micro-structural schematic of the DMPAs developed by us for performing conventional intensity imaging and corresponding wavefront measurement. (a) No controlling voltage: the DMPAs work in the imaging mode. (b) A voltage signal is applied: the DMPAs work in the wavefront measurement mode. (c) Cross-sectional view of a single liquid crystal microlens driven by a voltage signal with the required amplitude, duty cycle, and waveform.

3. Optical features

Because the focal length varies with the wavelength of the beams incident upon the microlenses, the focal lengths for beams of different spectral components, including the typical red, green, and blue wavelengths, should be measured separately. The optical system for measuring the actual focal length of each spectral lightwave processed by the LCMLA is shown in Fig. 3. As shown in this figure, the collimated beams from several spectral lasers (Changchun New Industries Optoelectronics Tech. Co., Ltd.) with central wavelengths of 671 nm (red), 532 nm (green), and 473 nm (blue) are first polarized by a polarizer (USP-50C0.4-38, OptoSigma) and then pass through the LCMLA. The resulting light fields leaving the LCMLA are strongly magnified by a microscope objective and finally captured by a laser beam profiler (WinCamD, DataRay, Inc.). To finely locate the shaped focal plane, we carefully adjust the distance between the LCMLA and the microscope objective to obtain the best point spread function (PSF) of the converged light fields. The distance between the exit face of the LCMLA and the entrance face of the microscope objective is recorded, and the focal length of the LCMLA is approximately the sum of this distance and the thickness of the glass substrate.

Fig. 3 Measurement system for acquiring common optical performances of the LCMLA fabricated by us.

Several typical PSFs of the LCMLA around the focal plane are demonstrated in Fig. 4. As shown in Figs. 4(a) and 4(b), the converging light fields of the red laser are shaped effectively, and their PSFs are captured at different voltage states using a ×60 microscope objective. In Fig. 4(a), it can be noted that the PSFs of the LCMLA show a similar appearance with a sharp peak over the range from 4 Vrms to 5 Vrms. In other words, the LCMLA exhibits an obvious focusing depth when the voltage signal varies in the range from ~4.0 Vrms to ~5.0 Vrms, where the PSFs of the focal spots are almost the same. In the following experiments, the value of 4.5 Vrms is chosen as the controlling voltage indicating the focusing state of the LCMLA. It should be noted that 4.5 Vrms is not the unique value producing a focal plane with the same energy-converging state of the LCMLA. The diameter of the focal spot is ~4 μm. Figure 4(b) shows the corresponding focal spots of the red, green, and blue beams at 4.5 Vrms, from left to right. Figure 4(c) shows the red beam's PSFs over part of the LCMLA at 4.5 Vrms. From the figure, we can see that the PSF of each microlens is sharp, and the uniformity of the PSFs is about 81.54%. Because the PSFs of the microlenses indicate the extent of energy convergence of the beams they process, an ideal PSF profile not only reflects good fabrication quality but also predicts the required imaging quality. In the following experiments, the converging effect of the LCMLA is used to realize dual-mode imaging detection.

Fig. 4 Beam converging patterns and the corresponding PSFs of the LCMLA driven by different voltage signals. (a) The beam converging spots and corresponding PSFs of a single liquid crystal microlens: red light at different rms voltage states. (b) The beam converging spots and PSFs of spectral lasers of different wavelengths processed by a liquid crystal microlens at 4.5 Vrms. (c) Arrayed PSFs of red beams processed by part of the LCMLA at 4.5 Vrms.

Figure 5 shows the relationship between the focal length of the LCMLA and the rms value of the applied controlling signal. When the rms voltage is 1.0 Vrms, the beam converging effect of the LCMLA is not obvious, as shown by the three insets in the figure. Considering that at this voltage an obvious reorientation of the liquid crystal molecules only happens around the edge of each circular hole shaped in the top electrode, a pseudo focal length can be used to approximately estimate the distance between the LCMLA and the microscope objective at 1.0 Vrms, because the converging spot is not yet a focal spot. As shown in the experiments, the focal spot is shaped effectively once the controlling voltage applied over the LCMLA exceeds ~1.5 Vrms. The focal length decreases rapidly as the controlling voltage rises from ~1.5 Vrms to ~2.5 Vrms, then decreases slightly, and remains almost unchanged once the controlling voltage exceeds ~6.0 Vrms. The focal lengths of the red, green, and blue beams at ~4.5 Vrms are ~1.18 mm, ~1.01 mm, and ~0.95 mm, respectively. As shown in the figure, the focal length of the red beams is the largest because their wavelength is the longest.

Fig. 5 Relationship between the focal length of the LCMLA and the rms value of the voltage signal applied.

4. Experiment and discussion

In the experiments, the DMPAs developed by us are used directly to process beams so as to obtain both the conventional intensity images and the corresponding wavefronts of the same object against a dark background. A schematic of the measurement platform is presented in Fig. 6. Figure 6(a) shows the main experimental setup, and a photograph of an actual measurement is also given in Fig. 6(c). The imaging lens is a Computar M3520-MPW2 35 mm 1:2. Figure 6(b) shows the liquid crystal device fabricated by us, where the red dashed box indicates the effective zone of the device used to carry out beam processing. Because of the short focal length of the LCMLA, the liquid crystal device has to be placed almost in contact with the CMOS sensor, and the distance between the LCMLA and the sensors should approach the focal length as closely as possible, according to the simulation and measurement results. We use a white light source (ARC LAMP SRC F/1 COLL COND, Newport Corporation) and three spectral lasers of different wavelengths, providing the typical red, green, and blue beams, to illuminate the objects. The fabricated DMPA is used to obtain sequentially both the conventional intensity images and the low-definition intensity images for constructing the corresponding wavefronts of the objects. All acquired data are saved on a computer.

Fig. 6 Experimental system for performing dual-mode imaging detection. (a) The main experimental setup. (b) The liquid crystal device fabricated by us, with the red dashed box indicating the effective zone of the device for carrying out beam processing. (c) A photograph of an actual measurement.

4.1 Slope calculation

Figure 7 shows the schematic for carrying out the wavefront measurement [9, 21]. Generally, the slope of each division wavefront can be calculated by

$$\tan\theta_x = \frac{\Delta x}{f}, \qquad \tan\theta_y = \frac{\Delta y}{f},$$

where $\Delta x$ and $\Delta y$ are the lateral shifts of the centroid point in the x- and y-directions, respectively, and $f$ is the focal length of the microlenses. The lateral shift of the centroid point can be expressed as

$$\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}_{i,j} = \begin{bmatrix} x_c - x_o \\ y_c - y_o \end{bmatrix}_{i,j},$$

where $x_o$ and $y_o$ are the central coordinates of the (i, j)-th microlens, and $x_c$ and $y_c$ are the centroid coordinates of the division wavefront corresponding to the (i, j)-th microlens. We calculate the coordinates of the centroid point by

$$\begin{bmatrix} x_c \\ y_c \end{bmatrix}_{i,j} = \frac{1}{\sum\limits_{m=[M_l]_{i,j}}^{[M_u]_{i,j}} \sum\limits_{n=[N_l]_{i,j}}^{[N_u]_{i,j}} I(m,n)} \begin{bmatrix} \sum\limits_{m=[M_l]_{i,j}}^{[M_u]_{i,j}} \sum\limits_{n=[N_l]_{i,j}}^{[N_u]_{i,j}} x(m,n)\, I(m,n) \\[6pt] \sum\limits_{m=[M_l]_{i,j}}^{[M_u]_{i,j}} \sum\limits_{n=[N_l]_{i,j}}^{[N_u]_{i,j}} y(m,n)\, I(m,n) \end{bmatrix},$$

where $[x_c]_{i,j}$ and $[y_c]_{i,j}$ are the coordinates of the centroid point corresponding to the (i, j)-th microlens of the LCMLA, $x(m, n)$ and $y(m, n)$ are the coordinates of the (m, n)-th pixel, $I(m, n)$ is the light intensity at pixel (m, n), and $[M_u]_{i,j}$, $[M_l]_{i,j}$, $[N_u]_{i,j}$, and $[N_l]_{i,j}$ are the upper and lower limits of the coordinate range corresponding to the (i, j)-th microlens. Finally, we can reconstruct the wavefront from the slope data acquired from the low-definition intensity image.
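The centroid and slope computation above can be sketched in a few lines of Python with NumPy. This is an illustrative implementation, not the authors' code; the function and variable names are ours, and square sub-aperture windows are assumed:

```python
import numpy as np

def subaperture_slopes(I, lens_centers, half_width, f):
    """Estimate local wavefront slopes from a DMPA low-definition image.

    I            : 2-D intensity image (rows m = y, columns n = x)
    lens_centers : list of (x_o, y_o) microlens optical centers, in pixels
    half_width   : half-size of the square sub-aperture window, in pixels
    f            : microlens focal length, expressed in pixels
    Returns (tan_theta_x, tan_theta_y), one entry per microlens.
    """
    sx, sy = [], []
    for x_o, y_o in lens_centers:
        # Sub-aperture limits [M_l, M_u] x [N_l, N_u] around the lens center
        m0, m1 = int(y_o) - half_width, int(y_o) + half_width
        n0, n1 = int(x_o) - half_width, int(x_o) + half_width
        sub = I[m0:m1 + 1, n0:n1 + 1].astype(float)
        ys, xs = np.mgrid[m0:m1 + 1, n0:n1 + 1]
        total = sub.sum()
        # Intensity-weighted centroid (x_c, y_c) of the focal spot
        x_c = (xs * sub).sum() / total
        y_c = (ys * sub).sum() / total
        # Slope of this division wavefront: tan(theta) = centroid shift / f
        sx.append((x_c - x_o) / f)
        sy.append((y_c - y_o) / f)
    return np.array(sx), np.array(sy)
```

For a focal spot displaced by (3, 2) pixels from a lens center, with f expressed as 100 pixels, the routine returns slopes of 0.03 and 0.02, matching the equations above.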

Fig. 7 Schematic of measuring the division wavefront map of the beams out from the object.

4.2 Mean-absolute-difference

The mean absolute difference (MAD) is usually used to describe the match between two block images [22, 23]. In our experiments, we need to compare the similarity of two wavefronts, and therefore the MAD is used to describe the similarity between them. The equation is

$$\mathrm{MAD} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left| W_s(i,j) - r \times W_t(i,j) \right|}{M \times N},$$
where $W_s$ is the standard wavefront, $W_t$ is the measured wavefront, and $r$ is an empirical coefficient, which can be chosen in the range from 0.01 to 1 according to our experimental results.
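A direct implementation of this metric is straightforward; the following is a sketch in Python with NumPy, with a function name of our choosing:

```python
import numpy as np

def mad(W_s, W_t, r=1.0):
    """Mean absolute difference between a standard wavefront W_s and a
    measured wavefront W_t, with the empirical scaling coefficient r
    (chosen in the range [0.01, 1] in the experiments described above)."""
    W_s = np.asarray(W_s, dtype=float)
    W_t = np.asarray(W_t, dtype=float)
    return float(np.abs(W_s - r * W_t).mean())
```

Identical wavefronts give MAD = 0; smaller values indicate greater similarity.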

4.3 DMPAs

The DMPAs fabricated by us can work in one of two modes, defined as the imaging mode and the wavefront measurement mode, respectively. Based on the DMPAs, we can obtain both the conventional intensity images and the corresponding wavefronts simply by applying or removing a voltage signal over the electrode pair of the liquid crystal microstructures of the DMPAs. The typical characteristics of this so-called dual-mode imaging detection are shown in Fig. 8. As shown, the general imaging process starts by applying or removing a voltage signal over the liquid crystal microstructures of the DMPA. It is very important that a refocused light-field distribution map be measured effectively in order to correctly construct the wavefront of a model dozer when the DMPA works in the wavefront measurement mode. If a voltage signal with a sufficient rms value is applied over the electrode pair of the liquid crystal microstructures of the DMPA, the photosensitive arrays of the DMPA are zoned according to the scale of the shaped liquid crystal microlenses; they still perform an imaging operation, but with a relatively low spatial resolution or imaging definition compared with the normal imaging operation of the coupled large-scale CMOS sensors. After the voltage signal is removed, the liquid crystal microlenses are transformed into a phase retarder for the beams.

Fig. 8 Typical imaging characteristics leading to construction of the light wavefront of the object based on the developed DMPAs. (a) Conventional intensity image of a model dozer when the DMPA works in the imaging mode. (b) Low-definition intensity image used to shape the wavefront of the model dozer when the DMPA works in the wavefront measurement mode. (c) The constructed wavefront.

The typical intensity image of a model dozer is shown in Fig. 8(a). It can be seen that the intensity image of the object is very clear. Figure 8(b) shows the division low-definition intensity image for constructing the wavefront, obtained with the DMPA when the liquid crystal microstructures are driven by a voltage signal with a suitable rms value so as to form an array of converging microlenses with electrically tunable focal length. We can see that the definition of the obtained image is remarkably lower than that of the normal image of the same object; each sub-arrayed sensor outputs a zoned sub-image of the object, which is used to calculate the slope of the division wavefront map. The definition of the wavefront measurement mode is determined by the scale of the liquid crystal microlenses; in other words, the object light field, or the corresponding object image, is divided into a sub-aperture array, or sub-image array, by the shaped microlenses. The light fields are redistributed after passing through the liquid crystal microlenses, the sub-arrayed sensors only record the focused light-field information of each sub-aperture or sub-image, and the wavefront is finally shaped by integrating all sub-image information. Using the equations above, the wavefront data can be calculated, and the wavefront can then be reconstructed from the acquired low-definition intensity image. Two small insets indicating the detailed beam distribution characteristics are also presented; they have been adjusted, the red frame to a brightness of 52% and a contrast of 70%, and the blue frame to a brightness of 65% and a contrast of 80%.

4.4 Wavefront separating

The light wavefront is generally presented on the basis of the transport behavior of beams with a single wavelength or a very narrow bandwidth. To effectively analyze the wavefront evolution in complicated environments, we use a white light beam to experimentally explore the wavefront characteristics, because white light carries more spectral information. White light, as a composite light, contains a large number of wavelength components ranging from ~400 nm to ~750 nm, and thus demonstrates a varying energy transport efficiency. We first define a composite wavefront as that of beams from a broadband light source with sufficient wavelength components, and a spectral wavefront as that of beams having only a single wavelength or a very narrow wavelength region, for instance, the typical Gaussian wavefronts of common lasers. Sometimes we only want to obtain certain spectral wavefront information from the processed white light beams. We therefore propose a method of separating a white-light wavefront, the so-called composite wavefront, into spectral wavefronts.

According to basic beam transformation characteristics, light beams of different wavelengths passing through the same convex, or converging, microlens are focused onto different focal planes. Spectral light beams with different focal lengths corresponding to liquid crystal microlenses driven by the same voltage signal, such as the basic red, green, and blue components, should be correctly selected when calculating the slopes of the division wavefront maps, so as to represent the composite wavefronts exactly. Because the focal length of the green beams lies between those of the red and the blue, we can employ it to roughly estimate the slopes of the multi-spectral division wavefront map, or even the white-light division wavefront map, based on the fact that the focal length of the composite light beams is approximately equal to that of the central-wavelength, i.e., green, component; the wavefronts of the object can then be roughly constructed from the division wavefront maps of the selected light beams. To reasonably separate the white-light wavefront into spectral wavefronts at desired wavelength points, we need to determine the key weight coefficients of the spectral components, so that a relatively simple synthesis of the composite wavefront is realized just by integrating the red, green, and blue wavefronts. In the photosensitive architecture constructed by us, the photo-electronic signals are stimulated by the energy of every spectral component, including the typical red, green, and blue. Each spectral imaging channel of the photosensitive array has a characteristic sensitivity to the incident beams. We therefore utilize the characteristic sensitivity properties of the sensors to separate the acquired white-light wavefront.

Figure 9 shows the data selection for separating the object wavefront, which has been discretely quantified according to the spectral response curves [24] in the hardware operating instructions of the MVC14KSAC-GE6 camera. The spectral imaging channels of the sensors, for the red, green, and blue components, present peak values at wavelengths of ~600 nm, ~535 nm, and ~450 nm, respectively. The green beam with 532 nm wavelength is selected to build the green imaging channel parameter, expressed by

$$G_{532} = \frac{S_{G532}}{S_G}\, G_W,$$
where $S_G$ is the area between the green response curve and the x-axis from 400 nm to 750 nm, and $S_{G532}$ is the area of a thin green band covering the wavelength region from 530 nm to 532.5 nm, as indicated in Fig. 9. The red and blue imaging channels for separating the photosensitive signal data are formed by the same operation as for the green beams at 532 nm; the corresponding data for the blue and red beams are selected at central wavelengths of 473 nm and 671 nm, with the same bandwidth of 2.5 nm, respectively. Through the approaches above, the composite wavefront can be effectively separated at the wavelengths of 671 nm and 473 nm as well.
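The area ratio $S_{G532}/S_G$ can be computed from a digitized response curve. The sketch below is our illustration, assuming the curve is available as sampled (wavelength, response) pairs; trapezoidal integration on an interpolated grid is our choice, not necessarily the authors' procedure:

```python
import numpy as np

def band_weight(wavelengths, response, band_lo, band_hi, samples=4096):
    """Ratio of the area under a sampled spectral response curve inside
    [band_lo, band_hi] to the total area over the sampled range
    (e.g. S_G532 / S_G for the 530-532.5 nm green band)."""
    grid = np.linspace(wavelengths[0], wavelengths[-1], samples)
    resp = np.interp(grid, wavelengths, response)

    def trapz(y, x):
        # Trapezoidal rule, written out to stay NumPy-version independent
        return float(((y[1:] + y[:-1]) * 0.5 * np.diff(x)).sum())

    mask = (grid >= band_lo) & (grid <= band_hi)
    return trapz(resp[mask], grid[mask]) / trapz(resp, grid)
```

For a flat response over 400-750 nm, the 530-532.5 nm band gives a weight of roughly 2.5/350; a real response curve would weight the band by the sensor's sensitivity there.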

Fig. 9 Data selection for separating object wavefront based on the spectral response properties of the sensors used.

As shown in Fig. 10, two shiny bricks are used to obtain conventional intensity images when the DMPA works in the imaging mode. Each has a letter on its surface: the letter D, with a typical dark blue sheen, is on the left brick, and the letter E, with a typical red sheen, is on the right brick. Figure 11 shows the low-definition intensity image for constructing the white-light wavefront, acquired with the DMPA working in the wavefront measurement mode, together with the associated wavefront constructed by us. As shown, a low-definition intensity image is obtained first, and then the associated front-view wavefront is constructed. The small insets in the figure indicate the detailed beam distribution characteristics; they have been adjusted, the red frame to a brightness of 60% and a contrast of 75%, and the blue frame to a brightness of 62% and a contrast of 80%.

Fig. 10 Conventional intensity images of two bricks when the DMPA works in the imaging mode.

Fig. 11 Low-definition intensity image used to shape a white-light wavefront, and the associated front view of the constructed wavefront. The small insets indicate the detailed beam distribution characteristics and have been adjusted, the red frame to a brightness of 60% and a contrast of 75%, and the blue frame to a brightness of 62% and a contrast of 80%.

Figure 12 shows the low-definition intensity images for constructing the spectral wavefronts separated from the acquired object image, and the associated wavefronts constructed at the selected wavelengths of 671 nm, 532 nm, and 473 nm (from top to bottom), using the chosen wavefront separation weight coefficients. To evaluate the wavefront separation efficiency, we also introduce low-definition intensity images for shaping three spectral wavefronts of the same bricks, obtained by illuminating the bricks with the 671 nm, 532 nm, and 473 nm lasers, respectively. The minimum mean absolute differences of the separated wavefronts with respect to the corresponding directly measured spectral wavefronts are as follows: 0.0114 for the red, 0.0095 for the green, and 0.0071 for the blue. From the measurement and separation results, we can see that the spectral wavefronts separated by our method are very close to those measured directly under spectral laser illumination. This means that a spectral wavefront can also be acquired approximately by separating a white-light wavefront containing a large number of spectral components.

Fig. 12 Low definition white light images leading to forming the wavefronts and associated front view of wavefronts constructed, and the low definition spectral images originated from common laser illumination and corresponding front view of wavefronts constructed.

The approach above may point out a road toward easily acquiring a common (generally spectral) wavefront, or toward effectively constructing a composite wavefront by integrating several carefully selected and matched spectral wavefronts. It should be noted that the choice of the ratio, such as $S_{G532}/S_G$, for efficiently performing wavefront separation, or for the synthesis of real spectral wavefronts, is based on the characteristic quantum efficiency of the sensors used. If another type of sensor is used, its quantum efficiency will be different, and the wavefront separation weight coefficients should be varied according to its characteristic photosensitive performance. Therefore, the ratio should be selected or calculated correctly on the basis of the characteristic photosensitive response of the sensors actually used.

4.5 Wavefront mixing

Generally, objects are colorful in the visible range. A single-wavelength or spectral light beam may not sufficiently capture the lightwave information of the object. Measuring composite wavefronts of the object with composite light composed of many wavelength components therefore yields more complete structure, appearance, and radiation information about the object. In the experiments, the object is a model tree, as shown in Fig. 13. The crown of the tree is green, and its trunk is red. The results of performing dual-mode imaging detection of the model tree are shown in Fig. 14. The low-definition intensity images and associated wavefronts are acquired at 4.5 Vrms. As shown in Fig. 14(a), only the trunk appears at the red wavelength; the green and blue frames are both adjusted to a brightness of 70% and a contrast of 80%. As shown in Figs. 14(b) and 14(c), the trunk is almost invisible at the green and blue wavelengths. However, we can see the entire tree using white light, as shown in Fig. 14(d). In Fig. 14(b), the red frame is adjusted to a brightness of 76% and a contrast of 85%, and the blue frame to a brightness of 54% and a contrast of 65%. In Fig. 14(c), the red frame is adjusted to a brightness of 76% and a contrast of 85%, and the green frame to a brightness of 56% and a contrast of 65%. These facts tell us that white light should usually be employed to obtain the wavefront information of an object.


Fig. 13 The model tree used as the object.


Fig. 14 Low-definition intensity images acquired at 4.5 Vrms and associated wavefronts. (a) Low-definition intensity image of the red light, with small inserts showing the detailed beam distribution; the green and blue frames are each adjusted to a brightness of 70% and a contrast of 80%. (b) Low-definition intensity image of the green light; the red frame is adjusted to a brightness of 76% and a contrast of 85%, and the blue frame to a brightness of 54% and a contrast of 65%. (c) Low-definition intensity image of the blue light; the red frame is adjusted to a brightness of 76% and a contrast of 85%, and the green frame to a brightness of 56% and a contrast of 65%. (d) Low-definition intensity image of the white light. (e) The wavefront constructed from the low-definition intensity images in (a)-(d).


As noted, a single-wavelength or spectral wavefront is very useful, although the wavefront information it carries about the object in the visible range is incomplete. If we want to use single-wavelength or spectral wavefronts to represent the object's lightwave transport behavior in complicated circumstances, we need to mix several key spectral wavefronts over a certain wavelength range so as to form a mixed wavefront similar to the composite wavefront emerging from the object.

Suppose the mixed wavefront is denoted by

W_m = m × W_r + n × W_g + (1 − m − n) × W_b,
where m and n are the weight coefficients of the red and green wavefronts, respectively, each taking a value between 0 and 1. The ideal values of m and n are obtained when the mean absolute difference between the mixed wavefront and the white light wavefront reaches its minimum, i.e., when the mixed wavefront is most similar to that of white light. Figure 15 shows the method of mixing three spectral wavefronts into a mixed wavefront. Figure 16 shows the mean absolute difference of the mixed wavefront with respect to the white light wavefront when r = 0.8. The minimum occurs at m = 0.58 and n = 0.09; in other words, at these values the mixed wavefront is most similar to the white light wavefront used.
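The search for the ideal (m, n) can be sketched as a simple grid search that minimizes the mean absolute difference MAD = Σ|W_s(i,j) − r·W_t(i,j)|/(M×N). The wavefront maps below are synthetic stand-ins, with the white light wavefront built as an exact mixture so the optimum is known; the real inputs would be the maps reconstructed from the Shack-Hartmann spot displacements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-lenslet wavefront maps (stand-ins; the real inputs are
# the maps reconstructed from Shack-Hartmann spot displacements).
Wr = rng.random((16, 16))
Wg = rng.random((16, 16))
Wb = rng.random((16, 16))
# Build the "white light" wavefront as an exact mixture so the grid
# search has a known optimum at m = 0.58, n = 0.09.
W_white = 0.58 * Wr + 0.09 * Wg + 0.33 * Wb

def mad(Ws, Wt, r=1.0):
    # MAD = sum |Ws(i,j) - r * Wt(i,j)| / (M * N)
    return np.mean(np.abs(Ws - r * Wt))

best = (0.0, 0.0, np.inf)
for m in np.arange(0.0, 1.001, 0.01):
    for n in np.arange(0.0, 1.001 - m, 0.01):
        Wm = m * Wr + n * Wg + (1.0 - m - n) * Wb
        d = mad(Wm, W_white)
        if d < best[2]:
            best = (round(m, 2), round(n, 2), d)

print(best[0], best[1])  # -> 0.58 0.09
```

A 0.01 grid step matches the precision of the reported coefficients; a finer step or a constrained least-squares fit would refine the result at little extra cost.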


Fig. 15 Mixing of the spectral wavefronts and comparison of the mixed wavefront with the chosen white light wavefront.


Fig. 16 The mean absolute difference of the mixed wavefront with respect to the white light wavefront when r = 0.8.


Figure 17 demonstrates the mean absolute difference of different measured wavefronts with respect to the chosen white light wavefront. The red, green, and blue lines present the variation of the mean absolute difference of the red, green, and blue spectral wavefronts with respect to the white light wavefront, respectively. As shown, the minimum mean absolute difference of each of these single spectral wavefronts is around 0.1. The orange line presents the mean absolute difference of the mixed wavefront with respect to the white light wavefront. Its minimum is clearly much smaller than that of the red, green, or blue line, which means the mixed wavefront is closer to the white light wavefront than any single spectral wavefront used in the experiments. The cyan line corresponds to another mixed wavefront with random weight coefficients for the red, green, and blue components.
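The curves in Fig. 17 can be reproduced in outline by sweeping the scale factor r for each candidate wavefront and recording its MAD against the white light wavefront. Everything below (map sizes, correlation levels, r range) is an illustrative assumption rather than the paper's measured data:

```python
import numpy as np

rng = np.random.default_rng(1)
W_white = rng.random((16, 16))

def residual():
    return rng.random((16, 16))

# Assumed spectral wavefronts: each partially correlated with the white
# light wavefront plus an independent residual (illustrative only).
Wr = 0.7 * W_white + 0.3 * residual()
Wg = 0.6 * W_white + 0.4 * residual()
Wb = 0.6 * W_white + 0.4 * residual()
W_mix = 0.9 * W_white + 0.1 * residual()  # a well-matched mixture

def mad_curve(Ws, Wt, rs):
    # One MAD value per scale factor r, as plotted in Fig. 17
    return np.array([np.mean(np.abs(Ws - r * Wt)) for r in rs])

rs = np.arange(0.5, 1.51, 0.01)
minima = {name: mad_curve(W, W_white, rs).min()
          for name, W in [("red", Wr), ("green", Wg),
                          ("blue", Wb), ("mixed", W_mix)]}

# A well-matched mixture should attain a smaller minimum MAD than any
# single spectral wavefront, mirroring the orange line in Fig. 17.
print(minima["mixed"] < min(minima["red"], minima["green"], minima["blue"]))
```

Taking the minimum over r compensates for an overall amplitude mismatch, so each curve's minimum reflects only the shape difference between the two wavefronts.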


Fig. 17 The mean absolute difference of different wavefronts with respect to the white light wavefront.


5. Conclusion

In this paper, we present a kind of DMPA based on integrating a liquid crystal microlens array with a CMOS sensor to perform so-called dual-mode imaging detection. The DMPAs developed by us can be switched easily by applying or removing a voltage signal with a relatively low rms value over the liquid crystal microstructure. The experimental results show that both the conventional intensity images and the corresponding low-definition intensity images used to construct object wavefronts demonstrate relatively high imaging quality. The developed DMPAs work efficiently not only in the conventional intensity imaging mode but also in the mature Shack-Hartmann wavefront measurement mode. Because the white light wavefront, as a composite wavefront involving many spectral components, carries more optical information than a single spectral wavefront, it is more suitable for object wavefront measurement, either by separating the white light wavefront according to spectral weight coefficients, which can be obtained through the processes shown in this paper, or by fusing several spectral wavefronts, such as the typical red, green, and blue wavefronts, according to the methods discussed.

However, some problems remain to be solved. As shown in the experiments, the wavefront separation method assumes that the white light source has uniform intensity at all wavelength components in the visible range, whereas no such ideal source exists in practice. Moreover, the white light source illuminates a wide area, while the spectral lasers must be expanded and thus cover only a limited illumination range; the resulting intensity differences between the white light source and the spectral lasers will noticeably affect the wavefront measurement accuracy.

Acknowledgments

We would like to thank the Analytical and Testing Center of Huazhong University of Science and Technology for their valuable help. This work is supported by the National Natural Science Foundation of China (Nos. 61176052 and 61432007), the Fundamental Research Funds for the Central Universities (No. 2014CG008), the CAEP THz Science and Technology Foundation (No. CAEPTHZ201402), and the Special Research Fund for the Doctoral Program of Higher Education (No. 20130142110007).


