
Calibration methods for division-of-focal-plane polarimeters


Abstract

Division-of-focal-plane (DoFP) imaging polarimeters are useful instruments for measuring polarization information in a variety of applications. Recent advances in nanofabrication have enabled the practical manufacture of DoFP sensors for the visible spectrum. These sensors are made by integrating nanowire polarization filters directly with an imaging array, and fabrication-induced size variations of the nanowires can cause the optical properties of the filters to vary by up to 20% across the imaging array. If left unchecked, these variations introduce significant errors when reconstructing the polarization image. Calibration methods offer a means to correct these errors. This work evaluates a scalar and a matrix calibration method, both derived from a mathematical model of the polarimeter's behavior. The methods are evaluated quantitatively with an existing DoFP polarimeter under varying illumination intensity and angle of linear polarization.

© 2013 Optical Society of America

1. Introduction

Polarization imaging is the process of recording the polarization state of light in a scene. Typically this means recording the full or partial Stokes vectors of light across an image plane [1]. Applications of polarization imaging include remote sensing and general contrast enhancement [2–6]; the study of species that display or sense polarization [7–12]; and biomedical imaging applications [13–18]. Thus improvements to the quality of polarization imagers can have a tremendous impact on advancing the fields of remote sensing, marine biology, and biomedical imaging.

Polarization imagers generally work by: (1) modulating or splitting light into components that can be captured with conventional intensity-measuring image sensors and (2) reconstructing the original polarization image from the measured components. Popular modulation schemes include the division-of-time (DoT) polarimeter, where a filter or filters placed in front of the imager change over time [19, 20]; the division-of-amplitude (DoA) polarimeter, where a prism splits the image into multiple paths and each path has its own polarization optics and image plane [21–24]; and the division-of-focal-plane (DoFP) polarimeter, which splits the image by placing a repeating pattern of pixel-pitch-matched polarization filters directly onto the pixels of an imager [25–35].

DoFP polarimeters have several advantages over competing imaging polarimeter architectures. Most notably, they capture all of the components needed to reconstruct the incident polarization state simultaneously—which avoids the motion-blur inherent to DoT polarimeters. In addition, their monolithic design makes them more compact and robust than either DoT or DoA polarimeters; this makes them the ideal choice for field work.

However, DoFP sensors have several notable sources of error. Instantaneous field of view (IFOV) errors result from the incomplete reconstruction of the spatially modulated polarization image. Recent research has shown that Fourier-transform-based and interpolation-based reconstruction techniques can mitigate these errors [36–38].

A second source of error in DoFP polarimeters is fixed pattern noise (FPN), the spatial variation in the responses of the pixels across the imaging sensor to a uniform polarization signature (or target). Since a target with a uniform polarization signature is imaged across the entire sensor array, i.e. the spatial frequency of the stimulus target is zero, the IFOV errors are zero. Hence, the FPN error can be evaluated separately from the IFOV errors. The FPN is due to manufacturing variations across the focal plane in the photodetectors, in the read-out amplifiers within each pixel or at the chip level [39, 40], and in the polarization nanowire filters [41–43]. Techniques such as correlated double sampling (CDS) or difference double sampling (DDS) effectively correct the FPN components due to photodetector and amplifier variations [39, 40].

Correcting for FPN due to the nanowire filter variations has not, to the best of the authors' knowledge, been previously investigated. This source of error deserves special attention because of the nominal size of the nanowires. Even as nanotechnology matures, variations from the nominal values of the thickness (140 nm), width (70 nm) and pitch (140 nm) of the nanowires can easily reach 5 nm to 20 nm [41, 42]. These variations can have a major impact on the collective optical response of the group of nanowires comprising a single pixelated polarization filter [43]. Spatial variations of 20% in the optical response between polarization filters have been previously reported for a DoFP nanowire polarimeter [44]. Reducing the variations of the nanowires through more advanced nanofabrication techniques can lead to prohibitively expensive filters and imaging devices. Hence, a computational method for correcting optical variations between pixelated polarization filters due to variations at the nanoscale should be carefully explored and is the prime motivation for this work.

In this paper, we describe two calibration techniques tailored to mitigate variations in the optical response between pixelated polarization filters across an imaging array in order to improve the accuracy of the captured polarization information. The first calibration technique assumes that the optical properties of each pixel in the imaging array are independent of its neighbors, while the second calibration technique utilizes a small neighborhood of pixels in order to calibrate their collective optical responses. The rest of the paper is organized as follows. In Section 2, we describe our mathematical model for the division-of-focal-plane polarimeter, which is the basis for the two calibration methods. Section 3 describes in detail the two calibration methods used to minimize FPN. Section 4 presents detailed experimental measurements which are used to assess the accuracy of the two calibration methods; real-life images obtained from a DoFP polarimeter and corrected with the two calibration methods are also presented in this section. Concluding remarks are presented in Section 5.

2. DoFP polarimeter model

The DoFP polarimeter is an imaging sensor composed of polarization-sensitive pixels. Each polarization pixel consists of a polarization filter followed by a photodetector. The pitch of the pixelated polarization filters is matched to the pixel pitch of the imaging array, which is between 2 μm and 10 μm for typical CMOS or CCD image sensors. The pixels are tiled into small non-overlapping neighborhoods called “super-pixels.” Figure 1 shows a typical linear DoFP polarimeter capable of measuring the first three Stokes parameters. The pixels are grouped into 2 × 2 super-pixels, and each super-pixel has 0°, 45°, 90°, and 135° linear polarization filters [33].
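To make the super-pixel tiling concrete, the short Python/NumPy sketch below (as are all code sketches in this rewrite, it uses our own function and variable names, not the paper's) extracts the four polarization channels from a raw DoFP frame. The assumed 2 × 2 layout, with 0° and 45° filters on even rows and 90° and 135° filters on odd rows, is illustrative only; the filter arrangement of a particular sensor may differ.

```python
import numpy as np

def split_superpixels(raw):
    """Split a raw DoFP frame into its four polarization channels.

    Assumes a 2x2 mosaic with 0/45 deg filters on even rows and
    90/135 deg filters on odd rows; a real sensor's layout may differ.
    """
    i0   = raw[0::2, 0::2]   # 0 deg   filtered pixels
    i45  = raw[0::2, 1::2]   # 45 deg  filtered pixels
    i90  = raw[1::2, 0::2]   # 90 deg  filtered pixels
    i135 = raw[1::2, 1::2]   # 135 deg filtered pixels
    return i0, i45, i90, i135
```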

Fig. 1 Block diagram of a division of focal plane polarimeter. An array of pixel-pitch-matched polarization filters is deposited on the surface of a CMOS or CCD imaging array.

The optical behavior of each pixel is modeled as the composition of the pixel’s conversion function acting on the intensity of the light transmitted through the polarization filter. Using Mueller calculus, the light transmitted by the polarization filter is represented by the four-element Stokes vector S_f. The Stokes vector S_f is equal to the product of the filter’s 4 × 4 Mueller matrix M_f and the incident light’s Stokes vector S_in, as shown in Eq. (1).

$$ S_f = M_f S_{in}. \tag{1} $$

For example, the Mueller matrix for a linear polarization filter with electric field amplitude transmission coefficients px and py along the filter’s x and y axes, respectively, and rotated by an angle θ is shown in Eq. (2). The sine and cosine of θ in Eq. (2) are shortened to sθ and cθ, respectively. These parameters encapsulate all of the filter non-idealities due to manufacturing flaws in wire thickness, width, and pitch.

$$ M_f = \frac{1}{2}\begin{pmatrix} p_x^2+p_y^2 & (p_x^2-p_y^2)\,c_{2\theta} & (p_x^2-p_y^2)\,s_{2\theta} & 0 \\ (p_x^2-p_y^2)\,c_{2\theta} & (p_x^2+p_y^2)\,c_{2\theta}^2+2p_xp_y\,s_{2\theta}^2 & \tfrac{1}{2}(p_x-p_y)^2 s_{4\theta} & 0 \\ (p_x^2-p_y^2)\,s_{2\theta} & \tfrac{1}{2}(p_x-p_y)^2 s_{4\theta} & 2p_xp_y\,c_{2\theta}^2+(p_x^2+p_y^2)\,s_{2\theta}^2 & 0 \\ 0 & 0 & 0 & 2p_xp_y \end{pmatrix}. \tag{2} $$
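As a numerical cross-check of Eq. (2), the rotated diattenuator matrix can also be built as R(θ) M₀ R(−θ), where M₀ is the Mueller matrix of a diattenuator aligned with the x axis and R is the Mueller rotation matrix; expanding this product reproduces the closed form above. A sketch of that construction, under that stated convention:

```python
import numpy as np

def polarizer_mueller(px, py, theta):
    """Mueller matrix of a linear diattenuator with amplitude transmissions
    px and py along its own axes, rotated by theta radians (Eq. (2))."""
    a, b, c = px**2 + py**2, px**2 - py**2, 2.0 * px * py
    m0 = 0.5 * np.array([[a, b, 0, 0],
                         [b, a, 0, 0],
                         [0, 0, c, 0],
                         [0, 0, 0, c]])        # diattenuator along the x axis
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    rot = np.array([[1,  0,   0, 0],
                    [0, c2, -s2, 0],
                    [0, s2,  c2, 0],
                    [0,  0,   0, 1]])          # Mueller rotation matrix R(theta)
    return rot @ m0 @ rot.T                    # rot.T equals R(-theta)
```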

The intensity component of the filtered Stokes vector is then passed to the conversion function of the underlying pixel. The conversion function represents the conversion of photons to real values (i.e., digital values) by the image sensor. Since we are interested in the FPN components introduced by the nanowire polarization filters, we assume that the conversion function is linear or has already been linearized, and has no temporal or quantization noise. The resulting pixel model is shown in Eq. (3).

$$ I_p = g\,(1\;0\;0\;0)\,S_f + d = g\,(1\;0\;0\;0)\,M_f S_{in} + d = A_f S_{in} + d. \tag{3} $$

In Eq. (3), I_p is the real-valued intensity measured by the pixel, and the g and d parameters are the gain and dark offset of the pixel, respectively. The row vector (1 0 0 0) selects the intensity component of the incident light. Equation (3) is further simplified by combining the pixel’s gain and the first row of the filter’s Mueller matrix into the 1 × 4 row vector A_f, which is the polarization pixel’s analysis vector.
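A minimal numerical sketch of the pixel model in Eq. (3), reusing the Mueller matrix helper above (illustrative names):

```python
import numpy as np

def pixel_response(mueller, gain, dark, s_in):
    """Eq. (3): I_p = g (1 0 0 0) M_f S_in + d = A_f S_in + d."""
    a_f = gain * mueller[0, :]                 # analysis vector: gain times first Mueller row
    return float(a_f @ np.asarray(s_in)) + dark
```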

In a super-pixel configuration, the responses of the n constituent pixels are stacked into an n-element column vector I_sp, as shown in Eq. (4).

$$ I_{sp} = \begin{pmatrix} A_{f,1} S_{in} + d_1 \\ \vdots \\ A_{f,n} S_{in} + d_n \end{pmatrix} = \begin{pmatrix} A_{f,1} \\ \vdots \\ A_{f,n} \end{pmatrix} S_{in} + \begin{pmatrix} d_1 \\ \vdots \\ d_n \end{pmatrix} = A\,S_{in} + d. \tag{4} $$

The individual analysis vectors and dark offsets for each of the pixels in the super-pixel are combined into an n × 4 analysis matrix A and an n-element dark-offset vector d. This model assumes that either the incident illumination is uniform across the super-pixel or that all of the constituent pixels are co-located.
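The super-pixel model of Eq. (4) simply stacks the constituent analysis vectors and dark offsets; a sketch, again with names of our own choosing:

```python
import numpy as np

def superpixel_response(analysis_rows, darks, s_in):
    """Eq. (4): I_sp = A S_in + d for a super-pixel of n polarization pixels."""
    A = np.vstack(analysis_rows)            # n x 4 analysis matrix
    d = np.asarray(darks, dtype=float)      # n-element dark-offset vector
    return A @ np.asarray(s_in) + d
```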

3. Calibration techniques

3.1 Derivations

The purpose of a polarimeter calibration technique is to transform the non-ideal response of the pixelated filters into an ideal response, independently of the incident Stokes vector. In other words, an effective calibration technique is a function that transforms polarimeter measurements so that they are as close to the ideal as possible, without reconstructing the Sin vector. Equations (5) and (6) express this concept in terms of finding a calibration function, Calp for the single pixel model and Calsp for the super-pixel model, that minimizes the square error between the calibrated response and the ideal response.

$$ \min_{Cal_p}\;\lVert Cal_p(I_p) - I_{p,ideal}\rVert^2; \tag{5} $$
$$ \min_{Cal_{sp}}\;\lVert Cal_{sp}(I_{sp}) - I_{sp,ideal}\rVert^2. \tag{6} $$

In order to perform these minimizations, the ideal responses of the polarization pixel and the calibration functions must be specified. In general, the ideal dark offset d for all pixels is zero. For pixels with linear polarization filters, the ideal analysis vector Aideal is the first row of the Mueller matrix given in Eq. (2), with parameters px=1, py=0, and θ equal to the rotation of the transmission axis of the filter. Equations (7) and (8) show the ideal pixel and super-pixel responses, respectively.

$$ I_{p,ideal} = A_{ideal} S_{in}; \tag{7} $$
$$ I_{sp,ideal} = A_{ideal} S_{in}. \tag{8} $$
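For a pixel whose linear filter has its transmission axis at angle θ, the ideal analysis vector is therefore the first row of Eq. (2) with p_x = 1 and p_y = 0, i.e. A_ideal = ½(1, cos 2θ, sin 2θ, 0). A small helper (our own naming):

```python
import numpy as np

def ideal_analysis_vector(theta):
    """First row of Eq. (2) with px = 1, py = 0: A_ideal = 0.5*(1, cos2t, sin2t, 0)."""
    return 0.5 * np.array([1.0, np.cos(2 * theta), np.sin(2 * theta), 0.0])
```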

Appropriate forms for Cal_p and Cal_sp are found as follows. Since both the ideal and the non-ideal responses of the pixelated filters are linear, a linear transformation is used to convert one response into the other. The two functions are shown in Eqs. (9) and (10).

$$ Cal_p(I_p) = g_c\,(I_p - d_c); \tag{9} $$
$$ Cal_{sp}(I_{sp}) = G_c\,(I_{sp} - d_c). \tag{10} $$

In Eq. (9), the optical response of a single pixel, I_p, is first compensated for the dark offset by subtracting d_c, followed by the application of a scalar gain, g_c. The super-pixel calibration presented in Eq. (10) uses the vector d_c to compensate the dark offsets and the matrix G_c to correct the gains of all pixels in the super-pixel.
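In code, the two calibration functions of Eqs. (9) and (10) are one-liners; the sketch below assumes the gains and offsets have already been determined as described in Sections 3.2 and 3.3:

```python
import numpy as np

def cal_p(i_p, g_c, d_c):
    """Single-pixel calibration, Eq. (9): scalar gain after dark-offset removal."""
    return g_c * (i_p - d_c)

def cal_sp(i_sp, G_c, d_c):
    """Super-pixel calibration, Eq. (10): matrix gain after dark-offset removal."""
    return G_c @ (np.asarray(i_sp) - np.asarray(d_c))
```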

Equations (5) and (6) can be expanded using Eqs. (3), (4), and (7) through (10), resulting in Eqs. (11) and (12) respectively.

$$ \min_{g_c,\,d_c}\;\lVert g_c\,(A_f S_{in} + d - d_c) - A_{ideal} S_{in}\rVert^2; \tag{11} $$
$$ \min_{G_c,\,d_c}\;\lVert G_c\,(A S_{in} + d - d_c) - A_{ideal} S_{in}\rVert^2. \tag{12} $$

Both minimizations are convex and can be completed by taking the partial derivatives with respect to the calibration gains and calibration dark offsets, setting them to zero, and solving for the parameters. Sections 3.2 and 3.3 describe such solutions to Eqs. (11) and (12). It is also possible to solve the minimizations by supplying various known Sin values and the corresponding Ip or Isp values to an ordinary least squares solver. The apparatus described in section 4 could generate data for that purpose.

3.2 Single-pixel solution

A solution for the single-pixel case is presented in Eq. (13).

$$ d_c = d, \qquad g_c = \frac{A_{ideal} S_{in}}{A_f S_{in}}. \tag{13} $$

The calibration dark offset is set to the pixel’s dark offset, d, and the calibration gain is the ratio of the two projections. When substituted back into Eq. (9), we see that the dark offsets cancel and the calibration gain scales the projection of S_in onto A_f to the length of the projection of S_in onto A_ideal. This results in an ideal single-pixel response, as seen in Eq. (14).

$$ Cal_p(I_p) = g_c\,(I_p - d_c) = \frac{A_{ideal} S_{in}}{A_f S_{in}}\,(A_f S_{in} + d - d) = A_{ideal} S_{in} = I_{p,ideal}. \tag{14} $$

The dependence of g_c on S_in in Eq. (13) is a problem: a single value of g_c will not be valid for all values of S_in. One solution to this problem is to assume that A_f is a scalar multiple of A_ideal, or in other words, that they both point in the same direction in Stokes space. This is equivalent to assuming that all of the filter parameters are ideal except for the transmission coefficient. Under this assumption (A_ideal S_in)/(A_f S_in) is constant, and Eq. (13) simplifies to the expressions of Eq. (15), which can be used in practice.

$$ d_c = d, \qquad g_c = \frac{\lVert A_{ideal}\rVert}{\lVert A_f\rVert}. \tag{15} $$

Equation (16) is obtained by substituting the results from Eq. (15) into Eq. (9). The calibration dark offset still cancels the pixel’s dark offset, but the gain simply rescales the Af vector to the same length as the Aideal vector. If they point in different directions, this method will not be able to completely calibrate the response.

$$ Cal_p(I_p) = g_c\,(I_p - d_c) = \frac{\lVert A_{ideal}\rVert}{\lVert A_f\rVert}\,(A_f S_{in} + d - d) = \frac{\lVert A_{ideal}\rVert}{\lVert A_f\rVert}\,A_f S_{in} \approx I_{p,ideal}. \tag{16} $$
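A sketch of the single-pixel parameter computation of Eq. (15), with the gain taken as the ratio of the norms of A_ideal and A_f (illustrative names):

```python
import numpy as np

def single_pixel_params(a_f, a_ideal, dark):
    """Eq. (15): d_c = d and g_c = ||A_ideal|| / ||A_f||."""
    g_c = np.linalg.norm(a_ideal) / np.linalg.norm(a_f)
    return g_c, dark
```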

3.3 Super-pixel solution

A solution for the super-pixel case is presented in Eq. (17).

$$ d_c = d, \qquad G_c = A_{ideal} A^{+}. \tag{17} $$

Here A^+ indicates the pseudo-inverse of A, which is computed such that the value of G_c in Eq. (17) satisfies G_c A = A_ideal. As long as the pseudo-inverse of A exists, G_c will transform all of its constituent A_f vectors, by scaling and rotating, exactly into those of A_ideal. Equation (18) is obtained by substituting the results from Eq. (17) into Eq. (10). Equation (18) shows that the super-pixel based approach perfectly calibrates the response, as long as the model’s assumptions hold.

$$ Cal_{sp}(I_{sp}) = G_c\,(I_{sp} - d_c) = A_{ideal} A^{+}(A S_{in} + d - d) = A_{ideal} S_{in} = I_{sp,ideal}. \tag{18} $$
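The super-pixel parameters of Eq. (17) follow directly from the Moore–Penrose pseudo-inverse. In the sketch below (our own names), A and A_ideal stack the measured and ideal analysis vectors of one super-pixel row by row:

```python
import numpy as np

def super_pixel_params(A, A_ideal, darks):
    """Eq. (17): d_c = d and G_c = A_ideal A^+ (Moore-Penrose pseudo-inverse)."""
    G_c = np.asarray(A_ideal) @ np.linalg.pinv(np.asarray(A))   # n x n gain matrix
    return G_c, np.asarray(darks, dtype=float)
```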

4. Evaluation with visible-spectrum linear DoFP polarimeter

4.1 Experimental setup

The two calibration functions presented in Eqs. (9) and (10) are evaluated on data collected from the apparatus shown in Fig. 2. A Sylvania EHJ64655HLX, 250 W, tungsten-halogen bulb provides light for the system. The light passes through Edmund Optics heat-absorbing glass to block unwanted IR components, then optionally through one of three narrow-band spectral filters (Thorlabs FB450-10, Newport 10LF10-515, or Thorlabs FB600-10), which pass 450, 515, and 600 nm light, respectively. An adjustable shutter controls the amount of light that passes into a 4-inch integrating sphere, which produces nominally uniform unpolarized light at its outputs. A Thorlabs S120VC calibrated photodiode placed at one output port of the integrating sphere measures relative light intensities. Light from the other output port passes through a Newport 20LP-VIS-B linear polarizer mounted on a motorized rotation stage, and finally enters the visible-spectrum, linear, DoFP polarimeter described in [32, 44]. The apparatus generates fully linearly polarized light with arbitrary intensity and polarization angle. It can be switched between “white” light directly from the lamp and any of the narrow-band spectra provided by the spectral filters. Since the polarimeter being evaluated only measures linear polarization, there is no need for circular polarization optics. The capability to control the degree of linear polarization will be added in future work.

Fig. 2 Apparatus for evaluating calibration techniques.

Data were generated with unfiltered, 450 nm, 515 nm, and 650 nm light. For each spectrum, 100 images at 6 different intensities and 36 polarization angles were collected from a 300 × 300 pixel (2.2 mm²) sub-region of the polarimeter. The small sub-region was selected to maximize the uniformity of the incident light and to limit the amount of data collected. The coefficient of variation of a non-polarimetric image taken over the same area was 0.0106, which contributes to the final reconstruction errors. Each intensity and polarization angle was sampled 100 times to reduce the effects of temporal noise on the final results. The 6 intensities followed a roughly exponential sequence based on the dynamic range of the polarimeter. The maximum intensity was set as high as possible without saturating any pixels at any angle of the polarizer. The remaining intensities were set at 50%, 25%, 10%, 5%, and 2.5% of the maximum intensity for that wavelength. This procedure minimized the effects of the wavelength dependence of the photodiode’s quantum efficiency. The 36 polarization angles were uniformly distributed every 5° from 0° to 180°, which covers the full range of linear polarization angles. The output of the integrating sphere was 3% linearly polarized, which is easily compensated for as shown in the following section. Only the images taken with white (unfiltered) light at polarization angles every 20° were used as training data to determine the calibration function parameters. The remainder of the data was used for testing the performance of the functions.

4.2 Determining model and calibration parameters

The optimal gains and offsets for the single-pixel and super-pixel calibration functions, Eqs. (15) and (17) respectively, are computed from the analysis vector, Af, and dark offset, d, for each pixel. These parameters can be determined from the m training data samples collected for each pixel as shown in Eqs. (19) and (20). The values for each Sin,i must include all of the polarization effects of the apparatus, including the polarization of the output of the integrating sphere.

$$ \begin{pmatrix} I_1 & \cdots & I_m \end{pmatrix} = \begin{pmatrix} A_f & d \end{pmatrix} \begin{pmatrix} S_{in,1} & \cdots & S_{in,m} \\ 1 & \cdots & 1 \end{pmatrix}; \tag{19} $$
$$ \begin{pmatrix} A_f & d \end{pmatrix} = \begin{pmatrix} I_1 & \cdots & I_m \end{pmatrix} \begin{pmatrix} S_{in,1} & \cdots & S_{in,m} \\ 1 & \cdots & 1 \end{pmatrix}^{+}. \tag{20} $$

Equation (20) was evaluated for each pixel using a least-squares solver. The coefficients of determination, R2, for all of the pixels are above 99.73% and have a median of 99.93%. This indicates that the model explains most of the variation in the training data. The large number of samples essentially guarantees that the results are statistically significant.
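A minimal per-pixel least-squares fit corresponding to Eq. (20) might look as follows; the measured intensities and the known incident Stokes vectors (including the apparatus polarization effects) are assumed to be available as arrays, and the names are ours:

```python
import numpy as np

def fit_pixel_model(intensities, stokes_in):
    """Least-squares solution of Eq. (20) for one pixel.

    intensities: shape (m,)   measured values I_1..I_m
    stokes_in:   shape (m, 4) known incident Stokes vectors S_in,1..S_in,m
    Returns the 4-element analysis vector A_f and the scalar dark offset d.
    """
    design = np.hstack([stokes_in, np.ones((len(intensities), 1))])  # append the constant 1
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(intensities), rcond=None)
    return coeffs[:4], coeffs[4]
```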

The pixel dark offsets are summarized in Fig. 3. The dark offsets are small compared to the dynamic range of the polarimeter (maximum digital value of 4095), but predominantly negative. This is not a problem, but indicates that the dark offsets are being over-corrected within the polarimeter itself. The dark offsets are set by the camera manufacturer and cannot be reprogrammed.

Fig. 3 Histogram of pixel dark offsets. The digital value range for each pixel in the polarimeter is between 0 and 4095.

Figure 4 displays the measured pixel analysis vectors, A_f = (A_0 A_1 A_2 A_3). Since these measurements are from pixels with linear polarization filters, A_3 is always zero and is not included in the figure. The spatial variation of the filter transmissions is about 20% and can be attributed to the variations in the thickness and width of the aluminum nanowires comprising the pixelated polarization filters. The measurements show a constant angular offset of approximately 5° from the ideal, which is most likely due to alignment errors during the interference lithography fabrication step of the nanowire polarization filters [41]. Most of the filters have diattenuations of about 0.9, which corresponds to an extinction ratio of about 26 dB. This is less than the values reported for the polarimeter in [32] and is attributed to the increased optical cross-talk due to the lack of collimation in this work’s optical apparatus. It is worth noting that any cross-talk effects are measured as part of the pixel model parameters. However, this means that the pixel parameters depend on the incident light beam’s F-number, and the parameters must be re-measured for each F-number that the polarimeter uses.

Fig. 4 Analysis vectors for all pixels in the imaging array. Diamonds indicate corresponding ideal values for each colored group: red is 0°, blue is 45°, green is 90° and purple is 135°. The ratios of A2/A0 versus A1/A0 for each pixel are presented in the left sub-plot, where the radius corresponds to a filter’s diattenuation and the polar angle corresponds to its orientation. The corresponding values of A0, which represent the filters’ transmission coefficients along the x-axis, are plotted in the right sub-plot.

Since the analysis vector, Af, and dark offset, d, are determined for each pixel, computing the single-pixel and super-pixel calibration function parameters requires solving Eqs. (15) and (17) respectively. In order to illustrate the capabilities of the two calibration functions, the products of the single-pixel and super-pixel calibration gains with the analysis vectors are shown in Figs. 5 and 6, respectively. The single-pixel calibration normalizes the length of each pixel’s Af to that of the corresponding Aideal. This results in a drastic decrease in transmission variation to ~2%, but does not correct any errors due to diattenuation or orientation (see Fig. 5). On the other hand, the super-pixel calibration completely transforms the analysis vectors to the ideal vector and corrects for variations in transmission, diattenuation and orientation between individual pixelated polarization filters across the imaging array. The transmission variations between all pixels in the imaging array are less than 0.1% after the super-pixel calibration as demonstrated in Fig. 6.

Fig. 5 Pixel analysis vectors corrected with the single-pixel calibration method. The lengths of the analysis vectors have been normalized to the ideal ones, but the directions have not been corrected. The variation in transmission between all pixels in the imaging array is ~2%.

Fig. 6 Pixel analysis vectors corrected with the super-pixel calibration method. The analysis vectors are transformed completely to the ideal analysis vectors. The variation in transmission between all pixels in the imaging array is less than 0.1%.

4.3 Test results

The difference between the single- and super-pixel calibrations is also evident when the calibration functions are applied to the test data. Figures 7 and 8 show histograms of the optical responses for the uncalibrated, single-pixel-calibrated, and super-pixel-calibrated cases. The polarimeter is illuminated with linearly polarized white light at an incident polarization angle of 15°. In Fig. 7, the left sub-plot presents the histogram of the responses of the 0°-oriented pixels before and after the two calibration methods are applied. The right sub-plot presents the uncalibrated responses of all pixels in the imaging array grouped by filter orientation, i.e. 0, 45, 90 and 135 degrees. The FPN (i.e. spatial variation) of the uncalibrated pixels with 0° filters, computed as the ratio of the standard deviation to the mean value, is 11.6%. The FPN of the CCD imaging array without the polarization filters is 0.5%, which was measured before depositing the nanowire polarization filters on the surface of the sensor.
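For reference, the FPN figure quoted here is simply the coefficient of variation of a group of pixels viewing the uniform target; a one-line sketch:

```python
import numpy as np

def fixed_pattern_noise(pixel_values):
    """FPN of a pixel group under a uniform stimulus: standard deviation / mean."""
    vals = np.asarray(pixel_values, dtype=float)
    return vals.std() / vals.mean()
```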

Fig. 7 Pixel responses with white light at 100% intensity level and linearly polarized at 15°. Left: The response of the 0°-oriented pixels with and without calibration. Right: The response of all pixels in the imaging array without calibration.

Fig. 8 Pixel responses with white light at 100% intensity level and linearly polarized at 15°. Left: The response of all pixels in the imaging array after single-pixel calibration. Right: The response of all pixels in the imaging array after super-pixel calibration.

The single-pixel and super-pixel calibrations reduce the FPN of the 0°-oriented pixels to 0.15% and 0.11%, respectively (see Fig. 7). The large reduction in spatial variation across all pixels in the imaging array after the two calibration methods are applied is evident from Fig. 8. The variations in the spatial response for the four groups of pixels are reduced from ~11% to ~0.1% by the two calibration methods. The super-pixel calibration method also adjusts the transmission of the filters to their nominal values. This is critical for minimizing the error in the computed Stokes parameters, degree of linear polarization, and angle of polarization.

Figures 9 and 10 examine the responses after the two calibration methods, in addition to the uncalibrated pixel responses, when the polarimeter is illuminated with linearly polarized white light and the angle of linear polarization is swept from 0° to 180°.

Fig. 9 Pixel responses with white light, 100% intensity, linearly polarized at angle θ. Left: uncalibrated. Right: single-pixel calibrated. Error bars are at ± 1 standard deviation.

Fig. 10 Super-pixel calibrated response with white light, 100% intensity, linearly polarized at angle θ. Error bars are at ± 1 standard deviation.

The uncalibrated responses follow Malus’ law, but the amplitudes of the squared cosines vary widely between the four pixelated polarization filters. There is a constant offset from zero, and the peaks do not occur at their nominal angles. Furthermore, the spatial variation across the imaging array for each incident angle is relatively high, as demonstrated by the histogram plots in Fig. 7. The single-pixel calibration makes the amplitudes uniform between the four filters, but does not correct the problem that the peak values for individual filters do not occur at the correct angles. The super-pixel calibration both makes the amplitudes of the four pixel responses uniform and aligns the maximum response of each pixel to the appropriate angle. This re-alignment of the sinusoids, so that they exhibit maximum values at the nominal angles, is critical for the accuracy of the reconstructed angle and degree of linear polarization.

Figures 11, 12, and 13 show the RMS reconstruction error of the incident intensity, degree, and angle of polarization, respectively, as the incident angle of polarization and intensity are swept through their ranges.
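The reconstruction step behind these error figures is the standard one for a 0°/45°/90°/135° linear polarimeter. The paper does not spell it out, but a sketch consistent with the ideal analysis vectors is:

```python
import numpy as np

def reconstruct(i0, i45, i90, i135):
    """Linear Stokes parameters, DoLP and AoP from a calibrated super-pixel."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.hypot(s1, s2) / s0          # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)        # angle of polarization, radians
    return s0, s1, s2, dolp, aop
```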

Fig. 11 RMS error of the reconstructed intensity of the incident light, i.e. the S0 parameter, as a function of (left) the incident polarization angle, θ, and (right) the intensity.

Fig. 12 RMS error of the reconstructed DoLP as the polarization angle, θ, and the intensity, S0, vary.

Fig. 13 RMS error of the reconstructed polarization angle, θ, as θ and the intensity, S0, vary.

The reconstruction errors for the uncalibrated pixels’ responses (in terms of S0, DoLP and AoP) show a large dependence on the incident angle of polarization. The maximum RMS error for DoLP at maximum and minimum illumination is ~20% and ~35% respectively. This is a result of the mismatched amplitudes of the four super-pixel responses as indicated in Fig. 9.

The single-pixel calibration method removes the RMS reconstruction error dependence on the incident polarization angle and the reconstruction error is constant for both light intensities. The maximum RMS error after employing single-pixel calibration for DoLP at maximum and minimum illumination is ~10% and ~32% respectively.

The super-pixel calibration method further reduces the RMS reconstruction error; for light intensities at 100% of the dynamic range, the error is decreased by a factor of ~10 compared to the single-pixel calibration method. The maximum RMS error after employing super-pixel calibration for DoLP at maximum and minimum illumination is ~0.5% and ~26%, respectively. The only deviation from this behavior is the reconstruction of the total intensity, i.e. S0, at low light levels: in this case the errors after employing either the single-pixel or the super-pixel calibration are approximately equal.

The RMS reconstruction errors do not reach zero for several reasons. The non-uniformity of approximately 1% in the flat-field produced by the apparatus limits the accuracy of the pixel model parameter measurements, which in turn produce errors in the calibration parameters. Additionally, the image sensor’s specifications indicate a maximum non-linearity of 2% in pixel photo-responses, which is not included in our model [45]. Finally, we have not included any noise sources in the model—both photon shot noise and the image sensor’s readout noise cause significant reductions in SNR at low light intensities. Using approximate calculations based on figures from the image sensor’s specifications, the photon shot noise accounts for about 84% of the noise power and readout noise for about 16% of the noise power at 10% illumination [45]. A thorough noise analysis and error propagation would be required to determine how much each of these unaddressed error sources contribute to the final reconstruction errors.

Figure 14 shows the RMS reconstruction error for the single-pixel (left sub-plot) and super-pixel (right sub-plot) calibrations for three single-wavelength test data sets. Since the transmission coefficients, px and py, for the orthogonal electric field components in Eq. (2) and the quantum efficiency of the image sensor are dependent on wavelength, the RMS errors are a function of wavelength.

Fig. 14 RMS error of the reconstructed intensity, S0, as the incident intensity on the imaging sensor is varied for three different wavelengths. The left sub-plot presents the RMS error for the single-pixel calibration method and the right sub-plot presents the RMS error for the super-pixel calibration method.

Since the extinction ratios are around 10 at 450 nm, 30 at 550 nm and 38 at 650 nm, the RMS error is highest at 450 nm, at around 6% for light intensities above 10% of the imager’s dynamic range. The RMS error for the green and red spectral bands is around 4% for the same intensity levels. Although the analysis vectors for each pixel in the imaging array were obtained with broad-band white light, the RMS errors for the reconstructed intensity, i.e. S0, are similar across the entire visible spectrum. Similar results were obtained for the RMS errors of the angle and degree of linear polarization and are not shown for brevity.

4.4 Calibration on real life images from division of focal plane polarimeter

Real-life images obtained from a division of focal plane polarimeter on a rainy day are presented in Fig. 15. The first column of images presents the intensity, S0, the second column presents the degree of linear polarization, and the third column presents the angle of polarization. False color is used to depict the degree of linear polarization, where blue represents a low degree of linear polarization and red represents a high degree of linear polarization.

Fig. 15 Real-life images obtained from a division of focal plane polarimeter. The first column of images presents the intensity, S0, the second column presents the degree of linear polarization, and the third column presents the angle of polarization. Uncalibrated images are presented in the first row, single-pixel-calibrated images in the second row, and super-pixel-calibrated images in the third row.

Uncalibrated images are presented in the first row of Fig. 15, and they contain large deviations from the expected values. For example, the angle of polarization for the road is expected to be zero degrees because the surface is horizontal and the incident illumination is unpolarized due to the cloudy/rainy weather. The angle of polarization for the road obtained from the uncalibrated image is around 165 degrees. The degree of linear polarization image has a pronounced gradient in the ~45° orientation, and light blue lines can be observed throughout the image. This is because the 45° pixels had higher transmission values, as well as higher spatial variations, compared to the other three pixel orientations. The forest in the background is barely visible in either the angle or the degree of linear polarization image.

The images in the second row are obtained using the single-pixel calibration method. In this set of images, the uniformity of the angle and degree of linear polarization images is higher compared to the uncalibrated images. For example, the polarization signatures across the road are more uniform than in the first row of images. Nevertheless, the angle of polarization for the road is around 15 degrees, which is incorrect. The single-pixel calibration does not correct the individual pixels’ responses to their nominal orientations, which leads to large errors in the angle and degree of linear polarization.

The images in the third row are obtained using the super-pixel calibration method. In these images, the angle of polarization for the road is zero, as expected, and the uniformity of the angle of polarization across the road is further improved. Due to the curved shape of the incoming car’s windshield, the angle of polarization image has a gradient across it. This gradient is more pronounced in the super-pixel-calibrated image than in the single-pixel-calibrated image.

5. Summary

In this paper, we have presented two calibration methods for division-of-focal-plane polarimeters. Typical division-of-focal-plane polarimeters for the visible spectrum employ nanowires in order to construct linear polarization filters. Mismatches in the size of the nanowires will lead to optical variations at the macro scale and we outline two calibration methods which mitigate these effects. Both methods were developed from the same linear model for polarization pixels, but one treats each pixel independently, and the other treats super-pixel groups together. We showed that the super-pixel approach is mathematically more powerful than the single-pixel approach and can correct both the typical photodetector gain and offset non-idealities in addition to polarization sensitive flaws such as non-ideal filter orientations and non-ideal filter diattenuation coefficients. The single-pixel approach can only correct for non-ideal gains and offsets.

The measurements of our visible-spectrum linear DoFP polarimeter show that the majority of the non-uniformity between pixels is in their gains and offsets, but a significant amount of variation occurs in model parameters that the single-pixel approach cannot correct, including a constant rotational offset and moderate variations in filter diattenuation. Thus we have shown that calibrating each pixel independently reduces DoLP reconstruction errors from 12% to 10% for moderate light levels, while calibrating each super-pixel as a unit reduces the RMSE to approximately 1%. Similar reductions in error occur for reconstructing the intensity and AoP images. These figures indicate that the super-pixel calibration method is worth the extra computational effort, but there are still some unaddressed sources of error: the image sensor’s non-linear response; temporal noise, including photon shot noise and readout noise; and the non-uniformities in the flat field produced by the calibration apparatus.

Finally, we showed that though the calibration parameters were determined using a tungsten-halogen lamp for illumination with only an IR blocking filter in place, they performed well across the visible spectral range of the polarimeter. It is also worth noting that the optical properties of the polarimeter are stable enough that the same calibration parameters have been used with no measurable difference for about two years during the development of this work. The improvements in the quality of real-life images obtained from a division of focal plane polarimeter for the visible spectrum after applying the two calibration methods are also demonstrated in this paper.

Acknowledgments

This work was supported by National Science Foundation grant number OCE-1130897, and Air Force Office of Scientific Research grant numbers FA9550-10-1-0121 and FA9550-12-1-0321.

References and links

1. D. H. Goldstein, Polarized Light, 3rd ed. (CRC Press, 2011).
2. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, "Review of passive imaging polarimetry for remote sensing applications," Appl. Opt. 45(22), 5453–5469 (2006).
3. S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind Haze Separation," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2006), pp. 1984–1991.
4. T. Treibitz and Y. Y. Schechner, "Active Polarization Descattering," IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).
5. J. L. Deuzé, F. M. Bréon, C. Devaux, P. Goloub, M. Herman, B. Lafrance, F. Maignan, A. Marchand, F. Nadal, G. Perry, and D. Tanré, "Remote sensing of aerosols over land surfaces from POLDER-ADEOS-1 polarized measurements," J. Geophys. Res. D: Atmospheres 106(D5), 4913–4926 (2001).
6. E. Puttonen, J. Suomalainen, T. Hakala, and J. Peltoniemi, "Measurement of Reflectance Properties of Asphalt Surfaces and Their Usability as Reference Targets for Aerial Photos," IEEE Trans. Geosci. Remote Sens. 47(7), 2330–2339 (2009).
7. D. H. Goldstein, "Polarization properties of Scarabaeidae," Appl. Opt. 45(30), 7944–7950 (2006).
8. P. Brady and M. Cummings, "Differential Response to Circularly Polarized Light by the Jewel Scarab Beetle Chrysina gloriosa," Am. Nat. 175(5), 614–620 (2010).
9. T. W. Cronin, N. Shashar, R. L. Caldwell, J. Marshall, A. G. Cheroske, and T.-H. Chiou, "Polarization Vision and Its Role in Biological Signaling," Integr. Comp. Biol. 43(4), 549–558 (2003).
10. N. Shashar, R. Hagan, J. G. Boal, and R. T. Hanlon, "Cuttlefish use polarization sensitivity in predation on silvery fish," Vision Res. 40(1), 71–75 (2000).
11. A. Sweeney, C. Jiggins, and S. Johnsen, "Insect communication: Polarized light as a butterfly mating signal," Nature 423(6935), 31–32 (2003).
12. G. Horváth and D. Varjú, Polarized Light in Animal Vision: Polarization Patterns in Nature (Springer, 2004).
13. C. Paddock, T. Youngs, E. Eriksen, and R. Boyce, "Validation of wall thickness estimates obtained with polarized light microscopy using multiple fluorochrome labels: correlation with erosion depth estimates obtained by lamellar counting," Bone 16(3), 381–383 (1995).
14. P. B. Canham, H. M. Finlay, J. G. Dixon, and S. E. Ferguson, "Layered collagen fabric of cerebral aneurysms quantitatively assessed by the universal stage and polarized light microscopy," Anat. Rec. 231(4), 579–592 (1991).
15. E. Salomatina-Motts, V. Neel, and A. Yaroslavskaya, "Multimodal polarization system for imaging skin cancer," Opt. Spectrosc. 107(6), 884–890 (2009).
16. M. Anastasiadou, A. D. Martino, D. Clement, F. Liège, B. Laude-Boulesteix, N. Quang, J. Dreyfuss, B. Huynh, A. Nazac, L. Schwartz, and H. Cohen, "Polarimetric imaging for the diagnosis of cervical cancer," Phys. Status Solidi C 5(5), 1423–1426 (2008).
17. Y. Liu, T. York, W. Akers, G. Sudlow, V. Gruev, and S. Achilefu, "Complementary fluorescence-polarization microscopy using division-of-focal-plane polarization imaging sensor," J. Biomed. Opt. 17(11), 116001 (2012).
18. V. V. Tuchin, L. V. Wang, and D. A. Zimnyakov, Optical Polarization in Biomedical Applications (Springer, 2006).
19. R. Walraven, "Polarization imagery," Opt. Eng. 20(1), 200114 (1981).
20. J. E. Solomon, "Polarization imaging," Appl. Opt. 20(9), 1537–1544 (1981).
21. R. M. Azzam, "Arrangement of four photodetectors for measuring the state of polarization of light," Opt. Lett. 10(7), 309–311 (1985).
22. C. A. Farlow, D. B. Chenault, J. L. Pezzaniti, K. D. Spradley, and M. G. Gulley, "Imaging polarimeter development and applications," in Proc. SPIE (2002), pp. 118–125.
23. J. D. Barter, P. H. Lee, H. Thompson, Jr., and T. Schneider, "Stokes parameter imaging of scattering surfaces," in Optical Science, Engineering and Instrumentation '97 (International Society for Optics and Photonics, 1997), pp. 314–320.
24. M. W. Kudenov, J. L. Pezzaniti, and G. R. Gerhart, "Microbolometer-infrared imaging Stokes polarimeter," Opt. Eng. 48, 063201 (2009).
25. C. K. Harnett and H. G. Craighead, "Liquid-crystal micropolarizer array for polarization-difference imaging," Appl. Opt. 41(7), 1291–1296 (2002).
26. G. P. Nordin, J. T. Meier, P. C. Deguzman, and M. W. Jones, "Diffractive optical element for Stokes vector measurement with a focal plane array," in SPIE's International Symposium on Optical Science, Engineering, and Instrumentation (International Society for Optics and Photonics, 1999), pp. 169–177.
27. M. Sarkar, D. San Segundo Bello, C. Van Hoof, and A. Theuwissen, "Integrated polarization analyzing CMOS image sensor for material classification," IEEE Sens. J. 11(8), 1692–1703 (2011).
28. J. S. Tyo, "Hybrid division of aperture/division of a focal-plane polarimeter for real-time polarization imagery without an instantaneous field-of-view error," Opt. Lett. 31(20), 2984–2986 (2006).
29. M. Momeni and A. H. Titus, "An analog VLSI chip emulating polarization vision of octopus retina," IEEE Trans. Neural Netw. 17(1), 222–232 (2006).
30. T. Tokuda, S. Sato, H. Yamada, K. Sasagawa, and J. Ohta, "Polarisation-analysing CMOS photosensor with monolithically embedded wire grid polariser," Electron. Lett. 45(4), 228–230 (2009).
31. V. Gruev, J. Van der Spiegel, and N. Engheta, "Dual-tier thin film polymer polarization imaging sensor," Opt. Express 18(18), 19292–19303 (2010).
32. V. Gruev, R. Perkins, and T. York, "CCD polarization imaging sensor with aluminum nanowire optical filters," Opt. Express 18(18), 19087–19094 (2010).
33. R. Perkins and V. Gruev, "Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters," Opt. Express 18(25), 25815–25824 (2010).
34. M. Kulkarni and V. Gruev, "Integrated spectral-polarization imaging sensor with aluminum nanowire polarization filters," Opt. Express 20(21), 22997–23012 (2012).
35. G. Myhre, W.-L. Hsu, A. Peinado, C. LaCasse, N. Brock, R. A. Chipman, and S. Pau, "Liquid crystal polymer full-Stokes division of focal plane polarimeter," Opt. Express 20(25), 27393–27409 (2012).
36. J. S. Tyo, C. F. LaCasse, and B. M. Ratliff, "Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters," Opt. Lett. 34(20), 3187–3189 (2009).
37. S. Gao and V. Gruev, "Bilinear and bicubic interpolation methods for division of focal plane polarimeters," Opt. Express 19(27), 26161–26173 (2011).
38. X. Xu, M. Kulkarni, A. Nehorai, and V. Gruev, "A correlation-based interpolation algorithm for division-of-focal-plane polarization sensors," Proc. SPIE 8364, 83640L (2012).
39. A. El Gamal, B. A. Fowler, H. Min, and X. Liu, "Modeling and estimation of FPN components in CMOS image sensors," Proc. SPIE 3301, 168–177 (1998).
40. V. Gruev, Z. Yang, J. Van der Spiegel, and R. Etienne-Cummings, "Current mode image sensor with two transistors per pixel," IEEE Trans. Circuits Syst. I: Regular Papers 57, 1154–1165 (2010).
41. V. Gruev, "Fabrication of a dual-layer aluminum nanowires polarization filter array," Opt. Express 19(24), 24361–24369 (2011).
42. J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, "High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids," Appl. Phys. Lett. 90, 061104 (2007).
43. M. A. Jensen and G. P. Nordin, "Finite-aperture wire grid polarizers," J. Opt. Soc. Am. A 17(12), 2191–2198 (2000).
44. T. York and V. Gruev, "Characterization of a visible spectrum division-of-focal-plane polarimeter," Appl. Opt. 51(22), 5392–5400 (2012).
45. "KAI-2020 Image Sensor Device Performance Specification" (Eastman Kodak Company, 2010).
