Optica Publishing Group

Residual interpolation for division of focal plane polarization image sensors

Open Access

Abstract

Division of focal plane (DoFP) polarization image sensors capture the polarization properties of light at every imaging frame. However, these sensors capture only partial polarization information at each pixel, resulting in reduced-resolution output and a varying instantaneous field of view (IFoV). Interpolation methods are used to mitigate these drawbacks and recover the missing polarization information. In this paper, we propose residual interpolation as an alternative to conventional interpolation for division of focal plane polarization image sensors, where the residual is the difference between an observed and a tentatively estimated pixel value. Our results validate that the proposed algorithm achieves state-of-the-art performance compared with several previously published interpolation methods, namely bilinear, bicubic, spline and gradient-based interpolation. Visual image evaluation as well as mean square error analysis is applied to test images. For an outdoor polarized image of a car, residual interpolation yields lower mean square error and better visual evaluation results.

© 2017 Optical Society of America

1. Introduction

1.1 Background

The vital physical parameters of light are intensity (I), wavelength (λ), and polarization (the orientation of the electric field vector E). In the past, polarization has been ignored by imaging technology, as the human eye is insensitive to the polarization of light. Polarization provides information orthogonal to intensity and color: it captures a target's 3-D surface normals [1–4] and its material composition and roughness, and has motivated ultra-high-efficiency metamaterial polarizers [5–8]. In bioengineering research, polarization imaging is used to discriminate healthy from diseased tissue without the use of molecular markers [9–11].

Various techniques and instruments have been developed to record the polarization parameters of light [12]. With developments in nanofabrication technology, compact, inexpensive and high-resolution polarization sensors called division of focal plane (DoFP) polarization image sensors have been realized [13–18]. These developments in nanofabrication and nanomaterials allow for fabrication of pixelated nanowire filters on the top surface of the imaging sensor and help realize robust DoFP polarization imaging sensors. The imaging elements, i.e., photodetectors and micro polarization filter arrays, are included on the same substrate as the DoFP image sensor. The main advantage of DoFP image sensors over division of time (DoT) sensors is their capability of capturing polarization information at each frame, avoiding incorrect polarization information on moving targets [19]. DoFP sensors integrate pixelated polarization filters with an array of imaging elements, organized in a super-pixel [20,21] configuration containing four distinct pixelated polarization filters with transmission axes oriented at 0°, 90°, 45° and 135°, respectively (see Fig. 1). The super pixel holds all the information required to obtain a useful polarized image, recording the first three (S0, S1, S2) or all four (including S3) Stokes parameters at every frame [22].

Fig. 1 Division of focal plane polarization imaging sensor array with a 4-polarizer filter array (0°, 45°, 90° and 135°) of charge coupled device (CCD) imaging elements.

The image obtained from a DoFP sensor has lower polarization accuracy because each individual pixel within the super pixel has a slightly different field of view. To reconstruct the polarization information, the missing pixel information is estimated across the imaging array [23,24]. DoFP polarization sensors normally lose spatial resolution and capture erroneous polarization information [22,23,25,26]. Due to the four spatially distributed pixelated polarization filters, the instantaneous fields of view of neighboring pixels in a super-pixel configuration can differ from each other [24,27–30]. Therefore, the first three Stokes parameters (S0, S1, S2), the angle of linear polarization (AoP) and the degree of linear polarization (DoLP) will contain errors and deviate from the true polarization components. Such edge artifacts are easily observed in AoP and DoLP images. These drawbacks need to be resolved to obtain the real-time advantage of DoFP image sensors.

The polarization imaging sensor shares many similarities with color imaging using a Bayer color filter array [31]. The 2 × 2 super pixel of a color filter array consists of three wavelength channels: red, green and blue. The blue and red channels are each sampled at 25% of the pixel locations, while green is sampled at 50%. Because the color filters are placed on the imaging element array, spatial resolution is reduced in the different color channels by this down-sampling. Since the sensor perceives only partial information for each channel, interpolation algorithms are used to recover the lost spatial resolution with minimal artifacts.

In color image demosaicking, the G image is first interpolated, and then a tentative estimate R̂ of the R image is generated. Residuals (R − R̂) between the observed and tentatively estimated values are computed at the R pixel locations. The interpolated residuals are then added to the tentative estimate R̂ to obtain the interpolated image [32–34]. Interpolation techniques designed for a color filter array cannot be directly employed in the polarization domain due to the essential differences between the two modalities. We have borrowed the tentative pixel estimation step from the residual interpolation technique used for color filter arrays. In DoFP imaging, we apply the residual interpolation method to the four polarized images separately before calculating the DoLP and AoP.

Interpolation methods are applied to recover some of the lost spatial resolution and improve the accuracy of the captured polarization information. The following methods have traditionally been used to interpolate polarization information: bilinear, bicubic, spline and gradient-based methods [22,23,25–28]. For each method, four polarization-filtered images are required to obtain the necessary polarization information, such as the Stokes parameters and the angle and degree of linear polarization. The bilinear, bicubic and spline methods are essentially low-pass filters, which smooth out the intensity information obtained by the four polarization-filtered images and create sawtooth artifacts at edges. For images with multiple objects against a background, their continuity assumption fails on low-resolution images and false polarization signatures are generated. The gradient-based interpolation technique uses interleaved gradients, which introduce inconsistencies due to the varying instantaneous field of view (IFoV). These errors can be reduced considerably if a proper interpolation technique is used. We therefore develop a novel residual interpolation method that preserves edges and interpolates the residuals between tentatively estimated and observed pixel values, providing higher accuracy.

In this paper, we propose residual interpolation for division of focal plane imaging sensors, where the interpolation is executed in a “residual” domain. We interpolated the low-resolution polarized images, generated tentative estimates q̄i0, q̄i45, q̄i90 and q̄i135 of the 0°, 45°, 90° and 135° images, and calculated their residuals, which are the differences between the observed and the tentatively estimated pixel values (i.e., I0 − q̄i0, I45 − q̄i45, I90 − q̄i90 and I135 − q̄i135). We used a guided filter for edge preservation and to accurately generate the tentative estimates of the pixel values [35]. An advantage of the guided filter is that its computing time is independent of the filter size. The performance of the residual interpolation method is compared with several previously published interpolation methods: the bilinear, bicubic, spline and gradient-based methods. Based on the results, it is clear that residual interpolation outperforms the others in terms of both mean square error and visual evaluation.

1.2 Linear polarization imaging calculations

A DoFP imaging sensor captures both the intensity and the polarization information of a scene. The sensor samples the scene through 0°, 45°, 90° and 135° polarization filters and registers the four sub-sampled images. The intensity and polarization are then computed from the images captured with the 0°, 45°, 90° and 135° linear polarization filters. Two polarization properties are of most interest: the DoLP and the AoP. The intensity, polarization differences, DoLP and AoP are computed via the following equations:

Intensity (S0) = 1/2 (I0 + I90 + I45 + I135),
S1 = I0 − I90,
S2 = I45 − I135,
DoLP = √(S1² + S2²) / S0,
AoP = (1/2) tan⁻¹(S2 / S1).

A linear polarization filter has been used to find the Stokes parameters; however, the circular polarization Stokes parameter S3 is not captured by the DoFP sensor shown in Fig. 1. The above equations show that a polarization imaging sensor has to sample the image with four linear polarization filters offset by 45° [29].
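As a concrete illustration, the relations above can be computed directly; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def stokes_from_filtered(i0, i45, i90, i135):
    """Compute S0, S1, S2, DoLP and AoP from four linearly
    polarization-filtered intensity images (arrays or scalars)."""
    s0 = 0.5 * (i0 + i90 + i45 + i135)   # total intensity
    s1 = i0 - i90                        # 0°/90° difference
    s2 = i45 - i135                      # 45°/135° difference
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)       # angle of linear polarization (rad)
    return s0, s1, s2, dolp, aop

# Example: fully 0°-polarized light (I0 = 1, I90 = 0, I45 = I135 = 0.5)
s0, s1, s2, dolp, aop = stokes_from_filtered(1.0, 0.5, 0.0, 0.5)
```

Using `arctan2` instead of a plain arctan of S2/S1 keeps the AoP well defined when S1 is zero.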

2. Residual interpolation

In this section, the bilinear interpolation method is first briefly reviewed, followed by an overview of the proposed residual interpolation method. We used bilinear interpolation due to its low computational complexity. The basic principle of bilinear interpolation is to estimate the pixel values in two dimensions. The distance weighted average of the four nearest pixel values is used to estimate a new pixel value.

Based on the four neighboring pixel points (see Fig. 2), f (i, j), f (i + 1, j), f (i, j + 1) and f (i + 1, j + 1), of the interpolating point f (x, y), the mathematical formula for bilinear interpolation can be written as follows:

Fig. 2 Bilinear interpolation.

f(x, y) = f(i, j)(i + 1 − x)(j + 1 − y) + f(i, j + 1)(i + 1 − x)(y − j) + f(i + 1, j + 1)(x − i)(y − j) + f(i + 1, j)(x − i)(j + 1 − y).

We can estimate the missing polarization components at pixel (2, 2), which observes the 90° orientation, by bilinear interpolation within a 4 × 4 block as follows [22] (see Fig. 3):

Fig. 3 A 4 × 4 block in a DoFP image sensor.

I45 = 1/2 (I45(1,2) + I45(3,2)),
I135 = 1/2 (I135(2,1) + I135(2,3)),
I0 = 1/4 (I0(1,1) + I0(1,3) + I0(3,1) + I0(3,3)).

We used a guided filter for edge-preserving smoothing of the images taken at 0°, 45°, 90° and 135°. The guided filter assumes a local linear model between the high-resolution guide images (I0, I45, I90, I135) and the filtered images (Iint0, Iint45, Iint90, Iint135). The filter output qi is a linear transform of the guide image in a window wk centered at pixel k, and this model is applied to all four images:

qi0 = ak I0 + bk,   i ∈ wk.

Similarly, this model can be applied to the 45°, 90° and 135° images to obtain the filter outputs qi45, qi90 and qi135. Here ak and bk are linear coefficients assumed constant in the window wk. They can be determined by minimizing the following cost function in wk for I0 and Iint0:

E0(ak, bk) = Σi∈wk ((ak I0 + bk − Iint0)² + ε ak²),
ak = ((1/|w|) Σi∈wk I0 Iint0 − μk Īint0,k) / (σk² + ε),
bk = Īint0,k − ak μk.

Similarly, cost functions E45, E90 and E135 can be written for Iint45, Iint90 and Iint135. Here μk and σk² are the mean and variance of I0 in wk, |w| is the number of pixels in the window, and Īint0,k is the mean of Iint0 in wk. The coefficients ak and bk for E45, E90 and E135 can be calculated analogously. The filter outputs, i.e., the tentatively estimated images for the four polarizer orientations, are then:

q̄i0 = (1/|w|) Σi∈wk (ak I0 + bk),
q̄i45 = (1/|w|) Σi∈wk (ak I45 + bk),
q̄i90 = (1/|w|) Σi∈wk (ak I90 + bk),
q̄i135 = (1/|w|) Σi∈wk (ak I135 + bk).
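The coefficient formulas above can be implemented with box filters; a compact sketch of the guided filter of [35] (the window radius and ε values here are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=2, eps=1e-3):
    """Guided filter: the output is locally a linear transform of `guide`,
    with coefficients a_k, b_k fitted to `src` in each window."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)   # window average
    mu_g, mu_s = mean(guide), mean(src)
    var_g = mean(guide * guide) - mu_g * mu_g       # sigma_k^2
    cov_gs = mean(guide * src) - mu_g * mu_s        # window covariance
    a = cov_gs / (var_g + eps)                      # a_k
    b = mu_s - a * mu_g                             # b_k
    # average the per-window coefficients, then apply the linear model
    return mean(a) * guide + mean(b)
```

Because every step is a box filter, the cost per pixel does not grow with the window size, which is the speed advantage noted in the text.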

The guided filter provides the tentatively estimated pixel values for each of the four polarizer orientations (0°, 45°, 90°, 135°). The residuals (Δ) can be calculated from the original pixels and the guided filter output as follows:

ΔI0(i, j) = I0(i, j) − q̄i0(i, j),
ΔI45(i, j) = I45(i, j) − q̄i45(i, j),
ΔI90(i, j) = I90(i, j) − q̄i90(i, j),
ΔI135(i, j) = I135(i, j) − q̄i135(i, j),   for i = 1:n, j = 1:m.

The residuals (Δ) can be further interpolated and then added to the tentatively estimated pixel values. The ΔIint for 0°, 45°, 90° and 135° is shown in the residual interpolated difference block in Fig. 4(a). We can calculate the missing residuals at the 90° pixel location by bilinear interpolation as follows:

Fig. 4 A 4 × 4 residual interpolated difference block and net residual interpolated block.

ΔIint45 = 1/2 (ΔI45(1,2) + ΔI45(3,2)),
ΔIint135 = 1/2 (ΔI135(2,1) + ΔI135(2,3)),
ΔIint0 = 1/4 (ΔI0(1,1) + ΔI0(1,3) + ΔI0(3,1) + ΔI0(3,3)).

The net residual interpolation adds, at each pixel, the interpolated residual (Δ) to the tentative estimate for each polarized image, as shown in Fig. 4(b). This can be represented as follows:

RI_I0(i, j) = ΔIint0(i, j) + q̄i0(i, j),
RI_I45(i, j) = ΔIint45(i, j) + q̄i45(i, j),
RI_I90(i, j) = ΔIint90(i, j) + q̄i90(i, j),
RI_I135(i, j) = ΔIint135(i, j) + q̄i135(i, j),   for i = 1:n, j = 1:m.

The difference between residual and bilinear interpolation is that bilinear interpolation estimates a new pixel directly from its four nearest neighbors, whereas residual interpolation applies bilinear interpolation to the residuals between the tentatively estimated and observed pixel values. The interpolated residuals are then added to the tentative estimates to obtain the net residual interpolation.

In Fig. 5, the flow chart of residual interpolation is presented. First, the low-resolution polarization images are up-sampled using bilinear interpolation to generate high-resolution images. With the guided filter, the proposed algorithm up-samples the sparse data using the above-mentioned interpolated images and the high-resolution guide images (I0, I45, I90, I135); the image structures of the interpolated images are therefore preserved. We generated the tentative estimates q̄i0, q̄i45, q̄i90 and q̄i135 of the 0°, 45°, 90° and 135° images and calculated the residuals, as presented in Eq. (15). The residuals were again interpolated using bilinear interpolation and added to the tentative estimates to obtain the residual interpolation, as shown in Eq. (19).
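The per-channel core of this pipeline can be sketched in a few lines; the function names are ours, and the tentative estimate and sparse interpolator are passed in as stand-ins for the guided filter and bilinear steps described above:

```python
import numpy as np

def residual_interp_channel(observed, mask, tentative, interp_sparse):
    """Residual interpolation for one polarization channel (a sketch).
    `tentative` is the guided-filter estimate q_bar; `interp_sparse(v, mask)`
    is any sparse interpolator of the masked samples (e.g. bilinear)."""
    # residuals at the observed pixel locations (Eq. 15)
    delta = np.where(mask, observed - tentative, 0.0)
    # interpolate the residuals and add them back (Eq. 19)
    return tentative + interp_sparse(delta, mask)

# Toy demo with an identity "interpolator": off-mask residuals stay zero,
# so observed pixels are reproduced exactly and the rest keep q_bar.
obs = np.full((4, 4), 7.0)
mask = np.zeros((4, 4), dtype=bool)
mask[::2, ::2] = True
tent = np.full((4, 4), 6.0)
out = residual_interp_channel(obs, mask, tent, lambda v, m: v)
```

Note the key property of the scheme: at observed pixel locations the output reproduces the measurement exactly, since the residual there is added back in full.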

Fig. 5 Flow chart for residual interpolation.

3. Modulation transfer function

The modulation transfer function (MTF) of an imaging system is a measure of the contrast transferred by the optics. The MTF measures the magnitude response of an imaging system to sinusoidal patterns at different spatial frequencies; intuitively, it measures how well a camera resolves fine detail [36]. At each spatial frequency, the MTF can be calculated as the ratio of the contrast of the output sinusoidal pattern to that of the input.

The polarization image sensor captures the polarization information at each imaging frame. An input target image can be defined as sinusoidal patterns varying at different frequencies. We generated an artificial sinusoidal image in MATLAB for each frame, i.e., 0°, 45°, 90° and 135°, as follows [22]:

I0(x, y) = cos(2π fx x + 2π fy y) + 1,
I45(x, y) = 2 cos(2π fx x + 2π fy y) + 2,
I90(x, y) = cos(2π fx x + 2π fy y) + 1,
I135(x, y) = 0,
where fx and fy are the spatial frequency components in the horizontal and vertical directions. All patterns were down-sampled in order to check the accuracy of the interpolation algorithms. We interpolated the down-sampled polarized images to obtain high-resolution images with the bilinear, bicubic, spline, gradient and residual interpolation algorithms. With these patterns, S0 and S2 vary sinusoidally, while S1 is constant. We then modeled how our imager would sample such a signal and applied the interpolation techniques. The ratio of the contrast of the interpolated signal to that of the original sinusoidal signal gives the MTF at one frequency; each frequency yields another MTF point. In this way, over frequencies swept from 0 to 0.5 cycles per pixel, the MTF curve was plotted, as shown in Fig. 6. Figures 6(a)–6(e) show the 3-D MTF charts of bilinear, bicubic, spline, gradient-based and residual interpolation along fx and fy; the horizontal frequency fx and vertical frequency fy were swept from 0 to 0.5 cycles per pixel. Figure 6(f) shows the MTF response along fx = fy, with spline interpolation in cyan, bilinear in green, bicubic in blue, gradient in yellow and residual interpolation in red. The ideal MTF, shown by the dotted purple line, has unity gain from 0 to 0.5 cycles per pixel and zero gain at higher frequencies. All interpolation algorithms other than residual interpolation show reduced gain below 0.25 cycles per pixel and near-zero gain at higher frequencies. Residual interpolation maintains higher gain than the other methods beyond 0.25 cycles per pixel, and between 0.375 and 0.5 cycles per pixel it again provides increased gain compared with the other interpolation methods.
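The test patterns of the equations above can be synthesized directly (the paper uses MATLAB; this is a numpy equivalent, with the image size chosen arbitrarily):

```python
import numpy as np

def dofp_test_patterns(fx, fy, n=64):
    """Synthesize the four sinusoidal channel images used for the MTF
    measurement; fx, fy are spatial frequencies in cycles per pixel."""
    y, x = np.mgrid[0:n, 0:n]
    phase = 2 * np.pi * fx * x + 2 * np.pi * fy * y
    i0 = np.cos(phase) + 1.0
    i45 = 2.0 * np.cos(phase) + 2.0
    i90 = np.cos(phase) + 1.0
    i135 = np.zeros_like(i0)
    return i0, i45, i90, i135

i0, i45, i90, i135 = dofp_test_patterns(0.1, 0.05)
s1 = i0 - i90    # identically zero, so S1 is constant as stated in the text
s2 = i45 - i135  # varies sinusoidally, exercising the interpolators
```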

Fig. 6 The MTF of intensity (S0) for the interpolation algorithms: (a) bilinear, (b) bicubic, (c) spline, (d) gradient, (e) residual. (f) The MTF of S0 along fx = fy.

4. Experimental setup

To assess the accuracy of different interpolation methods, the “true” high-resolution polarization image must be known beforehand, while a DoFP polarization imaging sensor can only generate low-resolution images. We therefore captured four true high-resolution grayscale images of a car in an outdoor environment at the 0°, 45°, 90° and 135° orientations, using a CMLN-13S2M-CS CCD camera mounted with a linear polarization filter. These true high-resolution images were down-sampled following the sampling pattern of the DoFP polarization imaging sensor, yielding four low-resolution images at 0°, 45°, 90° and 135°, like those acquired from a DoFP sensor. After applying the interpolation algorithms, the final high-resolution interpolated images were compared against the true high-resolution images originally obtained. The intensity, DoLP and AoP images for the car are shown in Fig. 7. Potential error in the original high-resolution images due to optical misalignment is not an experimental concern: we only test the interpolation algorithms in terms of mean square error and visual evaluation, and since the algorithms are applied to the down-sampled images, any original error is the same in both the low- and high-resolution images. Our setup therefore provides a fair comparison of the reconstruction error among the bilinear, bicubic, spline, gradient-based and residual interpolation methods.
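The down-sampling step can be simulated by keeping each channel only at its mosaic positions; a sketch assuming the layout of Fig. 3 (0°/45° on odd rows, 135°/90° on even rows, 1-indexed):

```python
import numpy as np

def mosaic_dofp(i0, i45, i90, i135):
    """Simulate a DoFP capture from four full-resolution images by
    retaining each channel only at its mosaic positions."""
    out = np.empty_like(i0)
    out[0::2, 0::2] = i0[0::2, 0::2]       # 0°  at even-even (0-indexed)
    out[0::2, 1::2] = i45[0::2, 1::2]      # 45° at even-odd
    out[1::2, 0::2] = i135[1::2, 0::2]     # 135° at odd-even
    out[1::2, 1::2] = i90[1::2, 1::2]      # 90° at odd-odd
    return out

# Example with constant channel images labeled by their angle:
m = mosaic_dofp(np.full((4, 4), 0.0), np.full((4, 4), 45.0),
                np.full((4, 4), 90.0), np.full((4, 4), 135.0))
```

Interpolation algorithms are then run on `m` and their outputs compared against the four original full-resolution images.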

Fig. 7 The true high-resolution image of a car: (a) car-intensity, (b) car-DoLP and (c) car-AoP.

5. Performance estimation

In this section, we adopt the mean square error (MSE) and visual evaluation to compare the performance of the different interpolation algorithms. The interpolation methods are used to obtain high-resolution images from the low-resolution images. The interpolated images are compared with the true high-resolution images, and the polarization characteristics of the images are examined. Sections 5.1 and 5.2 give the visual image evaluation and the MSE of the test images, respectively.

5.1 Visual image evaluation

In Fig. 7, the intensity, DoLP and AoP images computed from the true high-resolution car image are shown; these are used to visually compare the reconstruction accuracy of the different interpolation methods on the small patches presented in Fig. 8. The first column of Fig. 8 gives the original intensity, DoLP and AoP patches, while the second to sixth columns show the bilinear, bicubic, spline, gradient and residual interpolation results, respectively.

Fig. 8 The true high-resolution image and comparison of interpolation methods on the (a) intensity, (b) DoLP, and (c) AoP, showing the effect of interpolation on the artifacts in the patches.

In Fig. 7, the DoLP values are lower in the red areas and higher on the car’s glass windows, with the light blue spot on the glass marked with a white oval showing medium DoLP. The AoP value is low, medium, and high in the red, light blue and purple areas, respectively.

In Fig. 8, small patches of the car image are shown. The purple ovals on the original and residual-interpolated intensity images show how effectively the artifacts have been recovered. Image artifacts and glitches are substantially reduced by residual interpolation, with the interpolated images close to the originals. The DoLP and AoP patches of the car likewise show the accuracy of residual interpolation compared with the bilinear, bicubic, gradient and spline algorithms.

We used parallel programming to speed up the processing to real time. On our system (Intel Core i5-3470 CPU @ 3.20 GHz, 8 GB RAM), bilinear interpolation computes the AoP image (960 × 1280) in 40 milliseconds, bicubic in 47 milliseconds, gradient in 45 milliseconds, spline in 57 milliseconds and residual interpolation in 61 milliseconds. Most importantly, in terms of polarization information recovery, mean square error, visual evaluation and MTF, the residual interpolation performance is significantly better than that of the other interpolation methods.

5.2. MSE comparison

The MSE for the different interpolation algorithms is found using the following equation:

MSE = (1/MN) Σi=1..M Σj=1..N (Oimg(i, j) − iimg(i, j))².
where Oimg(i, j) is the true target pixel, iimg(i, j) is the corresponding interpolated pixel, and M and N are the numbers of rows and columns in the image array, respectively. The mean square error results for the different interpolation methods on the car image are shown in Table 1. The minimum MSE for the I(0°), I(45°), I(90°), I(135°), intensity, DoLP and AoP images is obtained with the residual interpolation method. The spline interpolation method introduces the largest error, while the bicubic and gradient interpolation methods show similar error performance, the gradient method being the more computationally efficient of the two.
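The MSE equation above is a one-liner in numpy (the function name is ours):

```python
import numpy as np

def mse(o_img, i_img):
    """Mean square error between the true image O_img and the
    interpolated image i_img."""
    o_img = np.asarray(o_img, dtype=float)
    i_img = np.asarray(i_img, dtype=float)
    # mean over all M x N pixels of the squared per-pixel difference
    return np.mean((o_img - i_img) ** 2)
```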


Table 1. MSE performance comparison for car image

6. Conclusion

In this paper, we proposed the residual interpolation algorithm for division of focal plane image sensors. We compared the gradient, bilinear, bicubic and spline interpolation algorithms with residual interpolation. The performance was compared visually, by modulation transfer function (MTF) and by MSE, using images captured with a CCD camera and a linear polarization filter rotated in front of the sensor. The interpolation algorithms were applied to low-resolution images and compared statistically against the true high-resolution polarization images. We applied the algorithms to the intensity (S0), angle of linear polarization (AoP) and degree of linear polarization (DoLP) images to observe the accuracy of edge recovery and polarization information. The improvements in reconstruction accuracy using the proposed residual interpolation method were shown both by MSE and visually, in comparison with the bilinear, bicubic, spline and gradient-based algorithms. This demonstrates that residual interpolation can bring a large improvement in output quality, particularly around edge artifacts, for a real DoFP polarization image sensor.

Funding

The Qatar National Research Fund (NPRP9-421-2-170).

Acknowledgments

The authors would like to thank Neal Brock at 4D Technology and Shengkui Gao at Apple, United States, for their guidance on polarization image sensors.

References and links

1. N. M. Garcia, I. de Erausquin, C. Edmiston, and V. Gruev, “Surface normal reconstruction using circularly polarized light,” Opt. Express 23(11), 14391–14406 (2015). [CrossRef]   [PubMed]  

2. D. Miyazaki, T. Shigetomi, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Surface normal estimation of black specular objects from multiview polarization images,” Opt. Eng. 56(4), 041303 (2016). [CrossRef]  

3. H. Zhan and D. G. Voelz, “Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation,” Opt. Eng. 55(12), 123103 (2016). [CrossRef]  

4. V. Thilak, D. G. Voelz, and C. D. Creusere, “Polarization-based index of refraction and reflection angle estimation for remote sensing applications,” Appl. Opt. 46(30), 7527–7536 (2007). [CrossRef]   [PubMed]  

5. B. Shen, P. Wang, R. Polson, and R. Menon, “Ultra-high-efficiency metamaterial polarizer,” Optica 1(5), 356–360 (2014). [CrossRef]  

6. P. Terrier, V. Devlaminck, and J. M. Charbois, “Segmentation of rough surfaces using a polarization imaging system,” J. Opt. Soc. Am. A 25(2), 423–430 (2008). [CrossRef]   [PubMed]  

7. O. Morel, R. Seulin, and D. Fofi, “Handy method to calibrate division-of-amplitude polarimeters for the first three Stokes parameters,” Opt. Express 24(12), 13634–13646 (2016). [CrossRef]   [PubMed]  

8. M. W. Hyde 4th, J. D. Schmidt, M. J. Havrilla, and S. C. Cain, “Enhanced material classification using turbulence-degraded polarimetric imagery,” Opt. Lett. 35(21), 3601–3603 (2010). [CrossRef]   [PubMed]  

9. S. Alali and A. Vitkin, “Polarized light imaging in biomedicine: emerging Mueller matrix methodologies for bulk tissue assessment,” J. Biomed. Opt. 20(6), 061104 (2015). [CrossRef]   [PubMed]  

10. T. York, S. B. Powell, S. Gao, L. Kahan, T. Charanya, D. Saha, N. W. Roberts, T. W. Cronin, J. Marshall, S. Achilefu, S. P. Lake, B. Raman, and V. Gruev, “Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications: analysis at the focal plane emulates nature’s method in sensors to image and diagnose with polarized light,” Proc. IEEE 102(10), 1450–1469 (2014). [CrossRef]   [PubMed]

11. N. W. Roberts, M. J. How, M. L. Porter, S. E. Temple, R. L. Caldwell, S. B. Powell, V. Gruev, N. J. Marshall, and T. W. Cronin, “Animal polarization imaging and implications for optical processing,” Proc. IEEE 102(10), 1427–1434 (2014). [CrossRef]  

12. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006). [CrossRef]   [PubMed]  

13. X. Zhao, A. Bermak, F. Boussaid, and V. G. Chigrinov, “Liquid-crystal micropolarimeter array for full Stokes polarization imaging in visible spectrum,” Opt. Express 18(17), 17776–17787 (2010). [CrossRef]   [PubMed]  

14. V. Gruev, “Fabrication of a dual-layer aluminum nanowires polarization filter array,” Opt. Express 19(24), 24361–24369 (2011). [CrossRef]   [PubMed]  

15. V. Gruev and R. E. Cummings, “Implementation of steerable spatiotemporal image filters on the focal plane,” IEEE Trans. Circuits Syst. 49(4), 233–244 (2002). [CrossRef]  

16. X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “High-resolution thin “guest-host” micropolarizer arrays for visible imaging polarimetry,” Opt. Express 19(6), 5565–5573 (2011). [CrossRef]   [PubMed]  

17. V. Gruev, J. Van der Spiegel, and N. Engheta, “Dual-tier thin film polymer polarization imaging sensor,” Opt. Express 18(18), 19292–19303 (2010). [CrossRef]   [PubMed]  

18. M. Kulkarni and V. Gruev, “Integrated spectral-polarization imaging sensor with aluminum nanowire polarization filters,” Opt. Express 20(21), 22997–23012 (2012). [CrossRef]   [PubMed]  

19. C. K. Harnett and H. G. Craighead, “Liquid-crystal micropolarizer array for polarization-difference imaging,” Appl. Opt. 41(7), 1291–1296 (2002). [CrossRef]   [PubMed]  

20. V. Gruev and R. E. Cummings, “A pipelined temporal difference imager,” IEEE J. Solid-State Circuits 39(3), 538–543 (2004). [CrossRef]  

21. Y. Liu, R. Njuguna, T. Matthews, W. J. Akers, G. P. Sudlow, S. Mondal, R. Tang, V. Gruev, and S. Achilefu, “Near-infrared fluorescence goggle system with complementary metal-oxide-semiconductor imaging sensor and see-through display,” J. Biomed. Opt. 18(10), 101303 (2013). [CrossRef]   [PubMed]  

22. S. Gao and V. Gruev, “Bilinear and bicubic interpolation methods for division of focal plane polarimeters,” Opt. Express 19(27), 26161–26173 (2011). [CrossRef]   [PubMed]  

23. J. Zhang, H. Luo, B. Hui, and Z. Chang, “Image interpolation for division of focal plane polarimeters with intensity correlation,” Opt. Express 24(18), 20799–20807 (2016). [CrossRef]   [PubMed]  

24. R. Perkins and V. Gruev, “Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters,” Opt. Express 18(25), 25815–25824 (2010). [CrossRef]   [PubMed]  

25. E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using Gaussian processes,” Opt. Express 22(12), 15277–15291 (2014). [CrossRef]   [PubMed]  

26. S. Gao and V. Gruev, “Gradient-based interpolation method for division-of-focal-plane polarimeters,” Opt. Express 21(1), 1137–1151 (2013). [CrossRef]   [PubMed]  

27. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17(11), 9112–9125 (2009). [CrossRef]   [PubMed]  

28. P. Thévenaz, T. Blu, and M. Unser, “Image interpolation and Resampling” in Handbook of Medical Imaging (SPIE Press, 2000), pp. 393–420.

29. D. H. Goldstein, Polarized Light, 3rd ed. (CRC Press, 2010).

30. M. W. Kudenov, L. J. Pezzaniti, and G. R. Gerhart, “Microbolometer-infrared imaging Stokes polarimeter,” Opt. Eng. 48(6), 063201 (2009). [CrossRef]  

31. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Residual interpolation for color image demosaicking,” in 2013 IEEE International Conference on Image Processing, Melbourne, (IEEE, 2013), pp. 2304–2308. [CrossRef]  

32. Y. Monno, D. Kiku, S. Kikuchi, M. Tanaka, and M. Okutomi, “Multispectral demosaicking with novel guide image generation and residual interpolation,” in IEEE International Conference on Image Processing (IEEE, 2014), pp. 645–649. [CrossRef]  

33. W. Ye and K. K. Ma, “Color image demosaicing using iterative residual interpolation,” IEEE Trans. Image Process. 24(12), 5879–5891 (2015). [CrossRef]   [PubMed]  

34. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Beyond color difference: Residual interpolation for color image demosaicking,” IEEE Trans. Image Process. 25(3), 1288–1300 (2016). [PubMed]  

35. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013). [CrossRef]   [PubMed]  

36. G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE, 2001).
