
Denoising imaging polarimetry by adapted BM3D method

Open Access

Abstract

In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We also show, by comparison with spectral polarimetry measurements, that denoising polarization images using PBM3D allows the degree of polarization to be calculated more accurately.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

The polarization of light describes how light waves propagate through space [1]. Although different forms of polarization can occur, such as circular polarization, in this paper we focus only on linear polarization, the form that is abundant in nature. Light is said to be completely linearly polarized (or polarized for the purposes of this paper) when all waves travelling along the same path through space are oscillating within the same plane. If, however, there is no correlation between the orientation of the waves, the light is described as unpolarized. Polarized and unpolarized light are the limiting cases of partially polarized light, which can be considered to be a mixture of polarized and unpolarized light.

The polarization of light can be altered by the processes of scattering and reflection. As a form of visual information, it provides a fitness benefit such that many animals [2,3] use polarization sensitivity for a variety of task-specific behaviors such as navigation [4], communication [5], and contrast enhancement [6]. Inspired by nature, many devices, known as imaging polarimeters or polarization cameras, are now available that capture images containing information about the polarization of light [7,8]. These have been used in a growing number of applications [9], including mine detection [10], surveillance [11], shape retrieval [12], and robot vision [13] as well as research in sensory biology [8,14].

A major challenge facing imaging polarimetry, addressed in this paper, is noise. State-of-the-art imaging polarimeters suffer from low signal-to-noise ratios (SNR), and it will be shown that conventional image denoising algorithms are not well suited to polarization imagery.

While a great deal of previous work has been done on denoising, very little has specifically been tailored to imaging polarimetry. Zhao et al. [15] approached denoising imaging polarimetry by computing Stokes components from a noisy camera using spatially adaptive wavelet image fusion, whereas Faisan et al.’s [16] method is based on a modified version of the nonlocal means (NLM) algorithm [17].

This paper compares the effectiveness of conventional denoising algorithms in the context of imaging polarimetry. A novel method termed Polarization-BM3D (PBM3D), adapted from an existing denoising algorithm, BM3D (Block Matching 3D) [18], is then introduced and will be shown to be superior to the state of the art.

2. IMAGING POLARIMETRY

A. Representing Light Polarization

A polarizer is an optical filter that only transmits light of a given linear polarization. The angle between the polarizer's transmission axis and the horizontal is known as the polarizer orientation. Let I represent the total light intensity and Ii represent the intensity of light that is transmitted through a polarizer orientated at i degrees to the horizontal. The standard way of representing linear polarization is by using the Stokes parameters (S0, S1, S2) [19], which are defined as follows:

S_0 = I,    (1)
S_1 = I_0 - I_{90},    (2)
S_2 = I_{45} - I_{135}.    (3)

Note that I=I0+I90=I45+I135, so the above can be rewritten, using I0, I45, and I90, as

S_0 = I_0 + I_{90},    (4)
S_1 = I_0 - I_{90},    (5)
S_2 = -I_0 + 2I_{45} - I_{90}.    (6)

The degree of (linear) polarization (DoP) and the angle of polarization (AoP) are defined as

\mathrm{DoP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0},    (7)
\mathrm{AoP} = \frac{1}{2}\arctan\left(\frac{S_2}{S_1}\right).    (8)

The DoP represents the proportion of the light that is polarized rather than unpolarized: DoP = 1 means that the light is fully polarized, and DoP = 0 means that it is unpolarized. The AoP represents the average orientation of the oscillations of the light waves, expressed as an angle from the horizontal.
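These relations translate directly into array operations. The following sketch (our own illustration, not code from the paper) computes the Stokes components, the DoP, and the AoP from the three camera components; the small constant `eps` is an added guard against division by zero in dark pixels.

```python
# Minimal sketch (not the authors' code): Stokes components, DoP, and AoP
# computed from the three camera components I0, I45, I90 as NumPy arrays.
import numpy as np

def stokes_from_camera(I0, I45, I90):
    """Camera components -> Stokes components, Eqs. (4)-(6)."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = -I0 + 2.0 * I45 - I90
    return S0, S1, S2

def dop_aop(S0, S1, S2, eps=1e-12):
    """Degree and angle of linear polarization, Eqs. (7)-(8)."""
    dop = np.sqrt(S1**2 + S2**2) / np.maximum(S0, eps)
    aop = 0.5 * np.arctan2(S2, S1)  # arctan2 keeps the correct quadrant of S2/S1
    return dop, aop
```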

B. Imaging Polarimeters

Imaging polarimeters are devices that, in addition to measuring the intensity of light at each pixel in an array, also measure the polarization of light at each pixel location. There are many designs of passive imaging polarimeter (here, we are not concerned with active imaging polarimeters), summarized in [20]. The common feature they share is measuring, at each pixel in an array, the intensity of light that passes through polarizers of multiple orientations, (Ii1, Ii2, …, Iin), possibly with additional measurements of circular polarization. The measurements for multiple orientations are taken either simultaneously or of a completely static scene. The Stokes parameters are then derived at each pixel. For the rest of this paper, we will consider a polarimeter that measures I0, I45, and I90, which is a common arrangement [7,8]. Generalizations to imaging polarimeters that measure intensities at different angles are straightforward.

As this paper addresses polarization measurements across an array, the symbols I0, I45, I90, S0, S1, S2, DoP, and AoP will henceforth refer to the array of values, rather than a single measurement. I0, I45, and I90 are known as the camera components and S0, S1, and S2 as the Stokes components.

C. Noise

Noise affects most imaging systems and is especially challenging in polarimetry due to the complex sensor configuration involved with measuring the polarization. Each type of imaging polarimeter (see [20] for a description of the different types) either is affected by noise to a greater extent than are conventional cameras or suffers from other degradations that limit its use to specific applications. “Division of focal plane” polarimeters, which use micro-optical arrays of polarization elements, suffer from imperfect fabrication and crosstalk between polarization elements. “Division of amplitude” polarimeters, which split the incident light into multiple optical paths, suffer from low SNR due to the splitting of the light. “Division of aperture” polarimeters, which use separate apertures for separate polarization components, suffer from distortions due to parallax (except in the case where a scene has no depth). “Division of time” polarimeters require static, or slowly evolving, scenes, and are thus incapable of recording video of scenes with rapid movement and so, for many applications, cannot be used. Moreover, DoP and AoP, which are often the quantities of interest in polarimetry, are nonlinear functions of the camera and Stokes components, and these nonlinearities have the effect of amplifying the noise degradation.

To highlight the degradation of a DoP image due to noise, consider Fig. 1. The top row shows the three camera components of an unpolarized scene (i.e., all three components are identical, and DoP = 0 everywhere). The original images with noise added are shown in the bottom row. Although there is only a small noticeable difference between the original and noisy camera components, the difference between the original and noisy DoP images is severe. This indicates a large error, with 25% of pixels exhibiting an error greater than 10%. The error is greater where the intensity of the camera components is smaller. To see why this is the case, consider a noisy Stokes image (S0, S1, S2), where the measured values are normally distributed around the true Stokes parameters (T0, T1, T2). Let the true DoP be given by δ0 = √(T1² + T2²)/T0. The naive way to estimate δ0 is to apply the DoP formula to the measured Stokes parameters, δ = √(S1² + S2²)/S0. But E(δ) ≠ δ0 (where E denotes expected value), meaning that δ is a biased estimator; thus, the calculated DoP does not average to the correct result. This can be seen from the fact that, if the true DoP, δ0, is zero and T0 > 0, then any error in S1 or S2 results in δ > 0. Denoising algorithms, including the one proposed in this paper, PBM3D, are thus essential for mitigating such degradations due to noise.
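This bias is easy to reproduce numerically. The following Monte Carlo sketch (our own illustration; the Stokes values and noise level are chosen purely for demonstration) shows that, for an unpolarized pixel, the naively computed DoP is always positive and therefore does not average to the true value of zero.

```python
# Monte Carlo illustration of the DoP estimator bias (demonstration values only).
import numpy as np

rng = np.random.default_rng(0)
T0, T1, T2 = 0.5, 0.0, 0.0      # true Stokes values of an unpolarized pixel (DoP = 0)
sigma = 0.02                    # noise standard deviation, as in Fig. 1
n = 100_000                     # number of simulated measurements

S0 = T0 + rng.normal(0.0, sigma, n)
S1 = T1 + rng.normal(0.0, sigma, n)
S2 = T2 + rng.normal(0.0, sigma, n)

dop = np.sqrt(S1**2 + S2**2) / S0
print(f"true DoP = 0, mean measured DoP = {dop.mean():.4f}")  # clearly greater than 0
```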

Fig. 1. Simulation of an unpolarized scene with and without noise (σ = 0.02). Black represents a value of 0, white of 1. The error is large for the noisy DoP image.

This paper considers only uniformly distributed independent Gaussian noise. This noise model is only a good fit in the case of detector-limited noise but serves as an approximation of the shot-noise process and has precedent in the literature [15,16]. A noise model in which the Gaussian parameters are allowed to vary between pixels depending on the intensity would have greater general applicability to polarimetry, but BM3D, on which our algorithm PBM3D is based, assumes uniformly distributed noise. In Section 5.D, PBM3D is applied to real polarimetry and is shown to be effective, thereby justifying our choice of noise model. Throughout this paper, a noisy camera component, Ĩi, is described as follows. Let Ω denote the image domain. For all x ∈ Ω and i ∈ {0, 45, 90}: Ĩi(x) = Ii(x) + n(x), where the noise, n(x), is a normally distributed zero-mean random variable with standard deviation σ, and Ii is the true camera component.
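A minimal sketch of this noise model (our own illustration), as used later to generate the simulated test imagery:

```python
# Add independent, zero-mean Gaussian noise of standard deviation sigma to each
# camera component, following the noise model described above.
import numpy as np

def add_noise(camera, sigma, rng=None):
    """camera: dict mapping 0, 45, 90 to arrays with values in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    return {angle: img + rng.normal(0.0, sigma, img.shape)
            for angle, img in camera.items()}
```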

3. STATE OF THE ART

There are various methods for mitigating noise degradation in imaging polarimetry. For example, polarizer orientations can be chosen optimally for noise reduction [21,22]; however, this is not always possible due to constraints on the imaging system. Further reductions in noise can also be made through the use of denoising algorithms, which attempt to estimate the original image.

While vast literature exists on denoising algorithms in general, little is specifically targeted at denoising imaging polarimetry. Zhao et al. [15] approached denoising imaging polarimetry by computing Stokes components from a noisy camera using a spatially adaptive wavelet image fusion, based on [23]. A benefit of this algorithm is that the noisy camera components need not be registered prior to denoising. The algorithm of Faisan et al. [16] is based on a modified version of Buades et al.’s nonlocal means (NLM) algorithm [17]. The NLM algorithm is modified by reducing the contribution of outlier patches in the weighted average and by taking into account the constraints arising from the Stokes components having to be mutually compatible. A disadvantage of this method is that it takes a long time to denoise a single image (550 s for a 256×256 image that takes approximately 1 s using our method, PBM3D; both timings on an Intel Core i7 running at 3 GHz).

In this paper, our PBM3D algorithm will be compared with the above two algorithms. Faisan et al. [16] compared their denoising algorithm with earlier methods [24–26] and demonstrated that their NLM-based algorithm gives superior denoising performance. For this reason, comparison with these algorithms is not considered.

4. METHOD

Our approach to denoising polarization images is to adapt Dabov et al.’s BM3D algorithm [18] for use with imaging polarimetry, a novel method that we call PBM3D.

BM3D was chosen primarily for its robustness and effectiveness. Sadreazami et al. [27] recently compared the performance of a large number of state-of-the-art denoising algorithms, using three test images and four values of σ, the noise standard deviation. The authors showed that no one denoising algorithm of those tested always gave the greatest denoised peak signal-to-noise ratio (PSNR). However, BM3D was always able to give denoised PSNR values close to the best performing algorithm and, in more than half the cases, was in the top two. Another appealing aspect of BM3D is that extensions have been published for color images (CBM3D) [28], multispectral images (MSPCA-BM3D) [29], volumetric data (BM4D) [30], and video (VBM4D) [31]. This extensibility shows the versatility of the core algorithm. Sadreazami et al. found that CBM3D was the best-performing algorithm for color images with high noise levels.

A. BM3D

BM3D consists of two steps. In Step 1, a basic estimate of the denoised image is produced; Step 2 then refines the estimate produced in Step 1 to give the final estimate. Steps 1 and 2 consist of the same basic substeps, as shown in Algorithm 1.


Algorithm 1. BM3D, single step

BM3D is described more fully in [18], and thorough analysis is provided in [32].
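For readers who wish to experiment, the per-component baseline compared later in Section 5.C can be reproduced with an off-the-shelf implementation. The sketch below assumes the open-source Python bm3d package, whose main entry point takes a noisy image and the noise standard deviation; this is an assumption about a third-party API, not something specified in the paper.

```python
# Hedged usage sketch: grayscale BM3D applied independently to each camera
# component (the "BM3D" baseline of Section 5.C), assuming the PyPI bm3d package.
import bm3d  # pip install bm3d (assumed third-party package)

def denoise_components_bm3d(I0, I45, I90, sigma):
    """Denoise each camera component separately; ignores inter-channel correlation."""
    return tuple(bm3d.bm3d(channel, sigma) for channel in (I0, I45, I90))
```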

B. CBM3D

CBM3D adapts BM3D for color images [28]. Figure 2 outlines the algorithm, which works by applying BM3D to the three channels of the image in the YUV color space, again in two steps, but computing the block groups using only the Y channel. Details of CBM3D are shown in Algorithm 2.


Algorithm 2. CBM3D, single step

Fig. 2. Basic outline of the CBM3D/PBM3D denoising algorithm.

Dabov et al. [28] provide the following reasons why CBM3D performs better than applying BM3D separately to the three color channels:

  • The SNR of the intensity channel Y is greater than that of the chrominance channels.
  • Most of the valuable information, such as edges, shades, objects, and texture, is contained in Y.
  • The information in U and V tends to be low frequency.
  • Isoluminant regions, in which only U and V vary, are rare.

If BM3D is performed separately on the chrominance channels U and V, the grouping suffers [28] due to their lower SNR, and the denoising performance, which is sensitive to the grouping, is worse.

C. PBM3D

In order to optimize BM3D for polarization images, we propose taking CBM3D and replacing the RGB to YUV transformation with a transformation from the camera component image (I0, I45, I90) to a chosen polarization space, denoted generally as (P0, P1, P2). This is shown in Algorithm 3.


Algorithm 3. PBM3D, single step

The choice of T has a large effect on denoising performance. Which matrix is optimal depends on the image statistics and the noise level, both of which depend on the application. A possible choice for the polarization transform is the Stokes transform [Eqs. (4)–(6)], shown in Fig. 2. This transform results in the first channel, P0 = S0, being the intensity, which, by arguments similar to those given above for the Y channel, is likely to contain most of the valuable information. For this reason, the Stokes transform was used as a starting point in finding the optimal denoising matrix in Section 5.
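The overall structure can be sketched as follows (our own illustration under stated assumptions, not the authors' implementation). bm3d_gray is a placeholder for any grayscale BM3D routine; a faithful PBM3D implementation would compute the block groups from the first channel P0 only and reuse them for P1 and P2, which this simplified per-channel version does not reproduce.

```python
# Structural sketch of the PBM3D wrapper: transform the camera components into
# the polarization space defined by T, denoise each channel, transform back.
# bm3d_gray is a placeholder for a grayscale BM3D routine.
import numpy as np

def pbm3d_sketch(I0, I45, I90, T, sigma, bm3d_gray):
    camera = np.stack([I0, I45, I90], axis=0)              # shape (3, H, W)
    P = np.tensordot(T, camera, axes=1)                     # camera -> polarization space
    sigma_P = sigma * np.sqrt((T ** 2).sum(axis=1))         # per-channel noise level
    P_hat = np.stack([bm3d_gray(P[c], sigma_P[c]) for c in range(3)])
    return np.tensordot(np.linalg.inv(T), P_hat, axes=1)    # back to (I0, I45, I90)
```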

Here, we describe an algorithm to estimate the optimal matrix, Topt, given a set of noise-free model images, D, and a given noise standard deviation, σ.

Let Ii ∈ D be a noise-free camera component image (i.e., Ii = (I0, I45, I90)), let Ĩi be Ii with Gaussian noise of standard deviation σ added, let D̃ be the set of noisy images Ĩi, and let PBM3D_T represent the operation of applying PBM3D with transformation matrix T. Define Topt as follows:

T_{opt} = \arg\min_T \sum_i \mathrm{MSE}\left(I_i, \mathrm{PBM3D}_T(\tilde{I}_i)\right),    (9)
where MSE is the mean square error. Note that T is normalized such that, for each row (a b c), |a| + |b| + |c| = 1.

Due to the large number of degrees of freedom of T and the fact that the matrix elements can take any value in the range [−1, 1], it is not possible to perform an exhaustive search. Instead, a pattern search method can be used and is described in Algorithm 4. Note that the intervals δ and 10δ are both used to avoid converging to nonglobal minima. Results from the method are shown in Section 5.


Algorithm 4. Pattern search method
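As a rough illustration of the idea behind Algorithm 4 (our own sketch, not the authors' exact procedure), the search below perturbs one matrix entry at a time by ±δ or ±10δ, renormalizes the rows so that |a| + |b| + |c| = 1, and keeps a change whenever it reduces the objective; cost is a placeholder for the summed MSE of Eq. (9) evaluated over the noisy image set.

```python
# Sketch of a pattern search over the nine entries of T (illustration only).
import itertools
import numpy as np

def normalize_rows(T):
    return T / np.abs(T).sum(axis=1, keepdims=True)

def pattern_search(T0, cost, delta=0.01, max_iter=100):
    T_best = normalize_rows(T0.copy())
    best = cost(T_best)
    for _ in range(max_iter):
        improved = False
        for (r, c), step in itertools.product(np.ndindex(3, 3), (delta, 10 * delta)):
            for sign in (1.0, -1.0):
                T_try = T_best.copy()
                T_try[r, c] += sign * step
                T_try = normalize_rows(T_try)
                score = cost(T_try)
                if score < best:
                    T_best, best, improved = T_try, score, True
        if not improved:
            break
    return T_best
```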

5. EXPERIMENTS

A. Data Sets

In order to demonstrate the effectiveness of denoising algorithms, they must be evaluated using representative noisy test imagery. The test imagery used in these experiments comprises noise-free polarization images, with simulated noise. As noise-free polarization images cannot be produced using a noisy imaging polarimeter, we instead use a digital single-lens reflex (DSLR) camera with a rotatable polarizer in front of the lens. This approach to producing imaging polarimetry is one of the earliest [33] and has been used by various authors, e.g., [6,34].

For this approach to work, the camera sensor must have a linear response with respect to intensity, that is, Imeasured = k·Iactual, where Imeasured and Iactual are the measured and actual light intensities, and k is an arbitrary constant. The linearity can be verified using a fixed light source and a second rotatable polarizer. As one polarizer is kept stationary and the other is rotated, the intensity values measured at each pixel will produce a cosine-squared curve if the sensor is linear, according to Malus’ law [19]. The DSLR used to generate the images in this paper was a Nikon D70, whose sensor has a linear response.
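A sketch of this linearity check (our own illustration; the angle list and measurements below are hypothetical placeholders): pixel values recorded while the second polarizer is rotated are fitted to Malus' law, and a small residual indicates a linear sensor response.

```python
# Fit rotating-polarizer pixel values to Malus' law, I(theta) = A*cos^2(theta - theta0) + C.
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, A, theta0_deg, C):
    return A * np.cos(np.radians(theta_deg - theta0_deg)) ** 2 + C

angles_deg = np.arange(0, 180, 15)              # polarizer angles (hypothetical)
measured = malus(angles_deg, 1.0, 10.0, 0.05)   # substitute real pixel values here
params, _ = curve_fit(malus, angles_deg, measured, p0=[1.0, 0.0, 0.0])
residual = np.abs(measured - malus(angles_deg, *params)).max()
print(f"A = {params[0]:.3f}, theta0 = {params[1]:.1f} deg, max residual = {residual:.2e}")
```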

The images are generated as follows:

  • 1: All camera settings are set to manual for consistency between shots.
  • 2: To prevent inaccuracies due to compression, the camera is set to take images in raw format.
  • 3: The camera is placed on a tripod, or otherwise fixed, so that it is stationary.
  • 4: The polarizer is orientated to be parallel to the horizontal and an image is taken.
  • 5: The polarizer is rotated so that it is at 45 deg to the horizontal and a second image is taken.
  • 6: The polarizer is rotated so that it is orientated vertically and the final image is taken.

Given the superior SNR of modern DSLR cameras, this provides low-noise polarization images. For arbitrarily low noise levels, multiple photos for a given polarizer angle are taken, registered, and averaged. The main drawback of this method is that the light conditions and image subjects must be stationary; the method therefore cannot be used for many applications, but it still allows noise-free polarization images to be taken and so is invaluable for testing denoising algorithms.

B. Optimal Denoising Matrix

The optimal matrix for a given application is dependent on the image statistics and the noise level. In order to test the matrix optimization algorithms given in Section 4, and with no particular application in mind, we produced a set of 10 polarization images of various outdoor scenes using the method above. We added noise at several values of σ, the noise standard deviation (see Tables 1 and 2). The optimal matrices given in this section are therefore only optimal for this particular image set. However, they provide a useful starting point and are likely to be close to optimal for applications where the images involve outdoor scenes. The choice of 10 images was arbitrary; using a larger number of images would result in a more robust estimate of the optimal matrix. The I0 component of each image is shown in Fig. 3.


Table 1. PSNR Values for Images (Street, Dome, Building) Denoised Using the Following Matrices: I, Identity Matrix; S, Stokes Matrix; O, Opponent Matrix; P, Pattern Search Optimal


Table 2. Optimal Matrices Computed Using the Pattern Search Method for 10 Values of σ, the Noise Standard Deviation

Fig. 3. I0 of each image in the set.

The natural choice of transform to gain an intensity-polarization representation of a polarization image from the camera components is to use a Stokes transformation, which, after normalization, is given by

T_{stokes} = \begin{pmatrix} 1/2 & 0 & 1/2 \\ 1/2 & 0 & -1/2 \\ -1/4 & 1/2 & -1/4 \end{pmatrix}.    (10)

However, it was found during experiments that the opponent transform matrix, Topp, of CBM3D [28], given below, almost always gives superior denoising performance compared with the Stokes transform:

T_{opp} = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ 1/2 & 0 & -1/2 \\ 1/4 & -1/2 & 1/4 \end{pmatrix}.    (11)

This is logical because taking the mean of the three camera components gives a greater SNR than taking the mean of only two components, and having greater SNR gives better grouping in the PBM3D algorithm, which is important, as denoising performance is sensitive to the quality of the grouping. The opponent matrix was therefore taken as the initial matrix, T0, in the pattern search algorithm.

The pattern search method was applied to the model imagery with δ=0.01. Table 1 shows the PSNR values for images denoised using the estimated optimal matrices. It can be seen that, in every case, the matrix found using the pattern search method results in the most effective denoising.

The pattern search method was then applied at 10 sigma values, giving an estimated optimal matrix for each (Table 2).

The pattern search method was also applied to an image set containing all 10 images, each with noise added at 10 different σ values. The following matrix was found to be optimal on average across all σ values:

T_{opt} = \begin{pmatrix} 0.3133 & 0.3833 & 0.3033 \\ 0.4800 & 0.0300 & 0.5100 \\ 0.2600 & 0.5200 & 0.2200 \end{pmatrix}.    (12)

C. Comparison of Denoising Algorithms

The performance of PBM3D with a variety of images (different to those used for the matrix optimization) and noise levels was compared with the performance of several other denoising algorithms for polarization:

  • BM3D: Standard BM3D for gray-scale images applied individually to each camera component (I0, I45, I90).
  • BM3D Stokes: Standard BM3D applied individually to each Stokes component (S0, S1, S2), found by transforming the camera components.
  • Zhao: Zhao et al.’s method [15].
  • Faisan: Faisan et al.’s method [16].

In order to quantitatively compare the denoising performance, PSNR was computed for each denoised image.

For a Stokes image (S0, S1, S2) with ground truth (S0′, S1′, S2′), where S0(x) ∈ [0, 1], S1(x) ∈ [−1, 1], S2(x) ∈ [−1, 1], and x ∈ Ω, with Ω the M × N image domain, PSNR is given by

\mathrm{PSNR} = 10 \log_{10}\left(\frac{1}{\mathrm{MSE}}\right),    (13)

where

\mathrm{MSE} = \frac{1}{3MN} \sum_{x \in \Omega} \left( \left(S_0(x) - S_0'(x)\right)^2 + \frac{1}{2}\left(S_1(x) - S_1'(x)\right)^2 + \frac{1}{2}\left(S_2(x) - S_2'(x)\right)^2 \right).    (14)
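A direct transcription of Eqs. (13) and (14) into code (our own sketch, not the authors' evaluation script):

```python
# PSNR for Stokes images, with the S1 and S2 terms halved to account for their
# [-1, 1] range, as in Eq. (14).
import numpy as np

def stokes_psnr(S, S_true):
    """S, S_true: arrays of shape (3, M, N) holding (S0, S1, S2)."""
    w = np.array([1.0, 0.5, 0.5])[:, None, None]
    mse = (w * (S - S_true) ** 2).sum() / (3 * S.shape[1] * S.shape[2])
    return 10.0 * np.log10(1.0 / mse)
```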

Table 3 shows the PSNR values for four images (“oranges,” “cars,” “windows,” “statue”). The same data, along with those for four other images, are plotted in Fig. 4. It can be seen that PBM3D always provides the greatest denoising performance: every image denoised using PBM3D, at every noise level, had a greater PSNR than the images denoised using all other methods. The second-best performing method in every case was BM3D Stokes, with PBM3D giving a PSNR 0.84 dB greater on average. The difference in PSNR between images denoised using PBM3D and BM3D Stokes was greatest at intermediate noise levels; it was smaller at the lowest noise levels, and the PSNR values converged again as the noise was increased further. This convergence at the higher noise levels can be explained by the fact that the S1 and S2 components of the images become so noisy that they bear little resemblance to the ground truth, as shown in Fig. 5. Zhao’s method performed poorly, providing a smaller PSNR than the other methods at all noise levels. Faisan’s method performed worse than all of the BM3D-based methods at all noise levels (images denoised using Faisan had a PSNR 4.50 dB smaller on average than those denoised using PBM3D) but significantly better than Zhao’s method.


Table 3. PSNR for Denoising of Four Images (“Oranges,” “Cars,” “Windows,” “Statue”) Using Several Methods (B, BM3D; S, BM3D Stokes; P, PBM3D; Z, Zhao; F, Faisan) and Several Values of σ, the Standard Deviation of the Noise Added

Fig. 4. PSNR for denoised images as a function of σ, the standard deviation of noise. Above each plot is the name of the image denoised; line colors correspond to different denoising algorithms. For the top row, PSNR values are shown in Table 3. It can be seen that, for all images and all values of σ, PBM3D produces images with the greatest PSNR.

Figures 6–8 show the denoised images corresponding to the σ=0.026 row of Table 3 as well as the ground truth and noisy images. It can be seen that, as well as providing the greatest PSNR value, the visual quality of the images denoised using PBM3D is the greatest of the methods tested. In all three figures, the S0 component for the images denoised using BM3D, BM3D Stokes, and PBM3D appear similar to the ground truth, with the image denoised using Faisan appearing to be slightly less sharp. The S1 and S2 components of the images denoised using BM3D and Faisan appear to have more denoising artifacts than those denoised using BM3D Stokes and PBM3D. In the DoP components, the images denoised using PBM3D have cleaner edges, which are more similar to the ground truth than DoP components denoised using all of the other methods; this is highlighted in Fig. 9, which shows a close-up of the “window” images. The AoP components denoised using PBM3D are notably more faithful to the original than the other methods, which can be seen in Figs. 9 and 10.

Fig. 5. “Oranges” image with noise of standard deviation σ=0.15 (a high noise level), denoised using BM3D Stokes (S) and PBM3D (P) (G, ground truth). For BM3D Stokes and PBM3D, the S0 images are visually similar to the ground truth. For both methods, however, the S1 images are notably different, and the S2 images are almost unrecognizable.

Fig. 6. Polarization components of “oranges” image after application of several denoising methods. G, ground truth; N, noisy; B, BM3D; S, BM3D Stokes; P, PBM3D; Z, Zhao; F, Faisan. Noise standard deviation, σ=0.026. Note that the DoP images have been scaled such that black represents DoP=0 and white represents DoP=0.5.

Fig. 7. Polarization components of “cars” image after application of several denoising methods. G, ground truth; N, noisy; B, BM3D; S, BM3D Stokes; P, PBM3D; Z, Zhao; F, Faisan. Noise standard deviation, σ=0.026.

Fig. 8. Polarization components of “window” image after application of several denoising methods. G, ground truth; N, noisy; B, BM3D; S, BM3D Stokes; P, PBM3D; Z, Zhao; F, Faisan. Noise standard deviation, σ=0.026.

Fig. 9. Close-up of “windows” image from Fig. 8 (G, ground truth; S, BM3D Stokes; P, PBM3D). The DoP component of the image denoised using PBM3D exhibits fewer artifacts than the image denoised using BM3D Stokes, especially underneath the window. In the AoP components, the lower windows are much more faithfully represented by the image denoised using PBM3D than BM3D Stokes.

Fig. 10. Close-up of “cars” image from Fig. 7 (G, ground truth; S, BM3D Stokes; P, PBM3D). DoP components are similar for the images denoised using BM3D Stokes and PBM3D, with slight differences noticeable in the car’s bumper. Detail around the number plate of the car and panels on the right side of the image are more faithfully denoised using PBM3D than BM3D Stokes.

D. Denoising Real Polarization Imagery

To further test PBM3D with real rather than simulated noise (as has been used so far), we used a DSLR camera with a rotatable polarizer to capture the three camera components, I0, I45, I90, of a scene of several lab objects, using several exposure settings on the camera (Table 4). The exposure setting was varied in order to vary the amount of noise present. The polarization images were then denoised using PBM3D. Figure 11(a) shows the DoP of the captured image when the exposure was 0.0222 s, and Fig. 11(b) shows the DoP of the same image, denoised using PBM3D. The effect of denoising is evident: the noise perceptible in the noisy DoP image is much reduced in the denoised DoP image.


Table 4. Estimated σ, the Standard Deviation of Noise, and Wilcoxon Test Results for the Data in Fig. 12

Fig. 11. DoP image of a collection of lab objects, taken with an exposure of 0.0222 s. (a) Image without denoising. (b) Image denoised using PBM3D. The circles indicate where the true DoP value was measured using a spectrometer. It can be seen that the noisy image tends to show much larger DoP values. DoP values measured at each point are shown in Fig. 12.

In addition to the imaging polarimetry, we measured the DoP of several regions of the scene using a spectrometer. The intensity count was averaged across the wavelength range corresponding to the camera sensitivity (400–700 nm) at three different orientations of a rotatable polarizer, i.e., 0 deg, 45 deg, and 90 deg. These mean intensities, I0, I45, I90, were then used to calculate the DoP using Eq. (7). The DoP of the corresponding regions in the polarization images was also calculated using Eq. (7), with a weighting applied to each camera component (I0, I45, I90) to combine the separate RGB channels, Ii = 0.299R + 0.587G + 0.114B, which corresponds to the luminance, Y, of the YUV color space. The absolute differences between the DoP values from the spectrometry and those from the imaging polarimetry, for both the noisy images and the same images denoised using PBM3D, are shown in Fig. 12. Denoising extended the range of exposure times over which the imaging polarimetric values agreed with the spectrometry measurements. Table 4 demonstrates that, at an exposure time of 0.0222 s, when σ ≈ 0.0103, the DoP values from the noisy image become significantly different (Wilcoxon, n=24, V=43, p=0.002) from those calculated using the spectrometry measurements. In contrast, when the images were denoised using PBM3D, the exposure time could be brought down to 0.0056 s (σ ≈ 0.0363) before the DoP values became significantly different (Wilcoxon, n=24, V=73, p=0.040). Therefore, denoising using PBM3D increases the accuracy of the measurements by reducing the effect of noise, allowing approximately 3.5 times as much noise to be tolerated.
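A sketch of this comparison pipeline (our own illustration; array names are hypothetical): the RGB camera components are reduced to luminance, the DoP is computed via Eq. (7), and SciPy's Wilcoxon signed-rank test compares the paired absolute errors of the noisy and denoised estimates against the spectrometer values.

```python
# Luminance-weighted DoP from RGB camera components, and a paired Wilcoxon test
# comparing noisy vs. denoised DoP errors against spectrometer measurements.
import numpy as np
from scipy.stats import wilcoxon

def luminance(rgb):                      # rgb: array of shape (H, W, 3)
    return rgb @ np.array([0.299, 0.587, 0.114])

def dop_from_rgb(I0_rgb, I45_rgb, I90_rgb):
    I0, I45, I90 = luminance(I0_rgb), luminance(I45_rgb), luminance(I90_rgb)
    S0, S1, S2 = I0 + I90, I0 - I90, -I0 + 2.0 * I45 - I90
    return np.sqrt(S1**2 + S2**2) / S0

# dop_noisy, dop_denoised, dop_spec: 1-D arrays over the sampled regions (hypothetical)
# stat, p = wilcoxon(np.abs(dop_noisy - dop_spec), np.abs(dop_denoised - dop_spec))
```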

Fig. 12. Absolute value of the difference between the DoP values measured using the spectrometer and the noisy images, |n−s|, and between the spectrometer and the denoised images, |d−s|. The noisy and denoised images had varying noise levels, as given in Table 4. The locations where the DoP values were calculated are shown in Fig. 11. The black line indicates where |d−s| = |n−s|; it can be seen that using the denoised images for DoP measurements results in much greater agreement with the spectrometer measurements than using the noisy images, as |d−s| < |n−s| for most measurements. Table 4 uses these data to show that denoising significantly reduces the error due to noise.

6. CONCLUSION

Imaging polarimetry provides additional useful information from a natural scene compared with intensity-only imaging, and it has been found to be useful in many diverse applications. However, imaging polarimetry is particularly susceptible to image degradation due to noise. Our contribution is the introduction of a novel denoising algorithm, PBM3D, which gives superior performance when compared with state-of-the-art polarization denoising algorithms. When applied to a selection of noisy images, those denoised using PBM3D had a PSNR 4.50 dB greater on average than those denoised using the method of Faisan et al. [16] and 0.84 dB greater than those denoised using BM3D Stokes. PBM3D relies on a transformation from camera components into intensity-polarization components. We have given two algorithms for computing the optimal transformation matrix and have reported the optimal matrix for our system and data set. We have also shown that, if imaging polarimetry is used to provide DoP point measurements, denoising with PBM3D allows approximately 3.5 times as much noise to be tolerated, compared with no denoising, while still yielding accurate measurements.

Funding

Engineering and Physical Sciences Research Council (EPSRC) (EP/I028153/1); Air Force Office of Scientific Research (AFOSR) (FA8655-12-2112).

Acknowledgment

The authors thank C. Heinrich and J. Zallat for the use of their denoising code.

REFERENCES

1. E. Hecht, Optics (Addison-Wesley, 2002).

2. G. Horváth and D. Varju, Polarized Light in Animal Vision: Polarization Patterns in Nature (Springer, 2004).

3. G. Horváth, ed., Polarized Light and Polarization Vision in Animal Sciences (Springer, 2014).

4. R. Wehner and M. Müller, “The significance of direct sunlight and polarized skylight in the ant’s celestial system of navigation,” Proc. Natl. Acad. Sci. USA 103, 12575–12579 (2006). [CrossRef]  

5. M. J. How, M. L. Porter, A. N. Radford, K. D. Feller, S. E. Temple, R. L. Caldwell, N. J. Marshall, T. W. Cronin, and N. W. Roberts, “Out of the blue: The evolution of horizontally polarized signals in Haptosquilla (Crustacea, Stomatopoda, Protosquillidae),” J. Exp. Biol. 217, 3425–3431 (2014). [CrossRef]  

6. M. J. How, J. H. Christy, S. E. Temple, J. M. Hemmi, N. J. Marshall, and N. W. Roberts, “Target detection is enhanced by polarization vision in a fiddler crab,” Curr. Biol. 25, 3069–3073 (2015). [CrossRef]  

7. J. Taylor, P. Davis, and L. Wolff, “Underwater partial polarization signatures from the shallow water real-time imaging polarimeter (shrimp),” in OCEANS’02 MTS/IEEE (2002), pp. 1526–1534.

8. T. York, S. B. Powell, S. Gao, L. Kahan, T. Charanya, D. Saha, N. W. Roberts, T. W. Cronin, J. Marshall, S. Achilefu, S. P. Lake, B. Raman, and V. Gruev, “Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications,” Proc. IEEE 102, 1450–1469 (2014). [CrossRef]  

9. F. Snik, J. Craven-Jones, M. Escuti, S. Fineschi, D. Harrington, A. De Martino, D. Mawet, J. Riedi, and J. S. Tyo, “An overview of polarimetric sensing techniques and technology with applications to different research fields,” Proc. SPIE 9099, 90990B (2014). [CrossRef]  

10. W. de Jong, J. Schavemaker, and A. Schoolderman, “Polarized light camera; a tool in the counter-IED toolbox,” in Prediction and Detection of Improvised Explosive Devices (IED) (SET-117) (RTO, 2007).

11. S.-S. Lin, K. Yemelyanov, E. N. Pugh, and N. Engheta, “Polarization enhanced visual surveillance techniques,” in Proceedings of IEEE International Conference on Networking, Sensing and Control (2004), pp. 216–221.

12. D. Miyazaki and K. Ikeuchi, “Shape estimation of transparent objects by using inverse polarization ray tracing,” IEEE Trans. Pattern Anal. Mach. Intell. 29, 2018–2029 (2007). [CrossRef]  

13. A. E. R. Shabayek, O. Morel, and D. Fofi, “Bio-inspired polarization vision techniques for robotics applications,” in Handbook of Research on Advancements in Robotics and Mechatronics (IGI Global, 2015), pp. 81–117.

14. K. H. Britten, T. D. Thatcher, and T. Caro, “Zebras and biting flies: Quantitative analysis of reflected light from Zebra coats in their natural habitat,” PLoS ONE 11, e0154504 (2016). [CrossRef]  

15. Y.-Q. Zhao, Q. Pan, and H.-C. Zhang, “New polarization imaging method based on spatially adaptive wavelet image fusion,” Opt. Eng. 45, 123202 (2006). [CrossRef]  

16. S. Faisan, C. Heinrich, F. Rousseau, A. Lallement, and J. Zallat, “Joint filtering estimation of Stokes vector images based on a nonlocal means approach,” J. Opt. Soc. Am. 29, 2028–2037 (2012). [CrossRef]  

17. A. Buades, B. Coll, and J. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005). [CrossRef]  

18. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007). [CrossRef]  

19. E. Collett, Field Guide to Polarization (SPIE, 2005).

20. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45, 5453–5469 (2006). [CrossRef]  

21. J. Zallat, S. Aïnouz, and M. P. Stoll, “Optimal configurations for imaging polarimeters: impact of image noise and systematic errors,” J. Opt. A 8, 807–814 (2006). [CrossRef]  

22. R.-Q. Xia, X. Wang, W.-Q. Jin, and J.-A. Liang, “Optimization of polarizer angles for measurements of the degree and angle of linear polarization for linear polarizer-based polarimeters,” Opt. Commun. 353, 109–116 (2015). [CrossRef]  

23. S. Chang, B. Yu, and M. Vetterli, “Spatially adaptive wavelet thresholding with context modeling for image denoising,” IEEE Trans. Image Process. 9, 1522–1531 (2000). [CrossRef]  

24. G. Sfikas, C. Heinrich, J. Zallat, C. Nikou, and N. Galatsanos, “Recovery of polarimetric Stokes images by spatial mixture models,” J. Opt. Soc. Am. 28, 465–474 (2011). [CrossRef]  

25. J. Valenzuela and J. Fessler, “Joint reconstruction of Stokes images from polarimetric measurements,” J. Opt. Soc. Am. A 26, 962–968 (2009). [CrossRef]  

26. J. Zallat and C. Heinrich, “Polarimetric data reduction: a Bayesian approach,” Opt. Express 15, 83–96 (2007). [CrossRef]  

27. H. Sadreazami, M. O. Ahmad, and M. N. S. Swamy, “A study on image denoising in contourlet domain using the alpha-stable family of distributions,” Signal Process. 128, 459–473 (2016). [CrossRef]  

28. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space,” in IEEE International Conference on Image Processing (2007), Vol. 1, pp. I-313–I-316.

29. A. Danielyan, A. Foi, V. Katkovnik, and K. Egiazarian, “Denoising of multispectral images via nonlocal groupwise spectrum-PCA,” in Conference on Colour in Graphics, Imaging, and Vision (2010), pp. 261–266.

30. M. Maggioni, V. Katkovnik, K. Egiazarian, and A. Foi, “Nonlocal transform-domain filter for volumetric data denoising and reconstruction,” IEEE Trans. Image Process. 22, 119–133 (2013). [CrossRef]  

31. M. Maggioni, G. Boracchi, A. Foi, and K. Egiazarian, “Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms,” IEEE Trans. Image Process. 21, 3952–3966 (2012). [CrossRef]  

32. M. Lebrun, “An analysis and implementation of the BM3D image denoising method,” Image Process. On Line 2, 175–213 (2012). [CrossRef]  

33. L. B. Wolff, “Polarization-based material classification from specular reflection,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 1059–1071 (1990). [CrossRef]  

34. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003). [CrossRef]  



