
Scene-Based Nonuniformity Correction with Reduced Ghosting Using a Gated LMS Algorithm

Open Access

Abstract

In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison with several previously published methods, including other LMS- and constant-statistics-based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts, at the cost of a slightly longer convergence time.

©2009 Optical Society of America

1. Introduction

Detector nonuniformity is a phenomenon adversely affecting many imaging systems, particularly infrared systems [1]. In a focal plane array, the responsivity of the individual photodetectors varies from detector to detector. This creates a fixed pattern noise that degrades the acquired imagery. Furthermore, the nonuniformity tends to drift with time and environmental conditions, making a one-time factory calibration insufficient [1]. For the experimental data used here, an infrared camera is calibrated on the ground and then flown on an aircraft. As the ambient temperature and other environmental conditions change during flight, the residual nonuniformity drifts and must be treated. Many scene-based nonuniformity correction (SBNUC) techniques have been proposed to address this kind of drift. A sampling of papers in this area includes [2–12]. Scene-based methods are attractive because they do not require halting normal camera operation for periodic calibrations and do not require uniform calibration targets. Rather, these methods exploit motion in the acquired video to estimate the nonuniformity parameters and correct the imagery.

One class of SBNUC algorithms is based on the assumption that the first and second order statistics of each detector output should be the same over a sufficient number of frames. These are referred to as constant statistics (CS) methods. The original CS method assumes a Gaussian distribution of incident irradiance and is described in [2], while an extension designed to reduce ghosting is presented in [3]. A related approach that assumes that the detector outputs should have a constant range of values can be found in [4]. A Kalman filter based approach has been proposed in [5] that also uses the constant range assumption.

Another class of SBNUC techniques uses a least mean square (LMS) algorithm to adaptively determine the nonuniformity model parameters based on a “desired” image that is formed using a spatial low-pass filter [6–8]. The LMS approach was first proposed in [6, 7]. A very promising modification has been proposed in [8, 9] to reduce the LMS update rate at edges within the scene, where the “desired” image estimate is the least accurate. The approach in [10] uses a recursive least squares algorithm and exploits knowledge of the readout electronics. Other SBNUC methods that rely on estimates of the global motion between frames can be found in [11–13]. These methods can be highly effective, but carry the extra computational burden of performing image registration. Furthermore, they may also experience difficulties when complex scene motion is present and does not adhere to the assumed motion model.

One problem facing nearly all SBNUC algorithms is “ghosting” artifacts. These artifacts generally result when motion across the whole image or a portion of the image temporarily slows or halts. The static image “burns” into the correction parameters, and after the motion resumes, the old scene is still visible superimposed on the new “corrected” scene. This phenomenon occurs to a lesser extent even with continuous image motion, when the motion is not sufficiently diverse to expose each detector to a statistically similar set of scene intensities. A motion detection gate was proposed for the CS method in [3] which targets burn-in ghosting from lack of motion. The spatially adaptive approach in [8, 9] is also designed to reduce ghosting in the LMS method by reducing the update rate at edges within the scene, where burn-in is most likely to occur. While this modification significantly reduces ghosting artifacts and speeds convergence, it only slows the burn-in process and does not eliminate the burn-in potential for long motion pauses.

In this paper, we compare the CS and LMS SBNUC algorithms quantitatively and subjectively. We also propose a novel type of gating for the adaptive LMS methods. We show that the new gating method significantly reduces ghosting artifacts in the adaptive LMS SBNUC method at the cost of a somewhat longer convergence time. The remainder of this paper is organized as follows. In Section 2 we present the nonuniformity observation model and the SBNUC algorithms. Experimental results are presented in Section 3. Finally, conclusions are presented in Section 4.

2. Scene-Based Nonuniformity Correction

Here we define the CS and the LMS based SBNUC algorithms. We begin by defining the observation model.

2.1. Observation Model

We shall assume that the individual photodetectors in a focal plane array respond linearly [2–8, 10–13] and that their output is given by

$$Y_{ij}(n) = a_{ij}(n)\,X_{ij}(n) + b_{ij}(n) + \eta_{ij}(n), \tag{1}$$

where the subscript indices i, j are the spatial detector coordinates and n indicates the frame number. The true scene irradiance is given by Xij(n), and aij(n) and bij(n) are the detector scales and biases, respectively. The temporal noise is given by ηij(n) and the observed pixel value is given by Yij(n). Note that the scales and biases are functions of frame number as well as spatial location. However, we assume that the scales and biases drift very slowly in time and are almost fixed with respect to frame index.

Nonuniformity correction is performed by applying a linear mapping to the observed pixel values to provide an estimate of the true scene value, so that the detectors appear to be performing uniformly. This correction is given by

$$\hat{X}_{ij}(n) = \hat{g}_{ij}(n)\,Y_{ij}(n) + \hat{o}_{ij}(n), \tag{2}$$

for n=1,2,3, …,N. The gain and offset corrections are given by ĝij(n) and ôij(n), respectively. In many applications, the estimated scene irradiance does not need to be radiometrically accurate. A global gain and offset error is usually acceptable, so long as the detectors appear to be operating uniformly.

2.2. Constant Statistics SBNUC

The first class of SBNUC algorithms we consider comprises the CS methods proposed in [2, 3]. The idea is that if the detectors are operating uniformly and the motion in the input video spreads the scene intensities uniformly, the output of each detector should produce values that have the same temporal mean and standard deviation. A corrected image can be found using this principle by subtracting the estimated temporal mean from each pixel and dividing by the temporal standard deviation. The effective gain correction for this method is given by

$$\hat{g}_{ij}(n) = \frac{1}{\hat{S}_{ij}(n)}, \tag{3}$$

where Ŝij(n) is the estimated temporal standard deviation for detector i,j at frame n. The effective offset correction is given by

$$\hat{o}_{ij}(n) = -\frac{\hat{M}_{ij}(n)}{\hat{S}_{ij}(n)}, \tag{4}$$

where M̂ij(n) is the estimated temporal mean. Note that the image will be effectively scaled so that the pixels have a zero temporal mean and unit temporal standard deviation. Thus, a global gain and offset may be required to scale the image back to the desired dynamic range.

There are many ways to estimate the temporal statistics. The method proposed in [3] uses an exponential window to allow for slow drift in the nonuniformity parameters. A change threshold is also employed to gate the update of the statistical parameter estimates to reduce burn-in ghosting artifacts that may arise due to insufficient motion during portions of the input video. Specifically, the estimates described in [3] are given by

$$\hat{M}_{ij}(n) = \begin{cases} (1-\alpha)\,Y_{ij}(n) + \alpha\,\hat{M}_{ij}(n-1), & \left|Y_{ij}(n) - Y_{ij}(n-1)\right| > T \\ \hat{M}_{ij}(n-1), & \text{otherwise}, \end{cases} \tag{5}$$

and

$$\hat{S}_{ij}(n) = \begin{cases} (1-\alpha)\left|Y_{ij}(n) - \hat{M}_{ij}(n)\right| + \alpha\,\hat{S}_{ij}(n-1), & \left|Y_{ij}(n) - Y_{ij}(n-1)\right| > T \\ \hat{S}_{ij}(n-1), & \text{otherwise}, \end{cases} \tag{6}$$

for n=1,2,3, …,N. Note that in Eq. (6), the mean absolute deviation is actually estimated, rather than the standard deviation. It provides similar results and computational advantages [3]. We initialize the process with M̂ij(0) and Ŝij(0) set to the global spatial mean and mean absolute deviation of the first frame, {Yij(1)}, respectively. We also define Yij(0)=∞ to ensure that |Yij(1)−Yij(0)|>T for all i,j. Note that α controls the effective number of frames making a significant contribution to the current estimate. An α close to 1 produces a wide window incorporating many frames. This gives the algorithm a long convergence time, but with the potential for a more robust estimate of the statistics. Note that after 𝓝=log(0.37)/log(α) frames, the first frame is given a weight of 0.37 times that of the current frame. Thus, 𝓝 serves as a type of time constant to help in selecting and interpreting α. The change threshold T controls the minimum amount of change between frames required to trigger an update of the estimates for that detector. We refer to the CS method using the estimates above as the gated CS method.
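To make the recursion concrete, the following NumPy sketch implements Eqs. (5) and (6) and the effective correction of Eqs. (3) and (4); the function names, the streaming structure, and the small division guard are our own illustrative choices rather than part of the algorithm as published.

```python
import numpy as np

def gated_cs_update(Y, Y_prev, M, S, alpha=0.992, T=20.0):
    """One frame of the gated constant-statistics update, Eqs. (5) and (6).

    Y, Y_prev : current and previous observed frames (2-D float arrays)
    M, S      : running temporal mean and mean absolute deviation estimates
    """
    gate = np.abs(Y - Y_prev) > T                                   # change gate
    M = np.where(gate, (1 - alpha) * Y + alpha * M, M)              # Eq. (5)
    S = np.where(gate, (1 - alpha) * np.abs(Y - M) + alpha * S, S)  # Eq. (6)
    return M, S

def gated_cs_correct(Y, M, S, eps=1e-6):
    """Apply the effective CS correction of Eqs. (3) and (4), i.e. (Y - M) / S.
    The small eps guard against division by zero is our own addition."""
    return (Y - M) / (S + eps)
```

In a streaming implementation the estimates would be initialized from the first frame's global spatial mean and mean absolute deviation, as described above, and the two functions applied once per incoming frame.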

Note that for imagery with a high dynamic range, such as that produced by infrared systems, it is possible to have extreme scene values in the input data. When these extreme values factor into the estimates above, the estimates can be skewed significantly. This is true even when the extreme values are in the field of view for only a small number of frames. This can cause a burn-in effect not ameliorated by the change gate alone. To address this potential problem, we propose an additional gating condition (in addition to the change threshold). This additional constraint requires that, for an estimate update to occur in Eqs. (5) and (6), the observed pixel value must be within a specified number of mean absolute deviations of the temporal mean for the given detector. Here the temporal mean and mean absolute deviation used to define the constraint are estimated from a separate initial set of frames so as to avoid recursively biasing the estimates used for nonuniformity correction.
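The combined gate might be expressed along the following lines, where M_ref and S_ref denote the reference temporal mean and mean absolute deviation estimated from a separate initial set of frames; the helper name and the default of 4 deviations (the value used in the experiments of Section 3) are illustrative.

```python
import numpy as np

def change_and_intensity_gate(Y, Y_prev, M_ref, S_ref, T=20.0, k=4.0):
    """Combined gate: a pixel updates only if it changed by more than T since the
    previous frame AND lies within k mean absolute deviations of its reference
    temporal mean (M_ref, S_ref from a separate initial set of frames)."""
    change = np.abs(Y - Y_prev) > T
    not_extreme = np.abs(Y - M_ref) <= k * S_ref
    return change & not_extreme
```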

2.3. SBNUC Using the LMS

The second class of SBNUC algorithms considered in this paper comprises those based on LMS stochastic gradient updates [6, 7]. The idea behind these methods is that we seek to drive the corrected image towards a “desired” image that is free from nonuniformity. The gain and offset corrections are adapted using the LMS algorithm based on the stochastic gradient of the mean squared error between the corrected image and the “desired” image estimate. For this to be successful, the “desired” image should be unbiased temporally relative to the true irradiance, but can have a significant amount of error variance (since we have many frames with which to form the nonuniformity parameter estimates). When the fixed pattern noise is spatially independent and identically distributed (iid), a simple low-pass smoothing filter can be applied to the observed frames to produce a suitable “desired” image. For correlated nonuniformity, other filters or estimators may be required based on the sensor noise [10].

Note that for infrared systems with no nonuniformity correction, the raw fixed pattern noise can often exhibit highly correlated nonuniformity, such as stripes and checkerboard patterns, combined with iid nonuniformity. Such correlated patterns of nonuniformity are usually caused by nonuniformities in the readout amplifiers [10]. However, if a laboratory blackbody correction is applied prior to SBNUC, the residual nonuniformity resulting from drift can often be adequately modeled as iid. This may allow one to form a suitable “desired” image using a simple low-pass filter. Note that other low frequency nonuniformity effects may also be present after a laboratory blackbody correction. However, this paper focuses on high spatial frequency nonuniformity and we employ an FIR Gaussian smoothing filter to form the “desired” image for the LMS SBNUC algorithms. Other types of filters could be used here, such as the moving average filter in [6–9]. However, we have selected an FIR Gaussian filter here since it successfully smoothes the fixed pattern noise and has a near ripple-free frequency response. Note that if a significant number of outliers are present, due to bad pixels for example, a median filter or other outlier detection and replacement method may be required prior to the Gaussian filter to produce an unbiased “desired” image.
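As an illustration, a “desired” image of this kind could be produced with SciPy's Gaussian filter as sketched below; with sigma=5 and truncate=2.0 the kernel support is 21×21, matching the experiments in Section 3, but the helper itself (including the optional median prefilter for bad pixels) is only a sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def desired_image(Y, sigma=5.0, truncate=2.0, clean_outliers=False):
    """Form the "desired" image B(n) by FIR Gaussian smoothing of the observed frame.

    With sigma=5 and truncate=2.0 the kernel radius is round(2*5)=10, i.e. a
    21x21 support. Optionally apply a small median filter first to suppress
    bad-pixel outliers, as suggested in the text.
    """
    Y = np.asarray(Y, dtype=float)
    if clean_outliers:
        Y = median_filter(Y, size=3)
    return gaussian_filter(Y, sigma=sigma, truncate=truncate)
```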

To formally define the LMS SBNUC algorithms, we first define the error image

$$E_{ij}(n) = \hat{X}_{ij}(n) - B_{ij}(n), \tag{7}$$

for n=1,2,3, …,N. The image Bij(n) is the “desired” image (here a blurred version of the observed frame) and X̂ij(n) is the current corrected image estimate. A stochastic gradient descent algorithm can be applied to the correction parameters to seek to minimize the mean squared error [6, 7], yielding

$$\hat{g}_{ij}(n+1) = \hat{g}_{ij}(n) - \varepsilon_{ij}(n)\,E_{ij}(n)\,Y_{ij}(n), \tag{8}$$

and

$$\hat{o}_{ij}(n+1) = \hat{o}_{ij}(n) - \varepsilon_{ij}(n)\,E_{ij}(n), \tag{9}$$

for n=1,2,3, …,N−1. The parameter εij(n) is a step size that governs the convergence behavior of the algorithm. The standard LMS SBNUC uses a fixed value, εij(n)=ε. We initialize the gain and offset corrections with ĝij(1)=1 and ôij(1)=0. Note that to obtain good convergence, we have found it necessary to scale the input data to lie within the interval [0,1]. This allows the gains and offsets to converge with a common step size.
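A minimal sketch of one standard LMS iteration, combining Eqs. (2) and (7)–(9), might look as follows; the variable names are illustrative and the frame is assumed to be pre-scaled to [0,1] as noted above.

```python
import numpy as np

def lms_update(Y, B, g, o, eps=0.05):
    """One iteration of the standard LMS SBNUC, Eqs. (7)-(9).

    Y    : observed frame scaled to [0, 1]
    B    : "desired" (blurred) frame
    g, o : current gain and offset correction arrays
    """
    X_hat = g * Y + o            # Eq. (2): apply the current correction
    E = X_hat - B                # Eq. (7): error against the "desired" image
    g = g - eps * E * Y          # Eq. (8): gain update
    o = o - eps * E              # Eq. (9): offset update
    return X_hat, g, o
```

In a full implementation the corrections would be initialized as g = np.ones_like(Y) and o = np.zeros_like(Y), and this update applied once per frame of the scaled sequence.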

This algorithm is capable of converging rapidly (e.g., in as little as tens of frames). However, any bias in the “desired” image will be transferred to the corrected image estimate. The algorithm is also susceptible to burn-in ghosting when a constant and erroneous stochastic gradient is repeatedly applied during the updates. The gradient error is concentrated near dynamic regions in the scene, where the “desired” image has the largest error with respect to the true irradiance. When these erroneous gradients persist, due mainly to lack of motion, the burn-in artifact is created.

To address this weakness, a spatially adaptive LMS approach has been proposed in [8, 9] that adjusts the step size based on local spatial variance of the observed image. In dynamic regions with high spatial variance, the “desired” image is least accurate, and therefore smaller steps are taken. On the other hand, large steps are taken in flat image regions where the “desired” image is more accurate. In particular, this adaptive step size is given by

$$\varepsilon_{ij}(n) = \frac{K}{1 + M^{2}\sigma^{2}_{Y_{ij}(n)}}, \tag{10}$$

where σ²Yij(n) is an estimate of the local spatial variance centered at pixel i,j in frame n. The parameter K is the maximum step size, and M is the scaling constant used to normalize the data to the interval [0,1]. We refer to the LMS using the step size in Eq. (10) as the adaptive LMS algorithm. We have observed that this modification significantly reduces ghosting and actually increases convergence speed, since fewer large gradient steps are taken in an erroneous direction. However, because the step size is never actually set to zero with the adaptive LMS algorithm, it will not eliminate burn-in ghosting altogether. To eliminate burn-in from lack of motion, the LMS updates could be modified to include a change threshold like those in Eqs. (5) and (6). However, we have observed that better results can be obtained using the following change gating:

$$\varepsilon_{ij}(n) = \begin{cases} \dfrac{K}{1 + M^{2}\sigma^{2}_{Y_{ij}(n)}}, & \left|B_{ij}(n) - Z_{ij}(n)\right| > T \\ 0, & \text{else}, \end{cases} \tag{11}$$

and

$$Z_{ij}(n+1) = \begin{cases} B_{ij}(n), & \left|B_{ij}(n) - Z_{ij}(n)\right| > T \\ Z_{ij}(n), & \text{else}, \end{cases} \tag{12}$$

for n=1,2,3, …,N−1. We define Zij(1)=∞ to ensure that |Bij(1)−Zij(1)|>T for all i,j. Note that here we are detecting change in the “desired” image at a given pixel location relative to the value of the “desired” image at the last frame used to update that pixel. We are not simply looking for frame-to-frame change. Detecting only significant frame-to-frame change would tend to exclude slowly varying image regions, where the LMS does best, and limit updates to mostly sharp edges, where the gradient error tends to be the largest. A similar change statistic could be defined using the observed image, rather than the low-pass filtered image. However, the “desired” image provides the additional benefit of temporal noise smoothing from the Gaussian low-pass filtering. We refer to the LMS using the step size in Eq. (11) as the gated adaptive LMS algorithm.
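Gathering Eqs. (10)–(12), one frame of the gated adaptive LMS might be implemented roughly as follows. The local-variance window size is an assumption (the paper does not specify it), the default constants follow the simulated-data experiment in Section 3, and whether the gate threshold T is expressed in raw or scaled intensity units must simply match the units of B and Z.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(Y, size=7):
    """Local spatial variance of the observed frame (the 7x7 window is an assumption)."""
    m = uniform_filter(Y, size=size)
    m2 = uniform_filter(Y * Y, size=size)
    return np.maximum(m2 - m * m, 0.0)

def gated_adaptive_lms_update(Y, B, Z, g, o, K=50.0, M=255.0, T=20.0, win=7):
    """One frame of the gated adaptive LMS, Eqs. (10)-(12).

    Y : observed frame scaled to [0, 1];  B : "desired" (blurred) frame
    Z : per-pixel value of B at the last frame that triggered an update
    """
    step = K / (1.0 + (M ** 2) * local_variance(Y, size=win))  # Eq. (10): adaptive step
    gate = np.abs(B - Z) > T                                   # Eq. (11): change condition
    step = np.where(gate, step, 0.0)                           # Eq. (11): zero step where no change
    X_hat = g * Y + o                                          # Eq. (2)
    E = X_hat - B                                              # Eq. (7)
    g = g - step * E * Y                                       # Eq. (8) with gated adaptive step
    o = o - step * E                                           # Eq. (9) with gated adaptive step
    Z = np.where(gate, B, Z)                                   # Eq. (12): latch B where updated
    return X_hat, g, o, Z
```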

3. Experimental Results

To compare the various SBNUC algorithms, and in particular to demonstrate the efficacy of the proposed gated adaptive LMS algorithm, we use a visible video sequence with simulated nonuniformity and an infrared video sequence with real nonuniformity.

3.1. Simulated Data

Here live video is obtained by manually panning an 8 bit visible camera in an interior room setting. We artificially create scale and bias nonuniformity by applying the model in Eq. (1). The scale and bias nonuniformity parameters are generated as realizations of iid Gaussian random variables. The scale nonuniformity parameters have a mean of 1 and standard deviation of 0.1, and the bias nonuniformity parameters have a mean of 0 and standard deviation of 10. These data allow for quantitative error analysis as we have access to the “true” irradiance values. The mean absolute error (MAE) versus frame number is shown in Fig. 1 for the SBNUC algorithms defined in the previous section. The results in Fig. 1 are very typical of the numerous video sequences tested. Note that for the first 500 frames the camera was moved in a steady and consistent manner to minimize any burn-in and allow the algorithms to converge. Between frames 500–550, 600–650 and 800–900 the camera was held stationary to challenge the algorithms with burn-in conditions. The CS method uses an exponential window parameter of α=0.992 (𝓝≈124). The gated CS method uses the same α and a change threshold of T=20. All of the LMS methods use a step size of ε=0.05 and an FIR Gaussian low-pass filter with a standard deviation of 5 pixels and kernel size of 21×21. The adaptive LMS methods use K=50 and M=255, and the gated adaptive LMS uses a change threshold of T=20.
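The corruption step can be reproduced along the following lines, applying the model of Eq. (1) with iid Gaussian gain and bias fields matching the parameters above; the temporal noise term is omitted here and the seeding is purely illustrative.

```python
import numpy as np

def simulate_nonuniformity(frames, gain_std=0.1, bias_std=10.0, seed=0):
    """Corrupt a clean sequence with fixed gain/bias nonuniformity per Eq. (1).

    frames : array of shape (N, H, W) containing the true scene values
    Gains are drawn iid N(1, gain_std^2) and biases iid N(0, bias_std^2),
    fixed over the sequence; temporal noise is omitted in this sketch.
    """
    rng = np.random.default_rng(seed)
    H, W = frames.shape[1:]
    a = rng.normal(1.0, gain_std, size=(H, W))   # per-detector gain
    b = rng.normal(0.0, bias_std, size=(H, W))   # per-detector bias
    return a * frames + b, a, b
```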

Note that the gated CS method significantly outperforms the standard CS method, even without pauses in the motion (i.e., during the first 500 frames). During the pauses, the gated CS error remains constant. The error for the standard CS method rises during the pauses where burn-in is occurring and then increases rapidly once motion resumes as the ghosting artifacts corrupt the output. The LMS SBNUC methods clearly converge much faster than the CS methods, producing arguably useful images after approximately 30 frames. These methods also converge to a lower MAE value than the CS methods. Note that the adaptive LMS converges the fastest. However, at the motion pauses, the LMS and adaptive LMS updates start to see the same gradient repeatedly applied and the output begins to look like the “desired” image (i.e., the Gaussian blurred image). Thus, the error can be seen to rise during the pauses. After motion resumes, the methods begin to quickly recover, but exhibit noticeable ghosting for the following 50 frames. Like the gated CS error, the gated adaptive LMS error remains constant during the pauses due to the gating operation.

Fig. 1. Mean absolute error versus frame number for the various SBNUC algorithms using simulated nonuniformity data.

Using this same dataset, we also compared applying gating for the adaptive LMS using the blurred image, as defined above, with an alternative that uses the observed images for gating. The average MAE for frames 950–1000 is 2.98 when the gating is applied to the blurred image and is 3.24 when using the observed imagery. Thus, the gating operation does appear to be more robust using the blurred image.

Figure 2 (Media 1) shows the images for frame 546 (immediately after the first pause). Figure 2(a) shows the true scene irradiance. The image corrupted with simulated nonuniformity is shown in Fig. 2(b). The outputs using the gated CS, LMS, adaptive LMS, and gated adaptive LMS are shown in Figs. 2(c)–2(f), respectively. Notice the obvious ghosting in the non-gated LMS outputs. The gated CS and gated adaptive LMS images both appear to be well corrected, but the error for the gated CS is higher. By inspecting the error images for these two methods, shown in Fig. 3 scaled identically, it is clear that the gated CS method has more low frequency error. While this error contributes to the quantitative MAE, it may not be particularly objectionable visually in many applications.

Fig. 2. Simulated nonuniformity image results (Media 1). (a) Uncorrupted image (b) image with simulated gain and bias nonuniformity (c) corrected using the gated CS method (d) corrected with LMS (e) corrected with adaptive LMS (f) corrected with proposed gated adaptive LMS.

Fig. 3. Absolute error images for (a) gated CS SBNUC (b) gated adaptive LMS SBNUC.

3.2. Real Infrared Imagery

The second set of imagery comes from an infrared (IR) imager on an airborne platform. The camera is equipped with a 1024×1024 Santa Barbara Focalplane array with a detector pitch of 19.5 µm, producing 14-bit data. The optics have a focal length of 120 mm and an f-number of 2.3. The video is acquired at 8 Hz. The sensor was calibrated with a laboratory blackbody correction prior to the data collection. Residual low frequency nonuniformity has been corrected using a regression algorithm with an offset circularly-symmetric polynomial model, and bad pixels have been replaced. Subtle residual high spatial frequency nonuniformity remains.

We have 2235 frames with consistent motion. In the first experiment with the IR data, we use frames 2 through 2235 and repeat frame 2235 100 times (simulating a pause in motion). We then go to frame 1 and evaluate the various SBNUC algorithms on this frame. The results are shown in Fig. 4. The images are shown with unsharp masking to better reveal the subtle high frequency nonuniformity. The unsharp masking operation is a linear filter with a high-boost frequency response. The input image is shown in Fig. 4(a). The outputs using the CS, gated CS, LMS, adaptive LMS, and gated adaptive LMS are shown in Figs. 4(b)–4(f), respectively. The CS method uses an exponential window parameter of α=0.995 (𝓝≈198). The gated CS method uses the same α and a change threshold of T=100. The LMS methods use a step size of ε=0.05 and an FIR Gaussian low-pass filter with a standard deviation of 5 pixels and a kernel size of 21×21. The adaptive LMS methods use K=100 and M=2¹⁴−1, and the gated adaptive LMS uses a change threshold of T=100. Notice the obvious ghosting in the non-gated LMS and CS outputs. The gated CS and gated adaptive LMS images do not have this ghosting artifact. It does appear, however, that the gated adaptive LMS has done a better job of reducing the high spatial frequency nonuniformity. We also applied an offset-only version of the SBNUC algorithms to the real infrared imagery. The outputs for the offset-only gated CS and gated adaptive LMS are shown in Fig. 5.

Since the true image is not known for the infrared data, it is not possible to evaluate the methods by comparing the output to the true image. However, we believe one powerful way to evaluate SBNUC algorithms on real data is to estimate the central frame in a sequence two different ways. One estimate is formed using the preceding frames, and the other estimate is formed using the subsequent frames in reverse order. Ideally, both estimates would be identical, and also equal to the true central image in the sequence. Error between the two estimates represents a type of hysteresis or inconsistency for the SBNUC estimator. Here we estimate frame 1118 in our sequence both ways and compute the absolute difference images and mean absolute difference (MAD) values for the various estimators, keeping the same tuning parameters used in the previous experiment. The results are shown in Fig. 6 where the absolute errors are all mapped identically to a grayscale colormap. Note that the MAD value provides a bound on the average MAE for two estimates with a given estimator. In particular, it can be shown that one half of the MAD value is less than or equal to the average MAE for the two estimates relative to the unknown true image. Basically, a low MAD does not guarantee a good estimate, but a high MAD does indicate a poor average MAE for the estimator. Note that the LMS SBNUC estimators are more consistent with these data and the gated adaptive LMS algorithm has the lowest MAD value. These results again suggest that the CS methods tend to introduce a low spatial frequency error with scene structure not seen with the LMS methods. The CS methods are clearly far more sensitive to the scene content and the distribution of that content over the course of the image sequence. The hysteresis results for the offset-only version of the gated CS and gated adaptive LMS are shown in Fig. 7.
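The bound cited above follows from the triangle inequality. Writing $\hat{X}^{(1)}$ and $\hat{X}^{(2)}$ for the forward and backward estimates of the central frame, $X$ for the unknown true image, and using an overbar to denote the average over all pixels,

$$\mathrm{MAD} = \overline{\left|\hat{X}^{(1)} - \hat{X}^{(2)}\right|} \;\le\; \overline{\left|\hat{X}^{(1)} - X\right|} + \overline{\left|\hat{X}^{(2)} - X\right|} = 2\,\overline{\mathrm{MAE}},$$

so that one half of the MAD is no greater than the average of the two MAE values relative to the true image.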

Fig. 4. Infrared image results shown with unsharp masking enhancement. (a) Raw image with residual nonuniformity (b) CS method (c) gated CS method (d) LMS (e) adaptive LMS (f) gated adaptive LMS.

Fig. 5. Corrected images using offset-only SBNUC shown with unsharp masking enhancement. (a) Gated CS (b) gated adaptive LMS.

To demonstrate the potential advantages of using an intensity gating for the CS method for high dynamic range images, we have selected an output frame containing a hot steam pipe that gives rise to extreme pixel values. The results are shown in Fig. 8. The algorithms utilize all 2235 frames leading up to the image shown in Fig. 8(a). The outputs of the CS, gated CS, change and intensity gated CS, LMS, and gated adaptive LMS are shown in Figs. 8(b)–8(f), respectively. The same algorithm parameters as above are used here, and the intensity threshold for the output in Fig. 8(d) is set to 4 mean absolute deviations from the temporal mean for each pixel. The change-only gated CS method shows significant artifacts near the pipe, including the road intersection area. The change and intensity gated CS method has significantly reduced artifacts. Note that the gated adaptive LMS output in Fig. 8(f) also appears to be robust to the extreme values. This is because the extreme values produce high local variances, which lowers the LMS step size in Eq. (10), preventing the extreme values from burning in. Note that for this same reason, it is important that bad pixels be treated prior to using the adaptive LMS SBNUC. This is because outliers will boost the local variance and possibly prevent proper convergence of the LMS in the vicinity of such a bad pixel.

Another metric to evaluate SBNUC algorithms, similar to that used in [5, 9], is a sharpness metric. The idea is that successful SBNUC should attenuate high frequency energy due to fixed pattern noise. It should be noted that this metric cannot distinguish between true high frequency energy and that from nonuniformity. However, when taken along with other metrics and subjective evaluation, this can be a useful measure. The sharpness metric is given by

$$\rho = \frac{\left\|\hat{X} * h\right\|_{1}}{\left\|\hat{X}\right\|_{1}}, \tag{13}$$

where h is a discrete Laplacian convolution kernel and ‖·‖₁ refers to the L₁ norm. These results for the image estimates used in Fig. 4 are shown in Table 1. Also shown in the table are the corresponding hysteresis results from above.
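A possible implementation of Eq. (13) is sketched below; the 3×3 Laplacian kernel is a common choice, not one specified in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def sharpness(X_hat):
    """Sharpness metric of Eq. (13): ||X_hat * h||_1 / ||X_hat||_1.
    The 3x3 discrete Laplacian below is an assumed form of the kernel."""
    h = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    lap = convolve(np.asarray(X_hat, dtype=float), h, mode='reflect')
    return np.abs(lap).sum() / np.abs(X_hat).sum()
```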

Fig. 6. Hysteresis MAD images for various SBNUC algorithms. (a) CS (MAD=89.26) (b) gated CS (MAD=59.60) (c) change and intensity gated CS (MAD=44.77) (d) LMS (MAD=26.56) (e) adaptive LMS (MAD=7.86) (f) gated adaptive LMS (MAD=7.36).

Fig. 7. Hysteresis MAD images for offset-only SBNUC with (a) gated CS (MAD=58.82) (b) gated adaptive LMS (MAD=4.79).


Table 1. Quantitative analysis of gated SBNUC method on real infrared imagery.

In a final experiment, we test the SBNUC methods on the same infrared image sequence but with no prior blackbody correction. In this case, the nonuniformity is much stronger and poses a bigger challenge to the robustness of the SBNUC methods. The results are shown in Fig. 9. A single raw frame is shown in Fig. 9(a). Note that there are significant striping effects, most likely due to nonuniformity in the readout electronics [10]. There are also a number of outliers/bad pixels. For such imagery, it is difficult for a Gaussian smoothing filter working alone to produce an unbiased “desired” image estimate, free from striping and bad pixel effects. Thus, to produce a good “desired” image for the LMS method here, we use a multi-step process. First, outlier pixels are replaced. Next, we force the columns of each image to have the same average value to reduce striping. This partially corrected image is shown in Fig. 9(b). Finally, we apply the Gaussian blurring filter to the partially corrected image to form the “desired” image. Beyond that, all of the same methods and algorithm tuning parameters used for the residual nonuniformity correction are used. Thus, by use of creative means to form an unbiased “desired” image, the LMS SBNUC methods can be effective, even with significant levels of nonuniformity.
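One way to realize this preprocessing is sketched below; the outlier rule, its threshold, and the choice of equalizing every column to the global mean are illustrative stand-ins for the actual bad-pixel map and destriping procedure used with the real sensor.

```python
import numpy as np
from scipy.ndimage import median_filter

def destripe(Y, outlier_thresh=5.0):
    """Rough preprocessing for heavily striped raw data: replace outlier pixels,
    then force every column to share the same average value before blurring."""
    Y = np.asarray(Y, dtype=float)
    med = median_filter(Y, size=3)
    resid = np.abs(Y - med)
    bad = resid > outlier_thresh * (np.median(resid) + 1e-6)   # crude bad-pixel detection (assumption)
    Y = np.where(bad, med, Y)                                  # replace flagged pixels with local median
    col_means = Y.mean(axis=0)
    return Y - col_means + col_means.mean()                    # equalize column averages
```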

Fig. 8. Infrared image results with extreme pixel values. (a) Raw image. Corrected using the (b) CS method (c) gated CS method (d) change and intensity gated CS method (e) LMS (f) gated adaptive LMS.

Fig. 9. Infrared image results with no prior blackbody correction. (a) Raw image with no blackbody correction (b) output after bias destriping correction (c) gated CS method (d) gated adaptive LMS.

4. Conclusions

In this paper, we have compared the CS and LMS SBNUC algorithms, which represent perhaps the most commonly employed methods. Error for the CS methods generally results from the scene intensities not being distributed across the focal plane array in a statistically uniform manner. These methods rely on the assumption that scene motion and the spatial distribution of intensity combine to provide uniform first and second order statistics for calibrated pixels. The CS algorithms will generally perform best with diverse camera/scene motion and a large number of frames. The CS methods can treat both high and low spatial frequency nonuniformity simultaneously. The CS methods are also the simplest computationally. We have observed that the performance of the LMS algorithms is generally limited by the error between the “desired” image and the true irradiance image. The LMS-based methods are less sensitive to the spatial/temporal distribution of intensity. For the offset-only LMS methods, minimal diversity of spatial/temporal intensity is required for good performance. This is helpful, for example, when panning along a skyline, where the upper part of the imagery consistently sees a different range of intensities than the lower portion. For gain and offset correction, the LMS SBNUC methods do require some intensity diversity, but the intensities do not need to be uniformly distributed across the sensor as required by the CS methods.

The quantitative error analysis for high spatial frequency nonuniformity shows that the CS methods have significantly longer convergence times than the LMS methods, as well as higher overall MAE. Furthermore, the hysteresis analysis indicates that the CS methods are much more sensitive to the frame history. The sharpness results also significantly favor the LMS methods on the real infrared imagery. Based on these measures and subjective analysis, the LMS-based methods appear better suited to treating high spatial frequency detector nonuniformity. However, a separate low spatial frequency nonuniformity correction module is required with the LMS SBNUC, if such nonuniformity is also present.

For addressing the residual nonuniformity in the real infrared imagery used here, we have found that the offset-only versions of the LMS algorithms appear to slightly outperform the gain-and-offset versions, in terms of hysteresis MAD values and sharpness. This may suggest that the residual nonuniformity does not have a significant gain component. This observation is consistent with laboratory blackbody calibrations done at different temperatures. In that analysis, the detector gains were not affected much by a change in ambient temperature, but the biases were. For the CS methods, the gain-and-offset version appeared to perform slightly better, subjectively and quantitatively, than the offset-only version. We have observed that for the CS methods, allowing gain corrections (even when no gain nonuniformity is present) reduces the sensitivity of the CS methods to an imperfect distribution of scene intensities. For example, if one area of the detector is only exposed to bright scene content, the resulting offsets will be excessively large negative numbers. However, the standard deviation for those detectors will also tend to be small due to poor diversity of intensities. This will boost the gain correction for those pixels, which partially compensates for the erroneously large negative offsets.

Finally, temporal change gating appears to be critical for both the CS and LMS methods when constant motion cannot be guaranteed. These gating algorithms help to prevent burn-in and the corresponding ghosting artifacts. We have shown that the change and intensity gating can be helpful for the CS method for high dynamic range imaging systems where extreme values may be encountered. Such extreme values do not appear to present a problem for the adaptive LMS SBNUC, since high variance areas give rise to small step sizes. The gated adaptive LMS convergence is slower than that for its non-gated counterpart, but remains much faster than the CS methods.

Acknowledgments

The authors would like to thank Kenneth Barnard, Mark Bicknell and William Turri for supporting this project and for providing technical feedback. This work was sponsored under AFRL contracts FA8650-04-2-4201 and FA8650-06-D-1078.

References and links

1. A. F. Milton, F. R. Barone, and M. R. Kruer, “Influence of non-uniformity on infrared focal plane arrays performance,” Optical Engineering 24(5), 855–862 (1985).

2. Y. M. Chiang and J. G. Harris, “An Analog Integrated Circuit for Continuous-time Gain and Offset Calibration of Sensor Arrays,” Journal of Analog Integrated Circuits and Signal Processing 12, 231–238 (1997).

3. J. G. Harris and Y.-M. Chiang, “Minimizing the Ghosting Artifact in Scene-Based Nonuniformity Correction,” in SPIE Conference on Infrared Imaging Systems: Design, Analysis, Modeling, and Testing IX, vol. 3377 (Orlando, Florida, 1998).

4. M. M. Hayat, S. N. Torres, E. E. Armstrong, S. C. Cain, and B. J. Yasuda, “Statistical Algorithm for Nonuniformity Correction in Focal-plane Arrays,” Applied Optics 38(5), 772–780 (1999).

5. S. N. Torres and M. M. Hayat, “Kalman Filtering for Adaptive Nonuniformity Correction in Infrared Focal Plane Arrays,” Journal of the Optical Society of America A 20(3), 470–480 (2003).

6. D. A. Scribner, K. A. Sarkady, M. R. Kruer, J. T. Caulfield, J. D. Hunt, and C. Herman, “Adaptive Nonuniformity Correction for IR Focal Plane Arrays using Neural Networks,” in Proceedings of the SPIE: Infrared Sensors: Detectors, Electronics, and Signal Processing, T. S. Jayadev, ed., vol. 1541, pp. 100–109 (1991).

7. D. A. Scribner, K. A. Sarkady, M. R. Kruer, J. T. Caulfield, J. Hunt, M. Colbert, and M. Descour, “Adaptive Retina-like Preprocessing for Imaging Detector Arrays,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 3, pp. 1955–1960 (San Francisco, CA, 1993).

8. S. N. Torres, E. M. Vera, R. A. Reeves, and S. K. Sobarzo, “Adaptive Scene-Based Nonuniformity Correction Method for Infrared Focal Plane Arrays,” in SPIE Conference on Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XIV, vol. 5076 (Orlando, Florida, 2003).

9. E. M. Vera and S. N. Torres, “Fast Adaptive Nonuniformity Correction for Infrared Focal-Plane Array Detectors,” EURASIP Journal on Applied Signal Processing 13, 1994–2004 (2005).

10. B. Narayanan, R. C. Hardie, and R. A. Muse, “Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture,” Applied Optics 44(17), 3482–3491 (2005).

11. R. C. Hardie, M. M. Hayat, E. E. Armstrong, and B. J. Yasuda, “Scene-based Nonuniformity Correction with Video Sequences and Registration,” Applied Optics 39(8), 1241–1250 (2000).

12. B. M. Ratliff, M. M. Hayat, and R. C. Hardie, “An Algebraic Algorithm for Nonuniformity Correction in Focal Plane Arrays,” Journal of the Optical Society of America A 19(9), 1737–1747 (2002).

13. R. C. Hardie and D. R. Droege, “A MAP Estimator for Simultaneous Super-Resolution and Detector Nonuniformity Correction,” EURASIP Journal on Advances in Signal Processing 2007, Article ID 89354 (2007).

Supplementary Material (1)

Media 1: AVI (3021 KB)     
