Optica Publishing Group

Scene-based nonuniformity correction for airborne point target detection systems

Open Access

Abstract

Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively from the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method retains the characteristics of traditional scene-based methods: it calibrates the whole electro-optics chain and compensates for temporal drifts, although, like them, it does not preserve the radiometric accuracy of the system. Moreover, the proposed method is robust, since the frame number is the only variable, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.

© 2017 Optical Society of America

1. Introduction

Infrared focal-plane arrays (FPAs) are widely used in airborne infrared search and track (IRST) systems [1–3], which offer night-vision, counter-stealth, and haze-penetrating capabilities. FPAs are often heavily contaminated with nonuniformity noise, which arises from the nonhomogeneous radiance response and the readout circuitry of the individual pixels. As the diffraction spot of a long-distance target occupies no more than 3 × 3 pixels, poor nonuniformity correction (NUC) leads to a low detection rate, a high false alarm rate, and a reduced detection range. Thus, effective NUC is essential for subsequent target detection.

NUC methods can be broadly classified into two major categories. The first category includes the most commonly used blackbody-based NUC (BBNUC) methods, where all the pixels are mapped linearly to the average response of a uniform radiance source [4]. These methods are simple and easy to implement. An infrared (IR) camera is characterized and calibrated on the ground before it is mounted on an aircraft for airborne operation. However, the radiance response of detectors, especially long-wave infrared (LWIR) detectors, drifts slowly over time. Moreover, imaging conditions in the air, including the temperature and atmospheric pressure, differ considerably from those on the ground, resulting in surface deformations of the camera lens. These factors mean that a one-time laboratory calibration is insufficient. Thus, achieving high precision requires airborne equipment to undergo costly in-flight calibration periodically. In IRST systems, the primary lens, as the only external optical part, is often designed with a large aperture to increase the signal-to-noise ratio (SNR). Restricted by volume and weight, it is impractical to calibrate the entire optical system with a blackbody of the same size as the primary lens. Instead, calibration is accomplished by switching a shutter or an internal blackbody into the optical path [5,6], which requires normal camera operation to be halted. In other words, the in-flight calibration remains an internal calibration. As we argued in a previous paper, the BBNUC method fails to compensate for the response nonlinearity in staircase scenes from IRST systems [7]. That is, the precision of BBNUC decreases when the scene temperature deviates from that of the reference point.

The second category comprises scene-based NUC (SBNUC) methods, which consist of statistics- and registration-based methods. Statistical methods rely on spatio-temporal assumptions, have low computational complexity and excellent real-time performance, but are motion dependent. The global constant statistics (GCS) method assumes that the first- and second-order statistics of observed scenes are constant over time [8]. This assumption is invalid in extreme scenes, such as the staircase scene in IRST systems. Moreover, the lack of motion in certain applications, such as safety monitoring, causes ghosting artifacts. The gated least mean square (LMS) algorithm significantly eliminates ghosting artifacts by halting updates when temporal variation is lacking [9], but decelerates convergence. The local constant statistics method improves the GCS method by a more reasonable assumption that the statistics of the scene are constant locally, but uneven globally [10]. A more robust SBNUC method for pushbroom detectors has been proposed [11], where the spatial assumption further converges to adjacent pixels. However, this method is not applicable to staring FPAs. The adjacent differential statistics (ADS) method based on the one-point bias (OPB) model lacks the ability to compensate for response nonlinearities [12]. Other typical methods include neural network [13], total variation [14–16], and Kalman filter methods [17].

Registration-based methods assume that different pixels respond identically for the same scene point within certain blocks of time [18–20]. However, an accurate registration in IRST is difficult, due to the low contrast of scenes filled with the sky, and image motion caused by scanning and high vibrations. Additionally, these methods may not be effective under particular conditions, such as local motion within clouds, scene rotation from high-frequency vibration of aerial cameras, and rapid changes in scene irradiation. Thus, registration-based methods are impractical in IRST systems.

Our research aims to improve the precision of statistical SBNUC from a different perspective; that is, by improving the simplicity and effectiveness of the statistical assumption and correction model, we attempt to improve the accuracy and practical applicability of SBNUC. In essence, the gain dominates the linear model; therefore, correcting only the gain should be as effective as correcting both the gain and the bias. This reasoning led us to propose a novel SBNUC method, named constant statistics of adjacent ratios (CSAR), for airborne point target detection systems. The method corrects the gain nonuniformity under the assumption that the radiation received by adjacent pixels is statistically constant. The proposed method can compensate for nonuniformity and nonlinear response with low computational complexity and few ghosting artifacts. Its validity was verified through a theoretical analysis and by investigating its practical application in IRST systems.

This paper is organized as follows. In Section 2, the proposed CSAR method is introduced in detail, and the mathematical model of the correction error is established. Section 3 presents the experimental results on different scenes. A comprehensive discussion of parameter selection and implementation issues is presented in Section 4. Finally, Section 5 concludes this paper.

2. Nonuniformity correction

2.1. Nonuniformity observation model

In general, the correction model is linearly approximated as

$$J_{i,j}(f)=k_{i,j}I_{i,j}(f)+b_{i,j}+n_{i,j},$$
where $J_{i,j}(f)$ and $I_{i,j}(f)$ are the gray values of the $(i,j)$th pixel in the corrected image $J(f)$ and the raw image $I(f)$ of the $f$-th frame, respectively, $k_{i,j}$ is the gain, $b_{i,j}$ is the bias, and $n_{i,j}$ is the additive random noise. Here, $i\in[1,M]$ and $j\in[1,N]$ denote the row and column indices, respectively. The coordinate origin is at the upper left.

The coefficient $k_{i,j}$ dominates the linear model, as demonstrated theoretically and experimentally in subsection 3.1 and Appendix A. Also, $n_{i,j}$ is omitted to facilitate the analysis of nonuniformity. In this paper, by setting the coefficient $b_{i,j}$ to zero, the two-point (TP) model in Eq. (1) degenerates to the one-point gain (OPG) model as

$$J_{i,j}(f)=k_{i,j}I_{i,j}(f).$$
Then, it becomes possible to solve the NUC problem for the coefficient $k_{i,j}$.

2.2. CSAR method

The basic assumption is that the radiation that strikes each detector and its four nearest neighbors is usually similar. This observation is expressed by the median of adjacent ratios as

$$\operatorname{mid}_F\!\left[\frac{J_{i,j}(f)}{\sqrt[4]{J_{i-1,j}(f)\,J_{i+1,j}(f)\,J_{i,j-1}(f)\,J_{i,j+1}(f)}}\right]=1.$$
Here, the operator $\operatorname{mid}_F[\cdot]$ returns the median of the $F$ values. Next, Eq. (2) is substituted into Eq. (3), and because the $k_{i,j}$-related terms are constant with respect to the frame index $f$, they can be moved outside the operator $\operatorname{mid}_F[\cdot]$ to obtain
$$\frac{k_{i,j}}{\sqrt[4]{k_{i-1,j}\,k_{i+1,j}\,k_{i,j-1}\,k_{i,j+1}}}\operatorname{mid}_F\!\left[\frac{I_{i,j}(f)}{\sqrt[4]{I_{i-1,j}(f)\,I_{i+1,j}(f)\,I_{i,j-1}(f)\,I_{i,j+1}(f)}}\right]=1.$$

Note that Eq. (4) is the most fundamental form of the method. In practice, however, solving for $k_{i,j}$ directly through Eq. (4) is difficult. Reducing the number of adjacent neighbors from four to two simplifies Eq. (4). As the nonuniformity of staring FPAs exists in two directions, two neighbors, one each in the horizontal and vertical directions, are selected; namely, the neighbors to the left of and above the pixel, which lie nearer the coordinate origin. Then, Eq. (4) is simplified to

$$\frac{k_{i,j}}{\sqrt{k_{i-1,j}\,k_{i,j-1}}}\operatorname{mid}_F\!\left[\frac{I_{i,j}(f)}{\sqrt{I_{i-1,j}(f)\,I_{i,j-1}(f)}}\right]=1.$$

This reduction in the number of neighbors is reasonable, because the relevance between a pixel and its four neighbors can be established either through the single equation in Eq. (4) or the three equations in Eq. (5), as illustrated in Fig. 1.

Fig. 1 Relevance with four adjacent pixels through (a) Eq. (4) and (b) Eq. (5).

For pixels where $i\in[2,M]$ and $j\in[2,N]$, the ratio of a pixel to the geometric mean of its left and upper neighbors is defined as

$$r_{i,j}(f)=\frac{I_{i,j}(f)}{\sqrt{I_{i-1,j}(f)\,I_{i,j-1}(f)}}.$$
Then, denote the median ratio as $\tilde r_{i,j}=\operatorname{mid}_F[r_{i,j}(f)]$. Derived from Eqs. (5) and (6), the solution for the coefficient $k_{i,j}$ takes the following form:
$$k_{i,j}=\frac{\sqrt{k_{i-1,j}\,k_{i,j-1}}}{\tilde r_{i,j}}.$$

Figure 2 presents the proposed method diagrammatically. The calculation proceeds as follows. First, calculate the ratio between adjacent pixels in each frame. Then, obtain the median matrix $\tilde r$ of the ratios over $F$ frames. Lastly, solve for $k_{i,j}$ in a recursive manner. Note that the solution $k_{i,j}$ of Eq. (5) is not unique: if $k^{0}_{i,j}$ is a solution, then $ck^{0}_{i,j}$ is also a solution for any nonzero constant $c$. To arrive at a unique solution, we assume without loss of generality that $k_{1,1}=1$.
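For illustration, the first two steps, computing the adjacent ratios of Eq. (6) and taking their temporal median, can be sketched in NumPy as follows (a minimal sketch with zero-based indexing; the function name and array layout are ours, and border pixels are omitted here):

```python
import numpy as np

def median_adjacent_ratios(frames):
    """Median ratio matrix of Eq. (6) from F raw frames of shape
    (F, M, N).  Only interior pixels (row and column index >= 1,
    zero-based) are filled; the first row and column are left at 1."""
    I = np.asarray(frames, dtype=np.float64)
    r = np.ones_like(I)
    # Eq. (6): ratio of each pixel to the geometric mean of its upper
    # and left neighbors, computed frame by frame.
    r[:, 1:, 1:] = I[:, 1:, 1:] / np.sqrt(I[:, :-1, 1:] * I[:, 1:, :-1])
    return np.median(r, axis=0)  # temporal median over the F frames
```

For a spatially flat scene, the median ratio of an interior pixel reduces to $\sqrt{k_{i-1,j}k_{i,j-1}}/k_{i,j}$, which is exactly the quantity inverted in Eq. (7).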

Fig. 2 Diagram of the CSAR method.

It is obvious that Eq. (7) is a recursive formula: to calculate $k_{i,j}$, one must first calculate $k_{i-1,j}$ and $k_{i,j-1}$; to calculate $k_{i-1,j}$, one needs $k_{i-2,j}$ and $k_{i-1,j-1}$, and so forth. Continuing this process, it becomes clear that the coefficients in the first row and the first column must ultimately be calculated. Pixels outside the bounds of the image are assumed to equal the nearest boundary pixel. That is, when $i=1$, $I_{i-1,j}(f)=I_{i,j}(f)$ and $k_{i-1,j}=k_{i,j}$; similarly, when $j=1$, $I_{i,j-1}(f)=I_{i,j}(f)$ and $k_{i,j-1}=k_{i,j}$. Thus, Eq. (5) for the array border can be expressed as

$$\begin{cases}\dfrac{k_{i,j}}{\sqrt{k_{i,j}\,k_{i,j-1}}}\operatorname{mid}_F\!\left[\dfrac{I_{i,j}(f)}{\sqrt{I_{i,j}(f)\,I_{i,j-1}(f)}}\right]=1 & i=1,\ j\in[2,N]\\[2ex]\dfrac{k_{i,j}}{\sqrt{k_{i,j}\,k_{i-1,j}}}\operatorname{mid}_F\!\left[\dfrac{I_{i,j}(f)}{\sqrt{I_{i,j}(f)\,I_{i-1,j}(f)}}\right]=1 & i\in[2,M],\ j=1.\end{cases}$$
Similarly, the ratio in Eq. (6) reduces to
$$r_{i,j}(f)=\begin{cases}\sqrt{I_{i,j}(f)/I_{i,j-1}(f)} & i=1,\ j\in[2,N]\\ \sqrt{I_{i,j}(f)/I_{i-1,j}(f)} & i\in[2,M],\ j=1.\end{cases}$$
Plugging Eq. (9) into Eq. (8) yields
$$k_{i,j}=\begin{cases}k_{i,j-1}/\tilde r_{i,j}^{\,2} & i=1,\ j\in[2,N]\\ k_{i-1,j}/\tilde r_{i,j}^{\,2} & i\in[2,M],\ j=1.\end{cases}$$

Normally, pixel $(1,1)$ serves as the starting point for the $k_{i,j}$ estimates. In this regard, Eqs. (9) and (10) enable us to acquire the correction coefficients on the boundary.
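The recursive solution of Eqs. (7) and (10), starting from the normalization $k_{1,1}=1$, can be sketched as follows (a minimal sketch; zero-based indices replace the paper's one-based ones, and the function name is ours):

```python
import numpy as np

def solve_gains(r_med):
    """Recursively solve the gain matrix k from the median ratio matrix,
    with the normalization k[0, 0] = 1 (Eqs. (7) and (10))."""
    M, N = r_med.shape
    k = np.ones((M, N))
    for j in range(1, N):                  # first row, Eq. (10)
        k[0, j] = k[0, j - 1] / r_med[0, j] ** 2
    for i in range(1, M):                  # first column, Eq. (10)
        k[i, 0] = k[i - 1, 0] / r_med[i, 0] ** 2
    for i in range(1, M):                  # interior pixels, Eq. (7)
        for j in range(1, N):
            k[i, j] = np.sqrt(k[i - 1, j] * k[i, j - 1]) / r_med[i, j]
    return k
```

Each raw frame is then corrected elementwise per Eq. (2), i.e., multiplied by the gain matrix.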

2.3. Mathematical model of error propagation

In this subsection, the model of error propagation is investigated. The estimation of $\tilde r_{i,j}$ inevitably introduces a relative error $\alpha_{i,j}$ in the form of random noise. Due to the recursive calculation, the error propagates only to the coefficients $k_{i+\Delta i,j+\Delta j}$, where $\Delta i\in[0,M-i]$ and $\Delta j\in[0,N-j]$. The median ratio with error is denoted as $\tilde r'_{i,j}=\alpha_{i,j}\tilde r_{i,j}$, where the value of $\alpha_{i,j}$ is close to 1. The ratio of the estimated $k'_{i+\Delta i,j+\Delta j}$ to the true $k_{i+\Delta i,j+\Delta j}$ is denoted as $g_{\Delta i,\Delta j}=k'_{i+\Delta i,j+\Delta j}/k_{i+\Delta i,j+\Delta j}$. As deduced in Appendix A,

$$g_{\Delta i,\Delta j}=\begin{cases}\alpha_{i,j}^{\,2^{-\Delta j}} & \Delta i=0\\ \alpha_{i,j}^{\,2^{-(\Delta i+\Delta j)}\prod_{s=1}^{\Delta i}\frac{s+\Delta j}{s}} & \Delta i\in[1,M-i].\end{cases}$$

The identity $g_{\Delta i,\Delta j}=g_{\Delta j,\Delta i}$ indicates the equivalence of the row-wise and column-wise calculation procedures. The error of the corrected pixel $J_{i+\Delta i,j+\Delta j}(f)$ is denoted as $E_{\Delta i,\Delta j}$. That is,

$$E_{\Delta i,\Delta j}=J'_{i+\Delta i,j+\Delta j}(f)-J_{i+\Delta i,j+\Delta j}(f)=(g_{\Delta i,\Delta j}-1)\,J_{i+\Delta i,j+\Delta j}(f).$$

Without loss of generality, we concentrate on the error for an FPA with a 14-bit analog-to-digital converter (ADC) when the random noise intensity equals 2 DN and 4 DN [21]. The correction error is plotted in Fig. 3 with $\alpha_{i,j}=1.00025$ and $\alpha_{i,j}=1.00050$, respectively.

Fig. 3 Error propagation when (a) $\alpha_{i,j}=1.00025$ and (b) $\alpha_{i,j}=1.00050$.

Figure 3 shows that the correction error derived from the random noise of a given pixel converges quickly. In effect, the random noise of all the preceding pixels contributes to the correction error of the subsequent pixels, with the last pixel accumulating all the correction errors.
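The error-propagation law of Eq. (11) can be evaluated numerically to reproduce curves such as those in Fig. 3 (a direct transcription of Eq. (11); the function name is ours):

```python
def error_gain(alpha, di, dj):
    """Ratio g of the estimated to the true coefficient at offset
    (di, dj) from a pixel whose median ratio carries the relative
    error alpha, per Eq. (11)."""
    if di == 0:
        p = 2.0 ** (-dj)
    else:
        p = 2.0 ** (-(di + dj))
        for s in range(1, di + 1):
            p *= (s + dj) / s
    return alpha ** p
```

The exponent shrinks as the offset grows, so the error ratio converges toward 1, consistent with the fast convergence visible in Fig. 3.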

3. Experimental results

We verified the effectiveness of the proposed method by carrying out one laboratory calibration and two outfield experiments with a proof-of-concept prototype built around a mercury cadmium telluride LWIR detector with a Stirling cycle cooler. The detailed features of the tested IR imaging system are summarized in Table 1. The noise equivalent temperature difference (NETD) of the FPA is given for a +20°C scene temperature at an integration time of 300 µs.


Table 1. Detailed features of the tested IR systems.

The proposed method was compared with the most commonly used two-point BBNUC method [4] and the ADS method [12] on different scenes. These methods were evaluated by comparing the visual effect and objective indices. For scenes with the sky as background, the local standard deviation (STD) $\sigma$ of the corrected images was employed to quantitatively assess the NUC performance [7]. As distant targets often occupy areas of less than 3 × 3 pixels in point target detection systems, the local STD affects the SNR of the target directly. A lower local STD means a longer detection range, a higher detection probability, and a lower false alarm rate. For diverse scenes, we refer to the roughness $\rho$ of the image $J$ [8], defined as

$$\rho(J)=\frac{\|h_1\otimes J\|_1+\|h_2\otimes J\|_1}{\|J\|_1},$$
where $h_1$ and $h_2$ are the horizontal and vertical difference filters, respectively, $\|\cdot\|_1$ is the $l_1$-norm operator, and $\otimes$ represents discrete convolution.
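With the standard first-difference filters, Eq. (13) can be sketched in NumPy as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def roughness(J):
    """Roughness of image J per Eq. (13): l1-norms of the horizontal
    and vertical first differences, normalized by the l1-norm of J."""
    J = np.asarray(J, dtype=np.float64)
    dh = np.abs(np.diff(J, axis=1)).sum()   # h1 convolved with J
    dv = np.abs(np.diff(J, axis=0)).sum()   # h2 convolved with J
    return (dh + dv) / np.abs(J).sum()
```

A perfectly uniform image has zero roughness, and residual fixed pattern noise raises the value, which is why a lower $\rho$ indicates better NUC performance.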

3.1. Laboratory calibration

In this subsection, the linear models are compared, and the residual errors of the proposed method are measured. The system is illustrated in Fig. 4. The FPA, with a camera lens, was positioned close to the CI blackbody, which provided a source of uniform radiation. The blackbody and the FPA were placed in an airtight chamber to prevent moisture from condensing on the blackbody at low temperatures. The chamber, filled with dry nitrogen, was maintained at slightly above atmospheric pressure. The grabbed frames were transmitted to a personal computer (PC) through a frame grabber. The entire equipment suite was mounted on an optics vibration platform.

Fig. 4 Experimental equipment: (a) schematic diagram, (b) practical calibration platform.

The temperature of the to-be-calibrated scene is denoted as T, with T ∈ {−10°C, 0°C, +10°C, +20°C, +30°C}. The reference point for the OPB and OPG models was set to −10°C. For the TP model, the two reference points were set to −10°C and −20°C. Correction coefficients for the CSAR method were also obtained with 1000 frames of the flat field at T = −10°C. The residual nonuniformity was quantitatively assessed by the global STD of the calibrated images. The calibration experiment was repeated 100 times, and the mean value of the raw images and the global STD of the calibrated images, with their uncertainties, are listed in Table 2.


Table 2. Global STD of the images calibrated with different models and the CSAR method.

The dependence of the global STD on the mean value of the raw images for the different correction models and the CSAR method is plotted in Fig. 5. The global STD grows with the deviation from the reference point owing to the nonlinear response of the detector. It is obvious that the residual nonuniformities of the OPG model, the TP model, and the CSAR method are almost the same, and all are much smaller than that of the OPB model. This confirms the rationality of correcting only the gain nonuniformity in this research. As shown by the analysis in Appendix A, the correction parameter $k_{i,j}$ in the OPG and TP models can compensate for the nonlinear response to the same extent, whereas the OPB model cannot compensate for it at all. Additionally, it should be noted that when T equals −10°C, the global STD is nonzero due to random noise.

Fig. 5 Comparison of global STD with different models and the CSAR method.

The influence of random noise on the error propagation was analyzed by grabbing another 1000 frames, and the correction coefficients for each T were calculated with the CSAR method. Then, the random noise was measured by the method in [22]. The experiment was repeated 100 times. The global STD of the corrected frame and the random noise are plotted against the average gray value in Fig. 6. It can be seen that the random noise is less than 2 DN when the blackbody temperature is below +30°C at the fixed integration time of 300 µs, and the global STD of the corrected image is less than 6 DN. The experimental results in Figs. 5 and 6 show that the correction error is not amplified by the recursive calculation procedure.

Fig. 6 Random noise and global STD with the proposed method as a function of gray value.

3.2. Scenes with the sky background

We validated the effectiveness of the proposed method in point target detection systems by establishing a prototype IRST system, a photograph of which is shown in Fig. 7. The IR camera was mounted on a two-axis motion platform, and the pan and tilt positions of the camera could be changed continuously through the Ethernet. The locations and velocities of airliners could be read from the global navigation system (GNS). Half-well capacity was achieved by setting the integration time to 300 µs, consistent with the laboratory calibration. To analyze the response drift of the detector, the laboratory calibration was carried out seven days prior to the field experiment, during summer in Changchun.

Fig. 7 Proof-of-concept IRST system. On the right is a screenshot of the GNS viewer, where a Boeing B738 (airliner 1) was detected.

In the first outfield experiment, the camera was tilted to a fixed pitch angle of 20°, where the field of view (FOV) was filled entirely with sky and no ground. Subsequently, the camera was panned at a constant speed of 10°/s while 1000 frames were saved to calculate the correction coefficients. For the two-point BBNUC method, the two reference points of −20°C and +20°C were selected, which lie right in the dynamic range of the observed scenes. The correction coefficients of the BBNUC, ADS, and proposed CSAR methods are compared in Fig. 8.

Fig. 8 Comparison of the coefficients. (a) k in TP. (b) b in TP. (c) b in ADS. (d) k in CSAR.

Furthermore, another 100 raw frames were grabbed and saved as to-be-corrected frames. The corrected frames are depicted in Fig. 9. The FOV contains some clouds, and the quality of the raw image is degraded by excessive nonuniformity noise. In Fig. 9(b), the BBNUC method leaves residual nonuniformity in the corrected image, which primarily stems from the approximation error of the nonlinear response model [23]. The radiant flux of the scene decreases monotonically as the altitude increases, and the correction error increases gradually for pixels whose radiant fluxes deviate from the two reference points. The performance of BBNUC might be improved if the nonlinearity were compensated for in a separate test. Besides, the triangular-shaped fixed pattern noise (FPN) visible in the magnified details of Fig. 9(b) results from the temporal drift of the detector. In other words, a time interval of seven days can result in visually distinguishable FPN; the LWIR detector used in this experiment is thus unstable.

Fig. 9 Comparison of the corrected images. (a) Raw image. (b) BBNUC. (c) ADS. (d) CSAR. The smaller images below the main images are magnifications of the content within the red squares.

Subjectively, the ADS method is superior to the BBNUC method. However, since the ADS method is based on the OPB model, its correction error is small only when the dynamic range of the scene is narrow, as demonstrated in subsection 3.1. This indicates that the large dynamic range of the staircase scenes degrades the NUC performance. In contrast, the proposed method delivers the best image quality. The median operation effectively eliminates the influence of extreme data, such as edge information, on the statistical properties of adjacent pixels. Moreover, the correction coefficients are estimated directly at the to-be-corrected temperature point; the deviation from the reference point is therefore smaller, resulting in less residual nonuniformity noise. It is important to note that SBNUC methods involve relative rather than absolute radiation calibration, which is why the corrected images vary in contrast and gray value.

The probability density function (PDF) and the cumulative distribution function (CDF) of the local STD are presented in Fig. 10. Figure 10(a) indicates that the peak value of the PDF by the BBNUC method is 10.86 DN, whereas that of the ADS method is 2.78 DN, and that of the proposed method is 2.45 DN. For a certain percentage in Fig. 10(b), the local STD by the proposed method is the lowest among the various contrasting methods.

Fig. 10 Comparison of (a) PDF and (b) CDF of the local STD.

Without loss of generality, we focus on the local STD corresponding to the 50% point of the CDF, which is plotted for each frame in Fig. 11. The decrease in the local STD of the corrected images directly reflects their lower residual nonuniformity.
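This statistic, the median of the local STDs, can be sketched as follows (a minimal sketch; the 3 × 3 window size is our assumption, motivated by the sub-3 × 3 target extent, and is not stated explicitly in the text):

```python
import numpy as np

def local_std_median(J, win=3):
    """Median (the 50% point of the CDF) of the local standard
    deviation, computed over all win x win neighborhoods of J.
    The window size is an assumption, not taken from the paper."""
    J = np.asarray(J, dtype=np.float64)
    M, N = J.shape
    stds = []
    for i in range(M - win + 1):
        for j in range(N - win + 1):
            stds.append(J[i:i + win, j:j + win].std())
    return float(np.median(stds))
```

A perfectly corrected flat region yields a value of zero, while residual nonuniformity raises it, which is the behavior compared across methods in Fig. 11.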

Fig. 11 Local STD with the percentage fixed to 50% in the CDF.

With the GNS system in Fig. 7, a Boeing B738 was detected from 59.37 km away. The radius of the Airy disk is $r=1.22\lambda F_{\#}=23.18$ μm, where $F_{\#}$ is the f-number of the optics, which approaches the pixel size of 30 μm. Thus, the detected target occupies no more than four pixels on the FPA. Without loss of generality, five sub-images were cropped from the corrected frames, as displayed in Fig. 12. The frames corrected by the CSAR method suffer from less residual nonuniformity noise.

Fig. 12 The cropped target of a Boeing B738 from 59.37 km away. (a) The raw image, and the images corrected by (b) BBNUC, (c) ADS, and (d) CSAR.

3.3. Scenes with diversities

The second outfield experiment was performed to validate the effectiveness of CSAR in general applications. This time, the tilt angle of the camera was 5°, so the FOV contained diverse elements such as sky, buildings, trees, and streets. The corrected images are depicted in Fig. 13. The image corrected by the proposed method is free from ghosting artifacts, and the windows of the closest building can be discriminated.

Fig. 13 Comparison of the corrected images. (a) Raw image. (b) BBNUC. (c) ADS. (d) Proposed.

The anti-ghosting capability of the method was analyzed by plotting the adjacent ratio profiles. Without loss of generality, the ratios of pixels (100, 100) and (200, 200) are analyzed in Fig. 14(a). Influenced by the background, the ratio curve fluctuates rapidly around the median. The ratios were sorted in ascending order in Fig. 14(b) to facilitate the analysis. On the one hand, there is a large interval in the middle in which the ratio remains nearly unchanged, as indicated by the grey rectangular mark. On the other hand, extreme values derived from edges are distributed at the leftmost and rightmost ends of the plots, without affecting the median. In other words, the median operation can effectively counteract ghosting artifacts. Obviously, the fewer the textures and edges in the scene, the larger the flat area in the sorted curve and the more accurate the ratios. It should be noted that the sorting operation is not a necessary step, and was included simply to facilitate analysis of the method.

Fig. 14 Plot of (a) the ratio and (b) the sorted ratio for two typical pixels.
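The insensitivity of the median to such edge-induced extremes can be illustrated with a synthetic ratio track (all numbers here are hypothetical, chosen only to mimic the qualitative shape of Fig. 14):

```python
import numpy as np

# Synthetic ratio track of one pixel over 500 frames: values cluster
# around a hypothetical true ratio of 1.02, while a minority of frames
# in which an edge crosses the pixel produce extremes on both sides
# (the leftmost and rightmost ends after sorting).
rng = np.random.default_rng(0)
ratios = 1.02 + 0.001 * rng.standard_normal(500)
ratios[:25] = 1.35     # edge-induced extremes on one side
ratios[25:35] = 0.70   # extremes on the other side

# The mean is pulled away from 1.02 by the extremes; the median is not.
mean_err = abs(np.mean(ratios) - 1.02)
median_err = abs(np.median(ratios) - 1.02)
print(mean_err)    # noticeably biased by the extremes
print(median_err)  # essentially unaffected
```

This is the same mechanism by which the flat central interval of the sorted curve in Fig. 14(b) determines the estimate, while the tails do not.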

We assessed the NUC performance in general applications by plotting the roughness as a function of the frame index in Fig. 15. The roughness result coincides with the visual quality in Fig. 13. In summary, the proposed method has shown compelling performance in general applications. It can effectively decrease the roughness, and improve the visual quality of the images.

Fig. 15 Roughness (×10^−2) as a function of frame index.

4. Discussion

The frame number F is the only variable in the proposed method. A large F increases the tolerance to extreme scenes. Thus, scenes with abundant edges and textures require a large F; conversely, for scenes containing little detailed information, F can be small. In general, the correction parameters are more accurate with a larger F, and vice versa. For the diverse scenes in subsection 3.3, the corrected images for F equal to 100, 300, and 500 are presented in Fig. 16. When F = 100, in Fig. 16(a), the ghosting artifacts are so severe that the buildings from previous frames can be seen clearly. The artifacts are mitigated to a limited extent when F rises to 300, and disappear completely when F = 500.

Fig. 16 Corrected images when (a) F = 100, (b) F = 300, (c) F = 500.

We analyzed the source of the ghosting artifacts explicitly by calculating the sorted ratio of a specific pixel (194, 201) from the ghosting regions. The result is presented in Fig. 17. The ratios when F = 100 and F = 300 deviate from the true value by 0.034 and 0.017, respectively. In the proposed method, failure to estimate the ratios of adjacent pixels accurately directly leads to ghosting artifacts. The value of F is scene-dependent, and F = 500 is large enough to obtain an accurate estimation of the ratio in diverse scenes. In IRST systems, F = 1000 is sufficient to ensure the robustness and margin in extreme scenes. This is an empirical value.

Fig. 17 Ratios for the pixel (194, 201) with different F.

The proposed method uses a median operation to suppress ghosting artifacts, and this operation, unlike the mean, cannot be computed recursively. Hence, all F raw frames must be stored first, which is a drawback of the proposed method. For the FPA described in this paper, 1024 raw frames require 160 MB of storage. However, such storage capacity permits an embedded implementation on the digital signal processor TMS320C6455, whose memory space can be extended to 512 MB with external double data rate (DDR) 2 memory [24]. For FPAs with a larger array size, such as 640 × 512 or 1024 × 768, the raw images can first be divided into several sub-images. Then, the median ratios of the sub-images can be obtained either serially with one processing chain or in parallel with multiple processing chains. Lastly, the final correction coefficients are assembled with Eq. (7).

The adopted median operation is the bottleneck of the running time. Median selection is a typical top-k problem and can be solved by the BFPRT algorithm [25], which has linear worst-case time complexity for selecting the k-th largest element. On an Intel Core i3-2120 platform, the CPU time of the experiment in subsection 3.2 is 14.10 s. The method could be further accelerated by computing Eq. (6) and its median in parallel on graphics processing units. The time interval for updating the correction coefficients is FPA dependent. Each raw frame is then corrected with Eq. (2), which can be implemented in real time.
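The selection idea can be sketched with NumPy's partition-based selection, which, like BFPRT, avoids a full O(F log F) sort (np.partition uses introselect rather than BFPRT itself; the function name is ours):

```python
import numpy as np

def median_via_selection(x):
    """Median by partition-based selection instead of a full sort.
    np.partition places the element with index `mid` in its sorted
    position in (expected) linear time."""
    x = np.asarray(x)
    F = x.size
    mid = F // 2
    part = np.partition(x, mid)
    if F % 2:
        return float(part[mid])
    # Even length: average the two middle order statistics; the lower
    # one is the maximum of the partition's left half.
    return float((part[mid] + part[:mid].max()) / 2.0)
```

Applied per pixel to the F stored ratios of Eq. (6), this replaces the sort that would otherwise dominate the coefficient-update time.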

Most applications combine BBNUC methods with SBNUC methods. However, airborne IRST systems operate in harsh environmental conditions, such as extreme temperature ranges, heating, and aerodynamic loads. Laboratory calibration aligns poorly with these critical environmental conditions. Thus, previously stored two-point or multi-point correction coefficients lead to residual nonuniformities whose spatio-temporal statistics are non-stationary and nonconstant. We therefore recommend performing in-flight NUC on the raw frames with the proposed method. This solution suppresses response drifts to the fullest extent.

It should be noted that the proposed CSAR method does not preserve the radiometric accuracy of the system. While most SBNUC methods do not either, the CSAR method is more susceptible owing to the initial condition $k_{1,1}=1$. This is not detrimental in IRST systems, but it is a weakness compared with other SBNUC applications, which are more complicated and can be calibrated better owing to the amount of data collected over the dynamic range.

5. Conclusions

This paper presented a novel statistical SBNUC method, based on the CSAR assumption, for airborne point target detection systems. The method employs a median operator to estimate the ratio between each pixel and the geometric mean of its two adjacent pixels, after which the gains of the FPA are calculated recursively. In addition, we demonstrated theoretically and experimentally that the correction error arising from random noise is not diffused by the recursive calculation procedure. The strength of the proposed method lies in its high robustness, high precision, few restrictions on data collection, and low computational complexity, which enhance its competitiveness in airborne IRST systems. As with other SBNUC methods, the proposed method can calibrate the whole optical system and compensate for temporal drifts. A prototype IRST system was established to validate the proposed method. Compared with the most commonly employed BBNUC method and a typical ADS method, our method decreases both the local STD and the roughness of the images.

Since the proposed method is based on the OPG model, the NUC performance is expected to degrade due to the nonlinear response when the dynamic range in the horizontal direction varies considerably in IRST systems. Although we have demonstrated that the gain parameter dominates the TP model, it is much easier in the TP model to match one of the two reference points in the scenes, thereby mitigating the nonlinear error. A possible solution is to calculate the gain and the bias in different scenes, and thus extend the CSAR method to the TP model.

Appendix A

The three linear correction models are compared here in theory. This analysis forms the basis of the proposed method. By setting the value of the coefficient ki,j to 1, the TP model in Eq. (1) is degenerated to the OPB model as

$$J_{i,j}(f)=I_{i,j}(f)+b_{i,j}. \tag{14}$$

Calculation of the correction coefficients requires the acquisition of one uniform image I(f) when the blackbody temperature is set to T; its arithmetic mean is denoted as $\bar{I}(f)$. For the OPG model, $k_{i,j}=\bar{I}(f)/I_{i,j}(f)$. Similarly, for the OPB model, $b_{i,j}=\bar{I}(f)-I_{i,j}(f)$. Then, the blackbody temperature is raised from T to T + ΔT. The corresponding image is denoted as $\hat{I}(f)$, in which the (i, j)th pixel is

$$\hat{I}_{i,j}(f)=I_{i,j}(f)+\Delta I_{i,j}(f). \tag{15}$$
Here, ΔIi,j(f) is a nonuniform component due to the nonlinear response of the detector, which has been demonstrated in our previous work [7]. The images corrected by the OPB, OPG, and TP models are denoted as J′(f), J″(f), and J‴(f), respectively. It is obvious that
$$\begin{cases}J'_{i,j}(f)=\bar{I}(f)+\Delta I_{i,j}(f)\\J''_{i,j}(f)=\bar{I}(f)+k_{i,j}\,\Delta I_{i,j}(f)\\J'''_{i,j}(f)=\bar{I}(f)+k_{i,j}\,\Delta I_{i,j}(f).\end{cases} \tag{16}$$

It can be seen from Eq. (16) that the nonuniform component ΔIi,j(f) can be compensated for by the coefficient ki,j in the OPG and TP models. However, ΔIi,j(f) is not properly compensated for in the OPB model. In terms of compensating for nonlinear response, both the OPG and TP models outperform the OPB model.
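The first two lines of Eq. (16) can be checked numerically. The sketch below (synthetic responses and all variable names are our own) calibrates OPB and OPG coefficients from a simulated uniform image at temperature T and applies them to the responses at T + ΔT:

```python
import numpy as np

rng = np.random.default_rng(1)
I = rng.uniform(900.0, 1100.0, size=8)   # raw responses I_{i,j}(f) at T
Ibar = I.mean()                          # uniform target level, Ibar(f)
k = Ibar / I                             # OPG gains:  k_{i,j} = Ibar / I_{i,j}
b = Ibar - I                             # OPB biases: b_{i,j} = Ibar - I_{i,j}

dI = rng.uniform(-5.0, 5.0, size=8)      # nonlinear residual at T + dT
Ihat = I + dI                            # raw responses at T + dT, Eq. (15)

J_opb = Ihat + b                         # OPB-corrected image
J_opg = k * Ihat                         # OPG-corrected image

# Eq. (16): OPB leaves dI uncompensated, while OPG scales it by k.
assert np.allclose(J_opb, Ibar + dI)
assert np.allclose(J_opg, Ibar + k * dI)
```

The residual nonuniformity after OPB correction is exactly the nonlinear component dI, whereas OPG (and, by the same algebra, TP) multiplies it by the gain that was fitted to the pixel's response.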

Next, the model of the error propagation is derived in detail. For convenience, denote $g_{\Delta i,\Delta j}=\alpha_{i,j}^{P_{\Delta i,\Delta j}}$. When $\Delta i = 0$, it can be deduced from Eq. (7) that $P_{0,0}=1$, $P_{0,1}=2^{-1}$, $P_{0,2}=2^{-2}$, etc.; thus, $P_{0,\Delta j}=2^{-\Delta j}$. When $\Delta i \in [1, M_i]$,

$$P_{\Delta i,\Delta j}=\begin{cases}2^{-(1+\Delta j)}\dfrac{1+\Delta j}{1} & \Delta i=1\\[4pt]2^{-(2+\Delta j)}\dfrac{(1+\Delta j)(2+\Delta j)}{1\times 2} & \Delta i=2\\[4pt]2^{-(3+\Delta j)}\dfrac{(1+\Delta j)(2+\Delta j)(3+\Delta j)}{1\times 2\times 3} & \Delta i=3\\[4pt]\qquad\vdots & \\[4pt]2^{-(\Delta i+\Delta j)}\dfrac{(1+\Delta j)(2+\Delta j)\cdots(\Delta i+\Delta j)}{1\times 2\times\cdots\times\Delta i}. & \end{cases} \tag{17}$$
Finally, we can deduce that
$$P_{\Delta i,\Delta j}=\begin{cases}2^{-\Delta j} & \Delta i=0\\[4pt]2^{-(\Delta i+\Delta j)}\displaystyle\prod_{s=1}^{\Delta i}\frac{s+\Delta j}{s} & \Delta i\in[1,M_i].\end{cases} \tag{18}$$

Without loss of generality, assume that Δj > Δi. When Δi = 0, we have P0,Δj = PΔj,0 = 2−Δj. When Δi ∈ [1, Mi],

$$\begin{aligned}P_{\Delta j,\Delta i}&=2^{-(\Delta i+\Delta j)}\,\frac{(1+\Delta i)(2+\Delta i)\cdots(\Delta j+\Delta i)}{1\times 2\times\cdots\times\Delta j}\\&=2^{-(\Delta i+\Delta j)}\,\frac{[(1+\Delta i)(2+\Delta i)\cdots\Delta j]\,[(1+\Delta j)(2+\Delta j)\cdots(\Delta j+\Delta i)]}{[1\times 2\times\cdots\times\Delta i]\,[(1+\Delta i)(2+\Delta i)\cdots\Delta j]}\\&=2^{-(\Delta i+\Delta j)}\,\frac{(1+\Delta j)(2+\Delta j)\cdots(\Delta i+\Delta j)}{1\times 2\times\cdots\times\Delta i}.\end{aligned} \tag{19}$$

Equations (17) and (19) enable us to draw the conclusion that $P_{\Delta i,\Delta j}=P_{\Delta j,\Delta i}$. That is, the row-wise and column-wise procedures for calculating the coefficients are equivalent.
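Written with a binomial coefficient, Eq. (18) reads $P_{\Delta i,\Delta j}=2^{-(\Delta i+\Delta j)}\binom{\Delta i+\Delta j}{\Delta i}$, which makes the symmetry established in Eq. (19) immediate. A quick numerical sketch (the helper name `P` is ours):

```python
from math import comb

def P(di, dj):
    """Closed form of Eq. (18): 2^{-(di+dj)} * prod_{s=1}^{di} (s+dj)/s.

    The empty product at di = 0 reproduces the 2^{-dj} branch.
    """
    p = 2.0 ** (-(di + dj))
    for s in range(1, di + 1):
        p *= (s + dj) / s
    return p

for di in range(8):
    for dj in range(8):
        # Equivalent to the scaled binomial coefficient, hence symmetric.
        assert abs(P(di, dj) - comb(di + dj, di) * 2.0 ** (-(di + dj))) < 1e-12
        assert abs(P(di, dj) - P(dj, di)) < 1e-12
```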

Funding

National Natural Science Foundation of China (NSFC) (61675202)

References and links

1. L. Fortunato, A. Ondini, C. Quaranta, and C. Giunti, “SKYWARD: the next generation airborne infrared search and track,” Proc. SPIE 9819, 98190K (2016).

2. A. Rogalski, Infrared Detectors, 2nd ed. (CRC, 2011).

3. G. A. Page, B. D. Carroll, A. E. Pratt, and P. N. Randall, “Long-range target detection algorithms for infrared search and track,” Proc. SPIE 3698, 48–57 (1999).

4. D. L. Perry and E. L. Dereniak, “Linear theory of nonuniformity correction in infrared staring sensors,” Opt. Eng. 32, 1854–1859 (1993).

5. Z. Sun, S. Chang, and W. Zhu, “Radiometric calibration method for large aperture infrared system with broad dynamic range,” Appl. Opt. 54, 4659–4666 (2015).

6. P. W. Nugent, J. A. Shaw, and N. J. Pust, “Radiometric calibration of infrared imagers using an internal shutter as an equivalent external blackbody,” Opt. Eng. 53, 123106 (2014).

7. L. Huo, D. Zhou, D. Wang, R. Liu, and B. He, “Staircase-scene-based nonuniformity correction in aerial point target detection systems,” Appl. Opt. 55, 7149–7156 (2016).

8. M. M. Hayat, S. N. Torres, E. Armstrong, S. C. Cain, and B. Yasuda, “Statistical algorithm for nonuniformity correction in focal-plane arrays,” Appl. Opt. 38, 772–780 (1999).

9. R. C. Hardie, F. Baxley, B. Brys, and P. Hytla, “Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm,” Opt. Express 17, 14918–14933 (2009).

10. C. Zhang and W. Zhao, “Scene-based nonuniformity correction using local constant statistics,” J. Opt. Soc. Am. A 25, 1444–1453 (2008).

11. R. A. Leathers, T. V. Downes, and R. G. Priest, “Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors,” Opt. Express 13, 5136–5150 (2005).

12. L. Geng, Q. Chen, and W. Qian, “An adjacent differential statistics method for IRFPA nonuniformity correction,” IEEE Photon. J. 5, 6801615 (2013).

13. L. Rui, Y. Yang, Z. Duan, and Y. Li, “Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays,” Appl. Opt. 47, 4331–4335 (2008).

14. Y. Cao and C.-L. Tisse, “Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera,” Opt. Lett. 39, 646–648 (2014).

15. E. Vera, P. Meza, and S. Torres, “Total variation approach for adaptive nonuniformity correction in focal-plane arrays,” Opt. Lett. 36, 172–174 (2011).

16. L. Liu and T. Zhang, “Optics temperature-dependent nonuniformity correction via ℓ0-regularized prior for airborne infrared imaging systems,” IEEE Photon. J. 7, 6803016 (2016).

17. J. E. Pezoa, M. M. Hayat, S. N. Torres, and M. S. Rahman, “Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors,” J. Opt. Soc. Am. A 23, 1282–1291 (2006).

18. B. Yasuda, E. Armstrong, M. M. Hayat, and R. C. Hardie, “Scene-based nonuniformity correction with video sequences and registration,” Appl. Opt. 39, 1241–1250 (2000).

19. C. Zuo, Q. Chen, G. Gu, and X. Sui, “Scene-based nonuniformity correction algorithm based on interframe registration,” J. Opt. Soc. Am. A 28, 1164–1176 (2011).

20. J. Zeng, X. Sui, and H. Gao, “Adaptive image-registration-based nonuniformity correction algorithm with ghost artifacts eliminating for infrared focal plane arrays,” IEEE Photon. J. 7, 1–16 (2015).

21. D. Wang, T. Zhang, and H. Kuang, “Relationship between the charge-coupled device signal-to-noise ratio and dynamic range with respect to the analog gain,” Appl. Opt. 51, 7103–7114 (2012).

22. G. E. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 267–276 (1994).

23. A. Ferrero, J. Campos, and A. Pons, “Correction of photoresponse nonuniformity for matrix detectors based on prior compensation for their nonlinear behavior,” Appl. Opt. 45, 2422–2427 (2006).

24. Texas Instruments, TMS320C645x DSP External Memory Interface (EMIF) User’s Guide (2005).

25. M. Blum, R. W. Floyd, V. Pratt, R. L. Rivest, and R. E. Tarjan, “Time bounds for selection,” J. Comput. Syst. Sci. 7, 448–461 (1973).

Figures (17)

Fig. 1 Relevance with four adjacent pixels through (a) Eq. (4) and (b) Eq. (5).
Fig. 2 Diagram of the CSAR method.
Fig. 3 Error propagation when (a) αi,j = 1.00025 and (b) αi,j = 1.00050.
Fig. 4 Experimental equipment: (a) schematic diagram, (b) practical calibration platform.
Fig. 5 Comparison of global STD with different models and the CSAR method.
Fig. 6 Random noise and global STD with the proposed method as a function of gray value.
Fig. 7 Proof-of-concept IRST system. On the right was the screen shot of the GNS viewer, where a Boeing B738 (airliner 1) was detected.
Fig. 8 Comparison of the coefficients. (a) k in TP. (b) b in TP. (c) b in ADS. (d) k in CSAR.
Fig. 9 Comparison of the corrected images. (a) Raw image. (b) BBNUC. (c) ADS. (d) CSAR. The smaller images below the main images are magnifications of the content within the red squares.
Fig. 10 Comparison of (a) PDF and (b) CDF of the local STD.
Fig. 11 Local STD with the percentage fixed to 50% in the CDF.
Fig. 12 The cropped target of a Boeing B738 from 59.37 km away. (a) The raw image, and the images corrected by (b) BBNUC, (c) ADS, and (d) CSAR.
Fig. 13 Comparison of the corrected images. (a) Raw image. (b) BBNUC. (c) ADS. (d) Proposed.
Fig. 14 Plot of (a) the ratio and (b) the sorted ratio for two typical pixels.
Fig. 15 Roughness (×10−2) as a function of frame index.
Fig. 16 Corrected images when (a) F = 100, (b) F = 300, (c) F = 500.
Fig. 17 Ratios for the pixel (194, 201) with different F.

Tables (2)

Table 1 Detailed features of the tested IR systems.
Table 2 Global STD of the images calibrated with different models and the CSAR method.
