Quantitative comparison of quadratic covariance-based anomalous change detectors


Abstract

Simulations applied to hyperspectral imagery from the AVIRIS sensor are employed to quantitatively evaluate the performance of anomalous change detection algorithms. The evaluation methodology reflects the aim of these algorithms, which is to distinguish actual anomalous changes in a pair of images from the incidental differences that pervade the entire scene. By simulating both the anomalous changes and the pervasive differences, accurate and plentiful ground truth is made available, and statistical estimates of detection and false alarm rates can be made. Comparing the receiver operating characteristic (ROC) curves that encapsulate these rates provides a way to identify which algorithms work best under which conditions.

© 2008 Optical Society of America

1. Introduction

Algorithms for change detection in imagery are of general interest [1], but the remote sensing applications are particularly compelling: environmental monitoring, facility surveillance, agricultural surveying, illicit crop identification, camouflage defeat, moving target indication, small target detection in broad area search, and emergency response (after the hurricane, which roads are still open?). Given two images of the same scene, taken at different times and (inevitably) under different conditions, the aim is to find whether and where in the scene the interesting changes have occurred. What constitutes an “interesting change” depends on the specific application, but sometimes the problem is more open ended, and it is not known in advance what particular kinds of changes are being sought.

The aim in anomalous change detection is to identify those changes that are unusual, compared with the ordinary changes that occur throughout the image. It will take a human analyst to determine whether a given change is interesting or meaningful, and in any realistic operational scenario there is bound to be a human in the loop; but what anomalous change detection offers is a way to cull the mass of imagery to identify the changes that are unusual. Although it is difficult to devise a mathematical characterization of what constitutes an interesting change, the definition of “unusual” can be made more rigorous, and that provides a metric for optimizing algorithms.

A variety of anomalous change detection algorithms have been proposed: the chronochrome (CC) [2, 3], neural net prediction [4], covariance equalization (CE) [5], multivariate alteration detection [6], and a machine learning framework [7, 8]. Many of these algorithms can be formulated in terms of the covariance properties of the two multispectral images and can be expressed as quadratic functions of the input data; these algorithms will be summarized in Section 2.

One problem with the evaluation of anomaly detection algorithms is that anomalies are, by definition, rare; so imagery with actual ground-truth anomalies will provide very few positive examples of anomalies. An algorithm that happens to work well on the few anomalies in the test data may not work well on other anomalies in the field. To address the problem of small-number statistics, this work will simulate both the anomalous (i.e., interesting) changes and the pervasive (i.e., uninteresting) differences, using data from the AVIRIS sensor [9, 10] as a starting point. Section 3 will describe the range of simulated changes and differences that will be used in the evaluations.

Finally, Section 4 will describe the results of these comparisons.

2. Anomalous Change Detection Algorithms

All the algorithms that will be considered are based on the covariance matrix of the hyperspectral data. None of the algorithms that will be considered use spatial information in the image directly. While including this information is generally a good idea, and a number of approaches have been suggested, from Markov random fields to spatial image preprocessing, these good ideas are distinct from the comparisons that are made here. There is no particular reason to believe that incorporating spatial context will improve one of the algorithms without improving any of the others.

In particular, let $x \in \mathbb{R}^{d_x}$ be a pixel value in the first image, and let $y \in \mathbb{R}^{d_y}$ correspond to the associated pixel value in the second image. Subtract the mean from both images, so that $\langle x \rangle = 0$ and $\langle y \rangle = 0$; then write

$$X = \langle x x^T \rangle, \tag{1}$$
$$Y = \langle y y^T \rangle, \tag{2}$$
$$C = \langle y x^T \rangle. \tag{3}$$
The algorithms under investigation will be quadratic functions of $x$ and $y$. Specifically, each algorithm will provide a scalar measure of anomalousness:
$$\mathcal{A}(x,y) = \begin{bmatrix} x^T & y^T \end{bmatrix} Q \begin{bmatrix} x \\ y \end{bmatrix}, \tag{4}$$
where $Q$ is a symmetric square matrix of size $d_x + d_y$. A change $x \to y$ will be considered anomalous if $\mathcal{A}(x,y)$ is greater than a given threshold. For the quadratic algorithms under consideration, $Q$ will be a function of the covariances $X$, $Y$, and $C$.

It will be useful to consider whitened data. Define

$$\tilde{x} = X^{-1/2} x, \tag{5}$$
$$\tilde{y} = Y^{-1/2} y, \tag{6}$$
and note that $\langle \tilde{x} \tilde{x}^T \rangle = I$ and $\langle \tilde{y} \tilde{y}^T \rangle = I$. Also, define
$$\tilde{C} = Y^{-1/2} C X^{-1/2}, \tag{7}$$
and note that $\langle \tilde{y} \tilde{x}^T \rangle = \tilde{C}$. In terms of these whitened coordinates, we can write anomalousness as
$$\tilde{\mathcal{A}}(\tilde{x},\tilde{y}) = \mathcal{A}(x,y) = \mathcal{A}(X^{1/2}\tilde{x}, Y^{1/2}\tilde{y}) = \begin{bmatrix} \tilde{x}^T & \tilde{y}^T \end{bmatrix} \tilde{Q} \begin{bmatrix} \tilde{x} \\ \tilde{y} \end{bmatrix}, \tag{8}$$
where
$$\tilde{Q} = \begin{bmatrix} X^{1/2} & 0 \\ 0 & Y^{1/2} \end{bmatrix} Q \begin{bmatrix} X^{1/2} & 0 \\ 0 & Y^{1/2} \end{bmatrix}. \tag{9}$$
Given $\tilde{Q}$, Eq. (9) can be inverted to provide the coefficient matrix
$$Q = \begin{bmatrix} X^{-1/2} & 0 \\ 0 & Y^{-1/2} \end{bmatrix} \tilde{Q} \begin{bmatrix} X^{-1/2} & 0 \\ 0 & Y^{-1/2} \end{bmatrix} \tag{10}$$
that can be used in the natural coordinates of the data, $x$ and $y$. The various detectors that will be considered in this paper are summarized in terms of the coefficient matrix in Table 1.
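As a concrete illustration of this quadratic framework, the following minimal numpy sketch estimates the covariances of Eqs. (1)-(3), forms the whitened cross covariance of Eq. (7), and evaluates the anomalousness of Eq. (4) at every pixel. The function names and the pixels-by-bands array layout are conveniences of this sketch, not part of any particular detector, and it assumes $X$ and $Y$ are well enough conditioned to invert (in practice one might regularize).

```python
import numpy as np

def covariances(ximg, yimg):
    """Estimate X, Y, C [Eqs. (1)-(3)] from pixel arrays of shape (npix, dx) and (npix, dy)."""
    x = ximg - ximg.mean(axis=0)          # mean-subtracted pixels
    y = yimg - yimg.mean(axis=0)
    n = x.shape[0]
    return x, y, x.T @ x / n, y.T @ y / n, y.T @ x / n

def inv_sqrt(M):
    """Symmetric inverse square root of a positive-definite covariance matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def whitened_cross_covariance(X, Y, C):
    """C-tilde = Y^{-1/2} C X^{-1/2}, Eq. (7)."""
    return inv_sqrt(Y) @ C @ inv_sqrt(X)

def anomalousness(x, y, Q):
    """A(x,y) = [x^T y^T] Q [x; y], Eq. (4), evaluated for all pixels at once."""
    z = np.hstack([x, y])                 # stacked (npix, dx+dy) data
    return np.einsum('ij,jk,ik->i', z, Q, z)
```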

Within the realm of these quadratic detectors, two categories of algorithms will be investigated. Those based on directly subtracting the two images have the property that their coefficient matrix Q is nonnegative definite and of rank less than dx+dy. The second category includes algorithms that do not employ a subtraction of images and produce coefficient matrices that are generically of full rank.

2A. Difference-Based Algorithms

All the algorithms in this category are based on the following steps:

  1. Compute the mean spectrum for each image, and subtract that mean from each pixel in each image.
  2. Identify two linear transformations, and apply one to the first image and the other to the second image. Which transforms are applied is what distinguishes the different methods in this category. The steps after this are the same for all difference-based methods.
  3. Subtract the transformed images to produce a d-dimensional multispectral difference image.
  4. Compute a measure of anomalousness based on the magnitude of the differences, as measured by the Mahalanobis distance from the origin.

Given a pixel pair $(x,y)$ with $x$ from the first image and $y$ from the second image, and given linear transforms (matrices) $A \in \mathbb{R}^{d \times d_x}$ and $B \in \mathbb{R}^{d \times d_y}$, we compute a difference $e \in \mathbb{R}^{d}$ given by

$$e = By - Ax. \tag{11}$$
We compute a covariance matrix (sometimes called the dispersion of $e$) given by
$$E = \langle e e^T \rangle = \left\langle (By - Ax)(y^T B^T - x^T A^T) \right\rangle = B \langle y y^T \rangle B^T - B \langle y x^T \rangle A^T - A \langle x y^T \rangle B^T + A \langle x x^T \rangle A^T = B Y B^T - B C A^T - A C^T B^T + A X A^T. \tag{12}$$
The anomalousness of a change for a pixel pair $(x,y)$ is then measured with a scalar value $\mathcal{A}(x,y)$, given by straight anomaly detection (RX) [11],
$$\mathcal{A}(x,y) = e^T \langle e e^T \rangle^{-1} e = e^T E^{-1} e = (By - Ax)^T \left[ B Y B^T - B C A^T - A C^T B^T + A X A^T \right]^{-1} (By - Ax). \tag{13}$$
Those pixels for which this value is larger than a given threshold are considered to be anomalous changes.

Note that this measure can be expressed in terms of a square $(d_x + d_y)$-dimensional matrix $Q$:

$$\mathcal{A}(x,y) = \begin{bmatrix} x^T & y^T \end{bmatrix} Q \begin{bmatrix} x \\ y \end{bmatrix}, \tag{14}$$
where
$$Q = \begin{bmatrix} -A^T \\ B^T \end{bmatrix} \left[ B Y B^T - B C A^T - A C^T B^T + A X A^T \right]^{-1} \begin{bmatrix} -A & B \end{bmatrix}. \tag{15}$$
Although $Q$ is a square array of size $d_x + d_y$, it has a rank of at most $d$ [since that is the dimension of the middle matrix in Eq. (15)], and so at most $d$ nonzero eigenvalues. We also remark that $Q$ is nonnegative definite, so the nonzero eigenvalues are in fact positive. One consequence of this is that the anomalousness $\mathcal{A}(x,y)$ is always nonnegative.
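The general recipe of Eq. (15) can be written as a short function; the sketch below is illustrative (the function and argument names are this sketch's own), and the specific difference-based detectors that follow are obtained simply by choosing $A$ and $B$.

```python
import numpy as np

def difference_detector_Q(A, B, X, Y, C):
    """Coefficient matrix of Eq. (15) for a difference-based detector with e = By - Ax.

    A: (d, dx) transform applied to the x image; B: (d, dy) transform applied to the y image.
    Returns the (dx+dy, dx+dy) matrix Q used in the quadratic form of Eq. (14).
    """
    E = B @ Y @ B.T - B @ C @ A.T - A @ C.T @ B.T + A @ X @ A.T   # Eq. (12)
    Einv = np.linalg.inv(E)
    left = np.vstack([-A.T, B.T])     # column block [-A^T; B^T], shape (dx+dy, d)
    right = np.hstack([-A, B])        # row block [-A  B], shape (d, dx+dy)
    return left @ Einv @ right
```

For example, $A = B = I$ gives the simple difference detector of Eq. (18), and $A = C X^{-1}$, $B = I_y$ gives the chronochrome of Eq. (21).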

2A1. Simple Difference

In situations where the pervasive differences are small, the simplest detection algorithm is to just subtract the two images. This requires that both images have the same number of bands; this is often, but not always, the case in practical situations. Here,

$$e = y - x; \tag{16}$$
so in terms of the general algorithm, we have $A = B = I$; this makes sense only if $d_x = d_y$, in which case $d = d_x = d_y$. From Eq. (13) we have
$$\mathcal{A}_{\rm SD}(x,y) = (y - x)^T \left[ Y - C - C^T + X \right]^{-1} (y - x). \tag{17}$$
In terms of the quadratic coefficient matrix, this is
$$Q_{\rm SD} = \begin{bmatrix} -I \\ I \end{bmatrix} \left[ Y - C - C^T + X \right]^{-1} \begin{bmatrix} -I & I \end{bmatrix}. \tag{18}$$
Here $I$ is the $d \times d$ identity matrix.

2A2. Chronochrome

The chronochrome algorithm, developed by Schaum and Stocker [2, 3], considers a linear map $L \in \mathbb{R}^{d_y \times d_x}$ such that $\hat{y} = Lx$ is a best estimator for $y$. Write

$$e = y - Lx \tag{19}$$
and choose $L$ such that $\langle e^T e \rangle$ is minimized. This occurs for $L = C X^{-1}$; so
$$\mathcal{A}_{\rm CC}(x,y) = e^T \langle e e^T \rangle^{-1} e = (y - C X^{-1} x)^T \left[ Y - C X^{-1} C^T \right]^{-1} (y - C X^{-1} x). \tag{20}$$
Thus, in Eq. (13), we have $A = L = C X^{-1}$ and $B = I_y$. The coefficient matrix is then given by Eq. (15):
$$Q_{\rm CC} = \begin{bmatrix} -X^{-1} C^T \\ I_y \end{bmatrix} \left[ Y - C X^{-1} C^T \right]^{-1} \begin{bmatrix} -C X^{-1} & I_y \end{bmatrix}. \tag{21}$$
The whitened version of this matrix is given by Eq. (9):
$$\tilde{Q}_{\rm CC} = \begin{bmatrix} -\tilde{C}^T \\ I_y \end{bmatrix} \left[ I_y - \tilde{C}\tilde{C}^T \right]^{-1} \begin{bmatrix} -\tilde{C} & I_y \end{bmatrix}. \tag{22}$$
An equivalent form of this matrix can be formed from the following identity:
$$\begin{bmatrix} I & V^T \\ V & I \end{bmatrix}^{-1} = \begin{bmatrix} I + V^T (I - V V^T)^{-1} V & -V^T (I - V V^T)^{-1} \\ -(I - V V^T)^{-1} V & (I - V V^T)^{-1} \end{bmatrix} \tag{23}$$
$$= \begin{bmatrix} -V^T \\ I \end{bmatrix} \left[ I - V V^T \right]^{-1} \begin{bmatrix} -V & I \end{bmatrix} + \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}. \tag{24}$$
Thus
$$\tilde{Q}_{\rm CC} = \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1} - \begin{bmatrix} I_x & 0 \\ 0 & 0 \end{bmatrix}. \tag{25}$$
Now, there are in fact two chronochrome detectors, because this result is asymmetric with respect to $x$ and $y$. If instead we chose to model $e = x - Ly$, then the best fit would be given by $L = C^T Y^{-1}$, and
$$\mathcal{A}_{\rm CC'}(x,y) = (x - C^T Y^{-1} y)^T \left[ X - C^T Y^{-1} C \right]^{-1} (x - C^T Y^{-1} y), \tag{26}$$
which can be more succinctly expressed as
$$\tilde{Q}_{\rm CC'} = \begin{bmatrix} I_x \\ -\tilde{C} \end{bmatrix} \left[ I_x - \tilde{C}^T \tilde{C} \right]^{-1} \begin{bmatrix} I_x & -\tilde{C}^T \end{bmatrix} \tag{27}$$
$$= \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1} - \begin{bmatrix} 0 & 0 \\ 0 & I_y \end{bmatrix}. \tag{28}$$
It bears emphasizing that these two different CC detectors can give substantially different results, yet there is often no a priori way to choose between them.
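In whitened coordinates the two chronochrome detectors of Eqs. (25, 28) differ only in which identity block is subtracted; a short sketch follows (the helper name is this sketch's own, and $\tilde{C}$ is assumed already computed).

```python
import numpy as np

def chronochrome_Q_whitened(Ct):
    """Both chronochrome coefficient matrices in whitened coordinates, Eqs. (25) and (28).

    Ct: whitened cross covariance C-tilde, shape (dy, dx).
    Returns (Q_cc, Q_cc_rev): the y-from-x and x-from-y chronochrome detectors.
    """
    dy, dx = Ct.shape
    joint_inv = np.linalg.inv(np.block([[np.eye(dx), Ct.T], [Ct, np.eye(dy)]]))
    Q_cc = joint_inv.copy()
    Q_cc[:dx, :dx] -= np.eye(dx)       # subtract [I_x 0; 0 0], Eq. (25)
    Q_cc_rev = joint_inv.copy()
    Q_cc_rev[dx:, dx:] -= np.eye(dy)   # subtract [0 0; 0 I_y], Eq. (28)
    return Q_cc, Q_cc_rev
```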

2A3. Covariance Equalization

CE was first introduced by Schaum and Stocker [5] and was further developed in [12, 13, 14]. Two variants have previously been employed—standard CE and optimal CE—but a third, called diagonalized CE, will be described below.

Standard CE. The standard CE algorithm is the most widely used variant, probably due to its simplicity. The first step is to transform either x or y, or both of them, so that the transformed image cubes have equal covariance. The most straightforward (but not the only) way to do this is to whiten x and y individually. The second step is to use the simple difference (SD) detector on the transformed data. That is, a straight difference is used:

$$e = Y^{-1/2} y - X^{-1/2} x. \tag{29}$$
Here, $A = X^{-1/2}$ and $B = Y^{-1/2}$, and, as with the straight difference algorithm, it is necessary that both images have the same number of channels ($d_x = d_y$). The anomalousness is given by
$$\mathcal{A}_{\rm CE-I}(x,y) = (Y^{-1/2} y - X^{-1/2} x)^T \left[ 2I - Y^{-1/2} C X^{-1/2} - X^{-1/2} C^T Y^{-1/2} \right]^{-1} (Y^{-1/2} y - X^{-1/2} x). \tag{30}$$
In terms of the whitened variables, defined in Eqs. (5, 6, 7), this can be written as
$$\mathcal{A}_{\rm CE-I}(\tilde{x},\tilde{y}) = (\tilde{y} - \tilde{x})^T \left[ 2I - \tilde{C} - \tilde{C}^T \right]^{-1} (\tilde{y} - \tilde{x}), \tag{31}$$
which is similar to the expression for straight differences in Eq. (17) (but with $X$, $Y$, and $C$ replaced by $\tilde{X} = I$, $\tilde{Y} = I$, and $\tilde{C}$, respectively). In particular,
$$\tilde{Q}_{\rm CE-I} = \begin{bmatrix} -I \\ I \end{bmatrix} \left[ 2I - \tilde{C} - \tilde{C}^T \right]^{-1} \begin{bmatrix} -I & I \end{bmatrix}. \tag{32}$$
Since $d_x = d_y$, there is no need to distinguish $I_x$ and $I_y$.

An alternative approach, instead of equalizing the two images to have unit covariance, is to equalize the covariance of the $x$ pixels to match that of the $y$ pixels. That is,

$$e' = y - Y^{1/2} X^{-1/2} x. \tag{33}$$
This provides exactly the same result, however, since $e' = Y^{1/2} e$ is a linear and invertible map. In particular, the anomalousness is given by
$$\mathcal{A}'(x,y) = e'^T \langle e' e'^T \rangle^{-1} e' = e^T Y^{1/2} \left\langle Y^{1/2} e e^T Y^{1/2} \right\rangle^{-1} Y^{1/2} e = e^T Y^{1/2} Y^{-1/2} \langle e e^T \rangle^{-1} Y^{-1/2} Y^{1/2} e = e^T \langle e e^T \rangle^{-1} e = \mathcal{A}(x,y). \tag{34}$$

Optimal CE. For the optimal CE, an orthonormal rotation matrix $R$ (satisfying $R R^T = I$) is introduced before the subtraction:

$$e = Y^{-1/2} y - R X^{-1/2} x = \tilde{y} - R \tilde{x}. \tag{35}$$
The optimal $R$ is the matrix which, subject to $R R^T = I$, minimizes $\mathrm{Var}(e)$; a derivation of the optimal $R$ is given by Schaum and Stocker [12]. As it is used in the above expression, that optimal value is given by the rotational part of the matrix $\tilde{C} = Y^{-1/2} C X^{-1/2}$. That is, if we write the singular value decomposition of
$$\tilde{C} = U J V^T, \tag{36}$$
where $U$ and $V$ satisfy $U^T U = I$ and $V^T V = I$, and $J$ is diagonal and nonnegative, then $R = U V^T$ is the rotational part. A more explicit expression is given by
$$R = (\tilde{C} \tilde{C}^T)^{-1/2} \tilde{C}. \tag{37}$$
Finally,
$$\tilde{\mathcal{A}}_{\rm CE-R}(\tilde{x},\tilde{y}) = (\tilde{y} - R \tilde{x})^T \left[ 2I - \tilde{C} R^T - R \tilde{C}^T \right]^{-1} (\tilde{y} - R \tilde{x}) \tag{38}$$
$$= (\tilde{y} - R \tilde{x})^T \left[ 2I - 2 (\tilde{C} \tilde{C}^T)^{1/2} \right]^{-1} (\tilde{y} - R \tilde{x}). \tag{39}$$
Thus
$$\tilde{Q}_{\rm CE-R} = \frac{1}{2} \begin{bmatrix} -R^T \\ I_y \end{bmatrix} \left[ I_y - (\tilde{C} \tilde{C}^T)^{1/2} \right]^{-1} \begin{bmatrix} -R & I_y \end{bmatrix} \tag{40}$$
with $R$ given in Eq. (37) as an expression that depends only on $\tilde{C}$.

Although standard CE requires that the two images have the same number of spectral channels, the introduction of $R$ provides a way to apply CE to images for which $d_x \ne d_y$. Here, $R$ is a matrix of size $d_y \times d_x$, and the difference $e$ will be of dimension $d_y$. When $d_y < d_x$, the matrix $R$ not only rotates but also projects down to a lower-dimensional space. In the situation that $d_y > d_x$, then $R R^T$ is a $d_y \times d_y$ matrix with rank $d_x$, and so it cannot be equal to $I$. A simple remedy is to swap the roles of $x$ and $y$; a more general approach is described below.
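A sketch of the CE-R construction follows; it computes $R$ from the singular value decomposition of $\tilde{C}$ as in Eqs. (36, 37) and then forms $\tilde{Q}_{\rm CE-R}$ via Eq. (40). The function name is this sketch's own, and it assumes $d_y \le d_x$ and singular values of $\tilde{C}$ strictly less than one, so that the middle matrix is invertible.

```python
import numpy as np

def ce_r_Q_whitened(Ct):
    """Optimal covariance equalization (CE-R) in whitened coordinates, Eq. (40)."""
    U, J, Vt = np.linalg.svd(Ct, full_matrices=False)   # Ct = U diag(J) Vt, Eq. (36)
    R = U @ Vt                                          # rotational part of Ct, Eq. (37)
    dy = Ct.shape[0]
    sqrt_CCt = U @ np.diag(J) @ U.T                     # (Ct Ct^T)^{1/2}
    middle = np.linalg.inv(np.eye(dy) - sqrt_CCt)
    left = np.vstack([-R.T, np.eye(dy)])
    right = np.hstack([-R, np.eye(dy)])
    return 0.5 * left @ middle @ right
```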

Diagonalized CE. One way to generalize the CE algorithm is to employ two rotation matrices, one for each image; specifically, write

$$e = S Y^{-1/2} y - R X^{-1/2} x = S \tilde{y} - R \tilde{x}, \tag{41}$$
where $R$ is a $d \times d_x$ matrix with $R R^T = I$, and $S$ is a $d \times d_y$ matrix with $S S^T = I$. Note that in the case $d = d_y$, then $S^T S = I$ and $S^T e = Y^{-1/2} y - S^T R X^{-1/2} x$, which is of precisely the same form (and which produces precisely the same result) as the optimal CE defined in Eq. (35).

This generalization provides two advantages. The first is just the notational convenience of having an algorithm that works for dy>dx, without having to swap the roles of x and y. The second and more substantial advantage is that it provides the opportunity to incorporate dimension reduction by taking d smaller than both dx and dy.

Begin with the observation that minimizing the variance

$$\mathrm{Var}(e) = \langle e^T e \rangle = \mathrm{trace}(\langle e e^T \rangle) = \mathrm{trace}\!\left( \left\langle (S \tilde{y} - R \tilde{x})(\tilde{y}^T S^T - \tilde{x}^T R^T) \right\rangle \right) \tag{42}$$
$$= \mathrm{trace}\!\left( 2I - S \tilde{C} R^T - R \tilde{C}^T S^T \right) \tag{43}$$
$$= \mathrm{trace}\!\left( 2I - S \tilde{C} R^T - (S \tilde{C} R^T)^T \right) \tag{44}$$
$$= 2d - 2\,\mathrm{trace}(S \tilde{C} R^T) \tag{45}$$
is the same as maximizing the trace of $S \tilde{C} R^T$.

To get a handle on the problem of maximizing $\mathrm{trace}(S \tilde{C} R^T)$, consider the singular value decomposition of $\tilde{C}$; that is, write $\tilde{C} = U J V^T$, where $U^T U = I$, $V^T V = I$, and $J$ is a diagonal matrix of nonnegative values. In this case, $S \tilde{C} R^T = S U J V^T R^T$. Since $R$, $S$, $U$, and $V$ are all orthogonal matrices, we know that $S U$ and $(R V)^T$ are orthogonal. Following the approach used in [12] to derive a single optimal rotation matrix, we can write

$$\mathrm{trace}(S \tilde{C} R^T) = \mathrm{trace}(S U J V^T R^T) \le \mathrm{trace}(J). \tag{46}$$
Now, if we choose $S = U^T$ and $R = V^T$, then $\tilde{C} = S^T J R$, and
$$\mathrm{trace}(S \tilde{C} R^T) = \mathrm{trace}(S S^T J R R^T) = \mathrm{trace}(J), \tag{47}$$
which indicates that the trace is maximized (and therefore that the variance is minimized). This choice of $R$ and $S$ then leads to a quadratic change detector that uses a matrix
$$\tilde{Q}_{\rm CE-D} = \frac{1}{2} \begin{bmatrix} -R^T \\ S^T \end{bmatrix} \left[ I - J \right]^{-1} \begin{bmatrix} -R & S \end{bmatrix}, \tag{48}$$
where $R$, $S$, and $J$ are obtained, as described above, from the singular value decomposition of $\tilde{C}$; in particular, $\tilde{C} = S^T J R$.

It is important to note that there are multiple solutions to this maximization. Suppose that $S_o$ and $R_o$ are a pair of orthogonal matrices that maximize $\mathrm{trace}(S \tilde{C} R^T)$. Let $U$ be an arbitrary $d \times d$ orthogonal matrix; take $S_1 = U S_o$ and $R_1 = U R_o$; then

$$\mathrm{trace}(S_1 \tilde{C} R_1^T) = \mathrm{trace}(U S_o \tilde{C} R_o^T U^T) = \mathrm{trace}(S_o \tilde{C} R_o^T) \tag{49}$$
shows that $R_1$, $S_1$ provide another solution. The particular choice of $U$ that leads to $S_1 = I$ or $R_1 = I$ produces the optimal CE solution. But other choices of $U$ are equally optimal and may provide other advantageous properties.

By diagonalizing the whitened cross-covariance matrix $\tilde{C}$, this method provides a principled approach for dimension reduction. If it is stipulated that the entries in $J$ are in decreasing order, from the largest to the smallest, then a natural dimension reduction is obtained by keeping only the largest elements. Let $J_d$ represent the diagonal matrix that includes the first $d$ elements of the diagonal matrix $J$. Write $U_d$ and $V_d$ as the truncations to $d$ columns of $U$ and $V$, respectively. Then $S = U_d^T$ is a $d \times d_y$ matrix; $R = V_d^T$ is a $d \times d_x$ matrix; and $R R^T = S S^T = I$ is the $d \times d$ identity matrix. In this formulation, Eq. (48) provides a dimension-reduced anomalous change detection algorithm.

But although the CE-D formulation provided this scheme for dimension reduction, the scheme is not limited to the CE-D algorithm. In particular,

$$x' = R X^{-1/2} x, \tag{50}$$
$$y' = S Y^{-1/2} y \tag{51}$$
provide a generalization of the whitening transforms in Eqs. (5, 6) because the transformed vectors are still white; that is,
$$\langle x' x'^T \rangle = I, \tag{52}$$
$$\langle y' y'^T \rangle = I. \tag{53}$$
Additionally, the cross covariances are diagonal, with the most correlated coordinates listed first. That is to say,
$$\langle y' x'^T \rangle = S Y^{-1/2} \langle y x^T \rangle X^{-1/2} R^T = S Y^{-1/2} C X^{-1/2} R^T = S \tilde{C} R^T = J. \tag{54}$$
These transformed variables are $d$-dimensional vectors that can be used as input for other change detection algorithms, such as the full-rank algorithms described below, or even for nonquadratic or non-covariance-based algorithms.
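A sketch of these dimension-reducing transforms is given below, under the assumptions already stated (whitenable covariances, singular values sorted in decreasing order); the function name, and the reuse of the inv_sqrt helper from the earlier sketch, are conveniences of this illustration.

```python
import numpy as np

def ce_d_transforms(X, Y, C, d):
    """Truncated CE-D (canonical-correlation-like) transforms of Eqs. (50, 51).

    Returns matrices (Rx, Sy) such that x' = Rx @ x and y' = Sy @ y are white
    d-dimensional vectors whose cross covariance is the diagonal J_d of Eq. (54).
    """
    Xs, Ys = inv_sqrt(X), inv_sqrt(Y)        # symmetric inverse square roots
    Ct = Ys @ C @ Xs                          # whitened cross covariance, Eq. (7)
    U, J, Vt = np.linalg.svd(Ct)              # singular values already in decreasing order
    S = U.T[:d, :]                            # d x dy, with S S^T = I
    R = Vt[:d, :]                             # d x dx, with R R^T = I
    return R @ Xs, S @ Ys                     # apply to mean-subtracted pixel vectors
```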

Although the derivation was by an entirely different path, this CE-D detector turns out to be equivalent to the multivariate alteration detection of Nielsen et al. [6]. The multivariate alteration detection algorithm is usually invoked in terms of canonical correlation analysis, but the above shows the connection to (and equivalence with, in the $d = d_y \le d_x$ case) the covariance equalization algorithm. It also shows how implementation is possible using only singular value decomposition (instead of full canonical correlation analysis).

2B. Full-Rank Approaches to Anomalous Change Detection

The various difference-based algorithms described above all employ a subtraction of the two (suitably transformed) images. This leads to a coefficient matrix Q of size dx+dy for which the rank is no larger than the minimum of dx and dy. In the two full-rank approaches described below, there is no subtraction step; instead the two images are treated as one large image. The resulting coefficient matrix is generally of full rank.

The two algorithms in this section can very naturally be applied to arbitrarily distributed data. When the data are Gaussian and the cross correlations linear (so that all the information about the joint distribution is in the covariance and cross-covariance matrices), then these algorithms reduce to the forms that are described below.

2B1. Straight Anomaly Detection

Here the two images are treated as a single image with dx+dy channels, and the RX approach [11] is used to find anomalies in that higher-dimensional space. (Note that this is different from applying RX individually to the two separate images.) Specifically,

$$e = \begin{bmatrix} x \\ y \end{bmatrix} \tag{55}$$
and anomalousness is given by
$$\mathcal{A}(x,y) = e^T \langle e e^T \rangle^{-1} e = \begin{bmatrix} x^T & y^T \end{bmatrix} \begin{bmatrix} X & C^T \\ C & Y \end{bmatrix}^{-1} \begin{bmatrix} x \\ y \end{bmatrix}. \tag{56}$$
The contours of constant $\mathcal{A}(x,y)$ are ellipsoids in $\mathbb{R}^{d_x + d_y}$, corresponding to the level curves of the probability density if the data were Gaussian.

The coefficient matrix is given by

$$Q_{\rm RX} = \begin{bmatrix} X & C^T \\ C & Y \end{bmatrix}^{-1} \tag{57}$$
or, in whitened coordinates,
$$\tilde{Q}_{\rm RX} = \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1}. \tag{58}$$
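In whitened coordinates this is a one-line construction; the sketch below (with a helper name of its own) builds the joint matrix blockwise and inverts it.

```python
import numpy as np

def rx_Q_whitened(Ct):
    """RX detector on the stacked pixel [x; y] in whitened coordinates, Eq. (58)."""
    dy, dx = Ct.shape
    joint = np.block([[np.eye(dx), Ct.T], [Ct, np.eye(dy)]])
    return np.linalg.inv(joint)
```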

2B2. Anomalous Change Detection with Hyperbolic Boundaries

The RX described above identifies when points $(x,y) \in \mathbb{R}^{d_x + d_y}$ are unusual. However, it does not distinguish between the situation when $x$ or $y$ might individually be unusual versus the situation when it is the change between $x$ and $y$ that is unusual. It was to take this distinction into account that a new method was developed in [7, 8].

Let $P(x,y)$ represent the underlying probability distribution for values $x$ and $y$ associated with corresponding pixels in an image. Write $P_x(x)$ and $P_y(y)$ as the projections of $P(x,y)$ onto the $x$ and $y$ subspaces; that is,

$$P_x(x) = \int P(x,y)\, dy, \tag{59}$$
$$P_y(y) = \int P(x,y)\, dx. \tag{60}$$
Then the anomalous changes are those pairs $(x,y)$ that are unlikely jointly, relative to how likely $x$ and $y$ are individually. That is,
$$\mathcal{A}(x,y) = \log P_x(x) + \log P_y(y) - \log P(x,y). \tag{61}$$

When the data distribution is Gaussian, these probability densities can be described in terms of the covariance and cross-covariance matrices $X$, $Y$, and $C$. Specifically, we can write

$$P_x(x) = (2\pi)^{-d_x/2} |X|^{-1/2} \exp\!\left( -\tfrac{1}{2} x^T X^{-1} x \right), \tag{62}$$
$$P_y(y) = (2\pi)^{-d_y/2} |Y|^{-1/2} \exp\!\left( -\tfrac{1}{2} y^T Y^{-1} y \right), \tag{63}$$
$$P(x,y) = (2\pi)^{-(d_x+d_y)/2} \left| \begin{matrix} X & C^T \\ C & Y \end{matrix} \right|^{-1/2} \exp\!\left( -\tfrac{1}{2} \begin{bmatrix} x^T & y^T \end{bmatrix} \begin{bmatrix} X & C^T \\ C & Y \end{bmatrix}^{-1} \begin{bmatrix} x \\ y \end{bmatrix} \right). \tag{64}$$
The anomalousness in Eq. (61) can then be written as
$$\mathcal{A}(x,y) = \mathrm{constant} - \tfrac{1}{2} x^T X^{-1} x - \tfrac{1}{2} y^T Y^{-1} y + \tfrac{1}{2} \begin{bmatrix} x^T & y^T \end{bmatrix} \begin{bmatrix} X & C^T \\ C & Y \end{bmatrix}^{-1} \begin{bmatrix} x \\ y \end{bmatrix}, \tag{65}$$
where the additive constant does not depend on $x$ or $y$. That constant and the prefactor of $1/2$ can be dropped [formally, we take $\mathcal{A}' = 2(\mathcal{A} - \mathrm{constant})$] to produce a measure of anomalousness given by
$$\mathcal{A}'(x,y) = \begin{bmatrix} x^T & y^T \end{bmatrix} Q_{\rm Hyper} \begin{bmatrix} x \\ y \end{bmatrix}, \tag{66}$$
where
$$Q_{\rm Hyper} = \begin{bmatrix} X & C^T \\ C & Y \end{bmatrix}^{-1} - \begin{bmatrix} X & 0 \\ 0 & Y \end{bmatrix}^{-1}; \tag{67}$$
or, in whitened coordinates,
$$\tilde{Q}_{\rm Hyper} = \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1} - \begin{bmatrix} I_x & 0 \\ 0 & I_y \end{bmatrix}. \tag{68}$$

Although this formalism can be applied to arbitrary distributions, its application in the case of a Gaussian distribution leads to the simple quadratic expression given here, which can of course be applied regardless of the actual distribution. It bears remarking that the matrix Q is not positive definite; there are negative as well as positive eigenvalues, and the boundaries of constant A(x,y) are hyperbolas in (x,y) space. For this reason, we refer to this as hyperbolic anomalous change detection. Another consequence of these negative eigenvalues is that, in contrast to both RX and the difference-based change detectors, the anomalousness A(x,y) measure is signed: it can be positive or negative.

A variant of hyperbolic anomalous change detection was found to be effective for detecting subpixel anomalous changes; here the $\theta \to 1$ limit of

$$\tilde{Q}_\theta = \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1} - \begin{bmatrix} I_x & \theta \tilde{C}^T \\ \theta \tilde{C} & I_y \end{bmatrix}^{-1} \tag{69}$$
was considered, and a detector given by the following coefficient matrix was proposed [15]:
$$\tilde{Q}_{\rm subpix} = \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1} \begin{bmatrix} 0 & \tilde{C}^T \\ \tilde{C} & 0 \end{bmatrix} \begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1}. \tag{70}$$
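Both hyperbolic detectors are simple to construct in whitened coordinates; a sketch (with illustrative helper names) follows.

```python
import numpy as np

def hyperbolic_Q_whitened(Ct):
    """Hyperbolic anomalous change detector of Eq. (68) in whitened coordinates."""
    dy, dx = Ct.shape
    joint_inv = np.linalg.inv(np.block([[np.eye(dx), Ct.T], [Ct, np.eye(dy)]]))
    return joint_inv - np.eye(dx + dy)

def subpixel_Q_whitened(Ct):
    """Subpixel hyperbolic detector of Eq. (70) in whitened coordinates."""
    dy, dx = Ct.shape
    joint_inv = np.linalg.inv(np.block([[np.eye(dx), Ct.T], [Ct, np.eye(dy)]]))
    off_diag = np.block([[np.zeros((dx, dx)), Ct.T], [Ct, np.zeros((dy, dy))]])
    return joint_inv @ off_diag @ joint_inv
```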

One issue that arises in the implementation of all these full-rank detectors is that the matrix that needs to be inverted is up to twice the size of the matrices that are inverted in the difference-based algorithms. Since matrix inversion generally scales as the third power of the matrix size, there is potentially a factor of 8 in computational expense. But an effective implementation can avoid this factor; employing the identity in Eq. (24), we need only compute the inverse of the smaller matrix $(I_y - \tilde{C}\tilde{C}^T)^{-1}$, and from that we can directly obtain

$$\begin{bmatrix} I_x & \tilde{C}^T \\ \tilde{C} & I_y \end{bmatrix}^{-1} = \begin{bmatrix} I_x + \tilde{C}^T (I_y - \tilde{C}\tilde{C}^T)^{-1} \tilde{C} & -\tilde{C}^T (I_y - \tilde{C}\tilde{C}^T)^{-1} \\ -(I_y - \tilde{C}\tilde{C}^T)^{-1} \tilde{C} & (I_y - \tilde{C}\tilde{C}^T)^{-1} \end{bmatrix}. \tag{71}$$
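For instance, the joint inverse could be assembled from the smaller inverse as follows (a sketch, not an optimized implementation):

```python
import numpy as np

def joint_inverse_blockwise(Ct):
    """Assemble [I_x Ct^T; Ct I_y]^{-1} from the single dy x dy inverse of Eq. (71)."""
    dy, dx = Ct.shape
    M = np.linalg.inv(np.eye(dy) - Ct @ Ct.T)     # the only matrix that is inverted
    top_left = np.eye(dx) + Ct.T @ M @ Ct
    top_right = -Ct.T @ M
    return np.block([[top_left, top_right], [top_right.T, M]])
```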

2C. Invariances

For all the algorithms discussed here, the results are invariant under transformation of both images by the same linear invertible map. That is, if the $x$ image has pixels $x$ replaced by $Lx$ and the $y$ image has pixels $y$ replaced by $Ly$, then the same pixels will be identified as anomalous changes. This result will be asserted here without proof, because the proofs are straightforward but tedious, and the details vary from algorithm to algorithm. It will be remarked, however, that this invariance is not fundamental to the anomalous change detection formulation, and it may not hold for more general nonquadratic algorithms.

This invariance to a single linear transform only makes sense when the two images have the same number of channels: that is, when $d_x = d_y$. When $d_x \ne d_y$, two of the algorithms are no longer defined: these are simple difference (SD) and the standard covariance equalization (CE-I). With the exception of these two, all the other algorithms that have been discussed have a stronger invariance property. For these algorithms, one can apply separate linear invertible transforms to the $x$ and to the $y$ images, and the same anomalies will be found. Again, this can be straightforwardly if tediously demonstrated, simply by replacing $x$ with $Lx$ and $y$ with $My$ in the equations for anomalousness.

This means that applying transforms such as whitening or (untruncated) principal components to the two images will not alter the results of the algorithms. The failure of SD and CE-I to exhibit this invariance also has practical consequences: when the covariances of the two images are quite different (especially if the images were taken over different spectral bands and/or with different instruments), the performance of these two algorithms will be the most compromised.
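This invariance is easy to check numerically; the sketch below does so for the hyperbolic detector on synthetic data, reusing the covariances and anomalousness helpers sketched earlier in Section 2. The toy data model, dimensions, and random seed are arbitrary choices of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, dx, dy = 5000, 8, 8
ximg = rng.normal(size=(npix, dx))
yimg = 0.7 * ximg + 0.3 * rng.normal(size=(npix, dy))    # correlated toy image pair

def hyper_scores(ximg, yimg):
    x, y, X, Y, C = covariances(ximg, yimg)
    joint = np.block([[X, C.T], [C, Y]])
    block_diag = np.block([[X, np.zeros((dx, dy))], [np.zeros((dy, dx)), Y]])
    Q = np.linalg.inv(joint) - np.linalg.inv(block_diag)  # Eq. (67)
    return anomalousness(x, y, Q)

# Apply separate invertible transforms L and M to the two images; the anomalousness
# values should agree to numerical round-off, so the same pixels are flagged.
L, M = rng.normal(size=(dx, dx)), rng.normal(size=(dy, dy))
print(np.abs(hyper_scores(ximg, yimg) - hyper_scores(ximg @ L.T, yimg @ M.T)).max())
```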

2D. Other Algorithms

The algorithms considered here are limited in (at least) two important ways. One is the treatment of the distributions of the data as Gaussian, which leads to simple quadratic expressions that depend only on the covariance statistics of the data. The hyperbolic algorithm described above in Subsection 2B2 was initially proposed as a machine learning approach that could be applied to arbitrary distributions [7]. Along with more general distributions, one can posit nonlinear relationships between the x and the y images, and Clifton [4] investigated the use of neural networks for learning these relationships. Clifton’s algorithm is a kind of generalized CC in that one attempts to predict the y image from the x image and looks at the residuals for evidence of change. Another kind of generalized CC was described in [8]; the relationship between this algorithm and the original machine learning framework that was proposed in [7] is reflected in the similarity seen in Eqs. (25, 28) and Eq. (68).

A second limitation is the treatment of pixels as independent, when in fact there are strong spatial correlations in the data. A number of authors have investigated various sensible ways to exploit this structure; the paper by Kasetkasem and Varshney [16] provides just one approach (Markov random fields) but points to a much larger literature on the subject. Any operational anomalous change detection algorithm would do well to employ spatial information, but it was excluded from this study on the grounds that spatial exploitation could be employed by any of the spectral-based algorithms that were investigated here.

3. Methodology

Ultimately, the test of any detection algorithm is how well it detects what is really interesting in real data. Unfortunately, anomaly detection is problematic on two counts. For one thing, while it is possible to formulate a mathematical definition of “anomalous,” what is “really interesting” is in the eye of the beholder. A second problem is that anomalies are rare—so it is difficult to find enough of them to do any kind of statistical comparison of algorithms. For these reasons, the investigations reported here will employ simulations to produce both pervasive differences and anomalous changes in hyperspectral imagery. These simulations have acknowledged limitations—among them that the range of pervasive differences and anomalous changes that might occur in the field cannot be expected to match the range that is simulated. For this reason, it would be premature to infer from these studies an absolute measure of how well they might perform under truly realistic conditions; what will be provided instead is a common test set with adequate quantities of true anomalous changes to provide a basis for comparing algorithms and assessing which ones work best under which conditions.

The simulation begins with a single hyperspectral image. This image need not itself be simulated (though it can be), and in the studies performed here, is taken from the 224-channel AVIRIS sensor [9, 10]; see Fig. 1. From this single hyperspectral image, two images are created, each of which is a (global) transformation of the original image. The differences between these two images are pervasive (they are present at every pixel) and are considered normal. Any anomalous changes that would be detected in these pixels would be false alarms.

Next, an anomalous change can be introduced at a single pixel in this image pair. There are a variety of strategies for doing that; they are described in Subsection 3B, and one purpose of this study is to investigate the effect that this variety has on the performance of different anomalous change detection algorithms. For a given threshold $t$, one can determine whether that anomalous change is detected [$\mathcal{A}(x,y) \ge t$] or missed [$\mathcal{A}(x,y) < t$]. Meanwhile, in computing $\mathcal{A}(x,y)$ over the rest of the (unchanged but pervasively different) pixel pairs, a false alarm rate is estimated as the fraction of pixels for which $\mathcal{A}(x,y) \ge t$. By making a large number of these image pairs, each pair containing a single anomalous pixel, an average probability of detection and false alarm rate can be estimated as a function of threshold $t$. Plotting the probability of detection against the false alarm rate provides a ROC curve that characterizes an algorithm’s performance.

For the study in this paper, however, a further efficiency was realized. Here, an anomalous change image pair was created in which every single pixel exhibited an anomalous change. (This could be done only because the algorithms under consideration do not use spatial information in their characterization of anomalousness.) Again, for a threshold $t$, the fraction of pixels in the anomalous image pair for which $\mathcal{A}(x,y) \ge t$ defines the probability of detection at that threshold. At that same threshold, a false alarm rate is estimated by the fraction of pixels in the pervasive-difference image pair for which $\mathcal{A}(x,y) \ge t$. And again, a ROC curve can be generated. These ROC curves are shown in Figs. 2, 3, 4, 5, 6, 7.

To summarize, two image pairs are generated: a pervasive-difference pair and an anomalous change pair. All the pixels in the pervasive-difference pair exhibit the same global pervasive difference (the specific differences used here are listed in Subsection 3A). The anomalous-change pair is generated by using the pervasive-difference pair as a starting point, but every pixel in that image is replaced by a pixel that exhibits an actual anomalous change, using one of the schemes described in Subsection 3B.

The implementation of the individual algorithms took advantage of their quadratic nature. For each algorithm, Q was computed by using the formulas that are summarized in Table 1, and anomalousness A(x,y) was computed at each pixel by using Eq. (14).
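The per-threshold bookkeeping just described amounts to comparing two score distributions; a sketch of that ROC estimation is given below (the function name, the number of thresholds, and the logarithmic spacing of false alarm rates are choices of this sketch).

```python
import numpy as np

def roc_curve(scores_pervasive, scores_anomalous, nthresh=200):
    """Estimate (false alarm rate, detection rate) pairs from anomalousness scores.

    scores_pervasive: A(x,y) over the pervasive-difference pair (no anomalous changes).
    scores_anomalous: A(x,y) over the pair in which every pixel is an anomalous change.
    """
    # Thresholds chosen so the false alarm rate is (roughly) logarithmically spaced.
    thresholds = np.quantile(scores_pervasive, 1.0 - np.logspace(-5, 0, nthresh))
    far = np.array([(scores_pervasive >= t).mean() for t in thresholds])
    pd = np.array([(scores_anomalous >= t).mean() for t in thresholds])
    return far, pd
```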

3A. Pervasive Differences

Four models were considered for pervasive differences in hyperspectral images. These include adding noise to one of the images, smoothing one of the images, splitting the image spectrally, and misregistering the image. Misregistration is one of the most common, and confounding, sources of pervasive change in imagery, particularly remote sensing imagery. A preliminary investigation of the problem of change detection under misregistration has previously been reported [17], but the results shown here cover a different set of cases.

Let $I_o$ be an initial image; we invoke two transforms of this image to produce two derived images: $I_1 = P_1(I_o)$ and $I_2 = P_2(I_o)$. These transforms define the pervasive changes. For the experiments reported here, the following pervasive changes were employed (an illustrative code sketch follows the list).

  1. Smooth the image: here, $P_1$ is the identity transform (that is, $I_1 = I_o$), and $P_2$ convolves the image with a Gaussian of width $r = 3$ pixels.
  2. Add noise to the image: again $P_1$ is the identity, and $P_2$ is the image with multiplicative noise. In particular,
    $$P_2(y) = y(1 + \epsilon \eta), \tag{72}$$
    where $\epsilon$ is a parameter that indicates the amount of noise, and $\eta \sim N(0, I)$ is an instantiation of zero-mean unit-variance Gaussian noise.
  3. Spectral split: here $P_1$ takes the first $k = 112$ channels from the image, and $P_2$ takes the remaining channels:
    $$P_1(y) = \begin{bmatrix} I_k & 0 \\ 0 & 0 \end{bmatrix} y, \tag{73}$$
    $$P_2(y) = \begin{bmatrix} 0 & 0 \\ 0 & I_{n-k} \end{bmatrix} y. \tag{74}$$
    This simulates the situation when changes are sought between images taken with different cameras, in different spectral ranges.
  4. Misregistration: here, both $P_1$ and $P_2$ first smooth the image by convolution with a Gaussian of width $r = 3$ pixels; then $P_2$ translates the smoothed image by one pixel in the long direction of the image (the horizontal direction as seen in Fig. 1). The purpose of this smoothing is to mimic the effect of a more realistic misregistration, which would more typically be a fractional pixel.
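The following sketch illustrates the four pervasive differences for a (rows, cols, bands) hyperspectral array named cube. It treats the Gaussian "width" as the filter's standard deviation and uses a placeholder noise amplitude eps; both are assumptions of this sketch, as is the use of scipy for the smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(cube, r=3):
    """Case 1: convolve each band with a Gaussian of width r pixels (width taken as sigma)."""
    return gaussian_filter(cube, sigma=(r, r, 0))

def add_noise(cube, eps=0.1, rng=np.random.default_rng()):
    """Case 2: multiplicative noise, P2(y) = y (1 + eps*eta); eps here is a placeholder value."""
    return cube * (1.0 + eps * rng.standard_normal(cube.shape))

def spectral_split(cube, k=112):
    """Case 3: first k bands go to one image, the remaining bands to the other."""
    return cube[:, :, :k], cube[:, :, k:]

def misregister(cube, r=3):
    """Case 4: smooth both copies, then shift one by a pixel horizontally.
    (np.roll wraps at the edge; a cropped shift would avoid that in practice.)"""
    smoothed = gaussian_filter(cube, sigma=(r, r, 0))
    return smoothed, np.roll(smoothed, 1, axis=1)
```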

3B. Anomalous Changes

Several models will be considered for anomalous changes. In these cases, we will keep the $x$ pixel as it is (unchanged) and consider different approaches for generating a new $y$ at that location; a code sketch of these models follows the list below. That is,

$$(x, y) \to (x, f(y)). \tag{75}$$

  1. To distinguish anomalous changes from outright anomalies, the anomalous changes will be simulated by using only normal pixels, and the best source of otherwise normal pixels is the image itself. So the anomalous pixel is chosen to be a $y$ value from a different (random) location in the $y$ image: $f(y) = \mathrm{random}(y)$.
  2. Subpixel anomalies: $f(y) = (1 - \alpha) y + \alpha\, \mathrm{random}(y)$, where $\alpha$ is the fraction of the pixel that is anomalous. In these experiments, $\alpha = 0.3$.
  3. Anomalous brightening or darkening: $f(y) = \alpha y$, for some $\alpha > 1$. Here, $\alpha = 2$ was used. Since this transformation is applied after the images have been mean subtracted, the effect is to make bright pixels brighter, and dark pixels darker.
  4. Anomalous darkening or brightening: Again, $f(y) = \alpha y$, but in this case, $\alpha = -1$. The effect is to make dark pixels brighter and bright pixels darker.
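A sketch of these anomalous-change models, applied to a mean-subtracted pixel array y of shape (npix, bands); the helper names and the random-generator argument are conveniences of this sketch.

```python
import numpy as np

def scramble(y, rng):
    """Model 1: replace each pixel with the y value from a different (random) location."""
    return y[rng.permutation(len(y))]

def subpixel_mix(y, rng, alpha=0.3):
    """Model 2: mix a fraction alpha of a randomly relocated pixel into y."""
    return (1.0 - alpha) * y + alpha * scramble(y, rng)

def rescale(y, alpha):
    """Models 3 and 4: f(y) = alpha*y on mean-subtracted data; alpha = 2 brightens bright
    pixels and darkens dark ones, while alpha = -1 does the reverse."""
    return alpha * y
```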

4. Results

Since both the pervasive differences and the anomalous changes are simulated, ROC curves can be plotted showing how the probability of detection varies with false alarm rate. Since anomaly detection presupposes a low false alarm rate regime, the plots shown are logarithmic in the false alarm rate.

4A. Comparison of Algorithms

Figure 2 is a representative plot; it shows ROC curves for a range of algorithms applied to the case where the anomalous changes are taken to be pixels plucked from a different part of the image; that is, using method 1 from the list in Subsection 3B. The motivation for this choice is to ensure that the pixel is by itself ordinary and that only its change is anomalous.

The SD detector actually performs reasonably well in Figs. 2a, 2d, which correspond to smoothing and misregistration. These are the cases for which the covariances of the two images are similar; so, even without any covariance-based correction, anomalous changes can be found. The smoothing does alter the covariance a little, however, and in Fig. 2a we see that SD is (slightly) outperformed by the CC and CE algorithms. The misregistration has almost no effect at all on covariance, and in Fig. 2d we see that the SD performance is virtually identical to that of CC and CE. In contrast, the noise in Fig. 2b and the spectral splitting in Fig. 2c both lead to substantially different covariances in the data cubes, and in those cases the performance of the SD algorithm is quite terrible.

To some extent, this behavior is also seen in the CE-I algorithm, but in that case, reasonable behavior is seen in Figs. 2a, 2b, 2d. While the multiplicative noise in Fig. 2b does increase the covariance, it does not rotate it (i.e., alter the channel-to-channel correlations) that much. In Fig. 2c, however, the two images, being in entirely different wavelength regimes, do not at all share a common covariance structure. It is in this case that the CE-I does so poorly.

Since there are two CCs, both are plotted in these ROC curves (with the same dashed curve style), and one of them often does much better than the other. It is difficult, however, to predict ahead of time which will be the better one. On the other hand, the ROC curves for the CE-R detector often seem to stride between the two ROC curves for the CC detectors. This suggests that the CE-R is generally a more reliable algorithm than the CC.

The full-rank RX detector, described in Subsection 2B1, generally performed relatively poorly, though it was not always the worst of the bunch. The algorithm looks for anomalies in the combined vector

$$e = \begin{bmatrix} x \\ y \end{bmatrix} \tag{76}$$
and is sensitive to anomalies in the individual images as well as to anomalous changes. Since the anomalous-change pixels in this simulation are drawn from the image itself, they are not by themselves anomalous, and so the RX loses power.

The Hyper algorithm, which was explicitly optimized to detect the kind of full-pixel anomalous changes that are simulated for Fig. 2, generally outperforms all the difference-based algorithms, in some cases by a substantial amount. The subpixel hyperbolic anomalous-change detector is something of a wild card, in some cases, particularly in the very low false alarm rate regime, outperforming all the other algorithms. But that is not always the case (e.g., when the pervasive difference is multiplicative noise), and even when it does do well at a low false alarm rate, it is often quite poor in the high detection rate part of the ROC curve.

In Fig. 3, the experiment is repeated using subpixel anomalies, described by item 2 in Subsection 3B. Most of the trends seen in Fig. 2 are repeated, except that—as the theory [15] predicts—the subpixel anomaly detector outperformed all the other algorithms.

In Fig. 4, based on an anomalous brightening (or darkening) of a pixel in one of the images (but not in the other), as described in item 3 of Subsection 3B, there is a kind of mixing of anomaly and anomalous change. In this case, the RX algorithm was more competitive, compared with other algorithms, than it was when the anomalous-change pixels were drawn from other parts of the image; however, the explicit change detection algorithms still performed better, with the hyperbolic algorithms doing substantially better for the noise [Fig. 4b] and spectral split [Fig. 4c] cases, and the subpixel hyperbolic detector doing very well in all four cases.

Figure 5 illustrates a situation similar to Fig. 4, except that the sign of the effect is reversed: bright pixels are darkened and dark pixels are brightened. This is item 4 in Subsection 3B. In all these cases, the hyperbolic anomaly detectors outperformed the rest. Again, the subpixel detector did very well, outperforming the full-pixel hyperbolic detector in the smoothing [Fig. 5a] and misregistration [Fig. 5d] cases.

4B. Comparison of Dimension Reduction Schemes

The simulation framework that enables a quantitative comparison of algorithm performance also provides a way to compare different schemes for data preprocessing in general, and for dimension reduction in particular. Algorithm performance is invariant to linear invertible transformations of the data, but the truncation of bands is not invertible, so it will affect performance. Two different dimension reduction schemes were considered: principal components analysis (PCA) and canonical correlation analysis (CCA). The latter was implemented following the discussion of the CE-D algorithm in Subsection 2A3. In the simulation framework, the PCA or CCA was computed from, and applied to, the pervasive-difference image pair. For PCA, the two image cubes in the pair were treated independently, and each was separately rotated and then truncated to five channels. For CCA, the image cubes were treated as a pair, and the rotation for each image was based on a computation that included the statistics of both images. As with PCA, once those rotations were applied, the top five channels of each image were kept and the rest were truncated. The anomalous-change image pairs were, as in the previous cases, derived from the pervasive-difference image pairs. Although experiments were performed with different numbers of reduced dimensions, part of the reason that the $d = 5$ case is shown is to emphasize how many dimensions can be truncated while still maintaining good anomalous change detection efficacy.

In Fig. 6, the two hyperspectral image cubes are separately truncated to just five channels each, corresponding to the first five principal components for the images. Generally, this degrades performance, though in some cases [specifically, in Fig. 6b, where noise is the pervasive difference], it improves performance. Overall, however, the general behavior that was exhibited with all the channels in Fig. 2 is echoed in Fig. 6.

In Fig. 7, the image cubes were again truncated to just d=5 channels, using the expressions in Eqs. (50, 51). Two effects are noticeable. One is that many of the algorithms that performed differently on the raw data perform almost identically on the preprocessed data. That is because the preprocessing takes care of what all the variants of the CE algorithms were doing and makes them equivalent to even the simple difference. Another even more noticeable effect is that performance overall is substantially improved, and not only for the CE algorithms that the dimension reduction was designed for, but for the full-rank algorithms as well. Interestingly, the subpixel hyperbolic algorithm did not maintain its (somewhat erratic, but often substantial) performance advantage over the other algorithms when the dimension reduction was applied.

Informally, dimension reduction is expected to improve detector performance when the truncated dimensions contain more of the pervasive-difference variance than they do of the anomalous-change (or target) variance. For CCA, the first components are those that are most correlated, so that the pervasive-difference variance is concentrated in the truncated dimensions. This is the motivation behind the use of CCA (advocated by Nielsen et al. [6]) for change detection.

5. Conclusion

A methodology has been introduced for evaluating anomalous change detection algorithms that addresses the inevitable difficulty with anomalies in general: that they are rare. This was done by simulating both the pervasive differences and the anomalous changes, and although simulations are always limited in their verisimilitude, what simulation provides here is an adequate quantity and variety of sample anomalies so that statistical comparisons can be made.

I have also attempted to provide a kind of taxonomy for covariance-based anomalous-change detectors. There are a number that have been proposed over the years, and their relationships to one another have not always been evident. While there is ample room for extensions—by exploiting non-Gaussian distributions, nonlinear correlations, and the nonindependence of neighboring pixels—these algorithms provide a useful starting point.

Due to the inevitable limitations of the methodology, specific conclusions about the relative performance of different algorithms are necessarily tentative. Nonetheless, there are two trends that are worth emphasizing. The first is that the new full-rank hyperbolic boundary algorithms introduced in [7, 15] and described in Subsection 2B2 consistently outperformed the other algorithms. The second is that dimension reduction via canonical correlation analysis produced substantial improvement for all the algorithms.

This work was supported by the Los Alamos Laboratory Directed Research and Development (LDRD) program. I am grateful to the reviewers for helpful suggestions that have improved this manuscript.


Table 1. Coefficient Matrices for the Quadratic Covariance-Based Anomalous Change Detectors


Fig. 1 Broadband image from the 150×500 pixel chip that was used in this study. This is from the AVIRIS image labeled f960323t01p02_r04_sc01, which is in Florida, near the Kennedy Space Center. There are 224 spectral channels, spanning the visible to the near infrared.


Fig. 2 ROC curves plotted for various anomaly detection algorithms: simple difference (SD), chronochrome (CC), covariance equalization (CE-I and CE-R), straight anomaly detection (RX), hyperbolic anomalous change detection (Hyper), and subpixel hyperbolic anomalous change (Subpix). These experiments used full-pixel anomalies generated according to method 1 from Subsection 3B. Four different pervasive changes were considered, as described in Subsection 3A: (a) smoothing, (b) noise, (c) spectral splitting, and (d) single-pixel misregistration.


Fig. 3 Same as Fig. 2, but here the anomalous change was subpixel, as described in item 2 of the list in Subsection 3B.


Fig. 4 Same as Fig. 2, but here the anomalous change was brightening or darkening of a pixel, as described in item 3 of the list in Subsection 3B.


Fig. 5 Same as Fig. 2, but here the anomalous change was darkening or brightening of a pixel, as described in item 4 of the list in Subsection 3B.


Fig. 6 Same as Fig. 2, but PCA was applied to the two images before seeking anomalous changes. Only the first five principal components were used.


Fig. 7 Same as Fig. 2, but CCA was employed as a preprocessing stage, so that the dimension was reduced to d=5 channels before change detection algorithms were applied. Note that, because of the substantially better performance achieved by this preprocessing, the vertical axis on these plots ranges only from 0.5 to 1.


1. R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: a systematic survey,” IEEE Trans. Image Process. 14, 294–307 (2005).

2. A. Schaum and A. Stocker, “Spectrally selective target detection,” in Proceedings of the International Symposium on Spectral Sensing Research (1997).

3. A. Schaum and A. Stocker, “Long-interval chronochrome target detection,” in Proceedings of the International Symposium on Spectral Sensing Research (1997).

4. C. Clifton, “Change detection in overhead imagery using neural networks,” Appl. Intell. 18, 215–234 (2003).

5. A. Schaum and A. Stocker, “Linear chromodynamics models for hyperspectral target detection,” in Proceedings of the 2003 IEEE Aerospace Conference (IEEE, 2003), Vol. 4, pp. 1879–1885.

6. A. A. Nielsen, K. Conradsen, and J. J. Simpson, “Multivariate alteration detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: new approaches to change detection studies,” Remote Sens. Environ. 64, 1–19 (1998).

7. J. Theiler and S. Perkins, “Proposed framework for anomalous change detection,” in Proceedings of the ICML Workshop on Machine Learning Algorithms for Surveillance and Event Detection (29 June 2006, Pittsburgh, Pa.), pp. 7–14.

8. J. Theiler and S. Perkins, “Resampling approach for anomalous change detection,” Proc. SPIE 6565, 65651U (2007).

9. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), Jet Propulsion Laboratory (JPL), National Aeronautics and Space Administration (NASA), http://aviris.jpl.nasa.gov/.

10. AVIRIS Free Standard Data Products, Jet Propulsion Laboratory (JPL), National Aeronautics and Space Administration (NASA), http://aviris.jpl.nasa.gov/html/aviris.freedata.html.

11. I. S. Reed and X. Yu, “Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution,” IEEE Trans. Acoust. Speech Signal Process. 38, 1760–1770 (1990).

12. A. Schaum and A. Stocker, “Estimating hyperspectral target signature evolution with a background chromodynamics model,” in Proceedings of the International Symposium on Spectral Sensing Research (2003).

13. A. Schaum and A. Stocker, “Hyperspectral change detection and supervised matched filtering based on covariance equalization,” Proc. SPIE 5425, 77–90 (2004).

14. A. Schaum and E. Allman, “Advanced algorithms for autonomous hyperspectral change detection,” in Proceedings of the 33rd Applied Imagery Pattern Recognition Workshop (AIPR'04) (IEEE Computer Society, 2004), pp. 33–38.

15. J. Theiler, “Subpixel anomalous change detection in remote sensing imagery,” in Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation (IEEE Computer Society, 2008), pp. 165–168.

16. T. Kasetkasem and P. K. Varshney, “An image change detection algorithm based on Markov random field models,” IEEE Trans. Geosci. Remote Sens. 40, 1815–1823 (2002).

17. J. Theiler, “Sensitivity of anomalous change detection to small misregistration errors,” Proc. SPIE 6966, 69660X (2008).
