
Transformation of point spread functions on an individual pixel scale


Abstract

In some types of imaging systems, such as imaging spectrometers, the spectral and geometric pixel properties like center wavelength, center angle, response shape and resolution change rapidly between adjacent pixels. Image transformation techniques are required to either correct these effects or to compare images acquired by different systems. In this paper we present a novel image transformation method that makes it possible to manipulate the geometric and spectral properties of each pixel individually. The linear transformation employs a transformation matrix to associate every pixel of a target sensor B with all related pixels of a source sensor A. The matrix is derived from the cross-correlations among all sensor A pixels and the cross-correlations between sensor A and sensor B pixels. We provide the mathematical background, discuss the propagation of uncertainty, demonstrate the use of the method in a case study, and show that the method is a generalization of the Wiener deconvolution filter. In the study, the transformation of images with random, non-uniform pixel properties to distortion-free images leads to errors that are one order of magnitude smaller than those obtained with a conventional approach.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Many scientific applications place high demands on the quality of data acquired with optical instruments such as remote sensing imaging spectrometers [1–4]. Consequently, great effort is put into building imaging spectrometers with uniform angular and spectral pixel properties [5–7]. Calibration measurements show, however, that some imaging spectrometers suffer significantly from spatial and spectral non-uniformities [8,9]. This manifests itself in different center wavelengths for pixels of the same spectral channel (smile) and different center angles for pixels of the same spatial position (keystone), but also in changes of the response function shapes and resolution within a few pixels. In other words, the Point Spread Functions (PSFs) of these instruments are not constant but change rapidly with detector location. Image transformation techniques are therefore required to correct such optical distortions and non-uniformities.

In this paper we propose a linear transformation of images from a sensor $A$ into images of a sensor $B$ considering pixel-specific image properties. Sensor $B$ can either be a distortion-free version of sensor $A$ or a different instrument which allows the comparison of images acquired with both sensors. The transformation employs a transformation matrix to relate every pixel of sensor $B$ to a set of pixels of sensor $A$. The transformation matrix can also be used to propagate measurement uncertainties from sensor $A$ to sensor $B$. The proposed transformation generalizes the well-known Wiener deconvolution to sensors exhibiting a rapid pixel-to-pixel variation of geometric or spectral responsivity. Though developed for the correction of image distortions in imaging spectrometers, the presented algorithm is very general in nature. In fact, it should be applicable to almost any ground-based, air-, and spaceborne camera and imaging spectrometer.

Due to the vast number of publications on Wiener filtering and deconvolution, a comprehensive literature review is beyond the scope of this paper. Since imaging spectroscopy for earth observation is our core focus, we begin with a brief review of related publications from this field of research.

Although many methods for the correction of pixel center angle and wavelength for imaging spectrometers exist, the number of techniques that also take the homogenization of image resolution into account is rather limited. Schläpfer et al. [2] investigate the homogenization of geometric response functions over all pixels of an imaging sensor with linear and nearest-neighbor interpolation. The method is based on a single parameter, the Full Width at Half Maximum (FWHM), and neglects further information about the shape of the response function. Ceamanos and Douté [10] address the issue of varying resolution and center wavelength of the spectral response by sharpening the spectral bands after adjusting the center wavelengths by cubic spline interpolation. Zhao et al. [11] and Jia et al. [12] suggest using an iterative method to create a superresolution image, which is subsequently convolved with the geometric and spectral model of a desired instrument. However, none of these publications discuss the propagation of uncertainties to the transformed images.

Besides the publications on imaging spectrometers discussed so far, there are several publications from other research fields that are relevant for our work. Nevas et al. [13] propose the simultaneous correction of stray light and bandpass effects of array spectroradiometers (non-imaging spectrometers). Therein the authors present a Tikhonov-regularized spectral deconvolution method. This approach seems to be similar to the method presented here, but a detailed comparison is hampered by the lack of mathematical details provided by the authors. Similar techniques are also applied by astronomers, who typically refer to them as “PSF matching” or “PSF homogenization” methods. Within the field of astronomy these techniques are usually applied before comparing images taken by two or more cameras with different optical properties. In such a scenario homogenization is necessary to suppress image differences caused solely by different camera properties rather than by actual variations in the light emitted by a star or other target of interest. Notable approaches in this category can be found in [14–19]. These publications served as an important inspiration during the development of this work. While the homogenization techniques described therein are in principle the solution we are looking for, they require pixel-to-pixel variations of the responsivity to be either negligible or representable by a slowly varying function, e.g. a low-order polynomial as in [17].

The remainder of this paper is structured as follows: In Sec. 2 we introduce the algorithm we developed for the transformation of the imaging properties of an optical sensor and discuss its most important characteristics. We proceed with a case study using simulated data in Sec. 3, in which we apply our method to synthetic data of a virtual sensor to demonstrate that the method works as expected and to analyze its behavior. For the interested reader we provide a rigorous mathematical derivation of all used equations in Appendix A, and we derive the Wiener deconvolution as a special case of these equations in Appendix B.

2. Theoretical background

In the following we discuss imaging systems that are focused at infinity, such as remote sensing instruments. In this case the mapping of the object plane of the imaging system can be reinterpreted as a mapping of the angles of the incident electromagnetic radiation. For other focus ranges the angular coordinates used in the following can be exchanged for other coordinate systems, such as Cartesian coordinates.

2.1 Pixel response function

The imaging properties of an optical system can be expressed in terms of its impulse response, the so-called PSF. The PSF describes the output of an optical system under illumination with a perfectly collimated, monochromatic point source. The PSF typically depends on the direction of the incident radiation and on its wavelength. If the PSF is measured for many different angles of incidence and wavelengths, one obtains the so-called Pixel Response Function (PRF) for each pixel. Each PRF describes one pixel’s sensitivity to radiation depending on the radiation’s direction of incidence and wavelength. Figure 1 visualizes the relation between PSF and PRF.

Fig. 1. Slice along one angular axis of an exemplary PSF, i.e. the signal created by a point source, here at an incident angle of 0 mrad. The orange dots indicate the signal of each pixel, while the blue curves are the Pixel Response Functions (PRFs), i.e. the responses of the pixels as functions of the incident angle. As can be seen, the pixel signal corresponds to the value of each PRF at the position of the point source.

Throughout this paper $f^{A}_k$ designates the PRF of pixel $k$ of sensor $A$, where $k$ is an index (a number) used to identify a pixel of sensor $A$. Even if $A$ is an imaging sensor with multiple spectral and geometric dimensions, we use a single number as index for each pixel. For an RGB camera we could e.g. start with the top left red (R) pixel, proceed with its right neighbor and then continue along the first row, followed by all pixels in the second row and so on. Once we reach the last red pixel in the last row we continue in the same fashion for all green (G) and blue (B) pixels. All parameters upon which a PRF depends are summarized into a single vector $\boldsymbol { {q} }$. A typical choice would be $\boldsymbol { {q} }=(\lambda , \theta , \phi )^{T}$, where $\lambda$ is the wavelength and the direction of incidence is defined by the angles $\phi$ and $\theta$. Using this notation, we can express the reading $s_k^{A}$ of sensor $A$’s $k$-th pixel by

$$s_k^{A} = \int_Q f_k^{A}({\boldsymbol {q} }) \, L_s({\boldsymbol {q} }) \, d^{3}q,$$
where $L_s(\boldsymbol { {q} })$ is the at-aperture radiance entering the sensor and $Q$ symbolises the parameter space of possible combinations of angles of incidence and wavelengths.

To keep the notation compact, we henceforth express scalar products of the above form as $\langle \cdot , \, \cdot \rangle$. In this notation the previous expression simply becomes

$$s_k^{A} = \langle f_k^{A}, \, L_s \rangle.$$
Following common conventions, we assume $f_k^{A}$ to be normalized such that
$$\langle f_k^{A}, \, 1 \rangle \equiv 1.$$
This implies that the sensor is radiometrically calibrated, because a spectrally and spatially flat at-aperture radiance ($L_s(\boldsymbol { {q} })=L_0=\textrm {const.}$) obviously yields a reading $s_k^{A} = L_0$ identical to the incident radiance. Furthermore, we assume that the measured signal $s_k^{A}$ has been corrected for non-linearity, dark current and any electronic offset. Note that keystone, smile and stray light can all be expressed in terms of the PRF and are thus treatable by the formalism discussed in the following sections.
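As a minimal numerical illustration of Eqs. (1)–(3) (our sketch, not code from the paper; the wavelength grid, the Gaussian PRF shape, and the radiance are assumptions), the reading of a single pixel with a purely spectral PRF can be computed as a discretized scalar product:

```python
import numpy as np

# Illustrative evaluation of Eq. (2), s_k = <f_k, L_s>, for one pixel with
# a purely spectral, Gaussian PRF. Grid and radiance are assumptions.
wavelengths = np.linspace(400.0, 700.0, 3001)      # nm, discretization of Q
d_lam = wavelengths[1] - wavelengths[0]

def gaussian_prf(lam, center, fwhm):
    """Gaussian PRF, normalized so that <f_k, 1> = 1 (Eq. (3))."""
    c = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    f = np.exp(-(lam - center) ** 2 / (2.0 * c ** 2))
    return f / (f.sum() * d_lam)                   # enforce unit integral

f_k = gaussian_prf(wavelengths, center=550.0, fwhm=10.0)
L_s = 1.0 + 0.5 * np.sin(wavelengths / 20.0)       # some at-aperture radiance

s_k = np.sum(f_k * L_s) * d_lam                    # discretized Eq. (1)
```

For a flat radiance $L_s = L_0$ this yields $s_k = L_0$, in line with the calibration argument above.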

2.2 Calculation of the transformation matrix

In this section we present a set of equations for the manipulation of a sensor’s PRFs. A detailed mathematical derivation of these equations can be found in Appendix A. The proposed algorithm is a generalization of the well-known Wiener deconvolution insofar as it extends the Wiener theory [20] to sensors with arbitrarily varying PRFs. While the classical Wiener deconvolution requires the PRF to be the same for all pixels, the method proposed here allows a dedicated PRF to be used for each pixel.

We assume that we have a sensor $A$ with given properties expressed in terms of its PRFs. Additionally we assume a sensor $B$ with different and typically more desirable PRFs. The PRFs of both sensors must be known. For real instruments these parameters are usually determined by calibration measurements in a dedicated laboratory. Our goal is to predict the result of a hypothetical measurement with sensor $B$ based on an actual measurement acquired with sensor $A$. It should be noted that we do not make any assumptions about the sensors beyond their respective PRFs. Thus $A$ and $B$ could be anything from a simple monochromatic camera, via a spectrometer with a single geometric pixel, to an imaging spectrometer. Obviously the conversion has limits: it is e.g. not possible to predict the output of a high resolution imaging spectrometer based on a measurement with a monochromatic camera. However, limited increases in geometric or spectral resolution (superresolution) are possible with the proposed method. Before we discuss the possibilities and limitations of the algorithm, however, we first present the essential equations.

Let sensors $A$ and $B$ consist of $M$ and $M^{\prime }$ individual pixels, respectively. Then we can express a measurement by each sensor through the vectors

$$\begin{aligned}{\boldsymbol {s} }^{A} := \left(s^{A}_1, \ldots, s^{A}_M\right)^{T} \end{aligned}$$
$$\begin{aligned}{\boldsymbol {s} }^{B} := \left(s^{B}_1, \ldots, s^{B}_{M^{\prime}}\right)^{T}. \end{aligned}$$
The reading of a single pixel of sensor $A$ is given by Eq. (2). The readings for sensor $B$ are obtained in complete analogy from
$$s_k^{B} = \langle f_k^{B}, \, L_s \rangle,$$
where $f_k^{B}, k= 1, \ldots , M^{\prime }$ are the PRFs of sensor $B$. To predict a measurement with sensor $B$ based on a measurement of sensor $A$, we require a transformation matrix $\boldsymbol K$ such that
$${\boldsymbol {s} }^{B} = \boldsymbol K \cdot {\boldsymbol {s} }^{A}.$$
The matrix $\boldsymbol K$ basically maps every pixel of sensor $B$ to the set of pixels of sensor $A$. At this point it is not clear if — or under which circumstances — such a matrix exists. We will return to this question later and proceed here under the assumption that a suitable $\boldsymbol K$ can be found. If sensor $A$’s PRFs overlap for neighboring pixels, then the readings of these pixels are correlated. In mathematical terms this correlation can be expressed in terms of the cross-correlation matrix $\boldsymbol C^{AA}$:
$$\boldsymbol C^{AA}_{ij} := \left\langle f^{A}_i, \, f^{A}_j \right\rangle.$$
In a similar fashion, pixels of sensor $B$ are correlated to pixels of sensor $A$, provided their respective PRFs overlap to some extent. Again, we can formalize this correlation with the cross-correlation matrix $\boldsymbol C^{BA}$:
$$\boldsymbol C^{BA}_{ij} := \left\langle f^{B}_i, \, f^{A}_j \right\rangle.$$
It may be intuitively clear that the linear map of measurements from sensor $A$ to sensor $B$ is equivalent to a linear map of the associated cross-correlation matrices. A formal derivation of this argument is found in Appendix A. This implies that
$$\boldsymbol C^{BA} = \boldsymbol K\, \boldsymbol C^{AA}.$$
Thus we can formally obtain $\boldsymbol K$ from the (least squares) inversion of Eq. (10), which yields
$$ \hat{\boldsymbol{K}}=\underset{\boldsymbol{K}}{\operatorname{argmin}}\left\{\left\|\boldsymbol{K} \boldsymbol{C}^{A A}-\boldsymbol{C}^{B A}\right\|_{2}^{2}\right\} $$
where $\hat {\boldsymbol K}$ symbolises the least squares estimator for the matrix $\boldsymbol K$ and $\lVert \cdot \rVert _2$ is the Euclidean norm. Since we expect this inversion to be ill-conditioned, we choose the regularized least-squares solution proposed by Tikhonov [2124], which yields
$$ \hat{\boldsymbol{K}}=\underset{\boldsymbol{K}}{\operatorname{argmin}}\left\{\left\|\boldsymbol{K} \boldsymbol{C}^{A A}-\boldsymbol{C}^{B A}\right\|_{2}^{2}+\gamma^{2}\|\boldsymbol{K} \boldsymbol{\Gamma}\|_{2}^{2}\right\} $$
$$\begin{aligned} = \boldsymbol C^{BA}\,\boldsymbol C^{AA} \left( \boldsymbol C^{AA} \boldsymbol C^{AA} + \gamma^{2} {\boldsymbol {\Gamma} }^{\dagger} {\boldsymbol {\Gamma} } \right)^{-1}. \end{aligned}$$
Here $\gamma$ and $\boldsymbol { {\Gamma } }$ are the regularization parameter and the Tikhonov regularization matrix, respectively. Both can be chosen to tune the balance between data fidelity and the penalization of undesired solutions. Typical choices for the regularization matrix are e.g. the identity matrix or a discrete Laplacian operator. Since PRFs normally have a smooth shape, a Laplacian that penalizes high frequencies seems to be a suitable first choice. Thus a two dimensional Laplacian is used in Sec. 3.4. However, other choices are possible and may yield better results depending on the exact problem at hand. The interested reader is referred to [24] and references therein for additional considerations regarding various regularization strategies. We would like to point out that Eq. (13) can be solved independently for each row of $\hat {\boldsymbol K}$ and $\boldsymbol C^{BA}$ and therefore lends itself to an individual treatment of each sensor $B$ pixel. Moreover the transformation matrix $\hat {\boldsymbol K}$ has to be calculated only once for a given sensor combination ($A$, $B$) before it can be used to correct any measurement $\boldsymbol { {s} }^{A}$ with a simple matrix multiplication following Eq. (7).
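A minimal sketch of Eq. (13) (our illustration; matrix names and shapes are assumptions) computes the estimator with a single linear solve instead of an explicit inverse:

```python
import numpy as np

def transformation_matrix(C_AA, C_BA, Gamma, gamma2):
    """Tikhonov-regularized estimate of K following Eq. (13).

    C_AA:  (M, M)  cross-correlations of sensor A pixels, Eq. (8)
    C_BA:  (M', M) cross-correlations of sensor B and A pixels, Eq. (9)
    Gamma: (M, M)  regularization matrix, gamma2: regularization parameter
    """
    lhs = C_AA @ C_AA + gamma2 * (Gamma.conj().T @ Gamma)
    # K satisfies K @ lhs = C_BA @ C_AA; transpose for a standard solver.
    # Each column of the solve (row of K) belongs to one sensor-B pixel
    # and could equally be computed independently.
    return np.linalg.solve(lhs.T, (C_BA @ C_AA).T).T
```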

2.3 Transformation matrix constraints

As discussed in Sec. 2.1, the PRFs are normalized. The normalization condition Eq. (3) leads to a constraint on the rows of the transformation matrix $\boldsymbol K$. We can derive this constraint by investigating the special case of a spectrally and spatially flat illumination $\left (L_s(\boldsymbol { {q} })=L_0=\textrm {const.}\right )$ on both sensors. In this case we obtain the signals

$$\begin{aligned}s_k^{A} = L_0,\; k=1, \ldots, M \end{aligned}$$
$$\begin{aligned}s_k^{B} = L_0,\; k=1, \ldots, M^{\prime}. \end{aligned}$$
Inserting these into Eq. (7) immediately leads to
$$\sum_{j=1}^{M} K_{kj} = 1, \quad 1\leq k\leq M^{\prime}.$$
In other words, the sum over each row of the transformation matrix $\boldsymbol K$ has to be unity. Since $\boldsymbol K$ is independent of the incident radiation $L_s(\boldsymbol { {q} })$, this result can be used as a general normalization condition for the matrix, e.g. to suppress numerical noise.
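In code, the constraint can be imposed on a computed $\hat{\boldsymbol K}$ along the following lines (a sketch, assuming the matrix is stored as a NumPy array):

```python
import numpy as np

def normalize_rows(K):
    """Enforce the row-sum constraint of Eq. (16), sum_j K_kj = 1,
    e.g. to suppress numerical noise in a computed transformation matrix."""
    return K / K.sum(axis=1, keepdims=True)
```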

2.4 Propagation of uncertainty

If we interpret the measurement $\boldsymbol { {s} }^{A}$ as a random variable with expectation value $E\left [\boldsymbol { {s} }^{A}\right ]= \bar {\boldsymbol { {s} }}^{A}$ and covariance matrix

$$\Sigma^{A}_{ij} = E \left[ \left(s_{i}^{A} - \bar{s}_{i}^{A}\right) \left(s_{j}^{A} - \bar{s}_{j}^{A}\right) \right]$$
then $\boldsymbol { {s} }^{B}$ is also a random variable with expectation value (mean) $E\left [\boldsymbol { {s} }^{B}\right ] = \boldsymbol K \cdot \bar {\boldsymbol { {s} }}^{A}$ and covariance matrix
$${\boldsymbol {\Sigma} }^{B}= \boldsymbol K {\boldsymbol {\Sigma} }^{A} \boldsymbol K^{\dagger}.$$
Eq. (18) can be used to propagate the uncertainty of measurement $\boldsymbol { {s} }^{A}$ to measurement $\boldsymbol { {s} }^{B}$. In the special case, where the covariance matrix of measurement $\boldsymbol { {s} }^{A}$ can be decomposed into a product $\boldsymbol { {\Sigma } }^{A}=\boldsymbol D^{A} \boldsymbol D^{A\dagger }$, the same is true for the covariance matrix of measurement $B$. We can write $\boldsymbol { {\Sigma } }^{B}=\boldsymbol D^{B} \boldsymbol D^{B\dagger }$ with
$${\boldsymbol {D} }^{B}= \boldsymbol K\,\boldsymbol D^{A}.$$
Noise of the transformation matrix $\boldsymbol K$ is generally of minor importance, since the PRFs can usually be measured with high accuracy and numerical noise can be suppressed, as shown in Sec. 2.3. Because Eq. (7) is a linear transformation of the signals $s^{A}$, any systematic change of the PRFs $f^{A}$ of sensor $A$ compared to the time of calibration is linearly transformed to the signals $s^{B}$. It is also noteworthy that noise in the PRF measurements results in a systematic bias in the transformation matrix $\boldsymbol K$.
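The following toy check (our illustration with random placeholder matrices, not values from the paper) verifies Eqs. (18) and (19) numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
M, M_prime = 6, 4
K = rng.normal(size=(M_prime, M))      # placeholder transformation matrix
R = rng.normal(size=(M, M))
Sigma_A = R @ R.T                      # some symmetric positive definite covariance

Sigma_B = K @ Sigma_A @ K.T            # Eq. (18)

# Eq. (19): if Sigma_A = D_A D_A^T (here via a Cholesky factor), the
# factor itself transforms linearly.
D_A = np.linalg.cholesky(Sigma_A)
D_B = K @ D_A
assert np.allclose(Sigma_B, D_B @ D_B.T)
```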

3. Case study

3.1 Test method

We test the presented algorithm by defining two synthetic camera sensors, a source sensor $A$ and a target sensor $B$, with their PRFs $f_k^{A}$ and $f_k^{B}$. With these we calculate the cross-correlation matrices $\boldsymbol C^{AA}$ and $\boldsymbol C^{BA}$ according to Eqs. (8) and (9), respectively. The transformation matrix $\boldsymbol K$ is then computed with Eq. (13). To judge the performance of the algorithm, we use the method depicted in Fig. 2. A test scene is sampled with both sensor models $\left (f_k^{A}\right .$ and $\left .f_k^{B}\right )$ according to Eqs. (2) and (6), which yield the images $\boldsymbol { {s} }^{A}$ and $\boldsymbol { {s} }^{B}$. Finally, we apply the transformation matrix $\boldsymbol K$ to the source image $\boldsymbol { {s} }^{A}$ following Eq. (7) and compare the resulting transformed image with the target image $\boldsymbol { {s} }^{B}$.

Fig. 2. Algorithm test method with two synthetic sensors. A test scene is sampled with the PRFs of the source and the target sensor creating the source and target image, respectively. The source image is then transformed to an image with properties close to the target image. Finally, the transformed and target image are compared.

For a simpler description of test scenes and sensors, we only consider the transformation of two-dimensional geometric image properties here. However, the results do not change if instead a spectral coordinate system is assigned to one of the geometric axes as long as the relative position and the overlapping of the response functions do not change.

3.2 Test scenes

We define two synthetic test scenes, see Fig. 3. Both scenes consist of point sources which are defined by their location in a two-dimensional angle space and have relative intensities between $0$ and $1$. Test scene 1 consists of 13 point sources on a dark background. This is equivalent to stars surrounded by deep space. Test scene 2 consists of a checkerboard pattern, which tests the algorithm performance especially near the high-contrast edges where transformation errors are expected to occur. We selected these test scenes due to their high frequency components; transformation algorithms usually find high-frequency components more challenging to handle than low-frequency ones. A sketch of how such scenes could be generated is given below.
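For orientation, the scenes could be generated along the following lines (our sketch; the exact positions, intensities, and checkerboard period are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scene 1: point sources as (theta, phi, relative intensity) tuples
point_sources = [(rng.uniform(0.0, 3.0e-3),     # theta in rad
                  rng.uniform(0.0, 1.5e-3),     # phi in rad
                  rng.uniform(0.2, 1.0))        # relative intensity
                 for _ in range(13)]

# Scene 2: checkerboard radiance as a function of the incidence angles
def checkerboard(theta, phi, period=0.5e-3):
    return float((np.floor(theta / period) + np.floor(phi / period)) % 2)
```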

Fig. 3. Synthetic scenes used for performance tests: (a) 13 point sources on a dark background. (b) A checkerboard pattern.

3.3 Synthetic test sensors

We define the synthetic test sensors $A$ and $B$ by their two-dimensional PRFs $f_k^{A}(\theta , \phi )$ and $f_k^{B}(\theta , \phi )$ with normalized Gaussian functions $h$ for the two angular axes, which yields

$$f_k(\theta, \phi) = h(\theta, \theta_{\mathrm{c}, k}, \Delta \theta_k) h(\phi, \phi_{\mathrm{c}, k}, \Delta \phi_k) \textrm{,}$$
where $\theta _{\mathrm {c}, k}$ and $\phi _{\mathrm {c}, k}$ are the center angles and $\Delta \theta _k$ and $\Delta \phi _k$ are the FWHMs of the Gaussians of pixel $k$. The normalized Gaussian functions are defined as
$$h(x, x_c, \Delta x) = a \exp \left( -\frac{ (x - x_c)^{2} }{ 2c^{2} } \right) \textrm{, } c = \frac{ \Delta x }{ 2 \sqrt{ 2 \ln 2} } \textrm{, } a = \frac{ 1 }{ \sqrt{2} \left| c \right| \sqrt{\pi} } \textrm{.}$$
Both sensors have 31 pixels along the y-axis and 61 pixels along the x-axis. Sensor $A$ has response functions with FWHMs randomly ranging from 0.1 mrad to 0.125 mrad, see Fig. 4. The sampling distance along both axes is 0.05 mrad, resulting in sampling ratios (pixels per FWHM) of 2 to 2.5. The response functions of sensor $B$ are constant along both axes with sampling distances of 0.05 mrad and FWHMs of 0.125 mrad, giving a constant sampling ratio of 2.5. The PRF center positions of sensor $B$ are shifted along both axes by 50 % of the sampling distance with respect to sensor $A$:
$$\begin{aligned}\theta_{\mathrm{c}, k}^{B} = \theta_{\mathrm{c}, k}^{A} - \frac{0.05\,\textrm{mrad}}{2} \end{aligned}$$
$$\begin{aligned}\phi_{\mathrm{c}, k}^{B} = \phi_{\mathrm{c}, k}^{A} - \frac{0.05\,\textrm{mrad}}{2}. \end{aligned}$$
This offset maximizes the distance between the sampling points of the two sensors, so the largest interpolation errors are to be expected. A sketch of these sensor models is given below.
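The sensor models of this section can be written down compactly (our sketch; the grid values follow the text, while the random seed and array layout are assumptions):

```python
import numpy as np

def h(x, x_c, fwhm):
    """Normalized Gaussian of Eq. (21)."""
    c = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    a = 1.0 / (np.sqrt(2.0) * abs(c) * np.sqrt(np.pi))
    return a * np.exp(-(x - x_c) ** 2 / (2.0 * c ** 2))

def prf(theta, phi, theta_c, phi_c, d_theta, d_phi):
    """Separable two-dimensional PRF of Eq. (20)."""
    return h(theta, theta_c, d_theta) * h(phi, phi_c, d_phi)

# Center angles and FWHMs along the x-axis, as described in the text
rng = np.random.default_rng(2)
d = 0.05e-3                                    # sampling distance in rad
theta_c_A = np.arange(61) * d                  # sensor A center angles
fwhm_A = rng.uniform(0.1e-3, 0.125e-3, 61)     # random FWHMs of sensor A
theta_c_B = theta_c_A - d / 2.0                # sensor B, shifted, Eq. (22)
fwhm_B = np.full(61, 0.125e-3)                 # constant FWHM of sensor B
```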

Fig. 4. Response functions of sensor $A$ and sensor $B$ of the central y-axis pixel. (a) Sensor $A$ response functions have randomly distributed FWHMs ranging from 0.1 mrad to 0.125 mrad. (b) The response functions of sensor $B$ have constant FWHMs of 0.125 mrad.

The sampling of both test scenes with sensor $A$ and sensor $B$ is depicted in Fig. 5. Due to the random response function widths, the sensor $A$ images appear noisy.

Fig. 5. Test scenes 1 and 2 sampled with sensors $A$ and $B$. The scale is in Arbitrary Units (AU). (a) Scene 1 sampled with sensor $A$. (b) Scene 2 sampled with sensor $A$. (c) Scene 1 sampled with sensor $B$. (d) Scene 2 sampled with sensor $B$.

3.4 Transformation matrix generation

We assume that the radiometric calibration is performed as a separate procedure before the PRF transformation. This means that the integrals of all PRFs are normalized to unity. Before determining the cross-correlations, we calculate the discrete representations of the response functions with the trapezoidal rule. Since this process may introduce additional noise, i.e. the integrals of the discrete responses may differ slightly from the actual integrals, we normalize the results again to unity. The cross-correlations $\boldsymbol C^{AA}$ and $\boldsymbol C^{BA}$ are calculated according to Eqs. (8) and (9), respectively. The transformation matrix is then computed from Eq. (13).

To limit computation time and memory usage, we define subkernels for each pixel of sensor $B$ before calculating the matrices $\boldsymbol C^{BA}$ and $\boldsymbol C^{AA}$, i.e. we treat them as sparse matrices. This is possible due to the limited extent of the PRFs. In the case study these subkernels have a size of $15 \times 15$ sensor $A$ pixels per sensor $B$ pixel, since the maximum distance of PRF interaction is here less than 15 pixels. The size is different for every sensor pair and depends on the actual extent of the PRFs of sensors $A$ and $B$. If stray light is treated, much larger subkernels may be required due to the long-range interaction between pixels. When PRFs do not overlap, their cross-correlation is zero; this is enforced in matrix $\boldsymbol C^{BA}$ for any pair of pixels not sharing the same subkernel. Obviously, subkernels that are too small cause additional errors. The use of subkernels becomes especially necessary for real instruments that have a large number of pixels. For instance, in the case of our HySpex VNIR-1600 imaging spectrometer with 1600 geometric pixels and 160 spectral channels, the cross-correlation matrix $\boldsymbol C^{AA}$ would otherwise consist of $(1600 \times 160)^{2}$ elements. These subkernels are then used to solve the linear equation system individually for each pixel of sensor $B$, as sketched below.
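The per-pixel solve could look as follows (our sketch; `window_indices` is a hypothetical helper returning the flat indices of the sensor $A$ pixels inside the $15 \times 15$ window around sensor $B$ pixel $k$):

```python
import numpy as np

def row_of_K(k, C_AA, C_BA, Gamma, gamma2, window_indices):
    """Solve Eq. (13) restricted to the subkernel of sensor-B pixel k."""
    idx = window_indices(k)                 # assumed helper, see lead-in
    C_AA_loc = C_AA[np.ix_(idx, idx)]       # local block of Eq. (8)
    c_BA_loc = C_BA[k, idx]                 # local row of Eq. (9)
    G_loc = Gamma[np.ix_(idx, idx)]         # local regularization block
    lhs = C_AA_loc @ C_AA_loc + gamma2 * (G_loc.T @ G_loc)
    row = np.linalg.solve(lhs.T, c_BA_loc @ C_AA_loc)
    return idx, row     # scatter into the k-th row of a sparse K
```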

Since the simulated camera is radiometrically calibrated, i.e. the integrals of the PRFs are normalized to unity, we apply Eq. (16) to the transformation matrix $\boldsymbol K$. This reduces errors introduced by the finite accuracy of the numerical calculations.

Following [19], we choose a second-order differential operator (two dimensional discrete Laplacian) for the regularization matrix $\boldsymbol { {\Gamma } }$. The equivalent operator $\boldsymbol { {d} }$ for the discrete convolution of a two-dimensional image is

$${\boldsymbol {d} } = \left[ \begin{matrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{matrix} \right] \textrm{.}$$
This translates to matrix representation by
$${\boldsymbol {\Gamma} }_{ij} = \begin{cases} -1 & \textrm{for } i = j - N_\mathrm{c}\\ -1 & \textrm{for } i = j-1 \neq r N_\mathrm{c} \\ 4 & \textrm{for } i = j \\ -1 & \textrm{for } i = j+1 \neq r N_\mathrm{c}+1 \\ -1 & \textrm{for } i = j + N_\mathrm{c}\\ 0 & \mathrm{else} \end{cases} \textrm{,}$$
where $N_{\mathrm {c}}$ is the number of image columns and $r$ is the row index of the 2D image. The conditions $i = j-1 \neq r N_{\mathrm {c}}$ and $i = j+1 \neq r N_{\mathrm {c}} + 1$ account for row switching. Without them, the rightmost pixel of a row in the 2D image would regularize the leftmost pixel of the next row and vice versa.
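Such a matrix could be assembled as follows (our illustration; the function name and dense storage are assumptions, and a sparse format would be preferable in practice):

```python
import numpy as np

def laplacian_matrix(N_r, N_c):
    """Discrete 2D Laplacian of Eq. (25) for an N_r x N_c image whose
    pixels are flattened row by row. Neighbor terms are suppressed at the
    row boundaries, implementing the row-switching conditions."""
    n = N_r * N_c
    G = np.zeros((n, n))
    for i in range(n):
        r, c = divmod(i, N_c)
        G[i, i] = 4.0
        if c > 0:
            G[i, i - 1] = -1.0        # left neighbor, same row
        if c < N_c - 1:
            G[i, i + 1] = -1.0        # right neighbor, same row
        if r > 0:
            G[i, i - N_c] = -1.0      # pixel above
        if r < N_r - 1:
            G[i, i + N_c] = -1.0      # pixel below
    return G
```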

As regularization parameter we use $\gamma ^{2} = 10^{-15}$. The parameter was found by trial and error; the calculated transformation matrix is not very sensitive to it, as changing the parameter by one order of magnitude has little effect. For comparison with classical approaches, we calculated a constant transformation kernel by averaging the two-dimensional subkernels of the transformation matrix $\boldsymbol K$.

3.5 Test results

In the representation of Fig. 5, the transformed images of sensor $A$ appear identical to the images of sensor $B$. We compare the transformed images of sensor $A$ with the images of sensor $B$ by determining their absolute difference, see Fig. 6.

Fig. 6. Differences between sensor $B$ and transformed sensor $A$ images. Scene 1 (a) and scene 2 (b) images transformed with the constant kernel. Scene 1 (c) and scene 2 (d) images transformed with the transformation matrix. Images transformed with the constant kernel show errors of up to 6 % due to the non-uniform response functions. These errors are significantly lower if the transformation matrix is used for the transformation.

The images transformed with the constant kernel show differences that are one order of magnitude larger, see Fig. 6(a) and (b), than images to which the transformation matrix was applied, see Fig. 6(c) and (d). At two edges larger differences are visible. This is caused by the shift of the response functions of sensor $B$ with respect to sensor $A$ by half a pixel to the left and to the top. Hence, the response functions of sensor $B$ are not entirely covered by response functions of sensor $A$. This means that relevant information for the transformation is missing, which causes unavoidable errors.

3.6 Impact on noise

As we have discussed in Sec. 2.4, the covariances of a transformed image can be calculated by Eq. (18) from the covariances of its source image. Assuming that the noise of the synthetic sensor is the same for all pixels, we can calculate the relative noise of sensor $B$ compared to sensor $A$ by replacing the covariance matrix $\boldsymbol { {\Sigma } }^{A}$ with the unit matrix $\boldsymbol I_{M}$ in Eq. (18), yielding

$${\boldsymbol {\Sigma} }^{B} = \boldsymbol K \boldsymbol I_{M} \boldsymbol K^{\dagger}.$$
The diagonal of the covariance matrix $\boldsymbol { {\Sigma } }^{B}$ then contains the variances relative to sensor $A$. The relative change of the standard deviation is shown in Fig. 7. As we can see, the noise of each pixel of the transformed images is below 50 % of the noise of the pixels of sensor $A$. Since sensor $A$ and sensor $B$ have a sampling ratio greater than two, the transformation matrix partly acts as a low-pass filter. This means that high-frequency noise is suppressed by the transformation. At two image edges higher noise is expected, due to the image shift and the missing information there.
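In code, the relative standard deviation per pixel is simply the square root of the diagonal of $\boldsymbol K \boldsymbol K^{\dagger}$ (a sketch; $\boldsymbol K$ is a random placeholder here):

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.normal(size=(4, 6), scale=0.2)     # placeholder transformation matrix

# Eq. (26) with Sigma_A = I: diag(K K^T) holds the variances relative to
# sensor A; its square root is the relative standard deviation per pixel.
relative_std = np.sqrt(np.einsum('ij,ij->i', K, K))
```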

Fig. 7. Relative noise of images transformed from sensor $A$ to sensor $B$ assuming a flat illumination of sensor $A$.

4. Summary and conclusion

We have proposed a novel method that allows the transformation of images of an optical imaging sensor $A$ to images of a sensor $B$. This new approach can be used to correct optical distortions, to homogenize spectral and spatial resolution, and to make images of different instruments comparable. The core idea of the method is to apply a transformation matrix on sensor $A$ images that maps every pixel of sensor $B$ to the set of pixels of sensor $A$. This matrix is derived from two cross-correlation matrices: one that describes correlations between the pixels of sensor $A$ and one that correlates each pixel of sensor $B$ with those of sensor $A$. Since this problem tends to be ill-posed, we assume smooth response functions and use Tikhonov regularization to suppress high frequency noise. It turns out that the derived transformation matrix has a low sensitivity to the regularization parameter, which can be varied within an order of magnitude without causing significant changes.

An advantage of the method is that the transformation matrix for a given sensor pair ($A$, $B$) needs to be calculated only once before it can be applied to many images with a simple matrix multiplication at comparatively low computational cost. Additionally, we developed an approximate matrix inversion for sparse problems based on sub-matrices, which considerably reduces the computational cost of the transformation matrix computation for sensors with many individual pixels (e.g. hyperspectral imaging spectrometers). This approach is applicable as long as there are no long-range interactions between pixels beyond a certain distance.

In addition, the covariances of a target image can be easily calculated from the covariances of the source image. Our method can handle highly non-uniform geometric and spectral pixel properties, which are different for every pixel. This is evidenced by a case study for both point and extended sources. In the case study, we investigated the transformation of two-dimensional images. However, the formalism of Sec. 2 does not depend on the dimensionality of the data, so we expect the transformation to extend to three- or higher-dimensional data sets.

In cases of real instruments, the method may be limited by the accuracy of the geometric and spectral instrument model, which is usually determined by instrument calibration. Provided that an instrument is thoroughly calibrated, the method can significantly improve data quality. Considering the presented approach in instrument design may also allow a relaxation of the specifications for image distortion and uniformity.

An interesting aspect of the method is its capability to theoretically increase the resolution of sensor $A$, if the PRFs of sensor $B$ are chosen to be narrower than those of sensor $A$. Obviously an increase in the information content of the data is not possible, so that information theory poses bounds on the extent to which superresolution is achievable. We performed some case studies with airborne hyperspectral data, which indicate promising results if there exists sufficient overlap between neighboring PRFs, i.e. if the sensor is oversampled. In our opinion these results warrant further investigation regarding both the theoretical and practical possibilities and limitations of the proposed method.

Appendix A Mathematical derivation of the transformation matrix equations

In this section we provide a derivation of Eqs. (10) and (13) in Sec. 2.2.

In the following equations, $\mathbb {K}$ can be either $\mathbb {R}$ or $\mathbb {C}$. Although the physical quantities discussed in this paper are exclusively real valued, we assume that they can be analytically continued to the complex domain. This is e.g. advantageous in the context of Fourier analysis, as demonstrated in Appendix B. Then the scalar product $\langle \cdot , \, \cdot \rangle$ introduced in Sec. 2.1 can be formally defined as

$$\langle f,g \rangle := \int_Q f({\boldsymbol {q} })\,g^{*}({\boldsymbol {q} })\,d^{3}q$$
for two square-integrable functions $f,g : Q \rightarrow \mathbb {K}$.

The PRFs are defined as $\left \{ f^{A}_k \big \vert f^{A}_k : Q \rightarrow \mathbb {K}, 1 \leq k \leq M \right \}$ and $\left \{ f^{B}_k \big \vert f^{B}_k : Q \rightarrow \mathbb {K}, 1 \leq k \leq M^{\prime } \right \}$. We proceed by introducing an orthonormal set of basis functions over $Q$ (e.g. Fourier basis functions, Hermite functions, …) $\left \{\psi _i\vert \psi _i:Q \rightarrow \mathbb {K},\; i\in \mathbb {N}\right \}$,

$$\left\langle \psi_i, \, \psi_j \right\rangle = \delta_{ij} \quad \forall i,j \in \mathbb{N}.$$
Now we can decompose any square-integrable function $f$ into these basis functions, i.e.
$$\begin{aligned} f({\boldsymbol {q} }) \approx \sum_{i=1}^{N} f_i \, \psi_i({\boldsymbol {q} }) \end{aligned}$$
$$\begin{aligned}f_i = \langle f, \, \psi_i \rangle. \end{aligned}$$
The approximation sign indicates that terms of orders greater than $N$ have been omitted to allow for computational evaluation of the sum. In order to proceed in matrix notation, we construct two matrices $\boldsymbol A\in \mathbb {K}^{M\times N}$ and $\boldsymbol B\in \mathbb {K}^{M^{\prime }\times N}$ from the expansion coefficients of the PRFs for each sensor by
$$\begin{aligned}A_{kl} := \left\langle f^{A}_k, \, \psi_l \right\rangle \end{aligned}$$
$$\begin{aligned}B_{kl} := \left\langle f^{B}_k, \, \psi_l \right\rangle. \end{aligned}$$
According to Eq. (29), the spectral decomposition of the PRFs can be written as
$$f_k^{A}({\boldsymbol {q} }) \approx \sum_{l=1}^{N} A_{kl} \, \psi_l({\boldsymbol {q} }),$$
and we can express Eq. (2) for the signal measured by sensor $A$ in terms of the basis functions by
$$s^{A}_k \approx \sum_{l=1}^{N} A_{kl} \, \left\langle \psi_l, \, L_s \right\rangle.$$
In complete analogy we obtain the signal measured by sensor $B$ by
$$s^{B}_k \approx \sum_{l=1}^{N} B_{kl}\,\left\langle \psi_l,\,L_s \right\rangle.$$
Using the vectors $\boldsymbol { {s} }^{A}\in \mathbb {K}^{M}$, $\boldsymbol { {s} }^{B}\in \mathbb {K}^{M^{\prime }}$ introduced in Sec. 2.2 together with a vector $\boldsymbol { {l} }_s\in \mathbb {K}^{N}$ containing the expansion coefficients of the at-aperture radiance
$${\boldsymbol {l} }_s := \big( \langle \psi_1, \, L_s \rangle, \ldots, \langle \psi_N, \, L_s \rangle \big)^{T},$$
we can express the measured signals in matrix form, yielding
$$\begin{aligned} {\boldsymbol {s} }^{A} = \boldsymbol A \cdot {\boldsymbol {l} }_s \end{aligned}$$
$$\begin{aligned} {\boldsymbol {s} }^{B} = \boldsymbol B\cdot {\boldsymbol {l} }_s. \end{aligned}$$
Note that we develop the incident radiance up to the same order $N$ as the PRFs. This might seem strange at first glance, considering that the at-sensor radiance varies with higher frequency than the relatively smooth PRFs. A closer look at Eqs. (37) and (38) reveals that orders greater than $N$ would vanish in the matrix products for $\boldsymbol { {s} }^{A}$ and $\boldsymbol { {s} }^{B}$, if we developed $L_s$ up to higher orders. Thus the choice of a single upper order is justified.

To derive an expression for the desired transformation matrix $\boldsymbol K\in \mathbb {K}^{M^{\prime } \times M}$, which satisfies Eq. (7), we proceed with a formal inversion of Eq. (37) using the Moore-Penrose inverse $\boldsymbol A^{+}$. Inserting the result into Eq. (38) yields

$${\boldsymbol {s} }^{B} = \boldsymbol B \boldsymbol A^{+} \cdot {\boldsymbol {s} }^{A}.$$
Apparently this result in combination with Eq. (7) implies
$$\boldsymbol K = \boldsymbol B \boldsymbol A^{+}.$$
Note that the inversion of $\boldsymbol A$ is most likely a severely ill-posed problem which has to be stabilized by the multiplication with $\boldsymbol B$.
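A toy numerical check of this construction (our illustration with random coefficient matrices, not the paper's computation) confirms that $\boldsymbol K = \boldsymbol B \boldsymbol A^{+}$ satisfies the cross-correlation relation Eq. (10), which is derived below:

```python
import numpy as np

rng = np.random.default_rng(4)
M, M_prime, N = 5, 3, 12
A = rng.normal(size=(M, N))            # sensor A expansion coefficients
B = rng.normal(size=(M_prime, N))      # sensor B expansion coefficients

K = B @ np.linalg.pinv(A)              # Eq. (40), Moore-Penrose inverse
C_AA = A @ A.T                         # Eq. (45)
C_BA = B @ A.T                         # Eq. (46)
assert np.allclose(K @ C_AA, C_BA)     # Eq. (49), i.e. Eq. (10)
```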

At this point it is helpful to take a closer look at the cross-correlation matrices $\boldsymbol C^{AA} \in \mathbb {K}^{M\times M}$ and $\boldsymbol C^{BA} \in \mathbb {K}^{M^{\prime } \times M}$, which we defined by Eqs. (8) and (9) in terms of the scalar products of the PRFs of sensors $A$ and $B$. With the help of the row vectors

$$\begin{aligned}{\boldsymbol {a} }_k := \left( A_{k1}, \ldots, A_{kN} \right)^{T}, \quad 1\leq k \leq M \end{aligned}$$
$$\begin{aligned}{\boldsymbol {b} }_k := \left( B_{k1}, \ldots, B_{kN} \right)^{T}, \quad 1\leq k \leq M^{\prime} \end{aligned}$$
we observe that the scalar products can be expressed in terms of these row vectors, if the spectral decomposition of the PRFs, Eq. (33), is inserted into the scalar products:
$$\begin{aligned}C^{AA}_{ij} = \left\langle f^{A}_i, \, f^{A}_j \right\rangle \approx {\boldsymbol {a} }_i \cdot {\boldsymbol {a} }_j^{\dagger} \end{aligned}$$
$$\begin{aligned}C^{BA}_{ij} = \left\langle f^{B}_i, \, f^{A}_j \right\rangle \approx {\boldsymbol {b} }_i \cdot {\boldsymbol {a} }_j^{\dagger}. \end{aligned}$$
In matrix form these equations become
$$\begin{aligned}\boldsymbol C^{AA} \approx \boldsymbol A \boldsymbol A^{\dagger} \end{aligned}$$
$$\begin{aligned}\boldsymbol C^{BA} \approx \boldsymbol B \boldsymbol A^{\dagger}, \end{aligned}$$
where $\boldsymbol A^{\dagger }$ symbolizes the conjugate transpose of $\boldsymbol A$. Since scalar products are invariant under base transformations, the cross correlation matrices do not depend on the set of basis functions $\{\psi _k\}$. The last two expressions for the cross-correlation matrices in combination with the defining properties of the Moore-Penrose inverse
$$\begin{aligned}(\boldsymbol A^{+} \boldsymbol A)^{\dagger} = \boldsymbol A^{+} \boldsymbol A \end{aligned}$$
$$\begin{aligned}\boldsymbol A \boldsymbol A^{+} \boldsymbol A = \boldsymbol A \end{aligned}$$
can be inserted into Eq. (40) to obtain
$$\boldsymbol K\,\boldsymbol C^{AA} = \boldsymbol B \boldsymbol A^{+} \boldsymbol A \boldsymbol A^{\dagger} = \boldsymbol B (\boldsymbol A \boldsymbol A^{+} \boldsymbol A)^{\dagger} = \boldsymbol B \boldsymbol A^{\dagger} = \boldsymbol C^{BA}.$$
This is exactly the desired result, Eq. (10), which we obtained from a more intuitive argument in Sec. 2.2. To obtain the Tikhonov solution, Eq. (13), we transpose Eq. (49), which yields
$$\boldsymbol C^{AA} \boldsymbol K^{\dagger} = \boldsymbol C^{AB},$$
solve the result using the matrix version of the Tikhonov equation by
$$\hat{\boldsymbol K}^{\dagger} = \left( \boldsymbol C^{AA} \boldsymbol C^{AA} + \gamma^{2}\, {\boldsymbol {\Gamma} }^{\dagger} {\boldsymbol {\Gamma} } \right)^{-1} \boldsymbol C^{AA} \, \boldsymbol C^{AB},$$
and finally transpose again to obtain Eq. (13). Here we used the obvious fact that $\boldsymbol C^{AA}$ is self-adjoint and introduced the conjugate transpose $\boldsymbol C^{AB}$ of $\boldsymbol C^{BA}$, which is defined in analogy to Eq. (9) by swapping $A$ and $B$ everywhere.

Appendix B Relation to the Wiener filter

In this section we will show that the classical Wiener deconvolution filter is in fact a special case of the theory presented in Sec. 2.2. Here we assume PRFs of the form

$$f_k({\boldsymbol {q} }) = \varphi(\lambda - \lambda_k) \, \delta(\theta - \theta_0) \, \delta(\phi - \phi_0), \quad 1 \leq k \leq M$$
to simplify the calculations. In other words, we limit ourselves to a sensor measuring $M$ spectral bands for a single geometric pixel with a very narrow field of view. The spectral PRF shall be the same for all bands and the field of view is assumed to be sufficiently small to be approximated by delta functions. Additionally we assume a regular wavelength grid with nodes $\left \{ \lambda _k \lvert \lambda _k = \lambda _0 + k \Delta \lambda , k \in \mathbb {N} \right \}$ and we require $\varphi$ to be band limited. This implies that the Fourier transform $\tilde \varphi$
$$\tilde \varphi(\nu) := \int_{\mathbb{R}} \varphi(\lambda) e^{-i2\pi\nu\lambda} \,d \lambda$$
is zero outside the interval $\left [ -\nu _{\textrm {max}}, \nu _{\textrm {max}} \right ]$. If the sampling distance of the wavelength grid satisfies the Nyquist criterion $\Delta \lambda \leq (2\nu _{\textrm {max}})^{-1}$, we can choose $\nu _s := (2 \Delta \lambda )^{-1} \geq \nu _{\textrm {max}}$ and $\Delta \nu := 2 \nu _s M^{-1}$ such that $\Delta \nu \Delta \lambda = M^{-1} = (2s)^{-1}$. Then the Fourier transform can be replaced by the discrete Fourier series on the intervals $\tilde S:=\left [-\nu _s,\nu _s\right ]$ and $S:=[\lambda _{-s}, \lambda _s]$. The elements of the unitary Fourier matrix $ {\boldsymbol {\mathcal F} }\in \mathbb {C}^{M \times M}$, defined as
$$\mathcal{F}_{jk} = \frac{1}{\sqrt{M}} \exp\left( i2 \pi \frac{jk}{M} \right) = \frac{1}{\sqrt{M}} e^{i2 \pi \nu_j \, (\lambda_k - \lambda_0)},$$
with $\nu _j:=j\Delta \nu ,$ form an orthonormal basis for the set of band limited square-integrable functions with scalar product
$$\left\langle f,\, g\right\rangle_S := \int_S f(\lambda) \, g^{*}(\lambda) \,d \lambda.$$
This inspires the following choice for our basis functions, which is
$$\psi_l(\lambda) = \frac{1}{\sqrt{M}} e^{i 2 \pi \nu_l \, (\lambda - \lambda_0)}.$$
The Plancherel theorem and the shift theorem of the Fourier transform can be applied to evaluate scalar products of the form
$$\big\langle f_k,\,\psi_l \big\rangle = \left\langle \tilde f_k, \, \tilde \psi_l \right\rangle = \frac{1}{\sqrt{M}} e^{i 2 \pi \nu_l \lambda_0} \left\langle \tilde \varphi e^{-i 2 \pi \nu \lambda_k}, \, \delta(\nu - \nu_l) \right\rangle = \tilde \varphi(\nu_l) \, \mathcal{F}^{*}_{kl}.$$
Using diagonal matrices $\tilde {\boldsymbol A}, \tilde {\boldsymbol B} \in \mathbb {C}^{M\times M}$, defined as $\tilde {\boldsymbol A} := \hbox {diag} \left ( \tilde \varphi ^{A}(\nu _1), \ldots , \tilde \varphi ^{A}(\nu _M) \right )$ and $\tilde {\boldsymbol B} := \hbox {diag} \left ( \tilde \varphi ^{B}(\nu _1), \ldots , \tilde \varphi ^{B}(\nu _M) \right )$ we can express the last equation in matrix form by
$$\begin{aligned}\boldsymbol A = {\boldsymbol {\mathcal F^{\dagger}} } \tilde{\boldsymbol A} \end{aligned}$$
$$\begin{aligned}\boldsymbol B = {\boldsymbol {\mathcal F^{\dagger}} } \tilde{\boldsymbol B} \end{aligned}$$
$$\begin{aligned}\boldsymbol C^{AA} = {\boldsymbol {\mathcal F} }^{\dagger} \tilde{\boldsymbol A} \tilde{\boldsymbol A}^{\dagger} {\boldsymbol {\mathcal F} } \end{aligned}$$
$$\begin{aligned}\boldsymbol C^{BA} = {\boldsymbol {\mathcal F} }^{\dagger} \tilde{\boldsymbol B} \tilde{\boldsymbol A}^{\dagger} {\boldsymbol {\mathcal F} }. \end{aligned}$$
Insertion of these expressions into the Tikhonov solution (13) along with the definitions $\tilde {\boldsymbol C}^{AA} := \tilde {\boldsymbol A} \tilde {\boldsymbol A}^{\dagger }$ and $\tilde {\boldsymbol C}^{BA} := \tilde {\boldsymbol B} \tilde {\boldsymbol A}^{\dagger }$ yields
$$\hat{\boldsymbol K} \left( {\boldsymbol {\mathcal F} }^{\dagger} \tilde{\boldsymbol C}^{AA} \tilde{\boldsymbol C}^{AA} {\boldsymbol {\mathcal F} } + \gamma^{2}\, {\boldsymbol {\Gamma} }^{\dagger} {\boldsymbol {\Gamma} } \right) = {\boldsymbol {\mathcal F} }^{\dagger} \tilde{\boldsymbol C}^{BA} \tilde{\boldsymbol C}^{AA} {\boldsymbol {\mathcal F} }.$$
Choosing $\boldsymbol \Gamma = \tilde {\boldsymbol A}^{\dagger } {\boldsymbol {\mathcal F} }$ followed by multiplication with $ {\boldsymbol {\mathcal F} }^{\dagger }$ from the right results in
$$\hat{\boldsymbol K} {\boldsymbol {\mathcal F} }^{\dagger} \left( \tilde{\boldsymbol C}^{AA} + \gamma^{2} \, {\boldsymbol {1} } \right) \tilde{\boldsymbol C}^{AA} = {\boldsymbol {\mathcal F} }^{\dagger} \tilde{\boldsymbol C}^{BA} \tilde{\boldsymbol C}^{AA}$$
and thus
$$\hat{\boldsymbol K}\, {\boldsymbol {\mathcal F} }^{\dagger} = {\boldsymbol {\mathcal F} }^{\dagger} \, {\boldsymbol {\tilde B} } \boldsymbol W,$$
where we introduced the Wiener deconvolution matrix $\boldsymbol W \in \mathbb {C}^{M\times M}$ with components
$$W_{ij} = \frac{ \tilde \varphi^{A}(\nu_j)^{*} }{ \left\| \tilde \varphi^{A}(\nu_j) \right\|^{2} + \gamma^{2} } \, \delta_{ij}.$$
In the classical Wiener deconvolution the Tikhonov parameter $\gamma ^{2}$ is chosen as the inverse of the signal-to-noise ratio. This result is not surprising in the context of Fourier theory as it states
$${\boldsymbol {s} }^{B} = \hat{\boldsymbol K} \cdot {\boldsymbol {s} }^{A} = {\boldsymbol {\mathcal F} }^{\dagger} {\boldsymbol {\tilde B} } \boldsymbol W {\boldsymbol {\mathcal F} } \cdot {\boldsymbol {s} }^{A}$$
when inserted into Eq. (7). In other words, the least squares solution $\hat {\boldsymbol K}$ acts like a multiplication in the frequency domain. First the Fourier transform of the signal $\boldsymbol { {s} }^{A}$ is multiplied by the Wiener deconvolution matrix $\boldsymbol W$, which is a regularized inverse of the Fourier transformed PRF matrix $\tilde {\boldsymbol A}$. The result is then multiplied with the Fourier transform of the desired PRF matrix $\tilde {\boldsymbol B}$ before it is transformed back to the wavelength domain. In essence, this is a direct consequence of the convolution theorem of the Fourier transform, which states that convolution in the wavelength domain is equivalent to multiplication in the frequency domain.
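This special case is easy to reproduce numerically (our toy sketch; the PRF shapes, the test signal, and $\gamma^{2}$ are illustrative assumptions):

```python
import numpy as np

# Wiener deconvolution as in Eq. (66): deconvolve a sensor-A spectrum and
# re-convolve it with the target PRF, entirely in the frequency domain.
M = 128
lam = np.arange(M)
phi_A = np.exp(-0.5 * ((lam - M // 2) / 3.0) ** 2); phi_A /= phi_A.sum()
phi_B = np.exp(-0.5 * ((lam - M // 2) / 5.0) ** 2); phi_B /= phi_B.sum()

scene = np.zeros(M); scene[40] = 1.0; scene[80] = 0.5   # two emission lines
A_t = np.fft.fft(np.fft.ifftshift(phi_A))               # diagonal of A-tilde
B_t = np.fft.fft(np.fft.ifftshift(phi_B))               # diagonal of B-tilde
s_A = np.fft.ifft(A_t * np.fft.fft(scene)).real         # sensor A measurement

gamma2 = 1e-4
W = np.conj(A_t) / (np.abs(A_t) ** 2 + gamma2)          # Eq. (65)
s_B = np.fft.ifft(B_t * W * np.fft.fft(s_A)).real       # Eq. (66)
```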

This demonstrates that the solution proposed in Sec. 2.2 is a generalization of the Wiener deconvolution to cases where the PRFs vary with channel and/or pixel.

Acknowledgment

We thank Peter Haschberger for proof-reading the manuscript and Clemens Rammeloo for valuable suggestions regarding the structure of the document. We are also indebted to Robert Schaback and Adrian Doicu, who gave us valuable insights into the underlying mathematical concepts and encouraged us to publish this material.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. R. O. Green, “Spectral calibration requirement for earth-looking imaging spectrometers in the solar-reflected spectrum,” Appl. Opt. 37(4), 683 (1998). [CrossRef]  

2. D. Schläpfer, J. Nieke, and K. I. Itten, “Spatial PSF nonuniformity effects in airborne pushbroom imaging spectrometry data,” IEEE Trans. Geosci. Electron. 45(2), 458–468 (2007). [CrossRef]  

3. J. Nieke, D. Schläpfer, F. Dell’Endice, J. Brazile, and K. I. Itten, “Uniformity of imaging spectrometry data products,” IEEE Trans. Geosci. Electron. 46(10), 3326–3336 (2008). [CrossRef]  

4. J. Meola, “Examining the impact of spectral uncertainty on hyperspectral data exploitation,” in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV, vol. 10644 of Proc. SPIE (2018), pp. 164–175.

5. P. Z. Mouroulis, “Low-distortion imaging spectrometer designs utilizing convex gratings,” in International Optical Design Conference 1998, vol. 3482 of Proc. SPIE (1998), pp. 594–601.

6. P. Z. Mouroulis, “Spectral and spatial uniformity in pushbroom imaging spectrometers,” in Imaging Spectrometry V, vol. 3753 of Proc. SPIE (1999), pp. 133–141.

7. P. Mouroulis, R. O. Green, and T. G. Chrien, “Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information,” Appl. Opt. 39(13), 2210 (2000). [CrossRef]  

8. K. Lenhard, A. Baumgartner, and T. Schwarzmaier, “Independent laboratory characterization of neo HySpex imaging spectrometers VNIR-1600 and SWIR-320m-e,” IEEE Trans. Geosci. Electron. 53(4), 1828–1841 (2015). [CrossRef]  

9. F. Ewald, T. Kölling, A. Baumgartner, T. Zinner, and B. Mayer, “Design and characterization of specMACS, a multipurpose hyperspectral cloud and sky imager,” Atmos. Meas. Tech. 9(5), 2015–2042 (2016). [CrossRef]  

10. X. Ceamanos and S. Douté, “Spectral smile correction of CRISM/MRO hyperspectral images,” IEEE Trans. Geosci. Electron. 48, 3951–3959 (2010). [CrossRef]  

11. H. Zhao, G. Jia, and N. Li, “Transformation from hyperspectral radiance data to data of other sensors based on spectral superresolution,” IEEE Trans. Geosci. Electron. 48, 3903–3912 (2010). [CrossRef]  

12. G. Jia, A. Hueni, M. E. Schaepman, and H. Zhao, “Detection and correction of spectral shift effects for the airborne prism experiment,” IEEE Trans. Geosci. Electron. 55(11), 6666–6679 (2017). [CrossRef]  

13. S. Nevas, G. Wübbeler, A. Sperling, C. Elster, and A. Teuber, “Simultaneous correction of bandpass and stray-light effects in array spectroradiometer data,” Metrologia 49(2), S43–S47 (2012). [CrossRef]  

14. D. M. Bramich, “A new algorithm for difference image analysis,” Mon. Not. R. Astron. Soc.: Lett. 386(1), L77–L81 (2008). [CrossRef]  

15. K. D. Gordon, C. W. Engelbracht, G. H. Rieke, K. A. Misselt, J. T. Smith, and R. C. Kennicutt Jr., “The behavior of the aromatic features in M101 H II regions: Evidence for dust processing,” Astrophys. J. 682(1), 336–354 (2008). [CrossRef]  

16. T. Darnell, E. Bertin, M. Gower, C. Ngeow, S. Desai, J. J. Mohr, D. Adams, G. E. Daues, C. Beldica, M. Freemon, H. Lin, E. H. Neilsen, D. Tucker, L. A. N. da Costa, L. Martelli, R. L. C. Ogando, M. Jarvis, and E. Sheldon, “The dark energy survey data management system: The coaddition pipeline and PSF homogenization,” in Astronomical Data Analysis Software and Systems XVIII, vol. 411 of Astronomical Society of the Pacific Conference Series (2009), pp. 18–21.

17. E. Bertin, “Automated morphometry with SExtractor and PSFEx,” in Astronomical Data Analysis Software and Systems XX, vol. 442 of Astronomical Society of the Pacific Conference Series (2011), pp. 435–438.

18. S. Desai, R. Armstrong, J. J. Mohr, D. R. Semler, J. Liu, E. Bertin, S. S. Allam, W. A. Barkhouse, G. Bazin, E. J. Buckley-Geer, M. C. Cooper, S. M. Hansen, F. W. High, H. Lin, Y.-T. Lin, C.-C. Ngeow, A. Rest, J. Song, D. Tucker, and A. Zenteno, “The blanco cosmology survey: Data acquisition, processing, calibration, quality diagnostics, and data release,” Astrophys. J. 757(1), 83 (2012). [CrossRef]  

19. A. Boucaud, M. Bocchio, A. Abergel, F. Orieux, H. Dole, and M. A. Hadj-Youcef, “Convolution kernels for multi-wavelength imaging,” Astron. Astrophys. 596, A63 (2016). [CrossRef]  

20. N. Wiener, Extrapolation, interpolation, and smoothing of stationary time series : with engineering applications (Technology Press of the Massachusetts Institute of Technology, 1964).

21. A. N. Tikhonov, “Solution of incorrectly formulated problems and the regularization method,” Soviet Math. Dokl. 4, 1035–1038 (1963).

22. A. N. Tikhonov, “Regularization of incorrectly posed problems,” Soviet Math. Dokl. 4, 1624–1627 (1963).

23. A. N. Tikhonov and V. Y. Arsenin, Solutions of ill-posed problems (V. H. Winston & Sons, 1977).

24. A. Doicu, T. Trautmann, and F. Schreier, Numerical regularization for atmospheric inverse problems (Springer, 2010).
