## Abstract

The higher order orthogonal iteration (HOOI) is used for single-frame and multi-frame space-variant blind deconvolution (BD) performed by factorization of the tensor of a blurred multi-spectral image (MSI). This is achieved by conversion of BD into blind source separation (BSS), whereupon the sources represent the original image and its spatial derivatives. The HOOI-based factorization enables an essentially unique solution of the related BSS problem with orthogonality constraints imposed on the factors and the core tensor of the Tucker3 model of the image tensor. In contrast, the matrix factorization-based unique solution of the same BSS problem demands the sources to be statistically independent or sparse, which does not hold here. The consequence of such an approach to BD is that it requires virtually no *a priori* information about the possibly space-variant point spread function (PSF): neither its model nor the size of its support. For the space-variant BD problem, the MSI is divided into blocks, whereupon the PSF is assumed to be space-invariant within each block. The success of the proposed concept is demonstrated on experimentally degraded images: defocused single-frame gray scale and red-green-blue (RGB) images, multi-frame gray scale and RGB images blurred by atmospheric turbulence, and a single-frame RGB image blurred by a grating (photon sieve). Comparable or better performance is demonstrated in relation to the blind Richardson-Lucy algorithm which, however, requires *a priori* information about the parametric model of the blur.

©2010 Optical Society of America

## 1. Introduction

Various artifacts such as defocusing, atmospheric turbulence, relative motion between image and object planes, aberrations, etc., can lead to blurry images and the loss of spatial information. The reconstruction of an original image from its blurred version is referred to as image restoration or image deconvolution [1–4]. In non-blind deconvolution, the blurring kernel or point spread function (PSF) is given [1,2], while in blind deconvolution the PSF is unknown and the restoration of the original image is a more challenging problem [3,4]. To make blind deconvolution computationally tractable, in the great majority of cases the PSF is assumed to be spatially invariant (isoplanatic) [3–13]. The space-variant (anisoplanatic) approach to blind image deconvolution has, for example, been addressed in [14–16]. The great majority of both space-variant and space-invariant blind image deconvolution methods is model-based, i.e. relies on a parametric model of the blur or the original image. Therefore, the availability of *a priori* knowledge about them is assumed [3,7–9,11,13–16]. Presumed *a priori* information also relates to the size of the support of the PSF, even though the PSF itself is assumed to be unknown. This information is not always available. The same comment applies to the assumption that multiple low-resolution frames are available, which is necessary to reconstruct one super-resolution image [7,8,14–16]. To circumvent problems associated with the possible unavailability of the discussed *a priori* information, I have previously proposed space-invariant model-free single-frame blind deconvolution of a gray scale image based on: matrix factorization with sparseness constraints in [5], 3D tensor factorization in [6] and dependent component analysis in [17]. Blind deconvolution of a multi-spectral image has been performed by applying the proposed algorithm to each spectral image separately.
Here, I propose higher order orthogonal iteration (HOOI)-based tensor factorization [18,19] for model-free space-variant blind deconvolution of a single-frame (static) and/or multi-frame (dynamic) multi-spectral image. The most general problem, space-variant multi-frame blind deconvolution of a multi-spectral image, is solved through factorization of a 6D tensor. Space-variant single-frame blind deconvolution of a multi-spectral image and space-invariant multi-frame blind deconvolution of a multi-spectral image are solved through factorization of 5D tensors. Space-invariant single-frame blind deconvolution of a gray scale image is solved through factorization of a 3D tensor, while space-variant single-frame blind deconvolution of a gray scale image, space-invariant single-frame blind deconvolution of a multi-spectral image and space-invariant multi-frame blind deconvolution of a gray scale image are solved through factorizations of 4D tensors.

The proposed approach relies on conversion of blind deconvolution into blind source separation [20], whereupon the sources represent an original image and its spatial derivatives. The conversion is achieved through the implicit use of the Taylor expansion of the shifted original image around the origin in the image-forming convolution equation. As in my previous contributions [5,6,17], I rely on a bank of 2D Gabor filters [20,21] to realize the multichannel version (required for blind source separation) of a single-frame degraded image. The Gabor filter bank is, however, not the only possible way to convert a single-channel image into the multi-channel form required by blind source separation. In [22], a multi-resolution filter bank based on the Morlet wavelet has been used for the same purpose. A completely opposite approach to single-channel blind source separation is considered in [23,24], where a single-channel signal is partitioned into pieces that form a multi-channel version suitable for blind source separation. This appears to be a valuable alternative to explore in future work. Due to the use of the implicit Taylor expansion, the unknown source image is assumed to be smooth up to some order, i.e. an *n*^{th} order expansion requires the source image to be *n*-times differentiable at the origin. This smoothness requirement is expected to limit the performance of the proposed approach when the degradation process is strong: additional terms in the implicit Taylor expansion are then necessary, requiring a higher level of smoothness of the source image, and this might not be fulfilled. Likewise, images with sharp boundaries are also expected to be deconvolved less effectively by the proposed approach. Nevertheless, an important contribution of the proposed approach to blind image deconvolution remains: no information about the model or the size of the support of the PSF is required. For the reasons discussed, the proposed method should be considered complementary to, rather than competing with, physically constrained iterative blind image deconvolution algorithms, for situations when physically based constraints are difficult or impossible to define. For the same reasons, a comparison of the proposed method against application-specific blind deconvolution methods (for example, methods developed for deconvolution of turbulence-degraded images in astronomy) is not fair, because these application-specific methods will almost surely perform better if all required *a priori* information is provided to them. Therefore, it is my intention in the experimental section of this paper to demonstrate the success of the proposed approach by deconvolving experimental images degraded by various types of blurs. In the last example, the blur caused by a grating does not even have a physical analogy in a practical situation; thus, it is impossible to find a proper model for it. The blind Richardson-Lucy (R-L) algorithm [25,26] is used to represent model-based blind deconvolution methods in the comparative performance analysis against the proposed model-free algorithm. Although a classic method, blind R-L is a paradigm and serves the purpose of demonstrating that equivalent or even better performance is obtained by the proposed model-free method even after the model and parameters for blind R-L have been optimally “tuned”.

HOOI-based tensor factorization enables an essentially unique solution of the related blind source separation problem (up to standard scaling and permutation indeterminacies [27,28]), with orthogonality constraints imposed on the factors and the core tensor of the Tucker3 model of the multichannel image tensor. In contrast, the matrix factorization-based unique solution of the same blind source separation problem requires the sources to be statistically independent or sparse. In the adopted approach to blind deconvolution, where the sources represent an original image and its spatial derivatives, these constraints are not fulfilled. Hence, this constraints-relaxed solution of model-free blind deconvolution, which evolves due to the use of tensor factorization, is another contribution of this paper and represents a distinct improvement over the previous results [5,17].

The rest of this paper is organized as follows: Section 2 introduces tensor notation, the Tucker3 tensor model and the HOOI decomposition. The multichannel linear mixture model of the blurred image tensor is introduced in Section 3. The proposed approach to various types of blind image deconvolution problems is exemplified in Section 4 on: (*i*) experimentally de-focused single-frame gray scale and red-green-blue (RGB) images, (*ii*) experimental multi-frame gray scale and RGB images degraded by atmospheric turbulence and, (*iii*) an experimental single-frame RGB image degraded by a grating. Conclusions are given in Section 5.

## 2. Basics of tensor notation, Tucker3 model and HOOI decomposition

A tensor is a multi-way array of data: $\underset{\xaf}{G}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}^{{I}_{1}\times {I}_{2}\mathrm{...}\times {I}_{N}}$, where *N* denotes the number of modes, i.e. the order, of the tensor $\underset{\xaf}{G}$, and ${\left\{{I}_{n}\in {\mathbb{Z}}_{+}\right\}}_{n=1}^{N}$ denotes the dimension of each mode, ${\mathbb{Z}}_{+}$ being the set of positive integers. A particular element of the tensor $\underset{\xaf}{G}$ is written ${g}_{{i}_{1}{i}_{2}\mathrm{...}{i}_{N}}$, where ${i}_{1}=1,\ldots,{I}_{1}$, ${i}_{2}=1,\ldots,{I}_{2}$, ..., ${i}_{N}=1,\ldots,{I}_{N}$. This is the standard notation adopted in multi-way analysis [29]. The Tucker3 model represents the tensor $\underset{\xaf}{G}$ as an *n*-mode product between the core tensor $\underset{\xaf}{R}\in {\mathbb{R}}^{{J}_{1}\times {J}_{2}\mathrm{...}\times {J}_{N}}$ and the array factors ${\left\{{A}^{(n)}\in {\mathbb{R}}^{{I}_{n}\times {J}_{n}}\right\}}_{n=1}^{N}$ [27,30,31]:

$$\underset{\xaf}{G}=\underset{\xaf}{R}\,{\times}_{1}{A}^{(1)}\,{\times}_{2}{A}^{(2)}\cdots{\times}_{N}{A}^{(N)},\qquad(1)$$

or, element-wise,

$${g}_{{i}_{1}{i}_{2}\ldots {i}_{N}}=\sum_{{j}_{1}=1}^{{J}_{1}}\cdots\sum_{{j}_{N}=1}^{{J}_{N}}{r}_{{j}_{1}{j}_{2}\ldots {j}_{N}}\,{a}_{{i}_{1}{j}_{1}}^{(1)}{a}_{{i}_{2}{j}_{2}}^{(2)}\cdots{a}_{{i}_{N}{j}_{N}}^{(N)}.\qquad(2)$$

Throughout this paper it is assumed that ${J}_{1}={J}_{2}=\cdots={J}_{N}=J$, where $J\le \text{min}\left\{{I}_{1},{I}_{2},\mathrm{...},{I}_{N}\right\}$. Since the core tensor allows interaction of a factor with any factor in the other modes [see Eq. (2)], the Tucker3 model is flexible in modeling complex interactions within the data tensor $\underset{\xaf}{G}$. This flexibility, however, prevents the uniqueness of the decomposition [Eq. (1)]. Through the process known as mode-*n* unfolding (see Appendix), the tensor $\underset{\xaf}{G}$ can be mapped to a matrix ${G}_{(n)}$. The mode-*N* unfolded version of the tensor $\underset{\xaf}{G}$ in Eq. (1) is transformed into:

$${G}_{(N)}={A}^{(N)}{R}_{(N)}{\left({A}^{(N-1)}\otimes\cdots\otimes {A}^{(1)}\right)}^{T},\qquad(3)$$

where ${R}_{(N)}$ represents the mode-*N* unfolded core tensor $\underset{\xaf}{R}$ and '⊗' denotes the Kronecker product. It is of interest to relate Eq. (3) to the linear mixture model used commonly in various blind source separation problems [28]:

$${G}_{(n)}=AF,\qquad(4)$$

where ${G}_{(n)}\in {\mathbb{R}}^{{I}_{n}\times {I}_{1}{I}_{2}\mathrm{...}{I}_{n-1}{I}_{n+1}\mathrm{...}{I}_{N}}$ represents the mode-*n* matrix equivalent, $n\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}\left\{1,\mathrm{...},N\right\}$, of the data tensor $\underset{\xaf}{G}$ in Eq. (1). In blind source separation vocabulary, $A\in {\mathbb{R}}^{{I}_{n}\times J}$ is known as the mixing or basis matrix and $F\in {\mathbb{R}}^{J\times {I}_{1}{I}_{2}\mathrm{...}{I}_{n-1}{I}_{n+1}\mathrm{...}{I}_{N}}$ is known as the matrix of *J* sources, i.e. hidden or latent variables. It follows from Eqs. (3) and (4), see also [6] for the related 3D tensor factorization problem, that for $n=N$:

$$A={A}^{(N)},\qquad F={R}_{(N)}{\left({A}^{(N-1)}\otimes\cdots\otimes {A}^{(1)}\right)}^{T},\qquad(5)$$

i.e. the mixing matrix corresponds with the array factor ${A}^{(N)}$, and the mode-*N* unfolded core tensor, multiplied by the Kronecker product of the remaining factors, corresponds with the source matrix **F**. However, Eqs. (1), (3) and (5) suggest that factorization of the data tensor $\underset{\xaf}{G}$ can extract an estimate of the source tensor $\widehat{\underset{\xaf}{F}}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}^{{I}_{1}\times {I}_{2}\mathrm{...}\times {I}_{N-1}\times J}$ directly:

$$\widehat{\underset{\xaf}{F}}=\underset{\xaf}{G}\,{\times}_{N}{\left({A}^{(N)}\right)}^{\dagger},\qquad(6)$$

where '†' denotes the Moore-Penrose pseudo-inverse.
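The correspondence between the Tucker3 model and the matrix mixture model can be checked numerically. The following numpy sketch is my own illustrative code (not part of the original method); the `unfold` helper follows the Kolda-Bader unfolding convention. It verifies the mode-*N* unfolding identity of Eq. (3) and the pseudo-inverse-based source extraction of Eq. (6) on a small random Tucker3 tensor:

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding (Kolda-Bader convention): mode n becomes the rows;
    the remaining modes are flattened with the first one varying fastest."""
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order='F')

rng = np.random.default_rng(0)
I, J = (4, 5, 6), 3
A = [rng.standard_normal((In, J)) for In in I]     # array factors A^(n)
R = rng.standard_normal((J, J, J))                 # core tensor

# Tucker3 model, Eq. (1), for N = 3: G = R x_1 A1 x_2 A2 x_3 A3
G = np.einsum('abc,ia,jb,kc->ijk', R, A[0], A[1], A[2])

# Eq. (3) with n = N = 3: G_(3) = A3 R_(3) (A2 kron A1)^T
lhs = unfold(G, 2)
rhs = A[2] @ unfold(R, 2) @ np.kron(A[1], A[0]).T
print(np.allclose(lhs, rhs))   # True

# Eq. (6): the source tensor F = R x_1 A1 x_2 A2 is recovered from G by a
# mode-3 product with the pseudo-inverse of A3
F = np.einsum('abc,ia,jb->ijc', R, A[0], A[1])
Fhat = np.einsum('ijk,lk->ijl', G, np.linalg.pinv(A[2]))
print(np.allclose(Fhat, F))    # True
```

Note that the Kronecker ordering in Eq. (3) is tied to the unfolding convention; with a C-order (last index fastest) unfolding, the factor order in the Kronecker product would be reversed.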

The HOOI algorithm [18,19] fits the Tucker3 model [Eq. (1)] with orthonormality constraints imposed on the factors ${\left\{{A}^{(n)}\right\}}_{n=1}^{N}$ and all-orthogonality and ordering constraints imposed on the core tensor $\underset{\xaf}{R}$. All-orthogonality means that the matrix slices of $\underset{\xaf}{R}$, obtained by fixing the *n*-th index ${i}_{n}$ to *k* and *l*, are mutually orthogonal (with respect to the standard inner product on matrix spaces) for all possible values of *n*, *k* and *l*, subject to *k* ≠ *l*. For example, if $\underset{\xaf}{R}$ is a third order tensor, then different matrix slices along any mode must be mutually orthogonal. Ordering means that ${\Vert {\underset{\xaf}{R}}_{{i}_{n}=1}\Vert}_{2}\ge {\Vert {\underset{\xaf}{R}}_{{i}_{n}=2}\Vert}_{2}\ge \mathrm{...}\ge {\Vert {\underset{\xaf}{R}}_{{i}_{n}={I}_{n}}\Vert}_{2}\text{\hspace{0.17em}}\forall n$. Owing to these orthogonality constraints, the tensor factorization performed by the HOOI algorithm is essentially unique. The HOOI algorithm minimizes the square of the Euclidean distance between the tensor $\underset{\xaf}{G}$ and its approximation:

$$\underset{{A}^{(1)},\ldots,{A}^{(N)},\underset{\xaf}{R}}{\mathrm{min}}\;{\left\Vert \underset{\xaf}{G}-\underset{\xaf}{R}\,{\times}_{1}{A}^{(1)}\cdots{\times}_{N}{A}^{(N)}\right\Vert}_{2}^{2}.\qquad(7)$$

Due to the orthogonality constraints, the core tensor $\underset{\xaf}{R}$ can also be represented as

$$\underset{\xaf}{R}=\underset{\xaf}{G}\,{\times}_{1}{\left({A}^{(1)}\right)}^{T}{\times}_{2}{\left({A}^{(2)}\right)}^{T}\cdots{\times}_{N}{\left({A}^{(N)}\right)}^{T}.\qquad(8)$$
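As a concrete illustration of the decomposition described above, the following is a minimal numpy sketch of the HOOI iteration: alternating SVD-based updates of the orthonormal factors, initialized by HOSVD, with the core tensor computed at the end as in Eq. (8). It is my own simplified rendition for a cubic core of size *J*, not the implementation of [18,19]:

```python
import numpy as np

def unfold(T, n):
    # mode-n unfolding: mode n becomes rows, first remaining mode varies fastest
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order='F')

def hooi(G, J, n_iter=50):
    """Rank-(J,...,J) orthogonal Tucker3 fit of tensor G via HOOI.
    Returns the core tensor R and orthonormal factors A[n]."""
    N = G.ndim
    # HOSVD initialization: leading J left singular vectors of each unfolding
    A = [np.linalg.svd(unfold(G, n), full_matrices=False)[0][:, :J] for n in range(N)]
    for _ in range(n_iter):
        for n in range(N):
            # project G onto all factors except mode n ...
            Y = G
            for m in range(N):
                if m != n:
                    Y = np.moveaxis(np.tensordot(A[m].T, np.moveaxis(Y, m, 0), axes=1), 0, m)
            # ... and refresh A[n] from the dominant subspace of the result
            A[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :J]
    # core tensor, Eq. (8): R = G x_1 A1^T x_2 A2^T ... x_N AN^T
    R = G
    for n in range(N):
        R = np.moveaxis(np.tensordot(A[n].T, np.moveaxis(R, n, 0), axes=1), 0, n)
    return R, A

# usage: a tensor of exact multilinear rank (2,2,2) is recovered exactly
rng = np.random.default_rng(1)
J = 2
A_true = [np.linalg.qr(rng.standard_normal((I, J)))[0] for I in (6, 7, 8)]
R_true = rng.standard_normal((J, J, J))
G = np.einsum('abc,ia,jb,kc->ijk', R_true, *A_true)
R, A = hooi(G, J)
G_hat = np.einsum('abc,ia,jb,kc->ijk', R, *A)
print(np.allclose(G, G_hat))   # True
```

For tensors of exact multilinear rank the HOSVD initialization already recovers the factor subspaces; for noisy data the alternating updates refine them iteratively.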

## 3. Multi-dimensional linear mixture model of degraded image

To make the presentation easier to follow, a Taylor expansion-based 3D linear mixture model related to space-invariant blind deconvolution of a blurred single-frame gray scale image will be introduced briefly. This model has been used previously in [20,5,6,17]. Since the implicit Taylor expansion is always associated with the two spatial modes of the image, extension of the linear mixture model to more complex scenarios is straightforward: additional indices are assigned to represent the new modalities introduced relative to the basic scenario, space-invariant blind deconvolution of a blurred single-frame gray scale image. The more complex scenarios include: a single-frame gray scale image blurred by a space-variant blur, a single-frame multi-spectral image blurred by space-invariant and space-variant blurs, a multi-frame gray scale image blurred by space-variant and space-invariant blurs, and a multi-frame multi-spectral image blurred by space-variant and space-invariant blurs. Table 1 provides an overview of all considered blind deconvolution scenarios with a brief description of the physical meaning of the tensor modes.

The linear mixture model of a single-frame gray scale image degraded by a space-invariant blur is based on the image forming equation: a convolution of the space-invariant PSF **H** with the original source image **F**:

$$G({i}_{1},{i}_{2})=\sum_{s=-M}^{M}\sum_{t=-M}^{M}H(s,t)\,F({i}_{1}-s,{i}_{2}-t),\qquad(9)$$

where *M* denotes half of the PSF support size. In Eq. (9) the presence of additive noise is ignored in order to focus on the essential issue: model-free blind image deconvolution. It is, however, understood that some form of image de-noising is performed, if necessary, prior to executing deconvolution. It is also assumed that the unknown original image **F** is *n*^{th} order smooth, implying that it is *n*-times differentiable at the origin (*i*_{1}, *i*_{2}), whereupon *n* represents the order of the spatial derivatives (terms) in the Taylor expansion of $F({i}_{1}-s,{i}_{2}-t)$ around the origin (*i*_{1}, *i*_{2}). An implicit Taylor expansion of the original image **F**(*i*_{1}*-s*, *i*_{2}*-t*) around (*i*_{1}, *i*_{2}) is used to convert blind image deconvolution into blind source separation [20,5,6,17], yielding:

$$F({i}_{1}-s,{i}_{2}-t)\approx F({i}_{1},{i}_{2})-s\frac{\partial F}{\partial {i}_{1}}-t\frac{\partial F}{\partial {i}_{2}}+\frac{{s}^{2}}{2}\frac{{\partial }^{2}F}{\partial {i}_{1}^{2}}+\frac{{t}^{2}}{2}\frac{{\partial }^{2}F}{\partial {i}_{2}^{2}}+st\frac{{\partial }^{2}F}{\partial {i}_{1}\partial {i}_{2}},\qquad(10)$$

where the partial derivatives denote first and second order spatial derivatives of **F** in the *i*_{1} and *i*_{2} directions respectively, evaluated at (*i*_{1}, *i*_{2}). By using Eq. (10), Eq. (9) can be re-written as:

$$G({i}_{1},{i}_{2})\approx {a}_{1}F+{a}_{2}\frac{\partial F}{\partial {i}_{1}}+{a}_{3}\frac{\partial F}{\partial {i}_{2}}+{a}_{4}\frac{{\partial }^{2}F}{\partial {i}_{1}^{2}}+{a}_{5}\frac{{\partial }^{2}F}{\partial {i}_{2}^{2}}+{a}_{6}\frac{{\partial }^{2}F}{\partial {i}_{1}\partial {i}_{2}}.\qquad(11)$$
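The claim of Eq. (11), that the blurred image is approximately a linear combination of the original image and its spatial derivatives with PSF-dependent coefficients, can be sanity-checked numerically. The sketch below is illustrative code of my own: the coefficient formulas follow from inserting the second-order expansion of Eq. (10) into Eq. (9), and the derivatives are approximated with finite differences. A smooth synthetic image is blurred by a small PSF and compared against the derivative mixture:

```python
import numpy as np

# smooth synthetic image: a 2D Gaussian bump sampled on a 200 x 200 grid
x = np.linspace(-3.0, 3.0, 200)
X, Y = np.meshgrid(x, x, indexing='ij')
F = np.exp(-(X**2 + Y**2) / 2.0)

# small normalized PSF with half-support M = 1 (slightly asymmetric on purpose)
H = np.array([[1., 2., 1.],
              [2., 5., 2.],
              [1., 1., 1.]])
H /= H.sum()
M = 1
s = np.arange(-M, M + 1)
S, T = np.meshgrid(s, s, indexing='ij')

# direct convolution, Eq. (9): G(i1,i2) = sum_{s,t} H(s,t) F(i1-s, i2-t)
G = np.zeros_like(F)
for ds in s:
    for dt in s:
        G += H[ds + M, dt + M] * np.roll(np.roll(F, ds, axis=0), dt, axis=1)

# mixture coefficients a1..a6 absorbed from the PSF (second order expansion)
a1 = H.sum()
a2, a3 = -(S * H).sum(), -(T * H).sum()
a4, a5 = (S**2 / 2 * H).sum(), (T**2 / 2 * H).sum()
a6 = (S * T * H).sum()

# spatial derivatives per pixel index (unit spacing), via finite differences
F1, F2 = np.gradient(F)
F11, F12 = np.gradient(F1)
F22 = np.gradient(F2)[1]

G_mix = a1 * F + a2 * F1 + a3 * F2 + a4 * F11 + a5 * F22 + a6 * F12

# residual on the interior (avoids wrap-around and one-sided differences)
err = np.max(np.abs((G - G_mix)[4:-4, 4:-4]))
print(err)   # small third-order remainder
```

For this smooth image the residual is tiny; for images with sharp boundaries or a large PSF support, the truncated higher-order terms grow, which is exactly the limitation discussed in the Introduction.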

The unknown weighting coefficients in Eq. (11) are straightforward to derive and are given in [5,6,17]. It is of great importance to note that the blurred image [Eq. (11)] represents a linear combination of the original image and its spatial derivatives. The unknown weighting coefficients *a*_{1} to *a*_{6} absorb into themselves the coefficients of the PSF, ${\left\{H(s,t)\right\}}_{s,t=-M}^{M}$, including the support size parameter *M*. Hence, blind image deconvolution can be converted to blind source separation provided that a multi-channel version of the blurred image [Eq. (11)] is available. As in [20,5,6,17], this is achieved by applying a bank of 2D Gabor filters to the blurred image [Eq. (11)]. The 2D Gabor filter bank consists of filters with two spatial frequencies and four orientations, where the real and imaginary parts of each filter are applied separately. A more detailed description of the 2D Gabor filter bank can be found in [20,21,17]. Applying it to the degraded image [Eq. (11)] yields a new set of observed images:

$${G}_{k}({i}_{1},{i}_{2})\approx {a}_{k1}F+{a}_{k2}\frac{\partial F}{\partial {i}_{1}}+{a}_{k3}\frac{\partial F}{\partial {i}_{2}}+{a}_{k4}\frac{{\partial }^{2}F}{\partial {i}_{1}^{2}}+{a}_{k5}\frac{{\partial }^{2}F}{\partial {i}_{2}^{2}}+{a}_{k6}\frac{{\partial }^{2}F}{\partial {i}_{1}\partial {i}_{2}},\quad k=2,\ldots,{I}_{3},\qquad(12)$$

where ${G}_{k}$ denotes the response of the *k*-th filter in the bank.

These filtered images, together with the blurred image [Eq. (11)], comprise *I*_{3} = 17 images in total. Thus, by merging the blurred image [Eq. (11)] with the filtered images [Eq. (12)], the multichannel image forms a 3D tensor $\underset{\xaf}{G}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}_{0+}^{{I}_{1}\times {I}_{2}\times {I}_{3}}$. In relation to the *N*-order Tucker3 tensor model [Eq. (1)], this scenario is characterized by *N* = 3. Based on Eq. (6), it follows that an estimate of the tensor of source images is obtained by means of the HOOI-based tensor factorization as $\underset{\xaf}{\widehat{F}}=\underset{\xaf}{G}\text{\hspace{0.17em}}{\times}_{3}{\left({A}^{(3)}\right)}^{\dagger}$, where an estimate of the original image is obtained by setting the index that corresponds to the third dimension of $\underset{\xaf}{\widehat{F}}$ to *j* = 1, i.e. the estimate of the original image corresponds to the first source image.
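The construction of the multichannel image tensor described above can be sketched as follows. The Gabor kernel parameterization below (frequencies, orientations, envelope width) is generic and illustrative, not the exact filter bank of [20,21,17]; it only demonstrates how one blurred image is expanded into an *I*_{1} × *I*_{2} × 17 tensor:

```python
import numpy as np

def gabor_kernel(f, theta, sigma=2.0, half=7):
    """Complex 2D Gabor kernel: a Gaussian envelope modulated by a complex
    sinusoid with spatial frequency f (cycles/pixel) at orientation theta."""
    r = np.arange(-half, half + 1)
    X, Y = np.meshgrid(r, r, indexing='ij')
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    return np.exp(-(X**2 + Y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * f * Xr)

def filter2(img, ker):
    # circular 2D convolution via the FFT (adequate for a sketch)
    kpad = np.zeros(img.shape, dtype=complex)
    kpad[:ker.shape[0], :ker.shape[1]] = ker
    h = ker.shape[0] // 2
    kpad = np.roll(kpad, (-h, -h), axis=(0, 1))   # center the kernel at the origin
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kpad))

def multichannel_tensor(G_blur):
    """Stack the blurred image and 16 Gabor responses (2 spatial frequencies
    x 4 orientations, real and imaginary parts separately) into an
    I1 x I2 x 17 tensor, as in the multichannel model of Section 3."""
    chans = [G_blur]
    for f in (0.1, 0.2):                 # two spatial frequencies (assumed values)
        for k in range(4):               # four orientations: 0, 45, 90, 135 degrees
            resp = filter2(G_blur, gabor_kernel(f, k * np.pi / 4))
            chans += [resp.real, resp.imag]
    return np.stack(chans, axis=-1)

img = np.random.default_rng(2).random((48, 64))
G = multichannel_tensor(img)
print(G.shape)   # (48, 64, 17)
```

The resulting tensor is exactly the object handed to the HOOI factorization; the first channel is the blurred image itself, so the first source image corresponds to the restored original.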

The linear mixture model [Eqs. (11) and (12)] represents a single-frame gray scale image blurred by a space-invariant blur. Its generalization to the more complex scenarios elaborated in Table 1 is obtained by expanding the 2D tensor [Eq. (11)] with new indices that correspond to the newly introduced modalities. For example, space-variant deconvolution of a single-frame gray scale image assumes that the blurred image is divided into *I*_{3} blocks, where the PSF is assumed to be space-invariant within each block. Then, each of the *I*_{3} image blocks is filtered by means of the 2D Gabor filter bank, yielding overall a 4D multi-channel image tensor $\underset{\xaf}{G}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}_{0+}^{{I}_{1}\times {I}_{2}\times {I}_{3}\times {I}_{4}}$. The estimate of the tensor of source images is obtained as $\underset{\xaf}{\widehat{F}}=\underset{\xaf}{G}\text{\hspace{0.17em}}{\times}_{4}{\left({A}^{(4)}\right)}^{\dagger}$, where an estimate of the 3D tensor of the blocks of the original image is obtained by setting the index that corresponds to the fourth dimension of $\underset{\xaf}{\widehat{F}}$ to *j* = 1. The original image itself is then obtained by properly rearranging the blocks into a matrix.
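The block partitioning and reassembly assumed in the space-variant scenario can be sketched in a few lines of numpy. These are illustrative helpers of my own, shown with the 384 × 512 image and 48 × 64 blocks used in Section 4 as an example:

```python
import numpy as np

def to_blocks(img, bh, bw):
    """Partition an image into non-overlapping bh x bw blocks and stack
    them along a third mode: (H, W) -> (bh, bw, n_blocks)."""
    H, W = img.shape
    nb = (H // bh) * (W // bw)
    return (img.reshape(H // bh, bh, W // bw, bw)
               .transpose(1, 3, 0, 2)
               .reshape(bh, bw, nb))

def from_blocks(blocks, H, W):
    """Inverse of to_blocks: rearrange the block mode back into an image."""
    bh, bw, nb = blocks.shape
    return (blocks.reshape(bh, bw, H // bh, W // bw)
                  .transpose(2, 0, 3, 1)
                  .reshape(H, W))

img = np.random.default_rng(3).random((384, 512))
blocks = to_blocks(img, 48, 64)          # 64 blocks of 48 x 64 pixels
print(blocks.shape)                      # (48, 64, 64)
print(np.allclose(from_blocks(blocks, 384, 512), img))  # True
```

The block mode produced here is the third tensor mode of the space-variant model; the Gabor channel mode is then appended after it.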

Following the same line of reasoning, space-invariant deconvolution of a single-frame multi-spectral image adds a third dimension to Eq. (11), where *I*_{3} represents the number of spectral bands. Filtering each of the *I*_{3} images by means of the 2D Gabor filter bank yields a 4D multi-channel image tensor $\underset{\xaf}{G}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}_{0+}^{{I}_{1}\times {I}_{2}\times {I}_{3}\times {I}_{4}}$. The estimate of the tensor of source images is obtained as $\underset{\xaf}{\widehat{F}}=\underset{\xaf}{G}\text{\hspace{0.17em}}{\times}_{4}{\left({A}^{(4)}\right)}^{\dagger}$, where an estimate of the 3D tensor of the original multi-spectral image is obtained by setting the index that corresponds to the fourth dimension of $\underset{\xaf}{\widehat{F}}$ to *j* = 1. The physical interpretation of the tensor modes associated with the blind deconvolution scenarios considered in this paper is provided in Table 1.

## 4. Experimental results for images blurred by de-focus, atmospheric turbulence and grating

In this section, the success of the tensor-factorization approach in solving various types of blind image deconvolution problems, according to the classification given in Table 1, is demonstrated in three experimental scenarios. As discussed previously, the proposed approach should be considered complementary to, rather than competing with, physically constrained iterative blind image deconvolution algorithms, for situations when physically based constraints are difficult or impossible to define. Therefore, a comparison of the proposed method against application-specific blind deconvolution methods (for example, methods developed for deconvolution of turbulence-degraded images in astronomy) is not fair, because these methods will almost surely perform better when *a priori* information is provided to them.

Experimental scenarios include: (*i*) the de-focused RGB image shown in Fig. 1, as well as its gray scale version, with dimensions of 384 × 512 pixels. Solutions of the blind deconvolution problems classified in Table 1 as 2, 3 and 4 have been demonstrated on this image. The image has been recorded by a digital camera in a manually de-focused mode and has been used previously in [5,6,17]; (*ii*) RGB and gray scale multi-frame images of the Washington monument blurred by atmospheric turbulence. Four randomly chosen frames with dimensions of 160 × 80 pixels are shown in Fig. 6a. The gray scale version of this image sequence has been used previously in [33] in the formulation of an independent component analysis-based approach to sharpening an image blurred by atmospheric turbulence. Solutions of the blind deconvolution problems classified in Table 1 as 5 and 7 have been demonstrated on these images; (*iii*) a single-frame RGB image blurred by a grating. The image with dimensions of 301 × 351 pixels is shown in the leftmost picture of Fig. 10. The grating-caused blur does not have a real physical equivalent such as turbulence, de-focus or motion. Therefore, it is virtually impossible to define a proper model or another type of *a priori* information required by physically constrained iterative blind deconvolution algorithms such as those exemplified by [7–14]. Thus, the grating-caused blur represents a good example for demonstrating the general-purpose character of the proposed model-free approach to blind image deconvolution. Solutions of the blind deconvolution problems classified in Table 1 as 3 and 4 have been demonstrated on this image.

All results obtained on the experimental images by the tensor factorization approach are compared against results obtained by the blind R-L algorithm [25,26], implemented by the MATLAB function deconvblind. The blind R-L algorithm is model-based and requires *a priori* information about the blur in the form of a parametric model. Like other model-based deconvolution methods, it is sensitive to model misspecification and to mis-estimation of the values of the model parameters [1]. As can be seen in Figs. 4, 9 and 10, the blind R-L algorithm yielded results of equal or lower quality than those obtained by the proposed tensor factorization model-free approach to blind deconvolution, despite the fact that the optimal choice of blur model and model parameters was supplied to the algorithm. Figures 4 and 9 also illustrate the sensitivity of the blind R-L method to a slight over-estimation of the model parameters. In Fig. 10 the best result for blind R-L is shown, obtained after many attempts to choose the blur model and model parameters.
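For reference, the alternating structure of blind R-L can be sketched as follows. This is my own minimal numpy rendition with circular FFT convolutions, not the MATLAB deconvblind implementation; the PSF is stored as an image-sized array and the iteration counts are arbitrary:

```python
import numpy as np

def conv2(a, b):
    # circular 2D convolution via the FFT (adequate for a sketch)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr2(a, b):
    # circular 2D correlation, i.e. convolution with the flipped kernel
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def blind_rl(g, psf0, n_outer=10, n_inner=5, eps=1e-12):
    """Alternating blind Richardson-Lucy updates: the PSF and the image
    are refined in turn by multiplicative R-L steps."""
    f = np.full_like(g, g.mean())        # flat initial image estimate
    h = psf0 / psf0.sum()
    for _ in range(n_outer):
        for _ in range(n_inner):         # PSF update, image held fixed
            h = h * corr2(g / (conv2(f, h) + eps), f)
            h = np.clip(h, 0, None)
            h /= h.sum()
        for _ in range(n_inner):         # image update, PSF held fixed
            f = f * corr2(g / (conv2(f, h) + eps), h)
            f = np.clip(f, 0, None)
    return f, h

rng = np.random.default_rng(4)
f_true = rng.random((64, 64))
h_true = np.zeros((64, 64)); h_true[:3, :3] = 1.0 / 9   # 3 x 3 box blur
g = conv2(f_true, h_true)
psf0 = np.zeros((64, 64)); psf0[:5, :5] = 1.0           # flat initial PSF guess
f_hat, h_hat = blind_rl(g, psf0)
```

Even in this toy form, the dependence on the initial PSF guess psf0 is visible; this is precisely the kind of *a priori* model information that the tensor factorization approach does not need.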

#### 4.1 Space-variant blind deconvolution of de-focused single-frame gray scale image

According to Table 1, this is blind deconvolution problem 2, and it results in 4D tensor factorization. The tensor model [Eq. (1)] of the gray scale version of the de-focused RGB image shown in Fig. 1 is characterized with: *I*_{1} = 48, *I*_{2} = 64, *I*_{3} = 64, and *I*_{4} = 17. The image has been divided into 64 blocks with the size of 48 × 64 pixels, and the PSF is assumed to be space-invariant within each block. Figure 2-left shows the result of the 4D tensor factorization approach. Figure 2-right shows the corresponding result, with poorer resolution, obtained when each block has been deconvolved separately through 3D tensor factorization. This result is explained by the fact that there is only one scaling indeterminacy associated with the solution of the 4D tensor factorization problem, while there are *I*_{3} = 64 scaling indeterminacies associated with the solutions of the 3D tensor factorization problems. White artifact lines and sharp differences between a few blocks are visible in the restored image; they are the consequence of the adopted block-wise approach to space-variant blind deconvolution. However, the super-resolution effect of the deconvolution method is demonstrated. I consider the block-wise induced artifacts an open problem that can possibly be reduced through some continuity (smoothness) constraints, imposed either directly through factorization or in a form of post-processing. Choosing the optimal size of the image blocks is also an open issue to be explored in the future.

#### 4.2 Space-invariant blind deconvolution of de-focused single-frame multi-spectral image

According to Table 1, this is blind deconvolution problem 3. It results in 4D tensor factorization. The tensor model [Eq. (1)] of the multi-channel de-focused RGB image is characterized with: *I*_{1} = 384, *I*_{2} = 512, *I*_{3} = 3 and *I*_{4} = 17. Figure 3-left shows the result of the 4D tensor factorization approach to model-free blind deconvolution. The corresponding result, with poorer resolution, is presented in Fig. 3-right, and also in [7], where each spectral image has been deconvolved separately through 3D tensor factorization. This result is explained by the fact that there is only one scaling indeterminacy associated with the solution of the 4D tensor factorization problem, while there are *I*_{3} = 3 scaling indeterminacies associated with the solutions of the 3D tensor factorization problems. The result obtained by model-free blind deconvolution has served as a reference to optimize parameters for the blind R-L algorithm, the results of which are shown in Fig. 4. Knowing that the RGB image was de-focused, a disk with the proper radius was supplied to the algorithm as the blur model. The optimal result is shown in Fig. 4-left. As shown in Fig. 4-right, over-estimating the disk radius by only 1 pixel deteriorated the performance of the blind R-L method significantly.

#### 4.3 Space-variant blind deconvolution of a de-focused single-frame multi-spectral image

According to Table 1, this is blind deconvolution problem 4. To demonstrate this case, the RGB image shown in Fig. 1 has been divided into 64 blocks with the size of 48 × 64 pixels. The PSF is assumed to be space-invariant within each block. The 5D tensor of the multi-channel blurred image [Eq. (1)] is characterized with: *I*_{1} = 48, *I*_{2} = 64, *I*_{3} = 64, *I*_{4} = 3 and *I*_{5} = 17. Figure 5-left shows the result obtained with the proposed tensor factorization approach. Note, in direct comparison with Fig. 4-left and Fig. 3-left, that the book title and the names of the authors are more readable (details are better resolved) in Fig. 5-left. Figure 5-right shows the corresponding result, with poorer resolution, obtained by deconvolving each of the *I*_{3} = 64 blocks in each of the *I*_{4} = 3 spectral bands separately through 3D tensor factorizations. This result is explained by the fact that there is only one scaling indeterminacy associated with the solution of the 5D tensor factorization problem, while there are *I*_{3} × *I*_{4} = 192 scaling indeterminacies associated with the solutions of the 3D tensor factorization problems.

#### 4.4 Space-invariant blind deconvolution of a multi-frame multi-spectral image blurred by atmospheric turbulence

According to Table 1, this is blind deconvolution problem 7. To demonstrate this case, the multi-frame RGB image of the Washington monument is used. Four randomly chosen frames are shown in Fig. 6a. The 5D tensor [Eq. (1)] of the blurred multi-channel multi-frame multi-spectral image is characterized with: *I*_{1} = 160, *I*_{2} = 80, *I*_{3} = 3, *I*_{4} = 4 and *I*_{5} = 17. Figure 6b-left shows the average of the four frames shown in Fig. 6a, while Fig. 6b-right shows an edge map extracted from Fig. 6b-left by Canny's algorithm with the threshold set to 0.21. Figure 7a shows four source image frames restored with the proposed tensor factorization approach, while Fig. 7b-left shows the average of these four source image frames. Figure 7b-right shows an edge map extracted from Fig. 7b-left by Canny's algorithm with the threshold again set to 0.21. Details like the windows and the top of the monument are reconstructed. Some of these details (windows) are missing from the edge map extracted from the turbulence-degraded image. They could be recovered by reducing the threshold (increasing the sensitivity) of Canny's algorithm; however, in that case, turbulence-induced artifacts would be picked up as well.

#### 4.5 Space-invariant blind deconvolution of multi-frame gray scale image blurred by atmospheric turbulence

According to Table 1, this is blind deconvolution problem 5. To demonstrate this case, a gray scale version of the RGB image sequence from section 4.4 is used. The 4D tensor [Eq. (1)] of the blurred image is characterized with: *I*_{1} = 160, *I*_{2} = 80, *I*_{3} = 4 and *I*_{4} = 17. Figure 8a shows four source image frames obtained with the proposed tensor factorization approach, while Fig. 8b-left shows the average of the four source image frames shown in Fig. 8a. Figure 8b-right shows an edge map extracted from Fig. 8b-left by Canny's algorithm with the threshold again set to 0.21. Again, details like the windows and the top of the monument are reconstructed. Figure 9a-left shows the image restored from a gray scale version of the average of the four blurred frames, shown in Fig. 6a, by means of the blind R-L algorithm. The corresponding edge map extracted by Canny's method with the threshold again set to 0.21 is shown in Fig. 9a-right. Knowledge of the type of degradation has been used to specify the correct blur model required by the blind R-L algorithm: a 2D Gaussian with a support size of 18 pixels and a standard deviation of 1.3 pixels has been used to obtain the shown result. The result obtained by the model-free approach, shown in Fig. 8b, has been used as a reference to obtain optimal values of the parameters of the 2D Gaussian. Yet, the blind R-L algorithm with parameters optimized for the blur model yielded a result that is not better than the one obtained by the model-free blind deconvolution approach. Figure 9b shows the result obtained by the blind R-L algorithm when the standard deviation is over-estimated to 1.9 pixels. This, again, illustrates the sensitivity of the blind R-L algorithm to mis-estimation of model parameters. No such problems arise with the proposed model-free approach to blind deconvolution.

#### 4.6 Space-(in)variant blind deconvolution of a single-frame multi-spectral image blurred by a grating

According to Table 1, these are blind deconvolution problems 3 and 4. The space-variant blind deconvolution problem 4 results in 5D tensor factorization. The tensor [Eq. (1)] of the blurred image is characterized with: *I*_{1} = 150, *I*_{2} = 117, *I*_{3} = 6, *I*_{4} = 3 and *I*_{5} = 17. The grating-blurred RGB image is shown in the leftmost picture in Fig. 10. The true image is composed of a palette of color pens and a painting on the white board. Since there is no “real life” physical analogy to the grating-caused blur, it is virtually impossible to select a specialized method to perform blind deconvolution of this grating-blurred image. The picture second from the left in Fig. 10 shows the result obtained by the solution of the space-variant blind deconvolution problem 4. The picture second from the right in Fig. 10 shows the result obtained by the space-invariant blind deconvolution problem 3, which results in 4D tensor factorization. The tensor [Eq. (1)] of the blurred image is then characterized with: *I*_{1} = 301, *I*_{2} = 351, *I*_{3} = 3 and *I*_{4} = 17. The best result obtained by the blind R-L algorithm for the space-invariant problem is shown in the rightmost picture of Fig. 10. It has been obtained after many attempts to find both the proper model and the parameters of the blur. Still, the obtained result is not better than the one shown in the picture second from the right, obtained by the solution of the space-invariant blind deconvolution problem 3. It is, however, visible that the space-variant version of blind deconvolution improved the spatial resolution of the restored image significantly in relation to the images restored by space-invariant blind deconvolution methods. This result has been obtained truly without any *a priori* information provided to the blind deconvolution algorithm.

## 5. Conclusion and future work

The HOOI-based tensor factorization approach has been proposed for the space-(in)variant model-free blind deconvolution of single- and multi-frame multi-spectral images. This is achieved by converting blind deconvolution into blind source separation using the implicit Taylor expansion of the original image in the convolution image-forming equation. Herein, the sources represent the original image and its spatial derivatives. In addition to generality, the two major contributions of the proposed approach to blind image deconvolution are: (*i*) the HOOI-based factorization of the tensor of the blurred image is essentially unique with no hard constraints imposed on source images, in contrast to matrix factorization-based methods; (*ii*) neither the model nor the size of the support of the point spread function is required to be *a priori* known or estimated. However, use of the implicit Taylor expansion implies a certain level of smoothness of the original image. This might limit the performance of the proposed approach when the blurring process is strong or the original image contains sharp boundaries. Nevertheless, the proposed method is expected to be useful in scenarios where the *a priori* information required by physically constrained iterative blind deconvolution methods is difficult or impossible to define. Two fundamental issues are considered important for future work: optimal selection of the size of the image blocks and neutralization of block-wise artifacts associated with space-variant deconvolution; and sequence partitioning-based methods as a possible replacement for the Gabor filter bank approach to single-channel blind source separation.

## Appendix: Mode-*n* unfolding

The mode-*n* unfolding of a tensor $\underset{\xaf}{G}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}^{{I}_{1}\times {I}_{2}\mathrm{...}\times {I}_{N}}$ is denoted by ${G}_{(n)}$ and arranges the mode-*n* fibers of the tensor $\underset{\xaf}{G}$ into a matrix ${G}_{(n)}$, see also definition 1.4 in [27], as well as [31]. A tensor element (*i*_{1}, *i*_{2},...,*i*_{N}) maps onto a matrix element (*i*_{n}, *j*), where

$j = 1 + \sum_{k=1,\, k \neq n}^{N} \left( i_{k} - 1 \right) J_{k}, \qquad J_{k} = \prod_{m=1,\, m \neq n}^{k-1} I_{m}.$

For example, a third order tensor $\underset{\xaf}{G}\text{\hspace{0.17em}}\in \text{\hspace{0.17em}}{\mathbb{R}}^{{I}_{1}\times {I}_{2}\times {I}_{3}}$, with elements ${g}_{{i}_{1}{i}_{2}{i}_{3}}$ and indices (*i*_{1}, *i*_{2}, *i*_{3}), has a corresponding position (*i*_{n}, *j*) in the mode-*n* unfolded matrix ${G}_{(n)}$ (*n* = 1,2,3) as follows: mode-*1*: *j* = *i*_{2} + (*i*_{3} - 1)*I*_{2}; mode-*2*: *j* = *i*_{1} + (*i*_{3} - 1)*I*_{1}; mode-*3*: *j* = *i*_{1} + (*i*_{2} - 1)*I*_{1}.
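The index mapping above can be checked numerically. In the following sketch the helper names `unfold` and `col_index` are illustrative; the Appendix's ordering (remaining indices in increasing order, earliest varying fastest) corresponds to a Fortran-order reshape in numpy.

```python
import numpy as np

def unfold(T, n):
    # Mode-n unfolding with the Appendix's index ordering: the remaining
    # indices are taken in increasing order, earliest varying fastest.
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order='F')

def col_index(idx, shape, n):
    # 1-based column index j of 1-based element idx in G_(n):
    # j = 1 + sum_{k != n} (i_k - 1) J_k, with J_k = prod_{m<k, m != n} I_m.
    j, stride = 1, 1
    for k in range(len(shape)):
        if k == n:
            continue
        j += (idx[k] - 1) * stride
        stride *= shape[k]
    return j

T = np.arange(3 * 4 * 5).reshape(3, 4, 5)
# Check the mode-1 example, j = i_2 + (i_3 - 1) I_2, at element (2, 3, 4).
i1, i2, i3 = 2, 3, 4
assert col_index((i1, i2, i3), T.shape, 0) == i2 + (i3 - 1) * 4
assert unfold(T, 0)[i1 - 1, col_index((i1, i2, i3), T.shape, 0) - 1] \
    == T[i1 - 1, i2 - 1, i3 - 1]
```

The same check passes for every element and every mode, confirming that the closed-form index map and the fiber-rearrangement definition agree.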

## Acknowledgments

This work was supported by the Ministry of Science, Education and Sports, Republic of Croatia, grant 098-0982903-2558. I also thank the anonymous reviewers for their very constructive comments. Professor Wasyl Wasylkiwskyj's and Dr. Jordi Sancho-Parramon's help in proofreading the manuscript is also gratefully acknowledged.

## References and links

**1. **R. L. Lagendijk, and J. Biemond, *Iterative Identification and Restoration of Images* (KAP, 1991).

**2. **M. R. Banham and A. K. Katsaggelos, “Digital Image Restoration,” IEEE Signal Process. Mag. **14**(2), 24–41 (1997). [CrossRef]

**3. **D. Kundur and D. Hatzinakos, “Blind Image Deconvolution,” IEEE Signal Process. Mag. **13**(3), 43–64 (1996). [CrossRef]

**4. **P. Campisi, and K. Egiazarian, eds., *Blind Image Deconvolution* (CRC Press, Boca Raton, 2007).

**5. **I. Kopriva, “Single-frame multichannel blind deconvolution by nonnegative matrix factorization with sparseness constraints,” Opt. Lett. **30**(23), 3135–3137 (2005). [CrossRef] [PubMed]

**6. **I. Kopriva, “3D tensor factorization approach to single-frame model-free blind-image deconvolution,” Opt. Lett. **34**(18), 2835–2837 (2009). [CrossRef] [PubMed]

**7. **F. Li, X. Jia, and D. Fraser, “Superresolution Reconstruction of Multispectral Data for Improved Image Classification,” IEEE Geosci. Remote Sens. Lett. **6**(4), 689–693 (2009). [CrossRef]

**8. **H. Ji and C. Fermüller, “Robust wavelet-based super-resolution reconstruction: theory and algorithm,” IEEE Trans. Pattern Anal. Mach. Intell. **31**(4), 649–660 (2009). [CrossRef] [PubMed]

**9. **T. J. Schulz, “Multiframe blind deconvolution of astronomical images,” J. Opt. Soc. Am. A **10**(5), 1064–1073 (1993). [CrossRef]

**10. **R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing Camera Shake from a Single Photograph,” ACM Trans. Graph. **25**(3), 787–794 (2006). [CrossRef]

**11. **Q. Shan, J. Jia, and A. Agarwala, “High-quality Motion Deblurring from a Single Image,” ACM Trans. Graph. **27**(3), 1 (2008). [CrossRef]

**12. **S. Cho and S. Lee, “Fast Motion Deblurring,” ACM Trans. Graph. **28**(5), 1 (2009). [CrossRef]

**13. **J. Miskin, and D. J. C. MacKay, “Ensemble Learning for Blind Image Separation and Deconvolution,” in: *Advances in Independent Component Analysis* (Springer, London, 2000).

**14. **M. Šorel and J. Flusser, “Space-variant restoration of images degraded by camera motion blur,” IEEE Trans. Image Process. **17**(2), 105–116 (2008). [CrossRef] [PubMed]

**15. **J. Bardsley, S. Jefferies, J. Nagy, and R. Plemmons, “A computational method for the restoration of images with an unknown, spatially-varying blur,” Opt. Express **14**(5), 1767–1782 (2006). [CrossRef] [PubMed]

**16. **E. F. Y. Hom, F. Marchis, T. K. Lee, S. Haase, D. A. Agard, and J. W. Sedat, “AIDA: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data,” J. Opt. Soc. Am. A **24**(6), 1580–1600 (2007). [CrossRef]

**17. **I. Kopriva, “Approach to blind image deconvolution by multiscale subband decomposition and independent component analysis,” J. Opt. Soc. Am. A **24**(4), 973–983 (2007). [CrossRef]

**18. **L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” SIAM J. Matrix Anal. Appl. **21**(4), 1253–1278 (2000). [CrossRef]

**19. **L. De Lathauwer, B. De Moor, and J. Vandewalle, “On the best rank-1 and rank-(R1,R2,…,RN) approximation of higher-order tensors,” SIAM J. Matrix Anal. Appl. **21**(4), 1324–1342 (2000). [CrossRef]

**20. **S. Umeyama, “Blind deconvolution of blurred images by use of ICA,” Electron Commun. Jpn **84**(12), 1–9 (2001).

**21. **J. G. Daugman, “Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression,” IEEE Trans. Acoust. Speech Signal Process. **36**(7), 1169–1179 (1988). [CrossRef]

**22. **J. Lin and A. Zhang, “Fault feature separation using wavelet-ICA filter,” NDT Int. **38**(6), 421–427 (2005). [CrossRef]

**23. **M. E. Davies and C. J. James, “Source separation using single channel ICA,” Signal Process. **87**(8), 1819–1832 (2007). [CrossRef]

**24. **H. G. Ma, Q. B. Jiang, Z. Q. Liu, G. Liu, and Z. Y. Ma, “A novel blind source separation method for single-channel signal,” Signal Process. **90**(12), 3232–3241 (2010). [CrossRef]

**25. **D. A. Fish, A. M. Brinicombe, E. R. Pike, and J. G. Walker, “Blind deconvolution by means of the Richardson-Lucy algorithm,” J. Opt. Soc. Am. A **12**(1), 58–65 (1995). [CrossRef]

**26. **D. S. C. Biggs and M. Andrews, “Acceleration of iterative image restoration algorithms,” Appl. Opt. **36**(8), 1766–1775 (1997). [CrossRef] [PubMed]

**27. **A. Cichocki, R. Zdunek, A. H. Phan, and S. I. Amari, *Nonnegative Matrix and Tensor Factorization* (John Wiley & Sons, 2009).

**28. **A. Cichocki, and S. Amari, *Adaptive Blind Signal and Image Processing* (John Wiley, New York, 2002).

**29. **H. A. L. Kiers, “Towards a standardized notation and terminology in multiway analysis,” J. Chemometr. **14**(3), 105–122 (2000). [CrossRef]

**30. **L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika **31**(3), 279–311 (1966). [CrossRef] [PubMed]

**31. **A. Cichocki and A. H. Phan, “Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations,” IEICE Trans. Fundamentals **E92-A**(3), 708–721 (2009).

**32. **B. W. Bader and T. G. Kolda, *MATLAB Tensor Toolbox version 2.2*, http://csmr.ca.sandia.gov/~tkolda/TensorToolbox.

**33. **I. Kopriva, Q. Du, H. Szu, and W. Wasylkiwskyj, “Independent Component Analysis Approach to Image Sharpening in the Presence of Atmospheric Turbulence,” Opt. Commun. **233**(1-3), 7–14 (2004). [CrossRef]