Two-kernel image deconvolution

Open Access
Abstract

A method for deconvolution of the true image of an object from two recorded images is proposed. These two images have to be made by an imaging system with two different but interconnected kernels. The method is formulated as a system of Fredholm equations of the first kind reduced to a single functional equation in Fourier space. The kernels of the system and the true image of an object are found from the same recorded images.

© 2013 Optical Society of America

1. Introduction

The problem of deconvolution of the true image of an object from a recorded image of the object is equivalent (for linear imaging systems) to solving a 2-D Fredholm integral equation of the first kind:

$$I(x_0,y_0)=\iint K[x-x_0,\,y-y_0]\,F(x,y)\,dx\,dy,\tag{1}$$
where F(x, y) is the true image of an object, I(x0, y0) is a recorded image of the object, and K[x − x0, y − y0] is the kernel of this equation. There are numerous approaches to solving this problem, including blind deconvolution [1], which under some assumptions does not require knowledge of the kernel to restore the true image of an object. However, in general, due to the ill-posed character of the problem, it is important to know the kernel with high accuracy [2]. Usually, the kernel is calculated theoretically or measured experimentally. After that, well-known techniques such as Wiener filtering [3], Richardson-Lucy iterations [4,5], Tikhonov regularization [6], or their modifications are used. A drawback of these methods is that they do not guarantee that the kernel used in the restoration calculations equals the kernel in effect during image recording.

Rather than performing passive measurements or calculations of the kernel, it is possible to affect the kernel physically to achieve better deconvolution results. For 1-D images (signals), we proposed this idea in [7] and implemented it in [8] for the purpose of simultaneously calculating the kernel and the true signal. For 2-D images, the coded exposure [9] and coded aperture [10] techniques are used to make the deconvolution problem better posed. 3-D images can be restored by changing the focus settings of an imaging system, as described in [11]; a scene and a depth map can be estimated from several low-resolution defocused images [12].

As opposed to depth from focus/defocus methods, e.g., [13], where the shape of the kernel is assumed to be known while the width of the kernel depends on a 3-D surface and focal settings, in the proposed method neither the shape nor the width of the kernel is known in advance; the entire point of the method is to determine the kernel precisely. After the kernel is calculated, the corresponding deconvolution can be found by known methods for solving ill-posed problems. For LIDAR imaging systems based on a 2-D antenna array, the result of such deconvolution is the true topographical image of a scene.

The method is based on the following manipulation of the kernel of Eq. (1). The pattern of a LIDAR two-dimensional antenna array can be stretched in two directions without changing the shape of the pattern [14]. Such a transformation can be achieved, for example, by using the same LIDAR but operating at a different wavelength of radiation.

The idea of the method is to use two images of an object, where the second image is recorded by the same system but with the stretched kernel. The equation describing the relationship between these two unknown kernels, together with the two Fredholm equations describing the recorded images, forms a system of three equations. The kernels and the true image F(x, y) of the object are found from this system. No specific assumptions regarding the shapes or the widths of the kernels are required. The method works if the kernel of Eq. (1) is separable and the Fourier spectrum of the recorded image is differentiable and bounded at zero frequency.

2. 1-D algorithm

For 1-D images, a recorded image I(x0) can be expressed as

$$I(x_0)=\int A(x-x_0)\,S(x)\,dx,\tag{2}$$
where A(x - x0) is the kernel of an imaging system and S(x) is the unknown true image of an object.

According to the proposed algorithm, at least two images have to be recorded by the imaging system in order to get the true image of an object. The first image is recorded with the kernel A1(x − x0); the second is recorded with the kernel A2(x − x0) = A1[(x − x0)/m], where m > 1 (m is not necessarily an integer). Figure 1 illustrates the relationship between the two kernels when m = 2; a numerical sketch of such a recording follows the figure.

Fig. 1 The relationship between two kernels of the imaging system.
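As a minimal numerical sketch of such a two-kernel recording (the two-peak test signal, the Gaussian kernel, and the FFT-based circular convolution are assumptions chosen here only for illustration):

import numpy as np

x = np.linspace(-40.0, 40.0, 512, endpoint=False)

def blur(S, width):
    # Convolve S with a centered Gaussian kernel A(x) = exp(-x^2 / (2 width^2)).
    A = np.exp(-0.5 * (x / width) ** 2)
    return np.fft.ifft(np.fft.fft(S) * np.fft.fft(np.fft.ifftshift(A))).real

S = np.exp(-(x - 3.0) ** 2) + np.exp(-(x + 3.0) ** 2)  # unknown true signal
I1 = blur(S, 3.0)   # recorded with kernel A1(x - x0)
I2 = blur(S, 6.0)   # recorded with A2(x - x0) = A1[(x - x0)/2], i.e., m = 2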

Mathematically, the problem of deconvolution of images obtained by a 1-D imaging system is equivalent to solving the following system of two integral equations:

$$I_1(x_0)=\int A(x-x_0)\,S(x)\,dx,\tag{3}$$
$$I_2(x_0)=\int A\!\left[\frac{x-x_0}{m}\right]S(x)\,dx\tag{4}$$
with two unknown functions S(x) and A(x - x0). This system can be rewritten in Fourier space as a system of two functional equations with two unknown functions s(w) and a(w):
$$i_1(w)=a(w)\,s(w),\tag{5}$$
$$i_2(w)=m\,a(mw)\,s(w),\tag{6}$$
where w is an angular frequency and i1(w), i2(w), a(w), and s(w) are the Fourier transforms of I1(x0), I2(x0), A(x − x0), and S(x), respectively. Rewriting Eq. (6) in the form i2(w/m) = ma(w)s(w/m) and dividing Eq. (5) by the rewritten Eq. (6) gives a single functional equation with one unknown function s(w):

$$i_2(w/m)\,s(w)=m\,i_1(w)\,s(w/m).\tag{7}$$

The substitution of Eq. (5) and the rewritten Eq. (6) into Eq. (7) turns both sides into m a(w) s(w) s(w/m); this identity shows that Eq. (7) holds for any w, including those w at which Eq. (6) vanishes.

The solution s(w) of Eq. (7) is not unique: s(w) multiplied by a constant is also a solution. At the same time, s(w) is the unique solution among all functions that are equal to 1 at w = 0. To prove this fact, suppose that there is a different solution r(w) of Eq. (7):

$$i_2(w/m)\,r(w)=m\,i_1(w)\,r(w/m).\tag{8}$$
Division of Eq. (8) by Eq. (7) gives
$$\frac{r(w)}{s(w)}=\frac{r(w/m)}{s(w/m)}=\frac{r(w/m^2)}{s(w/m^2)}=\dots=\frac{r(w/m^N)}{s(w/m^N)}.\tag{9}$$
Both r(0) = 1 and s(0) = 1, so r(w/m^N) → 1 and s(w/m^N) → 1 as N → ∞; hence, r(w) = s(w). So s(w) is unique up to a constant multiplier.

Equation (7) can be rewritten as

$$s(w)=\frac{i_1(w)}{\frac{1}{m}\,i_2(w/m)}\,s(w/m)=\frac{i_1(w)}{\frac{1}{m}\,i_2(w/m)}\cdot\frac{i_1(w/m)}{\frac{1}{m}\,i_2(w/m^2)}\,s(w/m^2)=\dots=\frac{Y(w)}{H(w)}\,s(w/m^N),\tag{10}$$
where
$$Y(w)=\prod_{k=0}^{N} i_1\!\left(\frac{w}{m^k}\right),\tag{11}$$
$$H(w)=\prod_{k=1}^{N} \frac{1}{m}\,i_2\!\left(\frac{w}{m^k}\right).\tag{12}$$
Let N → ∞; then s(w/m^N) → 1 and Eq. (10) becomes

$$Y(w)=H(w)\,s(w).\tag{13}$$

The substitution of Eqs. (11) and (12) into Eq. (13), followed by applying Eq. (7) N times and letting N → ∞, yields an identity, which shows that Eq. (13) holds for any w. This means that Eq. (13) has the same solution s(w) as the original system of Eqs. (5) and (6). Equation (13) remains an ill-posed problem: if H(w) has a zero value, the exact recovery of S(x) is impossible. However, in Eq. (13), H(w) is expressed in terms of the experimentally known function i2(w), and Y(w) is expressed in terms of the experimentally known function i1(w). There is no longer any need to measure the kernel H(w) or to make a theoretical assumption regarding its shape.

To illustrate the fact that the product (11) converges, consider an example where m = 2 and i1(w) = exp(−3w²):

$$Y(w)=\prod_{k=0}^{\infty} i_1\!\left(\frac{w}{m^k}\right)=e^{-3w^2}\,e^{-3w^2/4}\,e^{-3w^2/16}\,e^{-3w^2/64}\cdots=e^{-3w^2\left(1+\frac14+\frac1{16}+\cdots\right)}=e^{-4w^2}.\tag{14}$$
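This limit is easy to verify numerically, since the exponents form the geometric series 3(1 + 1/4 + 1/16 + ...) = 4; a quick check of the partial products:

import numpy as np

w = np.linspace(-2.0, 2.0, 9)
Y = np.ones_like(w)
for k in range(40):                         # 40 factors is far past convergence
    Y *= np.exp(-3.0 * (w / 2.0 ** k) ** 2)
print(np.max(np.abs(Y - np.exp(-4.0 * w ** 2))))   # ~1e-16, machine precision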

Below is a proof that the product (11) converges under the assumptions that i1(w) has a first derivative, that this derivative is bounded at w = 0, and that i1(w) is normalized so that i1(w) → 1 as w → 0. The last condition means that, starting from some number k, all terms of the product are close to 1.

To prove the convergence of a product, it is sufficient to prove the convergence of a sum [15], which for the product (11) is

$$\sum_{k=0}^{\infty}\left|L\!\left(\frac{w}{2^k}\right)\right|,\tag{15}$$
where L(w) = log[i1(w)]. The choice of m = 2 does not affect the logic of the proof (the logic is correct for any m > 1). It is clear that L(0) = 0 because i1(w) → 1 as w → 0; moreover, the derivative
$$\frac{dL(w)}{dw}=\frac{1}{i_1(w)}\,\frac{di_1(w)}{dw}\tag{16}$$
is bounded at w = 0 because di1(w)/dw is bounded at w = 0. In other words, there is a number w0 such that |L(w)| < wD for all 0 < w < w0, where D is some finite number. Hence, for a sufficiently large number K,
$$\sum_{k=K+1}^{\infty}\left|L\!\left(\frac{w}{2^k}\right)\right|\le\sum_{k=K+1}^{\infty}\frac{w}{2^k}\,D=\frac{wD}{2^K}.\tag{17}$$
Hence, the product (11) converges.

Similarly, the product (12) converges under the assumptions that i2(w) has a first derivative, that this derivative is bounded at w = 0, and that i2(w) is normalized so that i2(w) → m as w → 0.

Equation (13) can be solved by any known method for solving ill-posed problems, e.g., [6]:

$$s(w)=\frac{H^*(w)\,Y(w)}{H^*(w)\,H(w)+h^2},\tag{18}$$
where Y(w) is defined by Eq. (11), H(w) is defined by Eq. (12), and the regularization parameter h can be estimated as described in [16].
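A minimal sketch of evaluating Eqs. (11), (12), and (18) on sampled data follows; the FFT grid, the linear interpolation used to read the spectra at the scaled frequencies w/m^k, and the explicit normalizations i1(0) = 1 and i2(0) = m are implementation assumptions of the sketch, not prescriptions of the method.

import numpy as np

def recover_spectrum(I1, I2, m=2.0, N=8, h2=3e-5):
    # Estimate s(w) from two recorded 1-D images per Eq. (18).
    w = np.fft.fftshift(np.fft.fftfreq(len(I1)))   # monotonic frequency grid
    i1 = np.fft.fftshift(np.fft.fft(I1))
    i2 = np.fft.fftshift(np.fft.fft(I2))
    c = np.argmin(np.abs(w))                       # index of w = 0
    i1 = i1 / i1[c]                                # normalize: i1(0) -> 1
    i2 = m * i2 / i2[c]                            # normalize: i2(0) -> m

    def at(spec, ws):   # read a complex spectrum at the scaled frequencies
        return np.interp(ws, w, spec.real) + 1j * np.interp(ws, w, spec.imag)

    Y = np.ones_like(i1)
    H = np.ones_like(i2)
    for k in range(N + 1):
        Y *= at(i1, w / m ** k)                    # Eq. (11), k = 0..N
        if k >= 1:
            H *= at(i2, w / m ** k) / m            # Eq. (12), k = 1..N
    return np.conj(H) * Y / (np.conj(H) * H + h2)  # Eq. (18)

# Usage with the toy recording above:
#   s = recover_spectrum(I1, I2)
#   S_est = np.fft.ifft(np.fft.ifftshift(s)).real

The normalization fixes s(0) = 1, which is natural because the solution of Eq. (7) is unique only up to a constant multiplier; the same arrays Y, H, and i1 also give the kernel spectrum of Eq. (19) below.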

The system of Eqs. (5) and (6) can be solved for the kernel as well:

$$a(w)=\frac{Y^*(w)\,H(w)\,i_1(w)}{Y^*(w)\,Y(w)+\alpha^2},\tag{19}$$
where α is a regularization parameter.

3. 2-D algorithm for a LIDAR imaging system

In a LIDAR imaging system, images are recorded by a scanning LIDAR beam; each image pixel is produced by at least one LIDAR pulse. The objective of the proposed algorithm is to resolve objects that are smaller than the diameter of the LIDAR beam.

For a LIDAR based on a 2-D antenna array, the radiation pattern depends on the ratio of the radiation wavelength to the grating pitch [14] and can be manipulated in both the x and y directions.

According to the algorithm, at least two images have to be recorded by the imaging system in order to get the true image of an object. The first image I1(x0, y0) is recorded with a beam of photons of wavelength λ1; the second image I2(x0, y0) is recorded with a beam of photons of wavelength λ2. The image I1(x0, y0) represents the average travel time of photons with the wavelength λ1 from their source to the target point (x0, y0) and back; the image I2(x0, y0) represents the same for photons with the wavelength λ2. Because these wavelengths are different, the diameters of the beams are also different; one beam illuminates a larger area than the other, and hence the two illuminated areas have different topographies.

It is assumed that there are no infinite-depth holes in the object. The number of photons returning from a point (x, y) of the object is proportional to the intensity of illumination of this point. This intensity is proportional to the radiation pattern K[x - x0, y - y0]; so K[x - x0, y - y0] serves as a weight in calculating the average return time of the photons. The return time of photons from a point (x, y) located at the distance d = F(x, y) from the source of photons depends on d linearly: it is equal to 2d/c, where c is the speed of light. So Eq. (1) represents the average return time measured at the moment when the LIDAR beam is aimed at the point (x0, y0); the coefficient 2/c is omitted as unimportant.
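As a toy illustration of this measurement model (the depth map, the Gaussian beam pattern, and the explicit unit-weight normalization below are assumptions for demonstration only):

import numpy as np

# One LIDAR pixel: the pattern-weighted average photon return time over
# the illuminated patch; the factor 2/c is omitted, as in the text.
g = np.linspace(-5.0, 5.0, 101)
X, Y = np.meshgrid(g, g, indexing="ij")
F = 10.0 + 0.5 * np.tanh(X)          # assumed local depth map d = F(x, y)
K = np.exp(-(X ** 2 + Y ** 2) / 2)   # assumed beam pattern aimed at (0, 0)
pixel = np.sum(K * F) / np.sum(K)    # average return time, up to 2/c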

The described method of image registration is based on the travel time of photons, so the colors of the illuminated objects are irrelevant.

For the first image, Eq. (1) can be expressed as

$$I_1(x_0,y_0)=\iint K[x-x_0,\,y-y_0]\,F(x,y)\,dx\,dy=\int A[x-x_0]\left\{\int B[y-y_0]\,F(x,y)\,dy\right\}dx,\tag{20}$$
where K[x − x0, y − y0] = A[x − x0]·B[y − y0]; for the second image, Eq. (1) can be expressed as
$$I_2(x_0,y_0)=\int A\!\left[\frac{x-x_0}{m}\right]\left\{\int B\!\left[\frac{y-y_0}{n}\right]F(x,y)\,dy\right\}dx,\tag{21}$$
where m is the stretching factor of the kernel in the x direction and n is the stretching factor in the y direction. (If the stretching is achieved by the same LIDAR operating at different wavelengths, then m = n.) Eqs. (20) and (21) can be rewritten in Fourier space as
$$i_1(w,v)=a(w)\,b(v)\,f(w,v),\tag{22}$$
$$i_2(w,v)=m\,a(mw)\,n\,b(nv)\,f(w,v),\tag{23}$$
where a(w), b(v), and f(w, v) are the Fourier transforms of A[x − x0], B[y − y0], and F(x, y), respectively.

This system of two functional equations can be easily reduced to a single functional equation

$$i_2\!\left(\frac{w}{m},\frac{v}{n}\right)f(w,v)=mn\,i_1(w,v)\,f\!\left(\frac{w}{m},\frac{v}{n}\right)\tag{24}$$
with one unknown function f(w, v). The solution of this equation exists and is unique up to a constant multiplier; the proof is the same as for Eq. (7), repeated twice: first for v and then for w.

Equation (24) can be rewritten as

$$f(w,v)=\frac{i_1(w,v)}{\frac{1}{mn}\,i_2\!\left(\frac{w}{m},\frac{v}{n}\right)}\,f\!\left(\frac{w}{m},\frac{v}{n}\right)=\frac{i_1(w,v)}{\frac{1}{mn}\,i_2\!\left(\frac{w}{m},\frac{v}{n}\right)}\cdot\frac{i_1\!\left(\frac{w}{m},\frac{v}{n}\right)}{\frac{1}{mn}\,i_2\!\left(\frac{w}{m^2},\frac{v}{n^2}\right)}\,f\!\left(\frac{w}{m^2},\frac{v}{n^2}\right)=\dots\tag{25}$$
The same logic that has been employed for deriving Eqs. (10)–(13) gives
$$f(w,v)=\frac{H^*(w,v)\,Y(w,v)}{H^*(w,v)\,H(w,v)+\varphi^2},\tag{26}$$
where
$$H(w,v)=\prod_{k=1}^{N}\frac{1}{mn}\,i_2\!\left(\frac{w}{m^k},\frac{v}{n^k}\right),\tag{27}$$
$$Y(w,v)=\prod_{k=0}^{N} i_1\!\left(\frac{w}{m^k},\frac{v}{n^k}\right),\tag{28}$$
and φ is a regularization parameter.
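A corresponding 2-D sketch of Eqs. (26)-(28) (an illustration under the same assumptions as the 1-D sketch of Section 2, with bilinear interpolation used to read the spectra at the scaled frequencies):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def recover_spectrum_2d(I1, I2, m=2.0, n=2.0, N=8, phi2=3e-5):
    # Estimate f(w, v) from two recorded 2-D images per Eq. (26).
    w = np.fft.fftshift(np.fft.fftfreq(I1.shape[0]))
    v = np.fft.fftshift(np.fft.fftfreq(I1.shape[1]))
    W, V = np.meshgrid(w, v, indexing="ij")
    i1 = np.fft.fftshift(np.fft.fft2(I1))
    i2 = np.fft.fftshift(np.fft.fft2(I2))
    c = (np.argmin(np.abs(w)), np.argmin(np.abs(v)))
    i1 = i1 / i1[c]                       # normalize: i1(0, 0) -> 1
    i2 = m * n * i2 / i2[c]               # normalize: i2(0, 0) -> m n

    def at(spec, Wk, Vk):                 # bilinear read-out of a spectrum
        pts = np.stack([Wk.ravel(), Vk.ravel()], axis=-1)
        re = RegularGridInterpolator((w, v), spec.real)(pts)
        im = RegularGridInterpolator((w, v), spec.imag)(pts)
        return (re + 1j * im).reshape(Wk.shape)

    Y = np.ones_like(i1)
    H = np.ones_like(i2)
    for k in range(N + 1):
        Y *= at(i1, W / m ** k, V / n ** k)                # Eq. (28)
        if k >= 1:
            H *= at(i2, W / m ** k, V / n ** k) / (m * n)  # Eq. (27)
    return np.conj(H) * Y / (np.conj(H) * H + phi2)        # Eq. (26)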

4. Numerical simulation of image deconvolution

The true image of an object is simulated on a rectangle 0 ≤ x < 80, 0 ≤ y < 80 by a function

$$F(x,y)=\left[\exp\!\left(-\frac{(x-37)^2}{2}\right)+\exp\!\left(-\frac{(x-43)^2}{2}\right)\right]\left[\exp\!\left(-\frac{(y-37)^2}{2}\right)+\exp\!\left(-\frac{(y-43)^2}{2}\right)\right];\tag{29}$$
see Fig. 2. The true image F(x, y), the recorded images I1 and I2, and the recovered image Frecovered are presented as 256 × 256 pixel bitmaps:

Fig. 2 The true image F(x, y), a recorded image I1, a recorded image I2, and the recovered image Frecovered. All these images are negatives.

The images I1 and I2 are calculated as the convolutions of F(x, y) with the following kernels A and B:

$$A\!\left[\frac{x-x_0}{m}\right]=\exp\!\left[-\frac{1}{2}\left(\frac{x-x_0}{3m}\right)^2\right],\tag{30}$$
$$B\!\left[\frac{y-y_0}{n}\right]=\exp\!\left[-\frac{1}{2}\left(\frac{y-y_0}{3n}\right)^2\right],\tag{31}$$
where m = n = 1 for I1 and m = n = 2 for I2. Noise is added to these images (the standard deviation of the noise equals 0.03 of the maximum intensity of each image). The recovered image is the inverse Fourier transform of f(w, v) calculated by Eq. (26). The regularization parameter φ² equals 0.00003. To remove high-frequency components from the recovered image, only the first 20 harmonics are used, and intensities of the recovered image below 50% of its maximum are set to zero.
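A sketch of this simulation (the kernel widths, noise level, and regularization follow the text; the grid spacing and the FFT-based circular convolution are assumptions), using the recover_spectrum_2d sketch of Section 3:

import numpy as np

x = np.linspace(0.0, 80.0, 256, endpoint=False)
Xg, Yg = np.meshgrid(x, x, indexing="ij")
F = ((np.exp(-(Xg - 37) ** 2 / 2) + np.exp(-(Xg - 43) ** 2 / 2)) *
     (np.exp(-(Yg - 37) ** 2 / 2) + np.exp(-(Yg - 43) ** 2 / 2)))   # Eq. (29)

def record(F, m, n, rng):
    # Blur F with the separable Gaussian kernel of Eqs. (30), (31), add noise.
    u = x - x[len(x) // 2]                   # kernel centered on the grid
    K = np.outer(np.exp(-0.5 * (u / (3 * m)) ** 2),
                 np.exp(-0.5 * (u / (3 * n)) ** 2))
    I = np.fft.ifft2(np.fft.fft2(F) * np.fft.fft2(np.fft.ifftshift(K))).real
    return I + 0.03 * I.max() * rng.standard_normal(I.shape)

rng = np.random.default_rng(0)
I1 = record(F, 1, 1, rng)
I2 = record(F, 2, 2, rng)
f = recover_spectrum_2d(I1, I2, m=2.0, n=2.0, N=8, phi2=3e-5)
F_recovered = np.fft.ifft2(np.fft.ifftshift(f)).real

Because the spectra are normalized, F_recovered is obtained up to the constant multiplier noted in Section 2; the post-processing described above (keeping the first 20 harmonics and zeroing intensities below 50% of the maximum) is then applied to it.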

The process of deconvolution is sensitive not only to noise but also to imprecision in the calculation of the kernels. This sensitivity is estimated by changing the number N used in Eqs. (27) and (28). For N = 8, the error in the calculated kernel values at half of their heights is 0.003%; for N = 2, it is 3%. At the same time, the visual presentation of Frecovered for N = 8 is practically the same as for N = 2.

5. Discussion

The proposed method allows finding the true image of an object from its two recorded images expressed as a system of two Fredholm equations. The importance of the method is that it requires neither experimental nor theoretical information regarding the kernels of these equations. The method can be used when other known methods are not applicable.

To use the method, it is necessary to know the relationship between these two kernels; this requirement is easy to satisfy. The method also requires the kernels to be separable functions, which is valid for a wide class of imaging systems, e.g., systems based on Gaussian beams or on 2-D antenna arrays.

The mathematical requirement for convergence of the proposed algorithm is that the derivatives di1(w)/dw and di2(w)/dw be bounded at w = 0. In other words, d[a(w)s(w)]/dw has to be bounded at w = 0. For some kernels a(w), this requirement cannot be satisfied. At the same time, most kernels used in practice, for example sombrero kernels, have derivatives that are bounded and equal to zero at w = 0.

To get a calculated true image close to the theoretical one, fewer than 8 terms in the products of Eqs. (27) and (28) are enough.

The current presentation of the method is limited to a conceptual demonstration. Future research should include development of direct methods for solving Eq. (24) and utilization of the method for real-life imaging systems.

6. Conclusions

The main advantage of the proposed method is that there is no need to know the kernel of an imaging system in advance in order to recover the true image of an object. The method resolves a logical contradiction of the standard deconvolution approach, in which the kernel of an imaging system and an image of an object are obtained under different experimental conditions.

As opposed to known reconstruction techniques, the method yields a recovered image of an object whose resolution is better than the resolution of the recorded images of the object.

The method can be used in scanning microscopy, remote sensing, and beam profiling.

References and links

1. M. Demenikov and A. R. Harvey, "Parametric blind-deconvolution algorithm to remove image artifacts in hybrid imaging systems," Opt. Express 18(17), 18035–18040 (2010).

2. W. Dong, H. Feng, Z. Xu, and Q. Li, "A piecewise local regularized Richardson–Lucy algorithm for remote sensing image deconvolution," Opt. Laser Technol. 43(5), 926–933 (2011).

3. N. Wiener, The Extrapolation, Interpolation and Smoothing of Stationary Time Series (John Wiley & Sons, 1949).

4. W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62(1), 55–59 (1972).

5. L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79, 745–754 (1974).

6. A. N. Tikhonov and V. Y. Arsenin, Methods for Solution of Incorrect Problems (Nauka, 1986).

7. V. A. Gorelik, "Method of recovering the fine structure of a spectrum without measuring the instrument function of the spectrometer," Tech. Phys. 39, 444–446 (1994).

8. V. A. Gorelik and A. V. Yakovenko, "Fine structure extraction without analyser function measurement," J. Electron Spectrosc. Relat. Phenom. 73(1), R1–R3 (1995).

9. R. Raskar, A. Agrawal, and J. Tumblin, "Coded exposure photography: motion deblurring using fluttered shutter," ACM Trans. Graph. 25(3), 795–804 (2006).

10. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, "Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing," ACM Trans. Graph. 26(3), 69 (2007).

11. Y. Y. Schechner and N. Kiryati, "Depth from defocus vs. stereo: how different really are they?" Int. J. Comput. Vis. 39(2), 141–162 (2000).

12. D. Rajan and S. Chaudhuri, "Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations," IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1102–1117 (2003).

13. P. Favaro, S. Soatto, M. Burger, and S. J. Osher, "Shape from defocus via diffusion," IEEE Trans. Pattern Anal. Mach. Intell. 30(3), 518–531 (2008).

14. J. K. Doylend, M. J. R. Heck, J. T. Bovington, J. D. Peters, L. A. Coldren, and J. E. Bowers, "Two-dimensional free-space beam steering with an optical phased array on silicon-on-insulator," Opt. Express 19(22), 21595–21604 (2011).

15. A. R. Vasishtha, Complex Analysis, 11th ed. (Krishna Prakashan Media, 2010).

16. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge University Press, 1992), Chap. 13.
