
Parametric blind-deconvolution algorithm to remove image artifacts in hybrid imaging systems


Abstract

Hybrid imaging systems employing cubic phase modulation in the pupil plane enable a significantly increased depth of field, but artifacts in the recovered images are a major problem. We present a parametric blind-deconvolution algorithm, based on minimization of the high-frequency content of the restored image, that enables recovery of artifact-free images for a wide range of defocus. We show that the algorithm enables robust matching of the image-recovery kernel to the optical point-spread function and hence, for the first time, optimally low noise levels in recovered images.

©2010 Optical Society of America

1. Introduction

Hybrid imaging systems, in which a cubic phase-modulation function is introduced in the pupil plane, have been widely reported as a means for increasing depth of field [1–3]. Because the modulation-transfer function (MTF) in these systems is approximately invariant with defocus, image restoration is possible for an increased range of defocus with a single kernel [1]. We have recently shown, however, that strong variations of the phase-transfer function (PTF) with defocus [4] lead to severe artifacts in the restored image due to phase mismatch in the image recovery [5]. The resultant phase modulation of the overall system-transfer function causes the image artifacts to resemble image replications [5], in a similar, but more complex, analogue to the sidebands introduced by phase modulation of a narrow-band, time-domain telecommunications carrier. These artifacts are in addition to ringing due to regularization and the boundary problems associated with deconvolution using the discrete Fourier transform. Ringing artifacts can be reduced by iterative techniques [6] and boundary artifacts can be reduced by implementing special boundary conditions [7–9].

We have previously highlighted the possibility of deducing image defocus by characterization of image-replication artifacts, and hence of removing the phase modulation of the system-transfer function to enable artifact-free image restoration [5]. Determination of the optimal restoration kernel from the blurring characteristics of the recorded or restored image is known as parametric blind deconvolution. Algorithms for parametric blind deconvolution involve optimization of the restoration kernel against an image metric, such as generalized cross-validation (GCV) [10,11]. To our knowledge, no techniques have previously been reported for the reduction of artifacts in hybrid imaging. We show in Section 2 that the correct restoration-kernel defocus can be determined by minimization of a simple metric that is proportional to the variance of the high-frequency content of the image. The simplicity of this metric offers greater computational efficiency than GCV. In Section 3 we assess the performance of the algorithm using numerical data, and we conclude in Section 4.

2. Novel parametric blind-deconvolution algorithm to remove image artifacts

When the defocus assumed in the calculation of the restoration kernel for an imaging system with a cubic phase-modulated pupil is different from the actual optical defocus, restored images exhibit replication-like artifacts with a high-pass-filter characteristic [5]. These high-pass-filtered replications increase the high-frequency content of the image, and the optimal value of the defocus, $W_{20}^{r}$, used to determine the restoration kernel can therefore be estimated by minimizing a parameter proportional to the variance of the high-frequency content of the restored image as a function of this defocus. This estimate of the optimal value of $W_{20}^{r}$, which we refer to as $\tilde{W}_{20}^{r}$, can be written as

$$\tilde{W}_{20}^{r} = \underset{W_{20}^{r}}{\operatorname{Argmin}}\left\{ \nabla\!\left[ i\!\left(W_{20}^{r}\right) \right] \right\}, \qquad (1)$$
where $i(W_{20}^{r})$ is the restored image in the spatial domain and $\nabla$ is a cost function that is proportional to the variance of the high-frequency content. If a regularized Wiener filter is used for image restoration, Eq. (1) can be written as
$$\tilde{W}_{20}^{r} = \underset{W_{20}^{r}}{\operatorname{Argmin}}\left\{ \nabla\!\left[ \mathcal{F}^{-1}\!\left[ \frac{H^{*}(W_{20}^{r})\,H(W_{20})}{\left|H(W_{20}^{r})\right|^{2} + K}\, I_{\mathrm{diff}} \right] \right] \right\}, \qquad (2)$$
where $H(W_{20})$ is the imaging optical transfer function (OTF), $H(W_{20}^{r})$ is the Fourier transform of the restoration kernel, $*$ denotes the complex conjugate, $I_{\mathrm{diff}}$ is the diffraction-limited image in the frequency domain, $K$ is a regularization parameter, which is optimally the inverse of the signal-to-noise ratio, and $\mathcal{F}^{-1}$ is the inverse Fourier transform.
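The following sketch illustrates how a restoration of the form of Eq. (2) might be simulated with NumPy. The square-aperture pupil model, the sampling and padding parameters, and the function names (`cubic_phase_otf`, `wiener_restore`) are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def cubic_phase_otf(n_pupil, n_fft, alpha, w20):
    """Approximate OTF of a square-aperture hybrid system with a cubic phase mask.
    alpha and w20 are the cubic-phase amplitude and defocus coefficient in waves."""
    x = np.linspace(-1.0, 1.0, n_pupil)              # normalised pupil coordinates
    X, Y = np.meshgrid(x, x)
    phase = 2j * np.pi * (alpha * (X**3 + Y**3) + w20 * (X**2 + Y**2))
    pupil = np.zeros((n_fft, n_fft), dtype=complex)
    pupil[:n_pupil, :n_pupil] = np.exp(phase)        # zero-padded generalised pupil
    psf = np.abs(np.fft.fft2(pupil))**2              # incoherent PSF
    otf = np.fft.fft2(psf)                           # OTF, zero frequency at [0, 0]
    return otf / otf[0, 0]

def wiener_restore(image, otf_true, otf_restore, K):
    """Simulate imaging with otf_true (defocus W20) and restore with a regularised
    Wiener filter built from otf_restore (assumed defocus W20^r), as in Eq. (2).
    `image` is the diffraction-limited scene and must be n_fft x n_fft."""
    I_diff = np.fft.fft2(image)                      # diffraction-limited spectrum
    numerator = np.conj(otf_restore) * otf_true
    restored = np.fft.ifft2(numerator / (np.abs(otf_restore)**2 + K) * I_diff)
    return np.real(restored)
```

With these helpers, the restored image of Eq. (2) for a trial restoration defocus would be, for example, `wiener_restore(scene, cubic_phase_otf(64, 256, 5.0, w20_true), cubic_phase_otf(64, 256, 5.0, w20r), K)`.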

The aim of the optimization is to find the value of the restoration-kernel defocus, $W_{20}^{r}$, that minimizes the argument of Eq. (2) and is hence equal to the optical defocus, $W_{20}$, yielding artifact-free images. The variance of the high-frequency content of an image can be readily obtained with a discrete wavelet transform (DWT) [12]. A DWT with a wavelet filter of length $l$ and decomposition depth $d_{\max}$ gives, for each level $d = 1, \ldots, d_{\max}$, the sub-bands $LH_{l,d}(i)$ (low-high, or vertical), $HL_{l,d}(i)$ (high-low, or horizontal) and $HH_{l,d}(i)$ (high-high, or diagonal) [13]. As an example, the cameraman image and its DWT with a two-level decomposition using a Daubechies filter with $l = 2$, i.e. a Haar filter [13], are shown in Figs. 1(a) and 1(b), respectively. A schematic illustration of the two-level DWT is shown in Fig. 1(c).
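As an illustration of the decomposition of Fig. 1, a two-level Haar DWT can be computed with PyWavelets; the random array below is merely a stand-in for the cameraman image, and the mapping of PyWavelets' detail-band labels onto the LH/HL notation above depends on convention.

```python
import numpy as np
import pywt

# Two-level 2-D DWT with the Haar (Daubechies, l = 2) filter, as in Fig. 1(b, c).
# wavedec2 returns the coarsest approximation band first, followed by one tuple of
# (horizontal, vertical, diagonal) detail bands per level, finest level last.
img = np.random.rand(256, 256)                     # stand-in for the cameraman image
coeffs = pywt.wavedec2(img, wavelet='haar', level=2)
approx2, (h2, v2, d2), (h1, v1, d1) = coeffs       # level-2 then level-1 detail bands
```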

Fig. 1 (a) Cameraman image. (b) DWT of the cameraman image using a Daubechies filter with $l$ = 2 and (c) schematic illustration of a 2-level DWT.

The variance of the high-frequency content of an image, and specifically the variance of white Gaussian noise, $\sigma_n^2$, in an image, can be robustly estimated from the median absolute deviation (MAD) of the detail sub-bands of a one-level DWT [12]. As the cost function, $\nabla$, we propose here the sum of the MADs of the high-pass sub-bands:

$$\nabla\!\left(i(W_{20}^{r})\right) = \operatorname{MAD}\!\left\{ HL_{2,1}\!\left(G\!\left(i(W_{20}^{r})\right)\right) \right\} + \operatorname{MAD}\!\left\{ LH_{2,1}\!\left(G\!\left(i(W_{20}^{r})\right)\right) \right\}, \qquad (3)$$
where $G$ is a Gaussian filter used to suppress noise in the image [14]. We suppress noise to account for non-optimal regularization in the Wiener filter, for example when the SNR is estimated incorrectly. The optimal Gaussian filter may be determined by the degree of noise suppression sought [14]; however, the minimization of the MAD metric is relatively insensitive to the noise level, and hence to the filter size, so for simplicity we choose a relatively large Gaussian filter with a fixed dimension of 20×20, which provides a high level of noise suppression. A simple one-level wavelet decomposition of the restored image in Eq. (3) is performed using a Haar filter, which is the optimal filter for enhancing details in highly blurred images [13] such as are obtained by applying Gaussian smoothing. The diagonal band $HH_{2,1}(i)$ has little effect and so, for simplicity, is not included in the definition of $\nabla$. An alternative to using the Gaussian filter for low-pass filtering would be to increase the decomposition depth [13]; however, the Gaussian pre-filter provides two important advantages: (1) the image is not down-scaled and therefore high-frequency artifact details are retained, and (2) it enables application to images that are too small for wavelet decomposition to be applied efficiently. Combining Eqs. (3) and (2) yields the optimal image-recovery defocus as given in Eq. (4). We will refer to the argument inside the Argmin as the Median Absolute Deviation Sum (MADS), i.e. $\tilde{W}_{20}^{r} = \operatorname{Argmin}_{W_{20}^{r}}\{\mathrm{MADS}\}$. We now discuss the performance of the parametric blind-deconvolution algorithm employing this new MADS metric, with specific attention to robustness to noise and image content, and how well it enables the recovery of artifact-free images.

$$\tilde{W}_{20}^{r} = \underset{W_{20}^{r}}{\operatorname{Argmin}}\left\{ \operatorname{MAD}\!\left[ HL_{2,1}\!\left\{ G\!\left( \mathcal{F}^{-1}\!\left[ \frac{H^{*}(W_{20}^{r})\,H(W_{20})}{\left|H(W_{20}^{r})\right|^{2} + K}\, I_{\mathrm{diff}} \right] \right) \right\} \right] + \operatorname{MAD}\!\left[ LH_{2,1}\!\left\{ G\!\left( \mathcal{F}^{-1}\!\left[ \frac{H^{*}(W_{20}^{r})\,H(W_{20})}{\left|H(W_{20}^{r})\right|^{2} + K}\, I_{\mathrm{diff}} \right] \right) \right\} \right] \right\} \qquad (4)$$
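A minimal sketch of the MADS metric of Eqs. (3)–(4), assuming SciPy and PyWavelets are available; the Gaussian pre-filter is parameterised here by a standard deviation rather than the fixed 20×20 kernel of the text, and the function names are illustrative.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def mad(band):
    """Median absolute deviation of a sub-band (MAD/0.6745 estimates the standard
    deviation of white Gaussian noise [12])."""
    band = np.ravel(band)
    return np.median(np.abs(band - np.median(band)))

def mads(restored, gauss_sigma=3.0):
    """Median Absolute Deviation Sum, Eq. (3): Gaussian pre-filter G, one-level
    Haar DWT, then the sum of the MADs of the two non-diagonal detail bands."""
    smoothed = gaussian_filter(restored, sigma=gauss_sigma)   # G in Eq. (3)
    _, (detail_h, detail_v, _) = pywt.dwt2(smoothed, 'haar')  # diagonal band discarded
    return mad(detail_h) + mad(detail_v)
```

Because Eq. (3) sums the MADs of both non-diagonal detail bands, the particular HL/LH labelling convention of the wavelet library does not affect the value of the metric.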

3. Performance of the algorithm

As a demonstration of the performance of the algorithm, we present simulations of a typical hybrid imaging system for the input scene shown in Fig. 1(a) and a cubic phase function with a phase-modulation amplitude of α = 5λ. The results presented are, however, representative of a wider range of images and values of α. $\tilde{W}_{20}^{r}$ is calculated using Eq. (4) for simulations of the following five arbitrary and representative values of optical defocus: $W_{20}$ = 0.31575λ, 1.39658λ, 1.80218λ, 2.76166λ and 4.30275λ. Equivalent results may be obtained for negative defocus. For each defocused image, white Gaussian noise is added such that the signal-to-noise ratio (SNR) is equal for all defocused images, where the SNR is defined as the ratio of the dynamic range of the image to the standard deviation of the noise. The data in Fig. 2 show the variation of MADS for the five values of $W_{20}$ as $W_{20}^{r}$ is varied between 0 and 5λ for SNRs of 80 dB and 40 dB. The estimates of the optimal values of $W_{20}^{r}$, that is $\tilde{W}_{20}^{r}$, correspond to the minima of these curves. It can be seen that in all cases $\tilde{W}_{20}^{r}$ is approximately equal to the optical defocus $W_{20}$: the mean error, $\Delta = \langle|W_{20} - \tilde{W}_{20}^{r}|\rangle$, for the five curves is 0.011λ for SNR = 80 dB and 0.089λ for SNR = 40 dB. Although the variation of MADS with $W_{20} - W_{20}^{r}$ is smooth, there can be local minima, and a reasonable starting estimate of $W_{20}$ is desirable for reliable convergence of a search procedure.
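A coarse grid scan over $W_{20}^{r}$, of the kind used to produce the curves in Fig. 2, might look as follows; here `recorded` is the simulated noisy defocused image, `restore` is a hypothetical wrapper applying the Wiener restoration of Eq. (2) for a trial $W_{20}^{r}$, and `mads` is the metric sketched after Eq. (4).

```python
import numpy as np

# Coarse scan of MADS over candidate restoration-defocus values (in waves).
candidates = np.linspace(0.0, 5.0, 101)
scores = np.array([mads(restore(recorded, w20r)) for w20r in candidates])
w20r_hat = candidates[np.argmin(scores)]    # estimate of the optical defocus W20
# MADS can have local minima, so the coarse scan (or prior knowledge of the likely
# defocus) also supplies a starting bracket for the local search used below.
```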

Fig. 2 Normalized variation of MADS applied to the cameraman image with 0 ≤ $W_{20}^{r}$ ≤ 5λ for (a) SNR = 80 dB and (b) SNR = 40 dB, for $W_{20}$ = 0.31575λ, 1.39658λ, 1.80218λ, 2.76166λ and 4.30275λ.

We now consider the reliability of the algorithm when applied to a range of image contents, SNRs and phase-modulation amplitudes α. The data in Fig. 3 illustrate typical variations of Δ with SNR: these are shown for each of nine images (the eight images in Fig. 4 in addition to Fig. 1(a)), averaged over the five arbitrary values of $W_{20}$ enumerated above. The solid black line shows the average of these nine curves. For each $W_{20}$, $\tilde{W}_{20}^{r}$ was found using a Fibonacci search procedure. The sensitivity of Δ to the phase-function amplitude, α, is illustrated in Fig. 5 for an arbitrary image (in this case the cameraman image) for SNRs of 40, 60 and 80 dB.
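The Fibonacci search used to locate the minimum of MADS can be sketched generically as below; this is a standard unimodal 1-D minimizer rather than the authors' code, and `mads_of_defocus` is a hypothetical wrapper that restores the recorded image for a trial $W_{20}^{r}$ and returns MADS.

```python
def fibonacci_search(f, a, b, n_evals=20):
    """Fibonacci search for the minimum of a function f assumed unimodal on [a, b],
    using n_evals function evaluations."""
    fib = [1, 1]
    while len(fib) < n_evals + 1:
        fib.append(fib[-1] + fib[-2])
    x1 = a + (fib[n_evals - 2] / fib[n_evals]) * (b - a)
    x2 = a + (fib[n_evals - 1] / fib[n_evals]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n_evals - 1):
        if f1 > f2:                         # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (fib[n_evals - k - 1] / fib[n_evals - k]) * (b - a)
            f2 = f(x2)
        else:                               # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (fib[n_evals - k - 2] / fib[n_evals - k]) * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# e.g. w20r_hat = fibonacci_search(mads_of_defocus, 0.0, 5.0)   # bracket 0-5 waves
```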

Fig. 3 Average of $|W_{20} - \tilde{W}_{20}^{r}|$ for the five arbitrary values of $W_{20}$ and nine images as a function of SNR. The solid black curve shows the average of the nine image-specific curves.

Fig. 4 Images of (a) boat, (b) Lena, (c) man, (d) mandrill, (e) plastic, (f) spoke target, (g) straws and (h) US Air Force test chart.

Fig. 5 Average of $|W_{20} - \tilde{W}_{20}^{r}|$ for the five arbitrary values of $W_{20}$ and the cameraman image as a function of the amplitude, α, of the pupil-plane phase function.

The data in Fig. 3 indicate that the parametric blind deconvolution enables suppression of artifacts to levels that are not visible to the eye: for SNR > 30 dB, Δ is less than 0.3λ, and at these small defocus deviations the artifacts are sufficiently suppressed that they cannot be perceived in high-SNR images; for lower SNRs, although Δ, and hence the magnitude of the artifacts, increases significantly, the artifacts are masked by the higher noise levels and are hence invisible to the eye.

The efficiency with which the algorithm enables removal of artifacts for varying α and SNR will now be discussed using the data in Fig. 5. It is important to note that, although it is common to consider only large values of α so as to enable image recovery with a single kernel (for example, α = 15λ is proposed for mitigation of $W_{20}$ = 5λ in the seminal paper on hybrid imaging [1]), the corollary of a large α is a large reduction in the SNR of the recovered image. Using the algorithm described here, however, it is possible to optimize the image-recovery kernel so that Δ ≈ 0, and the optimal value of α in terms of the mean-square error of the recovered image, averaged over the defocus range, is then equal to about half of the maximum defocus [3]; a reduction in α by a factor of about six is therefore possible in comparison with that described in [1]. The algorithm described here thus enables considerably smaller, optimal values of α to be used, with a consequent maximization of image SNR. It can be seen from Fig. 5 that above a certain threshold value of α, Δ is approximately independent of α. This threshold is approximately α = 2λ, α = 2.5λ and α = 3.5λ for SNRs of 80, 60 and 40 dB, respectively, showing that at these SNRs artifacts can be effectively removed (since Δ is sufficiently small) for the described system with 0 ≤ $W_{20}$ ≤ 4.3λ. Hence, when SNR = 80 dB, the optimal value of α ≈ $W_{20,\max}$/2 proposed in [3] can be attained, but for lower SNRs a modest increase in α is necessary for the algorithm to determine $W_{20}$ with sufficient accuracy for artifacts to be effectively suppressed. This range of $W_{20}$ covers most potential applications of hybrid imaging, and the proposed blind-deconvolution algorithm therefore offers a new capability for the near-optimal application of hybrid imaging in terms of recovering images that are free of artifacts and have near-optimally high SNRs.

4. Conclusions

We have described a novel parametric blind-deconvolution algorithm that enables the avoidance of the artifacts that beset images recovered from hybrid imaging systems employing a cubic phase mask. We have shown that this algorithm enables a reduction in the strength of the phase function required to mitigate a given defocus and hence maximizes the SNR of hybrid imaging. The application of the algorithm to images with constant or smoothly varying defocus (such as arises from aberrations such as simple defocus, astigmatism or field curvature) is straightforward; however, imaging in the presence of sharp discontinuities in defocus, such as occur in finite-conjugate imaging of scenes with depth, poses significantly greater inherent challenges for optimal image recovery. Finally, since the algorithm determines the defocus $W_{20}$ with good accuracy, it offers the prospect of passive ranging using hybrid imaging.

Acknowledgement

We would like to thank Scottish Enterprise for funding, Ewan Findlay at STMicroelectronics for technical support, Gonzalo Muyo for helpful comments and Bertrand Lucotte for inspiration.

References and links

1. E. R. Dowski, Jr. and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859–1866 (1995).

2. M. Demenikov, E. Findlay, and A. R. Harvey, “Miniaturization of zoom lenses with a single moving element,” Opt. Express 17(8), 6118–6127 (2009).

3. T. Vettenburg, N. Bustin, and A. R. Harvey, “Fidelity optimization for aberration-tolerant hybrid imaging systems,” Opt. Express 18(9), 9220–9228 (2010).

4. G. Muyo and A. R. Harvey, “Decomposition of the optical transfer function: wavefront coding imaging systems,” Opt. Lett. 30(20), 2715–2717 (2005).

5. M. Demenikov and A. R. Harvey, “Image artifacts in hybrid imaging systems with a cubic phase mask,” Opt. Express 18(8), 8207–8212 (2010).

6. J. van der Gracht, J. Nagy, V. Pauca, and R. Plemmons, “Iterative restoration of wavefront coded imagery for focus invariance,” in Integrated Computational Imaging Systems, OSA Technical Digest Series (2001).

7. M. Donatelli and S. Serra-Capizzano, “Anti-reflective boundary conditions and re-blurring,” Inverse Probl. 21(1), 169–182 (2005).

8. J. G. Nagy, M. K. Ng, and L. Perrone, “Kronecker product approximations for image restoration with reflexive boundary conditions,” SIAM J. Matrix Anal. Appl. 25(3), 829–841 (2003).

9. Q. Liu, T. Zhao, W. Zhang, and F. Yu, “Image restoration based on generalized minimal residual methods with antireflective boundary conditions in a wavefront coding system,” Opt. Eng. 47(12), 127005 (2008).

10. S. J. Reeves and R. M. Mersereau, “Blur identification by the method of generalized cross-validation,” IEEE Trans. Image Process. 1(3), 301–311 (1992).

11. N. Nguyen, P. Milanfar, and G. Golub, “Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement,” IEEE Trans. Image Process. 10(9), 1299–1308 (2001).

12. I. M. Johnstone and B. W. Silverman, “Wavelet threshold estimators for data with correlated noise,” J. R. Stat. Soc. B 59(2), 319–351 (1997).

13. J. Kautsky, J. Flusser, B. Zitová, and S. Simberová, “A new wavelet-based measure of image focus,” Pattern Recognit. Lett. 23(14), 1785–1794 (2002).

14. G. Deng and L. W. Cahill, “An adaptive Gaussian filter for noise reduction and edge detection,” in Nuclear Science Symposium and Medical Imaging Conference, IEEE Conference Record, 3, 1615–1619 (1993).
