Optica Publishing Group

Restoration of soft x-ray laser images of nanostructures

Open Access

Abstract

We present advanced techniques for the restoration of images obtained by soft x-ray laser microscopy. We show two methods: one is based on adaptive thresholding, while the other uses local Wiener filtering in the wavelet domain to achieve high noise gains. These wavelet based denoising techniques are improved using spatial noise modeling. The accurate noise model is built from two consecutive images of the object and their respective background images. To our knowledge, the results of both proposed approaches outperform competing methods. The analysis is robust enough to enable image acquisition with significantly lower exposure times, which is critical for samples that are sensitive to radiation damage, as is the case for biological samples imaged by SXR microscopy.

© 2014 Optical Society of America

1. Introduction

Soft X-Ray (SXR) full field microscopy extends the spatial resolution of optical microscopes to the tens of nm range [1]. SXR full field microscopes have been demonstrated using illumination from synchrotrons, incoherent SXR sources, SXR lasers and SXR coherent harmonic sources [1–4]. In most cases, image acquisition ranges from a single shot to tens of seconds or minutes [2,5]. The quality of the images, in terms of resolution and contrast, obtained with SXR microscopes depends on the characteristics of the illumination, the optics and the detection systems. Generally the quality of the images is high, but they still contain a significant level of noise. Part of the noise can be removed using standard background subtraction during the experiment, but this process does not necessarily remove all noise.

The traditional approach to image denoising uses Wiener filtering. More recently, non-linear denoising methods have been developed [6]. The key to these novel approaches is the sparse representation of the observed phenomenon, i.e. the useful information in the image is linearly transformed into a small set of large coefficients, while the noise is uniformly spread. Wavelet-based denoising has been applied to a wide range of one-dimensional (1D) signals, as well as to multidimensional data, including two-dimensional (2D) images. The noise is suppressed in the wavelet transform domain by application of some non-linear operation, such as soft or hard thresholding [7]. Some of the attempts to increase the efficiency of denoising methods were in the direction of multiple representations [8], or adaptive wavelets that provide a sparser representation [9–12]. Some initial attempts at wavelet-based x-ray image restoration were given in [13]. A denoising and deblurring method based on the fast wavelet transform is given in [14].

In this paper, we present a novel space-adaptive noise modeling technique that provides for more accurate non-linear processing. We show a methodology to model and remove the noise in SXR images obtained using a compact λ = 46.9 nm microscope [15]. An accurate model of the noise is developed from the analysis of two consecutive images and corresponding background images. An initial clean image model is developed using classic wavelet denoising techniques, then both models of the clean image and noise are used in an original application of empirical Wiener filtering in the wavelet domain. The results in the image quality and the noise gain are superior to competing state-of-the-art methods, as shown in Section 6.

The image denoising and reconstruction methods presented in this paper are very robust, and can be used to reduce the image exposure, which is critical when using SXR microscopes to image low contrast samples, or those that are sensitive to dose, such as biological samples [16]. The methods can also be used to improve quality on images taken with a single laser shot [17].

The paper is organized as follows. In Section 2 we briefly describe the SXR microscope used to obtain the images. Modeling of the spatially varying noise is presented in Section 3. Classic and adaptive threshold wavelet modeling is discussed in Section 4, and empirical Wiener filtering in the wavelet domain is presented in Section 5. The results are summarized and numerically compared to competitive methods in Section 6.

2. Full field SXR imaging with a compact λ = 46.9 nm microscope

The images that are used to demonstrate the denoising methods were acquired with a compact soft x-ray microscope operating in transmission configuration, previously described in [15]. The microscope used illumination from a table-top SXR laser whose output consists of pulses of ~10 μJ energy (2.4 × 1012 photons/pulse) at a wavelength of 46.9 nm. A 13% efficiency Schwarzschild condenser was used to illuminate the object. The images were formed on a charge-coupled device (CCD) detector using a 0.12 NA Fresnel zone plate objective. The CCD camera is a back illuminated 1024 × 1024 array of 24 μm × 24 μm pixels. The camera was positioned 0.99 m away from the zone plate, producing a magnification of 470×. In this configuration the microscope’s spatial resolution is better than 200 nm [15]. The test object consisted of circular patterns of small openings, with the smallest features, 100 nm in size, located in the inner circle of the pattern. The test object and zone plate were fabricated by electron beam lithography to be free-standing [18]. The images were acquired with 10 laser shots, with the laser operating at 1 Hz repetition rate. To build the noise model (Section 3) and to implement the denoising methods described in Sections 4 and 5, two consecutive images of the test object and corresponding background images were acquired under the same conditions (Fig. 1).


Fig. 1 Two images of the same object, yinp1 and yinp2. Salt & pepper background noise is clearly visible, exhibiting the same spatial distribution in both images. The object represents a wheel test pattern. The width of the features in the inner ring is 100 nm.


3. Noise theoretical framework and modeling

The noise in the SXR images is mainly detector noise, which consists of three components: readout, dark current and photon (or shot) noise. The readout noise is mostly produced by the electronics, and it is random, Gaussian and identically distributed. The dark current noise is produced by the pixel sensors themselves. It is random, but its level exhibits a deterministic pattern with a salt & pepper spatial distribution, due to imperfect pixel manufacturing. The pattern is very stable and does not change in time. The dark noise level is proportional to the integration (“exposure”) time and to the sensor's temperature. Depending on the temperature distribution, it has an approximately polynomial spatial structure. The dark noise and the photon noise are Poisson distributed, with levels proportional to the square root of each pixel’s magnitude. The photon noise refers to the inherent natural variation of the incident photon flux.

The standard process to eliminate the dark current in the image is to perform a subtraction of the background: y = yinp − b, where yinp is the input image. The background image b is taken in the dark, keeping the integration time the same as the image exposure time. The subtraction cancels out the deterministic dark noise pattern and the salt & pepper component, but actually doubles the variance of the dark noise, which corresponds to a √2 increase of its magnitude.

After the background subtraction, we assume additive residual noise model:

y(m,n)=x(m,n)+w(m,n),
where x is the underlying clean image we want to restore, w is zero mean noise, and y is the background subtracted image; m and n denote spatial indices. Although the noise w has Gaussian and Poisson components, it can be reliably approximated by a zero mean Gaussian model.

In this paper, we propose to use two images of the same object for accurate noise modeling. The resulting noise model will be used in all proposed denoising procedures.

3.1 Preprocessing

Due to errors in pixel sensors, some of the values are permanently low (“cold” pixels), or close to the saturation level (“hot” pixels). Moreover, cosmic rays can produce some random spikes. Their visual appearance is salt-&-pepper-like, and our first step is to remove them:

  • 1. We estimate the inverse of the cumulative distribution function (CDF) of {y}. In practice, it was calculated by sorting all elements of Y = {y(m,n)}.
  • 2. We find the ± 3.3σ bounds ylow and yhigh, which correspond to the CDF range (p, 1–p), where p = 1 − erf(3.3/√2) ≈ 0.001.
  • 3. We search for the m and n indices of pixels that do not satisfy the condition ylow < y(m,n) < yhigh, and replace them by the median of the neighboring elements: y(m,n) = median{y(k,j) | k = m−1, m, m+1; j = n−1, n, n+1}.

This procedure removes most of the residual salt & peppers noise, while the rest of the image remains intact.
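The three preprocessing steps above can be sketched numerically as follows. This is a minimal NumPy illustration with a hypothetical function name, not the MATLAB implementation referenced in Section 6:

```python
import numpy as np

def remove_outlier_pixels(y, p=0.001):
    """Replace pixels outside the empirical (p, 1-p) CDF bounds by the
    median of their 3x3 neighborhood (steps 1-3 above)."""
    flat = np.sort(y.ravel())                      # empirical inverse CDF
    y_low = flat[int(p * flat.size)]
    y_high = flat[int((1 - p) * flat.size) - 1]
    out = y.copy()
    rows, cols = np.where((y <= y_low) | (y >= y_high))
    for m, n in zip(rows, cols):
        # 3x3 neighborhood, clipped at the image borders
        patch = y[max(m - 1, 0):m + 2, max(n - 1, 0):n + 2]
        out[m, n] = np.median(patch)
    return out
```

Because the median of the 3×3 patch is insensitive to the single spike it contains, hot and cold pixels are replaced by plausible local values while the rest of the image stays intact.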

If we have two different images of the same object with the same exposure times, yinp1 and yinp2, accompanied by two different background shots b1 and b2, the two background-corrected images are y1 = yinp1 − b1 and y2 = yinp2 − b2. A simple averaging:

y = (y1 + y2)/2,
provides an increase of the signal to noise ratio (SNR) by a factor of √2 (Fig. 2).


Fig. 2 From left to right: the background corrected images (y1 and y2), and their average y. The deterministic noise pattern is canceled out by subtraction, and the residual noise is reduced by preprocessing and averaging.


Next we introduce one extra equalization step to account for the difference in the exposure, mainly due to variations of the laser pulse energy. We use a simple linear model to compensate for the eventual differences: y1new = C1y1 + C0. We minimize the error function E(C) = (C1y1′ + C0 − y2′)² by solving the system of linear equations φTφC* = φTy2′, where C = [C1 C0]T is a vector of two constants and the matrix φ is defined as φ = [y1′ 1]. The accent (′) denotes reshaping of matrices into column vectors, 1 is a vector of all ones, and the superscript ·T denotes the transpose. C* is the minimum least squares (LS) solution that corresponds to the compensation model. To keep the notation readable, let y1 denote the already compensated image (the subscript new is dropped). Moreover, the optimum constants are denoted without the asterisk: C1, C0.
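The exposure equalization step is an ordinary least-squares fit and can be sketched as follows (NumPy, illustrative names; `np.linalg.lstsq` solves the same normal equations):

```python
import numpy as np

def equalize_exposure(y1, y2):
    """Least-squares fit of the linear compensation model
    y1_new = C1*y1 + C0 so that y1_new best matches y2."""
    # phi = [y1' 1]: each image reshaped into a column vector
    phi = np.column_stack([y1.ravel(), np.ones(y1.size)])
    C, *_ = np.linalg.lstsq(phi, y2.ravel(), rcond=None)
    C1, C0 = C
    return C1 * y1 + C0, C1, C0
```

For two exposures related by a pure gain and offset, the fit recovers the constants exactly; on real data it compensates the average pulse-energy difference in the least-squares sense.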

The preprocessed and corrected images y1 and y2, as described above, and their average y, obtained using Eq. (2), will be considered as the input to all subsequent denoising procedures described in Sections 4 and 5.

3.2 Total noise level

Depending on whether we have one or two images, we estimate the noise level σw(m,n) in two different ways. If we have only one background subtracted image available, we use wavelet based spatial noise modeling techniques. If instead we have two images available, as the first step we calculate the absolute value of their difference: yd = |y1 − y2|. This operation cancels out the actual image, under the assumption that the two images are exactly the same. What remains is an initial sample estimate of the total image noise level. In yd, the photon noise appears two times: in yinp1 and yinp2, while the dark and readout (or background) noise appears four times: in yinp1, yinp2, b1 and b2. Notice that the same ratio (photon noise + doubled background noise) appears in each background subtracted image y1 and y2, as well as in their average y. Hence, the obtained noise sample estimate is well suited to what we need for noise modeling and removal. The compensated sample estimate yd = |y1 − y2| is shown on the left hand side of Fig. 3.


Fig. 3 Left: sample estimate of the total noise level yd. Center: the resulting total noise level model σ^w(m,n), built from the difference of the two images. Right: the total noise level estimated from a single image, σ^w1(m,n).


Additionally, for better insight into the noise physics, we build the background noise model starting from the sample estimate bd = |b1 − b2|, and subtract the surplus from the total noise model to obtain pure input noise levels, which will be presented in Section 3.3.

Next, we apply wavelet based techniques on yd to build the desired noise model. The wavelet techniques use spatial information to get a more reliable noise model from a single sample estimate. We start from z = yd and calculate the non-decimated 2D wavelet decomposition of the absolute differences. The 2D wavelet transform is realized as a separable generalization of the 1D wavelet transform, which is implemented using a set of linear filters [19]. For simplicity, here we give only the reconstruction expression:

z(m,n) = Σj=1,…,J [hj(m,n)Hm,n,j + vj(m,n)Vm,n,j + dj(m,n)Dm,n,j] + aJ(m,n)Am,n,J,
where Ψm,n,j = {Hm,n,j, Vm,n,j, Dm,n,j} are the 2D wavelet functions, ψj(m,n) = {hj(m,n), vj(m,n), dj(m,n)} denote the wavelet detail coefficients, j is the level of decomposition, and J is the chosen maximum decomposition level. aJ and Am,n,J stand for the residual approximation coefficients and the 2D scale functions, respectively. The values {h}, {v} and {d} are obtained by a cascade of row-wise and column-wise filters, and represent the horizontal, vertical and diagonal wavelet coefficients, respectively. In the rest of the paper, we will use the same notation z whenever we use the wavelet transform and the reconstruction expression Eq. (3).

In our case, the maximum decomposition level was set to five, J = 5, and we used Daubechies wavelets with 5 vanishing moments [19]. We estimated the noise level deviation using the first level diagonal wavelet coefficients: σN = median(|d1(m,n)|)/0.6745. The constant 0.6745 is tailored to a Gaussian noise distribution, which is an acceptable assumption for the noise level deviation.
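The robust median-based estimator can be sketched as follows. The `diag_detail` stand-in below uses a normalized Haar-like double difference instead of the db5 filters used in the paper, so it only illustrates the calibration of the 0.6745 constant:

```python
import numpy as np

def noise_std_mad(d1):
    """Robust noise level from first-level diagonal detail coefficients:
    sigma_N = median(|d1|)/0.6745 (calibrated for Gaussian noise)."""
    return np.median(np.abs(d1)) / 0.6745

def diag_detail(z):
    """Crude diagonal detail: a Haar-like double difference along rows
    and columns, normalized so pure Gaussian noise keeps its variance.
    (Assumption: a stand-in for the paper's db5 decomposition.)"""
    return (z[1:, 1:] - z[1:, :-1] - z[:-1, 1:] + z[:-1, :-1]) / 2.0
```

On a pure-noise patch the estimator returns approximately the true standard deviation, while the median makes it insensitive to the sparse large coefficients produced by image edges.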

Now, we apply the soft thresholding method in the wavelet domain [7]:

T = KσN,
tj(m,n) = |ψj(m,n)| − T,        ψ^j(m,n) = sign[ψj(m,n)] (tj(m,n) + |tj(m,n)|)/2,
where T is a fixed threshold value, proportional to σN up to a constant (K = 2.8). The expression Eq. (5) was used for all decomposition levels j = 1,…,J. The reconstructed image y^(m,n) was calculated using Eq. (3) with the thresholded coefficients ψ^j(m,n) = {h^j(m,n), v^j(m,n), d^j(m,n)} from Eq. (5), to obtain the pixel-wise total noise level estimate σ^w(m,n) = y^(m,n)/1.131. The denominator constant is set to match the noise level of a single image. To make our estimate more robust, we estimated the inverse CDF of {σ^w}. Then, we calculated the ± 2.5σ bounds σ^low and σ^high that correspond to the CDF range (p, 1–p) with p = 0.012, and limited the eventual outliers to the closest bounds.

Since we use the averaged image y = (y1 + y2)/2, the actual total noise level estimate must be divided by a factor of √2. The result σ^w(m,n) is shown in the center of Fig. 3. The Poisson square-root-of-magnitude law is clearly visible: its visual appearance resembles the image itself. We used the resulting noise model in all subsequent denoising procedures, described in Sections 4 and 5.
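The soft thresholding rule of Eqs. (4)-(5) reduces to a few lines; this sketch operates on an array of wavelet coefficients at any decomposition level:

```python
import numpy as np

def soft_threshold(psi, T):
    """Soft thresholding, Eqs. (4)-(5): shrink coefficient magnitudes
    by T and set everything below T to zero."""
    t = np.abs(psi) - T
    # (t + |t|)/2 equals max(t, 0), as in Eq. (5)
    return np.sign(psi) * (t + np.abs(t)) / 2.0
```

Coefficients well above the threshold are kept (shrunk by T), while small coefficients, which are dominated by noise, are zeroed.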

3.3 Background noise level

For additional insight into the noise components, we conducted the procedure described above on the background image differences as well. We started from z = bd. The maximum wavelet decomposition level was set to five, J = 5. We used Daubechies wavelets with 4 vanishing moments, and the constant K = 3.3 in Eq. (4). After thresholding using Eq. (5), the reconstruction is given by Eq. (3) and σwb0(m,n) = y^(m,n)/1.131, which is a reliable background noise model with the denominator constant adjusted to the single image noise levels. The results are shown on the left hand side of Fig. 4.


Fig. 4 Left: wavelet based background noise model σwb0. Right: polynomial background noise model σwb.


We excluded a frame of 5% of the edge pixels in σwb0 from further calculations, to avoid the influence of the wavelet transform extension at the edges. To get a simple but reliable background noise level estimate, we calculated the coefficients {pk} of the polynomial noise model:

σ^wb = p5(M×M) + p4(M×N) + p3(N×N) + p2M + p1N + p0·1,
where M and N are matrices containing the normalized spatial coordinates of rows and columns, respectively, 1 is a matrix filled with ones, and the resulting σ^wb is a 2D polynomial model representing the noise level. The symbol × represents element-wise multiplication. To find the vector of unknown coefficients P = [p5 p4 p3 p2 p1 p0]T, we minimize the error function E(P) = (σ^wb0′ − σ^wb(P)′)² by solving the system of linear equations:
ΦTΦP* = ΦTσ^wb0′,
where Φ = [M′×M′ M′×N′ N′×N′ M′ N′ 1], and the accent (′) denotes reshaping of matrices into column vectors. P* is the minimum least squares (LS) solution obtained by solving Eq. (6), and our estimate is σ^wb* = σ^wb(P*). We avoid asterisks in the rest of the text.

Since LS (or minimum L2) regression is known to be sensitive to statistical outliers in the data, prior to solving Eq. (6) we estimated the inverse CDF of σ^wb0. Then, we calculated the ± 3σ bounds σ^low and σ^high that correspond to the CDF range (p, 1–p), with p = 0.003. We enumerated the indices of elements that do not satisfy the condition σ^low < σ^wb0(m,n) < σ^high, and removed the corresponding rows of Φ and σ^wb0′ in Eq. (6), in order to obtain a robust polynomial model of the noise level.

The results of background modeling are shown in Fig. 4. The polynomial model characterizes the dark noise level distribution that is related to small temperature differences across the CCD. The differences are very small, but still noticeable in our model.
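The polynomial surface fit of Eq. (6) can be sketched as an ordinary least-squares problem (NumPy, illustrative names; the outlier trimming described above is omitted for brevity):

```python
import numpy as np

def fit_poly_noise_surface(sigma_wb0):
    """Fit the 2D second-order polynomial noise-level model
    sigma_wb = p5*M*M + p4*M*N + p3*N*N + p2*M + p1*N + p0
    to a noise-level sample by least squares."""
    rows, cols = sigma_wb0.shape
    # normalized spatial coordinates of rows (M) and columns (N)
    M, N = np.meshgrid(np.linspace(-1, 1, rows),
                       np.linspace(-1, 1, cols), indexing="ij")
    m, n, s = M.ravel(), N.ravel(), sigma_wb0.ravel()
    # Phi = [M'xM' M'xN' N'xN' M' N' 1], one monomial per column
    Phi = np.column_stack([m * m, m * n, n * n, m, n, np.ones_like(m)])
    P, *_ = np.linalg.lstsq(Phi, s, rcond=None)
    return (Phi @ P).reshape(rows, cols), P
```

If the input surface actually lies in the model class, the fit reproduces it exactly; on the measured σwb0 it smooths out the residual estimation noise.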

3.4 Justification of the noise model

In Section 3.2 we estimated the total noise level σ^w, and in Section 3.3 we estimated the background noise level σ^wb. Because of the background subtraction, the background random noise component appears two times in the total noise model σ^w.

Here, we estimate a pure input noise model σ^w0 that does not include the background noise surplus. Since σ^w² = σ^w0² + σ^wb², the pure input noise model is σ^w0 = √(σ^w² − σ^wb²). Because we used our compensation model and multiplied one of the images by the constant C1, the actual relation between the noise variances is σ^w² = σ^w0² + C1σ^wb², and the pure input noise estimate is σ^w0 = √(σ^w² − C1σ^wb²). The estimate matches the noise level of the single image, namely the one which was not compensated. The difference under the square root should always be positive, because the left term contains the photon noise and approximately doubled background noise, which is greater than the right term. This was confirmed in our example, although some numerical difficulties can be expected at very low illumination levels. To overcome the eventual problem, the lower bound of the difference σ^w² − C1σ^wb² can be set to σ^wb².
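The pure input noise estimate, including the suggested lower bound, can be written as (a small NumPy sketch with illustrative names):

```python
import numpy as np

def pure_input_noise(sigma_w, sigma_wb, C1):
    """sigma_w0 = sqrt(sigma_w^2 - C1*sigma_wb^2), with the variance
    difference floored at sigma_wb^2, as suggested above, to avoid a
    negative argument at very low illumination levels."""
    diff = np.maximum(sigma_w ** 2 - C1 * sigma_wb ** 2, sigma_wb ** 2)
    return np.sqrt(diff)
```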

The probability density functions (PDF) of the background noise and of the resulting pure input noise model are shown in Fig. 5. In the latter, we can clearly see the Poisson photon noise distribution. The levels and distributions obtained justify our noise modeling approach.


Fig. 5 Left: PDF estimate of the background noise level. Right: PDF estimate of the resulting pure input noise level.


3.5 Total noise level from a single image

Here we consider the case in which only one background subtracted image is available. We decompose the image using the wavelet transform of Eq. (3), with the decomposition level set to one, J = 1, using Daubechies wavelets with 5 vanishing moments. To obtain a robust estimate of the spatial distribution of the noise level σw1(m,n), we apply a 15 × 15 median filter to the absolute values of the diagonal wavelet coefficients d1:

σ^w1(m,n) = medFilt15×15(|d1(m,n)|)/0.6745.

The size of the support of the median filter was chosen as a compromise between accuracy and the ability of spatial adaptation. The level 1 diagonal coefficients contain mostly noise. Due to the central limit theorem, after linear filtering they are more Gaussian than the original noise in z = y1. The image components in d1 correspond to a small number of large Laplacian distributed wavelet coefficients and are filtered out by the median filter: the output mostly represents the noise. Hence, the constant 0.6745 in Eq. (7) is tailored to match the input Gaussian noise distribution, which is a reasonable assumption. The result is shown in Fig. 3 on the right. Basic statistical descriptors are approximately the same for both models: the median and mean of σ^w and σ^w1 differ by ~2% and ~8%, respectively. The result justifies our noise modeling approach. However, the model obtained from a single image is blurred and noisy, so it is preferable to develop the noise model from two images.
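Equation (7) can be sketched with a naive sliding-window median (NumPy, illustrative names; a production implementation would use an optimized median filter rather than this explicit loop):

```python
import numpy as np

def local_noise_map(d1, size=15):
    """Spatially varying noise level, Eq. (7): a size x size median
    filter of |d1|, divided by the Gaussian constant 0.6745.
    Edges are handled by replication padding (an assumption)."""
    r = size // 2
    padded = np.pad(np.abs(d1), r, mode="edge")
    out = np.empty_like(d1, dtype=float)
    for i in range(d1.shape[0]):
        for j in range(d1.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out / 0.6745
```

Fed with detail coefficients that are pure Gaussian noise of a known level, the map recovers that level, on average, across the image.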

4. Fixed and adaptive threshold wavelet denoising

We use the noise model developed in Section 3.2, and the averaged input images from Section 3.1, for image restoration. First, we transformed the background subtracted and averaged image y(m,n) from Eq. (2), described by the additive noise model Eq. (1), into the wavelet domain. The maximum decomposition level was set to five, J = 5. We used the shift invariant (non-decimated) transform. Our choice was Daubechies wavelets with 5 vanishing moments (db5). The threshold was set according to Eq. (4), with K = 2.5, and the average noise level was estimated using σN = mean{σ^w(m,n)}. We applied the soft thresholding method according to Eq. (5), and reconstructed the image using Eq. (3). The resulting image is shown on the left hand side of Fig. 6. The result of the fixed threshold wavelet denoising method is used as an initial image model in Section 5.


Fig. 6 Left: wavelet denoised image using fixed threshold. Right: wavelet denoised image using adaptive thresholds.


Now we transform the background subtracted and averaged image y(m,n) into the wavelet domain using shorter wavelets. This time we use non-decimated symmlets (Daubechies' least asymmetric wavelets) with 3 vanishing moments [19]. The maximum decomposition level was set to five, J = 5.

We apply the soft thresholding method in the wavelet domain, using our spatial noise level model σw (shown in the center part of Fig. 3):

T(m,n) = Kσw(m,n),
tj(m,n) = |ψj(m,n)| − T(m,n),        ψ^j(m,n) = sign[ψj(m,n)] (tj(m,n) + |tj(m,n)|)/2,
where T(m,n) is the adaptive threshold value. The constant was set to K = 2.5, an empirically obtained value adjusted to the noise model. The expressions Eq. (8) and Eq. (9) were used for all decomposition levels j = 1,…,J. The reconstruction y^(m,n) was calculated using the inverse wavelet transform Eq. (3) with the adaptively thresholded coefficients ψ^j(m,n) = {h^j(m,n), v^j(m,n), d^j(m,n)} from Eq. (9), and the result is shown on the right hand side of Fig. 6. The proposed scheme is shown in Fig. 7. Although both results are good, the adaptive thresholds method is more accurate in noise removal, which is corroborated in Section 6.


Fig. 7 Proposed wavelet denoising method using adaptive thresholds. WT denotes wavelet transform, WT−1 denotes its inverse, and indices correspond to choice of wavelets.


5. Empirical Wiener filtering in the wavelet domain

In Sections 3 and 4, we used fixed and adaptive threshold methods to denoise the image. The success of wavelet based methods relies on the sparseness of the useful information in the wavelet domain, and is based on non-linear operations on the wavelet coefficients that reduce the noise. The non-linear operation is usually limited to hard or soft thresholding, where the latter is described by the expressions Eq. (5) and Eq. (9). The constant K in equations Eq. (4) and Eq. (8) is usually adjusted to minimize the risk of damaging the information while maximizing the noise removal, and is always a compromise between the two.

A different approach, proposed in [20], resembles the idea of Wiener filtering, but is applied in the wavelet domain instead of the Fourier domain. If noise N is independent of image X, the original Wiener filter transfer function is H(ω1,ω2)=SXX(ω1,ω2)/(SXX(ω1,ω2)+SNN(ω1,ω2)), where ω1 and ω2 are spatial frequencies, SXX(ω1,ω2) and SNN(ω1,ω2) are power spectra of corresponding image and noise random processes, respectively. Their estimation, in general, is not an easy task.

In [20], the wavelet power spectrum of an image is estimated using an initial image model obtained by fixed threshold wavelet denoising, like the one shown on the left hand side of Fig. 6. The noisy image, as well as the corresponding initial image model, is transformed into the wavelet domain. It is important to notice that the choice of wavelet functions must be different from the one previously used to obtain the initial model. The chosen wavelet functions must be orthogonal, so that the transform is unitary. The noise in [20] was considered white and spatially invariant, thus the wavelet power spectrum was a constant σN². Instead of thresholding, we applied the following operation to the wavelet coefficients:

ψ^j(m,n) = [|ψpj(m,n)|²/(|ψpj(m,n)|² + σN²)] ψj(m,n),
where ψpj(m,n) denotes the wavelet coefficients obtained by the novel wavelet decomposition of the initial image model, and ψj(m,n) are the wavelet coefficients of the noisy image, obtained using exactly the same wavelets. Denoising is achieved using the inverse wavelet transform Eq. (3), and results are shown to be superior to classic, fixed threshold based methods.

We used the initial image model obtained by fixed wavelet thresholding from Section 4. As an addition to [20], in this work we used the spatial total noise model σw(m,n) from Section 3.2 that is shown in the center of Fig. 3. Our assumption is that the noise is white (thus equally distributed across all the wavelet decomposition levels), but spatially non-uniformly distributed, as clearly visible in Fig. 3. It is important to notice that the selected wavelet functions must be shift invariant and derived from an orthogonal template. The diagram of the proposed method is shown in Fig. 8.


Fig. 8 Proposed variant of empirical Wiener denoising in the wavelet domain.


The following operation was applied to the wavelet coefficients:

ψ^j(m,n) = [|ψpj(m,n)|²/(|ψpj(m,n)|² + σw²(m,n))] ψj(m,n),
where ψpj(m,n) denotes the wavelet coefficients obtained by wavelet decomposition WT2 of the initial image model, σw(m,n) is our spatial noise model, and ψj(m,n) are the wavelet coefficients of the noisy image, obtained using exactly the same wavelets. Denoising is achieved using the inverse wavelet transform Eq. (3), and the results are shown in Fig. 9.
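The empirical Wiener operation above is a pointwise gain applied to the noisy coefficients; a sketch with illustrative names:

```python
import numpy as np

def empirical_wiener(psi_noisy, psi_pilot, sigma_w):
    """Wavelet-domain empirical Wiener shrinkage: attenuate each noisy
    coefficient by |psi_p|^2 / (|psi_p|^2 + sigma_w^2), where psi_p
    comes from the initial (pilot) image model and sigma_w(m,n) is the
    spatially varying noise level."""
    p2 = np.abs(psi_pilot) ** 2
    return p2 / (p2 + sigma_w ** 2) * psi_noisy
```

Where the pilot model indicates strong image content the gain approaches 1 and the coefficient passes through; where the pilot is near zero the gain vanishes and the (noise-dominated) coefficient is suppressed.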


Fig. 9 Denoised image using the proposed empirical Wiener denoising in the wavelet domain.


The noise is almost completely removed, while all the image details are preserved and clearly visible. The results are, to our knowledge, superior to state-of-the-art competing methods [21–23]. This is verified in Section 6 on our SXR microscopy examples.

6. Image restoration results

We applied our methods to real data. We did not have an ideal noise free image, which is necessary for the explicit computation of signal-to-noise ratios. Hence, instead of computing SNRs, we estimate the residual noise levels and calculate the corresponding ratios and gains. We compared the results of the proposed methods with each other and with several competing methods.

In Section 3, we used the difference of the two images to obtain a reliable input noise model. Hence, we use a similar method to estimate the residual noise and the actual noise gain. We applied each of the denoising methods to the two noisy images of the same object separately. Then, a residual noise model is built from the distance between the two denoised estimates. Distance means any metric function, such as the Euclidean or Manhattan distance. Under the assumption that the underlying clean image is well restored and equally preserved in both denoised estimates, the image part is canceled out, and the distance, derived from the difference, approximates the residual noise.

Hence, our first approach is to calculate the distance between the two denoised images:

d^m = mean(|y^1 − y^2|)/1.131,
where y^1 and y^2 denote the sets of all pixels of the two denoised images. We used a robust estimator based on the mean absolute deviation. The denominator is adjusted to match the single image noise magnitude. As a reference, we calculated the distance between the input images, and hence the noise gain:

dm = mean(|y1 − y2|)/1.131,        ge = 20 log10(dm/d^m).
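The distance and gain measures above can be sketched as follows (NumPy, illustrative names):

```python
import numpy as np

def noise_gain(y1, y2, y1_hat, y2_hat):
    """Residual-noise distance before and after denoising, and the
    resulting gain ge = 20*log10(dm/d^m), in dB. The 1.131 denominator
    scales the mean absolute difference to a single-image noise level."""
    d_in = np.mean(np.abs(y1 - y2)) / 1.131      # dm: input images
    d_out = np.mean(np.abs(y1_hat - y2_hat)) / 1.131  # d^m: denoised
    return 20.0 * np.log10(d_in / d_out)
```

For example, a denoiser that exactly halves the residual noise in both shots yields a gain of 20·log10(2) ≈ 6.02 dB.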

It is important to point out that ideal measurements containing no noise would result in completely repeatable observations: the distance between different shots of the same object would be zero. Hence, if we regard our data acquisition system as a combination of the microscope and the subsequent image processing, a better system is the one that produces a lower dispersion of the results, and the noise gain ge is a good criterion for the comparison of competing restoration methods.

In our second approach, to additionally suppress the image residuals from the noise estimate, we conducted a one-level wavelet decomposition of the image differences (J = 1, non-decimated symmlets with 3 vanishing moments, using Eq. (3)), and calculated the distances and gains of the wavelet details d1 rather than of the images y. Due to the nature of the wavelet transform, most of the spatially correlated components are removed from the details. The remaining image components (e.g. on edges) appear as outliers in d1. Since we used a robust distance estimate, the result is less dependent on the image itself, and is more related to the pure noise content.

The results are presented in Table 1. They clearly depend on the residual noise estimation method: the wavelet based measure is biased in favor of wavelet based denoising methods. Still, both measures provide a useful framework for the numerical comparison of the results. We additionally validate the proposed measures on a synthetic image example at the end of this section.


Table 1. Residual noise levels and noise gains (in brackets) for different denoising methods and two residual noise estimators on the real image example*

The first 3-row group in Table 1 presents the results of the competing methods. Fixed threshold wavelet denoising is well known for its high performance, simplicity and numerical efficiency. Portilla’s Bayes Least Squares solution on Gaussian Scale Mixtures in the wavelet domain [21] is cited in [22] and many other sources as the most efficient approach (http://decsai.ugr.es/~javier/denoise/software/index.htm). Furthermore, we used Foi’s point-wise Shape Adaptive DCT [23] for comparison (http://www.cs.tut.fi/~foi/SA-DCT/#ref_software). All competing methods were implemented using MATLAB®, with default parameters, and were conducted using a constant noise model (with the level estimated in Section 4).

The last 2-row group in Table 1 corresponds to the proposed methods. Both methods use the spatial noise model from Section 3.2. They show the best results in the sense of noise gain.

Finally, we check the proposed noise modeling method, the image restoration methods and the proposed assessment methods on a synthetic test image example (Fig. 10). The synthetic gray level image consists of first and second order two-dimensional piecewise polynomials. The added noise has two components: non-uniform Gaussian and square-root-of-magnitude Poisson. The levels are adjusted to approximately match the real world example. Two pseudo-random noise realizations are generated for the construction of two “measurements”. The results are given in Table 2. The exactly calculated SNRs clearly show the advantage of two methods: the pointwise shape adaptive DCT and the proposed Wiener filtering in the wavelet domain. Lower dispersion of the results is sufficiently correlated with higher SNR, which validates the assessment results on real images. The lowest dispersions are achieved for the proposed Wiener filtering in the wavelet domain method.


Fig. 10 Synthetic test image analysis. From left to right: noisy image, exact noise levels (Gaussian + Poisson), estimated noise levels, restored image using Wiener filtering in the wavelet domain.



Table 2. Signal to noise ratios (SNR), residual noise levels and noise gains (in brackets) for different denoising methods and two residual noise estimators on the synthetic image example*

The corresponding MATLAB code for all proposed methods can be found at http://www.fer.unizg.hr/_download/repository/RestaurationSXR.zip.

7. Conclusion

We have demonstrated two approaches to image denoising that are robust and simple to implement, as they only require the acquisition of two consecutive images and their respective background signals under the same exposure conditions, whether in a single acquisition or in multiple acquisitions. In the latter case, the methods have the potential to reduce the exposure time, which is of critical importance in SXR imaging of biological samples. Restoring images using efficient noise removal methods widens the range of applications of SXR microscopy. Based on our analysis, it would be advantageous in SXR imaging to take two short exposures of an object instead of a single longer one, and to use their difference to build the reliable noise model that is necessary for successful image restoration.

A wavelet-based method is used for building the noise model from two images of the same object. The model can be built separately for the background (dark and readout noise), or for the whole image, which contains twice the background noise plus additional photon noise. The latter model is used for denoising. We have shown that the model can also be built from a single image, but it is less reliable than the one obtained from two images of the same object. The spatial noise model is used in both proposed denoising methods.
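The key point is that the object cancels in the difference of the two shots, so a local robust (median-based) estimator on the difference recovers the pointwise noise level. The paper’s full model additionally works on wavelet details and fits a polynomial background surface; the following numpy sketch applies only the robust median rule directly to the difference image, as an assumption-laden simplification:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_noise_model(y1, y2, win=15):
    """Pointwise noise-level map from two shots of the same object.
    The object cancels in d = y1 - y2, so a local robust estimator yields
        sigma_hat(m, n) = medFilt(|d|) / 0.6745 / sqrt(2),
    where 0.6745 converts the median of |Gaussian| to its standard
    deviation and sqrt(2) accounts for differencing two independent shots."""
    d = y1 - y2
    pad = win // 2
    dp = np.pad(np.abs(d), pad, mode='reflect')
    windows = sliding_window_view(dp, (win, win))
    local_med = np.median(windows, axis=(-2, -1))
    return local_med / 0.6745 / np.sqrt(2)
```

The 15×15 window follows the median-filter size used in the text; larger windows give smoother maps at the cost of blurring sharp spatial changes in the noise level.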

The adaptive-threshold wavelet-based method is simple and fast, giving very good results; it is the best among all competitive state-of-the-art methods when the image distance criterion is used. The Wiener filtering in the wavelet domain method achieves the best denoising results when the distance of wavelet details criterion is used. Both methods rely on the accurate spatial noise model derived from two shots of the same object.
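At the heart of the second method, the empirical Wiener rule attenuates each wavelet coefficient by the ratio of estimated signal power to total power, using pilot coefficients from a first (threshold-based) denoising pass and the spatially varying noise map. A minimal sketch of the pointwise gain (names are illustrative):

```python
import numpy as np

def wiener_gain(psi_pilot, psi, sigma_w):
    """Pointwise empirical Wiener shrinkage of a wavelet subband:
    psi_hat = |psi_pilot|^2 / (|psi_pilot|^2 + sigma_w^2) * psi.
    psi_pilot are pilot coefficients from a first denoising pass, used as
    the signal-power estimate; sigma_w may be a scalar or a per-pixel map."""
    p2 = np.abs(psi_pilot) ** 2
    return p2 / (p2 + np.asarray(sigma_w) ** 2) * psi
```

Where the pilot is zero the coefficient is suppressed entirely; where the pilot dominates the noise level the coefficient passes almost unchanged, which is what preserves edges while removing noise in flat regions.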

Acknowledgments

We acknowledge Dr. Georgiy Vaschenko and Dr. Fernando Brizuela for their contributions to the development of the table-top SXR microscopes that captured the images analyzed in this paper. The contributions of Erik Anderson and Weilun Chao from the Center for X-Ray Optics, and of I. Artioukov and A. Vinogradov from the Lebedev Physical Institute, in providing the microscope optics are also acknowledged. This work was supported by the Engineering Research Centers Program of the National Science Foundation under NSF Award Number EEC-0310717, by the University of Zagreb under the project “Methods for sparse modeling and processing of the audio signals, images and videos”, and by the European Community Seventh Framework Programme under grant No. 285939 (ACROSS).

References and links

1. W. Chao, P. Fischer, T. Tyliszczak, S. Rekawa, E. Anderson, and P. Naulleau, “Real space soft x-ray imaging at 10 nm spatial resolution,” Opt. Express 20(9), 9777–9783 (2012).

2. K. H. Lee, S. B. Park, H. Singhal, and C. H. Nam, “Ultrafast direct imaging using a single high harmonic burst,” Opt. Lett. 38(8), 1253–1255 (2013).

3. G. Vaschenko, C. Brewer, F. Brizuela, Y. Wang, M. A. Larotonda, B. M. Luther, M. C. Marconi, J. J. Rocca, C. S. Menoni, E. H. Anderson, W. Chao, B. D. Harteneck, J. A. Liddle, Y. Liu, and D. T. Attwood, “Sub-38 nm resolution tabletop microscopy with 13 nm wavelength laser light,” Opt. Lett. 31(9), 1214–1216 (2006).

4. O. von Hofsten, M. Bertilson, J. Reinspach, A. Holmberg, H. M. Hertz, and U. Vogt, “Sub-25-nm laboratory x-ray microscopy using a compound Fresnel zone plate,” Opt. Lett. 34(17), 2631–2633 (2009).

5. C. A. Brewer, F. Brizuela, P. Wachulak, D. H. Martz, W. Chao, E. H. Anderson, D. T. Attwood, A. V. Vinogradov, I. A. Artyukov, A. G. Ponomareko, V. V. Kondratenko, M. C. Marconi, J. J. Rocca, and C. S. Menoni, “Single-shot extreme ultraviolet laser imaging of nanostructures with wavelength resolution,” Opt. Lett. 33(5), 518–520 (2008).

6. D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994).

7. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Inf. Theory 41(3), 613–627 (1995).

8. H. Choi and R. Baraniuk, “Multiple wavelet basis image denoising using Besov ball projections,” IEEE Signal Process. Lett. 11(9), 717–720 (2004).

9. R. L. Claypoole, Jr., G. M. Davis, W. Sweldens, and R. G. Baraniuk, “Nonlinear wavelet transforms for image coding via lifting,” IEEE Trans. Image Process. 12(12), 1449–1459 (2003).

10. M. Vrankić, D. Seršić, and V. Sučić, “Adaptive 2-D wavelet transform based on the lifting scheme with preserved vanishing moments,” IEEE Trans. Image Process. 19(8), 1987–2004 (2010).

11. D. Seršić, “A realization of wavelet filter bank with adaptive filter parameters,” in Proc. EUSIPCO, Tampere, Finland, pp. 1733–1736 (2000).

12. M. Tomić and D. Seršić, “Adaptive edge-preserving denoising by point-wise wavelet basis selection,” IET Signal Process. 6(1), 1–7 (2012).

13. H. Stollberg, J. Boutet de Monvel, A. Holmberg, and H. M. Hertz, “Wavelet-based image restoration for compact X-ray microscopy,” J. Microsc. 211(2), 154–160 (2003).

14. X. J. Guo, X. L. Liu, C. Ni, B. Liu, S. M. Huang, and M. Gu, “Improving image quality of x-ray in-line phase contrast imaging using an image restoration method,” Opt. Express 19(23), 23460–23468 (2011).

15. G. Vaschenko, F. Brizuela, C. Brewer, M. Grisham, H. Mancini, C. S. Menoni, M. C. Marconi, J. J. Rocca, W. Chao, J. A. Liddle, E. H. Anderson, D. T. Attwood, A. V. Vinogradov, I. A. Artioukov, Y. P. Pershyn, and V. V. Kondratenko, “Nanoimaging with a compact extreme-ultraviolet laser,” Opt. Lett. 30(16), 2095–2097 (2005).

16. D. Y. Parkinson, G. McDermott, L. D. Etkin, M. A. Le Gros, and C. A. Larabell, “Quantitative 3-D imaging of eukaryotic cells using soft X-ray tomography,” J. Struct. Biol. 162(3), 380–386 (2008).

17. S. Carbajo, I. D. Howlett, F. Brizuela, K. S. Buchanan, M. C. Marconi, W. Chao, E. H. Anderson, I. Artioukov, A. Vinogradov, J. J. Rocca, and C. S. Menoni, “Sequential single-shot imaging of nanoscale dynamic interactions with a table-top soft x-ray laser,” Opt. Lett. 37(14), 2994–2996 (2012).

18. E. H. Anderson, “Specialized electron beam nanolithography for EUV and X-ray diffractive optics,” IEEE J. Quantum Electron. 42(1), 27–35 (2006).

19. I. Daubechies, Ten Lectures on Wavelets, Vol. 61 of CBMS-NSF Regional Conference Series in Applied Mathematics (SIAM, 1992).

20. S. P. Ghael, A. M. Sayeed, and R. G. Baraniuk, “Improved wavelet denoising via empirical Wiener filtering,” in Proceedings of SPIE, Mathematical Imaging, San Diego (1997).

21. J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of Gaussians in the wavelet domain,” IEEE Trans. Image Process. 12(11), 1338–1351 (2003).

22. F. Luisier, T. Blu, and M. Unser, “A new SURE approach to image denoising: interscale orthonormal wavelet thresholding,” IEEE Trans. Image Process. 16(3), 593–606 (2007).

23. A. Foi, V. Katkovnik, and K. Egiazarian, “Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images,” IEEE Trans. Image Process. 16(5), 1395–1411 (2007).



Figures (10)

Fig. 1 Two images of the same object, yinp1 and yinp2. The salt-and-pepper background noise is clearly visible, exhibiting the same spatial distribution in both images. The object is a wheel test pattern; the width of the features in the inner ring is 100 nm.
Fig. 2 From left to right: the background-corrected images (y1 and y2) and their average y. The deterministic noise pattern is canceled out by subtraction, and the residual noise is reduced by preprocessing and averaging.
Fig. 3 Left: sample estimate of the total noise level yd. Center: resulting total noise level model σ̂_w(m,n) built from the difference of two images. Right: total noise level σ̂_w1(m,n) estimated from a single image.
Fig. 4 Left: wavelet-based background noise model σ_wb0. Right: polynomial background noise model σ_wb.
Fig. 5 Left: PDF estimate of the background noise level. Right: PDF estimate of the resulting pure input noise level.
Fig. 6 Left: wavelet-denoised image using a fixed threshold. Right: wavelet-denoised image using adaptive thresholds.
Fig. 7 Proposed wavelet denoising method using adaptive thresholds. WT denotes the wavelet transform, WT−1 its inverse, and the indices correspond to the choice of wavelets.
Fig. 8 Proposed variant of empirical Wiener denoising in the wavelet domain.
Fig. 9 Denoised image using the proposed empirical Wiener denoising in the wavelet domain.
Fig. 10 Synthetic test image analysis. From left to right: noisy image, exact noise levels (Gaussian + Poisson), estimated noise levels, restored image using Wiener filtering in the wavelet domain.
Tables (2)

Table 1 Residual noise levels and noise gains (in brackets) for different denoising methods and two residual noise estimators on the real image example*
Table 2 Signal to noise ratios (SNR), residual noise levels and noise gains (in brackets) for different denoising methods and two residual noise estimators on the synthetic image example*

Equations (14)

(1) $y(m,n) = x(m,n) + w(m,n)$
(2) $y = \dfrac{y_1 + y_2}{2}$
(3) $z(m,n) = \sum_{j=1,\ldots,J} \left[ h_j(m,n)\,H_{m,n,j} + v_j(m,n)\,V_{m,n,j} + d_j(m,n)\,D_{m,n,j} \right] + a_J(m,n)\,A_{m,n,J}$
(4) $T = K\sigma_N$
(5) $t_j(m,n) = |\psi_j(m,n)| - T, \qquad \hat{\psi}_j(m,n) = \operatorname{sign}[\psi_j(m,n)]\,\bigl(t_j(m,n) + |t_j(m,n)|\bigr)/2$
(6) $\hat{\sigma}_{wb} = p_5 M^2 + p_4 M N + p_3 N^2 + p_2 M + p_1 N + p_0$
(7) $\Phi^T \Phi P^{*} = \Phi^T \sigma_{wb0}$
(8) $\hat{\sigma}_{w1}(m,n) = \operatorname{medFilt}_{15\times 15}\bigl(|d_1(m,n)|\bigr)/0.6745$
(9) $T(m,n) = K\sigma_w(m,n)$
(10) $t_j(m,n) = |\psi_j(m,n)| - T(m,n), \qquad \hat{\psi}_j(m,n) = \operatorname{sign}[\psi_j(m,n)]\,\bigl(t_j(m,n) + |t_j(m,n)|\bigr)/2$
(11) $\hat{\psi}_j(m,n) = \dfrac{|\psi_{pj}(m,n)|^2}{|\psi_{pj}(m,n)|^2 + \sigma_N^2}\,\psi_j(m,n)$
(12) $\hat{\psi}_j(m,n) = \dfrac{|\psi_{pj}(m,n)|^2}{|\psi_{pj}(m,n)|^2 + \sigma_w^2(m,n)}\,\psi_j(m,n)$
(13) $\hat{d}_m = \operatorname{mean}(|\hat{y}_1 - \hat{y}_2|)/1.131$
(14) $d_m = \operatorname{mean}(|y_1 - y_2|)/1.131, \qquad g_e = 20\log_{10}(d_m/\hat{d}_m)$