
Flexible focus function consisting of convex function and image enhancement filter

Open Access

Abstract

We propose a new focus function Λ that, like many of the existing focus functions, consists of a convex function and an image enhancement filter. Λ is rather flexible because for any convex function and image enhancement filter, it is a focus function. We proved that Λ is a focus function using a model and Jensen’s inequality. Furthermore, we generated random Λs and experimentally applied them to simulated and real blurred images, finding that 98% and 99% of the random Λs, respectively, have a maximum value at the best-focused image and most of them decrease as the defocus increases. We also applied random Λs to motion-blurred images, blurred images in different-sized windows, and blurred images with different types of noise. We found that Λ can be applied to motion blur and is robust to different-sized windows and different noise types.

© 2014 Optical Society of America

1. Introduction

A focus function is always modeled as a function that has a maximum value at the best-focused image and decreases as the defocus increases [1]. Efforts are under way to develop focus functions relevant to widely used applications, such as automated microscopy, holography, deblurring, image refocusing, iris recognition, and depth of focus estimation [2]. In the past 40 years, more than 30 focus functions have been proposed. Some of them have the same principles or similar patterns, and some have the same formulae but different parameters. Thus, researchers have presented several categories of focus functions.

A large category of focus functions is derivative-based functions, which assume that well-focused images have more high-frequency content than defocused images [3,4], and apply image enhancement filters to extract the high-frequency content. (In this paper, an image enhancement filter is a convolution mask that uses smoothing or sharpening of the image contrast to make an image more useful for analysis.) Brenner et al. constructed a focus function by summing the squares of the second-order differences of the image [5]. Groen et al. presented three categories of focus functions and applied methods that sum the normalized variation of the image [6]. Krotkov convolved the image with a Sobel filter and then summed the squares of the gradients of the image [7]. Subbarao et al. convolved the image with a Laplace filter and then summed the squares of the gradients of the image [8]. Nayar et al. convolved the image with a Laplacian filter and then used the l1-norm of the gradient to construct the focus function [9]. Santos et al. presented several focus functions by summing the differences of the image higher than a threshold, summing the squares of the differences of the image higher than a threshold, and summing the squares of the image values higher than a threshold [10]. Daugman proposed a sharpening filter and summed the total high-frequency power of the 2D Fourier spectrum of the image [11]. Kang et al. proposed a sharpening filter smaller than Daugman’s and argued that the filter contains more high-frequency power [12]. Langehanenberg et al. used logarithmically weighted cumulated Fourier spectra to determine the image sharpness of the reconstructed amplitude distributions [13].

A small category of focus functions is mid-frequency-based functions. Lee et al. detected objects distributed in the image using a mid-frequency discrete cosine transform (DCT) focus measure and then selected the target object through fuzzy reasoning [14]. Feng et al. presented an image clarity evaluation function based on the center blocking DCT method [15]. Wang et al. used combination filters to strengthen the image medium-frequency information in order to construct focus functions [16]. Another small category of focus functions is those based on statistics, in which focus functions are constructed using variance and correlation. Mendelsohn et al. summed the image values higher than a threshold [17]. Groen et al. counted the number of pixels with values higher than a threshold [6]. Another small category of focus functions is histogram-based functions. Firestone et al. argued that a focused image contains more information than a defocused image and used the image entropy [18].

There are also focus functions that cannot be classified into the above categories. For example, Brázdilová et al. selected only areas of interest within an image and proposed further enhancements of existing methods [19]. Hamm et al. proposed a content-based focus searching method that uses a priori information on the observed objects by employing local object features and boosted learning [20]. Tsai et al. proposed a focus function that transforms the focus profile to the reciprocal domain [21]. Xu et al. proposed a focus detection criterion based on the phase contrast [22]. Ferraro et al. used an approximation of the Tamura coefficient to construct a focus function [23]. Gao et al. used the difference between the normalized amplitudes of red and green light [24].

Categorizing focus functions helps people understand how a focus function works. In addition, focus functions in one category always have one or more parameters, which enable researchers to construct new focus functions by tuning the parameters on the basis of the known focus functions. For example, Elozory et al. proposed an indicator function that signifies whether the current pixel location is high contrast, a function that uses the binary median filter, and a function that returns the thresholded absolute gradient contrast strength in both the horizontal and vertical directions for a given pixel location [25].

However, no study has presented a focus function that both summarizes many known focus functions and has flexible and adjustable parameters. In this paper, we summarize 13 known focus functions [5–13] to produce the focus function Λ, which has two flexible and adjustable parameters: a convex function and an image enhancement filter. For any given convex function and image enhancement filter, Λ is a focus function.

The proposed focus function Λ arose from the observation that many of the known focus functions, despite their differences, consist of a convex function and an image enhancement filter, as shown in Eq. (1):

$$\Lambda = \iint \varphi\bigl(I(x,y) * g\bigr)\, dx\, dy, \tag{1}$$
where I denotes the image, (x,y) denotes the coordinates of the image pixels, φ is a convex function, and g is an image enhancement filter.
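As a concrete illustration, the following Python sketch evaluates a discrete version of Eq. (1) for one example choice of φ and g (a quadratic convex function and a 3 × 3 Laplacian mask). The image, the function, and the filter here are illustrative stand-ins rather than the specific ones used later in the experiments.

```python
import numpy as np
from scipy.ndimage import convolve

def focus_value(image, phi, g):
    """Discrete version of Eq. (1): sum phi((I * g)(x, y)) over all pixels.
    phi is a convex function applied elementwise, g is an image enhancement
    filter (a convolution mask); values outside the image are taken
    periodically ('wrap'), matching the assumption in Section 3."""
    filtered = convolve(image.astype(float), g, mode='wrap')
    return float(np.sum(phi(filtered)))

# Illustrative choices following the general pattern of Table 1:
# a quadratic convex function and a 3x3 Laplacian sharpening mask.
phi = lambda v: v ** 2
laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

image = np.random.rand(128, 128)   # stand-in for a normalized gray-level image
print(focus_value(image, phi, laplacian))
```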

A close observation of the focus functions that can be summarized by Eq. (1) yields two conclusions. First, the convex function φ can be a strictly convex function, such as the quadratic function φ = x², or a non-strictly convex function, such as the absolute value function φ = |x|. Second, it can be an axisymmetric function, such as the quadratic function φ = x², or a non-axisymmetric function, such as the thresholded absolute value function (e.g., φ = 0 when x < 0 and φ = |x| when x ≥ 0). Further, the choice of the image enhancement filter g is rather arbitrary; it can be a one-dimensional image enhancement filter such as the difference filter $g = [\,1 \;\; -1\,]$ or a two-dimensional image enhancement filter such as the Laplacian mask $g = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$. Most image enhancement filters are sharpening filters, such as the Laplacian filter; however, we found that smoothing filters, for example, the Gaussian filter, also work.

The observations above imply that the construction of the convex functions and image enhancement filters is rather free. Thus, we wondered whether Eq. (1) yields a focus function for any convex function and image enhancement filter.

If the answer is yes, we would have an exceptionally flexible focus function, because the construction of the convex function and image enhancement filter is rather arbitrary. Furthermore, the new function would deepen our understanding of how focus functions work, because so many known focus functions can be summarized by Eq. (1).

This paper is arranged as follows. First, we propose the new focus function Λ. Second, we create a model of the defocus problem. Then, using the model, we give a brief proof that Λ is a focus function for any convex function and image enhancement filter. We must note that the proof is based on the model; however, the result might also work in situations that do not fit the model. Finally, we construct random Λs and experimentally apply them to both out-of-focus blurred images and motion-blurred images.

2. The proposed Λ

We found that 13 known focus functions can be expressed by Eq. (1), as shown in Table 1.

Table 1. List of Known Focus Functions that Can Be Expressed by Eq. (1).

Furthermore, we noticed that the construction of the convex functions and the image enhancement filters are rather free, as shown in Fig. 1. Therefore, we proposed that for any convex function and image enhancement filter, Λ is a focus function.

Fig. 1 Curves of convex functions φ in Table 1. Functions a, b, c, d, and e are convex; f and g are approximately convex. Function h is reversed convex because defocusing an image always shrinks the histogram, which reverses the smoothing process.

The major challenge of proving that Λ is a focus function lies in how to model the defocus problem such that we can discuss it mathematically.

3. Modeling the defocus problem

In optical systems, an out-of-focus image is always described as the real image convolved with a point spread function (PSF). In polar coordinates, a diffraction-limited PSF has the form of a Bessel function of the first kind. For radiation of wavelength λ focused at an f-number F, it is

$$h(r) = \frac{4\left[J_1\!\left(\pi r (\lambda F)^{-1}\right)\right]^2}{\left[\pi r (\lambda F)^{-1}\right]^2}, \tag{2}$$
which can be approximated by the Gaussian function [26], as shown in Fig. 2. In addition, the Gaussian function is also always used to fit the modulation transfer function (MTF) [27–30]. This supports the idea that the PSF can be approximated by the Gaussian function, because the MTF and PSF are connected by a Fourier transform, and the Fourier transform of a Gaussian function is also a Gaussian function.
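The following sketch, in the spirit of Fig. 2, fits a Gaussian to the diffraction-limited PSF of Eq. (2). The wavelength and f-number are arbitrary illustrative values, and the least-squares fit is only one of several ways to obtain the approximating Gaussian.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

# Diffraction-limited PSF of Eq. (2) and a least-squares Gaussian fit to it.
# The wavelength and f-number below are arbitrary illustrative values.
lam, F = 0.55e-6, 4.0
r = np.linspace(1e-9, 10e-6, 2000)          # radial coordinate (avoid r = 0)
x = np.pi * r / (lam * F)
airy = 4.0 * (j1(x) / x) ** 2                # Eq. (2)

gauss = lambda rr, s: np.exp(-rr ** 2 / (2 * s ** 2))
(s_fit,), _ = curve_fit(gauss, r, airy, p0=[1e-6])

print(f"fitted Gaussian sigma = {s_fit * 1e6:.2f} um, "
      f"max deviation = {np.max(np.abs(airy - gauss(r, s_fit))):.3f}")
```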

Fig. 2 The PSF can be approximated by a Gaussian function.

Thus, in image processing, we can obtain a blurred image by convolving a less-blurred image with a Gaussian filter. This is a blurring model; however, in practice, it is too simple. The Gaussian filter and the PSF are both Dispersion Functions (in this paper, a Dispersion Function is defined as a real function with two properties: it is non-negative everywhere, and its integral is equal to 1; see Eqs. (14) and (15) in the appendix). Therefore, we extended the model by describing the blurred image as the less-blurred image convolved with a Dispersion Function, as shown in Eq. (3):

$$I_2 = I_0 * \mathrm{PSF}_2 = I_0 * \mathrm{PSF}_1 * G = I_1 * G, \tag{3}$$
where I0 is the best focused image, I1 is the less blurred image, I2 is the more blurred image, PSF1 is the PSF of I1, PSF2 is the PSF of I2, and G is a Dispersion Function.
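A minimal sketch of this blurring model is given below, assuming a normalized Gaussian kernel as one admissible choice of the Dispersion Function G; each further-blurred image in the stack is the previous one convolved with G, as in Eq. (3).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Blurring model of Eq. (3): each further-defocused image is the previous
# image convolved with a Dispersion Function G. A normalized Gaussian kernel
# is used here purely as one admissible choice of G.
I0 = np.random.rand(128, 128)        # stand-in for the best-focused image I0
stack = [I0]
for _ in range(8):
    # gaussian_filter convolves with a non-negative kernel of unit sum,
    # so the kernel satisfies Eqs. (14) and (15) in the appendix.
    stack.append(gaussian_filter(stack[-1], sigma=1.0, mode='wrap'))
# stack[k] plays the role of I_k: a k-times dispersed copy of I0.
```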

Further, when an image is convolved by an image enhancement filter, the input image values outside the bounds of the image are computed by implicitly assuming that the input image is periodic.

4. A brief proof

The proof of “Λ is a focus function” is equivalent to the proof of “Λ has a maximum value at the best-focused image and decreases as the defocus increases.”

Statements

In the proof, we will need the Dispersion Inequality [an inequality we proposed that describes the inequality in the dispersing process, as shown in Eq. (16) in the appendix].

Continuous form of the problem

Continuous form of the problem: Consider I1 and I2 to be images of the same object plane, where I2 is more defocused than I1. Let φ be a convex function in R2 and g be an arbitrary function in R2; then we have

$$\iint \varphi\bigl((I_2 * g)(x, y)\bigr)\, dx\, dy \;\le\; \iint \varphi\bigl((I_1 * g)(x, y)\bigr)\, dx\, dy. \tag{4}$$

Here, R2 denotes two-dimensional vector space, and * is the convolution operator.

Proof

We start from the left side of Eq. (4). According to Eq. (3), we have

$$\iint \varphi\bigl((I_2 * g)(x, y)\bigr)\, dx\, dy = \iint \varphi\bigl(((I_1 * G) * g)(x, y)\bigr)\, dx\, dy = \iint \varphi\bigl((I_1 * (G * g))(x, y)\bigr)\, dx\, dy, \tag{5}$$
where G is a Dispersion Function.

On the other hand, according to the two-dimensional form of Eq. (25) in the appendix, we have

$$\iint \varphi\bigl((I_1 * (G * g))(x, y)\bigr)\, dx\, dy \;\le\; \iint \varphi\bigl((I_1 * g)(x, y)\bigr)\, dx\, dy, \tag{6}$$
where G is a Dispersion Function.

From Eqs. (5) and (6) we obtain Eq. (4). □

The proof is briefly illustrated in Fig. 3.

Fig. 3 Brief illustration of the proof.

Discussion

The inequality might be the consequence of light energy dispersion during defocusing.

In real applications, images always have a finite height and width, in which condition Eq. (4) takes the form of

$$\iint_{\Omega} \varphi\bigl((I_2 * g)(x, y)\bigr)\, dx\, dy \;\le\; \iint_{\Omega} \varphi\bigl((I_1 * g)(x, y)\bigr)\, dx\, dy, \tag{7}$$
where Ω is the image window.

The proof was based on a model; however, the result might also hold in situations that do not fit the model, such as holography, which is a coherent imaging problem. Despite this limitation, the proof does suggest a focus function that works under most situations. To study the performance of Λ in real situations, Λ was experimentally applied to blurred images.

5. Experimental results

The experiments were conducted in two parts. First, the experimental data, which contain the random Λs and the blurred images, were prepared. Then, we experimentally applied the constructed random Λs to the blurred images.

5.1 Preparation of experimental data

We began by constructing random Λs by generating random convex functions and random image enhancement filters.

First, random convex functions were constructed. In this paper, the convex function was modeled as a real polynomial in one variable. Because the second derivative of a polynomial is also a polynomial, and the second derivative of a convex function is non-negative everywhere, the problem of constructing a convex polynomial was reduced to the easier problem of constructing a non-negative polynomial. Furthermore, non-negative polynomials have the property that “every real polynomial in one variable is non-negative on R if and only if it is a sum of two squares of real polynomials in one variable” [31], which enables us to model a non-negative polynomial as

$$p_m^2 + p_n^2, \tag{8}$$
where pm represents an mth-order polynomial, and pn represents an nth-order polynomial.

Integrating Eq. (8) twice, we obtain the convex function

$$\iint \bigl(p_m^2 + p_n^2\bigr). \tag{9}$$
In this paper, we set m = 3, n = 2; thus, the convex function was an eighth-order polynomial.

Random convex functions can be constructed by randomizing the parameters in Eq. (9). In this paper, the gray values were normalized in the range of [0,1]. Consequently, to obtain convex functions significant in the range of [-1,1], the parameters were randomized in the range of [-1,1]. Examples of the obtained random convex functions are shown in Fig. 4.
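A minimal sketch of this construction is shown below. The random seed and the coefficient ranges follow the description above, while the use of numpy's Polynomial class is simply one convenient implementation choice.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def random_convex_poly(m=3, n=2):
    """Eqs. (8)-(9): square two random polynomials, add them, and integrate
    twice; the result has a non-negative second derivative, hence is convex.
    Coefficients are drawn uniformly from [-1, 1], as in the text."""
    p_m = Polynomial(rng.uniform(-1.0, 1.0, m + 1))
    p_n = Polynomial(rng.uniform(-1.0, 1.0, n + 1))
    second_derivative = p_m ** 2 + p_n ** 2      # non-negative everywhere
    return second_derivative.integ(2)            # integrate twice -> convex

phi = random_convex_poly()                       # eighth-order for m = 3, n = 2
xs = np.linspace(-1.0, 1.0, 201)
assert np.all(phi.deriv(2)(xs) >= -1e-12)        # convexity check on [-1, 1]
```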

Fig. 4 Shapes of 30 random convex functions generated by Eq. (9) in the range of [-1,1].

Next, the random image enhancement filters g were constructed as 5 × 5 matrices whose entries were randomized in the range of [-0.5, 0.5]. There are two types of image filters in the known focus functions: smoothing filters, such as Gaussian filters, in which the sum of all the elements equals 1, and sharpening filters, such as the Laplacian filter, in which the sum of all the elements equals 0. Here, the random filters g act as sharpening filters; thus, each g was normalized by shifting all of its elements by a constant offset so that they sum to 0. Examples of the random image enhancement filters are displayed in Fig. 5.
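The construction of the random filters can be sketched as follows; subtracting the mean of the entries is one way to realize the constant shift that makes the elements sum to zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_enhancement_filter(size=5):
    """Random size x size filter with entries drawn from [-0.5, 0.5], then
    shifted by a constant so the elements sum to zero (the sharpening-filter
    normalization described above)."""
    g = rng.uniform(-0.5, 0.5, (size, size))
    return g - g.mean()

g = random_enhancement_filter()
print(abs(g.sum()) < 1e-12)    # True: the filter behaves like a sharpening mask
```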

Fig. 5 Meshes of four random image enhancement filters.

Next, three series of blurred images were constructed: simulated out-of-focus blurred images, real out-of-focus images, and simulated motion-blurred images.

The simulated out-of-focus blurred images were constructed by convolving the focused images (as shown in Fig. 6) by the simulated PSFs. Four types of images were selected: a portrait (the Lena image), an object, an outdoor landscape, and a microscopic image. The PSFs were simulated using PSF Lab, which simulates the illumination PSF of a confocal microscope under various imaging conditions [32]. Most of the parameters in PSF Lab were set to the defaults except for the depth parameter, which denotes the distance from the microscope objective to the coverslip/sample interface. Adjusting the depth from −6 μm to 26 μm in 2 μm steps, we obtained 17 PSFs; examples are shown in Fig. 7. Then, convolving the focused image by the PSFs, we obtained the simulated out-of-focus blurred images, which were indexed from 1 to 17 with increasing depth; examples are shown in Fig. 8. The blurred images were best focused at index 8, and the defocus increased as the index increased or decreased.

Fig. 6 Original images: portrait (Lena), object, outdoor landscape, and microscopic image.

Fig. 7 Examples of hot color maps of PSFs generated by PSF Lab.

Fig. 8 Examples of blurred images. Blurred images were best focused at index 8, and the defocus increased as the index increased or decreased.

The real out-of-focus blurred images were captured by a microscope (XSP-06-1600). Here, a 5× eyepiece and a 100× objective were selected to observe a printed paper. The focal length was adjusted manually such that the object plane crossed the focal plane from below to above. Seventeen blurred images were captured and were indexed from 1 to 17 from the lowest to the highest object plane; examples are shown in Fig. 9. The blurred images were best focused at index 10, and the defocus increased as the index increased or decreased.

Fig. 9 Examples of real out-of-focus blurred images captured by a microscope. The images were best focused at index 10, and the defocus increased as the index increased or decreased.

Motion blur was simulated by a filter approximating rapid linear camera motion at an angle of 45 degrees with blurring degrees of 0, 2, 4, …, 16 pixels. Nine PSFs were constructed; examples are shown in Fig. 10. Then, by convolving the original Lena image (shown in Fig. 6) with the PSFs, the motion-blurred images were obtained; examples are shown in Fig. 11. The images were indexed from 1 to 9 with increasing blurring degree. The blurred images were the least motion-blurred at index 1, and the blurring degree increased as the index increased.
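A possible sketch of this motion-blur simulation is given below. The normalized anti-diagonal matrix is only a simple stand-in for a 45-degree linear-motion PSF and is not necessarily the exact filter used in the experiments.

```python
import numpy as np
from scipy.ndimage import convolve

def motion_psf_45(length):
    """Linear-motion PSF along 45 degrees: a normalized diagonal of ones.
    This is only a simple stand-in for the motion filter described above."""
    if length <= 1:
        return np.array([[1.0]])
    h = np.eye(length)[::-1]      # anti-diagonal, i.e. a 45-degree streak
    return h / h.sum()

image = np.random.rand(256, 256)  # stand-in for the original Lena image
motion_blurred = [convolve(image, motion_psf_45(d), mode='wrap')
                  for d in range(0, 18, 2)]   # blurring degrees 0, 2, ..., 16
```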

Fig. 10 Examples of meshes of motion-blurred PSFs.

Fig. 11 Examples of motion-blurred images. The images were the least motion blurred at index 1, and the blurring degree increased as the index increased.

Third, three series of simulated blurred images in different-sized windows were constructed by selecting different-sized windows on the simulated out-of-focus blurred Lena images, as follows: a 64 × 64 small window, a 128 × 128 medium window, and a 256 × 256 large window, as shown in Figs. 12 and 13. Each window contains both textured and smooth areas.

Fig. 12 Different-sized windows on Lena image.

Fig. 13 Examples of blurred images in different-sized windows. Blurred images were best focused at index 8, and the defocus increased as the index increased or decreased.

Finally, three series of simulated blurred images with different noise were constructed by adding three types of noise to the simulated out-of-focus blurred Lena images: photon noise, thermal noise, and photon plus thermal noise. Photon noise is caused by photons and is usually represented by a Poisson distribution. Thermal noise is caused by electronic devices and follows a Gaussian distribution. The proposed Poisson–Gaussian model has the form [33]

$$I_N = I + n_P + n_G, \tag{10}$$
where $I_N$ represents the image with noise, $I$ represents the original image, and $n_P$ represents the Poisson noise component, for which
$$\frac{1}{a}\,(I + n_P) \sim \mathcal{P}\!\left(\frac{1}{a}\, I\right), \tag{11}$$
where $a$ is the parameter of the Poisson distribution. The Gaussian component $n_G$ is given by
$$n_G \sim \mathcal{N}(0, b), \tag{12}$$
where $b$ is the standard deviation of the noise.

Setting a = 0.064, b = 0; a = 0, b = 0.032; and a = 0.064, b = 0.032, three series of blurred images with noise were obtained, as shown in Fig. 14.
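The noise model of Eqs. (10)–(12) can be sketched as follows. The sampling scheme, scaling a Poisson variate by a and adding zero-mean Gaussian noise of standard deviation b, is one straightforward reading of the model, and the image here is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

def add_poisson_gaussian_noise(image, a, b):
    """Poisson-Gaussian model of Eqs. (10)-(12): the signal-dependent term
    satisfies (I + n_P) = a * Poisson(I / a), and n_G ~ N(0, b) with b the
    standard deviation. a = 0 or b = 0 switches the corresponding term off."""
    noisy = a * rng.poisson(image / a) if a > 0 else image.copy()
    if b > 0:
        noisy = noisy + rng.normal(0.0, b, image.shape)
    return noisy

I = rng.random((128, 128))        # normalized gray values, stand-in image
photon   = add_poisson_gaussian_noise(I, a=0.064, b=0.0)
thermal  = add_poisson_gaussian_noise(I, a=0.0,   b=0.032)
combined = add_poisson_gaussian_noise(I, a=0.064, b=0.032)
```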

Fig. 14 Examples of blurred images with different types of noise. Blurred images were best focused at index 8, and the defocus increased as the index increased or decreased.

5.2 Experiments

The experiments were conducted as follows.

First, the constructed Λs were applied to the simulated out-of-focus blurred images. For convenience in comparing the performances of the Λs, their values were normalized in the range of [-1,1]. For each Λ curve, normalization was performed in two steps. First, the curve was shifted vertically along the y-axis such that $\Lambda'(I_1) = 0$; then, Λ′ was normalized using Eq. (13):

$$\mathrm{norm}\,\Lambda'(I_i) = \frac{\Lambda'(I_i)}{\max\bigl(\mathrm{abs}(\Lambda'(I_i))\bigr)}, \tag{13}$$
where I denotes the blurred image, i denotes the index of the blurred image, and norm indicates normalization.
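The two-step normalization can be sketched as follows; the input is the sequence of raw Λ values over the image index, and the example values are purely illustrative.

```python
import numpy as np

def normalize_curve(values):
    """Two-step normalization used in Section 5.2: shift the curve so that
    its value at the first image is zero, then divide by the maximum absolute
    value (Eq. (13)) so every curve lies in [-1, 1]."""
    v = np.asarray(values, dtype=float)
    v = v - v[0]
    return v / np.max(np.abs(v))

curve = normalize_curve([3.1, 4.7, 9.2, 4.9, 3.0])   # illustrative raw Λ values
```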

The experimental results are displayed in Fig. 15, where the x axis denotes the image index, and the y axis denotes the Λ value. About 98% of the constructed Λs reach the maximum value at the best-focused image and most of them decrease as the defocus increases, which supports our proof in the previous section.

Fig. 15 Experimental results for 196 random Λs applied to simulated out-of-focus blurred images. The blurred images were best focused at index 8, and the defocus increased as the index increased or decreased.

Second, the constructed Λs were applied to the real out-of-focus blurred images. The focus value was calculated for the entire image. For convenient comparison, the same normalization method was used. The experimental results are displayed in Fig. 16. About 99% of the constructed Λs reach the maximum value at the best-focused image and almost all of them decrease as the defocus increases, which confirms that the focus function Λ can be used in real applications.

Fig. 16 Experimental results for 196 constructed Λs applied to real out-of-focus blurred images. The images were best focused at index 10, and the defocus increased as the index increased or decreased.

From Fig. 15 and Fig. 16, we can see that about 2% of the constructed Λs do not work well as focus functions. This is because the proof was constructed using a model, whereas the real situation is far more complicated. First, when an image is convolved by an image enhancement filter, input image values outside the bounds of the image are used, which causes errors. Second, real objects are three-dimensional rather than two-dimensional, so the image plane is not an ideal plane, which also causes errors. Third, errors result from noise in the optical system. The experimental results indicated that the best focus function varies with the application. In fact, researchers have been comparing and choosing focus functions for specific applications for years, and to date, the best focus functions for different applications have always varied.

Third, the constructed focus functions Λ were applied to the motion-blurred images. Since focus functions are widely used in many applications, we expected that Λ might also work on motion blur. For convenient comparison, the same normalization was used. The experimental results are displayed in Fig. 17. About 82% of the constructed Λs reach maximum values for the image without motion blurring and decrease as the motion blur increases, which confirms that the proposed focus function Λ works on not only out-of-focus blur, but also motion blur.

Fig. 17 Experimental results for 196 random Λs applied to motion-blurred images. The images were the least motion blurred at index 1, and the blurring degree increased as the index increased.

Then, the constructed Λs were applied to the simulated out-of-focus blurred images in different-sized windows. For convenient comparison, the same normalization was used. The experimental results are displayed in Fig. 18. About 99.9% of the constructed Λs reach the maximum value at the best-focused image, and most of them decrease as the defocus increases, which confirms that the focus function Λ can be used when a window is applied.

Fig. 18 Experimental results for 196 constructed Λs applied to simulated out-of-focus blurred images in different-sized windows. The images were best focused at index 8, and the defocus increased as the index increased or decreased.

From Figs. 15 and 18, we can see that the Λs perform better for larger images or windows. That might be the result of the scale-space effect [34]. When an image is convolved by an image enhancement filter, input image values outside the bounds of the image are used, which causes errors, and for a fixed-size PSF, the smaller the image is, the larger the errors are.

Finally, the constructed Λs were applied to the simulated out-of-focus blurred images with different noise. For convenient comparison, the same normalization was used. The experimental results are displayed in Fig. 19. About 98% of the constructed Λs reach the maximum value at the best-focused image, and most of them decrease as the defocus increases, which confirms that the focus function Λ is robust to Poisson and Gaussian noise, such as photon noise and thermal noise.

Fig. 19 Experimental results for 196 constructed Λs applied to simulated out-of-focus blurred images with different types of noise. The images were best focused at index 8, and the defocus increased as the index increased or decreased.

6. Conclusions

The known focus functions are summarized to create the flexible focus function Λ, which consists of a convex function and an image enhancement filter. Λ is rather flexible because for any convex function and any image enhancement filter, it is a focus function. A proof was also proposed using a model and Jensen’s inequality. It should be noted that the proof is built on a Gaussian or Gaussian-like PSF assumption, which may be incompatible with some imaging systems. Nevertheless, the proposed focus function is still acceptable in a wide variety of applications. A method of constructing random Λs was developed, and experimental results for both simulated and real blurred images proved its feasibility. The developed technique can be especially helpful in practical applications such as microscopy, holography, depth of focus estimation, deblurring, image refocusing, and iris recognition because it provides significant convenience in constructing focus functions. Future studies will address questions such as how to construct Λ for a specific application. Another question of interest is how to construct the convex functions by adjusting the parameters. Furthermore, studies on how the criteria for good focus functions can be improved would also be of considerable value.

Appendix 1

The Dispersion Inequality

Statements

To prove that Λ is a focus function, we propose the Dispersion Inequality, which describes the inequality in the dispersion process.

First, we introduce two terms, the Dispersion Function and “disperse”:

The Dispersion Function: Let σ be an integrable function in R1; then σ is a Dispersion Function if σ satisfies the following two formulas:

$$\sigma(x) \ge 0, \tag{14}$$
$$\int \sigma(x)\, dx = 1. \tag{15}$$

disperse: Let f be an integrable function in R1; then we say that f is dispersed when it is convolved with a Dispersion Function σ.

Then, the Dispersion Inequality is defined as follows.

The Dispersion Inequality

The Dispersion Inequality: Let φ be a convex function on [α,β], f be an integrable function in R1 with α ≤ f(x) ≤ β for all x ∈ R1, and σ be a Dispersion Function. Then the Dispersion Inequality is stated as

$$\int \varphi\bigl((f * \sigma)(x)\bigr)\, dx \;\le\; \int \varphi\bigl(f(x)\bigr)\, dx. \tag{16}$$

Here, R1 is a one-dimensional vector space, and * is the convolution operator.

Proof

From the left part of Eq. (16), we get

$$\int \varphi\bigl((f * \sigma)(x)\bigr)\, dx = \int \varphi\!\left(\int f(t)\,\sigma(x - t)\, dt\right) dx. \tag{17}$$

Then, we used Jensen’s inequality [35]. Jensen’s inequality states that if φ is a convex function on [α,β], f and p are integrable functions in R1, α ≤ f(x) ≤ β for all x ∈ R1, and p satisfies the following two formulas:

$$p(x) \ge 0, \tag{18}$$
$$\int p(x)\, dx > 0. \tag{19}$$

Then we have

$$\varphi\!\left(\frac{\int p(x)\, f(x)\, dx}{\int p(x)\, dx}\right) \le \frac{\int p(x)\,\varphi\bigl(f(x)\bigr)\, dx}{\int p(x)\, dx}. \tag{20}$$

Obviously, σ is a special case of p. Replacing p with σ in Eq. (20) and using Eq. (15), we get

$$\varphi\!\left(\int f(x)\,\sigma(x)\, dx\right) \le \int \sigma(x)\,\varphi\bigl(f(x)\bigr)\, dx. \tag{21}$$

From Eq. (21), we have

$$\int \varphi\!\left(\int f(t)\,\sigma(x - t)\, dt\right) dx \le \iint \sigma(x - t)\,\varphi\bigl(f(t)\bigr)\, dt\, dx. \tag{22}$$

From the right side of Eq. (22), we have

$$\iint \sigma(x - t)\,\varphi\bigl(f(t)\bigr)\, dt\, dx = \iint \sigma(x - t)\,\varphi\bigl(f(t)\bigr)\, dx\, dt = \int \varphi\bigl(f(t)\bigr) \int \sigma(x - t)\, dx\, dt = \int \varphi\bigl(f(t)\bigr)\, dt = \int \varphi\bigl(f(x)\bigr)\, dx. \tag{23}$$

From Eqs. (17), (22) and (23), we obtain Eq. (16). □

Discussion

Special attention is called to the fact that if the convolution operator * in the Dispersion Inequality is replaced with the cross-correlation operator, the inequality still holds. The inequality also holds in higher dimensions.

Finally, we introduce two characteristics of the Dispersion Inequality that are applied directly to prove that Λ is a focus function in this article:

  • 1. Transitivity

    Let σ1 and σ2 be Dispersion Functions; then we have

    $$\int \varphi\bigl((f * (\sigma_1 * \sigma_2))(x)\bigr)\, dx \le \int \varphi\bigl((f * \sigma_1)(x)\bigr)\, dx. \tag{24}$$

  • 2. Pre-proceeding invariance

    Let g be an integrable function in R1; then we have

    $$\int \varphi\bigl(((f * (\sigma_1 * \sigma_2)) * g)(x)\bigr)\, dx \le \int \varphi\bigl(((f * \sigma_1) * g)(x)\bigr)\, dx. \tag{25}$$

    The proof is omitted.

Appendix 2

The finite form of Λ

Statements

In section 4, we proved the continuous form of Λ. However, the situation in real optical imaging systems is always discrete, where the images and image enhancement filters are matrices. In this section, a proof of the finite form of Λ is presented.

First, we start from a lemma.

Lemma

The simplified finite form of Λ: Consider I1 and I2 to be images of the same object plane, where I2 is more defocused than I1, I1 and I2 to be m × n matrices, and φ to be a convex function in R2; then we have

$$\sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_2(i,j)\bigr) \le \sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_1(i,j)\bigr). \tag{26}$$

Proof

According to Eq. (3), I2 is equal to I1 convolved by a Dispersion Function G that is a p × q matrix, which indicates that from the left side of Eq. (26), we have

$$\sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_2(i,j)\bigr) = \sum_{i=1,\,j=1}^{m,\,n} \varphi\!\left(\sum_{r=1,\,s=1}^{p,\,q} G(r,s)\, I_1\bigl(\mathrm{mod}(i+r,m),\, \mathrm{mod}(j+s,n)\bigr)\right) = \sum_{i=1,\,j=1}^{m,\,n} \varphi\!\left(\sum_{r=1,\,s=1}^{p,\,q} G(r,s)\, I_1(i+r,\, j+s)\right), \tag{27}$$
where “mod” denotes modulus.

On the other hand, G is a Dispersion Function; thus, we have the following two formulas:

$$G(r,s) > 0, \quad 1 \le r \le p,\ 1 \le s \le q, \tag{28}$$
$$\sum_{r=1,\,s=1}^{p,\,q} G(r,s) = 1. \tag{29}$$

According to Eq. (29), from the right side of Eq. (26) we have

$$\sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_1(i,j)\bigr) = \left(\sum_{r=1,\,s=1}^{p,\,q} G(r,s)\right) \sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_1(i,j)\bigr) = \sum_{i=1,\,j=1}^{m,\,n} \sum_{r=1,\,s=1}^{p,\,q} G(r,s)\,\varphi\bigl(I_1(\mathrm{mod}(i+r,m),\, \mathrm{mod}(j+s,n))\bigr) = \sum_{i=1,\,j=1}^{m,\,n} \sum_{r=1,\,s=1}^{p,\,q} G(r,s)\,\varphi\bigl(I_1(i+r,\, j+s)\bigr), \tag{30}$$
where “mod” denotes modulus.

Jensen’s inequality states that for a real convex function φ, real numbers x1, …, xn, and positive weights ai, we have

$$\varphi\!\left(\frac{\sum a_i x_i}{\sum a_i}\right) \le \frac{\sum a_i\, \varphi(x_i)}{\sum a_i}. \tag{31}$$

From Eqs. (28), (29), and (31) applied to the entries of I1, we have

$$\varphi\!\left(\sum_{r=1,\,s=1}^{p,\,q} G(r,s)\, I_1(i+r,\, j+s)\right) \le \sum_{r=1,\,s=1}^{p,\,q} G(r,s)\,\varphi\bigl(I_1(i+r,\, j+s)\bigr). \tag{32}$$

Furthermore, we have

$$\sum_{i=1,\,j=1}^{m,\,n} \varphi\!\left(\sum_{r=1,\,s=1}^{p,\,q} G(r,s)\, I_1(i+r,\, j+s)\right) \le \sum_{i=1,\,j=1}^{m,\,n} \sum_{r=1,\,s=1}^{p,\,q} G(r,s)\,\varphi\bigl(I_1(i+r,\, j+s)\bigr). \tag{33}$$

From Eqs. (27), (30) and (33), we have Eq. (26). □
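The lemma is easy to check numerically. The following sketch draws a random image and a random Dispersion Function, applies a periodic convolution (matching the modulus indexing above), and verifies Eq. (26) for the convex function φ(v) = v²; the specific sizes and the choice of φ are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(3)

# Numerical check of Eq. (26): a random image I1, a random Dispersion
# Function G (non-negative, unit sum), and the convex function phi(v) = v**2.
# The periodic 'wrap' boundary matches the modulus indexing in the proof.
I1 = rng.random((64, 64))
G = rng.random((5, 5))
G /= G.sum()                                   # Eqs. (28)-(29)
I2 = convolve(I1, G, mode='wrap')              # I2 = I1 * G, as in Eq. (3)

phi = lambda v: v ** 2
print(np.sum(phi(I2)) <= np.sum(phi(I1)))      # True, as Eq. (26) predicts
```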

Now we prove the finite form of Λ.

The finite form of Λ

The finite form of Λ: Consider that I1 and I2 are images of the same object plane, where I2 is more defocused than I1; I1 and I2 are m × n matrices, φ is a convex function in R2, g is a p × q image enhancement filter, and

$$I_1' = I_1 * g, \tag{34}$$
$$I_2' = I_2 * g. \tag{35}$$

Then we have

$$\sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_2'(i,j)\bigr) \le \sum_{i=1,\,j=1}^{m,\,n} \varphi\bigl(I_1'(i,j)\bigr). \tag{36}$$

The proof is omitted.

Acknowledgments

The authors are very grateful to Prof. Zhou Jun for fruitful discussions and helpful comments on the manuscript. The presented work builds on the National Basic Research Program of China (No. 2012CB316400) and the National Natural Science Foundation of China (No. 61171151).

References and links

1. W. Zhang, Z. Ye, T. Zhao, Y. Chen, and F. Yu, “Point spread function characteristics analysis of the wavefront coding system,” Opt. Express 15(4), 1543–1552 (2007). [CrossRef]   [PubMed]  

2. P. Favaro, “Shape from focus, and, defocus: convexity, quasiconvexity and defocus-invariant textures,” in ICCV (2007).

3. Y. Sun, S. Duthaler, and B. J. Nelson, “Autofocusing in computer microscopy: Selecting the optimal focus algorithm,” Microsc. Res. Tech. 65(3), 139–149 (2004). [CrossRef]   [PubMed]  

4. Y. Sun, S. Duthaler, and B. J. Nelson, “Autofocusing algorithm selection in computer microscopy,” in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005) (IEEE, 2005).

5. J. F. Brenner, B. S. Dew, J. B. Horton, T. King, P. W. Neurath, and W. D. Selles, “An automated microscope for cytologic research a preliminary evaluation,” J. Histochem. Cytochem. 24(1), 100–111 (1976). [CrossRef]   [PubMed]  

6. F. C. Groen, I. T. Young, and G. Ligthart, “A comparison of different focus functions for use in autofocus algorithms,” Cytometry 6(2), 81–91 (1985). [CrossRef]   [PubMed]  

7. E. Krotkov, “Focusing,” Int. J. Comput. Vis. 1(3), 223–237 (1987). [CrossRef]  

8. M. Subbarao, T. S. Choi, and A. Nikzad, “Focusing techniques,” Opt. Eng. 32(11), 2824–2836 (1993). [CrossRef]  

9. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 824–831 (1994).

10. A. Santos, C. Ortiz de Solórzano, J. J. Vaquero, J. M. Peña, N. Malpica, and F. del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188(3), 264–272 (1997). [CrossRef]   [PubMed]  

11. J. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. Video Technol. 14(1), 21–30 (2004). [CrossRef]  

12. B. J. Kang and K. R. Park, “A study on iris image restoration,” in International Conference on Audio- and Video-Based Biometric Person Authentication (2005), pp. 31–40. [CrossRef]  

13. P. Langehanenberg, B. Kemper, and G. Bally, “Autofocus algorithms for digital-holographic microscopy,” in European Conference on Biomedical Optics, Optical Society of America (2007).

14. S. Y. Lee, Y. Kumar, J. M. Cho, S. W. Lee, and S. W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18(9), 1237–1246 (2008). [CrossRef]  

15. F. Quan, K. Han, and X. C. Zhu, “A new auto-focusing method based on the center blocking DCT,” in Fourth International Conference on Image and Graphics (ICIG 2007) (2007).

16. W. Jian and H. B. Chen, “A novel auto-focus function,” in 6th International Symposium on Advanced Optical Manufacturing and Testing Technologies (AOMATT 2012) (International Society for Optics and Photonics, 2012).

17. M. L. Mendelsohn and B. H. Mayall, “Computer-oriented analysis of human chromosomes. 3. Focus,” Comput. Biol. Med. 2(2), 137–150 (1972). [CrossRef]   [PubMed]  

18. L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston Jr., “Comparison of autofocus methods for automated microscopy,” Cytometry 12(3), 195–206 (1991). [CrossRef]   [PubMed]  

19. S. L. Brázdilová and M. Kozubek, “Information content analysis in automated microscopy imaging using an adaptive autofocus algorithm for multimodal functions,” J. Microsc. 236(3), 194–202 (2009). [CrossRef]   [PubMed]  

20. H. Peter, J. Schulz, and K. H. Englmeier, “Content-based autofocusing in automated microscopy,” Image Anal. Stereol. 29(3), 173–180 (2010). [CrossRef]  

21. D. C. Tsai and H. H. Chen, “Effective autofocus decision using reciprocal focus profile,” in 18th IEEE International Conference on Image Processing (ICIP) (IEEE, 2011). [CrossRef]  

22. L. Xu, M. Mater, and J. Ni, “Focus detection criterion for refocusing in multi-wavelength digital holography,” Opt. Express 19(16), 14779–14793 (2011). [CrossRef]   [PubMed]  

23. P. Ferraro, P. Memmolo, C. Distante, M. Paturzo, A. Finizio, and B. Javidi, “An autofocusing algorithm for digital holograms,” Proc. SPIE 8384, 838408 (2012).

24. P. Gao, B. Yao, R. Rupp, J. Min, R. Guo, B. Ma, J. Zheng, M. Lei, S. Yan, D. Dan, and T. Ye, “Autofocusing based on wavelength dependence of diffraction in two-wavelength digital holographic microscopy,” Opt. Lett. 37(7), 1172–1174 (2012). [CrossRef]   [PubMed]  

25. D. T. Elozory, K. A. Kramer, B. Chaudhuri, O. P. Bonam, D. B. Goldgof, L. O. Hall, and P. R. Mouton, “Automatic section thickness determination using an absolute gradient focus function,” J. Microsc. 248(3), 245–259 (2012). [CrossRef]   [PubMed]  

26. G. V. Poropat, “Effect of system point spread function, apparent size, and detector instantaneous field of view on the infrared image contrast of small objects,” Opt. Eng. 32(10), 2598–2607 (1993). [CrossRef]  

27. F. F. Yin, M. L. Giger, and K. Doi, “Measurement of the presampling modulation transfer function of film digitizers using a curve fitting technique,” Med. Phys. 17(6), 962–966 (1990). [CrossRef]   [PubMed]  

28. S. E. Reichenbach, S. K. Park, and R. Narayanswamy, “Characterizing digital image acquisition devices,” Opt. Eng. 30(2), 170–177 (1991). [CrossRef]  

29. A. P. Tzannes and J. M. Mooney, “Measurement of the modulation transfer function of infrared cameras,” Opt. Eng. 34(6), 1808–1817 (1995). [CrossRef]  

30. T. Li, H. Feng, Z. Xu, X. Li, Z. Cen, and Q. Li, “Comparison of different analytical edge spread function models for MTF calculation using curve-fitting,” Proc. SPIE 7498, 74981H (2009). [CrossRef]  

31. E. Artin, “Über die Zerlegung definiter Funktionen in Quadrate,” Abh. Math. Seminar Univ. Hamburg 5, 85–99 (1927).

32. M. J. Nasse and J. C. Woehl, “Realistic modeling of the illumination point spread function in confocal scanning optical microscopy,” J. Opt. Soc. Am. A 27(2), 295–302 (2010). [CrossRef]   [PubMed]  

33. A. Foi, M. Trimeche, V. Katkovnik, and K. Egiazarian, “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data,” IEEE Trans. Image Process. 17(10), 1737–1754 (2008). [CrossRef]   [PubMed]  

34. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990). [CrossRef]  

35. J. L. W. V. Jensen, “Sur les fonctions convexes et les inégalités entre les valeurs moyennes,” Acta Math. 30(1), 175–193 (1906). [CrossRef]  
