Retrospective correction of nonuniform illumination on bi-level images

Abstract

We propose a novel method for correcting the effect of nonuniform illumination on a bi-level image. The proposed method is based on a penalized nonlinear least squares objective function that measures the binariness of an image and the roughness of the illumination. Compared with conventional methods, it has the advantages of (1) not suffering from a trivial minimizer, (2) not requiring the tuning of design parameters, and (3) admitting efficient optimization. In addition, it yields a unique solution since the minimization of the objective function is well-posed. In simulations and experiments, the method showed better accuracy and speed than the conventional entropy-based method.

©2009 Optical Society of America

1. Introduction

Bi-level images (images that have only two intensity values) such as documents, vehicle license plates, and bar codes must be recognized automatically in various engineering applications [1,2]. The performance of the recognition is often limited by artifacts present in the images such as blurring, nonuniform illumination, and noise. To overcome these limitations, researchers have devoted considerable effort to developing efficient deblurring methods [1–3], denoising methods [4], and nonuniform illumination correction methods [5,6].

In this investigation, we study the retrospective correction of the artifact caused by nonuniform illumination. Although one may attempt to correct the artifact before the acquisition of an image by making the illumination more uniform (often using special hardware devices), we focus on retrospective correction methods that correct the artifact after the acquisition of the image, since retrospective methods have the advantage of not requiring additional hardware devices. Moreover, retrospective correction is often required even if an image is acquired using a pre-correction method, since pre-correction is not able to make the illumination completely uniform. The artifact caused by nonuniform illumination acts as a slowly varying multiplicative field on the true scene, since the intensities in an image acquired by a camera represent the product of the reflectance of the true scene and the illumination falling on the scene [7,8]. Therefore, if the illumination is not uniform in the spatial domain, similar objects at different locations in the acquired image take on different intensity values, making it difficult to recognize the objects based on the intensity values. Similar artifacts may be present in magnetic resonance (MR) images due to the inhomogeneity of the applied magnetic field [9–12] and in microscopy images due to the nonuniform illumination of the light source [13,14]. To remove these artifacts, many studies have been carried out on the retrospective correction of field inhomogeneity for MR images [9–12], nonuniform illumination for microscopy images [13,14], and nonuniform shading for texture images [15].

Compared with MR and microscopy images, methods to correct bi-level images have been studied only sporadically. Although we previously studied a joint nonuniform illumination compensation and deblurring method for 1D bar code images, that method can be applied only to 1D bar code images that have a pre-determined number of edges [5]. The method cannot be applied to general bi-level images such as 2D bar code images and text images. To our knowledge, there exists no nonuniform illumination correction method that is specialized for general bi-level images. We believe the scarcity of research on bi-level images is partly due to the fact that many methods developed for MR and microscopy images can also be applied to bi-level images. These methods rely on the assumption that the true image has only a small number of intensity values, which is also valid for bi-level images.

Compensating for nonuniform illumination requires separating an intrinsic image and nonuniform illumination (or inhomogeneous field in MR images) from an observed image [5]. The separation in the spatial domain is challenging since the effect of the illumination and the reflectance of the scene are interchangeable in the observed image [8]. Consequently, methods developed for correcting the nonuniform illumination effects have often incorporated a priori information about the intrinsic image and/or the nonuniform illumination. The most frequently used a priori information is the assumption that illumination is slowly changing in the spatial domain and that the number of different intensity values of an intrinsic image is small. The latter information is useful not only for brain MR images and some microscopic images but also for bi-level images, thereby making it possible to apply existing methods to bi-level images.

Based on the assumption that the illumination is slowly varying in the spatial domain, homomorphic low pass filtering methods have been investigated [5,9,10,13]. Often, the performance of such approaches is not satisfactory since the intrinsic image also has low frequency components [5]. In addition, automatic design of the cut-off frequency of the low pass filter is a difficult problem [9,13]. Methods that utilize the two a priori facts noted above have shown improved performance compared with the homomorphic filtering method [13]. Methods have been proposed that divide the acquired image by a smooth parameterized illumination model and seek parameters that minimize the entropy of the corrected image [14] or maximize the frequency content of the estimated probability mass function (PMF) [11]. These methods are based on the observation that the PMF estimated from a true image is clustered around a small number of peaks.

There are several drawbacks to such methods. First, their performance greatly depends on the selection of design parameters such as the histogram bin size and the shape of the kernel function involved in estimating the PMF [16]. Moreover, it is difficult to determine the design parameters automatically. Second, there can be a trivial minimizer, since a zero multiplicative illumination field can generate minimum entropy. Although one might try to prevent a trivial minimizer from occurring by incorporating a penalty function in the objective function, such a penalty function requires the tuning of an additional design parameter [12]. In addition, the method often requires significant computation, since estimating the entropy and its gradient is computationally intensive and efficient minimization of the entropy is difficult. Researchers are thus investigating fast entropy minimization methods [17].

To overcome these drawbacks, we propose a penalized nonlinear least squares method based on an approximation of the maximum a posteriori (MAP) criterion under additive Gaussian noise. Dividing the observation by a parameterized illumination model based on the tensor product of third order B-spline functions [18], we seek the parameters of the illumination model and only one unknown intensity that minimize an objective function based on the degree of binariness and the roughness of the illumination. For measuring the binariness, we adopt the double-well function that was previously used for the deblurring problem [1,19]. Our contribution is not limited to the application of the double-well function to nonuniform illumination correction, but extends to the design of a carefully constructed method that does not require the estimation of the PMF and avoids a trivial minimizer. Moreover, we prove theoretically that the proposed method has a unique solution. In addition, since the proposed objective function has the special structure of nonlinear least squares, we are able to use efficient optimization algorithms for nonlinear least squares such as Levenberg-Marquardt [20].

2. Theory

2.1 Problem formulation

We model an M×N bi-level image acquired under nonuniform illumination as follows:

$$g(x_i,y_j) \sim \mathrm{Poisson}\left[s(x_i,y_j)\,f(x_i,y_j)\right], \quad i=1,\dots,M,\; j=1,\dots,N, \tag{1}$$

where (xi,yj) represents the 2D discrete pixel grid of the image along the x and y axes, g(xi,yj) denotes the acquired 2D image, s(xi,yj) is the unknown illumination, and f(xi,yj) ∈ {α1, α2} is the true bi-level image. Note that the true image f(xi,yj) takes only two unknown intensity values, α1 and α2. The noise present in the acquired image follows a Poisson distribution. However, it is well known that Poisson noise can be approximated by Gaussian noise if the photon arrival rate is sufficiently large [21].
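To make the image formation model concrete, the following sketch simulates Eq. (1) for a hypothetical binary scene under a Gaussian-shaped illumination field; the scene, the two intensity values, and the field shape are illustrative assumptions of ours, not the test images of Section 3:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 129, 129
# Hypothetical true bi-level image f with two intensity values alpha1 < alpha2.
alpha1, alpha2 = 50.0, 200.0
f = np.where(rng.random((M, N)) < 0.5, alpha1, alpha2)

# Hypothetical slowly varying, strictly positive illumination s (cf. Fig. 1(e)).
x = np.linspace(-1.0, 1.0, M)[:, None]
y = np.linspace(-1.0, 1.0, N)[None, :]
s = np.exp(-(x**2 + y**2))

# Acquired image: Poisson noise with mean s(x, y) * f(x, y), as in Eq. (1).
g = rng.poisson(s * f).astype(float)
```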

The goal of retrospective nonuniform illumination correction is an accurate estimation of s=[s(x1,y1),⋯,s(xM,yN)] and f=[f(x1,y1),⋯,f(xM,yN)] from g=[g(x1,y1),⋯,g(xM,yN)] [5]. A natural approach to achieve this goal involves seeking s and f that minimize the squared errors between the model and the acquired image. However, the least squares problem is ill-posed in the sense that the solution is not unique, since the effects of s and f are interchangeable in g [6]. Moreover, even if one can design an effective regularization method for a unique solution, the problem is still very challenging since it requires a binary optimization with two unknown binary values. To overcome these difficulties, one may apply alternative methods that divide the acquired image by a parameterized illumination model to correct the effect of illumination. Such methods seek optimal parameters by minimizing (or maximizing) an objective function that is designed based on a priori information about the true image. For example, a method that minimizes the entropy estimated from a corrected image can be applied to bi-level images [12]. This method parameterizes the inverse of the illumination h(xi,yj) using a parameter vector b, and searches for the optimal parameter that minimizes the estimated entropy as follows [14]:

$$\hat{\mathbf{b}} = \arg\min_{\mathbf{b}} \; -\sum_{k=1}^{n_b} p_g(z_k;\mathbf{b}) \log p_g(z_k;\mathbf{b}) + \lambda_r R(\mathbf{b}),$$

where pg(zk;b) is the PMF estimated from g(xi,yj)h(xi,yj;b), i=1,⋯,M, j=1,⋯,N, R(b) is a roughness penalty function for obtaining a smooth illumination field, λr is a regularization parameter, and nb is the number of nonzero entries of the estimated PMF. One may apply histogram based methods [14] or kernel density based methods [16] to estimate the PMF. Since a histogram based PMF estimate is not differentiable, kernel density based methods are often used [17]. The entropy based nonuniform illumination correction method relies on the a priori information that the entropy estimated from a true bi-level image is small, since the intensities can take only one of two values. Based on similar information, a method that maximizes the frequency content of the estimated PMF has also been investigated [11].

However, the existing methods have several drawbacks. As explained in the previous section, these methods require the estimation of the PMF, which involves many design parameters such as zk (k=1,⋯,nb), nb, and the kernel function. In addition, the methods suffer from the trivial minimizer h(xi,yj;b)=0. One may try to avoid the trivial minimizer by including an additional penalty function. For example, an objective function that penalizes the difference between the average pixel value of the observed image and that of the corrected image has been investigated [12]:

$$\hat{\mathbf{b}} = \arg\min_{\mathbf{b}} \; -\sum_{k=1}^{n_b} p_g(z_k;\mathbf{b}) \log p_g(z_k;\mathbf{b}) + \lambda_m \left( \sum_{i=1}^{M}\sum_{j=1}^{N} g(x_i,y_j)\,h(x_i,y_j;\mathbf{b}) - \sum_{i=1}^{M}\sum_{j=1}^{N} g(x_i,y_j) \right)^2 + \lambda_r R(\mathbf{b}), \tag{3}$$

where λm and λr are regularization parameters. However, the method requires the tuning of two regularization parameters λm and λr as well as design parameters in estimating the PMF. In addition, an effective optimization of the entropy is challenging [17].

2.2 Proposed method

We first parameterize the inverse of the unknown illumination using the following tensor product of uniformly spaced cubic B-spline functions that has been used to effectively model 2D continuous functions [18,22]:

$$h(x_i,y_j;\mathbf{b}) = \sum_{k=1}^{P}\sum_{l=1}^{Q} b_{k,l}\, \beta^3\!\left(x_i/h_x - l_k\right) \beta^3\!\left(y_j/h_y - l_l\right), \quad i=1,\dots,M,\; j=1,\dots,N, \tag{4}$$

where β3(·) is the third order B-spline function centered at zero, b=[b1,1, b1,2, …, bP,Q] denotes the coefficients for the tensor product of the 1D B-spline functions, P and Q are the numbers of the 1D B-spline functions along the x and y axes, hx and hy are the knot spacings along the x and y axes, and lk, ll, k=1,⋯,P, l=1,⋯,Q denote the uniformly distributed center locations of the 1D B-spline functions along the x and y axes, respectively. Note that hx and hy control the spacing between B-spline functions [22]. We impose positivity constraints on b to ensure that the model is positive everywhere. The B-spline model in Eq. (4) has been shown to be effective for approximating a continuous function when the coefficients are properly selected, and has been previously used to model nonuniform illumination [5].
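As an illustrative sketch (function names and grid sizes are our own), the tensor-product model h(xi,yj;b) can be evaluated by combining two banks of 1D cubic B-splines; since cubic B-splines form a partition of unity at interior points, a constant coefficient grid yields a constant model there:

```python
import numpy as np

def bspline3(t):
    """Third order (cubic) B-spline centered at zero; support is [-2, 2]."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2.0 / 3.0 - t[m1] ** 2 + t[m1] ** 3 / 2.0
    out[m2] = (2.0 - t[m2]) ** 3 / 6.0
    return out

def inverse_illum(b, x, y, hx, hy):
    """h(x_i, y_j; b): tensor product of uniformly spaced cubic B-splines.
    b is a P x Q coefficient grid; knots sit at 0, hx, 2*hx, ... along x."""
    P, Q = b.shape
    Bx = bspline3(x[:, None] / hx - np.arange(P)[None, :])  # len(x) x P
    By = bspline3(y[:, None] / hy - np.arange(Q)[None, :])  # len(y) x Q
    return Bx @ b @ By.T

# With all coefficients equal to 1, interior values equal 1 (partition of unity).
h = inverse_illum(np.ones((8, 8)), np.array([48.0]), np.array([48.0]), 16, 16)
```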

We approximate the Poisson noise present in the acquired image by additive Gaussian noise based on the assumption that the number of photons at each pixel grid is sufficiently large. With the assumption and the parameterized illumination model, the MAP estimation of the true image and the illumination can be achieved by determining the following:

$$\underset{\substack{\mathbf{b}>0,\;\alpha_1>0,\;\alpha_2>0 \\ f(x_i,y_j)\in\{\alpha_1,\alpha_2\}}}{\text{minimize}} \; \sum_{i=1}^{M}\sum_{j=1}^{N} \left( g(x_i,y_j) - \frac{f(x_i,y_j)}{h(x_i,y_j;\mathbf{b})} \right)^2. \tag{5}$$

Solving the nonlinear least squares problem defined in Eq. (5) is challenging since the two intensity values α1, α2 are unknown. Even if α1 and α2 are known, the solution requires a binary optimization, since the value of f(xi,yj) must be either α1 or α2 at every (xi,yj), i=1,…,M, j=1,…,N. Note that the binary optimization is challenging since the constraint set is not convex [2]. In addition, the problem is ill-posed in the sense that the minimizer is not unique. If f(xi,yj), consisting of the two intensity values α1 and α2, and h(xi,yj;b) are a minimizer of the problem defined in Eq. (5), then, for any arbitrary constant c>0, the two intensity values cα1 and cα2 together with h(xi,yj;cb) would also be a minimizer. To overcome this difficulty, we solve a minimization problem that includes only one intensity value as follows:

$$\underset{\substack{\mathbf{b}>0,\;\alpha>0 \\ f(x_i,y_j)\in\{\alpha,\,1+\alpha\}}}{\text{minimize}} \; \sum_{i=1}^{M}\sum_{j=1}^{N} \frac{1}{h(x_i,y_j;\mathbf{b})^2} \left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - f(x_i,y_j) \right)^2. \tag{6}$$

The nonlinear least squares formulation with only one intensity value is sufficient, since the difference between the two intensity values is absorbed into the magnitude of the inverse illumination. (We will show in the subsequent section that the solution of our method based on Eq. (6) is unique.) It is clear that the estimation of b and α based on the minimization problem defined in Eq. (6) is equivalent to the following:

$$(\hat{\mathbf{b}}, \hat{\alpha}) = \arg\min_{\mathbf{b}>0,\;\alpha>0} \sum_{i=1}^{M}\sum_{j=1}^{N} \min\left\{ \frac{\left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - \alpha \right)^2}{h(x_i,y_j;\mathbf{b})^2}, \; \frac{\left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - (1+\alpha) \right)^2}{h(x_i,y_j;\mathbf{b})^2} \right\}, \tag{7}$$

where min{a,b} denotes the minimum of a and b. We approximate the minimization problem defined in Eq. (7) by the one defined in Eq. (8) under a low noise assumption. We argue the validity of this approximation using a similar argument developed previously [23] (see Appendix A).

$$(\hat{\mathbf{b}}, \hat{\alpha}) = \arg\min_{\mathbf{b}>0,\;\alpha>0} \sum_{i=1}^{M}\sum_{j=1}^{N} \frac{1}{h(x_i,y_j;\mathbf{b})^2} \left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - \alpha \right)^2 \left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - (1+\alpha) \right)^2. \tag{8}$$

After solving the problem defined in Eq. (8), one can correct the nonuniform illumination by multiplying g(xi,yj) by h(xi,yj;b̂), i.e., multiplying the acquired image by the estimated inverse of the illumination. We further incorporate the important a priori information that the illumination is slowly varying in the spatial domain through a roughness penalty function. The proposed method with the roughness penalty function is defined as an estimation of the (PQ+1)×1 parameter vector θ=[b,α]T by nonlinear least squares as follows:

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \sum_{i=1}^{M}\sum_{j=1}^{N} f(x_i,y_j;\boldsymbol{\theta})^2 + \lambda_r R(\boldsymbol{\theta}), \tag{9}$$

where,

$$f(x_i,y_j;\boldsymbol{\theta}) = \frac{1}{h(x_i,y_j;\mathbf{b})} \left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - \alpha \right) \left( h(x_i,y_j;\mathbf{b})\,g(x_i,y_j) - (1+\alpha) \right), \tag{10}$$

where R(θ) is a roughness penalty function that measures the roughness of the illumination and λr>0 is a regularization parameter. One possible choice of the penalty function can be defined as follows:

$$R(\boldsymbol{\theta}) = \frac{1}{2}\boldsymbol{\theta}^T \mathbf{R} \boldsymbol{\theta} = \sum_{i=1}^{P-1}\sum_{j=1}^{Q-1} \left( b_{i+1,j} - b_{i,j} \right)^2 + \left( b_{i,j+1} - b_{i,j} \right)^2,$$

where R denotes the matrix representation of the roughness penalty function.
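A direct transcription of the double sum above (a sketch of our own; the coefficient grid below is hypothetical) is:

```python
import numpy as np

def roughness(b):
    """Sum of squared first differences of the P x Q coefficient grid,
    following the double sum over i = 1..P-1, j = 1..Q-1."""
    dx = b[1:, :-1] - b[:-1, :-1]   # b[i+1, j] - b[i, j]
    dy = b[:-1, 1:] - b[:-1, :-1]   # b[i, j+1] - b[i, j]
    return float(np.sum(dx**2) + np.sum(dy**2))
```

A spatially uniform coefficient grid incurs zero penalty, so the roughness term discourages only rapid variation of the estimated illumination.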

The proposed method does not suffer from the trivial minimizer problem since the 1/h(xi,yj;b)2 term in the objective function prevents the illumination from being zero. In addition, it does not require the tuning of design parameters except for one regularization parameter, which is also required in the existing methods.

We have formulated the minimization problem in Eq. (9) as an unconstrained optimization problem without the positivity constraints, since the constraints were not violated in our simulations and experiments. By doing this, the theoretical analysis in the subsequent section becomes simpler. In practice, we solved the minimization problem without the constraints using the Levenberg-Marquardt method, which has been shown to be effective for nonlinear least squares problems [20].
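The nonlinear least squares structure can be handed to an off-the-shelf Levenberg-Marquardt solver. The sketch below is not the authors' implementation: it replaces the 2D B-spline model with a hypothetical 1D quadratic inverse-illumination model and omits the roughness penalty, but the residuals follow Eq. (10):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Hypothetical noise-free 1D example: binary scene, linear inverse illumination.
x = np.linspace(0.0, 1.0, 200)
f_true = rng.integers(0, 2, x.size).astype(float)   # scene in {0, 1}
h_true = 1.0 + 0.8 * x                              # true inverse illumination
alpha_true = 0.3
g = (f_true + alpha_true) / h_true                  # h_true * g lies in {0.3, 1.3}

def residuals(theta):
    b0, b1, b2, alpha = theta
    h = np.maximum(b0 + b1 * x + b2 * x**2, 1e-3)   # guard against h <= 0
    # Residual of Eq. (10): (1/h) * (h*g - alpha) * (h*g - (1 + alpha)).
    return (1.0 / h) * (h * g - alpha) * (h * g - (1.0 + alpha))

theta0 = np.array([1.0, 0.0, 0.0, 0.5])
fit = least_squares(residuals, theta0, method='lm')
```

At the true parameters the residuals vanish, since h(x)g(x) takes only the two values α and 1+α; the solver needs only the residual vector, and the sum-of-squares structure is what makes Levenberg-Marquardt applicable.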

2.3 Well-posedness of the proposed method

To prove the well-posedness of the nonlinear least squares problem defined in Eq. (9), it suffices to show that the following approximate Hessian of the objective function is positive definite [24,25]:

$$H(\boldsymbol{\theta}) = J(\boldsymbol{\theta})^T J(\boldsymbol{\theta}) + \lambda_r \mathbf{R},$$

where J(θ) is the MN×(PQ+1) Jacobian matrix of the 1D column-stacked version of f(xi,yj;θ). Each element of J(θ) is defined as follows:

$$\left[J(\boldsymbol{\theta})\right]_{k,l} = \frac{\partial f(x_i,y_j;\boldsymbol{\theta})}{\partial \theta_l},$$

where k=(j-1)M+i. From Eq. (12), unless all the elements of b have the same value, the Hessian is positive definite since θT R θ>0. Therefore, it suffices to show that the Hessian is still positive definite for the case of bu=[bu,…,bu] such that h(xi,yj;bu)=c, i.e., spatially uniform illumination. In this case, the kth element of the 1×MN vector θTJ(θ)T is defined as follows:

$$\left[\boldsymbol{\theta}^T J(\boldsymbol{\theta})^T\right]_k = \alpha \frac{\partial f(x_i,y_j;\alpha,\mathbf{b}_u)}{\partial \alpha} + \sum_{l=1}^{PQ} b_u \frac{\partial f(x_i,y_j;\alpha,\mathbf{b}_u)}{\partial b_l},$$

where k=(j-1)M+i. Note that unless [θTJ(θ)T]k=0 for all (xi,yj), the Hessian is positive definite. It is not difficult to show that the condition [θTJ(θ)T]k=0 for all (xi,yj) can be rewritten as follows:

$$\frac{cg(x_i,y_j)-\alpha}{c}\left[ \left(cg(x_i,y_j)-\alpha\right) + \left(cg(x_i,y_j)-(1+\alpha)\right) \right] - \frac{1}{c}\left(cg(x_i,y_j)-\alpha\right)\left(cg(x_i,y_j)-(1+\alpha)\right) = 0, \quad \forall (x_i,y_j). \tag{15}$$

When the observation takes more than two different values, Eq. (15) cannot be satisfied, since g(xi,yj) would have to be a root of a quadratic equation, which has at most two solutions. Therefore, the nonlinear least squares problem is well-posed in this case. On the other hand, even if the observation has only two different intensity values, Eq. (15) is satisfied only in the following case:

$$g(x_i,y_j) = \alpha/c, \quad \forall (x_i,y_j). \tag{16}$$

Therefore, unless the observation g(xi,yj) is a constant, i.e., a uniform image, the nonlinear least squares problem is well-posed and its solution is unique. Note that if the acquired image is a uniform image, separating illumination and the true image from the observed image is, by nature, not possible.

3. Results

3.1 Simulations

We first conducted simulations using test images. Figures 1(a) and 1(b) show a 129×129 pixel bar code image and a 129×257 pixel text image, respectively. These images were generated by multiplying the synthetic nonuniform illumination fields shown in Figs. 1(e) and 1(f) by the true bi-level images (not shown for brevity). We also generated Poisson noise as defined in Eq. (1). The SNR of each image was 25 dB. Figures 1(c) and 1(d) show the results of binarization for the images shown in Figs. 1(a) and 1(b), respectively. We determined a global threshold manually such that the bit error rate (BER) of the binarization was minimized. We computed the BER as the percentage of incorrectly binarized pixels out of the total number of pixels. As shown in Figs. 1(c) and 1(d), the results of the binarization were not satisfactory due to nonuniform illumination, making automatic recognition of the images difficult.
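The BER criterion used above can be computed as follows (a small sketch of our own; `img` and `truth` are hypothetical arrays). It scans all candidate global thresholds and keeps the one with the fewest misclassified pixels, considering both binarization polarities:

```python
import numpy as np

def best_global_ber(img, truth):
    """Bit error rate (%) under the best possible global threshold.
    truth is a boolean array marking the bright pixels of the true image."""
    best = 100.0
    for t in np.unique(img):
        binar = img >= t
        err = np.mean(binar != truth)
        best = min(best, 100.0 * min(err, 1.0 - err))  # allow either polarity
    return best

img = np.array([0.10, 0.20, 0.90, 1.00])
truth = np.array([False, False, True, True])
ber = best_global_ber(img, truth)   # a perfect threshold exists here
```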

Fig. 1. Images under nonuniform illumination: (a) bar code image (b) text image (c) the bar code image after binarization (d) the text image after binarization (e) synthetic Gaussian shape illumination applied to the bar code image (f) synthetic linear shape illumination applied to the text image.

We then applied the proposed method to the images shown in Figs. 1(a) and 1(b). To model the inverse of the illumination, we placed 11×11 and 11×19 B-spline functions (equivalently, hx=16 and hy=16) for Figs. 1(a) and 1(b), respectively. We determined the number of B-spline functions (equivalently, the spacing between B-spline functions) empirically, considering both the speed of optimization and accuracy. If the number of B-spline functions is too large, optimization becomes more difficult [2]. Conversely, if the number of B-spline functions is too small, the B-spline based model may not be able to estimate the nonuniform illumination accurately. Figures 2(a) and 2(b) show the corrected images of Figs. 1(a) and 1(b), respectively, and Figs. 2(e) and 2(f) show the estimated inverses of the illumination for the two images. One can see that the estimated inverses of the illuminations have shapes similar to the inverses of the true illuminations shown in Figs. 1(e) and 1(f). Figures 2(c) and 2(d) show the results of binarization using the corrected images shown in Figs. 2(a) and 2(b). It is clear that the results are greatly improved compared with those shown in Figs. 1(c) and 1(d).

Fig. 2. Nonuniform illumination correction using the proposed method: (a) corrected bar code image (λr=1e -7) (b) corrected text image (λr=1e -5) (c) the bar code image after binarization (d) the text image after binarization (e) estimated inverse illumination for the bar code image (f) estimated inverse illumination for the text image.

For comparison, we applied the entropy based method defined in Eq. (3) to the images shown in Figs. 1(a) and 1(b). For the entropy based method, we estimated the PMF using the kernel density method as follows:

$$p_g(z_k) = \sum_{i=1}^{M}\sum_{j=1}^{N} K_\sigma\!\left( z_k - g(x_i,y_j)\,h(x_i,y_j;\mathbf{b}) \right), \quad k=1,\dots,n_b,$$

where Kσ(·) is a Gaussian kernel function with standard deviation σ. Note that there exist several design parameters, including σ, the minimum and maximum of the range of the PMF, and the number of PMF entries nb.
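A minimal sketch of this PMF estimate and the resulting entropy (the bin centers, σ, and the sample values below are hypothetical choices of ours):

```python
import numpy as np

def kde_pmf_entropy(values, z, sigma):
    """Gaussian-kernel PMF estimate on bin centers z, and its entropy.
    values are the corrected intensities g * h; sigma is the kernel width."""
    K = np.exp(-0.5 * ((z[:, None] - values[None, :]) / sigma) ** 2)
    p = K.sum(axis=1)
    p /= p.sum()                       # normalize to a probability mass function
    nz = p > 0
    return p, float(-np.sum(p[nz] * np.log(p[nz])))

z = np.linspace(0.0, 1.0, 64)
clustered = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])  # bi-level-like
spread = np.linspace(0.0, 1.0, 100)                               # badly corrected
_, e_clustered = kde_pmf_entropy(clustered, z, sigma=0.02)
_, e_spread = kde_pmf_entropy(spread, z, sigma=0.02)
```

Intensities clustered at two levels yield a lower entropy than spread-out intensities, which is exactly the property the entropy based method exploits; note how the result hinges on the choices of z and σ.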

Figure 3(a) shows the corrected image using the entropy based method (σ=0.2, λm=1e-5, λr=1e-3). Owing to the difficulty of choosing effective design parameters, the corrected image shown in Fig. 3(a), its binarization shown in Fig. 3(c), and the estimated illumination field shown in Fig. 3(e) are significantly different from the true image and illumination. We observed a similar phenomenon for the image shown in Fig. 1(b) (σ=0.01, λm=1e-5, λr=1e-3). Neither the corrected image shown in Fig. 3(b) nor its binarization shown in Fig. 3(d) is close to the true image, since the estimated inverse of the illumination shown in Fig. 3(f) is significantly different from the true field.

Fig. 3. Nonuniform illumination correction using the entropy based method: (a) result for the bar code image after applying the entropy based method (b) result for the text image after applying the entropy based method (c) binarization for the bar code image (d) binarization for the text image (e) estimated inverse illumination for the bar code image (f) estimated inverse illumination for the text image.

To quantify the performance of the proposed method and the entropy based method, we computed the correlation coefficient between the corrected image and the true image to measure the degree of similarity between the two images. We added Poisson noise of different powers to the bar code image of Fig. 1(a) and the text image of Fig. 1(b) to simulate signals with different signal to noise ratios (SNR). We repeated the simulations for 50 different noise realizations and computed the average of the correlation coefficient values. In addition, considering that our goal is to restore the original binary image as closely as possible, we also computed BER values after binarization as a performance criterion. Table 1 and Table 2 show the correlation coefficient and BER values for the bar code image and the text image, respectively. For the entropy based method, we computed the correlation coefficients and BER for three different standard deviations of the kernel function to demonstrate the effect of the design parameter σ. All the other design parameters, such as zk (k=1,⋯,nb), nb, and λm, were determined by trial and error so that the performance was best. Note that it is challenging to find optimal design parameters in practice since the ground truth image is not known.
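The similarity measure reduces to the Pearson correlation coefficient over flattened images, e.g. (a helper of our own):

```python
import numpy as np

def image_corrcoef(a, b):
    """Pearson correlation coefficient between two images of equal shape."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

a = np.arange(12.0).reshape(3, 4)
r = image_corrcoef(a, 2.0 * a + 3.0)   # affine-related images correlate fully
```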

As shown in Table 1, the proposed method yielded images more similar to the true image than the entropy based method (in the sense that the correlation coefficient values of the proposed method were higher than those of the entropy based method) in all simulations. In addition, the proposed method exhibited significantly lower BER values than the entropy based method in all simulations. Even with the manually tuned optimal design parameters, the performance of the entropy based method was worse than that of the proposed method.

A similar phenomenon occurred for the text image. The correlation coefficient values of the proposed method were higher than those of the entropy based method. In addition, the BER values of the proposed method were lower in all simulations except for one low SNR case (SNR=15 dB, σ=0.1). For that case, although the proposed method yielded an image more similar to the true image, the BER of the proposed method was slightly higher than that of the entropy based method. We suspect that this is due to the nonlinear nature of binary thresholding.

Table 1. Correlation coefficient (BER) values for the bar code image (the unit for BER is %; for the proposed method, λr=1e-7, and for the entropy based method, λr=1e-3, λm=1e-5, z1=4, znb=28, nb=128).

Table 2. Correlation coefficient (BER) values for the text image (the unit for BER is %; for the proposed method, λr=1e-5, and for the entropy based method, λr=1e-3, λm=1e-5, z1=4, znb=28, nb=128).

We also compared the convergence rate of the proposed method with that of the entropy based method. Figures 4(a) and 4(b) show the changes of the objective functions of the two methods for the bar code and text images during optimization (for display purposes, we normalized the values of the objective functions). While the proposed method converged to the minimum within a few iterations, the entropy based method required significantly more iterations than the proposed method. We think that the fast convergence of the proposed method is due to the special form of its nonlinear least squares type objective function.

Fig. 4. Change of the objective functions for the (a) bar code image (for the proposed method λr=1e-7, and for the entropy based method λr=1e-3, λm=1e-5, σ=0.1) and the (b) text image (for the proposed method λr=1e-5, and for the entropy based method λr=1e-3, λm=1e-5, σ=0.1).

3.2 Experiments

To demonstrate the effectiveness of the proposed method for real images, we applied the method to images acquired by a digital camera (Canon EOS-40D). Figure 5(a) shows an acquired QR bar code image and Fig. 5(b) shows a document image under nonuniform illumination. As shown in these figures, the bar code and the document image have severe artifacts due to nonuniform illumination. As a result, it is difficult to decode the bar code in Fig. 5(a) and to recognize the characters in the dark region of Fig. 5(b). Figures 5(c) and 5(d) show the corrected images using the proposed method, and Figs. 5(e) and 5(f) show the estimated inverses of the illumination. It is clear that the two images are significantly improved and more easily recognizable after the correction.

Fig. 5. Real images under nonuniform illumination and the result of applying the proposed correction method: (a) QR bar code image (b) document image (c) corrected image for the QR bar code image (d) corrected image for the document image (e) estimated inverse illumination for the QR bar code image (f) estimated inverse illumination for the document image.

4. Discussion

The proposed nonuniform illumination correction method has advantages over conventional methods since it does not require the tuning of design parameters related to the estimation of the PMF, nor does it suffer from a trivial minimizer. Further, the convergence of the proposed method was significantly faster than that of the conventional methods. In our simulations, the proposed method showed improved performance compared with the entropy based method in terms of both accuracy and speed. Even with manually determined optimal design parameters, the entropy based method exhibited lower correlation coefficient values and higher BER values than the proposed method. We think that the superior performance of the proposed method arises from its approximation of the statistically efficient MAP criterion [26]. It has been reported that the entropy based method is not statistically efficient in image registration [27].

Although we were able to successfully correct the effect of nonuniform illumination for bi-level images, deblurring is also necessary for better understanding of the images. Previously, we proposed a method that can simultaneously estimate the nonuniform illumination and perform deblurring for 1D bar code images [5]. However, the method was limited to bi-level images that can be parameterized using the locations of edges. Since bi-level images such as text images and 2D bar code images are difficult to parameterize, a novel joint nonuniform illumination correction and deblurring method is desired. We plan to investigate such a method by combining Richardson-Lucy based blind deblurring methods [28–30] and nonuniform illumination correction methods to estimate the unknown blur, true image, and nonuniform illumination simultaneously.

In the simulations and experiments, to compute BER, we determined a global threshold that produces the lowest BER. One may apply other thresholding methods considering nonuniform illumination for improving the performance of binarization [31]. In that case, BER can be improved without nonuniform illumination compensation. Investigating such a method is not within the scope of this paper.

In the simulations and experiments, we empirically showed that the proposed method converges faster than the entropy based method. We believe that the difference is due to the fact that the proposed objective function has the special form of the sum of squares. We plan to conduct a theoretical study comparing the convergence of the entropy based method and that of the nonlinear least squares method. We expect that such study will be useful for not only nonuniform illumination correction but also other image processing methods based on entropy and nonlinear least squares such as image deblurring and image registration.

5. Conclusion

We investigated a novel correction method for nonuniform illumination on bi-level images using penalized nonlinear least squares. In contrast to existing methods, the proposed method does not suffer from a trivial minimizer nor does it require the manual tuning of design parameters. We also showed, theoretically, that the proposed method has a unique solution. In both simulations and experiments, the proposed method showed improved performance compared with the conventional entropy based method in terms of accuracy and speed. Due to its high accuracy and fast computation, we expect that the proposed method will be useful for correction of nonuniform illumination effects present on many bi-level images such as text images and bar code images.

Appendix A

We follow a similar approximation developed previously [23]. Without loss of generality, we assume that the minimum of the two terms in the objective function in Eq. (7) is the first term, such that

min{1h(xi,yi;b)2(h(xi,yj;b)g(xi,yj)α)2,1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)(1+α))2}
=1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)α)2.

Let ε=h(xi,yj; b)g(xi,yj)-α. Then the following quantity defined in Eq. (A.2) can be rewritten by Eq. (A.3):

1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)α)2(h(xi,yj;b)g(xi,yj)(1+α))2
=1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)α)2(12ε+ε2).

In case of ε≈0, since Eq. (A.3) can be approximated by Eq. (A.1), we have the following approximation:

min{1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)α)2,1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)(1+α))2}
1h(xi,yj;b)2(h(xi,yj;b)g(xi,yj)α)2(h(xi,yj;b)g(xi,yj)(1+α))2.

Note that the approximation is valid only when error is sufficiently small, i.e., low noise case.
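A quick numerical check of this approximation confirms that the relative error of the product surrogate shrinks linearly with ε. This is a sanity sketch, not part of the original derivation; h = 1 and the value of α are arbitrary choices:

```python
# Check: for small eps = h*g - alpha (here with h = 1), the product of the
# two quadratic terms approximates their minimum, with relative error ~ 2*eps.
alpha = 0.3
for eps in (1e-2, 1e-3, 1e-4):
    g = alpha + eps                      # a pixel close to the "dark" level
    term_a = (g - alpha) ** 2            # = eps^2, the smaller term
    term_b = (g - (1 + alpha)) ** 2      # = (eps - 1)^2 = 1 - 2*eps + eps^2
    exact = min(term_a, term_b)          # left-hand side of Eq. (A.4)
    approx = term_a * term_b             # right-hand side of Eq. (A.4)
    rel_err = abs(approx - exact) / exact
    assert rel_err < 3 * eps             # error shrinks linearly with eps
```

The check mirrors Eq. (A.3) exactly: the surrogate equals the true minimum times (1 − 2ε + ε²), so the relative error is |−2ε + ε²| ≈ 2ε.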

Acknowledgments

This work was supported by the Korea Research Foundation grant funded by the Korean Government (KRF-2008-331-D00419) and the Korea Science and Engineering Foundation grant (KOSEF R17-2008-041-01001-0) funded by the Korean Government.

References and links

1. T. H. Li and K. S. Lii, “A joint estimation approach for two-tone image deblurring by blind deconvolution,” IEEE Trans. Image Process. 11(8), 847–858 (2002).

2. E. Y. Lam, “Blind bi-level image restoration with iterated quadratic programming,” IEEE Trans. Circ. Syst. 52(Part 2), 52–56 (2007).

3. Y. Shen, E. Y. Lam, and N. Wong, “Binary image restoration by positive semidefinite programming,” Opt. Lett. 32(2), 121–123 (2007).

4. T. F. Chan, S. Esedoglu, and M. Nikolova, “Finding the global minimum for binary image restoration,” in Proceedings of IEEE International Conference on Image Processing (ICIP, 2005), pp. 121–124.

5. J. Kim and H. Lee, “Joint nonuniform illumination estimation and deblurring for bar code signals,” Opt. Express 15(22), 14817–14837 (2007).

6. M. S. Brown and Y. C. Tsoi, “Geometric and shading correction for images of printed materials using boundary,” IEEE Trans. Image Process. 15(6), 1544–1554 (2006).

7. D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice Hall, 2003).

8. T. Chen, W. Yin, X. S. Zhou, D. Comaniciu, and T. S. Huang, “Total variation models for variable lighting face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 28(9), 1519–1524 (2006).

9. Z. Hou, “A review on MR image inhomogeneity correction,” Int. J. Biomed. Imaging 2006, 1–11 (2006).

10. B. H. Brinkmann, A. Manduca, and R. A. Robb, “Optimized homomorphic unsharp masking for MR grayscale inhomogeneity correction,” IEEE Trans. Med. Imaging 17(2), 161–171 (1998).

11. J. G. Sled, A. P. Zijdenbos, and A. C. Evans, “A nonparametric method for automatic correction of intensity nonuniformity in MRI data,” IEEE Trans. Med. Imaging 17(1), 87–97 (1998).

12. J. F. Mangin, “Entropy minimization for automatic correction of intensity nonuniformity,” in IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA, 2000), pp. 162–169.

13. D. Tomaẑevič, B. Likar, and F. Pernuš, “Comparative evaluation of retrospective shading correction methods,” J. Microsc. 208(Pt 3), 212–223 (2002).

14. B. Likar, J. B. A. Maintz, M. A. Viergever, and F. Pernuš, “Retrospective shading correction based on entropy minimization,” J. Microsc. 197(Pt 3), 285–295 (2000).

15. H.-L. Shen and K. Li, “Decomposition of shading and reflectance from a texture image,” Opt. Lett. 34(1), 64–66 (2009).

16. B. W. Silverman, Density Estimation for Statistics and Data Analysis (Chapman and Hall, 1985).

17. Q. Ji, J. O. Glass, and W. E. Reddick, “A novel, fast entropy-minimization algorithm for bias field correction in MR images,” Magn. Reson. Imaging 25(2), 259–264 (2007).

18. M. Unser, “Splines: A perfect fit for signal and image processing,” IEEE Signal Process. Mag. 16(6), 22–38 (1999).

19. S. Esedoglu, “Blind deconvolution of bar code signals,” Inverse Probl. 20(1), 121–135 (2004).

20. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C++, 2nd ed. (Cambridge, 2005).

21. A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes, 4th ed. (McGraw-Hill, 2002).

22. J. Kybic, P. Thévenaz, A. Nirkko, and M. Unser, “Unwarping of unidirectionally distorted EPI images,” IEEE Trans. Med. Imaging 19(2), 80–93 (2000).

23. O. Grellier and P. Comon, “Blind separation of discrete sources,” IEEE Signal Process. Lett. 5(8), 212–214 (1998).

24. J. Eriksson, P. A. Wedin, M. E. Gulliksson, and I. Söderkvist, “Regularization methods for uniformly rank deficient nonlinear least-squares problems,” J. Optim. Theory Appl. 127(1), 1–26 (2005).

25. J. A. Fessler, “Penalized weighted least-squares image reconstruction for positron emission tomography,” IEEE Trans. Med. Imaging 13(2), 290–300 (1994).

26. H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I (Wiley, 1968).

27. J. Kim and J. A. Fessler, “Intensity-based image registration using robust correlation coefficients,” IEEE Trans. Med. Imaging 23(11), 1430–1444 (2004).

28. J. Zhang, Q. Zhang, and G. He, “Blind deconvolution: multiplicative iterative algorithm,” Opt. Lett. 33(1), 25–27 (2008).

29. T. J. Holmes, “Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach,” J. Opt. Soc. Am. A 9(7), 1052–1061 (1992).

30. D. A. Fish, A. M. Brinicombe, E. R. Pike, and J. G. Walker, “Blind deconvolution by means of the Richardson–Lucy algorithm,” J. Opt. Soc. Am. A 12(1), 58–65 (1995).

31. S. Lu and C. L. Tan, “Binarization of badly illuminated document images through shading estimation and compensation,” in Proceedings of IEEE International Conference on Document Analysis and Recognition (ICDAR, 2007), pp. 312–316.



Figures (5)

Fig. 1. Images under nonuniform illumination: (a) bar code image (b) text image (c) the bar code image after binarization (d) the text image after binarization (e) synthetic Gaussian-shape illumination applied to the bar code image (f) synthetic linear-shape illumination applied to the text image.

Fig. 2. Nonuniform illumination correction using the proposed method: (a) corrected bar code image (λ_r = 1e-7) (b) corrected text image (λ_r = 1e-5) (c) the bar code image after binarization (d) the text image after binarization (e) estimated inverse illumination for the bar code image (f) estimated inverse illumination for the text image.

Fig. 3. Nonuniform illumination correction using the entropy-based method: (a) result for the bar code image (b) result for the text image (c) binarization of the bar code image (d) binarization of the text image (e) estimated inverse illumination for the bar code image (f) estimated inverse illumination for the text image.

Fig. 4. Change of the objective functions for (a) the bar code image (proposed method: λ_r = 1e-7; entropy-based method: λ_r = 1e-3, λ_m = 1e-5, σ = 0.1) and (b) the text image (proposed method: λ_r = 1e-5; entropy-based method: λ_r = 1e-3, λ_m = 1e-5, σ = 0.1).

Fig. 5. Real images under nonuniform illumination and the results of applying the proposed correction method: (a) QR bar code image (b) document image (c) corrected image for the document image (d) corrected image for the QR bar code image (e) estimated inverse illumination for the QR bar code image (f) estimated inverse illumination for the document image.

Tables (2)

Table 1. Correlation coefficient (BER) values for the bar code image. (BER is in %. For the proposed method, λ_r = 1e-7; for the entropy-based method, λ_r = 1e-3, λ_m = 1e-5, z_1 = 4, z_{n_b} = 28, n_b = 128.)

Table 2. Correlation coefficient (BER) values for the text image. (BER is in %. For the proposed method, λ_r = 1e-5; for the entropy-based method, λ_r = 1e-3, λ_m = 1e-5, z_1 = 4, z_{n_b} = 28, n_b = 128.)
