Semi-Huber potential function for image segmentation


Abstract

In this work, a novel Markov Random Field (MRF) model is introduced. The model is based on a proposed Semi-Huber potential function and is applied successfully to image segmentation in the presence of noise. The main difference with respect to other half-quadratic models taken as a reference is that the proposed model has fewer and simpler parameters to tune. The idea is then to choose adequate parameter values heuristically to obtain a good segmentation of the image. In that sense, some experimental results show that the proposed model allows an easier parameter adjustment with reasonable computation times.

© 2012 Optical Society of America

1. Introduction

Segmentation can be considered the first step and an essential part of object recognition and of scene and image understanding. Its applications include industrial quality control, medicine, robot navigation, geophysical exploration, military applications and agriculture, among others. Image segmentation is an image processing method that subdivides an image into its constitutive regions or objects. The level to which this process is carried out depends on the problem to be solved; that is, segmentation stops once the objects of interest in an application have been isolated. For example, in the automated inspection of electronic assemblies it is of particular interest to analyze images in order to determine the presence or absence of anomalies such as missing components or broken paths. In this case, the segmentation process is carried out to the level required to identify these elements.

On the other hand, digital images are usually affected by degrading factors such as blurring or noise introduced by image acquisition systems or during the transmission-reception process, resulting in degraded or distorted images of the real world and, as a consequence, yielding inadequate segmentation results. A degradation process can be described by a degradation function H that, together with an additive noise term n, operates on an input image x and produces a degraded image y:

$$ y = Hx + n. \tag{1} $$
Given y, some previous knowledge about the degradation function and some knowledge about the additive noise term, the aim is to obtain an estimate of the original image x that allows a good segmentation of the regions or objects in it. The more we know about H and n, the closer the estimate will be to x [1]. The simplest case is when the degradation function H is modeled as a linear function, but it can be nonlinear in many cases, and then the problem becomes more complex. In this work it will be assumed that H is the identity operator, and we consider only degradation due to Gaussian noise.
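
As a minimal illustration of this observation model (H equal to the identity operator plus additive Gaussian noise), the following Python/NumPy sketch simulates a degraded image y from an original image x; the function name, the NumPy calls and the use of the noise variance as an argument are our own choices and are not part of the original paper.

```python
import numpy as np

def degrade(x, var_n, seed=0):
    """Simulate y = Hx + n with H = identity and zero-mean Gaussian noise.

    x     : 2-D array holding the original image
    var_n : variance (sigma_n**2) of the additive Gaussian noise
    """
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, np.sqrt(var_n), size=x.shape)  # n ~ N(0, var_n * I)
    return x + n                                        # H is the identity operator
```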

In general, segmentation methods are based on two basic properties of the pixels in relation to their local neighborhood: discontinuity and similarity [2, 3]. Methods based on the discontinuity property of the pixels are called boundary-based methods, while methods based on the similarity property are called region-based methods. Unfortunately, both of these techniques often fail to produce accurate segmentation. For example, in boundary-based methods, if an image is noisy or if its region attributes differ by only a small amount between regions, edge detection may result in spurious and broken edges. On the other hand, region-based methods always provide closed contour regions and make use of relatively large neighborhoods in order to obtain sufficient information to decide the aggregation of a pixel into a region. This tends to sacrifice resolution and detail in the image, which can result in segmentation errors at the boundaries of the regions and in failure to distinguish regions that are small in comparison with the block size used [2].

To overcome these difficulties, the use of Markov random fields (MRF) within a Bayesian framework has become a powerful method and has been applied in different areas such as medicine [4–8], texture modeling [9–11], image segmentation and restoration [12–18] and SAR imagery classification [10], [19–21]. This is because it enables posing this problem, and many others in image processing, as a statistical estimation problem [9] in which the solution is estimated from the degraded image. Prior distributions of images, which encode contextual constraints [14], can be modeled with MRF. The basic premise is that neighboring pixels are expected to have similar characteristics [10]. Because MRF models state global statistics in terms of local neighborhoods, all computations are restricted to a local window. Typical MRF algorithms visit all sites in the image in a specific order and execute a local computation at each site; this is repeated until some convergence criterion is reached [9].

Usually, the observed data (input image) are not enough for an accurate estimation of the original image, so regularization of the problem is necessary. This means that a priori information or assumptions about the structure of x need to be introduced into the estimation process [16]. The a priori knowledge is given in terms of a probability distribution. This distribution, together with a probabilistic description of the noise that corrupts the observations, allows the use of Bayes theory to compute the posterior distribution, which represents the likelihood of a solution x given the observations y [22].

Statistical methods look for the solution that best matches the probabilistic behavior of the data. Maximum likelihood (ML) estimation selects the reconstruction that most closely matches the available data. Maximum a posteriori (MAP) estimation allows the introduction of a prior distribution that reflects knowledge or beliefs concerning the types of images acceptable as estimates of the original one [4]. There is a wide variety of MRF models; the difference between them lies in the choice of the potential function. Each of them characterizes the interactions between pixels in the same local group by assigning a larger cost to configurations of pixels which are less likely to occur. The main idea is to avoid the excessive penalties imposed, for instance, by the Gaussian quadratic potential function, which tends to blur edges due to the high cost of abrupt transitions [5].

In this work, we introduce a novel potential function, named the semi-Huber MRF, as the basis of a new algorithm for image segmentation. The main advantage of this model is that fewer hyperparameters have to be tuned for an adequate result than in the other models that were taken as a reference to verify the results of the proposed one. Section 2 provides an overview of the Bayesian approach. Section 3 includes the theoretical basis of Markov random fields and describes the MAP estimators used in this work as a reference. A description of the proposed model and its corresponding MAP estimator is provided in Section 4. Some results and comments for the image segmentation experiments are presented in Section 5. Finally, in Section 6 some conclusions are given.

2. The Bayesian approach

Problems consisting of finding the solution of a model to restore some or all features of the objects in an image, using assumptions about the real world, are called inverse problems. A common approach to solving this kind of problem is Bayesian modeling. A Bayesian model is a statistical description of an estimation problem that consists of three components. The first component, the prior model p(x), is a probabilistic description of the real world, or of the properties we are trying to estimate, before collecting data. The second component, the sensor model p(y|x), is a description of the noise behavior or stochastic characteristics that relate the original state x to the sampled input image or sensor values y. These two components can be combined to obtain the third component, the posterior model p(x|y), which is a probabilistic description of the current estimate of the original scene x given the observed data y. This model is obtained using the Bayes rule:

$$ p(x|y) = \frac{p(y|x)\,p(x)}{p(y)}, \tag{2} $$
where p(y) is the density function of y and is constant once the observed image is given [21].
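
The following toy Python snippet, with entirely made-up numbers, only illustrates how Bayes' rule combines a prior and a likelihood into a posterior for a single pixel with two candidate labels; it is not taken from the paper.

```python
# Toy illustration of Bayes' rule for one pixel with two possible labels.
prior = {"object": 0.3, "background": 0.7}            # p(x), assumed
likelihood = {"object": 0.8, "background": 0.1}       # p(y|x) for the observed y, assumed
evidence = sum(likelihood[k] * prior[k] for k in prior)            # p(y)
posterior = {k: likelihood[k] * prior[k] / evidence for k in prior}  # p(x|y)
print(posterior)  # approx {'object': 0.774, 'background': 0.226}
```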

To use Bayesian modeling in image processing, it is necessary to somehow encode the smoothness inherent in the image. This can be done by describing the correlation between adjacent pixels of the image in the prior model, and a simple method for modeling such correlation is the Markov random field [23].

In its usual application [12], Bayesian modeling is used to find the maximum a posteriori (MAP) estimate, that is, the value of x that maximizes the conditional probability p(x|y). It is one of the most efficient and most widely used estimators [4, 5, 8, 9, 18], defined by:

$$ \hat{x}_{\mathrm{MAP}} = \arg\max_{x \in \mathbb{X}} \{ p(x|y) \} = \arg\max_{x \in \mathbb{X}} \{ \log p(y|x) + \log g(x) \}, \tag{3} $$
where g(x) is an MRF function that models prior information about the phenomena to be estimated as a probability distribution, 𝕏 is the set of pixels capable of maximizing p(x|y), and p(y|x) is the likelihood function of y given x [24].

3. Markov random fields and MAP estimators

In this section, some previous concepts about Markov random fields are given. Then we present and describe some existing MRF models that were taken as a reference in order to evaluate the performance of our proposal. Finally, the corresponding MAP estimators for each model are defined.

Let 𝕊 = {(i, j) | 1 ≤ i ≤ m, 1 ≤ j ≤ n} be the set of sites of a rectangular lattice for a 2D image of m × n size. Its elements correspond to the locations where an image is sampled. The sites in 𝕊 are related to one another via a neighborhood system defined as

$$ \mathbb{N} = \{ \mathbb{N}_i \,|\, i \in \mathbb{S} \}, \tag{4} $$
where 𝕅_i is the set of sites neighboring i. Figure 1(a) shows the first order neighborhood with four neighbors, the second order neighborhood with eight neighbors is shown in Fig. 1(b), and Fig. 1(c) shows higher order neighborhoods.

Fig. 1 Neighborhood sets for a single site of a lattice.

A clique c is defined as a subset of sites in 𝕊 that consists of a single site c = {i}, a pair of neighboring sites c = {i,i′}, a triple of neighboring sites c = {i,i′,i″}, and so on. All possible cliques for the second order neighborhood system are displayed in Fig. 2 [19, 25]. The Hammersley-Clifford theorem establishes the equivalence between Markov random fields and Gibbs random fields [12, 21, 25, 26], so the MRF can be determined by defining the potential function in a Gibbs distribution, whose basic form is given by

$$ g(x) = \frac{1}{Z} \exp\left( -\frac{1}{T} U(x) \right), \tag{5} $$
where
$$ Z = \sum_{x \in \mathbb{X}} \exp\left( -\frac{1}{T} U(x) \right), $$
is the partition function, which in practice is a normalization constant. T is the temperature parameter, which controls the sharpness of the distribution [12] and in practice is assumed to be 1 [25]. U(x) is the energy function such that
$$ U(x) = \sum_{c \in \mathbb{C}} V_c(x), $$
which is determined as a sum of clique potentials V_c(x) over all possible cliques 𝔺 in the neighborhood [8, 19, 21, 25]. Most segmentation approaches based on MRF use the multi-level logistic (MLL) model to define the potential function. Usually the second order pairwise cliques are selected and the potentials of all non-pairwise cliques are defined to be zero [21]. In that sense, in this work we will consider simple MRFs based on a second order neighborhood (eight sites) and potential functions of the form ρ(λ(x_i − x_j)) acting on pairs of sites, where λ is a constant that scales the difference between pixel values.
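
As a sketch of how the pairwise energy U(x) can be evaluated over a second-order neighborhood, the Python fragment below sums a potential ρ(λ(x_s − x_r)) over every pair of neighboring sites; the helper names and the NumPy slicing trick are our own, and only half of the eight offsets are listed so that each clique {s, r} is counted once.

```python
import numpy as np

# Half of the eight second-order neighbor offsets, so each pair {s, r}
# is visited exactly once.
PAIR_OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def pair_views(x, di, dj):
    """Aligned views (x_s, x_r) of all pairs with r = s + (di, dj)."""
    m, n = x.shape
    i0, i1 = max(0, -di), m - max(0, di)
    j0, j1 = max(0, -dj), n - max(0, dj)
    return x[i0:i1, j0:j1], x[i0 + di:i1 + di, j0 + dj:j1 + dj]

def energy(x, rho, lam=1.0):
    """U(x) = sum over pairwise cliques of rho(lam * (x_s - x_r))."""
    u = 0.0
    for di, dj in PAIR_OFFSETS:
        xs, xr = pair_views(x, di, dj)
        u += np.sum(rho(lam * (xs - xr)))
    return u

# Example: u = energy(image, rho=lambda e: e**2)  # quadratic (Gaussian) potential
```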

Fig. 2 Cliques for the second order neighborhood system.

3.1. Generalized Gaussian MRF (GGMRF)

A common choice for the prior model is a Gaussian Markov random field (GMRF). The distribution for a random field of this kind has the form

$$ g(x) = \frac{\lambda^{N/2}\,|B|^{1/2}}{(2\pi)^{N/2}} \exp\left( -\frac{\lambda}{2}\, x^{t} B x \right), \tag{6} $$
where B is a symmetric positive definite matrix, named the precision matrix, λ is a constant and x^t is the transpose of x. To make this correspond to a Gibbs distribution with neighborhood system ∂s, the constraint that B_sr = 0 when s is not in ∂r and s ≠ r is imposed. This distribution may then be rewritten, to form the log likelihood, as
$$ \log g(x) = -\frac{\lambda}{2} \left( \sum_{s \in \mathbb{S}} a_s x_s^2 + \sum_{\{s,r\} \in \mathbb{C}} b_{sr} |x_s - x_r|^2 \right), \tag{7} $$
where a_s = ∑_{r∈𝕊} B_sr and b_sr = −B_sr.

The generalization of the GMRF is made by replacing the power 2 by p, where 1 ≤ p ≤ 2 and λ is a parameter inversely proportional to the scale of x [5]. The potential function for a GGMRF is then

$$ \log g(x) = -\lambda^{p} \left( \sum_{s \in \mathbb{S}} a_s |x_s|^{p} + \sum_{\{s,r\} \in \mathbb{C}} b_{sr} |x_s - x_r|^{p} \right) + c, \tag{8} $$
where a_s ≥ 0 and b_sr > 0, s is the site of interest, r corresponds to the local neighbors and c is a constant term. In practice it is recommended to take a_s = 0 under the Gaussian noise assumption, so that the uniqueness of the MAP estimator can be assured, resulting in
$$ \log g(x) = -\lambda^{p} \left( \sum_{\{s,r\} \in \mathbb{C}} b_{sr} |x_s - x_r|^{p} \right) + c. \tag{9} $$
Here, b_sr is a constant that depends on the distance between pixels s and r. The selected value of the power p is decisive, since it determines the convergence speed of the local or global estimator and the quality of the estimated image [24].
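
A minimal Python sketch of this GGMRF prior term evaluated at a single site might look as follows; the function name is hypothetical, p = 1.2 is just an illustrative value inside the allowed range 1 ≤ p ≤ 2, and b_sr is passed as an array of per-neighbor weights (e.g., inversely proportional to the distance between s and r).

```python
import numpy as np

def ggmrf_log_prior(x_s, neighbors, b_sr, p=1.2, lam=1.0):
    """Pairwise part of the (negative) GGMRF log-prior at one site, with a_s = 0:
       lam**p * sum_r b_sr * |x_s - x_r|**p."""
    return (lam ** p) * np.sum(b_sr * np.abs(x_s - neighbors) ** p)
```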

3.2. Welsh’s potential function

Welsh's potential function, proposed by Rivera [16] as a hard redescender potential function with granularity control, is defined as

$$ \log g(x) = -\lambda \left( \mu \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \varphi_1(x) + (1 - \mu) \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \rho_2(x) \right) + c, \tag{10} $$
where μ is the granularity control parameter, φ_1(x) = e² with e = (x_s − x_r),
$$ \rho_2(x) = 1 - \frac{1}{2k} \exp\left( -k\, \varphi_1(x) \right), $$
and k is a positive scale parameter for edge preservation.
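
Under the reconstruction of ρ_2 given above, a Python sketch of Welsh's potential and of the corresponding prior term at one site could be written as follows; the names and default values (μ = 0.5, k = 1) are illustrative only.

```python
import numpy as np

def phi1(e):
    """phi_1 = e**2, the quadratic term shared by the granularity-controlled models."""
    return e ** 2

def rho2(e, k=1.0):
    """Welsh redescending potential: 1 - (1/(2k)) * exp(-k * phi_1(e))."""
    return 1.0 - np.exp(-k * phi1(e)) / (2.0 * k)

def welsh_log_prior(x_s, neighbors, b_sr, lam=1.0, mu=0.5, k=1.0):
    """Pairwise prior term at one site for the Welsh model:
       lam * sum_r b_sr * (mu * phi_1 + (1 - mu) * rho_2)."""
    e = x_s - neighbors
    return lam * np.sum(b_sr * (mu * phi1(e) + (1.0 - mu) * rho2(e, k)))
```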

3.3. Tukey’s potential function

Another hard redescender potential function, also proposed by Rivera [16], is Tukey's potential function with granularity control, given by

$$ \log g(x) = -\lambda \left( \mu \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \varphi_1(x) + (1 - \mu) \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \rho_3(x) \right) + c, \tag{11} $$
where, in this case
$$ \rho_3(x) = \begin{cases} 1 - \left( 1 - (2e/k)^2 \right)^3, & \text{for } |e/k| < 1/2, \\ 1, & \text{otherwise}, \end{cases} $$
k is also a scale parameter and μ provides the granularity control too.
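
Similarly, a short Python sketch of Tukey's potential as reconstructed above is given below; np.where handles the two branches of the definition, and the defaults are again arbitrary illustrative values.

```python
import numpy as np

def rho3(e, k=1.0):
    """Tukey-type redescending potential with scale k:
       1 - (1 - (2e/k)**2)**3 for |e/k| < 1/2, and 1 otherwise."""
    e = np.asarray(e, dtype=float)
    core = 1.0 - (1.0 - (2.0 * e / k) ** 2) ** 3
    return np.where(np.abs(e / k) < 0.5, core, 1.0)

def tukey_log_prior(x_s, neighbors, b_sr, lam=1.0, mu=0.5, k=1.0):
    """Pairwise prior term at one site for the Tukey model."""
    e = x_s - neighbors
    return lam * np.sum(b_sr * (mu * e ** 2 + (1.0 - mu) * rho3(e, k)))
```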

3.4. MAP estimators

With Eq. (3) and the MRFs previously defined, the corresponding MAP estimators are deduced. The MAP estimator for the GGMRF [5] is given by

$$ \hat{x}_{\mathrm{MAP}}^{\,gg} = \arg\min_{x \in \mathbb{X}} \left\{ \sum_{s \in \mathbb{S}} |y_s - x_s|^{q} + \sigma^{q} \lambda^{p} \sum_{\{s,r\} \in \mathbb{C}} b_{sr} |x_s - x_r|^{p} \right\}, \tag{12} $$
where the term ∑_{s∈𝕊} |y_s − x_s|^q stands for the term log p(y|x), and the term σ^q λ^p ∑_{{s,r}∈𝔺} b_sr |x_s − x_r|^p corresponds to the term log g(x) of Eq. (3); the same applies for the other models. The minimization problem can be solved from a global or a local point of view considering various methods [16, 28, 29, 30]. Among the global iterative techniques we have gradient descent, the conjugate gradient and the Gauss-Seidel method, among others. Local minimization techniques work by minimizing at each pixel x_s. In this work the Levenberg-Marquardt algorithm was used for local minimization, since all the parameters included in the potential functions were chosen heuristically or according to values proposed in [24]. Thus, local estimation is implemented with the expression
$$ \hat{x}_{s}^{\,gg} = \arg\min_{x \in \mathbb{X}} \left\{ |y_s - x_s|^{q} + \sigma^{q} \lambda^{p} \sum_{r \in \partial s} b_{sr} |x_s - x_r|^{p} \right\}, \tag{13} $$
where the subset ∂s stands for the sites in the neighborhood. Estimator performance depends on the chosen values for parameters p and q. For example, if p = q = 2, we have the Gaussian condition for the potential function and the obtained estimator is similar to the least-squares one, since the likelihood function is quadratic. Moreover, when p = q = 1, the criterion is absolute and the estimator converges to the median; nevertheless, this criterion is not differentiable at zero, which causes instability in the minimization process [24]. The form of the first term in Eq. (12) depends on the type of noise considered. For all experiments made in this work we assumed that the noise has a Gaussian distribution with mean value μ_n and variance σ_n². Accordingly, the parameter q was set to 2.
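
The paper performs the local minimization with the Levenberg-Marquardt routine of MATLAB; as a simplified stand-in, the sketch below minimizes the local GGMRF criterion of Eq. (13) at a single site with SciPy's bounded scalar minimizer, assuming 8-bit gray levels. All names and default values are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_ggmrf_estimate(y_s, neighbors, b_sr, p=1.2, q=2.0, lam=1.0, sigma=1.0):
    """Minimize |y_s - x_s|**q + sigma**q * lam**p * sum_r b_sr |x_s - x_r|**p
    over the gray-level range of a single site (cf. Eq. (13))."""
    def crit(x_s):
        data_term = abs(y_s - x_s) ** q
        prior_term = (sigma ** q) * (lam ** p) * np.sum(b_sr * np.abs(x_s - neighbors) ** p)
        return data_term + prior_term
    return minimize_scalar(crit, bounds=(0.0, 255.0), method="bounded").x
```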

As a second MAP estimator, we introduce Welsh's potential function [16] in the second term of Eq. (3):

$$ \hat{x}_{\mathrm{MAP}}^{\,\mathrm{wel}} = \arg\min_{x \in \mathbb{X}} \left\{ \sum_{s \in \mathbb{S}} |y_s - x_s|^{2} + \lambda \left( \mu \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \varphi_1(x) + (1 - \mu) \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \rho_2(x) \right) \right\}. \tag{14} $$
In the same context, local estimation for this model is given by
$$ \hat{x}_{s}^{\,\mathrm{wel}} = \arg\min_{x \in \mathbb{X}} \left\{ |y_s - x_s|^{2} + \lambda \left( \mu \sum_{r \in \partial s} b_{sr}\, \varphi_1(x) + (1 - \mu) \sum_{r \in \partial s} b_{sr}\, \rho_2(x) \right) \right\}. \tag{15} $$

Finally, the last MAP estimator used as a reference corresponds to Tukey's potential function [16] and is given by the expression

$$ \hat{x}_{\mathrm{MAP}}^{\,\mathrm{tuk}} = \arg\min_{x \in \mathbb{X}} \left\{ \sum_{s \in \mathbb{S}} |y_s - x_s|^{2} + \lambda \left( \mu \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \varphi_1(x) + (1 - \mu) \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \rho_3(x) \right) \right\}, \tag{16} $$
where the local estimation is obtained by
$$ \hat{x}_{s}^{\,\mathrm{tuk}} = \arg\min_{x \in \mathbb{X}} \left\{ |y_s - x_s|^{2} + \lambda \left( \mu \sum_{r \in \partial s} b_{sr}\, \varphi_1(x) + (1 - \mu) \sum_{r \in \partial s} b_{sr}\, \rho_3(x) \right) \right\}. \tag{17} $$

4. Semi-Huber proposal

For log g(x) in Eq. (3), we introduce the Huber-like norm, or semi-Huber potential function, which has been used in one-dimensional robust estimation problems [27] for the case of nonlinear regression. This function has been modified for the two-dimensional case according to the following equation:

$$ \log g(x) = -\lambda \left( \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \rho_1(x) \right) + c, \tag{18} $$
where s is the site of interest, r corresponds to the local neighbors, c is a constant term and
$$ \rho_1(x) = \frac{\Delta_0^2}{2} \left( \sqrt{1 + \frac{4\,\varphi_1(x)}{\Delta_0^2}} - 1 \right). \tag{19} $$
Here Δ_0 > 0 is a constant value and φ_1(x) = e² with e = (x_s − x_r).

The semi-Huber potential function is shown in Fig. 3 for Δ_0 = 1. Near zero the function is quadratic, and for values beyond ±1 it is almost linear. The linear region of the function allows sharp edges, while its convexity makes the MAP estimate efficient to compute.
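
A one-line Python implementation of the semi-Huber potential of Eq. (19) is straightforward; Δ_0 = 1 is used as the default only because that is the value plotted in Fig. 3.

```python
import numpy as np

def rho1(e, delta0=1.0):
    """Semi-Huber potential of Eq. (19):
       (delta0**2 / 2) * (sqrt(1 + 4 e**2 / delta0**2) - 1).
    Quadratic near zero and asymptotically linear for large |e|."""
    return 0.5 * delta0 ** 2 * (np.sqrt(1.0 + 4.0 * e ** 2 / delta0 ** 2) - 1.0)
```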

Fig. 3 Semi-Huber cost function.

The MAP estimator corresponding to the proposed semi-Huber potential function, Eq. (18) [24, 27], is given by:

$$ \hat{x}_{\mathrm{MAP}}^{\,\mathrm{qh}} = \arg\min_{x \in \mathbb{X}} \left\{ \sum_{s \in \mathbb{S}} |y_s - x_s|^{2} + \lambda \sum_{\{s,r\} \in \mathbb{C}} b_{sr}\, \rho_1(x) \right\}. \tag{20} $$
Equation (20) is simplified to obtain:
$$ \hat{x}_{s}^{\,\mathrm{qh}} = \arg\min_{x \in \mathbb{X}} \left\{ |y_s - x_s|^{2} + \lambda \sum_{r \in \partial s} b_{sr}\, \rho_1(x) \right\}, \tag{21} $$
for local MAP estimation at a single site.
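
To make the local estimation concrete, the following Python sketch sweeps the image and replaces each pixel by the minimizer of the local semi-Huber criterion of Eq. (21), reusing the rho1 sketch given above. It simplifies the scheme described in the paper: all b_sr are set to 1, a fixed number of sweeps replaces a convergence test, and SciPy's bounded scalar minimizer stands in for the Levenberg-Marquardt routine actually used.

```python
import numpy as np
from scipy.optimize import minimize_scalar

OFFSETS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]   # second-order neighborhood

def semi_huber_sweep(y, x0, lam=1.0, delta0=1.0, n_sweeps=5):
    """Iterated local minimization of Eq. (21) with b_sr = 1.
    Assumes rho1() from the sketch earlier in this section is in scope."""
    x = np.asarray(x0, dtype=float).copy()
    m, n = y.shape
    for _ in range(n_sweeps):
        for i in range(m):
            for j in range(n):
                nb = np.array([x[i + di, j + dj] for di, dj in OFFSETS_8
                               if 0 <= i + di < m and 0 <= j + dj < n])
                crit = lambda xs: (y[i, j] - xs) ** 2 + lam * np.sum(rho1(xs - nb, delta0))
                x[i, j] = minimize_scalar(crit, bounds=(0.0, 255.0),
                                          method="bounded").x
    return x
```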

5. Experiments and results

In this section, we present a set of experiments to show the performance of the proposed model in the segmentation of some images. In all cases, we used the Levenberg-Marquardt algorithm provided in the optimization toolbox of MATLAB R2009a for the local minimization stage. For this algorithm, we need to provide an initial value, X0, from which the search for the solution starts. It was observed that the final result depends on the choice of this value, which adds one more hyperparameter to the segmentation process.

All the tests were executed on a Mac Pro computer with a 2×2.8 GHz Quad-Core Intel Xeon processor and 2 GB of 800 MHz DDR2 RAM. The first experiment was carried out with an image of the brain, trying to segment it into three tissues: gray matter, white matter and cerebrospinal fluid. Figure 4 shows segmentation results for the brain image corrupted by centered Gaussian noise, n ∼ 𝒩(0, σ_n²I). In the top row, from left to right, we have the original brain image, the brain image with noise (σ_n² = 10), and the segmentation result using the semi-Huber MRF. In the bottom row we have the segmentation results using the GGMRF, the Welsh MRF and the Tukey MRF. Table 1 shows the computation times taken by each model, and Table 2 shows the list of hyperparameters to be tuned.

Fig. 4 Top row: original brain image, brain image corrupted by Gaussian noise and the segmentation result using the semi-Huber MRF. Bottom row: segmentation result using the GGMRF, segmentation result using the Welsh’s MRF and segmentation result using the Tukey’s MRF.

Table 1. Computation times taken by each model of MRF for segmentation of brain image.

Table 2. List of parameter values for segmentation results in Fig. 4.

It can be seen that the visual result obtained with the proposed model is comparable to those obtained with the other models, and the difference in computation times is not significant. What matters here is the fact that, for the proposed semi-Huber MRF model, we only had to adjust one hyperparameter value, Δ_0 in this case, keeping λ = 1 (two if one takes into account the initial value in the optimization stage). Therefore, the number of times that the segmentation process had to be run was significantly lower than with the other models.

A second experiment was made with a geographical image of the Paso de las Piedras dam, located in Argentina, taken from Google Earth. In this case, the main interest is in segmenting water from non-water regions in spite of the noise present. Figure 5 shows segmentation results for the dike image. In the top row, from left to right, we have the original dike image, the dike image corrupted by Gaussian noise with σ_n² = 20, and the segmentation result using the semi-Huber MRF. In the bottom row we have the segmentation results using the GGMRF, the Welsh MRF and the Tukey MRF. Table 3 shows the times taken by each model and Table 4 lists the hyperparameter values with which the results presented in Fig. 5 were obtained.

Fig. 5 Top row: original dike image, dike image corrupted by Gaussian noise and the segmentation result using the semi-Huber MRF. Bottom row: segmentation result using the GGMRF, segmentation result using the Welsh’s MRF and segmentation result using the Tukey’s MRF.

Table 3. Computation times taken by each model of MRF for segmentation of dike image.

Table 4. List of parameter values for segmentation results in Fig. 5.

Here, it can be seen that with the proposed semi-Huber model the noise reduction looks more satisfactory than with the GGMRF model, for example, where bigger gray spots can be seen in the water region. Even though the semi-Huber model took the longest time, the relative difference is negligible.

In a third experiment, we show segmentation results for a geographical image of the city of Villahermosa, Tabasco, provided by the German Space Agency and taken by the German satellite TerraSAR-X on November 13, 2007. Figure 6 shows in the top row, from left to right: the original image, the image corrupted by Gaussian noise with σ_n² = 20, and the segmentation result using the semi-Huber MRF. In the bottom row we have the segmentation results using the GGMRF, the Welsh MRF and the Tukey MRF.

Fig. 6 Top row: original satellite image of Villahermosa, Tabasco, image corrupted by Gaussian noise and the segmentation result using the semi-Huber MRF. Bottom row: segmentation result using the GGMRF, segmentation result using the Welsh’s MRF and segmentation result using the Tukey’s MRF.

In this case, the computation times of each model during the segmentation process behaved similarly to the previous cases; that is, there was no significant difference between them, averaging about 575 s for a 282×190 image. Again, we remark the advantage that with the semi-Huber potential function fewer parameters have to be tuned according to the type of image, which allows segmentation results to be obtained faster.

In order to provide a more objective assessment of the results in numeric form, we also included an experiment performed with a synthetic image of the chessboard type, which was degraded with Gaussian noise and then segmented applying the three reference models and the proposed one. Two error measures of the segmented images with respect to the original were considered, namely the mean square error (MSE) and the mean absolute error (MAE).
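
For reference, the two error measures can be computed with a couple of NumPy one-liners; the function names are our own.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of equal size."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean((a - b) ** 2)

def mae(a, b):
    """Mean absolute error between two images of equal size."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean(np.abs(a - b))
```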

Figure 7 shows the synthetic image on the left and, on the right, the same image degraded by Gaussian noise with σ_n² = 20. Figure 8 contains the segmentation results obtained with each model. From left to right we have the segmentation results using the semi-Huber MRF, the GGMRF, the Welsh MRF and the Tukey MRF. Numerical results of the error measures are shown in Table 5, where it can be seen that the proposed model presents another advantage: the smallest error measures.

Fig. 7 Synthetic image, original and degraded by noise.

Fig. 8 Segmentation results of the chessboard synthetic image corresponding to each model.

Table 5. Numerical results of the error measures for the segmentation of the synthetic image with each model.

6. Conclusion

The semi-Huber MRF model was proposed in order to construct a novel algorithm for image segmentation. We verified that this proposal has a satisfactory performance. Execution times were similar and the visual segmentation results agreed with those obtained from models reported in other works. In the case of the generalized Gaussian, Welsh's and Tukey's Markov random fields, several tests had to be made to find adequate parameter values for each kind of image, since one has more degrees of freedom; for the semi-Huber MRF, in contrast, we obtained good results with fewer tests because the parameter adjustment is less complicated. On the other hand, the proposed model produced very good results concerning the error measures. Some of the experiments presented were made on geographical images because we intend to apply the method to the analysis of this kind of imagery, particularly that concerning hydrographic resources.

Acknowledgments

The authors would like to thank the FOMIX-CONACyT Gobierno del Estado de Zacatecas of México, project number ZAC-2007-CO1-82136, for the invaluable support that made it possible to carry out this research. Osvaldo Gutiérrez wants to thank the Instituto Tecnológico Superior de Fresnillo authorities for all the support provided for the realization of this work, and also acknowledges the support received from PIFI 2010.

References and links

1. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB (Prentice Hall, 2004).

2. X. Cufí, X. Muñoz, J. Freixenet, and J. Martí, “A review on image segmentation techniques integrating region and boundary information,” Adv. Imag. Elect. Phys. 120, 1–39 (Elsevier, 2003).

3. M. M. Fernández, “Contribuciones al análisis automático y semiautomático de ecografía fetal tridimensional mediante campos aleatorios de Markov y contornos activos. Ayudas al diagnóstico precoz de malformaciones,” PhD Thesis, Escuela Técnica Superior de Ingenieros de Telecomunicación, Universidad de Valladolid, November 2001.

4. K. Sauer and C. Bouman, “Bayesian estimation of transmission tomograms using segmentation based optimization,” IEEE Trans. Nucl. Sci. 39(4), 1144–1152 (1992).

5. C. Bouman and K. Sauer, “A generalized Gaussian image model for edge-preserving MAP estimation,” IEEE Trans. Image Process. 2(3), 296–310 (1993).

6. K. Held, E. R. Kops, B. J. Krause, W. M. Wells III, R. Kikinis, and H. W. Müller-Gärtner, “Markov random field segmentation of brain MR images,” IEEE Trans. Med. Imaging 16(6), 878–886 (1997).

7. L. Cordero-Grande, P. Casaseca-de-la-Higuera, M. Martín-Fernández, and C. Alberola-López, “Endocardium and epicardium contour modeling based on Markov random fields and active contours,” in Proc. of IEEE EMBS Annu. Int. Conf., 928–931 (2006).

8. Y. Zhang, M. Brady, and S. Smith, “Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm,” IEEE Trans. Med. Imaging 20(1), 45–57 (2001).

9. S. Krishnamachari and R. Chellappa, “Multiresolution Gauss-Markov random field models for texture segmentation,” IEEE Trans. Image Process. 6(2), 251–267 (1997).

10. D. A. Clausi and B. Yue, “Comparing cooccurrence probabilities and Markov random fields for texture analysis of SAR sea ice imagery,” IEEE Trans. Geosci. Remote Sens. 42(1), 215–228 (2004).

11. Y. Li and P. Gong, “An efficient texture image segmentation algorithm based on the GMRF model for classification of remotely sensed imagery,” Int. J. Remote Sens. 26(22), 5149–5159 (2005).

12. S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984).

13. J. E. Besag, “On the statistical analysis of dirty pictures,” J. Roy. Stat. Soc. B 48, 259–302 (1986).

14. S. Z. Li, “MAP image restoration and segmentation by constrained optimization,” IEEE Trans. Image Process. 7(12), 1730–1735 (1998).

15. R. Pan and S. J. Reeves, “Efficient Huber-Markov edge-preserving image restoration,” IEEE Trans. Image Process. 15(12), 3728–3735 (2006).

16. M. Rivera and J. L. Marroquin, “Efficient half-quadratic regularization with granularity control,” Image Vision Comput. 21, 345–357 (2003).

17. M. Rivera, O. Ocegueda, and J. L. Marroquin, “Entropy-controlled quadratic Markov measure field models for efficient image segmentation,” IEEE Trans. Image Process. 16(12), 3047–3057 (2007).

18. M. Mignotte, “A segmentation-based regularization term for image deconvolution,” IEEE Trans. Image Process. 15(7), 1973–1984 (2006).

19. H. Deng and D. A. Clausi, “Unsupervised image segmentation using a simple MRF model with a new implementation scheme,” Pattern Recogn. 37, 2323–2335 (2004).

20. O. Lankoande, M. M. Hayat, and B. Santhanam, “Segmentation of SAR images based on Markov random field model,” in Proc. of IEEE Int. Conf. on Systems, Man, and Cybernetics, 2956–2961 (2005).

21. X. Lei, Y. Li, N. Zhao, and Y. Zhang, “Fast segmentation approach for SAR image based on simple Markov random field,” J. Syst. Eng. Electron. 21(1), 31–36 (2010).

22. J. Marroquin, S. Mitter, and T. Poggio, “Probabilistic solution of ill-posed problems in computational vision,” J. Amer. Statist. Assoc. 82(397), 76–89 (1987).

23. R. Szeliski, “Bayesian modeling of uncertainty in low-level vision,” Int. J. Comput. Vision 5(3), 271–301 (1990).

24. J. I. de la Rosa, J. J. Villa, and Ma. A. Araiza, “Markovian random fields and comparison between different convex criteria optimization in image restoration,” in Proc. XVII Int. Conf. on Electronics, Communications and Computers, 9 (CONIELECOMP, 2007).

25. S. Z. Li, Markov Random Field Modeling in Image Analysis (Springer-Verlag, 2009).

26. J. E. Besag, “Spatial interaction and the statistical analysis of lattice systems,” J. Roy. Stat. Soc. B 36, 192–236 (1974).

27. J. I. de la Rosa and G. Fleury, “Bootstrap methods for a measurement estimation problem,” IEEE Trans. Instrum. Meas. 55(3), 820–827 (2006).

28. M. Nikolova and R. Chan, “The equivalence of half-quadratic minimization and the gradient linearization iteration,” IEEE Trans. Image Process. 16(6), 1623–1627 (2007).

29. T. F. Chan, S. Esedoglu, and M. Nikolova, “Algorithms for finding global minimizers of image segmentation and denoising models,” SIAM J. Appl. Math. 66(5), 1632–1648 (2006).

30. M. Nikolova, “Functionals for signal and image reconstruction: properties of their minimizers and applications,” Research report to obtain the Habilitation à diriger des recherches, Centre de Mathématiques et de Leurs Applications (CMLA), École Normale Supérieure de Cachan (2006).
