
Anisotropic coupled diffusion filter and binarization for the electronic speckle pattern interferometry fringes


Abstract

In this paper, novel approaches based on anisotropic coupled diffusion equations are presented for filtering and binarization of electronic speckle pattern interferometry (ESPI) fringes. An advantageous characteristic of the proposed technique is that diffusion takes place mainly along the direction of the edge. Therefore, the proposed anisotropic coupled diffusion filter method can avoid blurring of the fringe edges and protect the useful information of the fringe patterns. The anisotropic coupled diffusion binarization, which repairs the image boundary anisotropically, is able to avoid redundant burrs. More importantly, it can be applied directly to noisy ESPI fringe patterns without much preprocessing, which is a significant advance in fringe analysis for ESPI. The effectiveness of the proposed methods is tested by means of computer-simulated and experimentally obtained fringe patterns, respectively.

©2012 Optical Society of America

1. Introduction

Electronic speckle pattern interferometry (ESPI) is a well-known, nondestructive, whole-field technique for measuring displacements [1–3]. Accurate extraction of phase from fringe patterns [4] is of fundamental importance for successful application of ESPI in obtaining displacement. However, strong grain-shape random noise in the fringe patterns usually leads to significant errors in phase extraction. In order to have accurate and efficient phase extraction, two major tasks are commonly carried out, that is, filtering and binarization of fringe patterns. While filtering of fringe patterns aims to eliminate the noise [5], binarization of the patterns is essential for determination of fringe centerlines and the phase values.

Existing techniques for noise elimination include the use of digital filters in the spatial or frequency domain. Mean filtering and median filtering are commonly used, and frequency-domain filtering converts the image into the frequency domain using the Fourier transform and then selectively suppresses different frequency components. However, the performance of these approaches is not satisfactory for ESPI fringes because of blurring. Recently, filtering based on partial differential equations (the PDE filter method) has become an attractive research topic [6], in which the task of image processing is carried out by solving PDEs. Since the initial evolution equation (i.e., the heat conduction equation) was first used in image filtering in 1983 [7], various forms of PDEs have been proposed for noise removal [8–13].

With regard to obtaining binary ESPI fringe patterns, a commonly used method is the OTSU algorithm, which usually performs binarization only once. Applying the OTSU binarization algorithm directly to an original ESPI fringe having strong grain-shape random noise leads to discontinuities and noise in the resulting binary fringes. Therefore, the fringe patterns must be filtered before binarization with this traditional method. Although the effectiveness of PDEs in ESPI filtering and enhancement has been demonstrated in earlier research [14–17], there are few reports on the application of PDEs to ESPI binarization. In the early 1990s, a simple algorithm for motion by mean curvature (the Merriman–Bence–Osher algorithm, called the MBO binarization algorithm for short) was introduced based on the isotropic heat conduction equation [18]. The essential idea of the MBO algorithm is to repair the binary boundary of the image utilizing a heat conduction equation in an iterative manner. However, as the heat conduction equation diffuses information equally in all directions, this process also destroys the image content.

In this paper, we propose to use PDEs for both ESPI filtering and binarization. The objective of this paper is to refine and extend previous work [14–16] in the following two aspects: (1) new anisotropic coupled diffusion equations are employed to filter off the noise in ESPI, in which the diffusion manner and the fidelity factor are improved; (2) the problem of fringe destruction associated with the MBO algorithm is solved by means of the anisotropic coupled diffusion equations, yielding a new binarization algorithm. Compared with the MBO binarization algorithm, the binary images obtained by the proposed method are much more accurate.

2. The typical PDE image processing models

PDE-based methods manipulate images in continuous form, but with efficient numerical implementations. In general, let I: \mathbb{R}^2 \to \mathbb{R} represent a gray-level image to be processed, where I(x, y) is the gray-level value. The image deforms with time t according to the following:

u_t = F[u(x, y, t)], \qquad u(x, y, 0) = I(x, y)    (1)
where u(x, y, t): \mathbb{R}^2 \times [0, \tau] \to \mathbb{R} is the evolving image, F is an operator that depends on the purpose of the processing (e.g., denoising, enhancement, or segmentation), and the image I is the initial condition. The solution u(x, y, t) of Eq. (1) gives the processed image.
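
As a minimal illustration of Eq. (1), an explicit time-stepping loop u^{n+1} = u^n + Δt·F(u^n) might look as follows; the operator F, the time step, and the iteration count are placeholders to be supplied by a concrete model, not choices made in the paper.

```python
import numpy as np

def evolve(I, F, dt=0.1, n_steps=100):
    """Explicit time-stepping of u_t = F[u] with u(x, y, 0) = I(x, y).
    F is any callable mapping an image array to an update of the same shape."""
    u = I.astype(float).copy()
    for _ in range(n_steps):
        u = u + dt * F(u)
    return u
```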

Various PDE models have been proposed for image denoising [17]. Research effort on this topic can be classified into two aspects: one focuses on controlling the diffusion speed, and the other on the diffusion direction.

Perona and Malik [8] proposed to obtain the denoised image by solving the following anisotropic diffusion equation,

u_t = \mathrm{div}\big( c(|\nabla u|)\, \nabla u \big), \qquad u(x, y, 0) = I(x, y)    (2)
where \nabla u is the gradient of the evolving image u, c is a smooth non-increasing function with c(0) = 1, c(s) \ge 0, and c(s) \to 0 as s \to \infty, and \mathrm{div} denotes the divergence operator. The usual choice for c is of the form
c_k(s) = \big[ 1 + (s/k)^2 \big]^{-1}    (3)
where k is a constant parameter. The smoothing process is carried out based on the following ideas: if |\nabla u(x, y)| is large, the diffusion will be low and therefore the exact localization of the "edges" will be kept; if |\nabla u(x, y)| is small, the diffusion will tend to smooth still more around (x, y). It is obvious that c controls the diffusion speed, so minimal smoothing occurs around the edges of the image. However, |\nabla u(x, y)| is also large if (x, y) is an isolated noise point. Consequently, Eq. (2) is very sensitive to noise.
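
For illustration, a minimal NumPy sketch of the diffusivity in Eq. (3) is shown below; the parameter value k = 10 and the sample gradient magnitudes are arbitrary choices, not values from the paper.

```python
import numpy as np

def diffusion_coefficient(grad_mag, k):
    """Perona-Malik diffusivity c_k(s) = 1 / (1 + (s/k)^2) of Eq. (3)."""
    return 1.0 / (1.0 + (grad_mag / k) ** 2)

# Flat regions (small gradient) keep a diffusivity near 1,
# edges (large gradient) receive a diffusivity near 0.
print(diffusion_coefficient(np.array([0.0, 5.0, 50.0]), k=10.0))
# -> approximately [1.0, 0.8, 0.038]
```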

Catté et al. [9] proposed a selective smoothing model as follows:

u_t = \mathrm{div}\big( c(|\nabla G_\sigma * u|)\, \nabla u \big), \qquad u(x, y, 0) = I(x, y)    (4)
where G_\sigma is a Gaussian smoothing kernel. With the model of Eq. (4), isolated noise is smoothed by the Gaussian kernel first, which avoids the drawback of noise sensitivity. However, Eq. (4) only applies different diffusion speeds to edges and flat regions; it cannot distinguish the tangent direction of an edge from its normal direction. As a result, important edges, especially corners and junctions, may not be well preserved.
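
For illustration, the regularized gradient magnitude that drives Eq. (4) might be prototyped as follows; scipy.ndimage.gaussian_filter is used here as one convenient smoothing kernel, and the value of sigma is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_gradient_magnitude(u, sigma=1.0):
    """|grad(G_sigma * u)|: smooth first, then differentiate, so isolated
    noise pixels do not masquerade as edges (the argument of c in Eq. (4))."""
    u_smooth = gaussian_filter(u.astype(float), sigma)
    gx, gy = np.gradient(u_smooth)   # derivatives along rows and columns
    return np.hypot(gx, gy)
```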

To avoid edge blurring, it is natural to modify the diffusion operator so that it diffuses more in the direction along the edge. Such diffusion apparently keeps the location and sharpness of the edge while smoothing the image on both sides of the edge. Such an anisotropic diffusion PDE, which does not diffuse at all in the direction of the gradient, can be expressed as [10]:

u_t = \nabla^2 u - \frac{D^2 u(\nabla u, \nabla u)}{|\nabla u|^2}, \qquad u(x, y, 0) = I(x, y)    (5)
where D^2 u(\nabla u, \nabla u)/|\nabla u|^2 is the second-order derivative of u along the gradient direction. The first term is the Laplacian, the same as in scale-space theory. The second term is an "inhibition" of diffusion in the direction of the gradient. Equation (5) represents a degenerate diffusion. The equivalent form of Eq. (5) was adopted for computation, that is [10],
u_t = |\nabla u|\, \mathrm{div}\!\left( \frac{\nabla u}{|\nabla u|} \right), \qquad u(x, y, 0) = I(x, y)    (6)
Combining Eq. (4) and Eq. (6) yields the following:
u_t = c(|\nabla v|)\, |\nabla u|\, \mathrm{div}\!\left( \frac{\nabla u}{|\nabla u|} \right), \qquad v = G_\sigma * u, \qquad u(x, y, 0) = I(x, y)    (7)
The above contains not only the term |\nabla u|\,\mathrm{div}(\nabla u/|\nabla u|), but also c(|\nabla G_\sigma * u|). The former provides the capability of selecting the diffusion direction and restraining diffusion along the gradient direction at an edge, and the latter controls the diffusion speed in different regions.

3. The anisotropic coupled diffusion filter method for ESPI fringes

In fact, the direction of diffusion can be considered as the combination of two orthogonal directions. An anisotropic filter diffuses along these two orthogonal directions to different degrees. For example, in Eq. (6) and Eq. (7), diffusion is allowed only along the tangent direction of the image edge. Although Eq. (6) and Eq. (7) can protect the image edge effectively, their denoising ability is restrained, especially for ESPI fringe patterns, whose noise is very severe and whose fringe edges are not distinct. In order to solve this problem, moderate filtering along the gradient direction is also necessary for ESPI fringe patterns.

In addition, Chen indicates that the choice of v in Eq. (7) determines the diffusion speed, which plays an important role in the results [11]. For instance, v is just u in Eq. (2), and v = G_\sigma * u in Eq. (7). Therefore care must be taken in the determination of v. From this viewpoint, the isotropic smoothing G_\sigma * u is not an optimal choice, and better filtering results may be obtained with more judicious choices. Chen proposed to determine v by the following nonlinear diffusion equation, which results in good preservation of edges and corners [11]:

v_t = a(t)\, \mathrm{div}\!\left( \frac{\nabla v}{|\nabla v|} \right) - b\,(v - u)    (8)
Note that the evolution parameter a(t) changes with time, and b remains constant.

Considering the above and adopting the choice of v proposed by Chen, we obtain the following set of anisotropic coupled diffusion (ACD) equations:

u_t = c(|\nabla v|)\left[ \alpha\, |\nabla u|\, \mathrm{div}\!\left( \frac{\nabla u}{|\nabla u|} \right) + \beta\, \frac{D^2 u(\nabla u, \nabla u)}{|\nabla u|^2} \right] - \big[ 1 - c(|\nabla v|) \big](u - I), \qquad u(x, y, 0) = I(x, y)
v_t = a(t)\, \mathrm{div}\!\left( \frac{\nabla v}{|\nabla v|} \right) - b\,(v - u), \qquad v(x, y, 0) = I(x, y)    (9)

Each term of Eq. (9) is explained as follows:

  • a) v is determined by the second equation of Eq. (9), which performs a nonlinear diffusion and causes v to be smoothed gradually as time increases while not drifting far from u.
  • b) c(|\nabla v|) is referred to as the diffusion factor, which controls the diffusion speed of each pixel. Regions with small |\nabla v| obtain a high diffusion speed, whereas edges, which have large |\nabla v|, receive a low diffusion speed and are preserved.
  • c) Two orthogonal directions are visible, i.e., the diffusion term |\nabla u|\,\mathrm{div}(\nabla u/|\nabla u|) along the tangent direction of the edge and the diffusion term D^2 u(\nabla u, \nabla u)/|\nabla u|^2 along the normal direction of the edge. By setting the tangential diffusion coefficient \alpha greater than the normal diffusion coefficient \beta, filtering proceeds mainly along the direction of the edge and only sparingly across it. In this way, noise in ESPI fringe patterns can be reduced effectively while the edges of the fringe patterns are protected.
  • d) -[1 - c(|\nabla v|)](u - I) is the fidelity term, which keeps the smoothed image from deviating too much from the original data. It is large near the true edges of the image, for good fidelity, and small in the interior of a fringe, for effective filtering. The fidelity factor [1 - c(|\nabla v|)] depends on v rather than being a constant or depending only on u; this is an important improvement of our method over existing coupled-PDE filter methods.

In order to solve Eq. (9) numerically, the equation has to be discretized. The images are represented by M × N matrices of intensity values, so for any function (i.e., image) u(x, y), we let u_{i,j} denote u(i, j) for 1 \le i \le M, 1 \le j \le N. The evolution equations give images at times t_n = n\Delta t. We denote u(i, j, t_n) by u_{i,j}^n.

The time derivative u_t at (i, j, t_n) is approximated by the forward difference

(u_t)_{i,j}^n = \frac{u_{i,j}^{n+1} - u_{i,j}^n}{\Delta t}    (10)

where \Delta t is the time step.

The spatial derivatives are as follows

(u_x)_{i,j}^n = \big( u_{i+1,j}^n - u_{i-1,j}^n \big)/2
(u_y)_{i,j}^n = \big( u_{i,j+1}^n - u_{i,j-1}^n \big)/2
(u_{xx})_{i,j}^n = u_{i+1,j}^n - 2u_{i,j}^n + u_{i-1,j}^n
(u_{yy})_{i,j}^n = u_{i,j+1}^n - 2u_{i,j}^n + u_{i,j-1}^n
(u_{xy})_{i,j}^n = \big( u_{i+1,j+1}^n - u_{i-1,j+1}^n - u_{i+1,j-1}^n + u_{i-1,j-1}^n \big)/4
|\nabla u_{i,j}^n| = \Big( \big[(u_x)_{i,j}^n\big]^2 + \big[(u_y)_{i,j}^n\big]^2 \Big)^{1/2}    (11)
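
These finite differences can be prototyped as in the sketch below; the use of np.roll (which wraps at the image border) is an implementation convenience assumed here, consistent with the boundary pixels being discarded later in this section.

```python
import numpy as np

def central_derivatives(u):
    """Central, second, and mixed differences of Eq. (11) for the whole image."""
    up = np.roll(u, -1, axis=0)   # u_{i+1,j}
    um = np.roll(u,  1, axis=0)   # u_{i-1,j}
    vp = np.roll(u, -1, axis=1)   # u_{i,j+1}
    vm = np.roll(u,  1, axis=1)   # u_{i,j-1}

    ux  = (up - um) / 2.0
    uy  = (vp - vm) / 2.0
    uxx = up - 2.0 * u + um
    uyy = vp - 2.0 * u + vm
    uxy = (np.roll(up, -1, axis=1) - np.roll(um, -1, axis=1)
           - np.roll(up, 1, axis=1) + np.roll(um, 1, axis=1)) / 4.0
    grad_mag = np.sqrt(ux ** 2 + uy ** 2)
    return ux, uy, uxx, uyy, uxy, grad_mag
```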

The same evaluation applies to (v_x)_{i,j}, (v_y)_{i,j}, (v_{xx})_{i,j}, (v_{yy})_{i,j}, and (v_{xy})_{i,j}. The diffusion terms are then expressed as follows:

|\nabla u_{i,j}^n|\, \mathrm{div}\!\left( \frac{\nabla u_{i,j}^n}{|\nabla u_{i,j}^n|} \right) = \frac{ \big[(u_y)_{i,j}^n\big]^2 (u_{xx})_{i,j}^n - 2 (u_x)_{i,j}^n (u_y)_{i,j}^n (u_{xy})_{i,j}^n + \big[(u_x)_{i,j}^n\big]^2 (u_{yy})_{i,j}^n }{ \big[(u_x)_{i,j}^n\big]^2 + \big[(u_y)_{i,j}^n\big]^2 }
\frac{ D^2 u_{i,j}^n(\nabla u_{i,j}^n, \nabla u_{i,j}^n) }{ |\nabla u_{i,j}^n|^2 } = \frac{ \big[(u_x)_{i,j}^n\big]^2 (u_{xx})_{i,j}^n + 2 (u_x)_{i,j}^n (u_y)_{i,j}^n (u_{xy})_{i,j}^n + \big[(u_y)_{i,j}^n\big]^2 (u_{yy})_{i,j}^n }{ \big[(u_x)_{i,j}^n\big]^2 + \big[(u_y)_{i,j}^n\big]^2 }
\mathrm{div}\!\left( \frac{\nabla v_{i,j}^n}{|\nabla v_{i,j}^n|} \right) = \frac{ \big[(v_y)_{i,j}^n\big]^2 (v_{xx})_{i,j}^n - 2 (v_x)_{i,j}^n (v_y)_{i,j}^n (v_{xy})_{i,j}^n + \big[(v_x)_{i,j}^n\big]^2 (v_{yy})_{i,j}^n }{ \Big( \big[(v_x)_{i,j}^n\big]^2 + \big[(v_y)_{i,j}^n\big]^2 \Big)^{3/2} }    (12)

By denoting \xi_{i,j}^n as an approximation of c(|\nabla v(i, j, t_n)|) = 1/\big[ 1 + ( |\nabla v_{i,j}^n|/k )^2 \big], we obtain the explicit discrete scheme of Eq. (9) as follows:

u_{i,j}^{n+1} = u_{i,j}^n + \Delta t \left\{ \xi_{i,j}^n \left[ \alpha\, |\nabla u_{i,j}^n|\, \mathrm{div}\!\left( \frac{\nabla u_{i,j}^n}{|\nabla u_{i,j}^n|} \right) + \beta\, \frac{D^2 u_{i,j}^n(\nabla u_{i,j}^n, \nabla u_{i,j}^n)}{|\nabla u_{i,j}^n|^2} \right] - \big( 1 - \xi_{i,j}^n \big)\big( u_{i,j}^n - I_{i,j} \big) \right\}
v_{i,j}^{n+1} = v_{i,j}^n + a_n\, \mathrm{div}\!\left( \frac{\nabla v_{i,j}^n}{|\nabla v_{i,j}^n|} \right) - b\,\big( v_{i,j}^n - u_{i,j}^n \big)    (13)
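
A sketch of one step of this explicit scheme is given below, assuming the central_derivatives() helper from the previous sketch; all parameter values (dt, alpha, beta, k, a_n, b) are user choices, and the small eps guarding the divisions is an implementation detail not discussed in the paper.

```python
import numpy as np

def acd_filter_step(u, v, I, dt, alpha, beta, k, a_n, b):
    """One explicit update of the coupled scheme in Eq. (13)."""
    eps = 1e-8  # avoid division by zero in perfectly flat regions

    ux, uy, uxx, uyy, uxy, gu = central_derivatives(u)
    vx, vy, vxx, vyy, vxy, gv = central_derivatives(v)

    # diffusion factor xi = c(|grad v|) from Eq. (3)
    xi = 1.0 / (1.0 + (gv / k) ** 2)

    # tangential term |grad u| * div(grad u / |grad u|)
    tangential = (uy**2 * uxx - 2*ux*uy*uxy + ux**2 * uyy) / (gu**2 + eps)
    # normal term D^2 u(grad u, grad u) / |grad u|^2
    normal = (ux**2 * uxx + 2*ux*uy*uxy + uy**2 * uyy) / (gu**2 + eps)

    u_next = u + dt * (xi * (alpha * tangential + beta * normal)
                       - (1.0 - xi) * (u - I))

    # coupled evolution of v (second line of Eq. (13))
    curv_v = (vy**2 * vxx - 2*vx*vy*vxy + vx**2 * vyy) / ((gv**2 + eps) ** 1.5)
    v_next = v + a_n * curv_v - b * (v - u)

    return u_next, v_next
```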

In order to demonstrate the performance of the ACD filter method, two groups of ESPI fringe patterns and their filtering results are shown. One group is based on a computer-simulated fringe, and the other on an experimentally obtained fringe. For comparison, the images are denoised both by the single anisotropic diffusion equation, Eq. (7) (called the SAD filter method for short), and by the proposed ACD filter method.

Figure 1 gives a group of computer-simulated fringe patterns, which are generated based on the equation

I(x, y) = P(x, y) + Q(x, y)\, N_m(x, y)\, \cos\varphi(x, y) + N_A(x, y)    (14)
where P(x, y) is the background and Q(x, y)/P(x, y) is the fringe contrast; N_A(x, y) and N_m(x, y) are the additive and multiplicative noise of the fringe pattern, respectively. Here we chose P(x, y) = 150, Q(x, y) = 50, N_m(x, y) uniformly distributed over the interval [0, 1], and N_A(x, y) as additive white Gaussian noise generated by the method in Ref. [19]. The phase \varphi(x, y) is calculated by the following:
\varphi(x, y) = 20\pi \left[ \left( \frac{x - M/2}{M} \right)^2 + \left( \frac{y}{N} \right)^2 \right], \qquad x = 1, 2, \ldots, M, \quad y = 1, 2, \ldots, N    (15)
where M and N are the dimensions of the image. To study the influence of speckle size on the algorithm performance, noise images N_A(x, y) with speckle sizes of one and two pixels are generated by the method of Ref. [19]. The two initial ESPI fringe patterns simulated by Eqs. (14) and (15) are shown in Figs. 1(a-1) and 1(b-1).
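
A sketch of this simulation in NumPy is given below; it follows Eqs. (14) and (15) but, unlike the paper, models N_A(x, y) as plain pixel-wise white Gaussian noise rather than the speckle-sized noise of Ref. [19], and the image size and noise level are illustrative assumptions.

```python
import numpy as np

def simulated_espi_fringe(M=256, N=256, noise_std=20.0, seed=0):
    """Computer-simulated ESPI fringe following Eqs. (14) and (15)."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, M + 1)[:, None]
    y = np.arange(1, N + 1)[None, :]

    phi = 20 * np.pi * (((x - M / 2) / M) ** 2 + (y / N) ** 2)   # Eq. (15)
    P = 150.0                                     # background
    Q = 50.0                                      # modulation amplitude
    Nm = rng.uniform(0.0, 1.0, size=(M, N))       # multiplicative noise in [0, 1]
    NA = rng.normal(0.0, noise_std, size=(M, N))  # additive noise (assumed Gaussian, std is arbitrary)

    I = P + Q * Nm * np.cos(phi) + NA             # Eq. (14)
    return np.clip(I, 0, 255)                     # keep gray levels in 8-bit range
```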


Fig. 1 The computer-simulated fringe patterns and their filtered images. (a-1) Initial image with the speckle size of one pixel. (a-2) SAD filtered result of Fig. 1(a-1), (a-3) ACD filtered result of Fig. 1(a-1), (b-1) Initial image with the speckle size of two pixels. (b-2) SAD filtered result of Fig. 1 (b-1), (b-3) ACD filtered result of Fig. 1 (b-1).


Figure 2 is an experimentally obtained fringe pattern. The fringe pattern in Fig. 2(a) depicts the derivative of the out-of-plane displacement of a steel plate. The plate is rigidly clamped at its boundary and is subjected to a load.


Fig. 2 An experimentally obtained fringe pattern and its filtered images. (a) Initial image. (b) SAD filtered result of Fig. 2(a), (c) ACD filtered result of Fig. 2(a).


To avoid the inherent edge distortion error, we discarded 10 pixels at the boundary. Figures 1(a-2) and 1(a-3) are the SAD and ACD filtered results of Fig. 1(a-1), respectively. Similar results for Figs. 1(b-1) and 2(a) are shown in Figs. 1(b-2), 1(b-3) and Figs. 2(b), 2(c), respectively. The values of the parameters a(t) and b in the ACD filter method are b = 0.02, a_n = 35 for 0 ≤ n ≤ 9, a_n = a_{n-1} - 0.7 for 10 ≤ n ≤ 55, and a_n = a_{n-1}/2 for n > 55; the remaining filter parameters are given in Table 1.
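
For reference, the quoted schedule for a_n (with b held at 0.02) can be generated as in the sketch below; the maximum iteration count is an arbitrary choice for illustration.

```python
def a_schedule(n_max=100):
    """Time-varying diffusion weight a_n quoted in Section 3:
    a_n = 35 for 0 <= n <= 9, a_n = a_{n-1} - 0.7 for 10 <= n <= 55,
    and a_n = a_{n-1} / 2 for n > 55."""
    a = [35.0] * 10
    for n in range(10, n_max + 1):
        a.append(a[-1] - 0.7 if n <= 55 else a[-1] / 2.0)
    return a
```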


Table 1. Parameters Used in Figs. 1 and 2

The results show that both the SAD and ACD filter methods can remove noise effectively and preserve the useful information of the fringe patterns. However, some blocks are noticeable inside the SAD filtered results (see Figs. 1(a-2), 1(b-2) and 2(b)). This is because the SAD filter method only filters along the image edge, so concentrated high-brightness pixels are not treated as noise and are preserved as blocks. In the ACD filtered results, the fringe patterns are smoother (see Figs. 1(a-3), 1(b-3) and 2(c)), which benefits from the moderate filtering along the gradient direction.

In addition, from Fig. 1, one can see that although both the SAD and the proposed ACD filtered results degrade as the speckle size increases, the quality of our filtered results is always better than that of the SAD results for the same speckle size.

In order to further evaluate the performance of the two filter methods quantitatively, a comparative parameter, the speckle index s [20], is calculated for each filtered result. The speckle index, which quantifies the local smoothness of the filtered fringe patterns, is evaluated as the average of the ratio of the local standard deviation to the local mean:

s = \frac{1}{M \times N} \sum_{k=1}^{M} \sum_{l=1}^{N} \frac{\sigma_{k,l}}{\bar{I}_{k,l}}    (16)
where \bar{I}_{k,l} is the average gray level in the 3 × 3 window centered on the current point, and the local standard deviation \sigma_{k,l} is defined as
\sigma_{k,l} = \sqrt{ \frac{1}{8} \sum_{i=-1}^{1} \sum_{j=-1}^{1} \big( I_{k-i,\,l-j} - \bar{I}_{k,l} \big)^2 }    (17)
The speckle index can be regarded as the average reciprocal of the SNR (signal-to-noise ratio), where the signal is the local mean and the noise is the local standard deviation. Therefore, a low speckle index is regarded as an indication of local smoothness of the fringe pattern.
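
A compact way to compute the speckle index of Eqs. (16) and (17) is sketched below; the 9/8 factor reproduces the 1/8 normalization of Eq. (17), while the reflective border padding is an implementation convenience (the paper simply discards boundary pixels).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_index(img):
    """Speckle index of Eqs. (16)-(17): mean over all pixels of the local
    standard deviation divided by the local mean in a 3x3 window."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size=3, mode='reflect')
    local_sq_mean = uniform_filter(img ** 2, size=3, mode='reflect')
    # variance with the 1/8 normalization of Eq. (17) (9 samples, divisor 8)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0) * (9.0 / 8.0)
    return float(np.mean(np.sqrt(local_var) / (local_mean + 1e-8)))
```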

Table 2 shows the speckle index values of the filtered results in Figs. 1 and 2. Two observations emerge from these numerical tests. First, as the speckle size increases, the speckle index improves for both methods. The reason is as follows. Because of the noise in the fringe patterns, the gray levels of some pixels are changed. Small-sized speckle noise can be removed more easily, and the gray levels of those pixels can be recovered. Large-sized speckle noise, however, is hard to remove and is only averaged with its neighbors. Therefore the filtered image corresponding to the ESPI fringe pattern with larger speckle noise is smoother, i.e., it has a lower speckle index. Second, the proposed ACD algorithm gives a lower speckle index than the SAD filter method under the same conditions for Figs. 1 and 2, so our filtering results are better than those obtained by the SAD filter method. Such results are expected because our algorithm has better noise-reduction capability.


Table 2. Performance Evaluation for Each Filtered Image Shown in Figs. 1 and 2

4. The anisotropic coupled diffusion binarization algorithm for ESPI fringes

Now let us consider the problem of ESPI fringe binarization. A commonly used approach is the Merriman-Bence-Osher algorithm which can be implemented as follows [18]:

  • a) Initialize: u(x, y, 0) = \chi_D(x, y), where \chi_D is the characteristic function of the initial region D \subset \mathbb{R}^d. For the first iteration, u(x, y, 0) = I(x, y).
  • b) Apply diffusion to u, i.e., solve
    u_t = \nabla^2 u, \qquad u(x, y, 0) = \chi_D(x, y)    (18)
  • c) “Sharpen” the diffused region by setting
    u = \begin{cases} 1 & \text{if } u > T \\ 0 & \text{otherwise} \end{cases}    (19)
where T \in [0, 1]. Then begin again.

For any time t, the level set \{(x, y) : u(x, y, t) = T\} gives the location of the dividing line between the white and black regions.

The explicit discrete scheme of Eq. (18) can be expressed as

u_{i,j}^{n+1} = u_{i,j}^n + \Delta t \big( u_{i+1,j}^n + u_{i-1,j}^n + u_{i,j+1}^n + u_{i,j-1}^n - 4u_{i,j}^n \big)    (20)

The MBO algorithm can be regarded as implementing Eq. (18) several times. This number of implementations, i.e., the number of times the diffusion equation (Eq. (18)) is run, is designated N_e for convenience. N_e is different from the parameter n, which denotes the number of iterations used to solve the diffusion equation. There are two main steps in each cycle (related to the parameter N_e) of the MBO algorithm: solving Eq. (18) (related to the parameter n) and binarizing its diffusion result. Therefore, the essential idea of the MBO algorithm is that the binary boundary of the image is repaired utilizing the heat conduction equation in an iterative manner, i.e., by binarizing the diffusion result at the end of each cycle. In this process, the boundary of the image is gradually repaired. In fact, Eq. (18) is the isotropic diffusion equation describing heat conduction. Since it diffuses information equally in all directions, this processing also destroys the image content.
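
A minimal sketch of the MBO loop described above is given below; it alternates a few explicit steps of the scheme in Eq. (20) with thresholding at level T. The iteration counts, the time step, and the normalization of gray levels into [0, 1] are illustrative assumptions.

```python
import numpy as np

def mbo_binarization(I, T=0.5, n_cycles=10, n_diffuse=5, dt=0.2):
    """MBO binarization sketch [18]: alternate isotropic diffusion
    (explicit scheme of Eq. (20)) with sharpening at threshold T."""
    u = I.astype(float)
    u = (u - u.min()) / (u.max() - u.min())        # map gray levels into [0, 1]
    for _ in range(n_cycles):                      # N_e repair cycles
        for _ in range(n_diffuse):                 # n diffusion steps per cycle
            lap = (np.roll(u, -1, 0) + np.roll(u, 1, 0)
                   + np.roll(u, -1, 1) + np.roll(u, 1, 1) - 4.0 * u)
            u = u + dt * lap                       # dt <= 0.25 keeps the scheme stable
        u = np.where(u > T, 1.0, 0.0)              # sharpen: Eq. (19)
    return u
```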

Replacing the heat conduction equation with the anisotropic coupled diffusion equations (Eq. (9)) in the above binarization process, we have

u_t = c(|\nabla v|)\left[ \alpha\, |\nabla u|\, \mathrm{div}\!\left( \frac{\nabla u}{|\nabla u|} \right) + \beta\, \frac{D^2 u(\nabla u, \nabla u)}{|\nabla u|^2} \right] - \big[ 1 - c(|\nabla v|) \big](u - I), \qquad u(x, y, 0) = \chi_D(x, y)
v_t = a(t)\, \mathrm{div}\!\left( \frac{\nabla v}{|\nabla v|} \right) - b\,(v - u), \qquad v(x, y, 0) = \chi_D(x, y)    (21)
For the first iteration, u(x, y, 0) = I(x, y) and v(x, y, 0) = I(x, y). The above is called the anisotropic coupled diffusion (ACD) binarization algorithm. As the anisotropic coupled diffusion equations repair the image boundary mainly along the fringe direction, the useful information of the fringe patterns is well preserved.
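
As a rough sketch, the ACD binarization loop can be obtained from the MBO loop above by swapping the isotropic diffusion for the coupled anisotropic update; the version below reuses the acd_filter_step() helper sketched in Section 3 and takes the fidelity image of each cycle to be that cycle's initial field, an interpretation of Eq. (21) that the paper does not spell out. Iteration counts, the threshold, and the parameters packed in acd_params are illustrative.

```python
import numpy as np

def acd_binarization(I, T=0.5, n_cycles=10, n_diffuse=5, **acd_params):
    """ACD binarization sketch (Eq. (21)): repair the binary boundary with the
    coupled anisotropic update instead of isotropic heat conduction.
    acd_params should supply dt, alpha, beta, k, a_n, b for acd_filter_step()."""
    u = I.astype(float)
    u = (u - u.min()) / (u.max() - u.min())      # gray levels mapped into [0, 1]
    v = u.copy()
    for _ in range(n_cycles):                    # N_e repair cycles
        I_cycle = u.copy()                       # this cycle's initial field (assumed fidelity reference)
        for _ in range(n_diffuse):               # n anisotropic diffusion steps
            u, v = acd_filter_step(u, v, I_cycle, **acd_params)
        u = np.where(u > T, 1.0, 0.0)            # re-binarize at the end of each cycle
        v = u.copy()
    return u
```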

In order to demonstrate the performance of the above method, we applied it to Figs. 1(b-1) and 2(a), and the results of the ACD binarization algorithm are compared with those of other existing techniques, namely the classical OTSU binarization method [21] and the MBO binarization algorithm [18]. The OTSU method is used for histogram-shape-based image thresholding, i.e., for conversion of a gray-level image to a binary image [21]. The algorithm is based on the assumption that the image contains two classes of pixels (a bimodal histogram, e.g., foreground and background) and that an optimum threshold can be found that separates the two classes so that their combined intra-class variance is minimal. Because the original ESPI fringe contains strong grain-shape random noise, the fringe patterns must be filtered before the OTSU binarization method is applied. In contrast, the MBO binarization algorithm and the ACD binarization algorithm, which change the image content gradually in each diffusion process, can be applied directly to the noisy ESPI fringe.

Figures 3(a)-3(c) show the OTSU binarization result of Fig. 1(b-3) and the MBO and ACD binarization results of Fig. 1(b-1), respectively, and their centerline images, acquired by the Hilditch algorithm [22], are given in Figs. 3(d)-3(f). Similar results for the experimentally obtained fringe pattern (Fig. 2) are shown in Fig. 4. The values of the parameters a(t) and b in the ACD binarization algorithm are the same as for Fig. 1(a-3), and the remaining parameters for Figs. 3 and 4 are given in Table 3.


Fig. 3 Binarization results of Fig. 1 and their centerline images. (a) OTSU binarization result of Fig. 1(b-3). (b) MBO binarization result of Fig. 1(b-1). (c) ACD binarization result of Fig. 1(b-1). (d) Centerline of Fig. 3(a). (e) Centerline of Fig. 3(b). (f) Centerline of Fig. 3(c).



Fig. 4 Binarization results of Fig. 2 and their centerline images. (a) OTSU binarization result of Fig. 2(c). (b) MBO binarization result of Fig. 2(a). (c) ACD binarization result of Fig. 2(a). (d) Centerline of Fig. 4(a). (e) Centerline of Fig. 4(b). (f) Centerline of Fig. 4(c).



Table 3. Parameters Used in Figs. 3 and 4

The advantage of the ACD binarization algorithm over the other two methods is obvious. For the OTSU method, preprocessing of the original ESPI fringes is absolutely necessary, and if the boundaries of the filtered fringes are not smooth, the fringe centerline obtained from the OTSU binarization result will contain slight burrs (see Fig. 3(d)). Figures 3(b), 3(c), 4(b) and 4(c) are obtained from the original ESPI fringes directly, without any preprocessing. However, the MBO binarization algorithm, which is based on the isotropic heat conduction equation, destroys the fringe content (see Figs. 3(b) and 4(b)) and leads to poor centerline results (see Figs. 3(e) and 4(e)). In contrast, the ACD binarization algorithm repairs the image boundary anisotropically, and hence ideal binary fringe patterns and centerline results are successfully obtained (see Figs. 3(c), 4(c), 3(f) and 4(f)).

5. Conclusion and outlook

We present the use of efficient anisotropic coupled diffusion (ACD) equations for filtering and binarization of electronic speckle pattern interferometry fringes. The proposed ACD equations are advantageous in three important aspects. First, the anisotropic diffusion term controls the diffusion direction such that diffusion proceeds mainly along the direction of the edge and only sparingly across it, leading to effective protection of image detail and elimination of noise. Second, the fidelity term ensures that the filtered image does not stray far from the initial image. Third, the diffusion factor, which controls the diffusion speed of each pixel, is determined by a nonlinear diffusion equation. The effectiveness of the proposed ACD filter method is demonstrated by means of both computer-simulated and experimentally obtained fringe patterns, and a significant improvement in the quality of the fringe patterns is demonstrated. Another advantage of the ACD binarization algorithm is its simplicity: it can work directly on original noisy images, and preprocessing is not required. A further merit of the proposed ACD binarization algorithm is that it repairs the image boundary anisotropically, and hence avoids redundant burrs and protects the useful content of the fringe.

Acknowledgment

Sponsored by the National Natural Science Foundation of China under grant No. 61102150 and the Open Foundation of the Key Laboratory of Opto-electronic Information Technology of the Ministry of Education (Tianjin University) under grant No. 2012KFKT003. The authors thank the editors and the anonymous reviewers for their constructive and helpful comments.

References and links

1. V. Bavigadda, R. Jallapuram, E. Mihaylova, and V. Toal, "Electronic speckle-pattern interferometer using holographic optical elements for vibration measurements," Opt. Lett. 35(19), 3273–3275 (2010).

2. J. L. Marroquin, M. Servin, and R. Rodriguez-Vera, "Adaptive quadrature filters and the recovery of phase from fringe pattern images," J. Opt. Soc. Am. A 14(8), 1742–1753 (1997).

3. K. Genovese, L. Lamberti, and C. Pappalettere, "A comprehensive ESPI based system for combined measurement of shape and deformation of electronic components," Opt. Lasers Eng. 42(5), 543–562 (2004).

4. G. Rajshekhar and P. Rastogi, "Fringe analysis: Premise and perspectives," Opt. Lasers Eng. 50(8), iii–x (2012).

5. Q. Yu, X. Sun, X. Liu, and Z. Qiu, "Spin filtering with curve windows for interferometric fringe patterns," Appl. Opt. 41(14), 2650–2654 (2002).

6. C. Tang, L. Wang, H. Yan, and C. Li, "Comparison on performance of some representative and recent filtering methods in electronic speckle pattern interferometry," Opt. Lasers Eng. 50(8), 1036–1051 (2012).

7. A. P. Witkin, "Scale-space filtering," Proc. IJCAI, 1019–1021 (Karlsruhe, 1983).

8. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990).

9. F. Catté, P.-L. Lions, J.-M. Morel, and T. Coll, "Image selective smoothing and edge detection by nonlinear diffusion," SIAM J. Numer. Anal. 29(1), 182–193 (1992).

10. L. Alvarez, P.-L. Lions, and J.-M. Morel, "Image selective smoothing and edge detection by nonlinear diffusion," SIAM J. Numer. Anal. 29(3), 845–866 (1992).

11. Y. Chen, C. A. Z. Barcelos, and B. A. Mair, "Smoothing and edge detection by time-varying coupled nonlinear diffusion equations," Comput. Vis. Image Underst. 82(2), 85–100 (2001).

12. C. Tang, L. Han, H. Ren, T. Gao, Z. Wang, and K. Tang, "The oriented-couple partial differential equations for filtering in wrapped phase patterns," Opt. Express 17(7), 5606–5617 (2009).

13. Y. Wang, X. Ji, and Q. Dai, "Fourth-order oriented partial-differential equations for noise removal of two-photon fluorescence images," Opt. Lett. 35(17), 2943–2945 (2010).

14. C. Tang, F. Zhang, H. Yan, and Z. Chen, "Denoising in electronic speckle pattern interferometry fringes by the filtering method based on partial differential equations," Opt. Commun. 260(1), 91–96 (2006).

15. C. Tang, F. Zhang, B. Li, and H. Yan, "Performance evaluation of partial differential equation models in electronic speckle pattern interferometry and the delta-mollification phase map method," Appl. Opt. 45(28), 7392–7400 (2006).

16. F. Zhang, W. Liu, C. Tang, J. Wang, and L. Ren, "Variational denoising method for electronic speckle pattern interferometry," Chin. Opt. Lett. 6(1), 38–40 (2008).

17. C. Tang, L. Wang, and H. Yan, "Overview of anisotropic filtering methods based on partial differential equations for electronic speckle pattern interferometry," Appl. Opt. 51(20), 4916–4926 (2012).

18. B. Merriman, J. Bence, and S. Osher, "Motion of multiple junctions: a level set approach," J. Comput. Phys. 112(2), 334–363 (1994).

19. P. Zhou and K. E. Goodson, "Subpixel displacement and deformation gradient measurement using digital image/speckle correlation (DISC)," Opt. Eng. 40(8), 1613–1620 (2001).

20. A. Dávila, G. H. Kaufmann, and D. Kerr, "Scale-space filter for smoothing electronic speckle pattern interferometry fringes," Opt. Eng. 35(12), 3549–3554 (1996).

21. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979).

22. http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/azar/skeleton.html
