
Subspace-based method for phase retrieval in interferometry

Open Access

Abstract

A subspace-based method is applied to phase shifting interferometry to obtain, in real time, the values of the phase shifts between data frames at each pixel point. A generalized phase extraction algorithm then allows the phase distribution to be computed. The method is applicable to spherical beams and handles nonsinusoidal waveforms effectively. Numerical simulations demonstrate phase measurement with high accuracy even in the presence of noise.

©2005 Optical Society of America

1. Introduction

In phase shifting interferometry, phase increments between the object and reference beams are usually introduced by a piezo-actuator device (PZT). One of the sources of error associated with the use of a PZT is phase shift calibration error [1]: the user relies on the shift value obtained from the calibration as being exact, so any miscalibration propagates into the computed phase. A nonsinusoidal waveform of the signal, arising from laser cavity or CCD nonlinearity, is another probable source of error in the computation of phase. Several phase shifting algorithms have been proposed to overcome these errors [2-4]. However, these algorithms impose conditions on the phase step values in order to minimize the errors due to nonsinusoidal waveforms, whose harmonic content is assumed to be known a priori [2-4]. Recently a phase shifting algorithm has been proposed which accommodates higher order harmonics and facilitates the use of arbitrary phase steps [5]. However, that algorithm requires an additional denoising procedure in order to extract the phase steps in the presence of noise. This intermediate step unnecessarily adds to the computational cost of the method and is itself a potential source of error.

In this paper an algorithm is developed that can determine the phase of the wavefront in the presence of systematic error sources such as phase shift miscalibration and nonsinusoidal waveforms. A salient feature of the algorithm is that it computes, in real time, the values of the phase steps at each pixel point. The algorithm, based on the MUltiple SIgnal Classification (MUSIC) technique [6,7], thus handles miscalibration errors and nonsinusoidal waveforms while offering the flexibility of choosing spherical beams and arbitrary phase step values between 0 and π radians. The paper also investigates the influence of white Gaussian noise on the performance of the algorithm [8]. The advantage of the proposed algorithm over the one previously suggested [5] lies in its ability to determine the phase step values pixelwise without incorporating a denoising procedure. The proposed method thus not only reduces the computational burden but also eliminates a potential source of error in the determination of the phase step values.

MUSIC, a subspace-based method, has been used successfully to estimate the frequencies of sinusoids corrupted with white noise. Although spectrum estimation can be handled efficiently by the Fourier transform, resolving closely spaced frequencies in the presence of noise is troublesome with a limited number of data samples. Drawing a parallel between the frequencies present in the spectrum and the phase shifts applied to the PZT, the proposed method offers an innovative means of estimating the phase step values at each pixel point in real time. The method is based on a canonical decomposition of a positive definite Toeplitz matrix formed from an estimated covariance sequence. The sequence, corresponding to the phase shifted intensity images buried in white noise, is acquired temporally. This decomposition yields a very important result as far as the estimation of frequency is concerned. Since the frequencies are estimated using the eigendecomposition of the covariance matrix, the method is referred to as a subspace-based method.

Section 2 outlines the MUSIC algorithm. Section 3 shows simulation results corresponding to the extraction of phase step values using the forward-backward approach [9] applied in the design of the covariance sequence. Section 4 presents a generalized approach by which the phase distribution can be obtained. This section also demonstrates typical errors that may arise in computing the phase pixelwise in the presence of noise using the MUSIC algorithm.

2. Subspace-based method

The recorded fringe intensity at a point (x, y) on the tth frame is given by

$$I(t) = I_{dc} + \sum_{k=1}^{\kappa} a_k e^{ik\varphi} u_k^{t} + \sum_{k=1}^{\kappa} a_k^{*} e^{-ik\varphi} \left(u_k^{*}\right)^{t} + \eta(t) \qquad \text{for } t = 0, 1, \ldots, m, \ldots, N-1 \tag{1}$$

where $I_{dc}$ is the local average value of intensity, $a_k$ is the complex Fourier coefficient, $i=\sqrt{-1}$, $u_k = \exp(ik\alpha)$, the superscript $*$ denotes the complex conjugate, $\eta$ is white Gaussian noise with mean zero and variance $\sigma^2$; and φ, α, and k represent the phase distribution, the phase step, and the order of the harmonics, respectively.
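As an illustration, the following sketch simulates the temporal intensity sequence of Eq. (1) at a single pixel. The values chosen for $I_{dc}$, the coefficients $a_k$, φ, α, and the SNR are placeholders for this example only, and the phase of the k-th harmonic is taken as kφ, as in Appendix A.

```python
# Minimal simulation of Eq. (1) at one pixel; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def intensity_sequence(phi, alpha, N, I_dc=1.0, a=(0.6, 0.2), snr_db=30.0):
    """Return N temporal samples I(t) for kappa = len(a) harmonics plus white noise."""
    t = np.arange(N)
    signal = np.full(N, I_dc, dtype=complex)
    for k in range(1, len(a) + 1):
        term = a[k - 1] * np.exp(1j * k * phi) * np.exp(1j * k * alpha * t)
        signal += term + np.conj(term)              # harmonic and its complex conjugate
    signal = signal.real
    noise_var = np.var(signal) / 10 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(noise_var), N)

I = intensity_sequence(phi=0.7, alpha=np.pi / 4, N=14)   # e.g., kappa = 2, alpha = pi/4
```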

The first step consists of forming the covariance matrix from the N recorded phase shifted sequences. The covariance of the signal I(t) in Eq. (1) is defined as [10]

$$r(p) = E\left[I(t)\,I^{*}(t-p)\right] = \sum_{n=0}^{2\kappa} A_n^{2} e^{i\omega_n p} + \sigma^{2}\delta_{p,0} \tag{2}$$

where $E[\cdot]$ represents the expectation operator, which averages over the ensemble of realizations; the terms $A_0^2, A_1^2, \ldots, A_{2\kappa}^2$ are explained in Appendix A, $\sigma^2$ is the variance, and $\delta_{p,0}$ is the Kronecker delta ($\delta_{g,h}=1$ if $g=h$; $\delta_{g,h}=0$ otherwise). The reader is referred to Appendix A for the derivation of Eq. (2). The covariance of the function I(t) is assumed to depend only on the lag between the two averaged samples. The covariance matrix can thus be written as [6,7,10]

$$\mathbf{R}_I = E\left[\mathbf{I}^{c}(t)\,\mathbf{I}(t)\right] = \begin{bmatrix} r(0) & r^{*}(1) & \cdots & r^{*}(m-1) \\ r(1) & r(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & r^{*}(1) \\ r(m-1) & \cdots & r(1) & r(0) \end{bmatrix} \tag{3}$$

where $\mathbf{I}(t) = [I(t-1), \ldots, I(t-m)]$, m is the covariance length, and $(\cdot)^{c}$ denotes the conjugate transpose of a vector or matrix. The covariance matrix $\mathbf{R}_I$ can be shown to have the form

$$\mathbf{R}_I = \underbrace{\mathbf{A}\mathbf{P}\mathbf{A}^{c}}_{\mathbf{R}_s} + \underbrace{\sigma^{2}\mathbf{I}}_{\mathbf{R}_{\varepsilon}} \tag{4}$$

where $\mathbf{R}_s$ and $\mathbf{R}_{\varepsilon}$ are the signal and noise contributions, $\mathbf{A}_{m\times(2\kappa+1)} = [\mathbf{a}(\omega_0)\ \cdots\ \mathbf{a}(\omega_{2\kappa})]$, where, for instance, the element $\mathbf{a}(\omega_0)$ is an $m\times 1$ vector with unity entries corresponding to $I_{dc}$ and $\mathbf{a}(\omega_1) = [1\ e^{i\alpha}\ \cdots\ e^{i(m-1)\alpha}]^{T}$; $\mathbf{I}$ is the $m\times m$ identity matrix; and the $(2\kappa+1)\times(2\kappa+1)$ matrix $\mathbf{P}$ is

$$\mathbf{P} = \begin{bmatrix} A_0^{2} & 0 & \cdots & 0 \\ 0 & A_1^{2} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & A_{2\kappa}^{2} \end{bmatrix} \tag{5}$$
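The structure of Eqs. (4)-(5) can be sketched numerically as follows. Here κ=2, and the frequencies {0, ±α, ±2α}, the diagonal entries of P, and the noise variance are illustrative values rather than quantities taken from the simulations reported below.

```python
# Sketch of R_I = A P A^c + sigma^2 I from Eqs. (4)-(5); all values are illustrative.
import numpy as np

m, alpha, sigma2 = 7, np.pi / 4, 0.01
omegas = np.array([0.0, alpha, -alpha, 2 * alpha, -2 * alpha])   # 2*kappa + 1 frequencies
A = np.exp(1j * np.outer(np.arange(m), omegas))                  # columns a(omega_n), size m x (2*kappa+1)
P = np.diag([1.0, 0.36, 0.36, 0.04, 0.04])                       # A_n^2 on the diagonal
R_I = A @ P @ A.conj().T + sigma2 * np.eye(m)                    # signal part R_s plus noise part R_eps
```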

Since $\mathbf{R}_I$ is positive semidefinite, its eigenvalues are nonnegative. The eigenvalues of $\mathbf{R}_I$ can be ordered as $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq \cdots \geq \lambda_m$. Let $\mathbf{S}_{m\times n} = [\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_n]$ be the orthonormal eigenvectors associated with $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. The space spanned by $\{\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_n\}$ is known as the signal subspace. The set of orthonormal eigenvectors $\mathbf{G}_{m\times(m-n)} = [\mathbf{g}_1, \mathbf{g}_2, \ldots, \mathbf{g}_{m-n}]$ associated with the eigenvalues $\lambda_{n+1} \geq \lambda_{n+2} \geq \cdots \geq \lambda_m$ spans a subspace known as the noise subspace. Since $\mathbf{A}\mathbf{P}\mathbf{A}^{c} \in \mathbb{C}^{m\times m}$ ($\mathbb{C}^{m\times m}$ denotes the set of $m\times m$ complex matrices) has rank n (n<m), it has n nonzero eigenvalues and the remaining m-n eigenvalues are zero. If we further suppose that (λ, r) is an eigenpair of $\mathbf{L} \in \mathbb{C}^{m\times m}$ and $\mathbf{W} = \mathbf{L} + \rho\mathbf{I}$ with $\rho \in \mathbb{C}$, then (λ+ρ, r) is an eigenpair of $\mathbf{W}$; in consequence we obtain $\lambda_t = \tilde{\lambda}_t + \sigma^2$, where $\tilde{\lambda}_t$ are the eigenvalues of $\mathbf{A}\mathbf{P}\mathbf{A}^{c}$ and t = 1, 2, 3, …, m. We observe that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq \sigma^2$ and $\lambda_{n+1} = \cdots = \lambda_m = \sigma^2$. Following this corollary and from Eq. (4) we get

$$\mathbf{R}_I\mathbf{G} = \mathbf{G}\begin{bmatrix} \lambda_{n+1} & 0 & \cdots & 0 \\ 0 & \lambda_{n+2} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_{m} \end{bmatrix} = \sigma^{2}\mathbf{G} = \mathbf{A}\mathbf{P}\mathbf{A}^{c}\mathbf{G} + \sigma^{2}\mathbf{G} \tag{6}$$

The last equality in Eq. (6) means that $\mathbf{A}\mathbf{P}\mathbf{A}^{c}\mathbf{G} = 0$, and since $\mathbf{A}\mathbf{P}$ has full column rank we have $\mathbf{A}^{c}\mathbf{G} = 0$. This means that the sinusoids $\{\mathbf{a}(\omega_k)\}_{k=0}^{2\kappa}$ are orthogonal to the noise subspace, which can be stated as $\mathbf{a}^{c}(\omega_k)\mathbf{g}_j = 0$ for $j = 1, \ldots, m-n$. Hence, the true frequencies $\{\omega_k\}_{k=0}^{2\kappa}$ are the only solutions of $\mathbf{a}^{c}(\omega)\mathbf{G}\mathbf{G}^{c}\mathbf{a}(\omega) = \|\mathbf{G}^{c}\mathbf{a}(\omega)\|^{2} = 0$ for m>n. Here, $(\cdot)^{T}$ represents the transpose of a matrix. Since, in practice, only the estimate $\hat{\mathbf{R}}_I$ of $\mathbf{R}_I$ is available, only the estimate $\hat{\mathbf{G}}$ of $\mathbf{G}$ can be determined. In the present study we employ root MUSIC [11], which computes the frequencies as the angular positions of the n roots of the equation

$$\mathbf{a}^{T}(z^{-1})\,\hat{\mathbf{G}}\hat{\mathbf{G}}^{c}\,\mathbf{a}(z) = 0 \tag{7}$$

that are nearest to and inside the unit circle. Here, $\mathbf{a}(z)$ is obtained from $\mathbf{a}(\omega)$ by replacing $e^{i\omega}$ with $z^{-1}$, giving $\mathbf{a}(z) = [1\ z^{-1}\ \cdots\ z^{-(m-1)}]^{T}$. Since the minimum possible value of m is n+1, it can be observed from Eq. (7) that we need a number of data frames (N) that is at least twice the number of sinusoidal components in the signal. Hence, if κ=2, which means n=5 (since n=2κ+1), we need at least ten data frames (the dc component is also counted as a frequency). Hence, the minimum number of data frames required when using the MUSIC method for phase extraction is 4κ+2.
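A minimal root-MUSIC sketch along the lines of Eq. (7) is given below. It assumes a Hermitian covariance estimate is already in hand and reads the coefficients of the polynomial $\mathbf{a}^{T}(z^{-1})\hat{\mathbf{G}}\hat{\mathbf{G}}^{c}\mathbf{a}(z)$ off the diagonals of $\hat{\mathbf{G}}\hat{\mathbf{G}}^{c}$; the helper name root_music_freqs is ours and not part of any library.

```python
# Root-MUSIC sketch for Eq. (7): frequencies from the roots nearest the unit circle.
import numpy as np

def root_music_freqs(R_hat, n):
    """Return n frequency estimates (rad/sample) from an m x m covariance estimate."""
    m = R_hat.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R_hat)            # eigenvalues in ascending order
    G = eigvecs[:, : m - n]                             # noise subspace: smallest m-n eigenvectors
    C = G @ G.conj().T
    # Coefficients of a^T(z^-1) G G^c a(z), from the highest power of z to the lowest:
    # the coefficient of each power is a sum along one diagonal of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(-(m - 1), m)])
    roots = np.roots(coeffs)
    inside = roots[np.abs(roots) < 1.0]                 # keep roots inside the unit circle
    closest = inside[np.argsort(1.0 - np.abs(inside))[:n]]   # n roots nearest the circle
    return np.angle(closest)
```

For κ=2 one would call root_music_freqs(R_hat, n=5); the phase step α can then be read off, for instance, as the smallest nonzero magnitude among the estimated frequencies {0, ±α, ±2α}.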

To apply MUSIC for estimating the phase step, one first needs to find the number of harmonics present in the signal so that an appropriate value of n can be determined. The details of the selection of m and N are explained in the next section.

In many cases κ is unknown and can be determined [12] by observing the singular value decomposition (SVD) of the matrix $\mathbf{R}_I$ in Eq. (3). For a noiseless signal, the SVD $\mathbf{R}_I = \mathbf{U}\mathbf{S}\mathbf{V}^{T}$ results in a diagonal matrix $\mathbf{S}$ with 2κ+1 nonzero and m-2κ-1 zero singular values, where $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices. If the data is noisy, the M=2κ+1 principal values of $\mathbf{S}$ still tend to be larger than the m-M values which were originally zero. In addition, the M eigenvectors corresponding to the M eigenvalues of $\mathbf{R}_I^{T}\mathbf{R}_I$ are less susceptible to noise perturbations than the remaining m-M eigenvectors. Figure 1 illustrates typical singular values of $\mathbf{S}$ obtained from the SVD of the matrix $\mathbf{R}_I$ with noise at an SNR of 10 dB and without noise, for κ=2 in Eq. (1). Although eight frequencies were assumed to be present during the estimation, only five principal values of $\mathbf{S}$ (corresponding to $I_{dc}$, α, -α, 2α, -2α) for the noisy and noiseless signals show a distinctly larger magnitude than the remaining values. The plot thus allows a reliable estimation of the number of harmonics.
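The order-selection step can be sketched as a simple largest-gap test on the singular values of the covariance estimate. This is a simplified stand-in for the procedure of [12], and the function name is ours.

```python
# Crude model-order estimate: count the singular values above the largest gap.
import numpy as np

def estimate_num_components(R_hat):
    """Return the number of dominant singular values of the covariance estimate."""
    s = np.linalg.svd(R_hat, compute_uv=False)            # singular values, descending
    ratios = s[:-1] / np.maximum(s[1:], 1e-12 * s[0])     # drop between consecutive values
    return int(np.argmax(ratios)) + 1                     # the largest drop separates signal from noise
```

For a covariance estimate like the one underlying Fig. 1, one would expect this to return 5, i.e., n = 2κ+1 with κ=2.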

Fig. 1. Plot of the diagonal of S versus frequency for the noiseless signal and the signal with SNR = 10 dB.

3. Evaluation of the algorithm

The concept is tested by simulating the fringe pattern in Eq. (1) with phase step α=π/4, κ=2, and phase φ given by

$$\varphi(x,y) = \frac{2\pi}{\lambda}\left[(x - x')^{2} + (y - y')^{2}\right] \tag{8}$$

where (x′, y′) is the origin of the fringe pattern. In practice, only the estimate $\hat{\mathbf{R}}_I$ of the covariance matrix $\mathbf{R}_I$ is available, and the sample covariance matrix is designed using [9]

$$\hat{\mathbf{R}}_I = \frac{1}{2N}\sum_{t=m}^{N}\left\{ \begin{bmatrix} I^{*}(t-1) \\ I^{*}(t-2) \\ \vdots \\ I^{*}(t-m) \end{bmatrix} \begin{bmatrix} I(t-1) & I(t-2) & \cdots & I(t-m) \end{bmatrix} + \begin{bmatrix} I^{*}(t-m) \\ \vdots \\ I^{*}(t-2) \\ I^{*}(t-1) \end{bmatrix} \begin{bmatrix} I(t-m) & \cdots & I(t-2) & I(t-1) \end{bmatrix} \right\} \tag{9}$$

which is as close as possible to $\mathbf{R}_I$ in Eq. (3) in the least squares sense. Methods which obtain the frequency estimates from $\hat{\mathbf{R}}_I$ given in Eq. (9) are called forward-backward approaches [9]. We first discuss retrieving the phase step in the presence of additive white Gaussian noise with an SNR between 0 and 70 dB. First, the number of frequencies present in the signal must be determined. In the present example, the value of n (the number of frequencies) is determined to be n=5 using the method suggested in Section 2. Hence, the minimum number of data frames N for extracting the phase step values is 10 (4κ+2). Subsequently, an appropriate value of m must be selected such that N>m>n. It is observed that increasing m beyond n improves the accuracy of the frequency estimates. This is achieved at a higher computational cost, and an m too close to N does not yield a covariance matrix $\hat{\mathbf{R}}_I$ similar to $\mathbf{R}_I$, which in turn results in spurious frequency estimates. Performing the eigendecomposition of $\hat{\mathbf{R}}_I$ gives estimates of the eigenvectors $\mathbf{G}$ and $\mathbf{S}$, represented as $\hat{\mathbf{G}}$ and $\hat{\mathbf{S}}$, respectively. Finally, using Eq. (7), the frequencies, and hence the phase step values α, are estimated pixelwise.
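A sketch of the forward-backward estimate of Eq. (9) is given below; I is assumed to be a 1-D array holding the N temporal intensity samples at one pixel, and the function name fb_covariance is ours. Its output would then be fed to the root-MUSIC step of Section 2 to obtain the phase step at that pixel.

```python
# Forward-backward sample covariance of Eq. (9); I holds the N samples at one pixel.
import numpy as np

def fb_covariance(I, m):
    """Return the m x m forward-backward covariance estimate of the sequence I(0..N-1)."""
    I = np.asarray(I, dtype=complex)
    N = I.size
    R = np.zeros((m, m), dtype=complex)
    for t in range(m, N + 1):
        f = I[t - m:t][::-1]                      # [I(t-1), I(t-2), ..., I(t-m)]
        b = f[::-1]                               # [I(t-m), ..., I(t-2), I(t-1)]
        R += np.outer(f.conj(), f) + np.outer(b.conj(), b)
    return R / (2 * N)
```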

Figures 2(a)–(b) show the plots for the case when the number of data frames is N=10, with m=7 and m=9, respectively. As expected, the phase step α at an arbitrary pixel location on the data frame cannot be estimated at lower SNRs (below 25 dB) from this plot. Figure 2(b) shows that a value of m too close to N does not yield reliable results. In the second case the number of data frames is N=14. Figures 2(c)–(e) show typical plots of the phase step α with m=7, 9, and 12, respectively. From Figs. 2(c)–2(e), it can be observed that m=9 yields better results than m=7 or 12. In the third case N=18 is selected, and the plot for m=12 is shown in Fig. 2(f). From the plot it can be observed that the phase step α can be estimated more reliably at lower SNRs than from Fig. 2(d). From these three cases, it can be concluded that the phase step α can be reliably estimated even at lower SNRs by increasing the number of data frames N, and by choosing m not too close to n or N (as a rule of thumb, midway between n and N).

Fig. 2. Plots of phase step values α (in degrees) obtained using the forward-backward approach at an arbitrary pixel point with different values of N and m.

4. Phase distribution measurement

Once the phase step values α have been estimated pixelwise, the parameters $\ell_k$ can be obtained by solving a linear Vandermonde system of equations derived from Eq. (1), which can be written as

$$\begin{bmatrix} e^{i\kappa\alpha_0} & e^{-i\kappa\alpha_0} & e^{i(\kappa-1)\alpha_0} & \cdots & 1 \\ e^{i\kappa\alpha_1} & e^{-i\kappa\alpha_1} & e^{i(\kappa-1)\alpha_1} & \cdots & 1 \\ \vdots & \vdots & \vdots & & \vdots \\ e^{i\kappa\alpha_{N-1}} & e^{-i\kappa\alpha_{N-1}} & e^{i(\kappa-1)\alpha_{N-1}} & \cdots & 1 \end{bmatrix} \begin{bmatrix} \ell_{\kappa} \\ \ell_{\kappa}^{*} \\ \vdots \\ I_{dc} \end{bmatrix} = \begin{bmatrix} I_0 \\ I_1 \\ \vdots \\ I_{N-1} \end{bmatrix} \tag{10}$$

where $\alpha_0, \ldots, \alpha_{N-1}$ are the phase steps for the frames $I_0, \ldots, I_{N-1}$, respectively. The phase φ is computed from the argument of $\ell_1$. Figure 3 shows a typical phase obtained using Eq. (10). For computing the phase φ, the phase step values α obtained using fourteen data frames are used. Figure 4 shows the typical error in the computation of the phase φ for an SNR of 30 dB.
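The per-pixel solve of Eq. (10) can be sketched as an unconstrained least-squares problem. In the sketch below the columns are ordered by harmonic order (κ, …, 1, 0, -1, …, -κ), which is an equivalent rearrangement of the columns of Eq. (10); the constraint $\ell_{-k} = \ell_k^{*}$ is not enforced, alphas holds the estimated phase shift of each frame, and the function name recover_phase is ours.

```python
# Least-squares solve of (a column-reordered) Eq. (10) at one pixel; phi = arg(l_1).
import numpy as np

def recover_phase(I, alphas, kappa):
    """Return the wrapped phase at one pixel from N frames and their phase shifts."""
    I = np.asarray(I, dtype=complex)
    orders = np.arange(kappa, -kappa - 1, -1)        # kappa, ..., 1, 0, -1, ..., -kappa
    V = np.exp(1j * np.outer(alphas, orders))        # N x (2*kappa + 1) Vandermonde-type matrix
    coeffs, *_ = np.linalg.lstsq(V, I, rcond=None)   # nominally [l_kappa, ..., l_1, I_dc, l_1*, ..., l_kappa*]
    return np.angle(coeffs[kappa - 1])               # l_1 = a_1 exp(i*phi), assuming a_1 > 0
```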

Fig. 3. Plot shows the typical wrapped phase φ (in radians) for phase step values obtained from Fig. 2(d) for 30 dB noise.

Fig. 4. Plot shows the typical absolute error in the computation of the phase φ (in radians) for phase step values determined from Fig. 2(d) for 30 dB noise.

5. Conclusion

To conclude, we have proposed a new generalized approach for recovering the phase distribution in the presence of higher order harmonics. The method offers the flexibility to select arbitrary phase shifts between 0 and π. The accuracy of the measurement of the phase step in the presence of additive white Gaussian noise has been shown to increase with the number of data frames. The proposed technique works well with both diverging and converging beams, since it first retrieves the phase step values pixelwise before applying them to the Vandermonde system of equations. A further advantage of the proposed method lies in its ability to measure the phase step values in real time. Simulated results demonstrate the effectiveness of the proposed method.

Appendix A

Let us consider a signal

$$I(t) = I_{dc} + \sum_{k=1}^{\kappa} a_k e^{ik\varphi} e^{i\alpha k t} + \sum_{k=1}^{\kappa} a_k e^{-ik\varphi} e^{-i\alpha k t} + \eta(t), \qquad t = 0, 1, 2, \ldots, m, \ldots, N-1 \tag{A1}$$

Here, η(t) represents white Gaussian noise. The covariance of the function I(t) is defined as [10]

$$r(p) = E\{I(t)\,I^{*}(t-p)\} \tag{A2}$$

For simplicity, let us consider κ=1 and rewrite Eq. (A1) as

$$I(t) = I_{dc} + a_1 e^{i\varphi} e^{i\alpha t} + a_1 e^{-i\varphi} e^{-i\alpha t} + \eta(t) \tag{A3}$$

Similarly, let us write I*(t-p) for κ=1 as

$$I^{*}(t-p) = I_{dc} + a_1 e^{-i\varphi} e^{-i\alpha (t-p)} + a_1 e^{i\varphi} e^{i\alpha (t-p)} + \eta^{*}(t-p) \tag{A4}$$

Substituting Eqs. (A3) and (A4) into Eq. (A2), we obtain

$$\begin{aligned} r(p) = E\{I(t)I^{*}(t-p)\} = E\Big\{ & I_{dc}^{2} + I_{dc} a_1 e^{i\varphi} e^{i\alpha t} + I_{dc} a_1 e^{-i\varphi} e^{-i\alpha t} \\ & + e^{i\alpha p}\left(a_1^{2} + I_{dc} a_1 e^{-i\varphi} e^{-i\alpha t} + a_1^{2} e^{-2i\varphi} e^{-2i\alpha t}\right) \\ & + e^{-i\alpha p}\left(a_1^{2} + I_{dc} a_1 e^{i\varphi} e^{i\alpha t} + a_1^{2} e^{2i\varphi} e^{2i\alpha t}\right) + \eta(t)\eta^{*}(t-p) \Big\} \end{aligned} \tag{A5}$$

Equation (A5) can be written in the following compact form

$$r(p) = E\left\{ I_{dc}^{2} + c_1 + e^{i\alpha p}\left(a_1^{2} + c_2\right) + e^{-i\alpha p}\left(a_1^{2} + c_3\right) + \eta(t)\eta^{*}(t-p) \right\} \tag{A6}$$

where $c_1 = I_{dc} a_1 e^{i\varphi} e^{i\alpha t} + I_{dc} a_1 e^{-i\varphi} e^{-i\alpha t}$, $c_2 = I_{dc} a_1 e^{-i\varphi} e^{-i\alpha t} + a_1^{2} e^{-2i\varphi} e^{-2i\alpha t}$, and $c_3 = I_{dc} a_1 e^{i\varphi} e^{i\alpha t} + a_1^{2} e^{2i\varphi} e^{2i\alpha t}$.

Let $E\{I_{dc}^{2} + c_1\} = A_0^{2}$, $E\{a_1^{2} + c_2\} = A_1^{2}$, and $E\{a_1^{2} + c_3\} = A_2^{2}$. Therefore,

$$r(p) = A_0^{2} + A_1^{2} e^{i\alpha p} + A_2^{2} e^{-i\alpha p} + \sigma^{2}\delta_{p,0} \tag{A7}$$

In Eq. (A7), $\sigma^{2}\delta_{p,0}$ is the expectation of the Gaussian noise term η(t) and is given by

$$\left.\begin{aligned} E\{\eta(k)\,\eta^{*}(j)\} &= \sigma^{2}\delta_{k,j} \\ E\{\eta(k)\,\eta(k)\} &= 0 \end{aligned}\right\} \tag{A8}$$

In practice, the expectation E in Eq. (2) is computed by averaging over a finite number of frames. If a large number of frames is taken for averaging, the arguments of the exponential terms containing t in $c_1$, $c_2$, and $c_3$ sweep uniformly over 0 to 2π. In this limit, the expectations of $c_1$, $c_2$, and $c_3$ approach zero because

$$\int_{0}^{2\pi} e^{i\psi}\, d\psi = 0 \tag{A9}$$

However, if a finite number of frames is taken for averaging, the expectations of $c_1$, $c_2$, and $c_3$ have small but nonzero values. Hence, for κ harmonics in the intensity, the covariance of I(t) is finally given by

$$r(p) = E\left[I(t)\,I^{*}(t-p)\right] = \sum_{n=0}^{2\kappa} A_n^{2} e^{i\omega_n p} + \sigma^{2}\delta_{p,0} \tag{A10}$$
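As a quick numerical check of Eq. (A10) for κ=1, the sketch below compares the temporally averaged sample covariance at one lag with the closed-form expression; all parameter values are illustrative.

```python
# Check r(p) = A0^2 + A1^2 e^{i alpha p} + A2^2 e^{-i alpha p} for kappa = 1 and p != 0.
import numpy as np

rng = np.random.default_rng(1)
I_dc, a1, phi, alpha, sigma, N = 1.0, 0.5, 0.3, np.pi / 4, 0.05, 100_000
t = np.arange(N)
I = I_dc + 2 * a1 * np.cos(alpha * t + phi) + rng.normal(0.0, sigma, N)   # real form of Eq. (A3)

p = 2
r_hat = np.mean(I[p:] * np.conj(I[:-p]))                  # temporal average of I(t) I*(t-p)
r_theory = I_dc**2 + a1**2 * np.exp(1j * alpha * p) + a1**2 * np.exp(-1j * alpha * p)
print(abs(r_hat - r_theory))                              # small, since the c-terms average out
```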

Acknowledgements

This research is funded by the Swiss National Science Foundation.

References and Links

1. Y. Surrel, “Phase stepping: a new self-calibrating algorithm,” Appl. Opt. 32, 3598–3600 (1993).

2. Y. Surrel, “Design of algorithms for phase measurements by the use of phase stepping,” Appl. Opt. 35, 51–60 (1996).

3. K. Hibino, B. F. Oreb, D. I. Farrant, and K. G. Larkin, “Phase shifting for nonsinusoidal waveforms with phase-shift errors,” J. Opt. Soc. Am. A 12, 761–768 (1995).

4. K. G. Larkin and B. F. Oreb, “Design and assessment of symmetrical phase-shifting algorithms,” J. Opt. Soc. Am. A 9, 1740–1748 (1992).

5. A. Patil, R. Langoju, and P. Rastogi, “An integral approach to phase shifting interferometry using a super-resolution frequency estimation method,” Opt. Express 12, 4681–4697 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-20-4681.

6. R. O. Schmidt, “Multiple emitter location and signal parameter estimation,” in Proceedings of the RADC Spectral Estimation Workshop, Rome, NY, 243–258 (1979).

7. G. Bienvenu, “Influence of the spatial coherence of the background noise on high resolution passive methods,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Washington, DC, 306–309 (1979).

8. C. Rathjen, “Statistical properties of phase-shift algorithms,” J. Opt. Soc. Am. A 12, 1997–2008 (1995).

9. B. D. Rao and K. V. S. Hari, “Weighted subspace methods and spatial smoothing: analysis and comparison,” IEEE Transactions on Signal Processing 41, 788–803 (1993).

10. T. Söderström and P. Stoica, “Accuracy of higher-order Yule-Walker methods for frequency estimation of complex sine waves,” IEE Proceedings-F 140, 71–80 (1993).

11. A. J. Barabell, “Improving the resolution performance of eigenstructure-based direction-finding algorithms,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Boston, MA, 336–339 (1983).

12. J. J. Fuchs, “Estimating the number of sinusoids in additive white noise,” IEEE Transactions on Acoustics, Speech, and Signal Processing 36, 1846–1853 (1988).
