
Hyper ellipse fitting in subspace method for phase-shifting interferometry: practical implementation with automatic pixel selection

Open Access

Abstract

This paper presents a method of significantly improving the previously proposed simple, flexible, and accurate phase retrieval algorithm for random phase-shifting interferometry named HEFS [K. Yatabe, J. Opt. Soc. Am. A 34, 87 (2017)]. Although its effectiveness and performance were confirmed by numerical experiments in the original paper, it was found that the algorithm may not work properly if the observed fringe images contain untrusted (extremely noisy) pixels. In this paper, a method of avoiding such untrusted pixels within the estimation processes of HEFS is proposed for practical use of the algorithm. In addition, an experiment of measuring a sound field in air was conducted to show its performance on real data, a situation where the proposed improvement is critical. MATLAB codes (which can be downloaded from http://goo.gl/upcsFe) are provided within the paper to aid understanding of the main concept of the proposed methods.

© 2017 Optical Society of America

1. Introduction

In phase-shifting interferometry (PSI), estimation of the phase from phase-shifted interferograms is one of the most important factors for the accuracy of the measurement. Traditionally, the phase is retrieved based on prior knowledge of the phase-shifting steps. However, it is not easy to implement the phase-shifting steps precisely in practice owing to the extreme accuracy required of the measuring system. Therefore, many methods have been proposed for estimating unknown phase-shifting steps and/or the objective phase without prior information about the shifting steps [1–16].

Among many phase retrieving techniques, the ellipse fitting based methods [17–25] and the subspace based methods (PCA, QCA) [26–31] are promising methodologies which have recently been investigated by several authors because of their possibilities. Although they have several advantages (such as simplicity and flexibility) compared to other methods, they also have some limitations leading to accuracy loss which might prevent their practical use [32]. For example, the ellipse fitting methods may not give a reasonable estimation when the so-called Lissajous figure is not appropriately chosen, while the subspace methods may suffer from a limitation on the number of fringes [28–30].

To overcome the disadvantages of the ellipse fitting method and subspace method, the authors have proposed a phase estimation method combining the above two methods, namely the hyper ellipse fitting in subspace (HEFS) method [32]. It inherits the advantages of both methods, which makes HEFS a simple, flexible, and accurate phase retrieval algorithm. Moreover, the embedding of observed data enables simultaneous phase retrieval and denoising with little extra effort [32, 33]. Although its main target is PSI with totally unknown phase-shifting steps, the numerical experiments showed that HEFS is also beneficial for PSI whose shifting steps are perfectly known, thanks to the denoising functionality [32].

However, the original version of HEFS may not perform appropriately when the observed fringes contain untrusted regions where no fringe pattern is visible. This is because the original HEFS utilizes the singular value decomposition (SVD) as a global estimation method. That is, all pixels of the fringe images are forced to be included in the estimation process. However, in many situations, some pixels of the observed images are noisier than the others or even do not contain any fringe pattern. Such pixels containing less or no information should be avoided in the estimation process for reliable phase retrieval. Nevertheless, globally performing SVD does not allow the elimination of the low-quality pixels owing to simultaneous execution of the estimation and projection (see Sections 2.1 and 3.1 for the explanation).

In this paper, an improved version of HEFS is proposed for avoiding untrusted pixels within the estimation processes. It can be regarded as a more flexible variant of HEFS as it allows independent execution of the parameter estimation and phase retrieval processes. A set of MATLAB codes (which can be downloaded from http://goo.gl/upcsFe) is also provided for easier understanding of the main concept of the proposed improvement by comparing it to the original MATLAB implementation in [32]. In addition to the extension of HEFS, a method for automatically distinguishing the low-quality pixels from the well-measured pixels is proposed based on the energy ratio between two subspaces. Furthermore, an experiment of applying HEFS to real data is provided to demonstrate its performance in practice, where the results indicate how significant the proposed improvement of HEFS is in such real situations.

2. Hyper ellipse fitting in subspace (HEFS)

In this section, HEFS is briefly reviewed (for detailed explanation, see [32]). Since it consists of the subspace method and ellipse fitting method, each method is explained independently before the procedure of HEFS is introduced.

2.1. Subspace method

The subspace method retrieves phase information from multiple fringe images by finding the subspace where ideal fringe images lie [26–30].

Let the m-th image of M phase-shifted fringe images be written as

I_{i,j}[m] = B_{i,j} + A_{i,j}\cos(\phi_{i,j} + \delta[m]),    (1)
where (i, j) is the two-dimensional index of an image pixel, \phi is the objective phase, B is the background illumination, A is the amplitude of the fringe, and \delta[m] is an unknown phase shift. In the subspace method, an M-dimensional column vector is constructed for each pixel of the images as

d_{i,j} = [\,I_{i,j}[1], I_{i,j}[2], \ldots, I_{i,j}[M]\,]^T,    (2)
where [z_1, z_2, \ldots, z_M] is the horizontal concatenation of \{z_m\}, and z^T is the transpose of z. Then, since Eq. (1) is equivalently written as

I_{i,j}[m] = B_{i,j} + A_{i,j}[\cos(\phi_{i,j})\cos(\delta[m]) - \sin(\phi_{i,j})\sin(\delta[m])],    (3)

the data vector d_{i,j} can be expressed in the following form:

d_{i,j} = B_{i,j}\,o + C_{i,j}\,c + S_{i,j}\,s,    (4)
where o = [1, 1, \ldots, 1]^T is the M-dimensional column vector whose elements are all one, B_{i,j}, C_{i,j} (= A_{i,j}\cos(\phi_{i,j})) and S_{i,j} (= -A_{i,j}\sin(\phi_{i,j})) are scalars independent of the phase shift, and c and s are vectors defined only by the phase shifts \{\delta[m]\} as

c = [\cos(\delta[1]), \cos(\delta[2]), \ldots, \cos(\delta[M])]^T,    (5)
s = [\sin(\delta[1]), \sin(\delta[2]), \ldots, \sin(\delta[M])]^T.    (6)
This equation indicates that any ideal data vector di,j lies in the three-dimensional subspace spanned by o, c and s.

By subtracting the mean value \bar{d}_{i,j} = \frac{1}{M}\sum_{m=1}^{M} I_{i,j}[m] from d_{i,j} as d_{i,j} - \bar{d}_{i,j}, the component along o is eliminated. Thus, a mean-subtracted data vector d_{i,j} - \bar{d}_{i,j} lies in the two-dimensional subspace spanned by c and s. After identifying the subspace of c and s, the objective phase \phi_{i,j} is obtained from the coefficients C_{i,j} and S_{i,j} as

\phi_{i,j} = \mathrm{Arg}[\,C_{i,j} - \iota S_{i,j}\,] = \mathrm{Arg}[\,A_{i,j}\{\cos(\phi_{i,j}) + \iota\sin(\phi_{i,j})\}\,],    (7)

where \iota = \sqrt{-1}, and \mathrm{Arg}[z] denotes the principal value of the complex argument of z.

For extracting the subspace, the singular value decomposition (SVD) is often applied. Firstly, a matrix is constructed by horizontally concatenating the mean-subtracted data vectors for every pixel,

D = [\,d_{1,1}, d_{2,1}, \ldots, d_{N_v,1}, d_{1,2}, d_{2,2}, \ldots, d_{N_v,N_h}\,],    (8)

whose size is M \times N, where N_v and N_h are the numbers of vertical and horizontal pixels, respectively, and N (= N_v N_h) is the total number of pixels (here, each d_{i,j} denotes a mean-subtracted vector). Note that the order of concatenation can be arbitrary. Then, SVD is applied to the matrix D as

D = U\Sigma V^T,    (9)

and a basis of the subspace is obtained as the column vectors of U. The objective phase is retrieved from the corresponding vectors of V, or \Sigma V, as in Eq. (7).
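To make the procedure concrete, the following NumPy sketch (an illustrative translation; the codes provided with the paper are in MATLAB, and the variable names here are ours) simulates phase-shifted fringes as in Eq. (1), builds the mean-subtracted data matrix of Eq. (8), and confirms that ideal data has rank two, so the SVD recovers the two-dimensional subspace:

```python
import numpy as np

# Simulate M phase-shifted fringe images, Eq. (1): I = B + A*cos(phi + delta[m]).
Nv, Nh, M = 32, 32, 4
y, x = np.mgrid[0:Nv, 0:Nh]
phi = 2 * np.pi * x / Nh                      # ground-truth phase
delta = np.array([0.3, 1.7, 3.1, 4.9])        # unknown, unequal phase shifts
I = 1.0 + 0.5 * np.cos(phi[..., None] + delta)

# Data matrix of Eq. (8): each column is one pixel's M-dimensional vector.
D = I.reshape(-1, M).T
D = D - D.mean(axis=0, keepdims=True)         # remove the component along o

# SVD: for ideal (noiseless) data, only two singular values are non-negligible,
# i.e. the mean-subtracted vectors lie in the plane spanned by c and s.
U, S, Vt = np.linalg.svd(D, full_matrices=False)
coords = Vt[:2]                               # 2-D coordinates of every pixel;
                                              # they trace an ellipse, which
                                              # motivates the fitting of Sec. 2.3
```

Note that the recovered two-dimensional coordinates are a linear (not orthogonal) transform of (C_{i,j}, S_{i,j}), which is exactly why the subsequent ellipse fitting is needed.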

2.2. Modified subspace method

By slightly modifying the subspace method, random noise can be reduced [33]. This noise reduction functionality is important for HEFS to achieve a good overall performance [32].

The noise reduction is realized by embedding the data vectors into a higher-dimensional vector space. It is easily introduced by considering an extended data vector which incorporates adjacent pixels into a single vector. For example, a 2M-dimensional extended data vector can be constructed by vertically concatenating two data vectors,

d^{(2)}_{i,j} = [\,d_{i,j}^T, d_{i+1,j}^T\,]^T,    (10)

where d_{i+1,j} is the data vector corresponding to the pixel next to that of d_{i,j}, and the superscript of d^{(2)}_{i,j} indicates how many pixels are incorporated. In general, L surrounding pixels can be combined to construct an LM-dimensional extended data vector d^{(L)}_{i,j}. Note that d^{(1)}_{i,j} = d_{i,j} corresponds to the usual data vector in Eq. (2). In the previous paper [33], the following embedding (L = 9) was suggested as a good choice in terms of performance, computational cost, and stability:

d^{(9)}_{i,j} = [\,d_{i,j}^T, d_{i+1,j}^T, d_{i-1,j}^T, d_{i,j+1}^T, d_{i,j-1}^T, d_{i+1,j+1}^T, d_{i+1,j-1}^T, d_{i-1,j+1}^T, d_{i-1,j-1}^T\,]^T.    (11)

By horizontally concatenating the above extended data vectors d^{(L)}_{i,j} as in Eq. (8), an LM × N extended data matrix is constructed. Then, applying the subspace method to the extended data matrix yields the objective phase with less random noise.
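The extended data matrix can be assembled by stacking spatially shifted copies of the fringe stack. Below is a NumPy sketch for illustration (the paper's implementation is the MATLAB function shown later in Fig. 1); the periodic boundary handling via np.roll and the function name are our choices, not the paper's:

```python
import numpy as np

def extend_data_matrix(X, shifts):
    """Build the LM x N extended data matrix of Sec. 2.2 from the fringe
    stack X (Nv x Nh x M). `shifts` lists L (row, col) offsets relative to
    the center pixel; periodic boundaries (np.roll) are assumed here."""
    blocks = []
    for di, dj in shifts:
        Xs = np.roll(np.roll(X, -di, axis=0), -dj, axis=1)
        blocks.append(Xs.reshape(-1, X.shape[2]).T)   # one M x N block
    return np.vstack(blocks)                          # stacked to LM x N

# Offsets realizing the L = 9 embedding of Eq. (11).
shifts9 = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
           (1, 1), (1, -1), (-1, 1), (-1, -1)]

X = np.arange(4 * 5 * 3, dtype=float).reshape(4, 5, 3)  # toy 4 x 5 stack, M = 3
D9 = extend_data_matrix(X, shifts9)
print(D9.shape)  # (27, 20): LM = 9 * 3 rows, N = 4 * 5 columns
```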

2.3. Ellipse fitting method

The ellipse fitting method is another method for retrieving phase information from multiple fringes by finding an ellipse which approximates the set of fringes [17–23]. In HEFS, phase is directly extracted from parameters of the fitted ellipse [32].

A general ellipse on the (x, y)-coordinate plane can be expressed as

\alpha_1 x^2 + 2\alpha_2 xy + \alpha_3 y^2 + 2\beta(\alpha_4 x + \alpha_5 y) + \beta^2\alpha_6 = 0,    (12)
where α1, . . ., α6 are real coefficients, and β is a real positive constant to scale the data so that accuracy loss caused by computation in finite precision arithmetic is prevented.

From those parameters, the angle on the ellipse \phi is obtained as [32]

\phi = \mathrm{Arg}\left[\,\sqrt{\alpha_1 + \alpha_3 + \kappa}\,(\tilde{y} + \tau\tilde{x}) + \iota\sqrt{\alpha_1 + \alpha_3 - \kappa}\,(\tilde{x} - \tau\tilde{y})\,\right],    (13)

where (\tilde{x}, \tilde{y}) is a translated version of (x, y),

\tilde{x} = x - \beta(\alpha_3\alpha_4 - \alpha_2\alpha_5)/(\alpha_2^2 - \alpha_1\alpha_3),    (14)
\tilde{y} = y - \beta(\alpha_1\alpha_5 - \alpha_2\alpha_4)/(\alpha_2^2 - \alpha_1\alpha_3),    (15)

and \kappa and \tau are constants defined by

\kappa = \sqrt{4\alpha_2^2 + (\alpha_1 - \alpha_3)^2}, \quad \tau = (\alpha_1 - \alpha_3 + \kappa)/(2\alpha_2).    (16)
This angle on the ellipse is the desired phase information if the fringe images are properly projected on a two-dimensional space in advance. Therefore, after projecting the fringes to the two-dimensional space, the phase information is retrieved by estimating the parameters α1, . . . , α6.
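As a numerical check of this mapping, the NumPy sketch below (illustrative; the square roots and minus signs follow our reading of Eqs. (13)–(16), and the test ellipse is ours) builds the conic coefficients of a known rotated, translated ellipse and verifies that the formula returns the parametric angle up to a global sign and constant offset:

```python
import numpy as np

def ellipse_phase(x, y, a1, a2, a3, a4, a5, a6, beta):
    """Angle on the ellipse, following our reading of Eqs. (13)-(16)."""
    den = a2**2 - a1 * a3
    xt = x - beta * (a3 * a4 - a2 * a5) / den
    yt = y - beta * (a1 * a5 - a2 * a4) / den
    kappa = np.sqrt(4 * a2**2 + (a1 - a3)**2)
    tau = (a1 - a3 + kappa) / (2 * a2)
    return np.angle(np.sqrt(a1 + a3 + kappa) * (yt + tau * xt)
                    + 1j * np.sqrt(a1 + a3 - kappa) * (xt - tau * yt))

# Points on a known ellipse: center (cx, cy), semi-axes (a_ax, b_ax), tilt th.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
a_ax, b_ax, th, cx, cy = 2.0, 1.0, 0.5, 0.3, -0.2
c, s = np.cos(th), np.sin(th)
x = cx + a_ax * np.cos(t) * c - b_ax * np.sin(t) * s
y = cy + a_ax * np.cos(t) * s + b_ax * np.sin(t) * c

# Equivalent conic coefficients of Eq. (12) with beta = 1.
A2 = np.array([[c, -s], [s, c]]) @ np.diag([a_ax**-2, b_ax**-2]) @ np.array([[c, s], [-s, c]])
beta = 1.0
a1, a2_, a3 = A2[0, 0], A2[0, 1], A2[1, 1]
a4, a5 = -(A2 @ [cx, cy]) / beta
a6 = (np.array([cx, cy]) @ A2 @ [cx, cy] - 1) / beta**2

phi = ellipse_phase(x, y, a1, a2_, a3, a4, a5, a6, beta)
# phi equals either +t or -t plus a constant, a harmless global ambiguity.
```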

For estimating the parameters of the ellipse, the least squares method is often applied. By simplifying Eq. (12) as

\alpha^T\chi = 0,    (17)

the least squares solution \alpha is obtained as the eigenvector for the smallest eigenvalue of the following eigenvalue problem [32]:

X\alpha = \lambda\alpha,    (18)

where \alpha = [\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6]^T is the parameter vector to be estimated, X is a 6 \times 6 symmetric matrix calculated by

X = \frac{1}{N}\sum_{n=1}^{N} \chi_n\chi_n^T,    (19)

and \chi_n = [\,x_n^2, 2x_ny_n, y_n^2, 2\beta x_n, 2\beta y_n, \beta^2\,]^T is a vector constructed from the point (x_n, y_n) which corresponds to the projected data.
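The least squares fit reduces to a 6 × 6 symmetric eigenproblem, as the following NumPy sketch illustrates on noiseless points of a unit circle, whose conic parameters are proportional to [1, 0, 1, 0, 0, −1] when β = 1 (the function name is ours):

```python
import numpy as np

def fit_ellipse_ls(x, y, beta=1.0):
    """Ordinary least squares ellipse fit: the parameter vector alpha is the
    eigenvector of the symmetric matrix X for its smallest eigenvalue."""
    chi = np.stack([x**2, 2 * x * y, y**2,
                    2 * beta * x, 2 * beta * y,
                    np.full_like(x, beta**2)])   # 6 x N matrix of chi_n vectors
    Xmat = chi @ chi.T / x.size                  # X = (1/N) sum chi_n chi_n^T
    w, V = np.linalg.eigh(Xmat)                  # eigenvalues in ascending order
    return V[:, 0]                               # smallest-eigenvalue eigenvector

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
alpha = fit_ellipse_ls(np.cos(t), np.sin(t))     # unit circle x^2 + y^2 - 1 = 0
alpha = alpha / alpha[0]                         # remove the scale ambiguity
```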

2.4. Hyper least squares method for ellipse fitting

Although the least squares method is the most popular method for estimating the parameters of an ellipse, it may not be accurate owing to higher-order statistical bias. For avoiding the bias up to the second-order term, the hyper least squares method was proposed [34–36]. It utilizes the 6 × 6 (approximated) weighting matrix,

W = \begin{bmatrix}
6\overline{x^2} & 6\overline{xy} & \overline{x^2}+\overline{y^2} & 6\beta\bar{x} & 2\beta\bar{y} & \beta^2 \\
6\overline{xy} & 4(\overline{x^2}+\overline{y^2}) & 6\overline{xy} & 4\beta\bar{y} & 4\beta\bar{x} & 0 \\
\overline{x^2}+\overline{y^2} & 6\overline{xy} & 6\overline{y^2} & 2\beta\bar{x} & 6\beta\bar{y} & \beta^2 \\
6\beta\bar{x} & 4\beta\bar{y} & 2\beta\bar{x} & 4\beta^2 & 0 & 0 \\
2\beta\bar{y} & 4\beta\bar{x} & 6\beta\bar{y} & 0 & 4\beta^2 & 0 \\
\beta^2 & 0 & \beta^2 & 0 & 0 & 0
\end{bmatrix},    (20)
to formulate the generalized eigenvalue problem,

X\alpha = \lambda W\alpha,    (21)

where \bar{x} = \frac{1}{N}\sum_{n=1}^{N} x_n and \bar{y} = \frac{1}{N}\sum_{n=1}^{N} y_n are the usual means, and the following notations were used for simplicity:

\overline{x^2} = \frac{1}{N}\sum_{n=1}^{N} x_n^2, \quad \overline{y^2} = \frac{1}{N}\sum_{n=1}^{N} y_n^2, \quad \overline{xy} = \frac{1}{N}\sum_{n=1}^{N} x_n y_n.    (22)
The hyper-accurate least squares solution is obtained as the eigenvector for the smallest eigenvalue of Eq. (21). For numerical computation, calculating the eigenvector for the eigenvalue with the largest absolute value of

W\alpha = \lambda X\alpha    (23)

may be better because of the structure of W.
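A NumPy sketch of the hyper least squares fit is given below for illustration. It assembles W from the moment notation above (as we read the matrix) and solves Wα = λXα via the ordinary eigendecomposition of X⁻¹W, which is one convenient realization when X is invertible; the function name and the synthetic noisy-circle data are ours:

```python
import numpy as np

def fit_ellipse_hyper(x, y, beta=1.0):
    """Hyper least squares ellipse fit: eigenvector of W a = lambda X a for
    the eigenvalue of largest magnitude, realized here via eig(X^-1 W)."""
    x1, y1 = x.mean(), y.mean()
    x2, y2, xy = (x**2).mean(), (y**2).mean(), (x * y).mean()
    b = beta
    W = np.array([
        [6 * x2,     6 * xy,        x2 + y2,    6 * b * x1, 2 * b * y1, b**2],
        [6 * xy,     4 * (x2 + y2), 6 * xy,     4 * b * y1, 4 * b * x1, 0],
        [x2 + y2,    6 * xy,        6 * y2,     2 * b * x1, 6 * b * y1, b**2],
        [6 * b * x1, 4 * b * y1,    2 * b * x1, 4 * b**2,   0,          0],
        [2 * b * y1, 4 * b * x1,    6 * b * y1, 0,          4 * b**2,   0],
        [b**2,       0,             b**2,       0,          0,          0]])
    chi = np.stack([x**2, 2 * x * y, y**2, 2 * b * x, 2 * b * y,
                    np.full_like(x, b**2)])
    Xmat = chi @ chi.T / x.size
    w, V = np.linalg.eig(np.linalg.solve(Xmat, W))   # X^-1 W a = lambda a
    a = np.real(V[:, np.argmax(np.abs(w))])
    return a / np.linalg.norm(a)

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = np.cos(t) + 0.01 * rng.standard_normal(t.size)   # noisy unit circle
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)
a = fit_ellipse_hyper(x, y)
# a should be close to [1, 0, 1, 0, 0, -1]/sqrt(3) up to sign.
```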

2.5. Hyper ellipse fitting in subspace (HEFS) method

Combining the above two methods, HEFS is obtained as an accurate ellipse fitting method without concern about the ambiguity in constructing the two-dimensional space for fitting the ellipse. The procedure of HEFS consists of the following three steps [32]:

  1. Apply the (modified) subspace method to project the data onto a two-dimensional subspace,
  2. Estimate the parameters of the ellipse from the projected data by the hyper least squares method,
  3. Calculate the retrieved phase from the parameters of the fitted ellipse by Eq. (13).

For applying the ellipse fitting method, HEFS utilizes the subspace method to project the data vectors onto the two-dimensional subspace where the ideal phase-shifted fringe images lie. Random noise can be suppressed within the projection process by the modified subspace method so that the ellipse fitting method works more robustly. Furthermore, the hyper-accurate version of the least squares method is adopted to retrieve the phase. In [32], numerical experiments showed that the HEFS method can retrieve the phase accurately with little computational effort.

3. Improved HEFS for dealing with untrusted pixels

Although direct use of SVD is the easiest way of implementing the subspace method, it has to take all pixels into account simultaneously. This simultaneous consideration of all pixels may cause an inaccurate result. In many cases, some pixels are noisier than the others or even do not contain any fringe pattern. Such pixels containing less or no information about the objective phase are a source of estimation error. Therefore, they should be eliminated from the estimation process to obtain a better result. Nevertheless, the original HEFS does not allow such elimination because applying SVD directly to the data matrix estimates the subspace and projects the data onto it simultaneously. To perform the parameter estimation steps using only selected well-measured pixels, the parameter estimation process and the projection process must be executed separately. That is, the parameter estimation process is performed with only the selected pixels, while the projection process is performed for all pixels.

In this section, we propose a more flexible procedure of HEFS which allows independent execution of the parameter estimation and phase retrieval processes. MATLAB codes are provided here for easier understanding of the main concept of the proposed improvement by comparing with the original MATLAB implementation in [32].

3.1. Estimating two-dimensional subspace through eigenvalue decomposition

If the subspace method is executed by using selected pixels only, the other pixels must be projected to the same estimated subspace afterward. To do so, one has to obtain the estimated result of the subspace method as a projection operator P which projects the data matrix D in Eq. (8) (or its extended version introduced in Section 2.2) onto the two-dimensional subspace. One way of obtaining the projection operator P is to apply SVD to a data matrix consisting of selected pixels only. Here, as a computationally efficient alternative, an estimation method using the eigenvalue decomposition is considered.

The projection operator P is represented as a 2 × LM matrix obtained by

P = \tilde{\Sigma}^{-1}\tilde{U}^T,    (24)

where \tilde{U} is the LM × 2 matrix consisting of the two left-singular vectors corresponding to the largest and second largest singular values, and \tilde{\Sigma}^{-1} is the inverse of the 2 × 2 diagonal matrix \tilde{\Sigma} consisting of the two largest singular values. Multiplying this matrix P with D from the left gives

PD = \tilde{\Sigma}^{-1}\tilde{U}^T U\Sigma V^T = \tilde{V}^T,    (25)

where \tilde{V} is the N × 2 matrix whose elements are the coordinates of each data vector on the projected two-dimensional subspace. That is, any data vector can be projected onto the two-dimensional subspace by multiplying it by the corresponding P from the left. However, since this projection operator can be calculated from U and \Sigma only, direct execution of SVD for obtaining P contains unnecessary computation of V.

A computationally efficient way of calculating U and \Sigma is realized by using the eigenvalue decomposition. By considering the LM × LM square matrix DD^T, its SVD is represented as

DD^T = U\Sigma V^T V\Sigma U^T = U\Sigma^2 U^T.    (26)

This equation indicates that U and \Sigma can be obtained by the eigenvalue decomposition of DD^T. Then, Eq. (24) gives the corresponding projection operator.

Note that the projection operator P can be obtained from as few as two data vectors because the calculation of P only requires the two leading eigenvalues and eigenvectors (of course, more data vectors should be utilized to obtain a better estimation). Therefore, a subset of D, containing at least two pixels, can be selected arbitrarily for the estimation of P.
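The following NumPy sketch illustrates this (the paper's implementation is the MATLAB code of Fig. 1): P is estimated from a subset of pixels via the eigendecomposition of DDᵀ and then used to project all pixels; the choice of every other pixel as the "trusted" subset is only for demonstration:

```python
import numpy as np

# Simulate ideal fringes and build the mean-subtracted data matrix D.
Nv, Nh, M = 16, 16, 4
yy, xx = np.mgrid[0:Nv, 0:Nh]
phi = 2 * np.pi * (xx + yy) / Nh
delta = np.array([0.3, 1.7, 3.1, 4.9])
I = 1.0 + 0.5 * np.cos(phi[..., None] + delta)
D = I.reshape(-1, M).T
D = D - D.mean(axis=0, keepdims=True)

# Estimate P from selected columns only (here: every other pixel), using the
# eigendecomposition of D_sub D_sub^T instead of a full SVD.
Dsub = D[:, ::2]
w, U = np.linalg.eigh(Dsub @ Dsub.T)     # eigenvalues in ascending order
Ut = U[:, -2:]                           # two leading eigenvectors (U-tilde)
P = np.diag(1 / np.sqrt(w[-2:])) @ Ut.T  # P = Sigma^-1 U^T, Eq. (24)

# P now projects *all* pixels onto the two-dimensional subspace, Eq. (25).
coords = P @ D                           # 2 x N coordinates for every pixel
```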

3.2. Ellipse fitting method based on selected pixels

The ellipse fitting method can also be performed by using selected well-measured pixels only. After the estimation of projection operator P, projected data is obtained by Eq. (25) as the input of the ellipse fitting method. Since P can independently project each data vector onto the two-dimensional subspace, any subset of D can be utilized for the estimation.

As in Eq. (13), the objective phase is calculated from the parameters α1, . . ., α6 and β. Therefore, once these parameters are estimated, any data on the two-dimensional subspace can be converted into the corresponding phase. The estimation of the parameters is based on the eigenvalue decomposition of the symmetric matrix in Eq. (19). This matrix can be constructed from arbitrarily selected data, and thus untrusted pixels can also be avoided in the ellipse fitting method.

Note that the selection of the pixels can differ from that of the subspace method in the previous subsection. That is, further selection of the pixels can be performed according to the position on the projected subspace. In such cases, it is important to use the hyper-accurate least squares method instead of the ordinary least squares method because the ordinary least squares method may perform badly when the number of pixels decreases [34–36].

3.3. Summary of the improved HEFS

In summary, HEFS can be improved by the following procedure:

  1. Select some good pixels for estimating the parameters in HEFS,
  2. Estimate the projection operator P from the selected pixels by the eigenvalue decomposition of DD^T and Eq. (24),
  3. Estimate the parameters of the ellipse from the projected data of selected pixels,
  4. Retrieve the objective phase for every pixel by using the estimated projection operator and parameters of the ellipse.

These processes can be implemented as the separate functions shown in Fig. 1. The estimation and phase retrieval processes are divided into two functions so that the estimation process can be executed with a selected subset of pixels. In addition to those two functions, a function for constructing the extended version of the data matrix is included in order to easily activate the denoising functionality of HEFS.


Fig. 1 MATLAB implementation of the improved HEFS. Only a subset of the data matrix D, named Dsub, is used for the parameter estimation. In addition to HEFS, a code for constructing the extended data matrix is included at the bottom right, where the input variable X is an Nv × Nh × M three-dimensional array obtained by concatenating the M fringe images, whose size is Nv × Nh, along the third dimension, and the shifting position is an L × 2 matrix consisting of the indices relative to the center pixel (for example, the shifting position corresponding to Eq. (11) is obtained by vertically concatenating [0, 0], [1, 0], [−1, 0], [0, 1], [0, −1], [1, 1], [1, −1], [−1, 1], and [−1, −1]).


4. Automatic selection of better pixels

As shown in the previous section, the improved HEFS makes it possible to utilize only selected pixels in the parameter estimation steps. The selection of pixels can be performed manually if the untrusted regions are obvious. However, such manual selection might be erroneous or impossible in some situations. Therefore, a criterion for selecting "good" pixels is necessary. In this section, we propose a method of automatically selecting better pixels in terms of the energy ratio between two subspaces.

4.1. Estimating signal-to-noise ratio (SNR) as criterion of pixel selection

In the previous section, the projection operator P was defined as an operator projecting an LM-dimensional data vector onto the two-dimensional subspace. This subspace was introduced because every ideal data vector lies in it. In this sense, this two-dimensional subspace can be regarded as the subspace of the desired signal, because ideal (noiseless) data lies in it. On the other hand, the orthogonal complement of that subspace can be regarded as the subspace consisting of noise (non-ideal components). Therefore, noisy pixels can be distinguished from the better ones by considering the energy on the orthogonal complement.

As in Eq. (24), the projection operator is calculated from the LM × 2 matrix \tilde{U} because this matrix consists of an orthonormal basis of the two-dimensional subspace. Since U is an orthogonal matrix, its remaining LM − 2 columns form a basis of the orthogonal complement of that subspace. Hence, the projection operator onto the orthogonal complement, P^⊥, can be obtained in the same manner,

P^⊥ = \breve{\Sigma}^{-1}\breve{U}^T,    (27)

where \breve{U} is the LM × (LM − 2) matrix consisting of the LM − 2 left-singular vectors obtained by excluding \tilde{U} from U, and \breve{\Sigma}^{-1} is the inverse of the (LM − 2) × (LM − 2) diagonal matrix \breve{\Sigma} consisting of the singular values excluding the two largest ones.

For any mean-subtracted data vector d^{(L)}_{i,j}, Pd^{(L)}_{i,j} provides the two-dimensional representation of the data vector, while P^⊥ d^{(L)}_{i,j} is its (LM − 2)-dimensional representation. If the data is ideal (noiseless), then

P^⊥ d^{(L)}_{i,j} \approx 0    (28)

should hold (we express this relation not by equality but approximation because of the embedding of the extended data vector; see [33] for discussion). That is, all the energy of d^{(L)}_{i,j} is concentrated on Pd^{(L)}_{i,j}. Based on this observation, we define the estimated signal-to-noise ratio (SNR) of a data vector as the ratio of energy between the subspaces:

\mathrm{SNR}_{i,j} = \frac{\|Pd^{(L)}_{i,j}\|_2^2}{\|P^⊥ d^{(L)}_{i,j}\|_2^2},    (29)

where ‖ · ‖_2 is the Euclidean norm. When the data vector contains less noise, the corresponding SNR_{i,j} becomes higher. Therefore, the noisy untrusted pixels can be eliminated by excluding the (i, j)-th pixels whose SNR_{i,j} are low.
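A NumPy sketch of this SNR estimate is shown below for illustration (the paper's version is the MATLAB code of Fig. 2). To keep the sketch short, the energies are measured with the orthonormal bases Ũ and Ŭ directly, i.e. without the inverse-singular-value scaling in P and P⊥; this simplification and the synthetic data are our assumptions:

```python
import numpy as np

def estimate_snr(D):
    """Per-pixel energy ratio between the 2-D signal subspace and its
    orthogonal complement (an unweighted variant of Eq. (29))."""
    w, U = np.linalg.eigh(D @ D.T)        # ascending eigenvalues
    Ut, Ub = U[:, -2:], U[:, :-2]         # signal basis / complement basis
    sig = np.sum((Ut.T @ D)**2, axis=0)
    noi = np.sum((Ub.T @ D)**2, axis=0)
    return sig / np.maximum(noi, 1e-30)   # guard against division by zero

# Synthetic data: 100 clean fringe pixels followed by 100 noise-only pixels.
rng = np.random.default_rng(2)
M, N = 8, 100
delta = np.linspace(0, 2 * np.pi, M, endpoint=False)
phases = rng.uniform(0, 2 * np.pi, N)
clean = np.cos(phases[None, :] + delta[:, None])
noisy = 0.5 * rng.standard_normal((M, N))
D = np.hstack([clean, noisy])
D = D - D.mean(axis=0, keepdims=True)
snr = estimate_snr(D)
# Clean pixels should score far higher than the noise-only pixels.
```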

Note that the term noise in this definition is not only related to additive noise (e.g. Gaussian noise) but also related to any non-ideal property of the fringe images (such as nonuniform spatial distribution of phase-shifting error). Therefore, the proposed SNR can be low for regions where fringe pattern is clearly visible but some non-ideal phenomena occur.

4.2. Automatic pixel selection based on estimated SNR

As in the previous subsection, one can exclude the noisy untrusted pixels by discarding pixels with low SNR. One must then determine the discarding boundary in terms of SNR. In this paper, a threshold ϒ ∈ [0, 1) is considered relative to the maximum of the SNR. That is, the (i, j)-th pixels whose SNR satisfies the following inequality are selected as the pixels for estimating the parameters:

\mathrm{SNR}_{i,j} > \Upsilon \max_{i,j}\{\mathrm{SNR}_{i,j}\}.    (30)
Note that ϒ = 0 will select all pixels, while ϒ = 1 will select no pixel. The proposed pixel selection is summarized in the following procedure whose MATLAB code is provided in Fig. 2:
  1. Estimate the two projection operators P and P^⊥ by diagonalizing DD^T and utilizing Eqs. (24) and (27),
  2. Calculate the SNR by using the estimated projection operators as in Eq. (29),
  3. Select the (i, j)-th pixels whose SNR satisfies Eq. (30) for a chosen threshold ϒ ∈ [0, 1).


Fig. 2 MATLAB implementation of the proposed SNR-based pixel selection method. Input variable D is the data matrix, and threshold is a real number between 0 and 1.


Note that the choice of the threshold ϒ automatically determines the number of selected pixels. On the other hand, one can instead fix the number of selected pixels if a particular number is desired. In that case, the automatic selection is achieved by sorting the SNR and picking pixels in descending order of SNR until the desired number of pixels is selected.
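Both selection rules are short one-liners; the sketch below (NumPy, for illustration, with function names of our choosing) implements the threshold rule of Eq. (30) and the fixed-count alternative:

```python
import numpy as np

def select_by_threshold(snr, upsilon):
    """Boolean mask of pixels satisfying Eq. (30): SNR > upsilon * max(SNR)."""
    return snr > upsilon * snr.max()

def select_top_k(snr, k):
    """Alternative: keep the indices of the k largest SNR values."""
    return np.argsort(snr)[::-1][:k]

snr = np.array([0.2, 5.0, 1.0, 0.01, 3.0])
mask = select_by_threshold(snr, 0.1)   # threshold is 0.1 * 5.0 = 0.5
top2 = select_top_k(snr, 2)
print(mask)   # [False  True  True False  True]
print(top2)   # [1 4]
```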

5. Experiment

For demonstrating the effectiveness and significance of the proposed improvement, HEFS is applied to a measured sound field obtained by the parallel phase-shifting interferometry (PPSI). In this section, PPSI and the experimental setup are briefly explained first, and then the results are shown.

5.1. Parallel phase-shifting interferometry (PPSI)

As in Eq. (1), phase-shifted fringe images are obtained by artificially shifting the phase. The most common method for realizing the phase-shifting step δ is the temporal method, which changes the shifting step over time to reduce the complexity of the interferometer. However, a temporal method requires multiple measurements over time for a single phase map, which restricts its use to time-invariant phenomena.

On the other hand, spatial methods enable us to obtain multiple phase-shifted fringe images simultaneously. PPSI is one such spatial method [37–39]. By using a polarized interferometer, each fringe image can be obtained as the polar angle of the interfered light. Then, the phase-shifting step can be implemented by a linear polarizer. Therefore, simultaneous observation of multiple fringes can be realized by a polarization camera, which is a camera with pixel-wise linear polarizers attached on the image sensor. Its high-speed variant allows us to acquire the fringes at a high frame rate [40], and thus high-speed time-varying phenomena can be observed by combining this equipment [41].

5.2. Experimental setup

For illustrating the performance of the improved HEFS, sound in air was chosen as the test data in this paper. Since sound is an important phenomenon in our daily life, optical measurement of sound has attracted several researchers because of its potential. Recent development of PPSI has enabled measurement of such time-varying, high-speed phenomena [42–44]. Sound in air is not easy to measure optically because of not only its high-speed nature but also the smallness of its effect on light (roughly speaking, the order of the typical phase modulation caused by sound in air is 0.1 to 0.001 rad). Optical sound measurement requires a highly accurate system, and therefore it is a good subject for testing a phase retrieval algorithm in terms of accuracy.

The experimental setup is illustrated in Fig. 3. A Fizeau-type polarized interferometer combined with a polarized high-speed camera was used for the four-step PPSI. A small loudspeaker shown in Fig. 4 was placed between the optical flats, and a stationary sound field of 8,000 Hz was generated from it [45]. As it consists of two loudspeaker units (tweeter and woofer), the sound was emitted from both the top and bottom parts of the observed area. PPSI measured the modulated phase of the object light, which results from the density variation of air caused by the sound [42–44].


Fig. 3 Illustration of the experimental setup. A polarized interferometer is combined with a polarized high-speed camera for realizing the high-speed PPSI. A loudspeaker was placed between the optical flats which reflect the object and reference lights.



Fig. 4 The loudspeaker used in the experiment (110 × 180 × 130 mm). The red dotted circle roughly indicates the observable area apparent in the measured results.


The frame rate of the high-speed polarized camera was set to 7,000 fps so that full 512 × 512 pixel imaging was performed. Since a high-speed camera has a trade-off between the frame rate and the number of active pixels due to the data-transfer rate, one has to choose the frame rate according to the desired measurement area. In this paper, we chose the maximum frame rate which gives the full resolution of the sensor. The diameter of the object light beam, which appears in the following figures as circles, is about 90 mm.

An example of the measured fringe images is shown in the left half of Fig. 5. By high-speed four-step PPSI, four fringe images were simultaneously observed for a single time instance (the shadow on the left part of the measured area is the loudspeaker; see Fig. 4). Then, phase retrieval algorithms were applied to each time instance so that time-sequential phase maps were obtained. After phase retrieval, a time-directional high-pass filter was applied to eliminate time-invariant or slowly varying components which are unrelated to the sound [42, 43]. Since the frame rate was 7,000 fps due to the measurement size limitation, the measured sound was aliased when a sine wave of 8,000 Hz was inputted into the loudspeaker. As a result, the observed data looks like an 8,000 Hz sound field sampled at 56,000 fps (one period of the sound was captured by seven frames in that sense). Therefore, the cutoff of the time-directional high-pass filter was set to 4,000 Hz. Two (out of several hundred) time instances of the measured data in this experiment, together with executable MATLAB codes, are available at http://goo.gl/upcsFe.
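The aliasing bookkeeping in the paragraph above can be checked with a few lines of arithmetic (illustrative only):

```python
# An 8,000 Hz tone sampled at 7,000 fps folds down to |8000 - 7000| = 1000 Hz,
# so one apparent period spans 7000 / 1000 = 7 frames; reading those frames
# as one true period of the tone corresponds to 7 * 8000 = 56,000 fps.
f_signal, f_frame = 8000, 7000
f_alias = abs(f_signal - f_frame)
frames_per_period = f_frame // f_alias
f_equivalent = frames_per_period * f_signal
print(f_alias, frames_per_period, f_equivalent)  # 1000 7 56000
```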


Fig. 5 Illustration of the recovery procedure of the measured sound field. Four fringe images were observed simultaneously by PPSI. Phase retrieval algorithms were applied to each time instance for obtaining time-sequential phase maps, and the time-directional filtering was utilized to eliminate time-invariant components unrelated to sound as in [42,43].


5.3. Results

Here, the improved HEFS is compared with the ordinary HEFS and the traditional M-step phase-shifting algorithm [46]:

\phi = \mathrm{Arg}\left[\sum_{m=0}^{M-1} I[m]\cos(2\pi m/M) - \iota\sum_{m=0}^{M-1} I[m]\sin(2\pi m/M)\right] = \mathrm{Arg}\left[\sum_{m=0}^{M-1} I[m]\,e^{-2\pi\iota m/M}\right].    (31)
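For reference, the M-step formula above is essentially a one-liner; the following NumPy sketch (illustrative; shift steps assumed to be 2πm/M) checks it on synthetic four-step fringes:

```python
import numpy as np

def m_step_phase(I):
    """Traditional M-step phase-shifting algorithm for a stack I of shape
    (M, ...), assuming phase shifts delta[m] = 2*pi*m/M."""
    M = I.shape[0]
    m = np.arange(M).reshape((M,) + (1,) * (I.ndim - 1))
    return np.angle(np.sum(I * np.exp(-2j * np.pi * m / M), axis=0))

phi = np.linspace(-3, 3, 101)                     # ground-truth phase
delta = 2 * np.pi * np.arange(4) / 4              # ideal four-step shifts
I = 1.0 + 0.5 * np.cos(phi[None, :] + delta[:, None])
phi_hat = m_step_phase(I)                         # should recover phi exactly
```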
The result obtained using this four-step algorithm is illustrated in Fig. 6, where eight consecutive time instances of the results are shown (time evolves from left to right, from top left to bottom right). From the figure, it can be seen that both the tweeter and the woofer (see Fig. 4) emitted sound, which resulted in an interference pattern of the sound pressure.


Fig. 6 Visualized sound field obtained by PPSI and the traditional four-step algorithm. Eight consecutive time instances of the results are shown, where time evolves from left to right (from top left to bottom right).


There are two types of noise which degrade the clarity of the visualized sound field: random noise and patterned noise. The patterned noise is mainly due to error in the phase-shifting steps and nonlinearity of the imaging devices. Both the random and patterned noise are conspicuous because the magnitude of the phase modulation caused by sound in air is small (note that the peak value of the observed sound was about ±0.1 rad). Since the four-step algorithm does not remove any noise, the noisy fringes resulted in correspondingly noisy phase maps.

The result obtained by using the ordinary HEFS [32] is shown in Fig. 7. Since the original version of HEFS utilizes every pixel of the measured images for the estimations, the result contains huge errors that totally distort the phase maps. This is because there were many untrusted pixels in this data which do not contain any information about the observed phenomenon. Although the random noise was reduced by the denoising functionality of HEFS [32, 33] [the embedding (L = 9) shown in Eq. (11) was used], the result is meaningless as interpretation of the data becomes quite difficult.


Fig. 7 Visualized sound field obtained by PPSI and the ordinary HEFS [32]. The untrusted pixels due to the shadow and unobservable area caused huge error on the phase maps.


The selected pixels used for the improved HEFS are shown in Fig. 8. The red rectangle in the left figure corresponds to the manually chosen region which seems to be well measured. Although such manual selection is easy and simple in this case, the choice of the pixels often relies on intuition, whose correctness cannot be guaranteed. In contrast, the proposed SNR-based method automatically selects the pixels based on the estimated SNR, which is shown at the center of Fig. 8. Note that the brightness of the estimated SNR in this figure is normalized so that the value of the color bar corresponds to ϒ in Eq. (30). As shown in the right side of Fig. 8, the proposed method correctly eliminated the region which is not observable for this setting (the black and dark gray region of the left figure). It is interesting to note that the proposed selection method also eliminated several spots corresponding to some circular patterns in the left figure, where manually discarding such patterns may not be easy. It also eliminated some horizontal belt-shaped patterns, which appeared along the fringe pattern shown in Fig. 5 and should be related to the patterned noise in Fig. 6.


Fig. 8 Selected pixels used for the improved HEFS. The left figure shows the sum of the four fringe images, Σm I[m], which approximates the background illumination, and the red rectangle is the manually selected region. The center figure shows the estimated SNR normalized by its maximum. The right figure shows the pixels chosen by the proposed SNR-based selection method with ϒ = 0.1 [see Eq. (30)], where the yellow pixels are the selected ones.


The results of the proposed improved version of HEFS are shown in Figs. 9 and 10, where the pixels selected for the parameter estimations are those in Fig. 8. Comparing these two cases confirms that the automatically selected pixels obtained by the proposed SNR-based method are a reasonable choice, because they yielded a result very similar to that of the manual selection. Since the shadow and the unobservable area were excluded from the selected pixels in both cases, HEFS worked properly as expected (in contrast to Fig. 7). As a result, the improved HEFS with the embedding in Eq. (11) obtained clear images of the sound field with less noise compared to the result of the traditional four-step algorithm shown in Fig. 6.

To illustrate the differences among Figs. 6, 9 and 10 clearly, a vertical line at the center of each image is extracted and depicted in Fig. 11. The figure confirms the appropriateness of the proposed automatic pixel selection method, because the phase values of HEFS obtained with the manual and automatic selection methods almost coincide. The effect of the denoising functionality of HEFS is also apparent, as the random noise was reduced by the embedding. The patterned noise caused by the imperfect implementation of the shifting steps (realized by the quarter-wave plate and pixel-wise linear polarizers) was also reduced because HEFS can compensate for errors in the shifting steps. However, some patterned noise caused by the nonlinearity of the imaging devices [47], which is not taken into account in HEFS, remained in the results in Figs. 9 and 10. Reducing such error within the framework of HEFS is future work.


Fig. 9 Visualized sound field obtained by PPSI and the improved HEFS. The selected pixels used for the parameter estimations in Section 3 were chosen manually (Fig. 8 left).



Fig. 10 Visualized sound field obtained by PPSI and the improved HEFS. The selected pixels used for the parameter estimations in Section 3 were chosen automatically by the SNR-based pixel selection method proposed in Section 4 with ϒ = 0.1 (Fig. 8 right).



Fig. 11 One-dimensional plots of the results obtained by each method. Each line was extracted vertically from the corresponding figure in Figs. 6, 9 and 10 at the horizontal center.


6. Conclusions

In this paper, an extension of HEFS was proposed to improve its performance. The improvement is essential for practical use because measured data often contain untrusted pixels in real measurements. With the proposed improvement, HEFS gained more flexibility, as can be seen from the set of MATLAB codes in Fig. 1.

In addition to the improvement, a method for automatically eliminating the untrusted pixels was proposed. An SNR estimated for each pixel was introduced as the energy ratio of two subspaces so that noisy pixels can be distinguished from cleaner ones based on its value. Since the SNR is a quantitative measure of noisiness, the proposed method has no ambiguity in selecting the “better” pixels, in contrast to a manual selection based on intuition.

An experiment measuring the sound field in front of a loudspeaker was performed to confirm the significance of the proposed improvement in a practical situation. It showed that the original version of HEFS totally failed to retrieve the objective phase because of the pixels that contain no information about the phase. In contrast, the improved HEFS was able to simultaneously retrieve the phase and denoise it. The automatic pixel selection method was also confirmed to be reasonable by comparing its result with that of the manual selection.

As mentioned at the end of the previous section, reduction of the remaining noise, which may be possible by combining other techniques [47, 48], should be considered in future work. Moreover, combining HEFS with the methods in [49–51] to realize methods specialized for optical sound measurement would be interesting to investigate.

Funding

Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Research Activity Start-up (17H07191), and Grant-in-Aid for JSPS Research Fellow (16J06772).

References and links

1. G. Lai and T. Yatagai, “Generalized phase-shifting interferometry,” J. Opt. Soc. Am. A 8, 822–827 (1991). [CrossRef]  

2. K. Okada, A. Sato, and J. Tsujiuchi, “Simultaneous calculation of phase distribution and scanning phase shift in phase shifting interferometry,” Opt. Commun. 84, 118–124 (1991). [CrossRef]  

3. I.-B. Kong and S.-W. Kim, “General algorithm of phase-shifting interferometry by iterative least-squares fitting,” Opt. Eng. 34, 183–188 (1995). [CrossRef]  

4. Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. 29, 1671–1673 (2004). [CrossRef]   [PubMed]  

5. A. Patil and P. Rastogi, “Approaches in generalized phase shifting interferometry,” Opt. Lasers Eng. 43, 475–490 (2005). [CrossRef]  

6. H. Guo, Y. Yu, and M. Chen, “Blind phase shift estimation in phase-shifting interferometry,” J. Opt. Soc. Am. A 24, 25–33 (2007). [CrossRef]  

7. X. F. Xu, L. Z. Cai, Y. R. Wang, X. F. Meng, W. J. Sun, H. Zhang, X. C. Cheng, G. Y. Dong, and X. X. Shen, “Simple direct extraction of unknown phase shift and wavefront reconstruction in generalized phase-shifting interferometry: algorithm and experiments,” Opt. Lett. 33, 776–778 (2008). [CrossRef]   [PubMed]  

8. P. Gao, B. Yao, N. Lindlein, K. Mantel, I. Harder, and E. Geist, “Phase-shift extraction for generalized phase-shifting interferometry,” Opt. Lett. 34, 3553–3555 (2009). [CrossRef]   [PubMed]  

9. J. Deng, H. Wang, D. Zhang, L. Zhong, J. Fan, and X. Lu, “Phase shift extraction algorithm based on Euclidean matrix norm,” Opt. Lett. 38, 1506–1508 (2013). [CrossRef]   [PubMed]  

10. H. Guo and Z. Zhang, “Phase shift estimation from variances of fringe pattern differences,” Appl. Opt. 52, 6572–6578 (2013). [CrossRef]   [PubMed]  

11. R. Juarez-Salazar, C. Robledo-Sánchez, C. Meneses-Fabian, F. Guerrero-Sánchez, and L. A. Aguilar, “Generalized phase-shifting interferometry by parameter estimation with the least squares method,” Opt. Lasers Eng. 51, 626–632 (2013). [CrossRef]  

12. J. Deng, L. Zhong, H. Wang, H. Wang, W. Zhang, F. Zhang, S. Ma, and X. Lu, “1-norm character of phase shifting interferograms and its application in phase shift extraction,” Opt. Commun. 316, 156–160 (2014). [CrossRef]  

13. Q. Liu, Y. Wang, J. He, and F. Ji, “Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations,” J. Opt. 17, 025704 (2015). [CrossRef]  

14. J. Deng, D. Wu, K. Wang, and J. Vargas, “Precise phase retrieval under harsh conditions by constructing new connected interferograms,” Sci. Rep. 6, 24416 (2016). [CrossRef]   [PubMed]  

15. Y. Xu, Y. Wang, Y. Ji, H. Han, and W. Jin, “Three-frame generalized phase-shifting interferometry by a Euclidean matrix norm algorithm,” Opt. Lasers Eng. 84, 89–95 (2016). [CrossRef]  

16. X. Xu, X. Lu, J. Tian, J. Shou, D. Zheng, and L. Zhong, “Random phase-shifting interferometry based on independent component analysis,” Opt. Commun. 370, 75–80 (2016). [CrossRef]  

17. K. Kinnstaetter, A. W. Lohmann, J. Schwider, and N. Streibl, “Accuracy of phase shifting interferometry,” Appl. Opt. 27, 5082–5089 (1988). [CrossRef]   [PubMed]  

18. C. T. Farrell and M. A. Player, “Phase step measurement and variable step algorithms in phase-shifting interferometry,” Meas. Sci. Technol. 3, 953–958 (1992). [CrossRef]  

19. C. T. Farrell and M. A. Player, “Phase-step insensitive algorithms for phase-shifting interferometry,” Meas. Sci. Technol. 5, 648 (1994). [CrossRef]  

20. A. Albertazzi Jr., A. V. Fantin, D. P. Willemann, and M. E. Benedet, “Phase maps retrieval from sequences of phase shifted images with unknown phase steps using generalized N-dimensional Lissajous figures — principles and applications,” Int. J. Optomechatronics 8, 340–356 (2014). [CrossRef]  

21. C. Meneses-Fabian and F. A. Lara-Cortes, “Phase retrieval by Euclidean distance in self-calibrating generalized phase-shifting interferometry of three steps,” Opt. Express 23, 13589–13604 (2015). [CrossRef]   [PubMed]  

22. F. Liu, Y. Wu, and F. Wu, “Correction of phase extraction error in phase-shifting interferometry based on Lissajous figure and ellipse fitting technology,” Opt. Express 23, 10794–10807 (2015). [CrossRef]   [PubMed]  

23. F. Liu, Y. Wu, F. Wu, and W. Song, “Generalized phase shifting interferometry based on Lissajous calibration technology,” Opt. Lasers Eng. 83, 106–115 (2016). [CrossRef]  

24. F. Liu, J. Wang, Y. Wu, F. Wu, M. Trusiak, K. Patorski, Y. Wan, Q. Chen, and X. Hou, “Simultaneous extraction of phase and phase shift from two interferograms using Lissajous figure and ellipse fitting technology with Hilbert–Huang prefiltering,” J. Opt. 18, 105604 (2016). [CrossRef]  

25. A. V. Fantin, D. P. Willemann, M. E. Benedet, and A. G. Albertazzi, “Robust method to improve the quality of shearographic phase maps obtained in harsh environments,” Appl. Opt. 55, 1318–1323 (2016). [CrossRef]   [PubMed]  

26. J. Vargas, J. A. Quiroga, and T. Belenguer, “Phase-shifting interferometry based on principal component analysis,” Opt. Lett. 36, 1326–1328 (2011). [CrossRef]   [PubMed]  

27. J. Vargas, J. A. Quiroga, and T. Belenguer, “Analysis of the principal component algorithm in phase-shifting interferometry,” Opt. Lett. 36, 2215–2217 (2011). [CrossRef]   [PubMed]  

28. J. Xu, W. Jin, L. Chai, and Q. Xu, “Phase extraction from randomly phase-shifted interferograms by combining principal component analysis and least squares method,” Opt. Express 19, 20483–20492 (2011). [CrossRef]   [PubMed]  

29. J. Vargas, C. Sorzano, J. Estrada, and J. Carazo, “Generalization of the principal component analysis algorithm for interferometry,” Opt. Commun. 286, 130–134 (2013). [CrossRef]  

30. J. Vargas and C. Sorzano, “Quadrature component analysis for interferometry,” Opt. Lasers Eng. 51, 637–641 (2013). [CrossRef]  

31. J. Deng, K. Wang, D. Wu, X. Lv, C. Li, J. Hao, J. Qin, and W. Chen, “Advanced principal component analysis method for phase reconstruction,” Opt. Express 23, 12222–12231 (2015). [CrossRef]   [PubMed]  

32. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Simple, flexible, and accurate phase retrieval method for generalized phase-shifting interferometry,” J. Opt. Soc. Am. A 34, 87–96 (2017). [CrossRef]  

33. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Improving principal component analysis based phase extraction method for phase-shifting interferometry by integrating spatial information,” Opt. Express 24, 22881–22891 (2016). [CrossRef]   [PubMed]  

34. K. Kanatani and P. Rangarajan, “Hyper least squares fitting of circles and ellipses,” Comput. Stat. Data Anal. 55, 2197–2208 (2011). [CrossRef]  

35. K. Kanatani, P. Rangarajan, Y. Sugaya, and H. Niitsuma, “HyperLS for parameter estimation in geometric fitting,” IPSJ Trans. Comput. Vis. Appl. 3, 80–94 (2011). [CrossRef]  

36. K. Kanatani, Y. Sugaya, and Y. Kanazawa, Ellipse Fitting for Computer Vision: Implementation and Applications, Synthesis Lectures on Computer Vision (Morgan & Claypool Publishers, 2016).

37. Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. 85, 1069–1071 (2004). [CrossRef]  

38. J. E. Millerd, N. J. Brock, J. B. Hayes, M. B. North-Morris, M. Novak, and J. C. Wyant, “Pixelated phase-mask dynamic interferometer,” Proc. SPIE 5531, 304–314 (2004). [CrossRef]  

39. M. Novak, J. Millerd, N. Brock, M. North-Morris, J. Hayes, and J. Wyant, “Analysis of a micropolarizer array-based simultaneous phase-shifting interferometer,” Appl. Opt. 44, 6861–6868 (2005). [CrossRef]   [PubMed]  

40. T. Onuma and Y. Otani, “A development of two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz,” Opt. Commun. 315, 69–73 (2014). [CrossRef]  

41. T. Yatagai, B. J. Jackin, A. Ono, K. Kiyohara, M. Noguchi, M. Yoshii, M. Kiyohara, H. Niwa, K. Ikuo, and T. Onuma, “Instantaneous phase-shifting fizeau interferometry with high-speed pixelated phase-mask camera,” Proc. SPIE 9660, 966018 (2015). [CrossRef]  

42. K. Ishikawa, K. Yatabe, N. Chitanont, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “High-speed imaging of sound using parallel phase-shifting interferometry,” Opt. Express 24, 12922–12932 (2016). [CrossRef]   [PubMed]  

43. K. Ishikawa, K. Yatabe, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “Interferometric imaging of acoustical phenomena using high-speed polarization camera and 4-step parallel phase-shifting technique,” Proc. SPIE 10328, 103280I (2017).

44. Y. Oikawa, K. Yatabe, K. Ishikawa, and Y. Ikeda, “Optical sound field measurement and imaging using laser and high-speed camera,” Proc. 45th Int. Congr. Noise Control Eng. (INTER-NOISE 2016) pp. 258–266 (2016).

45. K. Ishikawa, K. Yatabe, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “Optical sensing of sound fields: Non-contact, quantitative, and single-shot imaging of sound using high-speed polarization camera,” Proc. Meet. Acoust. (POMA) 29, 030005 (2016). [CrossRef]  

46. D. Malacara, M. Servín, and Z. Malacara, Interferogram Analysis For Optical Testing (CRC, 2005), 2nd ed. [CrossRef]  

47. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Compensation of fringe distortion for phase-shifting three-dimensional shape measurement by inverse map estimation,” Appl. Opt. 55, 6017–6024 (2016). [CrossRef]   [PubMed]  

48. K. Yatabe and Y. Oikawa, “Convex optimization based windowed Fourier filtering with multiple windows for wrapped phase denoising,” Appl. Opt. 55, 4632–4641 (2016). [CrossRef]   [PubMed]  

49. N. Chitanont, K. Yatabe, K. Ishikawa, and Y. Oikawa, “Spatio-temporal filter bank for visualizing audible sound field by Schlieren method,” Appl. Acoust. 115, 109–120 (2017). [CrossRef]  

50. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Acousto-optic back-projection: Physical-model-based sound field reconstruction from optical projections,” J. Sound Vib. 394, 171–184 (2017). [CrossRef]  

51. Y. Koyano, K. Yatabe, and Y. Oikawa, “Infinite-dimensional SVD for revealing microphone array’s characteristics,” Appl. Acoust. 129, 116–125 (2018). [CrossRef]  


Figures

Fig. 1 MATLAB implementation of the improved HEFS. Only a subset of the data matrix D, named Dsub, is used for the parameter estimation. In addition to HEFS, a code for constructing the extended data matrix is included at the bottom right, where the input variable X is an Nv × Nh × M three-dimensional array obtained by concatenating M fringe images, whose size is Nv × Nh, along the third dimension, and the shifting position is an L × 2 matrix consisting of the indices relative to the center pixel (for example, the shifting position corresponding to Eq. (11) is obtained by vertically concatenating [0, 0], [1, 0], [−1, 0], [0, 1], [0, −1], [1, 1], [1, −1], [−1, 1], and [−1, −1]).

Fig. 2 MATLAB implementation of the proposed SNR-based pixel selection method. The input variable D is the data matrix, and threshold is a real number between 0 and 1.

Fig. 3 Illustration of the experimental setup. A polarized interferometer is combined with a polarized high-speed camera to realize the high-speed PPSI. A loudspeaker was placed between the optical flats, which reflect the object and reference lights.

Fig. 4 The loudspeaker used in the experiment (110 × 180 × 130 mm). The red dotted circle roughly indicates the observable area apparent in the measured results.

Fig. 5 Illustration of the recovering procedure for the measured sound field. Four fringe images were observed simultaneously by PPSI. Phase retrieval algorithms were applied at each time instance to obtain time-sequential phase maps, and time-directional filtering was utilized to eliminate time-invariant components unrelated to sound, as in [42,43].

Fig. 6 Visualized sound field obtained by PPSI and the traditional four-step algorithm. Eight consecutive time instances of the results are shown, where time evolves from left to right (from top left to bottom right).

Equations (31)

(1) $I_{i,j}^{[m]} = B_{i,j} + A_{i,j} \cos(\phi_{i,j} + \delta^{[m]})$
(2) $d_{i,j} = \bigl[I_{i,j}^{[1]}, I_{i,j}^{[2]}, \ldots, I_{i,j}^{[M]}\bigr]^T$
(3) $I_{i,j}^{[m]} = B_{i,j} + A_{i,j}\bigl[\cos(\phi_{i,j})\cos(\delta^{[m]}) - \sin(\phi_{i,j})\sin(\delta^{[m]})\bigr]$
(4) $d_{i,j} = B_{i,j}\, o + C_{i,j}\, c + S_{i,j}\, s$
(5) $c = \bigl[\cos(\delta^{[1]}), \cos(\delta^{[2]}), \ldots, \cos(\delta^{[M]})\bigr]^T$
(6) $s = \bigl[\sin(\delta^{[1]}), \sin(\delta^{[2]}), \ldots, \sin(\delta^{[M]})\bigr]^T$
(7) $\phi_{i,j} = \mathrm{Arg}\bigl[C_{i,j} - \iota S_{i,j}\bigr] = \mathrm{Arg}\bigl[A_{i,j}\{\cos(\phi_{i,j}) + \iota\sin(\phi_{i,j})\}\bigr]$
(8) $D = \bigl[d_{1,1}, d_{2,1}, \ldots, d_{N_v,1}, d_{1,2}, d_{2,2}, \ldots, d_{N_v,N_h}\bigr]$
(9) $D = U \Sigma V^T$
(10) $d_{i,j}^{(2)} = \bigl[d_{i,j}^T, d_{i+1,j}^T\bigr]^T$
(11) $d_{i,j}^{(9)} = \bigl[d_{i,j}^T, d_{i+1,j}^T, d_{i-1,j}^T, d_{i,j+1}^T, d_{i,j-1}^T, d_{i+1,j+1}^T, d_{i+1,j-1}^T, d_{i-1,j+1}^T, d_{i-1,j-1}^T\bigr]^T$
(12) $\alpha_1 x^2 + 2\alpha_2 x y + \alpha_3 y^2 + 2\beta(\alpha_4 x + \alpha_5 y) + \beta^2 \alpha_6 = 0$
(13) $\phi = \mathrm{Arg}\bigl[\sqrt{\alpha_1 + \alpha_3 + \kappa}\,(\tilde{y} + \tau\tilde{x}) + \iota\sqrt{\alpha_1 + \alpha_3 - \kappa}\,(\tilde{x} - \tau\tilde{y})\bigr]$
(14) $\tilde{x} = x - \beta(\alpha_3\alpha_4 - \alpha_2\alpha_5)/(\alpha_2^2 - \alpha_1\alpha_3)$
(15) $\tilde{y} = y - \beta(\alpha_1\alpha_5 - \alpha_2\alpha_4)/(\alpha_2^2 - \alpha_1\alpha_3)$
(16) $\kappa = \sqrt{4\alpha_2^2 + (\alpha_1 - \alpha_3)^2}, \qquad \tau = (\alpha_1 - \alpha_3 + \kappa)/(2\alpha_2)$
(17) $\alpha^T \chi = 0$
(18) $X\alpha = \lambda\alpha$
(19) $X = \frac{1}{N}\sum_{n=1}^{N} \chi_n \chi_n^T$
(20) $W = \begin{bmatrix} 6\overline{x^2} & 6\overline{xy} & \overline{x^2}+\overline{y^2} & 6\beta\bar{x} & 2\beta\bar{y} & \beta^2 \\ 6\overline{xy} & 4(\overline{x^2}+\overline{y^2}) & 6\overline{xy} & 4\beta\bar{y} & 4\beta\bar{x} & 0 \\ \overline{x^2}+\overline{y^2} & 6\overline{xy} & 6\overline{y^2} & 2\beta\bar{x} & 6\beta\bar{y} & \beta^2 \\ 6\beta\bar{x} & 4\beta\bar{y} & 2\beta\bar{x} & 4\beta^2 & 0 & 0 \\ 2\beta\bar{y} & 4\beta\bar{x} & 6\beta\bar{y} & 0 & 4\beta^2 & 0 \\ \beta^2 & 0 & \beta^2 & 0 & 0 & 0 \end{bmatrix}$
(21) $X\alpha = \lambda W \alpha$
(22) $\overline{x^2} = \frac{1}{N}\sum_{n=1}^{N} x_n^2, \qquad \overline{y^2} = \frac{1}{N}\sum_{n=1}^{N} y_n^2, \qquad \overline{xy} = \frac{1}{N}\sum_{n=1}^{N} x_n y_n$
(23) $W\alpha = \lambda X \alpha$
(24) $P = \tilde{\Sigma}^{-1} \tilde{U}^T$
(25) $P D = \tilde{\Sigma}^{-1} \tilde{U}^T U \Sigma V^T = \tilde{V}^T$
(26) $D D^T = U \Sigma V^T V \Sigma U^T = U \Sigma^2 U^T$
(27) $P = \Sigma^{-1} U^T$
(28) $P_{\perp}\, d_{i,j}^{(L)} \approx 0$
(29) $\mathrm{SNR}_{i,j} = \bigl\|P\, d_{i,j}^{(L)}\bigr\|_2^2 \,/\, \bigl\|P_{\perp}\, d_{i,j}^{(L)}\bigr\|_2^2$
(30) $\mathrm{SNR}_{i,j} > \Upsilon \max_{i,j}\{\mathrm{SNR}_{i,j}\}$
(31) $\phi = \mathrm{Arg}\Bigl[\sum_{m=0}^{M-1} I^{[m]} \cos(2\pi m/M) - \iota \sum_{m=0}^{M-1} I^{[m]} \sin(2\pi m/M)\Bigr] = \mathrm{Arg}\Bigl[\sum_{m=0}^{M-1} I^{[m]} e^{-2\pi\iota m/M}\Bigr]$
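The hyper ellipse fitting itself (the moment matrix X, the hyper normalization matrix W, and the generalized eigenvalue problem Wα = λXα solved for the eigenvector of the largest eigenvalue, in the spirit of HyperLS [34,35]) can be sketched in NumPy as follows. The function names are illustrative, and the inputs are assumed to be point coordinates (x, y) on the Lissajous figure:

```python
import numpy as np

def chi(x, y, beta):
    # design vectors of the constraint alpha^T chi = 0 for points on the conic
    return np.stack([x * x, 2 * x * y, y * y, 2 * beta * x, 2 * beta * y,
                     np.full_like(x, beta * beta)])

def hyper_ellipse_fit(x, y, beta=1.0):
    """Hyper ellipse fitting sketch: build the moment matrix X and the
    hyper normalization matrix W, then solve the inverted generalized
    eigenvalue problem W a = lambda X a and keep the eigenvector of the
    largest eigenvalue (the inverted form avoids the singularity of W)."""
    C = chi(x, y, beta)                     # 6 x N design matrix
    N = x.size
    X = (C @ C.T) / N                       # second-moment matrix
    x2, y2, xy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    xb, yb, b = x.mean(), y.mean(), beta
    W = np.array([
        [6 * x2,     6 * xy,        x2 + y2,    6 * b * xb, 2 * b * yb, b * b],
        [6 * xy,     4 * (x2 + y2), 6 * xy,     4 * b * yb, 4 * b * xb, 0.0  ],
        [x2 + y2,    6 * xy,        6 * y2,     2 * b * xb, 6 * b * yb, b * b],
        [6 * b * xb, 4 * b * yb,    2 * b * xb, 4 * b * b,  0.0,        0.0  ],
        [2 * b * yb, 4 * b * xb,    6 * b * yb, 0.0,        4 * b * b,  0.0  ],
        [b * b,      0.0,           b * b,      0.0,        0.0,        0.0  ]])
    w, V = np.linalg.eig(np.linalg.solve(X, W))   # W a = lambda X a
    a = np.real(V[:, np.argmax(np.real(w))])      # eigenvector of largest lambda
    return a / np.linalg.norm(a)
```

The returned unit vector α parameterizes the conic; the phase then follows from the closed-form expression with κ, τ, x̃, and ỹ above.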