
Systematic error analysis and parameter design of a vision-based phase estimation method for ultra-precision positioning

Open Access

Abstract

Ultra-precision position measurement is increasingly important in advanced manufacturing fields such as the semiconductor industry and fiber optics or photonics. A vision-based phase estimation method we proposed previously performs position measurement by imaging a 2D periodic pattern. In this paper, the systematic errors of this method are analyzed and derived mathematically, and classified into two types: spectrum leakage error, caused by image truncation and window function modulation, and sub-pixel error, resulting from discrete Fourier transform (DFT) intensity interpolation. Key design parameters are identified, including the pattern period T, the camera pixel size t and resolution N, and the type of window function used. Numerical simulations are conducted to investigate the relationship between the phase errors and the design parameters, and an error reduction method is then proposed. Finally, the improved performance after parameter optimization is validated by a comparative experiment. Experimental results show that the measurement errors of the prototype are within ∼2 nm in the X and Y axes and ∼1 µrad in the rotation axis, reaching a sub-pixel accuracy better than ${10^{ - 3}}$ pixel.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ultra-precision position measurement is a key issue in the fields of advanced manufacturing [1] such as semiconductor production [2] and fiber optics or photonics [3]. Many measurement methods are used to solve this fundamental problem [4], typically including laser interferometers [5], encoders [6], position-sensitive detectors (PSD) [7], and vision-based technology [8]. Among them, vision-based methods have developed rapidly because they offer a promising solution with low cost, a simple configuration, and simultaneous multi-degree-of-freedom (DOF) measurement.

In recent years, more and more vision applications require sub-pixel measurement accuracy, such as industrial automation, remote sensing, and biomedical imaging [9]. Cheng and Menq [10] creatively extracted phases from a periodic pattern, aiming to develop a high-precision, multi-DOF, and low-cost displacement measurement system. Phase retrieval from a periodic pattern is typically used in such measurement systems [11]. Sandoz and Zea [12] used spatial frequency analysis of micro periodic patterns and achieved simultaneous 3-DOF displacement and rotation measurement at a high resolution of ${10^{ - 2}}$ pixel. For different ranges and resolutions, different patterns and coding schemes have been proposed [13]. Yamahata et al. [14] employed a 1-D DFT phase-correlation method to calculate the sub-nanometer translation of a micro-electro-mechanical system (MEMS). Their method is simple but can only measure translation along one axis. Moreover, it did not take the DFT leakage problem into consideration, which can cause measurement errors. Zhao et al. [15] made use of a precisely machined polar microstructure and a template-matching method for optical position measurement. They further proposed a measurement system that combined a Hough circle transform, a neural classifier, and sub-pixel interpolation algorithms, achieving a length uncertainty of 90 nm [16].

Previously, Chen and Huang proposed a vision-based position measurement technique using a phase estimation method [17]. The vision system employs only one camera and a 2D periodic grid pattern as the target. The method extracts the phases of the fundamental frequency component and estimates only the center point of the image, which gives it great advantages of high precision, robustness to noise and defocusing, and a much smaller computational load. However, several problems remained unexplained in that research. (1) How does the well-known DFT leakage affect the measurement accuracy? (2) For sub-pixel displacements, how well does the proposed phase estimation method perform? (3) The design parameters of the vision system have been found to have a great impact on estimation accuracy, so how should these parameters be designed to obtain the best performance? These are instructive and meaningful issues for promoting the vision measurement method in practical applications.

In this paper, the systematic errors of the phase estimation method are analyzed from the perspective of digital signal processing. Based on this error analysis, optimization of the design parameters for error reduction is proposed. Numerical simulation and experimental results demonstrate that the proposed method can reach better than ${10^{ - 3}}$ sub-pixel accuracy (at the nanometer and micro-radian orders, respectively). The vision system thus has great potential in practical applications that require a simple and high-precision position measurement system.

The rest of this paper is organized as follows. Section 2 introduces the basic principle of the phase estimation method proposed by Chen and Huang. Section 3 analyzes the systematic errors of the phase estimation method in mathematical derivation and explains the compositions of errors. Section 4 discusses numerical simulation and parameters design for error reduction. Accuracy improvement is assessed, and the error model is validated by experimental results in Section 5. Finally, Section 6 concludes this work.

2. Phase estimation method

As shown in Fig. 1, the vision-based position measurement system employs a camera as the reading head and a 2D periodic pattern as the grating scale. By analyzing the image of the periodic pattern captured by the camera, the relative position between the camera and the periodic pattern can be determined. That is:

$$d = \frac{\varphi }{{2\pi }}T, $$
where d is the relative position within one period of the pattern, $\varphi$ is the phase of the center point of the image to be estimated, and T is the period of the pattern which is known.

Fig. 1. Schematic diagram of the vision-based measurement system.

To estimate the phase $\varphi$, 2D-DFT is used to calculate the phases of the fundamental frequency components ${\varphi _x}$ and ${\varphi _y}$ along X and Y axes respectively. The fundamental frequency components can be identified by finding the maximum amplitudes in the spectrum of the image. The following paragraphs briefly explain the theory of the phase estimation method proposed.

Considering a 2D discrete sinusoidal signal as an example, the original 2D image ${g_0}[m,n]$ and its windowed image $g[m,n]$ are defined as follows:

$${g_0}[m,n] = {A_0}\cos ({k_1}m + {k_2}n + {\theta _0}),\textrm{ } - \infty < m,n < \infty \textrm{ },$$
$$g[m,n] = {g_\textrm{0}}[m,n]w[m,n]\textrm{ ,}$$
where m and n are coordinates in the image plane, ${k_1}$ and ${k_2}$ are angular frequencies along two axes of the image, ${A_0}$ is the amplitude of the sinusoidal image, ${\theta _0}$ is the initial phase, and w[m, n] is the 2D rectangular window function. The spectrum of the image $G[u,v]$ by 2D-DFT can be represented as Eq. (4).
$$G[u,v] = G\left( {\frac{{\mathrm{2\pi }(u - {u_\textrm{0}})}}{M},\frac{{\mathrm{2\pi }(v - {v_\textrm{0}})}}{N}} \right),\textrm{0} \le u \le M - \textrm{1},\textrm{0} \le v \le N - \textrm{1}\textrm{.}$$
Here u and v are coordinates in the frequency domain, ${u_0}$ and ${v_0}$ are the coordinates of the DC component, and M and N are the total numbers of rows and columns of the image. Supposing that the coordinates of the point with the maximum amplitude in the frequency domain are $({u_1},{v_1})$, the estimated phase of the center point of the image can be found as:
$${\varphi _{est}} = {\theta _\textrm{0}} + \left( {\frac{{M - \textrm{1}}}{\textrm{2}}} \right){k_\textrm{1}} + \left( {\frac{{N - \textrm{1}}}{\textrm{2}}} \right){k_\textrm{2}} = \arg ({G[{u_\textrm{1}},{v_\textrm{1}}]} )+ \frac{{\mathrm{\pi }({u_\textrm{1}} - {u_\textrm{0}})(M - \textrm{1})}}{M} + \frac{{\mathrm{\pi }({v_\textrm{1}} - {v_\textrm{0}})(N - \textrm{1})}}{N}.$$
Equation (5) is the 2D phase estimation equation. In other words, the phase of the center point of the image can be calculated as the phase of the maximum-amplitude component of the spectrum plus two additional terms within the range of $(0,2\pi )$. By substituting Eq. (5) into Eq. (1), we can calculate the phases ${\varphi _x}$ and ${\varphi _y}$ and then the corresponding relative positions ${d_x}$ and ${d_y}$ at the center point of the image. The detailed derivation and the rotation principles can be found in [17].
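As a concrete illustration, the estimator of Eq. (5) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the image size, frequencies, and initial phase below are arbitrary test values, and in the FFT index convention the DC coordinates are ${u_0} = {v_0} = 0$.

```python
import numpy as np

# Synthetic 2D cosine image with non-integer numbers of periods
M, N = 480, 640
k1, k2 = 2*np.pi/9.7, 2*np.pi/10.3   # angular frequencies along the two axes
theta0 = 0.8
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
g = np.cos(k1*m + k2*n + theta0)

G = np.fft.fft2(g)
mag = np.abs(G)
mag[0, 0] = 0                        # suppress the DC term
# search the positive-frequency quadrant for the fundamental component
u1, v1 = np.unravel_index(np.argmax(mag[:M//2, :N//2]), (M//2, N//2))

# Eq. (5) with u0 = v0 = 0
phi_est = (np.angle(G[u1, v1])
           + np.pi*u1*(M - 1)/M
           + np.pi*v1*(N - 1)/N)
phi_true = theta0 + k1*(M - 1)/2 + k2*(N - 1)/2
err = np.angle(np.exp(1j*(phi_est - phi_true)))   # wrap to (-pi, pi]
```

For this pure cosine the residual `err` is dominated by the spectrum leakage analyzed in Section 3.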

3. Systematic error analysis and modeling

The phase estimation method described in Section 2 is, in essence, a one-point interpolated DFT phase estimation method that accurately estimates the phase of the center point of the image. The DFT approach of spectrum analysis is computationally efficient and robust. However, it may generate systematic phase estimation errors due to spectrum leakage and intensity interpolation. Figure 2 illustrates the image signal processing flowchart. For simplicity, the following discussion on systematic errors is based on a 1-D signal. Systematic errors for a 2-D image are the same as those for a 1-D signal along each dimension.

Fig. 2. Image processing error analysis of the phase estimation method. (a) Interpolation error. (b) Leakage error.

Spectrum leakage [18–20] is mainly caused by truncation of the signal with a window function. The error caused by spectrum leakage is periodic and related to the pattern period. It depends on the signal waveform and the type of window function used, such as rectangular, Hanning, and other combined windows. On the other hand, in the spatial domain, the periodic pattern waveform is reconstructed by gray-value interpolation after discrete pixel sampling, which distorts the signal and results in interpolation error [21,22]. The period of this error equals the pixel size, so we also refer to it as sub-pixel error.

3.1 Spectrum leakage error

The goal of the phase estimation method used in this research is to calculate the phase of the fundamental frequency component to approximate the phase of the midpoint of a discrete signal. As mentioned above, spectrum leakage error is caused by signal truncation with a non-integer number of signal periods, and it comprises two contributions: leakage from the non-fundamental frequency components and window spectrum modulation. For clarity, the two parts are explained separately.

3.1.1 Spectrum leakage error from other frequency components

Taking a windowed cosine signal as an example, it ideally has two conjugate spectrum peaks at the positive and negative signal frequencies. To calculate the phase of the cosine signal, the positive part of the spectrum is normally used, which results in phase errors due to spectrum leakage from the negative part of the spectrum into the positive part; the two parts overlap, as can be seen in Fig. 2(b). To analyze the spectrum leakage error of a single-frequency signal, it is necessary to use the complete expression to calculate the phase and then compare it with the approximated result. Assume ${x_0}[n]$ is the original signal, and $x[n]$ is the truncated signal multiplied by a rectangular window ${w_r}[n]$. We have

$${x_\textrm{0}}[n] = {A_\textrm{0}}\cos ({\omega _\textrm{0}}n + {\theta _\textrm{0}}),\textrm{ } - \infty < n < + \infty \textrm{ },$$
$$x[n] = {A_0}\cos ({\omega _0}n + {\theta _0}) \cdot {w_r}[n] = \frac{{{A_0}}}{2}{w_r}[n]{e^{j{\theta _0}}}{e^{j{\omega _0}n}} + \frac{{{A_0}}}{2}{w_r}[n]{e^{ - j{\theta _0}}}{e^{ - j{\omega _0}n}}.$$
Here ${\theta _0}$ is the initial phase and ${\omega _\textrm{0}} = \mathrm{2\pi }{t / T} = {{\mathrm{2\pi }{f_\textrm{0}}} / {{f_s}}}$ is the signal frequency, where T is the signal period and t is the sampling interval.

Assume that the index of the frequency closest to the fundamental frequency ${\omega _0}$ (where the amplitude is the maximum) is ${k_0}$. The DFT of Eq. (7) at the frequency index ${k_0}$ is

$$\begin{array}{l} X[{k_0}] = \frac{{{A_0}}}{2}{e^{j{\theta _0}}}{W_R}\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right) + \frac{{{A_0}}}{2}{e^{ - j{\theta _0}}}{W_R}\left( {\frac{{2\pi {k_0}}}{N} + {\omega_0}} \right)\\ = \frac{{{A_\textrm{0}}}}{\textrm{2}}{e^{ - j\frac{{(N - \textrm{1)}\pi {k_\textrm{0}}}}{N}}}\left\{ {a{e^{j\left[ {\frac{{(N - \textrm{1)}{\omega_\textrm{0}}}}{\textrm{2}} + {\theta_\textrm{0}}} \right]}} + b{e^{ - j\left[ {\frac{{(N - \textrm{1)}{\omega_\textrm{0}}}}{\textrm{2}} + {\theta_\textrm{0}}} \right]}}} \right\}\textrm{ }, \end{array}$$
where
$$a = \frac{{\sin \left[ {\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right)\frac{N}{2}} \right]}}{{\sin \left[ {\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right)\frac{1}{2}} \right]}},\textrm{ }b = \frac{{\sin \left[ {\left( {\frac{{2\pi {k_0}}}{N} + {\omega_0}} \right)\frac{N}{2}} \right]}}{{\sin \left[ {\left( {\frac{{2\pi {k_0}}}{N} + {\omega_0}} \right)\frac{1}{2}} \right]}}\textrm{ }.$$
The phase of $X[{k_\textrm{0}}]$ can be found as
$$\arg ({X[{k_\textrm{0}}]} )={-} \frac{{(N - \textrm{1)}\pi {k_\textrm{0}}}}{N} + {\tan ^{ - \textrm{1}}}\left[ {\frac{{a - b}}{{a + b}}\tan\left( {{\theta_\textrm{0}} + \frac{{(N - \textrm{1)}{\omega_\textrm{0}}}}{\textrm{2}}} \right)} \right]\textrm{ },$$
which represents the accurate phase of the frequency component at the index ${k_0}$.

If the negative part of the spectrum is ignored, which means b = 0 in the calculation of the phase, we have

$$\arg (X[{k_\textrm{0}}]) ={-} \frac{{(N - \textrm{1)}\pi {k_\textrm{0}}}}{N} + {\theta _\textrm{0}} + \frac{{(N - \textrm{1)}{\omega _\textrm{0}}}}{\textrm{2}}\textrm{ }.$$
Subtracting Eq. (11) from Eq. (10), we can find the leakage error ${E_L}(\theta )$ of a single-frequency signal in the proposed phase estimation method as
$${E_L}(\theta ) = {\tan ^{ - \textrm{1}}}\left[ {\frac{{a - b}}{{a + b}}\tan\left( {{\theta_\textrm{0}} + \frac{{(N - \textrm{1)}{\omega_\textrm{0}}}}{\textrm{2}}} \right)} \right] - \left( {{\theta_\textrm{0}} + \frac{{(N - \textrm{1)}{\omega_\textrm{0}}}}{\textrm{2}}} \right).$$
This is a phase-dependent error whose curve is numerically verified to be approximately sinusoidal.
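The chain from Eq. (8) to Eq. (12) can be checked numerically. The sketch below (not the authors' code; the period and phase are arbitrary test values) compares the closed-form DFT phase at the peak bin with the phase computed directly by FFT, and evaluates the leakage error committed when the negative-frequency peak is ignored.

```python
import numpy as np

N = 256
T = 10.7                       # signal period in samples (non-integer -> leakage)
w0 = 2*np.pi/T
theta0 = 0.3
n = np.arange(N)
x = np.cos(w0*n + theta0)

X = np.fft.fft(x)
k0 = int(np.argmax(np.abs(X[:N//2])))   # bin nearest the fundamental

def dirichlet(w):              # rectangular-window amplitude sin(Nw/2)/sin(w/2)
    return np.sin(w*N/2)/np.sin(w/2)

a = dirichlet(2*np.pi*k0/N - w0)        # coefficients of Eq. (9)
b = dirichlet(2*np.pi*k0/N + w0)
mid = theta0 + (N - 1)*w0/2             # true phase at the signal midpoint

# Eq. (10), written with atan2 to avoid branch ambiguity of arctan
exact = -(N - 1)*np.pi*k0/N + np.arctan2((a - b)*np.sin(mid), (a + b)*np.cos(mid))
check = np.angle(np.exp(1j*(np.angle(X[k0]) - exact)))   # should be ~0

# Eq. (12): leakage error from setting b = 0, wrapped to (-pi, pi]
E_L = np.angle(np.exp(1j*(np.arctan2((a - b)*np.sin(mid),
                                     (a + b)*np.cos(mid)) - mid)))
```

Here `check` confirms that Eq. (10) reproduces the FFT phase exactly, while `E_L` gives the small phase-dependent leakage error of the one-peak approximation.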

Further, for a non-sinusoidal periodic signal, which can be seen as the sum of a series of frequency components, the spectrum of the window function is not only shifted to the fundamental frequency but also the high-order harmonic frequencies. The side lobes of the spectrum of the window function at high-order harmonic frequencies will probably affect the phase estimation results. Assuming a multi-frequency signal $g[n]$ and its spectrum ${G_m}(\omega )$ as

$$g[n] = \sum\limits_{m = 0}^M {{A_m}} \cos ({2\pi {f_m}n/{f_s} + {\theta_m}} ),$$
$${G_m}(\omega ) = \sum\limits_{m = 0}^M {\frac{{{A_m}}}{2}} [{W({\omega - {\omega_m}} ){e^{j{\theta_m}}} + W({\omega + {\omega_m}} ){e^{ - j{\theta_m}}}} ],$$
where ${A_m}$, ${f_m}$, and ${\theta _m}$ are amplitude, frequency, and initial phase of each frequency component, respectively. It can be seen from Eq. (14) that the fundamental frequency spectrum $G({k_0})$ is not only affected by the leakage from the spectrum at the negative fundamental frequency ($- {\omega _0}$), but also by the leakage from the spectrum at other frequencies (${\pm} {\omega _m}$). Figure 3 illustrates the measurement error due to leakage from other frequency components in the case of multi-frequency waveforms. The magnitude of the leakage error depends on the position of each frequency component and the degree of frequency separation.

Fig. 3. Spectrum leakage error for a multi-frequency waveform.

3.1.2 Spectrum leakage error from window modulation

As mentioned above, the spectrum leakage error also depends to a large degree on the spectral properties of the window function. The phase estimation equation, Eq. (11), is derived under the assumption of a rectangular window. A rectangular window introduces no window modulation error when the window size is an integer number of periods; when it is not, however, the error is very significant. To investigate the performance of other window functions, we take a discrete cosine signal modulated by a Hanning window as an example to analyze the window modulation error ${E_w}$. A Hanning window can be considered as the sum of several rectangular windows, so its spectrum can be expressed as a sum of rectangular-window spectra. Using ${w_h}[n]$ to denote the Hanning window in the spatial domain ($n = 0,1, \ldots N - 1$), we have

$${w_h}[n] = 0.5 - 0.5\cos\left( {\frac{{2\pi n}}{N}} \right) = \frac{1}{2}{w_r}[n ]- \frac{1}{4}{w_r}[n ]{e^{j\frac{{2\pi n}}{N}}} - \frac{1}{4}{w_r}[n ]{e^{ - j\frac{{2\pi n}}{N}}}\textrm{ }.$$
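This decomposition can be verified in a couple of lines (a sketch; the window length N is an arbitrary test value):

```python
import numpy as np

# Eq. (15): the periodic Hanning window as a sum of three modulated
# rectangular windows (the rectangular window is all-ones on 0..N-1).
N = 64
n = np.arange(N)
wh = 0.5 - 0.5*np.cos(2*np.pi*n/N)
decomp = 0.5 - 0.25*np.exp(1j*2*np.pi*n/N) - 0.25*np.exp(-1j*2*np.pi*n/N)
max_dev = np.max(np.abs(wh - decomp))   # should be at machine precision
```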
The DFT’s positive part of the original signal truncated by the above Hanning window is
$$X[{k_0}] = \frac{{{A_0}}}{2}{e^{j{\theta _0}}}{W_H}\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right)\textrm{ },$$
where
$${W_H}\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right) = \frac{1}{2}{W_R}\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right) + \frac{1}{4}\left[ {{W_R}\left( {\frac{{2\pi ({k_0} - 1)}}{N} - {\omega_0}} \right) + {W_R}\left( {\frac{{2\pi ({k_0} + 1)}}{N} - {\omega_0}} \right)} \right].$$
When $N \gg 1$, $\frac{{N - 1}}{N} \approx 1$, and we have the approximate result in Eq. (18).
$${W_H}\left( {\frac{{2\pi {k_0}}}{N} - {\omega_0}} \right) \approx \frac{1}{2}{e^{ - j\left[ {\frac{{(N - 1)}}{N}\pi {k_0} - \frac{{(N - 1){\omega_0}}}{2}} \right]}}\left( {A + \frac{1}{2}B + \frac{1}{2}C} \right)\textrm{ },$$
where A, B, and C represent the spectrum amplitude coefficients of the three rectangular window functions. Substituting Eq. (18) into Eq. (16), we have
$$X[{k_0}] = \frac{{{A_0}}}{4}\left( {A + \frac{1}{2}B + \frac{1}{2}C} \right){e^{j\left[ {{\theta_0} - \left( {\frac{{(N - 1)\pi }}{N}{k_0} - \frac{{{\omega_0}(N - 1)}}{2}} \right)} \right]}}\textrm{ }.$$
Calculating the phase of Eq. (19) and rearranging the two sides, we obtain
$${\theta _0} + \frac{{{\omega _0}(N - 1)}}{2} = \arg ({X[{k_0}]} )+ \frac{{(N - 1)\pi }}{N}{k_0}\textrm{ }.$$
Equation (20) suggests that the phase estimation equation for a Hanning window is approximately the same as that for a rectangular window. However, their phase estimation errors caused by spectrum leakage may be different because their spectrum amplitude coefficients are different. Actually, all cosine windows can be written as a combination of several rectangular windows. This means we can use Eq. (20) to estimate the phase at the midpoint of the signal when cosine windows are used. The phase estimation error depends on the type of windows used, which is discussed in Section 4 by numerical simulation.
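A short sketch makes Eq. (20) concrete: a cosine with a non-integer number of periods is truncated by a Hanning window, and the midpoint phase is recovered from the peak bin. The frequency and phase are arbitrary test values, not taken from the paper's experiments.

```python
import numpy as np

N = 512
w0 = 2*np.pi/12.3            # non-integer number of periods -> leakage present
theta0 = 1.1
n = np.arange(N)
# Hanning-windowed cosine (periodic Hanning definition, as in Eq. (15))
x = np.cos(w0*n + theta0)*(0.5 - 0.5*np.cos(2*np.pi*n/N))

X = np.fft.fft(x)
k0 = int(np.argmax(np.abs(X[:N//2])))            # maximum-amplitude bin
phi_est = np.angle(X[k0]) + (N - 1)*np.pi*k0/N   # Eq. (20)
phi_true = theta0 + w0*(N - 1)/2
err = np.angle(np.exp(1j*(phi_est - phi_true)))  # residual leakage error
```

The residual `err` is far smaller than the rectangular-window case at the same P, reflecting the faster side-lobe decay of the Hanning window.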

3.2 Sub-pixel interpolation error

When the displacement is smaller than the pixel size of the image sensor, spatial-domain discrete sampling also causes a systematic phase estimation error, which we call the sub-pixel interpolation error. As shown in Fig. 4, for a non-sinusoidal periodic waveform, discretely sampling the waveform causes distortion, which in turn results in a phase estimation error in the frequency domain. This error is approximately sinusoidal with respect to the sub-pixel shift, with an error period equal to the pixel size [23].

Fig. 4. Sub-pixel interpolation error analysis. (a) A non-sinusoidal periodic waveform sampled with sub-pixel shifts. (b) The sinusoidal-like phase error curve produced by discrete sampling on the image sensor.

Briefly, the phase estimation method first converts a translation in the spatial domain into a phase variation in the frequency domain. By calculating the phase variation and then converting back to the spatial domain, the corresponding displacement is accurately estimated. For a continuous signal, the shift is exactly linearly related to the phase variation in the frequency domain. The discrete case is different, however, because the intensity information between two pixels is interpolated. Interpolation methods are commonly used in digital image processing, such as phase correlation algorithms [24] and intensity interpolation algorithms [25]. Among intensity interpolation methods, the DFT is a relatively accurate approach [23].

Assume that the original discrete signal is $x[n]$ and its shifted discrete signal is $y[n] = x[n - d]$, where d is the shift in pixels. Their spectra are expressed as:

$$X[k] = \sum\limits_{n = 0}^{N - 1} {x[n]} {e^{ - j\frac{{2\pi k}}{N}n}}\textrm{ ,}$$
$$Y[k] = X[k]{e^{ - j\frac{{2\pi k}}{N}d}}\textrm{ ,}$$
When d is an integer, Eq. (22) holds exactly. However, if d is a non-integer, that is, a sub-pixel shift, Eq. (22) is no longer exact and introduces a position-dependent interpolation error. Interestingly, this error is found to be zero when d = 0.5 (a shift of half a pixel). When d is 0.25 or 0.75 pixels, the error reaches its positive or negative maximum, because of the symmetry property of the interpolation error, as shown in Fig. 4(b).
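The sub-pixel error curve of Fig. 4(b) can be reproduced with a small simulation. The sketch below is illustrative only: a Gaussian-profile periodic waveform (modeling the pattern image in this paper) is sampled at sub-pixel shifts d, the midpoint phase is estimated with a Hanning window via Eq. (20), and the phase error is recorded versus d; the period, Gaussian width, and signal length are assumed test values.

```python
import numpy as np

N, T, sigma = 512, 8.0, 0.8          # pixels; S = T/t = 8 with t = 1 pixel
w0 = 2*np.pi/T
n = np.arange(N)
win = 0.5 - 0.5*np.cos(2*np.pi*n/N)  # Hanning window

def phase_error(d):
    u = (n - d) % T - T/2            # periodic Gaussian pulse train
    x = np.exp(-(u/sigma)**2)*win
    X = np.fft.fft(x)
    k0 = int(N/T)                    # fundamental bin (N/T is an integer here)
    phi = np.angle(X[k0]) + (N - 1)*np.pi*k0/N       # Eq. (20)
    phi_true = w0*((N - 1)/2 - d - T/2)              # shifted midpoint phase
    return np.angle(np.exp(1j*(phi - phi_true)))

shifts = np.linspace(0, 1, 101)
errs = np.array([phase_error(d) for d in shifts])
errs -= errs.mean()                  # remove the d-independent offset
```

Plotting `errs` against `shifts` yields the sine-like curve with a one-pixel period; a sharper pulse (smaller `sigma`) enlarges the error amplitude.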

The explicit analytic error function is hard to express, as it changes with the shape of the waveform, the sampling frequency, and the interpolation method. Fortunately, the properties of the interpolation error are relatively easy to characterize, and design parameters such as T and t can be chosen accordingly, as further discussed in Section 4.

4. Parameters design for error reduction

4.1 Key design parameters

In Section 3, we analyzed the major sources of the systematic errors of the phase estimation method. In this section, we focus on the design of the key parameters related to the camera and the periodic pattern to minimize these errors. From Eq. (9), there are two key variables in the coefficients a and b: the normalized angular frequency ${\omega _0}$ and the total number of sampling points N. These two variables determine the magnitude of the spectrum leakage error according to Eq. (12). However, since ${\omega _0}$ and N are coupled and cannot be investigated independently, we introduce two new parameters in Eq. (23): the number of sampled periods P and the number of sampling points per period S.

Parameters P and S can be designed by choosing the parameters of the camera and the periodic pattern. The geometric relationship is depicted in the schematic camera-pattern model in Fig. 5. We assume a pinhole camera model and a 1× camera lens. For example, if we set the pattern period T = 100 $\mu m$ and the pixel size t = 10 $\mu m$, we have S = 10. If we choose a camera with a resolution of 640 × 480 pixels, then ${P_x}$ = 64 along the X-axis and ${P_y}$ = 48 along the Y-axis.

$$\left\{ \begin{array}{l} P = \frac{{Nt}}{T} = \frac{{{\omega_0}}}{{\textrm{2}\pi }}N\\ S = \frac{N}{P} = \frac{T}{t} = \frac{{\textrm{2}\pi }}{{{\omega_0}}} \end{array} \right.\textrm{ }\textrm{.}$$
Substituting P and S into the coefficients a and b, and replacing ${\omega _0}$ and N with P and S in Eq. (12), the leakage error ${E_L}$ of a cosine signal with a rectangular window is obtained as Eq. (24). Once P and S are given, the signal phase ${\theta _0}$ changes as the waveform moves, and ${E_L}$ varies periodically.
$${E_L} = {\tan ^{ - \textrm{1}}}\left[ {\frac{{a - b}}{{a + b}}\tan\left( {P\pi - \frac{\pi }{S} + {\theta_\textrm{0}}} \right)} \right] - \left( {P\pi - \frac{\pi }{S} + {\theta_\textrm{0}}} \right)\textrm{ }.$$
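Equation (24) can be evaluated directly to see how the worst-case leakage error depends on the design parameters. The sketch below sweeps ${\theta _0}$ for two illustrative (P, S) pairs; the half-period offsets and the atan2/wrapping details are implementation choices, not part of the paper.

```python
import numpy as np

def max_leakage_error(P, S, n_theta=360):
    """Worst-case |E_L| of Eq. (24) over theta0, rectangular window."""
    N = int(round(P*S))
    w0 = 2*np.pi/S
    k0 = int(round(P))                       # bin nearest the fundamental
    D = lambda w: np.sin(w*N/2)/np.sin(w/2)  # rectangular-window spectrum
    a = D(2*np.pi*k0/N - w0)                 # Eq. (9) rewritten with P, S
    b = D(2*np.pi*k0/N + w0)
    theta = np.linspace(0, 2*np.pi, n_theta, endpoint=False)
    mid = P*np.pi - np.pi/S + theta          # argument inside Eq. (24)
    E = np.arctan2((a - b)*np.sin(mid), (a + b)*np.cos(mid)) - mid
    return np.max(np.abs(np.angle(np.exp(1j*E))))   # wrap to (-pi, pi]

e_small = max_leakage_error(P=20.5, S=10)    # half-period offset, small P
e_bigP  = max_leakage_error(P=80.5, S=10)    # same offset, larger P
```

Increasing P lowers the worst-case leakage error of the pure cosine, consistent with the trends discussed for Fig. 7.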

Fig. 5. Camera-pattern geometric model.

Similar to ${E_L}$, the window modulation error ${E_w}$ is also determined by P and S. The original signal is ${y_0}[n] = \cos ({{2\pi {f_0}n} / {{f_s}}})$. Since ${f_n} = {{{f_0}} / {{f_s}}} = {t / T} = {1 / S}$ and N = PS, we have ${1 / N} = {{{f_n}} / P}$. The Hanning-window-truncated signal $y[n]$ can be written as:

$$\begin{array}{l} y[n] = {y_0}[n] \cdot {w_h}[n] = \cos ({2{{\pi {f_0}n} / {{f_s}}}} )\cdot \left[ {0.5 - 0.5\cos \left( {\frac{{2\pi n}}{N}} \right)} \right]\\ = \frac{1}{2}\left[ {\cos({2\pi {f_n}n} )- \frac{1}{2}\cos\left( {2\pi \frac{{P + 1}}{P}{f_n}n} \right) - \frac{1}{2}\cos\left( {2\pi \frac{{P - 1}}{P}{f_n}n} \right)} \right]\textrm{ }. \end{array}$$
Equation (25) shows that the original signal ${y_0}[n]$ is not only amplitude-modulated by the window function but also augmented by two leakage cosine components whose distances from the main peak position (${f_n}$) are both ${{{f_n}} / P}$. From the perspective of energy leakage, as Fig. 6 shows, if P increases and ${{{f_n}} / P}$ decreases, the two leakage peaks approach the main peak, the window modulation becomes closer to that of the rectangular window, and the measured value of the maximum spectral line $|{{G_{\max }}} |$ approaches the ideal value. Similarly, when S increases, ${f_n}$ decreases, so ${{{f_n}} / P}$ decreases and the measured error is expected to shrink.

Fig. 6. Window function modulation error analysis by design parameters P and S.

The interpolation error ${E_I}$ of the phase estimation method is also directly related to the number of sampling points per period S. The ratio $S = {T / t}$ (the pattern period T over the pixel size t) determines the sampling density.

The above conclusions provide a useful approach for investigating the factors influencing the systematic errors in practice and for designing appropriate parameters to reduce them. To sum up, the design parameters are P, S, and the window function.

4.2 Numerical simulation

In our experiment, the periodic pattern is a 2D array of squares etched on a piece of chrome-plated glass. When captured by the camera, its image shows an intensity profile that is best modeled by a Gaussian function, which can be understood as a square wave smoothed by a Gaussian filter. Therefore, we use a 1D Gaussian function to simulate the pattern waveform and investigate the relationships between the systematic errors and the design parameters.

We first investigate the relationship between the spectrum leakage error ${E_L}$ and the parameters S and P. This is a pixel-level error, so we set the moving step d equal to the pixel size t in the simulation, and we use a Hanning window as the window function. Simulations are performed on two different waveforms, a cosine waveform and a Gaussian waveform. Varying one parameter at a time, we obtain the simulation results shown in Fig. 7.

Fig. 7. Relation of leakage error and S/P. (a) Cosine waveform, P = 39.9; (b) Gaussian waveform, P = 39.9; (c) Cosine waveform, S = 20; (d) Gaussian waveform, S = 20.

From Figs. 7(a) and (b), we find that as S increases while P is held constant, ${E_L}$ decreases exponentially, which means that increasing the number of sampling points per period is highly effective in reducing the phase error. Figures 7(c) and (d) show the relationship between P and ${E_L}$ when S is kept constant; the phase error also decreases exponentially as P increases. When P is an integer, the error is at a local minimum because truncation then causes no leakage. Generally, when S is constant, the sharper the waveform (i.e., the more high-frequency components it contains), the greater the leakage error; hence the cosine waveform causes the smallest leakage error. Note that the leakage phase error is independent of the absolute sizes of T and t.

Similar to ${E_L}$, the window modulation error ${E_w}$ also decreases as S or P increases. In particular, once P reaches a certain value, the error drops to almost zero; the error reduction effect of S, on the other hand, is less significant. The simulation of ${E_w}$ is performed by setting P to an integer and the moving step equal to the pixel size, so as to eliminate the spectrum leakage error and the interpolation error. The window modulation errors ${E_w}$ of ten commonly used window functions are compared in Table 1.

Table 1. Maximum Window Modulation Error of Different Windows

Comparing the simulation results, we find that: 1) the rectangular window has no modulation error, since it does not modulate the amplitude of the waveform; 2) among the other window functions, the Rife-Vincent window results in the smallest error. Note that although the rectangular window has no modulation error, its spectrum leakage error is large when the window size is not an integer number of periods, which is normally the case. Therefore, the rectangular window is not a suitable choice and should be avoided.

As analyzed in Section 4.1, the interpolation error results from a sub-pixel shift of the sampling, and its magnitude is related only to the number of samples per period S. To analyze this error, we shift the sampling of the waveform by a step d between zero and one pixel in the simulation and record the maximum error at each step. The simulation results are, of course, a superposition of the sub-pixel error and the pixel-level error, but this is acceptable because when S is not large enough, the former is much larger than the latter in amplitude.

The cosine function is known to have no such interpolation error, but other waveforms can be significantly affected by this kind of sub-pixel error. Figures 8(a) and (b) show a fitted Gaussian waveform and its interpolation error ${E_I}$ as a function of the number of samples per period S, respectively. The interpolation error decreases rapidly as S increases; when S is larger than 20, ${E_I}$ becomes negligibly small. This conclusion can guide the choice of the ratio of T to t according to accuracy requirements.
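The trend of Fig. 8(b) can be sketched with a small simulation (illustrative only: the Gaussian width, number of periods, and shift grid are assumed values, and the error is taken as the maximum deviation over one pixel of sub-pixel shift):

```python
import numpy as np

def max_interp_error(S, P=40, sigma_ratio=0.1, n_shift=40):
    """Max phase error over sub-pixel shifts for a Gaussian-profile pattern."""
    T = float(S)                      # pixel size t = 1, so S = T
    N = int(P*S)
    n = np.arange(N)
    win = 0.5 - 0.5*np.cos(2*np.pi*n/N)      # Hanning window
    w0 = 2*np.pi/T
    errs = []
    for d in np.linspace(0, 1, n_shift, endpoint=False):
        u = (n - d) % T - T/2                # periodic Gaussian pulse train
        x = np.exp(-(u/(sigma_ratio*T))**2)*win
        X = np.fft.fft(x)
        k0 = int(P)                          # fundamental bin (N/T = P exactly)
        phi = np.angle(X[k0]) + (N - 1)*np.pi*k0/N     # Eq. (20)
        phi_true = w0*((N - 1)/2 - d - T/2)
        errs.append(np.angle(np.exp(1j*(phi - phi_true))))
    errs = np.array(errs)
    return np.max(np.abs(errs - errs.mean()))

e_S8, e_S20 = max_interp_error(8), max_interp_error(20)
```

With a fixed pulse shape relative to the period, the error at S = 20 is orders of magnitude below that at S = 8, matching the guideline that S > 20 makes ${E_I}$ negligible for this waveform.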

Fig. 8. Relation of the interpolation error ${E_I}$ and the number of samples per period S. (a) The fitted Gaussian waveform used in the simulation; (b) Interpolation error versus S.

4.3 Error reduction method

According to the theoretical derivation and numerical simulation above, the parameters P and S, together with the type of window function used, determine the magnitude of the systematic phase errors. Considering the three errors separately: the leakage error is a systematic error induced by the DFT itself, which can only be reduced, not eliminated; the window modulation error can be minimized by choosing an appropriate window function; and the interpolation error can be eliminated by technical means that exploit its characteristics.

For the leakage error ${E_L}$, considering that integer-period sampling is almost impossible in practice, the estimation accuracy can be improved in the following ways: 1) design a cosine-waveform periodic pattern if possible, since for the same pattern period a single-frequency signal introduces the least leakage error; 2) use combined cosine windows with good performance, i.e., faster attenuation of the side lobes, such as the Hanning and Rife-Vincent windows; 3) in the general case, choose a camera with a resolution higher than N = PS and design the parameter values according to the error curves.

For the window modulation error ${E_w}$, increasing either P or S reduces this error. Among the window functions, one should choose a window with a relatively small modulation error, referring to Table 1. In addition, the triangular window has been found to be a good choice when the interpolation error is the major phase error source [26].

For the interpolation error ${E_I}$: 1) intrinsically, a cosine-waveform periodic pattern has no interpolation error; 2) according to the symmetry property, the error is a sine-like curve with a period of one pixel, so technical methods can be applied to eliminate it. For instance, one can move the image forward and backward by an amount of 0.25 pixels [27] and use the two images taken at different positions to counteract the interpolation errors. 3) In the general case, choose an appropriate S value for a given periodic pattern. For example, S = T/t should be set larger than 20 when using the 2D Gaussian-fitted pattern in this paper.
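The ±0.25-pixel averaging idea in item 2) can be sketched as follows. Because the interpolation error is approximately sinusoidal with a one-pixel period, two readings taken 0.5 pixel apart carry opposite first-order errors, and their average (after removing the known ±0.25 offsets) cancels that term. The Gaussian pattern profile and all numbers below are illustrative assumptions, not the prototype's parameters.

```python
import numpy as np

T, P = 8.0, 40
N = int(P*T)
n = np.arange(N)
win = 0.5 - 0.5*np.cos(2*np.pi*n/N)   # Hanning window
w0 = 2*np.pi/T

def estimate_shift(d):
    u = (n - d) % T - T/2
    x = np.exp(-(u/0.8)**2)*win                   # Gaussian-profile pattern
    X = np.fft.fft(x)
    k0 = int(P)                                   # fundamental bin
    phi = np.angle(X[k0]) + (N - 1)*np.pi*k0/N    # Eq. (20)
    # convert the estimated midpoint phase back to a shift within one period
    return (w0*((N - 1)/2 - T/2) - phi)/w0 % T

d_true = 0.37                                     # true sub-pixel shift
single = estimate_shift(d_true)
# two readings displaced by -0.25 and +0.25 pixel, known offsets removed
pair = ((estimate_shift(d_true - 0.25) + 0.25)
        + (estimate_shift(d_true + 0.25) - 0.25))/2
err_single = single - d_true
err_pair = pair - d_true
```

In this sketch the averaged estimate suppresses the sinusoidal interpolation error that remains in the single reading.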

To sum up, both the leakage error ${E_L}$ and the window modulation error ${E_w}$ can be reduced by increasing P or S, so using a high-resolution industrial camera, where conditions allow, is a feasible way to improve the measurement accuracy, while the interpolation error ${E_I}$ is reduced by designing an appropriate S (i.e., the ratio of pattern pitch to pixel size). Since the accuracy is not determined by a single factor, and the selection of camera configurations is limited, obtaining the best performance becomes a trade-off optimization problem. In practice, given the measurement range and accuracy requirements of the application, the following design process can be followed, as Fig. 9 demonstrates.

  • 1) The pattern size can be coarsely decided by the measurement range. This step involves pattern coding to handle large-stroke displacement measurement, which is beyond the scope of this article.
  • 2) Design the pattern period T according to the accuracy requirements. Due to the linear relation in Eq. (1), the larger the pattern pitch T, the larger the displacement error at the same phase error level.
  • 3) Fit the pattern waveform to the grayscale of real images, then choose an appropriate window function to process the captured images.
  • 4) Use the E-P and E-S simulation curves above to select the parameters P and S; the camera pixel size t and resolution N can then be calculated.
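As a rough illustration of steps 2)-4), the parameter bookkeeping of Eq. (23) can be sketched as follows; T, t, and N are assumed example values, not the paper's actual design parameters:

```python
# Illustrative bookkeeping for steps 2)-4); the numbers are assumptions.
T = 40e-6      # pattern period (m), set by the accuracy requirement
t = 1.5e-6     # candidate camera pixel size (m)
N = 1024       # candidate camera resolution along one axis (pixels)

S = T / t      # samples per pattern period, S = T/t   (Eq. (23))
P = N * t / T  # pattern periods in the field of view, P = Nt/T
assert abs(N - P * S) < 1e-6   # consistency check: N = P * S

# simulation-derived rule from above: keep S above ~20 so the
# interpolation error stays small for the Gaussian-fitted pattern
if S <= 20:
    print("warning: S <= 20, interpolation error may dominate")
```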

Fig. 9. Parameters design flowchart for error reduction.

5. Experiments

5.1 Experimental setup

To validate the effectiveness of the parameter design method for error reduction, an experimental setup was developed, as shown in Fig. 10. It consists of a B/W industrial camera, a high-precision motion stage, an LED backlight, and a 2D periodic pattern. The motion stage is a 6-axis nano-positioning stage from PI, with a linear travel range of 200 µm at a resolution of 1 nm and an angular travel range of 0.5 mrad at a resolution of 0.1 µrad. The pattern is a piece of chrome-plated glass engraved with periodic grids; it is fixed on the LED backlight, which in turn is mounted on the stage. A 1× low-distortion macro lens is used to produce high-quality images. Camera calibration showed that the image distortion introduced by this lens is negligible.

Fig. 10. Experimental setup for position measurement.

5.2 Experiments and results

Two experimental setups with different parameters (see Table 2) are used as examples to verify the proposed method for error reduction. The main differences between the two setups are the resolution and pixel size of the cameras used. Setup #1 uses a low-resolution camera with a larger pixel size, while Setup #2 uses a high-resolution camera with a smaller pixel size. Since the period of the pattern used is the same, these differences result in a much higher S and P for Setup #2 than for Setup #1 (S = 17.1, P = [112.3, 70.2] for Setup #2 versus S = 10.1, P = [63.4, 47.5] for Setup #1). According to the simulation results presented in Section 4.2, higher S and P lead to smaller phase errors. In addition, Setup #2 uses a Rife-Vincent window and a 12-bit-depth camera versus a Hanning window and an 8-bit-depth camera for Setup #1 to further reduce phase errors.
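As a quick consistency check of the quoted numbers, P along each image axis follows from P = N/S; the camera resolutions below (640×480 and 1920×1200) are inferred from the quoted P and S values, not stated explicitly in the text:

```python
# back-of-envelope check: P = N / S along each image axis
# (the resolutions 640x480 and 1920x1200 are inferred assumptions)
setups = {"#1": (10.1, (640, 480)), "#2": (17.1, (1920, 1200))}
for name, (S, res) in setups.items():
    P = tuple(round(n / S, 1) for n in res)
    print(name, "P =", P)   # -> #1 P = (63.4, 47.5), #2 P = (112.3, 70.2)
```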

Table 2. Parameters of two experimental setups.

In-plane 3-DOF step-tests were conducted to compare the results from the two experimental setups. The motion stage is driven to move along the X or Y axis with steps of 100 nm, or to rotate around the Z-axis with steps of 5 µrad. At each step, 20 images are captured, from which the displacements or rotation angles and their errors are calculated. The results are shown in Fig. 11. The left vertical axis and the blue solid line of each plot represent the displacement or rotation angle, while the right vertical axis and the red dots represent the measurement errors.
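The displacement recovery these step-tests rely on, d = (φ/2π)T from Eq. (1), can be sketched in one dimension; the pattern period, line length, and shift below are illustrative assumptions, not the setup's actual parameters:

```python
import numpy as np

T_pix = 16      # assumed pattern period in pixels
N = 512         # line length: an integer number of periods (no leakage)
n = np.arange(N)
shift = 0.3     # true sub-pixel displacement in pixels

ref = np.cos(2 * np.pi * n / T_pix)              # reference image row
img = np.cos(2 * np.pi * (n - shift) / T_pix)    # row after the stage moves

k0 = N // T_pix                                  # fundamental-frequency bin
dphi = np.angle(np.fft.fft(img)[k0]) - np.angle(np.fft.fft(ref)[k0])
d_est = -dphi * T_pix / (2 * np.pi)              # Eq. (1): d = (phi/2pi) T
print(d_est)                                     # -> 0.3, up to fp rounding
```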

Fig. 11. Step-test results. (a) Setup #1 translation step-test. (b) Setup #1 rotation step-test. (c) Setup #2 translation step-test. (d) Setup #2 rotation step-test.

As can be seen from the experimental results, the displacement and rotation errors from Setup #2 are smaller than those from Setup #1, which is in line with our expectation from the numerical simulations that larger S and P yield smaller errors and thus better performance. It should be noted that many other factors affect the accuracy in real experiments, such as environmental vibration, alignment error, and temperature drift. Therefore, the errors in the real experiments do not exactly match those obtained by numerical simulation.

6. Conclusion

In this paper, we analyzed and mathematically derived the systematic errors of a novel phase estimation method for high-precision vision-based position measurement. There are two systematic error sources: spectrum leakage error (including window modulation) and interpolation error. By means of numerical simulation, the properties and influencing factors of each error source were investigated. Three key design factors, P, S, and the window function, were identified for error reduction, which instructs users in choosing the parameters of a vision measurement system. To demonstrate the improved precision of the proposed method, a prototype consisting of an industrial camera and grating patterns was developed. Experimental results show that the measurement errors of the prototype are within ∼2 nm in the X or Y axis and ∼1 µrad in the ${\theta _z}$ axis, reaching a sub-pixel accuracy of ${10^{ - 3}}$ pixel. Although the proposed method relies on artificial prior knowledge, which limits some applications in natural environments, this does not prevent its wide application in fields such as precision measurement, robot manipulation, and navigation.

Funding

National Natural Science Foundation of China (51375310).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. Sandoz, J. C. Ravassard, S. Dembélé, and A. Janex, “Phase-sensitive vision technique for high accuracy position measurement of moving targets,” IEEE Trans. Instrum. Meas. 49(4), 867–872 (2000). [CrossRef]  

2. H. Li, B. Zhu, Z. Chen, and X. Zhang, “Realtime in-plane displacements tracking of the precision positioning stage based on computer micro-vision,” Mech. Syst. Signal Process. 124, 111–123 (2019). [CrossRef]  

3. P. S. Huang, T. Kai, and X. Qiang, “An active encoder for precision position measurement,” Am. Soc. Precis. Eng. 56, 421–426 (2013).

4. H. Yu, Q. Wan, X. Lu, C. Zhao, and Y. Du, “A robust sub-pixel subdivision algorithm for image-type angular displacement measurement,” Opt. Lasers Eng. 100, 234–238 (2018). [CrossRef]  

5. N. A. Arias H., P. Sandoz, J. E. Meneses, M. A. Suarez, and T. Gharbi, “3D localization of a labeled target by means of a stereo vision configuration with subvoxel resolution,” Opt. Express 18(23), 24152 (2010). [CrossRef]  

6. H.-L. Hsieh and S.-W. Pan, “Development of a grating-based interferometer for six-degree-of-freedom displacement and angle measurements,” Opt. Express 23(3), 2451 (2015). [CrossRef]  

7. C. Cui, Q. Feng, B. Zhang, and Y. Zhao, “System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser,” Opt. Express 24(6), 6735 (2016). [CrossRef]  

8. W. Gao, S. Dejima, Y. Shimizu, and S. Kiyono, “Precision measurement of two-axis positions and tilt motions using a surface encoder,” CIRP Ann. 52(1), 435–438 (2003). [CrossRef]  

9. H. Foroosh, J. B. Zerubia, and M. Berthod, “Extension of phase correlation to subpixel registration,” IEEE Trans. Image Process. 11(3), 188–200 (2002). [CrossRef]  

10. P. Cheng and C. H. Menq, “Real-time continuous image registration enabling ultraprecise 2-D motion tracking,” IEEE Trans. Image Process. 22(5), 2081–2090 (2013). [CrossRef]  

11. P. Sandoz, V. Bonnans, and T. Gharbi, “High-accuracy position and orientation measurement of extended two-dimensional surfaces by a phase-sensitive vision method,” Appl. Opt. 41(26), 5503–5511 (2002). [CrossRef]  

12. P. Sandoz and J. A. G. Zea, “Space-frequency analysis of pseudo-periodic patterns for subpixel position control,” ISOT 2009 - Int. Symp. Optomechatronic Technol.16–21 (2009).

13. P. Sandoz, J.-M. Friedt, and É. Carry, “In-plane displacement measurement with sub-pixel resolution: application to vibration characterization of a shear-force scanning probe,” Opt. Meas. Syst. Ind. Insp. V 6616, 66162W (2007). [CrossRef]  

14. C. Yamahata, E. Sarajlic, G. J. M. Krijnen, and M. A. M. Gijs, “Subnanometer Translation of Microelectromechanical Systems Measured by Discrete Fourier Analysis of CCD Images,” J. Microelectromech. Syst. 19(5), 1273–1275 (2010). [CrossRef]  

15. C. Zhao, C. Cheung, and M. Liu, “Integrated polar microstructure and template-matching method for optical position measurement,” Opt. Express 26(4), 4330 (2018). [CrossRef]  

16. C. Zhao, C. Fai, and P. Xu, “High-efficiency sub-microscale uncertainty measurement method using pattern recognition,” ISA Trans. 101, 503–514 (2020). [CrossRef]  

17. Z. H. Chen and P. S. Huang, “A vision-based method for planar position measurement,” Meas. Sci. Technol. 27(12), 125018 (2016). [CrossRef]  

18. D. Agrež, “Improving phase estimation with leakage minimization,” IEEE Trans. Instrum. Meas. 1(4), 162–167 (2004). [CrossRef]  

19. C. Offelli and D. Petri, “The Influence of Windowing on the Accuracy of Multifrequency Signal Parameter Estimation,” IEEE Trans. Instrum. Meas. 41(2), 256–261 (1992). [CrossRef]  

20. D. Agrež, “Weighted multipoint interpolated DFT to improve amplitude estimation of multifrequency signal,” IEEE Trans. Instrum. Meas. 51(2), 287–292 (2002). [CrossRef]  

21. B. F. Alexander and K. C. Ng, “Elimination of systematic error in subpixel accuracy centroid estimation,” Opt. Eng. 30(9), 1320–1331 (1991). [CrossRef]  

22. W. Chen, M. Li, and X. Su, “Error analysis about CCD sampling in Fourier transform profilometry,” Optik (Munich, Ger.) 120(13), 652–657 (2009). [CrossRef]  

23. P. L. Reu, “Experimental and Numerical Methods for Exact Subpixel Shifting,” Exp. Mech. 51(4), 443–452 (2011). [CrossRef]  

24. H. W. Schreier, J. R. Braasch, and M. A. Sutton, “Systematic errors in digital image correlation caused by intensity interpolation,” Opt. Eng. 39(11), 2915–2921 (2000). [CrossRef]  

25. J. Schoukens, R. Pintelon, and H. Van Hamme, “The Interpolated Fast Fourier Transform: A Comparative Study,” IEEE Trans. Instrum. Meas. 41(2), 226–232 (1992). [CrossRef]  

26. K. Qian, N. Agarwal, S. Ri, and Q. Wang, “Sampling moiré as a special windowed Fourier ridges algorithm in demodulation of carrier fringe patterns,” Opt. Eng. 57(10), 1 (2018). [CrossRef]  

27. E. Zappa and R. Liu, “A novel compensation method for systematic effect in displacement measurement based on vision systems,” Meas. Sci. Technol. 28(6), 064003 (2017). [CrossRef]  




Figures (11)

Fig. 1. Schematic diagram of the vision-based measurement system.
Fig. 2. Image processing error analysis of the phase estimation method. (a) Interpolation error. (b) Leakage error.
Fig. 3. Spectrum leakage error for a multi-frequency waveform.
Fig. 4. Sub-pixel interpolation error analysis. (a) A non-sinusoidal periodic waveform sampling by sub-pixel shift. (b) Its sinusoidal-like phase error curve produced by image sensor discrete sampling.
Fig. 5. Camera-Pattern geometric model.
Fig. 6. Window function modulation error analysis by design parameters P and S.
Fig. 7. Relation of leakage error and S/P. (a) Cosine waveform, P = 39.9; (b) Gaussian waveform, P = 39.9; (c) Cosine waveform, S = 20; (d) Gaussian waveform, S = 20.
Fig. 8. Relation of the interpolation error ${E_I}$ and the number of samples per period S. (a) The fitted Gaussian waveform used in the simulation; (b) Interpolation error versus S.
Fig. 9. Parameters design flowchart for error reduction.
Fig. 10. Experimental setup for position measurement.
Fig. 11. Step-test results. (a) Setup #1 translation step-test. (b) Setup #1 rotation step-test. (c) Setup #2 translation step-test. (d) Setup #2 rotation step-test.

Tables (2)

Table 1. Maximum Window Modulation Error of Different Windows
Table 2. Parameters of two experimental setups.

Equations (25)

(1) \(d = \dfrac{\varphi}{2\pi}\,T\)

(2) \(g_0[m,n] = A_0\cos(k_1 m + k_2 n + \theta_0),\ -\infty < m,n < \infty\)

(3) \(g[m,n] = g_0[m,n]\,w[m,n]\)

(4) \(G[u,v] = G\!\left(\dfrac{2\pi(u - u_0)}{M}, \dfrac{2\pi(v - v_0)}{N}\right),\ 0 \le u \le M-1,\ 0 \le v \le N-1\)

(5) \(\varphi_{est} = \theta_0 + \dfrac{M-1}{2}k_1 + \dfrac{N-1}{2}k_2 = \arg(G[u_1,v_1]) + \dfrac{\pi(u_1 - u_0)(M-1)}{M} + \dfrac{\pi(v_1 - v_0)(N-1)}{N}\)

(6) \(x_0[n] = A_0\cos(\omega_0 n + \theta_0),\ -\infty < n < +\infty\)

(7) \(x[n] = A_0\cos(\omega_0 n + \theta_0)\,w_r[n] = \dfrac{A_0}{2}w_r[n]e^{j\theta_0}e^{j\omega_0 n} + \dfrac{A_0}{2}w_r[n]e^{-j\theta_0}e^{-j\omega_0 n}\)

(8) \(X[k_0] = \dfrac{A_0}{2}e^{j\theta_0}W_R\!\left(\dfrac{2\pi k_0}{N} - \omega_0\right) + \dfrac{A_0}{2}e^{-j\theta_0}W_R\!\left(\dfrac{2\pi k_0}{N} + \omega_0\right) = \dfrac{A_0}{2}e^{-j\frac{(N-1)\pi k_0}{N}}\left\{a\,e^{j\left[\frac{(N-1)\omega_0}{2} + \theta_0\right]} + b\,e^{-j\left[\frac{(N-1)\omega_0}{2} + \theta_0\right]}\right\}\)

(9) \(a = \dfrac{\sin\!\left[\left(\frac{2\pi k_0}{N} - \omega_0\right)\frac{N}{2}\right]}{\sin\!\left[\left(\frac{2\pi k_0}{N} - \omega_0\right)\frac{1}{2}\right]},\quad b = \dfrac{\sin\!\left[\left(\frac{2\pi k_0}{N} + \omega_0\right)\frac{N}{2}\right]}{\sin\!\left[\left(\frac{2\pi k_0}{N} + \omega_0\right)\frac{1}{2}\right]}\)

(10) \(\arg(X[k_0]) = -\dfrac{(N-1)\pi k_0}{N} + \tan^{-1}\!\left[\dfrac{a-b}{a+b}\tan\!\left(\theta_0 + \dfrac{(N-1)\omega_0}{2}\right)\right]\)

(11) \(\arg(X[k_0]) = -\dfrac{(N-1)\pi k_0}{N} + \theta_0 + \dfrac{(N-1)\omega_0}{2}\)

(12) \(E_L(\theta) = \tan^{-1}\!\left[\dfrac{a-b}{a+b}\tan\!\left(\theta_0 + \dfrac{(N-1)\omega_0}{2}\right)\right] - \left(\theta_0 + \dfrac{(N-1)\omega_0}{2}\right)\)

(13) \(g[n] = \sum_{m=0}^{M} A_m\cos(2\pi f_m n/f_s + \theta_m)\)

(14) \(G_m(\omega) = \sum_{m=0}^{M}\dfrac{A_m}{2}\left[W(\omega - \omega_m)e^{j\theta_m} + W(\omega + \omega_m)e^{-j\theta_m}\right]\)

(15) \(w_h[n] = 0.5 - 0.5\cos\!\left(\dfrac{2\pi n}{N}\right) = \dfrac{1}{2}w_r[n] - \dfrac{1}{4}w_r[n]e^{j\frac{2\pi n}{N}} - \dfrac{1}{4}w_r[n]e^{-j\frac{2\pi n}{N}}\)

(16) \(X[k_0] = \dfrac{A_0}{2}e^{j\theta_0}W_H\!\left(\dfrac{2\pi k_0}{N} - \omega_0\right)\)

(17) \(W_H\!\left(\dfrac{2\pi k_0}{N} - \omega_0\right) = \dfrac{1}{2}W_R\!\left(\dfrac{2\pi k_0}{N} - \omega_0\right) - \dfrac{1}{4}\left[W_R\!\left(\dfrac{2\pi(k_0-1)}{N} - \omega_0\right) + W_R\!\left(\dfrac{2\pi(k_0+1)}{N} - \omega_0\right)\right]\)

(18) \(W_H\!\left(\dfrac{2\pi k_0}{N} - \omega_0\right) \approx \dfrac{1}{2}e^{-j\left[\frac{(N-1)}{N}\pi k_0 - \frac{(N-1)\omega_0}{2}\right]}\left(A + \dfrac{1}{2}B + \dfrac{1}{2}C\right)\)

(19) \(X[k_0] = \dfrac{A_0}{4}\left(A + \dfrac{1}{2}B + \dfrac{1}{2}C\right)e^{j\left[\theta_0 - \left(\frac{(N-1)\pi}{N}k_0 - \frac{\omega_0(N-1)}{2}\right)\right]}\)

(20) \(\theta_0 + \dfrac{\omega_0(N-1)}{2} = \arg(X[k_0]) + \dfrac{(N-1)\pi}{N}k_0\)

(21) \(X[k] = \sum_{n=0}^{N-1} x[n]e^{-j\frac{2\pi k}{N}n}\)

(22) \(Y[k] = X[k]e^{-j\frac{2\pi k}{N}d}\)

(23) \(P = \dfrac{Nt}{T} = \dfrac{\omega_0}{2\pi}N,\quad S = \dfrac{N}{P} = \dfrac{T}{t} = \dfrac{2\pi}{\omega_0}\)

(24) \(E_L = \tan^{-1}\!\left[\dfrac{a-b}{a+b}\tan\!\left(P\pi - \dfrac{\pi}{S} + \theta_0\right)\right] - \left(P\pi - \dfrac{\pi}{S} + \theta_0\right)\)

(25) \(y[n] = y_0[n]\,w_h[n] = \cos(2\pi f_0 n/f_s)\left[0.5 - 0.5\cos\!\left(\dfrac{2\pi n}{N}\right)\right] = \dfrac{1}{2}\left[\cos(2\pi f_n n) - \dfrac{1}{2}\cos\!\left(2\pi\dfrac{P+1}{P}f_n n\right) - \dfrac{1}{2}\cos\!\left(2\pi\dfrac{P-1}{P}f_n n\right)\right]\)