
Systematic pole-zero sorting method for neuro-TF modeling of electromagnetic response

Open Access

Abstract

The neuro-transfer function (neuro-TF) method has become one of the popular approaches for parametric modeling of electromagnetic (EM) filter responses. However, the zero and pole data extracted by vector fitting can change discontinuously with respect to (w.r.t.) the geometrical parameters, which disturbs the neuro-TF training process and limits the modeling accuracy. This paper addresses this issue by proposing a novel systematic pole-zero sorting method for neuro-TF parametric modeling. The proposed method yields pole-zero data that vary much more smoothly w.r.t. the geometrical parameters than those of the existing neuro-TF method, and in particular resolves the ordering confusion between positive and negative real parts that arises when their values are very small. As a result, the proposed systematic sorting method substantially improves the modeling accuracy during the establishment and training of the neuro-TF model compared with the existing neuro-TF method without systematic sorting.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Parametric modeling of electromagnetic (EM) responses plays a vital part in the EM design of optical and photonic components. Using an efficient surrogate model instead of full EM field simulations can greatly increase design productivity by reducing the number of simulations required in the optimization process. The artificial neural network (ANN), a potent tool for modeling and optimization, has been introduced into EM design optimization [1] and effectively improves design efficiency. Furthermore, recent research on surrogate-model-assisted evolutionary algorithms [2] and knowledge-based neural network (KBNN) modeling methods [3] has yielded significant advances in surrogate modeling for EM design.

Neuro-TF modeling methods, as popular KBNN approaches, have gained growing interest in recent years. The incorporation of pole-zero characteristics into neuro-transfer function modeling and optimization algorithms has been introduced in several recent studies [4-8]. These studies have highlighted the importance of extracting poles and zeros as key features and have proposed a range of optimization algorithms based on these feature parameters [9-12]. Vector fitting is a technique employed to determine the transfer function parameters from EM responses [13]. However, the poles and zeros extracted by vector fitting can become discontinuous functions of the geometrical parameters, which limits the neuro-TF modeling accuracy. How to improve the continuity of the poles and zeros w.r.t. the geometrical parameters remains an area of active exploration and development.

This paper proposes a novel systematic pole-zero sorting method for neuro-TF parametric modeling of EM filter responses, aiming to resolve the discontinuity of the zero and pole data extracted by vector fitting w.r.t. the geometrical parameters. The proposed method yields pole-zero data that vary much more smoothly w.r.t. the geometrical parameters than those of the existing neuro-TF method, and in particular resolves the ordering confusion between positive and negative real parts that arises when their values are very small. The proposed systematic sorting method can substantially improve the modeling accuracy during the establishment and training of the neuro-TF model compared with the existing neuro-TF method without systematic sorting.

2. Proposed systematic pole-zero sorting method for neuro-TF modeling

The neuro-TF in the pole-zero format is defined as

$$H(\boldsymbol{x},\boldsymbol{w},s) = G(\boldsymbol{x},\boldsymbol{w})\ \frac{{\prod\limits_{i = 1}^N (s - Z_i(\boldsymbol{x},\boldsymbol{w}))(s-Z^{*}_i(\boldsymbol{x},\boldsymbol{w}))}}{{ \prod\limits_{i = 1}^N (s - P_i(\boldsymbol{x},\boldsymbol{w}))(s - P^{*}_i(\boldsymbol{x},\boldsymbol{w}))}}$$
where $\boldsymbol{x}$ represents the vector of geometrical parameters and $N$ is the order of the transfer function; $G$ is the gain factor of the transfer function; $s$ is the frequency variable in the Laplace domain; and the vector $\boldsymbol{w}$ represents the weighting parameters of the neural networks. ${Z_i}$ and ${P_i}$ are the effective zeros and poles, i.e., those with positive imaginary parts, and ${Z^{*}_i}$ and ${P^{*}_i}$ are their complex conjugates.
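
As an illustration of how the pole-zero form in Eq. (1) is evaluated, the following Python sketch computes $H$ at a set of complex frequencies from a gain factor and the effective zeros and poles (those with positive imaginary parts), forming the conjugate pairs internally. The function name and interface are illustrative assumptions, not part of the original implementation.

```python
import numpy as np

def eval_pole_zero_tf(s, gain, zeros, poles):
    """Evaluate the pole-zero transfer function of Eq. (1).

    s     : array of complex Laplace-domain frequencies (e.g., 1j*2*np.pi*f)
    gain  : gain factor G
    zeros : effective zeros Z_i (positive imaginary parts only)
    poles : effective poles P_i (positive imaginary parts only)
    Conjugate pairs Z_i*, P_i* are formed internally.
    """
    s = np.atleast_1d(np.asarray(s, dtype=complex))
    H = np.full(s.shape, gain, dtype=complex)
    for z in np.asarray(zeros, dtype=complex):
        H *= (s - z) * (s - np.conj(z))
    for p in np.asarray(poles, dtype=complex):
        H /= (s - p) * (s - np.conj(p))
    return H
```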

Discontinuity of the zeros and poles in the fitted data usually results in an inaccurate neuro-TF model. Among the various causes, the disordering of the pole/zero data is one of the major reasons for this discontinuity. This paper addresses this issue and proposes a systematic pole-zero sorting method. The proposed method consists of four stages: grouping the poles and zeros by the magnitude of their real parts, sorting according to their imaginary parts, thresholding to distinguish very small real parts with mixed positive and negative values, and model development. In the first stage, we propose a systematic technique to extract the poles and zeros with the largest real-part magnitudes; these poles and zeros have the greatest impact on the fitting quality and lead to a smaller fitting error. In the second stage, a basic sorting algorithm is applied within each group. The third stage introduces a thresholding method that compares the normalized difference of adjacent data with a threshold, in order to adjust the positions of adjacent poles/zeros and thus resolve the confusion between positive and negative real parts caused by very small values. In the fourth stage, the neuro-TF model is developed with the pole and zero data obtained in the previous three stages. The following subsections describe the algorithm in detail.

2.1 Proposed pole/zero grouping according to the values of their real parts

The number of reflection zero frequencies of a given EM filter response is known and constant; it is denoted by $N_f$. To perform neuro-TF modeling, vector fitting is used to extract the poles and zeros from the EM response of the filter. To fit the EM response accurately, the number of extracted zeros is usually larger than $N_f$ [13]. The extra order required for accurate fitting is $N_e = N-N_f$. When a filter curve is fitted well by vector fitting, there are usually $N_f$ pole/zero pairs with relatively small real-part magnitudes and $N_e$ pole/zero pairs with relatively large real-part magnitudes. We therefore divide the poles/zeros into these two groups and sort each group separately to increase the continuity of the poles/zeros w.r.t. the geometrical parameters.

We first extract the $N_e$ pole/zero pairs with relatively large real-part magnitudes. Let $\boldsymbol {p}^{(k)}$ and $\boldsymbol {z}^{(k)}$ represent the sets of pole and zero data for the $k$th geometrical sample, respectively, and let $p^{(k)}_i$ and $z^{(k)}_i$ represent the $i$th pole and zero of the $k$th geometrical sample, respectively. The indices $M_i$ and $L_i$ select the poles and zeros of the $k$th geometrical sample in descending order of real-part magnitude, and are defined as follows:

$$\begin{aligned} {M_i} &= \left\{ {\begin{array}{l} \mathop {\arg\max }\limits_{ {m \in \{ 1,\ldots,N\} } } \left\{\ {{\left\|\text{Re}(p^{(k)}_m)\right\|}}\ \right\}, \qquad {i = 1}\\ \mathop {\arg \max }\limits_{m \in \{ 1,\ldots,N\} \atop m \notin \{ {M_1},\ldots,{M_{i - 1}}\} } \left\{\ {{\left\|\text{Re}(p^{(k)}_m)\right\|}}\ \right\}, \qquad {i \ge 2} \end{array}} \right.\\ {L_i} &= \left\{ {\begin{array}{l} \mathop {\arg\max }\limits_{ {l \in \{ 1,\ldots,N\} } } \left\{\ {{\left\|\text{Re}(z^{(k)}_l)\right\|}}\ \right\}, \qquad {i = 1}\\ \mathop {\arg \max }\limits_{l \in \{ 1,\ldots,N\} \atop l \notin \{ {L_1},\ldots,{L_{i - 1}}\} } \left\{\ {{\left\|\text{Re}(z^{(k)}_l)\right\|}}\ \right\}, \qquad {i \ge 2} \end{array}} \right. \end{aligned}$$

Let $\boldsymbol {p}^{(k)}_e$ and $\boldsymbol {z}^{(k)}_e$ respectively represent the sets of poles and zeros with relatively large real-part magnitudes. They are extracted as:

$$\begin{aligned} \boldsymbol{p}^{(k)}_e &= {\left[p^{(k)}_{M_1} \quad p^{(k)}_{M_2} \quad \cdots \quad p^{(k)}_{M_{N_e}}\right]^T}\\ \boldsymbol{z}^{(k)}_e &= {\left[z^{(k)}_{L_1} \quad z^{(k)}_{L_2} \quad \cdots \quad z^{(k)}_{L_{N_e}}\right]^T} \end{aligned}$$

After extracting the $N_e$ pole/zero pairs, the remaining $N_f$ pole/zero pairs, which have relatively small real-part magnitudes, are defined as $\boldsymbol {p}^{(k)}_f$ and $\boldsymbol {z}^{(k)}_f$, respectively, and are obtained as

$$\begin{aligned} \boldsymbol{p}^{(k)}_f &= \boldsymbol{p}^{(k)} \backslash \boldsymbol{p}^{(k)}_e\\ \boldsymbol{z}^{(k)}_f &= \boldsymbol{z}^{(k)} \backslash \boldsymbol{z}^{(k)}_e \end{aligned}$$
where the symbol "\" is the set operator representing the relative complement of $\boldsymbol{p}^{(k)}_e$ in $\boldsymbol{p}^{(k)}$ (or of $\boldsymbol{z}^{(k)}_e$ in $\boldsymbol{z}^{(k)}$). According to the magnitudes of their real parts, the poles/zeros are thus grouped into two subgroups. The ordering of the poles/zeros within each subgroup is performed in the next subsection.
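
A minimal sketch of this grouping stage, Eqs. (2)-(4), is given below, assuming the pole and zero data of one geometrical sample are stored as NumPy arrays; the function name and interface are illustrative assumptions.

```python
import numpy as np

def group_by_real_part(poles, zeros, N_e):
    """Split the N extracted poles/zeros of one geometrical sample into
    the N_e entries with the largest real-part magnitudes (p_e, z_e) and
    the remaining N_f entries (p_f, z_f), following Eqs. (2)-(4)."""
    poles = np.asarray(poles, dtype=complex)
    zeros = np.asarray(zeros, dtype=complex)
    # index sets M and L: descending order of |Re(.)|
    M = np.argsort(-np.abs(poles.real))
    L = np.argsort(-np.abs(zeros.real))
    p_e, p_f = poles[M[:N_e]], poles[M[N_e:]]
    z_e, z_f = zeros[L[:N_e]], zeros[L[N_e:]]
    return p_e, z_e, p_f, z_f
```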

2.2 Preliminary pole/zero sorting according to the values of their imaginary parts

After the poles and zeros are divided into two subgroups, we further sort the poles and zeros within each subgroup so that they change continuously w.r.t. the geometrical parameter values. In this stage, we sort $\boldsymbol {p}^{(k)}_e$, $\boldsymbol {z}^{(k)}_e$, $\boldsymbol {p}^{(k)}_f$, and $\boldsymbol {z}^{(k)}_f$ according to their imaginary parts. The imaginary parts are sorted in ascending order, and the real parts follow their corresponding imaginary parts. The index sets $\hat {M}_i$ and $\hat {L}_i$ order the poles and zeros, respectively, in ascending order of their imaginary-part magnitudes; both subgroups (subscript $e/f$) are treated in the same way, as follows:

$$\begin{aligned} {\hat{M}_i} &= \left\{ {\begin{array}{l} \mathop {\arg\min }\limits_{ {m \in \{ 1,\ldots,N_{e/f}\} } } \left\{\ {{\left\|\text{Im}(p^{(k)}_{{e/f},m})\right\|}}\ \right\}, \qquad {i = 1}\\ \mathop {\arg \min }\limits_{m \in \{ 1,\ldots,N_{e/f}\} \atop m \notin \{ {\hat{M}_1},\ldots,{\hat{M}_{i - 1}}\} } \left\{\ {{\left\|\text{Im}(p^{(k)}_{{e/f},m})\right\|}}\ \right\}, \qquad{i \ge 2} \end{array}} \right.\\ {\hat{L}_i} &= \left\{ {\begin{array}{l} \mathop {\arg\min }\limits_{ {l \in \{ 1,\ldots,N_{e/f}\} } } \left\{\ {{\left\|\text{Im}(z^{(k)}_{{e/f},l})\right\|}}\ \right\}, \qquad{i = 1}\\ \mathop {\arg \min }\limits_{l \in \{ 1,\ldots,N_{e/f}\} \atop l \notin \{ {\hat{L}_1},\ldots,{\hat{L}_{i - 1}}\} } \left\{\ {{\left\|\text{Im}(z^{(k)}_{{e/f},l})\right\|}}\ \right\}, \qquad{i \ge 2} \end{array}} \right. \end{aligned}$$

Let $\widehat {\boldsymbol {p}}^{(k)}_e$, $\widehat {\boldsymbol {z}}^{(k)}_e$, $\widehat {\boldsymbol {p}}^{(k)}_f$, and $\widehat {\boldsymbol {z}}^{(k)}_f$ denote the sorted subgroups of poles and zeros. They are formulated as:

$$\begin{aligned} \widehat{\boldsymbol{p}}^{(k)}_{e/f} &= {\left[p^{(k)}_{\hat{M}_1} \quad p^{(k)}_{\hat{M}_2} \quad \cdots \quad p^{(k)}_{\hat{M}_{N_{e/f}}}\right]^T}\\ \widehat{\boldsymbol{z}}^{(k)}_{e/f} &= {\left[z^{(k)}_{\hat{L}_1} \quad z^{(k)}_{\hat{L}_2} \quad \cdots\quad z^{(k)}_{\hat{L}_{N_{e/f}}}\right]^T} \end{aligned}$$
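
The sorting of Eqs. (5)-(6) amounts to reordering each subgroup by ascending imaginary-part magnitude, as in the following sketch (illustrative only); it is applied separately to $\boldsymbol{p}^{(k)}_e$, $\boldsymbol{z}^{(k)}_e$, $\boldsymbol{p}^{(k)}_f$, and $\boldsymbol{z}^{(k)}_f$.

```python
import numpy as np

def sort_by_imag_part(pz):
    """Sort one subgroup of poles or zeros (p_e, z_e, p_f, or z_f) in
    ascending order of imaginary-part magnitude, Eqs. (5)-(6)."""
    pz = np.asarray(pz, dtype=complex)
    return pz[np.argsort(np.abs(pz.imag))]

# example usage: p_f_sorted = sort_by_imag_part(p_f)
```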

2.3 Thresholding to distinguish very small real parts with mixed positive and negative values

At this stage, we use a threshold to judge whether the positive and negative real parts have been confused because their values are too small and, if so, adjust the affected poles/zeros to the correct order.

Let $\alpha ^{(k)}_{p,i}$ and $\alpha ^{(k)}_{z,i}$ be defined as the normalized differences between the imaginary parts of the adjacent $i$th and $(i+1)$th pole and zero of the $k$th geometrical sample, respectively, formulated as

$$\alpha^{(k)}_{p,i}= \left\|\frac{\text{Im}(\widehat{p}^{(k)}_{f,i} )- \text{Im}(\widehat{p}^{(k)}_{f,{i+1}})}{\text{Im}(\widehat{p}^{(k)}_{f,i} )}\right\|$$
$$\alpha^{(k)}_{z,i}= \left\|\frac{\text{Im}(\widehat{z}^{(k)}_{f,i} )- \text{Im}(\widehat{z}^{(k)}_{f,{i+1}})}{\text{Im}(\widehat{z}^{(k)}_{f,i} )}\right\|$$
where $i=\left \{1,2,\ldots,N_f-1\right \}$; $\widehat {p}^{(k)}_{f,i}$ and $\widehat {z}^{(k)}_{f,i}$ represent the $i$th pole and zero in $\widehat {\boldsymbol {p}}^{(k)}_f$ and $\widehat {\boldsymbol {z}}^{(k)}_f$, respectively. Let $\epsilon$ be the threshold for judging the confusion; $\epsilon$ is usually set to a very small value. If $\alpha ^{(k)}_{p,i} < \epsilon$ or $\alpha ^{(k)}_{z,i} < \epsilon$, $\widehat {p}^{(k)}_{f,i}$ and $\widehat {z}^{(k)}_{f,i}$ become candidates for relocation based on a comparison with the $(k-1)$th sample. The judgment condition for the poles is derived as
$$\left\|\frac{\text{Re}(\widehat{p}^{(k)}_{f,i} - \widehat{p}^{(k-1)}_{f,i+1} )}{\text{Re}(\widehat{p}^{(k)}_{f,i} - \widehat{p}^{(k-1)}_{f,i} )}\right\| < 1 \ \text{and}\ \left\|\frac{\text{Re}(\widehat{p}^{(k)}_{f,i+1} - \widehat{p}^{(k-1)}_{f,i} )}{\text{Re}(\widehat{p}^{(k)}_{f,i+1} - \widehat{p}^{(k-1)}_{f,i+1} )}\right\| < 1$$

If the poles satisfy the judgment condition (9), the corresponding poles of the $k$th sample are relocated as

$$\left\{ \begin{array}{l} p^{(k)}_{tmp} = \widehat{p}^{(k)}_{f,i+1} \\ \widehat{p}^{(k)}_{f,i+1} = \widehat{p}^{(k)}_{f,i} \\ \widehat{p}^{(k)}_{f,i} = p^{(k)}_{tmp} \\ \end{array} \right.$$
where $p^{(k)}_{tmp}$ is a temporary variable for swapping the positions of the two poles. The judgment condition and relocation formulation for the zeros are similar to those for the poles in (9) and (10), respectively.
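
The thresholding and relocation of Eqs. (7)-(10) can be sketched as follows for the pole subgroup; the zeros are treated identically. The threshold value and the function interface are illustrative assumptions (the paper only states that $\epsilon$ is set to a very small value).

```python
import numpy as np

def threshold_relocate(pz_k, pz_prev, eps=1e-3):
    """Resolve the positive/negative confusion of very small real parts.

    pz_k    : sorted small-real-part subgroup of the k-th sample (p_f or z_f)
    pz_prev : the already-processed subgroup of the (k-1)-th sample
    eps     : confusion-judgment threshold (placeholder value)
    """
    pz_k = np.asarray(pz_k, dtype=complex).copy()
    pz_prev = np.asarray(pz_prev, dtype=complex)
    for i in range(len(pz_k) - 1):
        # normalized difference of adjacent imaginary parts, Eqs. (7)-(8)
        alpha = abs((pz_k[i].imag - pz_k[i + 1].imag) / pz_k[i].imag)
        if alpha < eps:
            # judgment condition (9): each entry's real part is closer to
            # the opposite position of the previous sample than to its own
            cond1 = abs(pz_k[i].real - pz_prev[i + 1].real) \
                    < abs(pz_k[i].real - pz_prev[i].real)
            cond2 = abs(pz_k[i + 1].real - pz_prev[i].real) \
                    < abs(pz_k[i + 1].real - pz_prev[i + 1].real)
            if cond1 and cond2:
                # relocation (10): swap the two adjacent entries
                pz_k[i], pz_k[i + 1] = pz_k[i + 1], pz_k[i]
    return pz_k
```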

2.4 Model development

By utilizing the proposed systematic sorting method, the zeros and poles are obtained in a consistent order for building and training the neuro-TF model. With continuously ordered zeros and poles, this approach achieves better fitting than existing techniques, leading to higher accuracy and smaller training and testing errors. The model development process consists of preliminary training and model refinement of the neuro-TF model [6].

The training error function for the model refinement process is formulated as [14]:

$$E_{T_r}\left(\boldsymbol{w}\right)=\frac{1}{2}\sum_{k\in T_{r}}^{}\sum_{j=1}^{N_f}\left|H\left(\boldsymbol{x_k},\boldsymbol{w},s_j\right)-d_{jk}\right|^2$$
where $T_r$ represents the index set of the training samples; $\boldsymbol {x}_k$ is the vector of geometrical variables of the $k$th training sample; $N_f$ here denotes the number of frequency samples; and $d_{jk}$ is the EM training data at the $j$th frequency of the $k$th training sample.
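
For illustration, the training error of Eq. (11) can be computed from the model responses and the EM training data arranged as sample-by-frequency arrays; the array layout and function name are assumptions for this sketch.

```python
import numpy as np

def training_error(H_model, d_train):
    """Training error of Eq. (11): half the sum over training samples k
    and frequencies j of |H(x_k, w, s_j) - d_jk|^2.
    Both inputs have shape (num_training_samples, num_frequencies)."""
    H_model = np.asarray(H_model, dtype=complex)
    d_train = np.asarray(d_train, dtype=complex)
    return 0.5 * np.sum(np.abs(H_model - d_train) ** 2)
```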

After the model is developed, the neuro-TF becomes applicable to high-level design. In the next section, we provide detailed examples of the application of the proposed method and of the model establishment process, and validate the feasibility and advantages of the method through two examples.

3. Application examples

In the first example, the proposed method is demonstrated using a four-pole waveguide filter [15], as shown in Fig. 1(a). The geometric parameters are defined as $x = [h_1\ h_2\ h_3\ h_{c1}\ h_{c2}]^T$. The training and test data of the EM response for this example are obtained in the frequency range of 9.7-11.2 GHz. The geometrical samples for generating the data are around the geometrical center point $x=[3.84\ 4.8\ 4.32\ 3.626\ 3.332]^T$ (mm). The effective order of the transfer function for this example is set to 6. Figure 1(b) shows the comparison between the EM data and the model outputs of the different methods for the first example.


Fig. 1. (a) Structure of the four-pole waveguide filter. (b) Comparison of the EM data with the outputs of models trained by different methods for the four-pole waveguide filter. (c) Structure of the fifth-order waveguide filter. (d) Comparison of the EM data with the outputs of models trained by different methods for the fifth-order waveguide filter.


In the second example, we demonstrate the proposed method by applying it to a symmetrical fifth-order waveguide filter [16], with the EM structure shown in Fig. 1(c). The geometric parameters are defined as $x = [d_1\ d_2\ d_3\ t_1\ t_2\ t_3\ z_1\ z_2\ z_3]^T$. The training and test data of the EM response for this example are obtained in the frequency range of 10-12 GHz. The geometrical samples for generating the data are around the geometrical center point $x=[14.56\ 12.77\ 11.47\ 1.66\ 3.24\ 1.69\ 11.84\ 14.13\ 15.22]^T$ (mm). Figure 1(d) shows the comparison between the EM data and the model outputs of the different methods for the second example.

A detailed comparison of the modeling accuracy of different pole-zero extraction methods for neuro-TF modeling is shown in Table 1. It provides evidence that the proposed method outperforms the existing training methods, specifically the brute-force method and the order-change method [6], demonstrating a significant advantage. The brute-force method uses a fixed sorting order for all data samples, while the order-change method performs a single sorting pass based on the imaginary parts of the poles and zeros. From Table 1, we can see that the existing methods exhibit relatively large training and testing errors. The proposed systematic sorting method attains continuous training and testing data by obtaining a proper sequence of zeros and poles.


Table 1. Comparisons of Different Pole-Zero Extraction Methods for Neuro-TF Modeling

4. Conclusion

This paper has presented a new systematic method for sorting the poles and zeros in neuro-TF parametric modeling of EM filter responses. The proposed method has been demonstrated to obtain pole-zero data that vary much more smoothly w.r.t. the geometrical parameters than those of the existing neuro-TF method, and in particular resolves the ordering confusion between positive and negative real parts caused by very small values. The proposed systematic sorting method improves the modeling accuracy during the establishment and training of the neuro-TF model compared with the existing neuro-TF method without systematic sorting.

Funding

National Natural Science Foundation of China (62101382, 62201379, 62331018); Key Research & Development and Transformation Project of Qinghai Province (2022-QY-212).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Veluswami, M. S. Nakhla, and Q.-J. Zhang, “The application of neural networks to EM-based simulation and optimization of interconnects in high-speed VLSI circuits,” IEEE Trans. Microwave Theory Techn. 45(5), 712–723 (1997). [CrossRef]

2. B. Liu, H. Yang, and M. J. Lancaster, “Global optimization of microwave filters based on a surrogate model-assisted evolutionary algorithm,” IEEE Trans. Microwave Theory Techn. 65(6), 1976–1985 (2017). [CrossRef]  

3. F. Wang and Q.-J. Zhang, “Knowledge-based neural models for microwave design,” IEEE Trans. Microwave Theory Techn. 45(12), 2333–2343 (1997). [CrossRef]  

4. F. Feng, C. Zhang, S. Zhang, et al., “Parallel decomposition approach to gradient-based EM optimization,” IEEE Trans. Microwave Theory Techn. 64(11), 3380–3399 (2016). [CrossRef]

5. F. Feng, W. Na, W. Liu, et al., “Parallel gradient-based EM optimization for microwave components using adjoint-sensitivity-based neuro-transfer function surrogate,” IEEE Trans. Microwave Theory Techn. 68(9), 3606–3620 (2020). [CrossRef]

6. F. Feng, C. Zhang, J. Ma, et al., “Parametric modeling of EM behavior of microwave components using combined neural networks and pole-residue-based transfer functions,” IEEE Trans. Microwave Theory Techn. 64(1), 60–77 (2016). [CrossRef]

7. F. Feng, J. Zhang, W. Na, et al., “Parametric modeling of microwave components using combined neural network and transfer function,” in Surrogate Modeling for High-Frequency Design: Recent Advances (World Scientific, 2022), pp. 81–122.

8. J. Zhang, F. Feng, J. Jin, et al., “Adaptively weighted yield-driven EM optimization incorporating neuro-transfer function surrogate with applications to microwave filters,” IEEE Trans. Microwave Theory Techn. 69(1), 518–528 (2021). [CrossRef]

9. F. Feng, W. Na, W. Liu, et al., “Multifeature-assisted neuro-transfer function surrogate-based EM optimization exploiting trust-region algorithms for microwave filter design,” IEEE Trans. Microwave Theory Techn. 68(2), 531–542 (2020). [CrossRef]

10. F. Feng, C. Zhang, W. Na, et al., “Adaptive feature zero assisted surrogate-based EM optimization for microwave filter design,” IEEE Microw. Wireless Compon. Lett. 29(1), 2–4 (2019). [CrossRef]

11. J. Jin, F. Feng, W. Na, et al., “Advanced cognition-driven EM optimization incorporating transfer function-based feature surrogate for microwave filters,” IEEE Trans. Microwave Theory Techn. 69(1), 15–28 (2021). [CrossRef]

12. Y. Cao, G. Wang, and Q.-J. Zhang, “A new training approach for parametric modeling of microwave passive components using combined neural networks and transfer functions,” IEEE Trans. Microwave Theory Techn. 57(11), 2727–2742 (2009). [CrossRef]  

13. B. Gustavsen and A. Semlyen, “Rational approximation of frequency domain responses by vector fitting,” IEEE Trans. Power Delivery 14(3), 1052–1061 (1999). [CrossRef]  

14. Q.-J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, “Artificial neural networks for RF and microwave design - from theory to practice,” IEEE Trans. Microwave Theory Techn. 51(4), 1339–1350 (2003). [CrossRef]

15. C. Zhang, F. Feng, Q.-J. Zhang, et al., “Cognition-driven formulation of space mapping for equal-ripple optimization of microwave filters,” IEEE Trans. Microwave Theory Techn. 63(7), 2154–2165 (2015). [CrossRef]  

16. A. Bekasiewicz and S. Koziel, “Efficient multi-fidelity design optimization of microwave filters using adjoint sensitivity,” Int. J. RF Micro. C. E. 25(2), 178–183 (2015). [CrossRef]  
