
Excellent predictive-performances of photonic reservoir computers for chaotic time-series using the fusion-prediction approach

Open Access

Abstract

In this work, based on two parallel reservoir computers realized by the two polarization components of an optically pumped spin-VCSEL with double optical feedback, we propose a fusion-prediction scheme for the Mackey-Glass (MG) and Lorenz (LZ) chaotic time series, in which the direct-prediction and iterative-prediction results are fused by a weighted average. Compared with the direct-prediction errors, the fusion-prediction errors decrease greatly; they are far smaller than the direct-prediction errors when the iteration step-size is no more than 15. By optimizing the temporal interval and the sampling period, at an iteration step-size of 3, the fusion-prediction errors for the MG and LZ chaotic time series can be reduced to 0.00178 and 0.004627, i.e., 8.1% and 28.68% of the corresponding direct-prediction errors, respectively. Even when the iteration step-size reaches 15, the fusion-prediction errors for the MG and LZ chaotic time series are still reduced to 55.61% and 77.28% of the corresponding direct-prediction errors, respectively. In addition, the fusion-prediction errors are strongly robust against perturbations of the system parameters. Our results can potentially be applied to improve the prediction accuracy of complex nonlinear time series.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Time series are widely used in scientific research and engineering, for example precipitation, river runoff, and temperature in meteorology; the consumer price index and gross domestic product in economics; and short messages and traffic flows in mobile communication. Time series prediction has therefore long been a hot topic in both academic and engineering fields [1–10]. The time series obtained in practical applications usually contain different types of noise; for instance, the monitoring data recording the operating status of large, complex equipment are corrupted by interference from various external signals. Moreover, since time series from different systems often exhibit nonlinear, non-stationary, and fast-varying dynamics, their accurate prediction still poses a great challenge.

In recent years, artificial intelligence methods, such as feed-forward neural networks and support vector machines, have gradually been applied to the prediction of various time series. The recurrent neural network (RNN) [11–15], with its good nonlinear approximation ability, has gradually become the main tool for the prediction of nonlinear time series. However, some inherent characteristics of RNNs, such as low training efficiency and excessive computation, seriously limit their application in practice. Reservoir computing (RC) is a simplified form of RNN in which the input and internal connection weights are fixed and only the output weights are trained. Compared with the traditional RNN, RC simplifies the design and training of the network and resolves several key problems: the complexity of traditional RNN training algorithms, the heavy computational burden, the difficulty of determining the network structure, and the tendency to fall into local optima. The original concept of RC requires a real network of nodes. However, Appeltant et al. proposed a virtual network for RC using a single time-delayed feedback system [16]. Delay-based RC has opened a new field for artificial intelligence. In a delay-based RC, the transient temporal dynamics of the reservoir system are sampled and treated as virtual nodes. Many delay-based RC schemes using photonic devices have been proposed, based on optoelectronic systems [17,18], semiconductor optical amplifiers [19], and semiconductor lasers with optical feedback [20–22]. Delay-based RC using nonlinear semiconductor lasers offers high speed, high efficiency, and parallel computing capability for processing time-dependent signals [23–25]. Therefore, prediction methods for time series using delay-based photonic RC have received great attention [26,27] and have become one of the most active research directions in the field of time series prediction; delay-based photonic RC has gradually become one of the most promising tools for fast time-series prediction. Many improvements to the structure of delay-based photonic RC systems have been proposed to further raise the prediction accuracy for time series [28–30]. For example, in 2020, by adopting an auxiliary method in which the input message is the current input datum combined with some past input data in a weighted sum in the input layer (M-input), G. Q. Xia et al. studied the predictive performance of a delayed RC system based on a solitary semiconductor laser [28]; they found that the predictive error can be further decreased by using the M-input. In 2021, X. X. Guo et al. demonstrated enhanced time-series prediction performance of a neuromorphic RC system using a semiconductor nanolaser with double phase-conjugate feedback [29]. In 2022, T. Hülser et al. discussed ways to improve the computing performance of a reservoir formed by delay-coupled oscillators and showed the impact of delay-time tuning in such systems [30]. However, research on time-series prediction based on delayed photonic RC is still at an early stage; many theoretical and practical problems remain to be solved, both in delay-based RC itself and in the prediction methods built on it. Since the reservoir of a delay-based RC is generated randomly, this method alone cannot meet the requirement of high predictive accuracy for some complex nonlinear time series. Therefore, for different predictive tasks, it is necessary to further improve and generalize delay-based photonic RC systems, so as to improve their ability to model dynamical systems and enhance the prediction accuracy.

In this paper, we focus on an optically pumped spin vertical-cavity surface-emitting laser (spin-VCSEL), which can form two parallel reservoirs. The optically pumped spin-VCSEL offers flexible spin control of the lasing output, as well as additional control parameters. As a result, it has better controllability of polarization switching [31–35], which is conducive to the realization of two parallel RCs. Moreover, an optically pumped spin-VCSEL can yield ultrafast chaotic dynamics when subject to short-delay feedback, allowing very short spacing between adjacent virtual nodes even for a large number of nodes. These properties mean that two RCs based on the two polarization components (PCs) can process two high-speed time series in parallel. Based on two delayed RCs using an optically pumped spin-VCSEL, we propose a multi-step prediction method for time series that exploits the ability of delay-based RCs to handle multiple outputs. On this basis, the direct-prediction and iterative-prediction results are fused by a weighted average. Using this fusion-prediction approach, we further explore the predictive performance for two sets of time series (Mackey-Glass (MG) and Lorenz (LZ)) in different parameter spaces.

2. Theories and models

Figure 1 depicts a schematic diagram of the prediction of time series by two parallel photonic reservoir computers using the fusion-prediction method, based on the optically pumped spin-VCSEL with double optical feedback; (a) is the principle block diagram and (b) shows the detailed light paths. In Fig. 1(a), the MG and LZ chaotic time series, as the two input data streams, are defined as $C_x(t)$ and $C_y(t)$, respectively. They are sampled by the discrete modules 1 (DM$_1$) and DM$_2$, respectively. The sampled $C_x(t)$ and $C_y(t)$ are defined as $C_x(n)$ and $C_y(n)$ ($n=1, 2, 3, \ldots, N$), respectively, where $N$ is the number of samples. Many nonlinear systems, such as the MG and LZ systems, are high-dimensional and complex. However, often only one-dimensional information is available, such as a time series of a single variable, and the challenge is to reconstruct the original system from these limited single-variable data. By reconstructing the phase space, hidden evolutionary patterns in the data can be identified and a model of the original dynamical system can be derived from the existing data; ultimately, predictive analysis of the chaotic system can be achieved on the basis of phase-space reconstruction [36]. Therefore, to better model the MG and LZ systems, $C_x(n)$ and $C_y(n)$ are reconstructed into two matrices with $N$ rows and $m$ columns, i.e., $[ C_x(n), C_x(n-\tau _2), \ldots, C_x(n-(m-1)\tau _2)]$ and $[ C_y(n), C_y(n-\tau _2), \ldots, C_y(n-(m-1)\tau _2)]$, using the phase-space reconstruction modules (PSRM$_1$ and PSRM$_2$). Here, $m$ is the embedding dimension and $\tau _2$ is the delay time of the phase-space reconstruction. By choosing an appropriate embedding dimension $m$ and delay time $\tau _2$, phase-space reconstruction in nonlinear time-series analysis becomes economical: the dynamics of the original system can be reproduced in a low-dimensional reconstructed phase space, using experimental data that contain little redundant information. The embedding parameters also have a significant impact on the predictive accuracy for nonlinear time series. Here, we take $m=5$ and $\tau _2=20$. According to delay-based reservoir computing theory, when a one-dimensional input signal is fed into the nonlinear system, the system produces transient responses and the virtual nodes take on a rich variety of states; the virtual-node states provide a nonlinear mapping of the one-dimensional input into a high-dimensional space [37]. Based on the above considerations, these two matrices ($[ C_x(n), C_x(n-\tau _2), \ldots, C_x(n-(m-1)\tau _2)]$ and $[ C_y(n), C_y(n-\tau _2), \ldots, C_y(n-(m-1)\tau _2)]$) need to be converted into one-dimensional input-data series by the matrix vector converters (MVCs). These two one-dimensional input series are defined as $U_x(n)$ and $U_y(n)$, i.e., $U_x(n) =[ C_x(n), C_x(n-\tau _2), \ldots, C_x(n-(m-1)\tau _2)]$ and $U_y(n)= [ C_y(n), C_y(n-\tau _2), \ldots,C_y(n-(m-1)\tau _2)]$. These two input series are injected into input layers 1 and 2, and then into reservoir 1 (R$_1$) and R$_2$, respectively. In these reservoirs, the input-data series are predicted iteratively and directly (the detailed prediction processes are given in Eqs. (7)–(15)), where the one-step-ahead prediction results at time $n$ in each iteration are fed back to the input layers. The predictive results from the reservoirs are then fused by selecting an appropriate fusion strategy.
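To make the PSRM step concrete, the following minimal Python sketch (our illustration, not part of the original setup; the function name `delay_embed` and the placeholder input series are assumptions) builds the delay-embedded matrix with $m=5$ and $\tau_2=20$ as described above:

```python
import numpy as np

def delay_embed(series, m=5, tau2=20):
    """Phase-space reconstruction: row n is
    [x(n), x(n - tau2), ..., x(n - (m-1)*tau2)].
    Rows that would index before the start of the series are dropped."""
    series = np.asarray(series, dtype=float)
    start = (m - 1) * tau2                 # first index with a full history
    rows = [series[n - np.arange(m) * tau2] for n in range(start, len(series))]
    return np.vstack(rows)

# Toy usage: a placeholder scalar series standing in for the sampled C_x(n)
C_x = np.sin(0.05 * np.arange(2000))
U_x = delay_embed(C_x)                     # shape (1920, 5)
```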


Fig. 1. Schematic diagram of the prediction of time series by two parallel optical reservoirs using the fusion of direct and iterative prediction, based on the optically pumped spin-VCSEL with double optical feedback. Here, (a): principle block diagram; (b): detailed light paths and circuits; MG: Mackey-Glass chaotic time series; LZ: Lorenz chaotic time series; DM: discrete module; PSRM: phase-space reconstruction module; MVC: matrix vector converter; FCM: fusion computing module; RC: reservoir computing; KS: key switch; CW: continuous-wave laser; PM: phase modulator; IS: optical isolator; PC: polarization controller; FPC: fiber polarization coupler; NDF: neutral density filter; VCSEL: vertical-cavity surface-emitting laser; OC: optical circulator; VOA: variable optical attenuator; DL: delay line; FPBS: fiber polarization beam splitter; PD: photodiode; SDD: signal data distributor; OSC: oscilloscope; EA: electric amplifier; SC: scaling operation circuit; Mask: mask signal.


According to the principle block diagram in Fig. 1(a), Fig. 1(b) gives the detailed light paths and circuits. Here, CW$_1$ and CW$_2$ are both continuous-wave lasers. The optical isolators (ISs, subscripts 1–5) prevent unwanted optical feedback. The neutral density filter (NDF) controls the light intensity. The variable optical attenuators (VOAs, subscripts 1–2) adjust the light intensities. PM$_1$ and PM$_2$ are phase modulators. The polarization controllers (PCs, subscripts 1–4) control the polarization of the light. The fiber polarization beam splitters (FPBSs, subscripts 1–2) split the light into its two polarization components. OSC$_1$ and OSC$_2$ are both oscilloscopes. The photodiodes (PDs, subscripts 1–4) convert the light waves into current signals. The OC is an optical circulator. The FC is a fiber optic coupler. The VCSEL is a vertical-cavity surface-emitting laser that serves as the reservoir laser. KS$_1$ and KS$_2$ are both key switches. SDD$_1$ and SDD$_2$ are both signal data distributors. DL$_1$ and DL$_2$ are both delay lines. The EA is an electric amplifier. The SC is a scaling operation circuit. Mask$_1$ and Mask$_2$ are both mask signals.

The input layers provide the input connections to the reservoirs. In input layers 1 and 2, the sampled input data $U_x(n)$ and $U_y(n)$, each held for a period $T$, are amplified by EA$_1$ and EA$_2$ and then multiplied by Masks 1 and 2, which have periodicity $T$. Here, the masks are chaotic signals emitted by two mutually coupled semiconductor lasers, as presented in [23,38]. After being scaled by a factor $\gamma$ in SC$_1$ and SC$_2$, the two masked signals are denoted $\textbf {S}_x(n)$ and $\textbf {S}_y(n)$, respectively. They are used to modulate the phases of the optical fields output by CW$_1$ and CW$_2$ via PM$_1$ and PM$_2$, respectively; the two CW lasers thus convert the masked input signals into optical injection signals.
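As a rough illustration of the masking operation just described (formalized later in Eq. (6)), the sketch below (hypothetical helper `apply_mask`; a random mask stands in for the chaotic mask of [23,38]) holds each input sample for one period $T$ and multiplies it point-wise by an $N$-valued mask normalized to zero mean and unit standard deviation:

```python
import numpy as np

def apply_mask(U, mask, gamma=1.0):
    """Hold each input sample U(n) for one mask period and multiply it
    point-wise by the N-valued mask (one value per virtual node), after
    normalizing the mask to zero mean and unit standard deviation."""
    mask = (mask - mask.mean()) / mask.std()
    return gamma * (U[:, None] * mask[None, :]).ravel()

# Toy usage: a random mask stands in for the chaotic mask of [23,38]
rng = np.random.default_rng(0)
mask1 = rng.standard_normal(1000)          # N = 1000 virtual nodes
S_x = apply_mask(np.array([0.3, -0.1, 0.7]), mask1)   # length 3 * 1000
```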

Within the reservoir, the x-PC and y-PC of the VCSEL, with double optical feedback provided by Loop 1 and Loop 2, exhibit chaotic dynamics. They are utilized as nonlinear nodes to implement two parallel reservoirs. The x-PC and y-PC are fed back into the VCSEL through Loops 1 and 2, and the feedback time along either delay line (DL$_1$ and DL$_2$) is set to $\tau$. In the output layers, the x-PC and y-PC are separated by the FPBS. After being sampled at time interval $\theta$, their intensities are taken as the virtual-node states. The number $N$ of virtual nodes satisfies $N=\tau /\theta$ with $\tau =T$. For the two independent prediction targets $\textbf {U}_x(n)$ and $\textbf {U}_y(n)$, the states of the $N$ virtual nodes along the two delay lines are first converted into current signals by PD$_1$ and PD$_2$, respectively. In the direct-prediction process, under multi-step prediction with step-size $h$, the two parallel reservoirs each output $h$ data series. After being weighted and linearly summed, their outputs are denoted $[U^0_x(n+1), U^0_x(n+2), \ldots, U^0_x(n+h)]$ and $[U^0_y(n+1), U^0_y(n+2), \ldots, U^0_y(n+h)]$, respectively. For convenience, we write $\widehat {U}^0_x(n)$= [$U^0_x(n+1), U^0_x(n+2), \ldots, U^0_x(n+h)]$ and $\widehat {U}^0_y(n)$= [$U^0_y(n+1), U^0_y(n+2), \ldots, U^0_y(n+h)]$. In the first-iteration prediction process, the single-step prediction results $U^0_x(n+1)$ and $U^0_y(n+1)$ are fed back to the input layers by SDD$_1$ and SDD$_2$, respectively, and taken as the first-iteration prediction targets. Under multi-step prediction with step-size $h-1$, the $h-1$ outputs of the two parallel reservoirs, after being weighted and linearly summed, are obtained as $[U^1_x(n+2), U^1_x(n+3), \ldots, U^1_x(n+h)]$ and $[U^1_y(n+2), U^1_y(n+3), \ldots, U^1_y(n+h)]$, respectively. We write $\widehat {U}^1_x(n)$= [$U^1_x(n+2), U^1_x(n+3), \ldots, U^1_x(n+h)]$ and $\widehat {U}^1_y(n)$= [$U^1_y(n+2), U^1_y(n+3), \ldots, U^1_y(n+h)]$. In the second-iteration prediction process, the single-step prediction results $U^1_x(n+2)$ and $U^1_y(n+2)$ are again fed back to the input layers by SDD$_1$ and SDD$_2$, respectively, and taken as the second-iteration prediction targets. Under multi-step prediction with step-size $h-2$, after weighting and linear summation, the $h-2$ outputs of the two parallel reservoirs are obtained as $[U^2_x(n+3), U^2_x(n+4), \ldots, U^2_x(n+h)]$ and $[U^2_y(n+3), U^2_y(n+4), \ldots, U^2_y(n+h)]$, denoted $\widehat {U}^2_x(n)$ and $\widehat {U}^2_y(n)$, respectively. Proceeding in the same way, after $h-1$ iteration predictions the outputs of the two parallel reservoirs reduce to $U^{h-1}_x(n+h)$ and $U^{h-1}_y(n+h)$, respectively, where $\widehat {U}^{h-1}_x(n)$=$U^{h-1}_x(n+h)$ and $\widehat {U}^{h-1}_y(n)$=$U^{h-1}_y(n+h)$. After the $h-1$ iteration predictions, we thus obtain $h$ predicted values for each target ($\textbf {U}_x(n+h)$ or $\textbf {U}_y(n+h$)): $[U^0_x(n+h), U^1_x(n+h), \ldots, U^{h-1}_x(n+h)]$ and $[U^0_y(n+h), U^1_y(n+h), \ldots, U^{h-1}_y(n+h)]$. Using the fusion computing module 1 (FCM$_1$) and FCM$_2$, these two sets of predicted values are fused by selecting an appropriate fusion strategy (for detailed descriptions see Eqs. (16)–(23)).

Based on the modified spin-dependent model developed by San Miguel et al. [39], the nonlinear dynamics of the VCSEL subject to both optical feedback and optical injection can be described as

$$\begin{aligned} \frac{d{E_{_{x}}}(t)}{dt}= & k (1+i\alpha) \big\{[M(t)-1]{E_{_{x}}}(t)+i{n}(t){E_{_{y}}}(t)\big\}-i(\gamma_{_{p}}+\Delta \omega){E_{_{x}}}(t)\\ & -\gamma_{_{a}}{E_{_{x}}}(t)+\xi_{_x}\left\{{{\beta_{_{sp}} \gamma \big[n(t)-M(t)\big]}}\right\}^{1/2}+k_{_{f}}E_{_{x}}(t-\tau){\rm exp}({-}i\omega\tau) \\ & +k_{_x}E_{_{xinj}}, \end{aligned}$$
$$\begin{aligned} \frac{d{E_{_{y}}}(t)}{dt}= & k (1+i\alpha) \big\{[M(t)-1]{E_{_{y}}}(t)+in(t){E_{_{x}}}(t)\big\}+i(\gamma_{_{p}}-\Delta \omega){E_{_{y}}}(t)\\ & -\gamma_{_{a}}{E_{_{y}}}(t)+\xi_{_y}\left\{{{\beta_{_{sp}} \gamma \big[n(t)-M(t)\big]}}\right\}^{1/2}+k_{_{f}}E_{_{y}}(t-\tau){\rm exp}({-}i\omega\tau) \\ & +k_{_y}E_{_{yinj}}, \end{aligned}$$
$$\begin{aligned} \frac{dM(t)}{dt}= & \gamma \big\{\eta-[1+|E_{_{x}}(t)|^2+|{E_{_{y}}}(t)|^2]{M(t)}\big\} \\ & -i{\gamma n(t)\big[{E_{_{y}}}(t){E_{_{x}}^{*}}(t)- {E_{_{x}}}(t){E_{_{y}}^{*}}(t)\big]}, \end{aligned}$$
$$\begin{aligned} \frac{dn(t)}{dt}= & \gamma p\eta-{n(t)}\big\{\gamma_{_{s}}+\gamma[|E_{_{x}}(t)|^2+|E_{_{y}}(t)|^2]\big\} \\ & -i{\gamma M(t)\big[{{E_{_{y}}}(t)E_{_{x}}^{*}}(t)- {E_{_{x}}}(t){E_{_{y}}^{*}}(t) \big]}. \end{aligned}$$
where $E_x$ and $E_y$ are the complex slowly varying amplitudes of the x-PC and y-PC output by the VCSEL, respectively. The circularly polarized field components are coupled through the crystal birefringence, characterized by the rate $\gamma _p$, and the dichroism $\gamma _a$. The normalized carrier variables $M$ and $n$ appearing in Eqs. (1)–(4) are defined as $M$=(${\rm n}_+$+${\rm n}_-$)/2 and $n$=(${\rm n}_+$$-$${\rm n}_-$)/2, respectively, where ${\rm n}_+$ and ${\rm n}_-$ are the normalized densities of spin-up and spin-down electrons, respectively. $k$ is the field decay rate. $\alpha$ is the linewidth enhancement factor, $\gamma$ denotes the decay rate of the total carrier population, $\gamma _s$ is the spin-flip rate, $\eta$ is the normalized pump rate, and $p$ is the pump polarization ellipticity. $k_f$ is the feedback strength; $k_x$ and $k_y$ are the injection strengths of the x-PC and y-PC, respectively. $E_{xinj}$ and $E_{yinj}$ are the injected field amplitudes output from CW$_1$ and CW$_2$, respectively. $\Delta \omega$ is the center-frequency detuning between the VCSEL and either of the two CW lasers (CW$_1$ and CW$_2$). $\omega$ is the central frequency of the VCSEL. $\beta _{sp}$ is the spontaneous emission coefficient, which can also be regarded as the noise intensity. $\xi _x$ and $\xi _y$ are independent white Gaussian noise terms with zero mean and unit variance, which satisfy $\langle\xi _i(t)\xi _j^*(t^{'})\rangle$=2$\delta _{ij}\delta (t-t^{'})$.
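For readers who want to experiment with Eqs. (1)–(4), the following deterministic sketch integrates a simplified version of the model (spontaneous-emission noise and external injection omitted; all parameter values below are illustrative placeholders, not the Table 1 values) with a fixed-step RK4 scheme and a circular buffer for the delayed feedback field:

```python
import numpy as np

# Simplified, deterministic sketch of Eqs. (1)-(4); placeholder parameters.
k, alpha, gp, ga = 300.0, 3.0, 10.0, 0.1          # rates in ns^-1 (assumed)
gam, gs, eta, p_pump = 1.0, 100.0, 1.2, 0.5
kf, dw, phi = 10.0, 0.0, 0.0                       # phi = omega * tau (mod 2*pi)
dt, tau = 1e-3, 1.0                                # 1 ps step, 1 ns feedback delay
nd = int(round(tau / dt))                          # delay expressed in steps

def rhs(y, Exd, Eyd):
    Ex, Ey, M, n = y
    fEx = (k*(1+1j*alpha)*((M-1)*Ex + 1j*n*Ey) - 1j*(gp+dw)*Ex - ga*Ex
           + kf*Exd*np.exp(-1j*phi))
    fEy = (k*(1+1j*alpha)*((M-1)*Ey + 1j*n*Ex) + 1j*(gp-dw)*Ey - ga*Ey
           + kf*Eyd*np.exp(-1j*phi))
    P = abs(Ex)**2 + abs(Ey)**2
    cross = Ey*np.conj(Ex) - Ex*np.conj(Ey)        # purely imaginary
    fM = gam*(eta - (1+P)*M) - 1j*gam*n*cross
    fn = gam*p_pump*eta - n*(gs + gam*P) - 1j*gam*M*cross
    return np.array([fEx, fEy, fM, fn])

steps = 20000
y = np.array([1e-3+0j, 1e-3+0j, 0.5+0j, 0.0+0j])   # [Ex, Ey, M, n]
histx, histy = np.zeros(nd, complex), np.zeros(nd, complex)
Ix = np.empty(steps)                               # x-PC intensity trace
for i in range(steps):
    Exd, Eyd = histx[i % nd], histy[i % nd]        # E(t - tau) from the buffer
    k1 = rhs(y, Exd, Eyd)
    k2 = rhs(y + 0.5*dt*k1, Exd, Eyd)
    k3 = rhs(y + 0.5*dt*k2, Exd, Eyd)
    k4 = rhs(y + dt*k3, Exd, Eyd)
    y = y + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    histx[i % nd], histy[i % nd] = y[0], y[1]      # overwrite oldest entry
    Ix[i] = abs(y[0])**2                           # raw material for node states
```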

The injected complex electric field amplitudes $E_{xinj}$ and $E_{yinj}$ can be described as [40]:

$$\begin{aligned} {E_{j,inj}}=\sqrt{I_{d}}{\rm exp}{[i\pi S_j(t)]}, j=x,y({\rm The} \ {\rm same} \ {\rm below}), \end{aligned}$$
where $I_d$ is the light intensity output by the CW$_1$ or CW$_2$. $S_j(t)$ represents the masked input signal and is expressed as
$$\begin{aligned} S_{x}(t)={Mask_1(t)} {\times} [U_{x}(n)] {\times} {\gamma},\quad S_{y}(t)={Mask_2(t)} {\times} [U_{y}(n)] {\times} {\gamma}, \end{aligned}$$
where $\gamma$ is the scaling factor, and Mask$_1$ and Mask$_2$ are chaotic signals, as presented in [23,40]. In the direct-prediction process under multi-step prediction (step-size $h$), the time-dependent outputs $[U_x^0(n+1), U_x^0(n+2),\ldots,U_x^0(n+h)]$ and $[U_y^0(n+1), U_y^0(n+2),\ldots,U_y^0(n+h)]$ are regarded as linear functions of the intensities of the x-PC and y-PC, which are expressed as:
$$\begin{aligned} & U_{x}^{0}(n+K)={\rm W}_{x,1}^{0,K}b_{out}+{\rm W}_{x,2}^{0,K}U_{x}(n)+\sum_{i=1}^{N} {\rm W}_{x,i+2}^{0,K}I_{x,i}^0(n), K=1,2,\ldots,h({\rm The} \ {\rm same} \ {\rm below}), \end{aligned}$$
$$\begin{aligned} U_{y}^{0}(n+K)={\rm W}_{y,1}^{0,K}b_{out}+{\rm W}_{y,2}^{0,K}U_{y}(n)+\sum_{i=1}^{N} {\rm W}_{y,i+2}^{0,K}I_{y,i}^0(n), \end{aligned}$$
where ${\rm W}_{x,i}^{0,K}$ and ${\rm W}_{y,i}^{0,K}$ are the $i$th elements of the output weight matrices ${\textbf {W}}_{x}^{0,K}$ and ${\textbf {W}}_{y}^{0,K}$ for direct prediction with the $K$th step. $I_{x,i}^0(n)$ and $I_{y,i}^0(n)$ denote the $i$th light-intensity states of the output x-PC and y-PC for direct prediction, respectively. The term $b_{out}$ is a constant equal to 1. ${\textbf {W}}_{x}^{0,K}$ and ${\textbf {W}}_{y}^{0,K}$ can be analytically given by [38]
$$\begin{aligned} {\textbf{W}}_{x}^{0,K}={\textbf{Y}}_{0x}^K{\textbf{X}}_{0x}^{\rm Tr}/({\textbf{X}}_{0x}{\textbf{X}}_{0x}^{\rm Tr}+\delta\boldsymbol{\Pi}),\quad{\textbf{W}}_{y}^{0,K}={\textbf{Y}}_{0y}^K{\textbf{X}}_{0y}^{\rm Tr}/({\textbf{X}}_{0y}{\textbf{X}}_{0y}^{\rm Tr}+\delta\boldsymbol{\Pi}), \end{aligned}$$
where $\delta$ is the ridge regression parameter for avoiding overfitting, set to 10$^{-6}$. $\boldsymbol {\Pi }$ is an identity matrix. The superscript ${\rm Tr}$ denotes transpose. $\textbf {X}_{0x}$ is the matrix whose $m$th column is $[b_{out};U_{x}(m);I_{x,i}^0(m)]$. $\textbf {X}_{0y}$ is the matrix whose $m$th column is $[b_{out};U_{y}(m);I_{y,i}^0(m)]$. $\textbf {Y}_{0x}^K$ is the matrix whose $m$th column is $[U_{x}(m+K+1)]$. $\textbf {Y}_{0y}^K$ is the matrix whose $m$th column is $[U_{y}(m+K+1)]$.
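A compact numerical transcription of Eq. (9) is given below; the shapes and the toy data are assumptions made for illustration (1002 features corresponding to $b_{out}$, the input sample, and $N=1000$ node states):

```python
import numpy as np

def train_readout(X, Y, delta=1e-6):
    """Ridge-regression readout of Eq. (9):
    W = Y X^Tr (X X^Tr + delta * I)^(-1).
    X : (features, samples) state matrix; column m is [b_out; U(m); I_i(m)]
    Y : (targets,  samples) matrix of K-step-ahead target values"""
    G = X @ X.T + delta * np.eye(X.shape[0])
    return Y @ X.T @ np.linalg.inv(G)

# Toy usage: 1002 features = b_out + input sample + N = 1000 node states
rng = np.random.default_rng(1)
X = rng.standard_normal((1002, 6000))
Y = rng.standard_normal((1, 6000))
W = train_readout(X, Y)      # shape (1, 1002)
U0 = W @ X                   # direct readout U^0(n+K) over the training set
```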

For the first-iteration prediction process, under multi-step prediction with step-size $h-1$, the time-dependent outputs $[U_x^1(n+2),U_x^1(n+3),\ldots,U_x^1(n+h)]$ and $[U_y^1(n+2),U_y^1(n+3),\ldots,U_y^1(n+h)]$ are described as

$$\begin{aligned} & U_{x}^{1}(n+l)={\rm W}_{x,1}^{1,l}b_{out}+{\rm W}_{x,2}^{1,l}U_{x}^0(n+1)+\sum_{i=1}^{N} {\rm W}_{x,i+2}^{1,l}I_{x,i}^1(n), l=2,3,\ldots,h({\rm The} \ {\rm same} \ {\rm below}), \end{aligned}$$
$$\begin{aligned} U_{y}^{1}(n+l)={\rm W}_{y,1}^{1,l}b_{out}+{\rm W}_{y,2}^{1,l}U_{y}^0(n+1)+\sum_{i=1}^{N} {\rm W}_{y,i+2}^{1,l}I_{y,i}^1(n), \end{aligned}$$
where ${\rm W}_{x,i}^{1,l}$ and ${\rm W}_{y,i}^{1,l}$ are the $i$th elements of the output weight matrices ${\textbf {W}}_{x}^{1,l}$ and ${\textbf {W}}_{y}^{1,l}$ for the first iteration with the $l$th-step prediction. $I_{x,i}^1(n)$ and $I_{y,i}^1(n)$ denote the $i$th light-intensity states of the output x-PC and y-PC for the first iteration, respectively. ${\textbf {W}}_{x}^{1,l}$ and ${\textbf {W}}_{y}^{1,l}$ can be analytically expressed as
$$\begin{aligned} {\textbf{W}}_{x}^{1,l}={\textbf{Y}}_{1x}^l{\textbf{X}}_{1x}^{\rm Tr}/({\textbf{X}}_{1x}{\textbf{X}}_{1x}^{\rm Tr}+\delta\boldsymbol{\Pi}),\quad{\textbf{W}}_{y}^{1,l}={\textbf{Y}}_{1y}^l{\textbf{X}}_{1y}^{\rm Tr}/({\textbf{X}}_{1y}{\textbf{X}}_{1y}^{\rm Tr}+\delta\boldsymbol{\Pi}), \end{aligned}$$
where $\textbf {X}_{1x}$ is the matrix whose $m$th column is $[b_{out};U_{x}^0(m+1);I_{x,i}^1(m)]$. $\textbf {X}_{1y}$ is the matrix whose $m$th column is $[b_{out};U_{y}^0(m+1);I_{y,i}^1(m)]$. $\textbf {Y}_{1x}^l$ is the matrix whose $m$th column is $[U_{x}^0(m+l+1)]$. $\textbf {Y}_{1y}^l$ is the matrix whose $m$th column is $[U_{y}^0(m+l+1)]$. In the same way, after the $h-1$ iterations ($h\geq 2$), we obtain
$$\begin{aligned} U_{x}^{h-1}(n+h)={\rm W}_{x,1}^{h-1,1}b_{out}+{\rm W}_{x,2}^{h-1,1}U_{x}^{h-2}(n+h-1)+\sum_{i=1}^{N} {\rm W}_{x,i+2}^{h-1,1}I_{x,i}^{h-1}(n), \end{aligned}$$
$$\begin{aligned} U_{y}^{h-1}(n+h)={\rm W}_{y,1}^{h-1,1}b_{out}+{\rm W}_{y,2}^{h-1,1}U_{y}^{h-2}(n+h-1)+\sum_{i=1}^{N} {\rm W}_{y,i+2}^{h-1,1}I_{y,i}^{h-1}(n), \end{aligned}$$
where ${\rm W}_{x,i}^{h-1,1}$ and ${\rm W}_{y,i}^{h-1,1}$ are the $i$th elements of the output weight matrices ${\textbf {W}}_{x}^{h-1,1}$ and ${\textbf {W}}_{y}^{h-1,1}$ for the $(h-1)$th iteration with one-step prediction. $I_{x,i}^{h-1}(n)$ and $I_{y,i}^{h-1}(n)$ denote the $i$th light-intensity states of the output x-PC and y-PC for the $(h-1)$th iteration, respectively. ${\textbf {W}}_{x}^{h-1,1}$ and ${\textbf {W}}_{y}^{h-1,1}$ can be analytically written as
$$\begin{aligned} & {\textbf{W}}_{x}^{h-1,1}={\textbf{Y}}_{(h-1)x}{\textbf{X}}_{(h-1)x}^{\rm Tr}/({\textbf{X}}_{(h-1)x}{\textbf{X}}_{(h-1)x}^{\rm Tr}+\delta\boldsymbol{\Pi}),\\ & {\textbf{W}}_{y}^{h-1,1}={\textbf{Y}}_{(h-1)y}{\textbf{X}}_{(h-1)y}^{\rm Tr}/({\textbf{X}}_{(h-1)y}{\textbf{X}}_{(h-1)y}^{\rm Tr}+\delta\boldsymbol{\Pi}), \end{aligned}$$
where $\textbf {X}_{(h-1)x}$ is the matrix whose $m$th column is $[b_{out};U_{x}^{h-2}(m+h-1);I_{x,i}^{h-1}(m)]$. $\textbf {X}_{(h-1)y}$ is the matrix whose $m$th column is $[b_{out};U_{y}^{h-2}(m+h-1);I_{y,i}^{h-1}(m)]$. $\textbf {Y}_{(h-1)x}$ is the matrix whose $m$th column is $[U_{x}^{h-2}(m+h)]$. $\textbf {Y}_{(h-1)y}$ is the matrix whose $m$th column is $[U_{y}^{h-2}(m+h)]$.
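The chain of direct and iterated readouts in Eqs. (7)–(15) can be summarized in the following conceptual sketch; the reservoir response is abstracted into a hypothetical `run_reservoir` callable, and the weight containers `W_direct`/`W_iter` are assumed to have been trained as in Eqs. (9), (12), and (15):

```python
import numpy as np

def predict_all_steps(U, W_direct, W_iter, run_reservoir, h):
    """Collect the h estimates of U(n+h): one direct h-step prediction
    plus h-1 iterated predictions, mirroring Eqs. (7)-(15).
    W_direct[K-1] maps states to the direct K-step output U^0(n+K);
    W_iter[l][j]  maps states of iteration l to U^l(n+l+1+j)."""
    X0 = run_reservoir(U)                    # states driven by U(n)
    preds = [W_direct[h - 1] @ X0]           # U^0(n+h), direct h-step
    fed_back = W_direct[0] @ X0              # U^0(n+1), fed back as next input
    for l in range(1, h):
        Xl = run_reservoir(fed_back)         # states driven by fed-back data
        preds.append(W_iter[l][h - 1 - l] @ Xl)   # U^l(n+h)
        fed_back = W_iter[l][0] @ Xl         # single-step result for next loop
    return preds                             # [U^0(n+h), ..., U^{h-1}(n+h)]

# Toy usage with a dummy 10-feature reservoir and h = 3
rng = np.random.default_rng(2)
run_res = lambda u: rng.standard_normal((10, 1))
Wd = [rng.standard_normal((1, 10)) for _ in range(3)]
Wi = {l: [rng.standard_normal((1, 10)) for _ in range(3 - l)] for l in range(1, 3)}
preds = predict_all_steps(np.zeros(1), Wd, Wi, run_res, h=3)
```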

To achieve higher prediction accuracy, the two sets of predictive results obtained after the $h-1$ iteration predictions are fused, which is described as

$$\begin{aligned} U_{x}^{out}(n+h)=f(U_{x}^{0}(n+h),U_{x}^{1}(n+h),\ldots,U_{x}^{h-1}(n+h)), \end{aligned}$$
$$\begin{aligned} U_{y}^{out}(n+h)=f(U_{y}^{0}(n+h),U_{y}^{1}(n+h),\ldots,U_{y}^{h-1}(n+h)), \end{aligned}$$
where $f(\cdot )$ represents the selected fusion strategy, whose expression can be selected according to the specific problem. In this paper, a simple fusion strategy (the weighted mean) is applied to obtain the predicted results with higher accuracy, i.e.,
$$\begin{aligned} U_{x}^{out}(n+h)=\sum_{j=0}^{h-1}a_{xj} U_{x}^{j}(n+h), \end{aligned}$$
$$\begin{aligned} U_{y}^{out}(n+h)=\sum_{j=0}^{h-1}a_{yj} U_{y}^{j}(n+h), \end{aligned}$$
where $a_{xj}$ and $a_{yj}$ ($j$=0,1,2, $\cdots$, $h$-1) are weight coefficients. Eqs. (18)–(19) can be simplified in matrix form as follows.
$$\begin{aligned} {\textbf{U}}_{x}^{out}={\textbf{a}}_{x}{\textbf{U}}_{x}^{'},\quad{\textbf{U}}_{y}^{out}={\textbf{a}}_{y}{\textbf{U}}_{y}^{'}, \end{aligned}$$
where ${\textbf {U}}_{x}^{out}$=$[U_{x}^{out}(1+h),U_{x}^{out}(2+h),\ldots,U_{x}^{out}(s+h)]$ and ${\textbf {U}}_{y}^{out}$=$[U_{y}^{out}(1+h),U_{y}^{out}(2+h),\ldots,U_{y}^{out}(s+h)]$, ${\textbf {U}}_{x}^{'}$ and ${\textbf {U}}_{y}^{'}$ can be analytically expressed as
$$\begin{aligned} & {\textbf{U}}_{x}^{'}=[U_{x}^{0}(n+h),U_{x}^{1}(n+h),\ldots,U_{x}^{h-1}(n+h)], n=1, 2, 3,\ldots, s ({\rm The} \ {\rm same} \ {\rm below}), \end{aligned}$$
$$\begin{aligned} & {\textbf{U}}_{y}^{'}=[U_{y}^{0}(n+h),U_{y}^{1}(n+h),\ldots,U_{y}^{h-1}(n+h)]. \end{aligned}$$

Moreover, ${\textbf {a}}_{x}$=$[{a}_{x0},{a}_{x1},\ldots,{a}_{x(h-1)}]^{\rm Tr}$ and ${\textbf {a}}_{y}$=$[{a}_{y0},{a}_{y1},\ldots,{a}_{y(h-1)}]^{\rm Tr}$, which can be analytically expressed as

$$\begin{aligned} {\textbf{a}_x}={\textbf{U}}_{x}{\textbf{U}}_{x}^{'}/[{(\textbf{U}}_{x}^{'})^{\rm Tr}{\textbf{U}}_{x}^{'}+\delta\boldsymbol{\Pi}],\quad{\textbf{a}_y}={\textbf{U}}_{y}{\textbf{U}}_{y}^{'}/[{(\textbf{U}}_{y}^{'})^{\rm Tr}{\textbf{U}}_{y}^{'}+\delta\boldsymbol{\Pi}], \end{aligned}$$
where ${\textbf {U}}_{x}$ and ${\textbf {U}}_{y}$ are two sets of the original input time-series.
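Equations (20)–(23) amount to a small ridge-regression fit of the fusion weights; a possible numerical reading (the array shapes are our interpretation of the matrix forms above) is:

```python
import numpy as np

def fuse(U_prime, U_target, delta=1e-6):
    """Fit the fusion weights a of Eq. (23) by ridge regression and
    return the fused predictions of Eq. (20).
    U_prime  : (s, h) matrix; row n is [U^0(n+h), ..., U^{h-1}(n+h)]
    U_target : (s,) vector of true values U(n+h)"""
    G = U_prime.T @ U_prime + delta * np.eye(U_prime.shape[1])
    a = np.linalg.solve(G, U_prime.T @ U_target)   # weight vector a
    return U_prime @ a, a

# Toy usage: four prediction branches (h = 4) over s = 1000 samples
rng = np.random.default_rng(3)
Up = rng.standard_normal((1000, 4))
target = Up @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.01 * rng.standard_normal(1000)
fused, a = fuse(Up, target)
```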

3. Results and discussions

The MG time series is described by a delay differential equation, given in [41]. When the time delay $\tau$ is set to 17 and the initial value to 0.2, the MG time series exhibits chaos, as shown in Fig. 2(a). The differential equations of the LZ system are presented in [41]. The output along the z-axis of the LZ system exhibits chaos when the parameters are set to $a$ = 10, $b$ = 28, and $c$ = 8/3, as displayed in Fig. 2(b). To verify the validity of our method, the phase-space-reconstructed chaotic time series of the MG and LZ, as the two input data streams, are defined as $U_x(n)$ and $U_y(n)$, respectively.
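For reproducibility, simple integrations of the two benchmark systems can be written as follows (a sketch under the stated parameter values; the original work presumably follows the formulations of [41], and Euler stepping with the standard MG coefficients 0.2, 0.1, and exponent 10 is used here only for brevity):

```python
import numpy as np

def mackey_glass(n_samples, tau=17, dt=1.0, x0=0.2):
    """Mackey-Glass: dx/dt = 0.2 x(t-tau) / (1 + x(t-tau)^10) - 0.1 x(t),
    integrated with simple Euler steps (sufficient for illustration)."""
    nd = int(round(tau / dt))
    x = np.full(n_samples + nd, x0)
    for i in range(nd, n_samples + nd - 1):
        xd = x[i - nd]
        x[i + 1] = x[i] + dt * (0.2 * xd / (1.0 + xd**10) - 0.1 * x[i])
    return x[nd:]

def lorenz_z(n_samples, dt=0.01, a=10.0, b=28.0, c=8.0/3.0):
    """z-component of the Lorenz system with a=10, b=28, c=8/3 (Euler)."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n_samples)
    for i in range(n_samples):
        dx, dy, dz = a*(y - x), x*(b - z) - y, x*y - c*z
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        out[i] = z
    return out

C_x, C_y = mackey_glass(8000), lorenz_z(8000)   # raw series before sampling
```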


Fig. 2. Samples of the Mackey-Glass and Lorenz time series data. Here, (a): Mackey-Glass; (b): Lorenz.


In the following calculations, the parameter values of the VCSEL are presented in Table 1. The periods of the sampled data ($U_x(n)$ and $U_y(n)$) of the Mackey-Glass and Lorenz are set to $T$. Eqs. (1)–(4) are numerically solved by the fourth-order Runge-Kutta method with a step size of 1 ps. The double reservoirs formed by the chaotic x-PC and y-PC output from the VCSEL are utilized to predict the sampled data $U_x(n)$ and $U_y(n)$, respectively. In the direct-prediction processing, 8000 samples of the input data ($U_x(n)$ and $U_y(n)$) are recorded at a sampling interval of 10 ps. After discarding the first 1000 samples (to eliminate the transient state), we use 6000 points to train the reservoirs and the remaining 1000 points to test them. In the $l$th-iteration prediction processing ($l\geq 1$), for the two predictive targets $U_x^{l-1}(n+l)$ and $U_y^{l-1}(n+l)$, we use $L_p(l)$-$L_p(l)$/6 samples to further train the reservoirs and the remaining points to further test them. Here, $L_p(l)$=$L_p(l-1)$-$L_p(l-1)$/6, and $L_p(0)$=6000 is the length of the sampled data $U_x(n)$ or $U_y(n)$ in the direct-prediction processing. Moreover, since the performance of the prediction tasks can be improved by using chaotic mask signals with complex dynamical responses [23,38], the two mask signals Mask$_1$ and Mask$_2$ are taken as chaotic signals emitted by two mutually coupled semiconductor lasers, as presented in [23,38]. The amplitudes of these mask signals are adjusted so that their standard deviations are 1 and their mean values are 0. The interval between the virtual nodes of the reservoir is denoted by $\theta$ and set to 1 ps. The sampling period $T$ of the input data is set to 1 ns. The number $N$ of virtual nodes is 1000, with $N=\tau /\theta$ and $\tau =T$. The scaling factor $\gamma$ is set to 1.
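The recursive data-length bookkeeping $L_p(l)=L_p(l-1)-L_p(l-1)/6$ can be tabulated directly (integer division is our assumption for lengths not divisible by 6):

```python
# Tabulate L_p(l) = L_p(l-1) - L_p(l-1)/6 with L_p(0) = 6000.
L_p = [6000]
for l in range(1, 15):
    L_p.append(L_p[-1] - L_p[-1] // 6)
# L_p[:4] -> [6000, 5000, 4167, 3473]; at each round, L_p(l) - L_p(l)/6
# samples retrain the reservoirs and the rest are kept for testing.
```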


Table 1. Parameter values for the VCSEL used in the calculations

To quantify the prediction of the double reservoirs on the MG and LZ chaotic time series, for both the direct prediction and the fusion prediction, the normalized mean squared error ($NMSE$) between the MG chaotic time series and the output of the reservoir based on the x-PC is defined as

$$\begin{aligned} {\it{NMSE}}_{x}^{out}(h)=\frac{1}{L}\frac{\sum\limits_{n=1}^{L} [U_{x}^{out}(n+h)-U_{x}(n+h)]^2}{var[U_{x}^{out}(n+h)]}, \end{aligned}$$
$$\begin{aligned} {\it{NMSE}}_{x}^{0}(h)=\frac{1}{L}\frac{\sum\limits_{n=1}^{L} [U_{x}^{0}(n+h)-U_{x}(n+h)]^2}{var[U_{x}^{0}(n+h)]}, \end{aligned}$$
where the superscript $out$ refers to the fusion prediction and the superscript 0 to the direct prediction. $L$ is the total number of test data points, and $var$ denotes the variance. Under these two predictive schemes, the $NMSE$ between the LZ chaotic time series and the output of the reservoir based on the y-PC is expressed as
$$\begin{aligned} {\it{NMSE}}_{y}^{out}(h)=\frac{1}{L}\frac{\sum\limits_{n=1}^{L} [U_{y}^{out}(n+h)-U_{y}(n+h)]^2}{var[U_{y}^{out}(n+h)]}, \end{aligned}$$
$$\begin{aligned} {\it{NMSE}}_{y}^{0}(h)=\frac{1}{L}\frac{\sum\limits_{n=1}^{L} [U_{y}^{0}(n+h)-U_{y}(n+h)]^2}{var[U_{y}^{0}(n+h)]}. \end{aligned}$$

$NMSE_x^{out}(h)$ and $NMSE_y^{out}(h)$ are the fusion-prediction errors, while $NMSE_x^{0}(h)$ and $NMSE_y^{0}(h)$ are the direct-prediction errors. These quantities describe the deviation of the reservoir output from the target value. When $NMSE_x^{out}(h)$, $NMSE_x^{0}(h)$, $NMSE_y^{out}(h)$, $NMSE_y^{0}(h)$ = 0, the two reservoirs based on the x-PC and y-PC of the optically pumped spin-VCSEL completely model $U_x(n+h)$ and $U_y(n+h)$, respectively, under the direct prediction and fusion prediction. When these $NMSE$s all equal 1, the two reservoirs fail to predict $U_x(n+h)$ and $U_y(n+h)$. If they are less than 0.1, the two reservoirs predict $U_x(n+h)$ and $U_y(n+h)$ well.
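Eqs. (24)–(27) share one functional form, which can be coded once (note that, following the text, the normalization uses the variance of the reservoir output rather than of the target):

```python
import numpy as np

def nmse(pred, target):
    """NMSE of Eqs. (24)-(27): mean squared error normalized by the
    variance of the prediction (the paper's convention)."""
    pred, target = np.asarray(pred), np.asarray(target)
    return np.mean((pred - target) ** 2) / np.var(pred)

# Values near 0 indicate near-perfect modeling; below 0.1 counts as a
# good prediction by the criterion stated above.
```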

To intuitively observe the ability of the reservoir computing system to reproduce the chaotic dynamics of the MG and LZ under the fusion prediction, the predictive results are shown in Fig. 3, where $h$=4, $T$=1 ns, $\theta$=1 ps, and $N$=1000. One sees from Fig. 3 that the temporal traces of $U^{out}_x(n+h)$ and $U^{out}_y(n+h)$ are almost identical to those of $U_x(n+h)$ and $U_y(n+h)$, respectively. Under $h$=4, $T$=1 ns and $\theta$=1 ps, $NMSE_x^{out}(4)$ is 0.0046 and $NMSE_y^{out}(4)$ is 0.0060 (see Figs. 3 and 4). These results show that the two reservoirs based on the x-PC and y-PC can predict the chaotic dynamics of the MG and LZ very well, respectively.


Fig. 3. (a) Samples of the chaotic time series data ($U_x(n+h)$) (the blue solid line) and the output of the x-PC-based reservoir ($U^{out}_x(n+h)$) (the red dashed line) under the fusion prediction. (b) Samples of the chaotic time series data ($U_y(n+h)$) (the blue solid line) and the output of the y-PC-based reservoir ($U^{out}_y(n+h)$) (the red dashed line) under the fusion prediction. Here, $h$=4, $T$=1 ns, $\theta$=1 ps, and $N$=1000.


Figure 4 further presents the evolution of $NMSE_x^{out}$ and $NMSE_y^{out}$ in the parameter space of the period $T$ and the step-size $h$ under the fusion prediction, where $\theta$=1 ps. It is found from Fig. 4 that when $T$ is fixed, $NMSE_x^{out}$ and $NMSE_y^{out}$ increase from 0.002 to 0.014 as $h$ increases from 2 to 20. Moreover, with $h$ fixed, they increase slowly with further increases of $T$. To observe in detail the effects of $T$ on these training errors for different iteration step-sizes, Fig. 5 presents the dependences of the training errors on $T$ under the direct prediction and the fusion prediction. One sees from this figure that for the different iteration step-sizes, the direct-prediction errors $NMSE_x^{0}$ and $NMSE_y^{0}$ decrease almost linearly with increasing $T$, and the iteration step-size has little influence on them. For example, under $h$=3, 6, 9, $NMSE_x^{0}$ and $NMSE_y^{0}$ vary nearly linearly between 0.0175 and 0.022 and between 0.0131 and 0.0164, respectively. Compared with the direct-prediction errors, the fusion-prediction errors $NMSE_x^{out}$ and $NMSE_y^{out}$ decrease greatly and are less than 0.00965. For example, $NMSE_x^{out}$ and $NMSE_y^{out}$ for $h$=3 change from 0.00143 to 0.0034 and from 0.00451 to 0.00495, respectively. Under $h$=6, they range from 0.00434 to 0.00673 and from 0.00733 to 0.00802, respectively. With $h$ fixed at 9, they lie between 0.00743 and 0.00911, and between 0.00906 and 0.00965, respectively. Moreover, when $h$ is fixed, $NMSE_x^{out}$ increases slowly with increasing $T$, while $NMSE_y^{out}$ oscillates within a small range and is little influenced by $T$. With increasing $h$, the curves of $NMSE_x^{out}$ and $NMSE_y^{out}$ versus $T$ shift upward in parallel, indicating that a larger iteration step-size induces larger training errors. The main reason the $NMSE$ increases with the iteration number $h$ is as follows. According to the selected fusion strategy (see Eqs. (16)–(23)), the fusion-prediction value $U_x^{out}(n+h)$ in Eq. (18) depends on $U_x^{0}(n+h),U_x^{1}(n+h),\ldots,U_x^{h-1}(n+h)$, and $U_y^{out}(n+h)$ in Eq. (19) depends on $U_y^{0}(n+h),U_y^{1}(n+h),\ldots,U_y^{h-1}(n+h)$. According to Eqs. (10)–(11) and (13)–(14), in the first-iteration prediction process ($h$=2), $U_x^{out}(n+2)=f(U_x^{0}(n+2),U_x^{1}(n+2))$ and $U_y^{out}(n+2)=f(U_y^{0}(n+2),U_y^{1}(n+2))$. The single-step direct-prediction results $U_x^{0}(n+1)$ and $U_y^{0}(n+1)$, taken as the first-iteration prediction targets, are fed back to the input layers, and a certain error already exists between $U_x^{0}(n+1)$ and $U_x(n+1)$, and between $U_y^{0}(n+1)$ and $U_y(n+1)$. Under this condition, the first iteration-prediction values $U_x^{1}(n+2)$ and $U_y^{1}(n+2)$ carry larger errors with respect to the target signals $U_x(n+1)$ and $U_y(n+1)$, where $U_x(n+2)\approx U_x(n+1)$ and $U_y(n+2)\approx U_y(n+1)$. Similarly, as the iteration number $h$ increases beyond 2, the previous single-step iteration-prediction results $U_x^{h-2}(n+h-1)$ and $U_y^{h-2}(n+h-1)$, which have accumulated more error with respect to the target signals $U_x(n+1)$ and $U_y(n+1)$, are fed back to the input layers. As a result, under $U_x(n+h)\approx U_x(n+1)$ and $U_y(n+h)\approx U_y(n+1)$, $NMSE_x^{out}(h)$ and $NMSE_y^{out}(h)$ increase further as the iteration number $h$ increases.


Fig. 4. Maps of the evolutions of the $NMSE_x^{out}$ and $NMSE_y^{out}$ in the parameter space of $T$ and $h$ under the fusion prediction. Here, $\theta$=1 ps. (a): $NMSE_x^{out}$ via $T$ and $h$; (b): $NMSE_y^{out}$ via $T$ and $h$.



Fig. 5. Under the iteration step-sizes $h$=3, 6, 9, the dependences of the $NMSE$s on the period $T$ under the direct prediction and the fusion prediction. Here, $\theta$=1 ps. (a): $NMSE_x^{out}$, $NMSE_x^{0}$ via $T$; (b): $NMSE_y^{out}$, $NMSE_y^{0}$ via $T$.


Figure 6 displays the evolution of the fusion-prediction errors $NMSE_x^{out}$ and $NMSE_y^{out}$ in the parameter space of $h$ and $\theta$, where $T$=10 ns. As seen from this figure, with $\theta$ fixed, the two training errors show an increasing trend as the iteration step-size $h$ grows from 2 to 20, but remain below 0.014. When $h$ is fixed, $NMSE_x^{out}$ decreases slowly, while $NMSE_y^{out}$ oscillates within a small range as $\theta$ increases.


Fig. 6. Maps of the evolutions of the $NMSE_x^{out}$ and $NMSE_y^{out}$ in the parameter space of $h$ and $\theta$ under the fusion prediction. Here, $T$=10 ns. (a): $NMSE_x^{out}$ via $h$ and $\theta$; (b): $NMSE_y^{out}$ via $h$ and $\theta$.


To observe in detail the dependences of these training errors on $\theta$, for the fusion prediction and direct prediction, Fig. 7 presents the dependences of the training errors ($NMSE_x^{out}$, $NMSE_x^{0}$, $NMSE_y^{out}$ and $NMSE_y^{0}$) on $\theta$ under $h$=3, 6, 9. It is found from Fig. 7 that the direct-prediction errors $NMSE_x^{0}$ and $NMSE_y^{0}$ first gradually increase and then stabilize at 0.0218 and 0.0163, respectively, under $h$=3, 6, 9. Compared with the direct-prediction errors, the fusion-prediction errors $NMSE_x^{out}$ and $NMSE_y^{out}$ decrease significantly and are no more than 0.0097 when $\theta$ is between 1 ps and 10 ps. For example, $NMSE_x^{out}$ and $NMSE_y^{out}$ under $h$=3 vary from 0.00187 to 0.00351 and from 0.00451 to 0.00512, respectively. When $h$=6, they change from 0.00455 to 0.00688 and from 0.00734 to 0.00815, respectively. If $h$=9, they range from 0.00749 to 0.0093 and from 0.00875 to 0.0097, respectively. With $h$ fixed, $NMSE_x^{out}$ exhibits a slow, nearly linear decrease and $NMSE_y^{out}$ oscillates within a small range when $\theta$ is between 1 ps and 10 ps. With increasing $h$, the curves of $NMSE_x^{out}$ and $NMSE_y^{out}$ versus $\theta$ shift upward in parallel, showing that for fixed $\theta$, a larger $h$ results in larger training errors.


Fig. 7. Under the iteration step-sizes $h$=3, 6, 9, the dependences of the training errors on the virtual-node interval $\theta$ under the direct prediction and the fusion prediction. Here, $T$=10 ns. (a): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\theta$; (b): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\theta$.


Table 2 further presents the training errors for the MG and LZ chaotic time series under the direct prediction and fusion prediction, with the prediction step-size $h$ ranging from 3 to 15 (equivalently, the iteration number $h-1$), where $\theta$=1 ps and $T$=10 ns. As observed from this table, the direct-prediction errors for the MG and LZ chaotic time series change little as $h$ increases from 3 to 15; they vary from 0.021707 to 0.021847 and from 0.015918 to 0.016537, respectively. However, compared with the direct prediction, the fusion-prediction errors for the MG and LZ chaotic time series are greatly reduced. For example, for the MG chaotic time series, the fusion-prediction error under $h$=3 is reduced to 8.1% of the corresponding direct-prediction error, and for $h$=15 it decreases to 55.61% of the corresponding direct-prediction error. For the LZ chaotic time series, the error under $h$=3 is reduced to 28.68% of the corresponding direct-prediction error, and for $h$=15 it decreases to 77.28%. These results indicate that the fusion-prediction method can greatly lower the training errors. Moreover, it is found from Table 2 that for any prediction step-size $h$ (or iteration number $h-1$), the fusion-prediction errors for the LZ chaotic time series are larger than those for the MG chaotic time series. The reason is that the LZ system has more complex and more violently oscillating chaotic dynamics than the MG system, which makes the LZ chaotic time series more difficult to predict.


Table 2. The training errors for the MG and LZ chaotic time-series under the direct prediction and fusion prediction with the iteration step-size ranging from 3 to 15

To visually observe the influences of the system parameters on the direct-prediction and fusion-prediction errors, we calculate the dependences of $NMSE_x^{out}$, $NMSE_x^{0}$, $NMSE_y^{out}$ and $NMSE_y^{0}$ on the different parameters ($\alpha$, $\gamma _p$, $k_f$, $k_x$ ($k_y$), $\Delta \omega$ and $\eta$) for the iteration step-sizes $h$=3, 6, 9, as shown in Fig. 8, where $T$=10 ns and $\theta$=10 ps. One sees from this figure that the direct-prediction and the fusion-prediction errors are little influenced by any single parameter when $h$ is fixed. Moreover, in each parameter space, although the fusion-prediction errors ($NMSE_x^{out}$ and $NMSE_y^{out}$) grow with increasing $h$, they remain far smaller than the direct-prediction errors ($NMSE_x^{0}$ and $NMSE_y^{0}$) for all iteration step-sizes. In addition, across the different parameter spaces, the fusion-prediction errors for a given $h$ differ only slightly. For example, in the parameter space of $\alpha$ displayed in Figs. 8(a$_1$) and 8(a$_2$), $NMSE_x^{out}$ and $NMSE_y^{out}$ under $h$=3 vary from 0.00143 to 0.00196 and from 0.00435 to 0.00464, respectively. With $h$ fixed at 6, they change from 0.00461 to 0.00466 and from 0.00727 to 0.00797, respectively. When $h$=9, they range from 0.00735 to 0.00772 and from 0.00905 to 0.00967, respectively. In the parameter space of $\gamma _p$ shown in Figs. 8(a$_3$) and 8(a$_4$), $NMSE_x^{out}$ and $NMSE_y^{out}$ under $h$=3 change from 0.00142 to 0.00191 and from 0.00435 to 0.00469, respectively. If $h$=6, they range from 0.00425 to 0.00462 and from 0.00745 to 0.00788, respectively. When $h$=9, they vary from 0.00725 to 0.00763 and from 0.00876 to 0.00963, respectively. In the other parameter spaces, the variation ranges of $NMSE_x^{out}$ and $NMSE_y^{out}$ are similar to those in the parameter space of $\gamma _p$ or $\alpha$ under the same iteration step-size. These results show that the fusion-prediction errors are strongly robust against perturbations of the system parameters.


Fig. 8. When the iteration step-size $h$=3, 6, 9, the dependences of $NMSE_x^{out}$, $NMSE_x^{0}$, $NMSE_y^{out}$ and $NMSE_y^{0}$ on the different parameters under the direct prediction and fusion prediction. Here, $T$=10 ns and $\theta$=10 ps. (a$_1$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\alpha$; (a$_2$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\alpha$; (a$_3$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\gamma _p$ ; (a$_4$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\gamma _p$ ; (b$_1$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $k_f$; (b$_2$) $NMSE_y^{out}$, $NMSE_y^{0}$ via $k_f$; (b$_3$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $k_x (k_y)$; (b$_4$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $k_x (k_y)$; (c$_1$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\Delta \omega$; (c$_2$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\Delta \omega$; (c$_3$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\eta$; (c$_4$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\eta$.


4. Conclusions

In summary, based on two parallel RCs realized using the x-PC and y-PC of the optically pumped spin-VCSEL with double optical feedback, we have proposed a fusion-prediction method for the MG and LZ chaotic time series, in which the direct-prediction and iterative-prediction results are fused by a weighted average. The results show that, compared with the direct-prediction errors, the fusion-prediction errors decrease greatly under different parameters. When the iteration step-size $h$=3, $\theta$=10 ps and $T$=10 ns, the fusion-prediction errors for the MG and LZ chaotic time series decrease to 0.00178 and 0.004627, i.e., 8.1% and 28.68% of the corresponding direct-prediction errors, respectively. Even when $h$ reaches 15, the fusion-prediction errors for the MG and LZ chaotic time series are still reduced to 55.61% and 77.28% of the corresponding direct-prediction errors, respectively. Moreover, the fusion-prediction errors are strongly robust against perturbations of the system parameters. Our proposed fusion-prediction method can thus meet the requirement of high predictive accuracy for some complex nonlinear time series in practice. Finally, the proposed fusion-prediction scheme can be generalized to any other delay-based reservoir computer to enhance its time-series prediction accuracy.

Funding

National Natural Science Foundation of China (62075168); Basic and Applied Basic Research Foundation of Guangdong Province (2023A1515010726); Major Projects of Guangdong Education Department for Foundation Research and Applied Research (2017KZDX086); Innovation team project of colleges and universities in Guangdong Province (2021KCXTD051); Special project in key fields of Guangdong Universities: the new generation of communication technology (2020ZDZX3052).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. L. Giles, S. Lawrence, and A. C. Tsoi, “Noisy time series prediction using recurrent neural networks and grammatical inference,” Mach. Learn. 44(1/2), 161–183 (2001). [CrossRef]  

2. S. Soltani, “On the use of the wavelet decomposition for time series prediction,” Neurocomputing 48(1-4), 267–277 (2002). [CrossRef]  

3. N. I. Sapankevych and R. Sankar, “Time series prediction using support vector machines: a survey,” IEEE Comput. Intell. Mag. 4(2), 24–38 (2009). [CrossRef]  

4. E. Kayacan, B. Ulutas, and O. Kaynak, “Grey system theory-based models in time series prediction,” Expert Syst. with Appl. 37(2), 1784–1789 (2010). [CrossRef]  

5. D. Ömer Faruk, “A hybrid neural network and ARIMA model for water quality time series prediction,” Eng. Appl. Artif. Intell. 23(4), 586–594 (2010). [CrossRef]  

6. S. Aghabozorgi, A. Seyed Shirkhorshidi, and T. Ying Wah, “Time-series clustering – a decade review,” Inf. Syst. 53, 16–38 (2015). [CrossRef]  

7. W. Bao, J. Yue, and Y. Rao, “A deep learning framework for financial time series using stacked autoencoders and long-short term memory,” PLoS One 12(7), e0180944 (2017). [CrossRef]  

8. A. Yadav, C. Jha, and A. Sharan, “Optimizing LSTM for time series prediction in indian stock market,” Procedia Comput. Sci. 167, 2091–2100 (2020). [CrossRef]  

9. J.-Z. Yan, S.-S. Zhao, W.-D. Lan, S.-Y. Li, S.-S. Zhou, J.-G. Chen, J.-Y. Zhang, and Y.-J. Yang, “Calculation of high-order harmonic generation of atoms and molecules by combining time series prediction and neural networks,” Opt. Express 30(20), 35444–35456 (2022). [CrossRef]  

10. X.-Z. Li, B. Sheng, and M. Zhang, “Predicting the dynamical behaviors for chaotic semiconductor lasers by reservoir computing,” Opt. Lett. 47(11), 2822–2825 (2022). [CrossRef]  

11. X.-D. Li, J. Ho, and T. Chow, “Approximation of dynamical time-variant systems by continuous-time recurrent neural networks,” IEEE Transactions on Circuits and Systems II: Express Briefs 52(3), 656–667 (2005). [CrossRef]  

12. Z. Jia-Shu and X. Xian-Ci, “Predicting chaotic time series using recurrent neural network,” Chin. Phys. Lett. 17, 100300 (2022).

13. J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica 5(6), 756–760 (2018). [CrossRef]  

14. J. Kugelman, D. Alonso-Caneiro, S. A. Read, S. J. Vincent, and M. J. Collins, “Automatic segmentation of oct retinal boundaries using recurrent neural networks and graph search,” Biomed. Opt. Express 9(11), 5759–5777 (2018). [CrossRef]  

15. H. Hewamalage, C. Bergmeir, and K. Bandara, “Recurrent neural networks for time series forecasting: Current status and future directions,” Int. J. Forecast. 37(1), 388–427 (2021). [CrossRef]  

16. L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nat. Commun. 2(1), 468 (2011). [CrossRef]  

17. L. Larger, M. C. Soriano, D. Brunner, L. Appeltant, J. M. Gutierrez, L. Pesquera, C. R. Mirasso, and I. Fischer, “Photonic information processing beyond turing: an optoelectronic implementation of reservoir computing,” Opt. Express 20(3), 3241–3249 (2012). [CrossRef]  

18. Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, “Optoelectronic reservoir computing,” Sci. Rep. 2(1), 287 (2012). [CrossRef]  

19. K. Vandoorne, J. Dambre, D. Verstraeten, B. Schrauwen, and P. Bienstman, “Parallel reservoir computing using optical amplifiers,” IEEE Transactions on Neural Networks 22(9), 1469–1481 (2011). [CrossRef]  

20. R. M. Nguimdo, G. Verschaffelt, J. Danckaert, and G. V. der Sande, “Fast photonic information processing using semiconductor lasers with delayed optical feedback: Role of phase dynamics,” Opt. Express 22(7), 8672–8686 (2014). [CrossRef]  

21. L. Appeltant, G. Van der Sande, J. Danckaert, and I. Fischer, “Constructing optimized binary masks for reservoir computing with delay systems,” Sci. Rep. 4(1), 3629 (2014). [CrossRef]  

22. J. Bueno, D. Brunner, M. C. Soriano, and I. Fischer, “Conditions for reservoir computing performance using semiconductor lasers with delayed optical feedback,” Opt. Express 25(3), 2401–2412 (2017). [CrossRef]  

23. Y. Kuriki, J. Nakayama, K. Takano, and A. Uchida, “Impact of input mask signals on delay-based photonic reservoir computing with semiconductor lasers,” Opt. Express 26(5), 5777–5788 (2018). [CrossRef]  

24. K. Takano, C. Sugano, M. Inubushi, K. Yoshimura, S. Sunada, K. Kanno, and A. Uchida, “Compact reservoir computing with a photonic integrated circuit,” Opt. Express 26(22), 29424–29439 (2018). [CrossRef]  

25. T. Weng, H. Yang, C. Gu, J. Zhang, and M. Small, “Synchronization of chaotic systems and their machine-learning models,” Phys. Rev. E 99(4), 042203 (2019). [CrossRef]  

26. X. Tan, Y. Hou, Z. Wu, and G. Xia, “Parallel information processing by a reservoir computing system based on a VCSEL subject to double optical feedback and optical injection,” Opt. Express 27(18), 26070–26079 (2019). [CrossRef]  

27. J. Bueno, J. Robertson, M. Hejda, and A. Hurtado, “Comprehensive performance analysis of a VCSEL-based photonic reservoir computer,” IEEE Photonics Technol. Lett. 33(16), 920–923 (2021). [CrossRef]  

28. Q. Zeng, Z. Wu, D. Yue, X. Tan, J. Tao, and G. Xia, “Performance optimization of a reservoir computing system based on a solitary semiconductor laser under electrical-message injection,” Appl. Opt. 59(23), 6932–6938 (2020). [CrossRef]  

29. X. X. Guo, S. Y. Xiang, Y. Qu, Y. N. Han, A. J. Wen, and Y. Hao, “Enhanced prediction performance of a neuromorphic reservoir computing system using a semiconductor nanolaser with double phase conjugate feedbacks,” J. Lightwave Technol. 39(1), 129–135 (2021). [CrossRef]  

30. T. Hülser, F. Köster, L. Jaurigue, and K. Lüdge, “Role of delay-times in delay-based photonic reservoir computing [Invited],” Opt. Mater. Express 12(3), 1214–1231 (2022). [CrossRef]  

31. N. Li, H. Susanto, B. Cemlyn, I. D. Henning, and M. J. Adams, “Secure communication systems based on chaos in optically pumped spin-vcsels,” Opt. Lett. 42(17), 3494–3497 (2017). [CrossRef]  

32. N. Li, H. Susanto, B. R. Cemlyn, I. D. Henning, and M. J. Adams, “Stability and bifurcation analysis of spin-polarized vertical-cavity surface-emitting lasers,” Phys. Rev. A 96(1), 013840 (2017). [CrossRef]  

33. X. Jiang, Y. Xie, B. Liu, Y. Ye, T. Song, J. Chai, and Q. Tang, “Dynamics of mutually coupled quantum dot spin-VCSELs subject to key parameters,” Nonlinear Dyn. 105(4), 3659–3671 (2021). [CrossRef]  

34. Y. Yigong, P. Zhou, M. Penghua, and L. Nianqiang, “Time-delayed reservoir computing based on an optically pumped spin VCSEL for high-speed processing,” Nonlinear Dyn. 107(3), 2619–2632 (2022). [CrossRef]  

35. Y. Huang, S. Gu, Y. Zeng, Z. Shen, P. Zhou, and N. Li, “Numerical investigation of photonic microwave generation in an optically pumped spin-VCSEL subject to optical feedback,” Opt. Express 31(6), 9827–9840 (2023). [CrossRef]  

36. M. B. Kennel, R. Brown, and H. D. I. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” Phys. Rev. A 45(6), 3403–3411 (1992). [CrossRef]  

37. M. Tezuka, K. Kanno, and M. Bunsen, “Reservoir computing with a slowly modulated mask signal for preprocessing using a mutually coupled optoelectronic system,” Jpn. J. Appl. Phys. 55(8S3), 08RE06 (2016). [CrossRef]  

38. D. Zhong, Y. Hu, K. Zhao, W. Deng, P. Hou, and J. Zhang, “Accurate separation of mixed high-dimension optical-chaotic signals using optical reservoir computing based on optically pumped VCSELs,” Opt. Express 30(22), 39561–39581 (2022). [CrossRef]  

39. M. San Miguel, Q. Feng, and J. V. Moloney, “Light-polarization dynamics in surface-emitting semiconductor lasers,” Phys. Rev. A 52(2), 1728–1739 (1995). [CrossRef]  

40. J. Nakayama, K. Kanno, and A. Uchida, “Laser dynamical reservoir computing with consistency: an approach of a chaos mask signal,” Opt. Express 24(8), 8679–8692 (2016). [CrossRef]  

41. S. Shahi, F. H. Fenton, and E. M. Cherry, "Prediction of chaotic time series using recurrent neural networks and reservoir computing techniques: A comparative study," Mach. Learn. Appl. 8, 100300 (2022). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (8)

Fig. 1. Schematic diagram of time-series prediction by two parallel optical reservoirs using the fusion of direct and iterative predictions, based on the optically pumped spin-VCSEL with double optical feedbacks. Here, (a): principle block diagram; (b): detailed light paths and circuits; MG: Mackey-Glass chaotic time series; LZ: Lorenz chaotic time series; DM: discrete module; PSRM: phase-space reconstruction module; MVC: matrix-vector converter; FCM: fusion computing module; RC: reservoir computing; KS: key switch; CW: continuous-wave laser; PM: phase modulator; IS: optical isolator; PC: polarization controller; FPC: fiber polarization coupler; NDF: neutral density filter; VCSEL: vertical-cavity surface-emitting laser; OC: optical circulator; VOA: variable optical attenuator; DL: delay line; FPBS: fiber polarization beam splitter; PD: photodiode; SDD: signal data distributor; OSC: oscilloscope; EA: electric amplifier; SC: scaling operation circuits; Mask: mask signal.
Fig. 2. Samples of the Mackey-Glass and Lorenz time-series data. Here, (a): Mackey-Glass; (b): Lorenz.
Fig. 3. (a) Samples of the chaotic time-series data $U_x(n+h)$ (blue solid line) and the output of the x-PC-based reservoir $U^{out}_x(n+h)$ (red dashed line) under the fusion prediction. (b) Samples of the chaotic time-series data $U_y(n+h)$ (blue solid line) and the output of the y-PC-based reservoir $U^{out}_y(n+h)$ (red dashed line) under the fusion prediction. Here, $h$=4, $T$=1 ns, $\theta$=1 ps, and $N$=1000.
Fig. 4. Maps of the evolutions of the $NMSE_x^{out}$ and $NMSE_y^{out}$ in the parameter space of $T$ and $h$ under the fusion prediction. Here, $\theta$=1 ps. (a): $NMSE_x^{out}$ via $T$ and $h$; (b): $NMSE_y^{out}$ via $T$ and $h$.
Fig. 5. Under the iteration step-sizes $h$=3, 6, 9, the dependences of the $NMSE$s on the period $T$ under the direct prediction and the fusion prediction. Here, $\theta$=1 ps. (a): $NMSE_x^{out}$, $NMSE_x^{0}$ via $T$; (b): $NMSE_y^{out}$, $NMSE_y^{0}$ via $T$.
Fig. 6. Maps of the evolutions of the $NMSE_x^{out}$ and $NMSE_y^{out}$ in the parameter space of $h$ and $\theta$ under the fusion prediction. Here, $T$=10 ns. (a): $NMSE_x^{out}$ via $h$ and $\theta$; (b): $NMSE_y^{out}$ via $h$ and $\theta$.
Fig. 7. Under the iteration step-sizes $h$=3, 6, 9, the dependences of the training errors on the period $\theta$ under the direct prediction and the fusion prediction. Here, $T$=10 ns. (a): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\theta$; (b): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\theta$.
Fig. 8. When the iteration step-sizes $h$=3, 6, 9, the dependences of $NMSE_x^{out}$, $NMSE_x^{0}$, $NMSE_y^{out}$ and $NMSE_y^{0}$ on the different parameters under the direct prediction and fusion prediction. Here, $T$=10 ns and $\theta$=10 ps. (a$_1$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\alpha$; (a$_2$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\alpha$; (a$_3$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\gamma_p$; (a$_4$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\gamma_p$; (b$_1$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $k_f$; (b$_2$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $k_f$; (b$_3$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $k_x (k_y)$; (b$_4$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $k_x (k_y)$; (c$_1$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\Delta \omega$; (c$_2$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\Delta \omega$; (c$_3$): $NMSE_x^{out}$, $NMSE_x^{0}$ via $\eta$; (c$_4$): $NMSE_y^{out}$, $NMSE_y^{0}$ via $\eta$.

Tables (2)

Table 1. Parameter values for the VCSEL used in the calculations

Table 2. The training errors for the MG and LZ chaotic time-series under the direct prediction and fusion prediction with the iteration step-size ranging from 3 to 15

Equations (27)

$$\frac{dE_x(t)}{dt}=k(1+i\alpha)\{[M(t)-1]E_x(t)+in(t)E_y(t)\}-i(\gamma_p+\Delta\omega)E_x(t)-\gamma_aE_x(t)+\xi_x\{\beta_{sp}\gamma[n(t)+M(t)]\}^{1/2}+k_fE_x(t-\tau)\exp(-i\omega\tau)+k_xE_x^{inj},\tag{1}$$
$$\frac{dE_y(t)}{dt}=k(1+i\alpha)\{[M(t)-1]E_y(t)+in(t)E_x(t)\}+i(\gamma_p-\Delta\omega)E_y(t)-\gamma_aE_y(t)+\xi_y\{\beta_{sp}\gamma[n(t)+M(t)]\}^{1/2}+k_fE_y(t-\tau)\exp(-i\omega\tau)+k_yE_y^{inj},\tag{2}$$
$$\frac{dM(t)}{dt}=\gamma\{\eta-[1+|E_x(t)|^2+|E_y(t)|^2]M(t)\}-i\gamma n(t)[E_y(t)E_x^*(t)-E_x(t)E_y^*(t)],\tag{3}$$
$$\frac{dn(t)}{dt}=\gamma P\eta-n(t)\{\gamma_s+\gamma[|E_x(t)|^2+|E_y(t)|^2]\}-i\gamma M(t)[E_y(t)E_x^*(t)-E_x(t)E_y^*(t)].\tag{4}$$
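For readers who want to reproduce the reservoir dynamics, the following is a minimal Python sketch of an Euler-Maruyama integration of Eqs. (1)-(4). The parameter values are illustrative placeholders rather than the values of Table 1, the pre-delay history is taken as zero, and the arrays inj_x/inj_y stand for the injected fields $E_x^{inj}$, $E_y^{inj}$ sample by sample.

import numpy as np

# Illustrative parameters (assumed values for this sketch; see Table 1 for the paper's).
k, alpha         = 300.0, 3.0    # cavity decay rate (ns^-1), linewidth enhancement factor
gamma, gamma_s   = 1.0, 100.0    # carrier decay rate, spin relaxation rate (ns^-1)
gamma_a, gamma_p = 0.5, 20.0     # dichroism, birefringence (ns^-1)
dw               = 0.0           # detuning Delta_omega (ns^-1)
beta_sp          = 1e-5          # spontaneous-emission factor
eta, P           = 1.2, 0.5      # normalized pump power, pump polarization ellipticity
kf, omega_tau    = 10.0, 0.0     # feedback rate (ns^-1), feedback phase omega*tau (rad)
kx = ky          = 20.0          # injection rates (ns^-1)
tau, dt          = 3.0, 1e-3     # feedback delay and integration step (ns)

def simulate(t_total, inj_x, inj_y, seed=0):
    """Euler-Maruyama integration of Eqs. (1)-(4)."""
    rng = np.random.default_rng(seed)
    steps, d = int(t_total / dt), int(tau / dt)
    Ex = np.zeros(steps + 1, dtype=complex); Ex[0] = 1e-3
    Ey = np.zeros(steps + 1, dtype=complex); Ey[0] = 1e-3
    M, n = 1.0, 0.0
    fb = np.exp(-1j * omega_tau)                        # feedback phase factor
    for t in range(steps):
        Exd = Ex[t - d] if t >= d else 0.0              # delayed fields (zero history)
        Eyd = Ey[t - d] if t >= d else 0.0
        Ptot  = abs(Ex[t])**2 + abs(Ey[t])**2
        cross = Ey[t]*np.conj(Ex[t]) - Ex[t]*np.conj(Ey[t])   # purely imaginary
        dEx = (k*(1+1j*alpha)*((M-1)*Ex[t] + 1j*n*Ey[t])
               - 1j*(gamma_p + dw)*Ex[t] - gamma_a*Ex[t] + kf*Exd*fb + kx*inj_x[t])
        dEy = (k*(1+1j*alpha)*((M-1)*Ey[t] + 1j*n*Ex[t])
               + 1j*(gamma_p - dw)*Ey[t] - gamma_a*Ey[t] + kf*Eyd*fb + ky*inj_y[t])
        dM = gamma*(eta - (1 + Ptot)*M) + (-1j*gamma*n*cross).real
        dn = gamma*P*eta - n*(gamma_s + gamma*Ptot) + (-1j*gamma*M*cross).real
        amp = np.sqrt(max(beta_sp*gamma*(n + M), 0.0) * dt)   # spontaneous-emission strength
        Ex[t+1] = Ex[t] + dEx*dt + amp*(rng.standard_normal() + 1j*rng.standard_normal())
        Ey[t+1] = Ey[t] + dEy*dt + amp*(rng.standard_normal() + 1j*rng.standard_normal())
        M, n = M + dM*dt, n + dn*dt
    return Ex, Ey

# Example: 5 ns of free running (no injection).
Ex, Ey = simulate(5.0, np.zeros(5000), np.zeros(5000))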
$$E_j^{inj}=I_d\exp[i\pi S_j(t)],\quad j=x,y\ (\text{the same below}),\tag{5}$$
$$S_x(t)=\mathrm{Mask}_1(t)\times[U_x(n)]\times\gamma,\quad S_y(t)=\mathrm{Mask}_2(t)\times[U_y(n)]\times\gamma,\tag{6}$$
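Equations (5)-(6) define the masked, phase-encoded drive: each input sample is held for one reservoir period $T=N\theta$ and multiplied node by node by the mask. A minimal sketch of that preprocessing, assuming a binary mask, $N$=1000 virtual nodes, and assumed values for the scaling factor (gamma_sc), the node duration in integration steps (theta_steps), and the carrier intensity (I_d):

import numpy as np

rng = np.random.default_rng(1)
N, theta_steps, gamma_sc, I_d = 1000, 10, 0.8, 1.0     # assumed values
mask1 = rng.choice([-1.0, 1.0], size=N)                # binary mask; chaos masks are another option [40]

def masked_drive(U):
    """Hold each sample U(n) for one period T = N*theta, scaled node-by-node by the mask (Eq. (6))."""
    S = mask1[None, :] * U[:, None] * gamma_sc         # shape (len(U), N)
    return np.repeat(S.reshape(-1), theta_steps)       # piecewise-constant in time

# Eq. (5): phase modulation of the CW carrier by the masked signal.
S_x = masked_drive(np.sin(np.linspace(0.0, 8.0*np.pi, 200)))   # placeholder input series
E_inj_x = I_d * np.exp(1j * np.pi * S_x)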
$$U_x^0(n+K)=W_{x,1}^{0,K}b^{out}+W_{x,2}^{0,K}U_x(n)+\sum_{i=1}^{N}W_{x,i+2}^{0,K}I_{x,i}^{0}(n),\quad K=1,2,\ldots,h\ (\text{the same below}),\tag{7}$$
$$U_y^0(n+K)=W_{y,1}^{0,K}b^{out}+W_{y,2}^{0,K}U_y(n)+\sum_{i=1}^{N}W_{y,i+2}^{0,K}I_{y,i}^{0}(n),\tag{8}$$
$$W_x^{0,K}=Y_{0x}^{K}X_{0x}^{Tr}/(X_{0x}X_{0x}^{Tr}+\delta\Pi),\quad W_y^{0,K}=Y_{0y}^{K}X_{0y}^{Tr}/(X_{0y}X_{0y}^{Tr}+\delta\Pi),\tag{9}$$
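The direct prediction of Eqs. (7)-(9) is an ordinary ridge regression: the state matrix X stacks the bias $b^{out}$, the current input $U(n)$, and the $N$ virtual-node responses, and one weight vector is trained per horizon $K$; $\delta$ and $\Pi$ are the ridge parameter and the identity matrix. A sketch (function names are ours):

import numpy as np

def train_readout(X, Y, delta=1e-6):
    """Eq. (9): W = Y X^Tr (X X^Tr + delta*Pi)^-1, with X of shape (N+2, s) and Y of shape (s,)."""
    G = X @ X.T + delta * np.eye(X.shape[0])
    return np.linalg.solve(G, X @ Y)     # G is symmetric, so this equals (Y X^Tr) G^-1

def predict(W, X):
    """Eq. (7): weighted sum over bias, current input and node states."""
    return W @ X

One such readout is trained for every horizon $K=1,2,\ldots,h$, so the direct $h$-step estimate $U^0(n+h)$ needs no iteration.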
$$U_x^1(n+l)=W_{x,1}^{1,l}b^{out}+W_{x,2}^{1,l}U_x^0(n+1)+\sum_{i=1}^{N}W_{x,i+2}^{1,l}I_{x,i}^{1}(n),\quad l=2,3,\ldots,h\ (\text{the same below}),\tag{10}$$
$$U_y^1(n+l)=W_{y,1}^{1,l}b^{out}+W_{y,2}^{1,l}U_y^0(n+1)+\sum_{i=1}^{N}W_{y,i+2}^{1,l}I_{y,i}^{1}(n),\tag{11}$$
$$W_x^{1,l}=Y_{1x}^{l}X_{1x}^{Tr}/(X_{1x}X_{1x}^{Tr}+\delta\Pi),\quad W_y^{1,l}=Y_{1y}^{l}X_{1y}^{Tr}/(X_{1y}X_{1y}^{Tr}+\delta\Pi),\tag{12}$$
$$U_x^{h-1}(n+h)=W_{x,1}^{h-1,1}b^{out}+W_{x,2}^{h-1,1}U_x^{h-2}(n+h-1)+\sum_{i=1}^{N}W_{x,i+2}^{h-1,1}I_{x,i}^{h-1}(n),\tag{13}$$
$$U_y^{h-1}(n+h)=W_{y,1}^{h-1,1}b^{out}+W_{y,2}^{h-1,1}U_y^{h-2}(n+h-1)+\sum_{i=1}^{N}W_{y,i+2}^{h-1,1}I_{y,i}^{h-1}(n),\tag{14}$$
$$W_x^{h-1,1}=Y_{(h-1)x}X_{(h-1)x}^{Tr}/(X_{(h-1)x}X_{(h-1)x}^{Tr}+\delta\Pi),\quad W_y^{h-1,1}=Y_{(h-1)y}X_{(h-1)y}^{Tr}/(X_{(h-1)y}X_{(h-1)y}^{Tr}+\delta\Pi),\tag{15}$$
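Equations (10)-(15) chain these readouts: the one-step estimate $U^0(n+1)$ is re-injected into the reservoir, a fresh readout maps the resulting states to the remaining horizons, and so on until the $(h-1)$-th iteration yields $U^{h-1}(n+h)$. A sketch of that loop, assuming a helper run_reservoir() that wraps the simulation above and a table readouts[j][l] holding the trained $W^{j,l}$ (predict() is from the previous sketch):

def iterative_predict(U_input, readouts, run_reservoir, h):
    """Return [U^0(n+h), ..., U^{h-1}(n+h)] for the fusion step of Eqs. (16)-(19)."""
    X = run_reservoir(U_input)                        # states driven by the true series
    preds = [predict(readouts[0][h], X)]              # U^0(n+h): the direct prediction
    u_next = predict(readouts[0][1], X)               # U^0(n+1): first link of the chain
    for j in range(1, h):
        X = run_reservoir(u_next)                     # re-inject the previous estimate
        preds.append(predict(readouts[j][h], X))      # U^j(n+h)
        if j < h - 1:
            u_next = predict(readouts[j][j + 1], X)   # U^j(n+j+1) feeds iteration j+1
    return preds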
$$U_x^{out}(n+h)=f\!\left(U_x^0(n+h),U_x^1(n+h),\ldots,U_x^{h-1}(n+h)\right),\tag{16}$$
$$U_y^{out}(n+h)=f\!\left(U_y^0(n+h),U_y^1(n+h),\ldots,U_y^{h-1}(n+h)\right),\tag{17}$$
$$U_x^{out}(n+h)=\sum_{j=0}^{h-1}a_{xj}U_x^j(n+h),\tag{18}$$
$$U_y^{out}(n+h)=\sum_{j=0}^{h-1}a_{yj}U_y^j(n+h),\tag{19}$$
$$U_x^{out}=a_xU_x,\quad U_y^{out}=a_yU_y,\tag{20}$$
$$U_x=\left[U_x^0(n+h),U_x^1(n+h),\ldots,U_x^{h-1}(n+h)\right],\quad n=1,2,3,\ldots,s\ (\text{the same below}),\tag{21}$$
$$U_y=\left[U_y^0(n+h),U_y^1(n+h),\ldots,U_y^{h-1}(n+h)\right].\tag{22}$$
$$a_x=U_xU_x/[(U_x)^{Tr}U_x+\delta\Pi],\quad a_y=U_yU_y/[(U_y)^{Tr}U_y+\delta\Pi],\tag{23}$$
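Equations (18)-(23) fuse the $h$ estimates of $U(n+h)$ by a weighted average. As we read Eq. (23), the weight vectors $a_x$, $a_y$ are themselves fitted by ridge regression of the stacked predictions against the true series on the training set; a sketch under that reading:

import numpy as np

def fit_fusion_weights(U_stack, U_true, delta=1e-6):
    """U_stack: (h, s) with row j = U^j(n+h) over the s training samples; U_true: (s,)."""
    G = U_stack @ U_stack.T + delta * np.eye(U_stack.shape[0])
    return np.linalg.solve(G, U_stack @ U_true)    # fusion weights a of Eq. (23)

def fuse(a, U_stack):
    """Eq. (18): U_out(n+h) = sum_j a_j * U^j(n+h)."""
    return a @ U_stack

Intuitively, the fitted weights let the fused output lean on whichever estimate is locally more accurate, which is consistent with the reduced errors reported in Table 2.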
$$NMSE_x^{out}(h)=\frac{1}{L}\sum_{n=1}^{L}\frac{\left[U_x^{out}(n+h)-U_x(n+h)\right]^2}{\mathrm{var}\left[U_x^{out}(n+h)\right]},\tag{24}$$
$$NMSE_x^{0}(h)=\frac{1}{L}\sum_{n=1}^{L}\frac{\left[U_x^{0}(n+h)-U_x(n+h)\right]^2}{\mathrm{var}\left[U_x^{0}(n+h)\right]},\tag{25}$$
$$NMSE_y^{out}(h)=\frac{1}{L}\sum_{n=1}^{L}\frac{\left[U_y^{out}(n+h)-U_y(n+h)\right]^2}{\mathrm{var}\left[U_y^{out}(n+h)\right]},\tag{26}$$
$$NMSE_y^{0}(h)=\frac{1}{L}\sum_{n=1}^{L}\frac{\left[U_y^{0}(n+h)-U_y(n+h)\right]^2}{\mathrm{var}\left[U_y^{0}(n+h)\right]}.\tag{27}$$
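Finally, Eqs. (24)-(27) score both branches with a normalized mean-square error. A direct transcription for one polarization and one branch (as written, the normalization uses the variance of the prediction; np.var(target) would give the more common convention):

import numpy as np

def nmse(pred, target):
    """NMSE of Eqs. (24)-(27): mean squared error over L test samples, normalized per Eq. (24)."""
    return np.mean((pred - target) ** 2) / np.var(pred)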