Abstract

We propose a fast and accurate signal quality monitoring scheme that uses convolutional neural networks for error vector magnitude (EVM) estimation in coherent optical communications. We build a regression model to extract EVM information from complex signal constellation diagrams using a small number of received symbols. For the additive-white-Gaussian-noise-impaired channel, the proposed EVM estimation scheme shows a normalized mean absolute estimation error of 3.7% for quadrature phase-shift keying, 2.2% for 16-ary quadrature amplitude modulation (16QAM), and 1.1% for 64QAM signals, requiring only 100 symbols per constellation cluster in each observation period. Therefore, it can be used as a low-complexity alternative to conventional bit-error-rate estimation, enabling solutions for intelligent optical performance monitoring.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Coherent optical communications are widely used in metro and long-haul networks, where physical-layer impairments result in dynamically varying optical signal degradation. Advanced modulation formats, such as quadrature phase-shift keying (QPSK) and quadrature amplitude modulation (QAM), are now commonly used in coherent transceivers operating at 100 Gbps and beyond [1]. As optical networks become more heterogeneous and dynamic, accurate optical performance monitoring (OPM) characterized by, e.g., the signal-to-noise ratio (SNR) or bit error rate (BER) is important to ensure reliable data transmission [2]. Error vector magnitude (EVM), as one of the commonly used performance metrics, contains signal quality information of high-order modulated signals. For an additive white Gaussian noise (AWGN) channel, the EVM can be mapped to a BER and SNR [3]. Normally, millions of received symbols are used in EVM calculation [1,3]. However, such a cumulative process is time consuming and unsuitable for tracking fast network dynamics. Instead, a fast and accurate EVM monitoring scheme that requires only a small number of received symbols is needed.
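The EVM-to-SNR and EVM-to-BER mappings for the AWGN channel [1,3] can be illustrated with a short Python sketch. This is not code from the paper: it uses the common textbook approximations for Gray-mapped square M-QAM and assumes the RMS EVM is normalized to the average reference power.

```python
import math

def evm_to_snr(evm_rms):
    """SNR (linear) from RMS EVM normalized to the average reference power.

    Standard AWGN approximation: SNR ~ 1 / EVM^2.
    """
    return 1.0 / evm_rms**2

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def evm_to_ber(evm_rms, m):
    """Approximate BER of Gray-mapped square M-QAM from the RMS EVM.

    Uses the common approximation P_b ~ P_s / log2(M), with
    P_s ~ 4 (1 - 1/sqrt(M)) Q(sqrt(3 * SNR / (M - 1))).
    """
    snr = evm_to_snr(evm_rms)
    p_s = 4.0 * (1.0 - 1.0 / math.sqrt(m)) * qfunc(math.sqrt(3.0 * snr / (m - 1)))
    return p_s / math.log2(m)
```

For QPSK (M = 4), an RMS EVM of 0.3 corresponds to an SNR of about 10.5 dB and a BER on the order of 1e-4 under these approximations.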

Deep learning is a promising technique for OPM due to its ability to extract knowledge from high-dimensional data [4–6]. Various neural network types have been exploited for OPM, such as deep neural networks (DNNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs) [7,8]. For example, in [7], a DNN is used to estimate the optical SNR (OSNR) and perform modulation format recognition (MFR) based on signal amplitude histograms. In [9], an RNN architecture called long short-term memory (LSTM) is used to estimate the OSNR and nonlinear noise power in the frequency domain, achieving a 1 dB test error in both OSNR and nonlinear noise power estimation. The LSTM capabilities in the monitoring of chromatic dispersion (CD) have also been demonstrated [10]. However, CNNs attract even more attention due to their powerful capability of extracting knowledge from graphical information [11,12]. AlexNet [13], ResNet [14], and VGGNet [15] are well-known CNN structures that show high accuracy on pattern recognition tasks. CNNs have been used for MFR and OSNR estimation based on graphical information in [8,16]. This method has been shown to extract signal features from complex signal constellation diagrams, with 100% MFR accuracy and less than 0.7 dB OSNR estimation error. However, to the best of our knowledge, CNN-based EVM estimation using constellation diagram images has not been studied.

In this paper, we propose an EVM estimation scheme that uses a CNN together with complex signal constellation diagrams for fast and accurate OPM. In coherent optical communications, constellation diagrams are commonly used to qualitatively evaluate the received signal, as they provide a human-friendly visualization format whose observation provides insight into, e.g., the modulation format, OSNR, EVM, phase noise, etc. However, distilling exact values of, e.g., the BER from the diagram requires complex calculations that demand improvements in terms of automation and scalability. To this end, we apply supervised learning to analyze the visual representation of the channel state. The CNN structure is designed in such a way that each convolutional layer is followed by a max-pooling operation for complexity reduction. The resulting CNN can be applied directly to the constellation images for EVM estimation. Additional conversions, e.g., amplitude histograms [7], transformation to the frequency domain [9], or two-tap delay sampling [17], are avoided, enabling simplified data acquisition directly from transceivers. Moreover, unlike previous demonstrations of CNN-based OPM schemes (e.g., [16]), the proposed approach allows for utilizing only a small number of received symbols to estimate the EVM. The accuracy of the proposed EVM estimator is verified using 32 Gbaud QPSK, 16QAM, and 64QAM signals across the OSNR range of interest for practical operations. While using only 100 symbols per cluster in the complex signal constellation diagram, we achieve accurate EVM estimation with a normalized mean absolute error (MAE) of 3.7% for QPSK (${\rm M} = {4}$ clusters), 2.2% for 16QAM (${\rm M} = {16}$ clusters), and 1.1% for 64QAM (${\rm M} = {64}$ clusters).

Fig. 1. Schematic diagram of the CNN structure (two convolutional layers) for EVM estimation from constellation diagrams. K denotes the kernel size, F is the number of filters, and S is the stride step.

The rest of the paper is organized as follows. Section 2 gives an overview of the proposed technique and the methodology developed to obtain the constellation diagram dataset. Section 3 analyzes the CNN performance with respect to its architecture, i.e., the number and size of the convolutional layers, and the number of symbols/cluster. Section 4 provides concluding remarks.

2. OPERATION PRINCIPLE AND DATASET PREPARATION

In this section, we introduce the main characteristics of CNNs and the key hyperparameters for model optimization. Moreover, we describe the simulation setup used to collect the dataset comprising images of complex signal constellation diagrams for QPSK, 16QAM, and 64QAM at different OSNR levels.

A. Employed CNN Structures

The proposed CNN structure used for EVM estimation is shown in Fig. 1. In general, a CNN consists of an input layer that receives an $n$-dimensional array and a number of convolutional, pooling, and fully connected layers [18]. For instance, ordinary digital images (composed of red, green, and blue components) are represented as a three-dimensional array. Two of the dimensions represent the image in the vertical and horizontal directions, where each position in the array represents one pixel in the image. The third dimension contains the colors, where usually three positions represent the three color components (red, green, and blue) of each pixel. In single-color images (e.g., grayscale), only two dimensions are needed (for the horizontal and vertical directions).

CNNs have been successfully applied to image recognition problems in many research areas [15,18–21] due to their excellent capabilities of capturing spatial correlations. Each convolutional layer contains multiple kernels, which are used to scan the entire image (or feature maps) from left to right and from top to bottom to obtain the output feature maps. Convolutional layers generate rich feature maps by convolving the input image (or feature maps from previous layers) and filters. Filter kernels are updated during the training process. The $i$th feature map in convolutional layer $l$ is computed as

$$x_i^l = f\left(\sum_{j=1}^{m^{l-1}} x_j^{l-1} * k_{i,j}^l + b_i^l\right),$$
where $x_j^{l - 1}$ and $x_i^l$ are feature maps of the previous layer $l - 1$ and the current layer $l$, $m^{l-1}$ is the number of feature maps in layer $l-1$, $k_{i,j}^l$ denotes the filter kernel connecting the $j$th feature map in the previous layer with the $i$th feature map in the current layer, $b_i^l$ represents a bias matrix, and $f(\cdot)$ is the activation function. The number of trainable parameters grows sharply as the number of convolutional layers increases.
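Eq. (1) can be made concrete with a minimal NumPy sketch. Note that, as in most deep learning frameworks, the "convolution" below is implemented as a sliding-window cross-correlation without kernel flipping, and the loop-based form is for clarity rather than speed.

```python
import numpy as np

def conv_layer(prev_maps, kernels, biases, stride=1):
    """Compute the feature maps of one convolutional layer per Eq. (1).

    prev_maps: (m, H, W) feature maps from layer l-1
    kernels:   (n, m, kh, kw) one kernel per (output, input) map pair
    biases:    (n,) one bias per output feature map
    Returns (n, H', W') feature maps with ReLU applied ('valid' mode).
    """
    n, m, kh, kw = kernels.shape
    H, W = prev_maps.shape[1:]
    Ho = (H - kh) // stride + 1
    Wo = (W - kw) // stride + 1
    out = np.zeros((n, Ho, Wo))
    for i in range(n):                      # each output feature map x_i^l
        acc = np.zeros((Ho, Wo))
        for j in range(m):                  # sum over input maps x_j^{l-1}
            for y in range(Ho):
                for x in range(Wo):
                    patch = prev_maps[j,
                                      y * stride:y * stride + kh,
                                      x * stride:x * stride + kw]
                    acc[y, x] += np.sum(patch * kernels[i, j])
        out[i] = np.maximum(0.0, acc + biases[i])  # activation f(.) = ReLU
    return out
```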

We add max-pooling layers to grasp the main features of a certain region, reducing the dimensions of the feature map passed to the following layer. This allows us to effectively reduce the number of trainable parameters in the network. After the series of convolutional layers, the resulting $n$-dimensional array is transformed into a one-dimensional array (flattened) and passed to fully connected layers. Finally, the output layer produces the estimated EVM value; in our case, it contains a single neuron with a linear activation function. A rectified linear unit (ReLU) is selected as the activation function for the convolutional and fully connected layers. The ReLU operation for the input $x$ of a neuron is given as [21]

Fig. 2. Simulation setup for data collection. PRBS, pseudo-random binary sequence; CW, continuous wave; I/Q in-phase/quadrature; MZM, Mach–Zehnder modulator; B2B, back-to-back; OBPF, optical bandpass filter; DSP, digital signal processing.

Fig. 3. True EVM values of simulated QPSK, 16QAM, and 64QAM signals with respect to the OSNR. (a)–(f) Corresponding constellation diagrams for the end-points of the considered OSNR ranges.

$$f(x) = \max(0, x).$$

It is well known that the estimation error of a defined estimator is related to the model complexity. A low complexity estimator may result in large prediction bias and poor generalization. Conversely, a high complexity model may adapt too closely to the training data, while estimation results on the test set have a high variance [22]. Therefore, it is important to adjust the model structure (number of layers and filters) for the proposed signal quality monitoring scheme so that the model obtains a good balance between training and testing estimation error.

One particularity of the EVM estimation is that even small estimation errors are relevant. Therefore, during training, small errors should have a relevant contribution to the CNN updates. For this purpose, we use the mean squared logarithmic error (MSLE) as the error function. The MSLE between the true EVM (${{\rm EVM}_t}$) and the estimated EVM (${{\rm EVM}_e}$) can be expressed as

$${\rm MSLE} = \frac{1}{k}\sum_{i=1}^{k} \left(\log\left({\rm EVM}_{t_i} + 1\right) - \log\left({\rm EVM}_{e_i} + 1\right)\right)^2,$$
where $k$ represents the total number of samples.
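For clarity, the MSLE of Eq. (3) can be written as a one-line NumPy function; this is an illustrative sketch, and Keras provides the equivalent built-in loss used for training.

```python
import numpy as np

def msle(evm_true, evm_est):
    """Mean squared logarithmic error between true and estimated EVM, Eq. (3)."""
    evm_true = np.asarray(evm_true, dtype=float)
    evm_est = np.asarray(evm_est, dtype=float)
    return np.mean((np.log(evm_true + 1.0) - np.log(evm_est + 1.0)) ** 2)
```

Because of the logarithm, small absolute errors at small EVM values still contribute noticeably to the loss, which is the motivation stated above.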

B. Simulation Setup for Data Collection

To collect the data, we set up a 32 Gbaud coherent optical system in VPItransmissionMaker [23], as shown in Fig. 2. The transmitter includes a continuous-wave (CW) laser and a dual-parallel Mach–Zehnder modulator (MZM) driven by an in-phase/quadrature (I/Q) driver for symbol mapping and pulse shaping. The pulse shaper is a root-raised cosine filter with a 0.15 roll-off factor. We use QPSK, 16QAM, and 64QAM modulation formats. For each format, we simulate ${{2}^{19}}$ symbols, choosing 10 OSNR values to cover the EVM range of practical interest: ${\rm OSNR} = [{12}:{30}]\;{\rm dB}$ for QPSK, ${\rm OSNR} = [{20}:{38}]\;{\rm dB}$ for 16QAM, and ${\rm OSNR} = [{26}:{44}]\;{\rm dB}$ for 64QAM, as shown in Fig. 3. These values ensure a BER below the hard-decision forward error correction (HD-FEC) threshold of 3.8e–3 for QPSK and below the soft-decision FEC (SD-FEC) threshold of 1e–2 for 16QAM and 64QAM. Figure 3 shows the EVM of the modulated signals with respect to the OSNR, along with the corresponding constellation diagrams. The true EVM values are computed from received symbols using k-means clustering to obtain the constellation cluster centroids, which makes it possible to achieve high accuracy while avoiding the use of pilot symbols [24].
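The k-means-assisted true-EVM computation [24] can be sketched as follows: a minimal Lloyd's iteration initialized at the ideal constellation points, followed by the RMS EVM normalized to the average reference (centroid) power. The normalization choice and iteration count are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def kmeans_centroids(symbols, init_centroids, n_iter=20):
    """Minimal Lloyd's k-means on complex symbols, initialized at the
    ideal constellation points (so no pilot symbols are needed)."""
    symbols = np.asarray(symbols, dtype=complex)
    c = np.asarray(init_centroids, dtype=complex).copy()
    for _ in range(n_iter):
        # assign each received symbol to its nearest centroid
        labels = np.argmin(np.abs(symbols[:, None] - c[None, :]), axis=1)
        for k in range(len(c)):
            members = symbols[labels == k]
            if members.size:
                c[k] = members.mean()
    return c, labels

def evm_rms(symbols, centroids, labels):
    """RMS EVM normalized to the average centroid (reference) power."""
    symbols = np.asarray(symbols, dtype=complex)
    err = symbols - centroids[labels]
    return np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(centroids) ** 2))
```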

Figure 4 illustrates the dataset preparation stages. First, we define the number of symbols per constellation cluster (N) to plot on a diagram. Next, we plot constellation diagrams without fixed elements (axis boxes, labels, and ticks), as shown in Figs. 3(a)–3(f). Then, we save ${\rm L} = {100}$ such images during the training period T. These images are then fed to the CNN. A smaller N corresponds to a shorter training period T for signal quality monitoring. We train the CNN model on the constellation diagrams collected during the training period. The trained model is then applied to estimate the EVM using data collected during the observation period. The monitoring interval shown in the figure is set by the network management system.
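The constellation-diagram images described above (point clouds with no axes, labels, or ticks) can be approximated with a simple rasterization. The sketch below bins received symbols into a fixed-size grayscale image with NumPy; the image resolution and axis span are hypothetical, not values from the paper.

```python
import numpy as np

def constellation_image(symbols, size=64, extent=1.5):
    """Rasterize complex symbols into a size x size grayscale image.

    The axes span [-extent, extent] on both I and Q with no axis
    decorations, keeping only the point cloud, as in Figs. 3(a)-3(f).
    """
    symbols = np.asarray(symbols, dtype=complex)
    bins = np.linspace(-extent, extent, size + 1)
    img, _, _ = np.histogram2d(symbols.real, symbols.imag, bins=(bins, bins))
    return (img > 0).astype(np.float32)  # binary dots, like a scatter plot
```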

Fig. 4. Schematic diagram of the dataset collection. N is the number of symbols per cluster, M is the number of clusters in a complex signal constellation diagram, and L is the number of constellation diagrams used for training.

Table 1. Configuration of Convolutional Layers for the Considered CNN Structures

To explore how many symbols are enough for accurate EVM estimation, we generate dataset options with ${\rm N} = {10}$, 100, 300, and 500 symbols per cluster in the complex signal constellation diagram. We refer to this as an N-symbol/cluster dataset. Each N-symbol/cluster dataset consists of 30 simulation scenarios (3 modulation formats and 10 different OSNR values), and each of them contains 100 images of constellation diagrams accumulated during the training period. Therefore, each N-symbol/cluster dataset contains 3000 images and 30 EVM labels. The training period and the observation window are set such that 50% and 25% of each N-symbol/cluster dataset are used for training and testing, respectively. The remaining 25% of the dataset is used for validation during training.

3. RESULTS AND DISCUSSION

We first investigate the impact of the CNN structure on the EVM estimation accuracy. For this purpose, the proposed monitoring scheme is evaluated with the dataset containing constellation diagrams of 300 symbols per constellation cluster. We use the Adam algorithm with a learning rate of 1e–4 as the optimizer [25]. The CNN is built using the Keras framework and TensorFlow library [26,27]. The Python code together with the entire dataset used to obtain the results presented in this paper is available for download [28].

Table 1 summarizes the configurations of the tested CNN structures. We vary the number of convolutional layers, and the number of filters and their kernel size. The tested CNNs consist of up to five convolutional layers. The kernel size is ${3} \times {3}$ (3,3) for all structures except structure 6, where it is set to ${5} \times {5}$ (5,5). As an example, Fig. 1 shows a schematic diagram of one of the CNN structures that we use for EVM estimation. It corresponds to structure 2. The convolutional layers are followed by two fully connected layers with 500 and 100 nodes, respectively.
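As a concrete illustration (not the authors' released code [28]), structure 4 can be assembled in Keras: four convolutional layers with 8, 16, 16, and 8 filters and (3,3) kernels (as summarized in Section 4), each followed by max-pooling, then the 500- and 100-node fully connected layers and a single linear output neuron. The input image resolution and the (2,2) pooling window are placeholder assumptions, since their exact values are not stated here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_structure4(input_shape=(100, 100, 1)):
    """Sketch of CNN structure 4 for EVM regression from grayscale
    constellation images. Input size is an illustrative assumption."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (8, 16, 16, 8):
        # each convolutional layer is followed by a max-pooling operation
        x = layers.Conv2D(filters, (3, 3), activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(500, activation="relu")(x)
    x = layers.Dense(100, activation="relu")(x)
    outputs = layers.Dense(1, activation="linear")(x)  # estimated EVM
    model = tf.keras.Model(inputs, outputs)
    # MSLE loss (Eq. (3)) and Adam with a 1e-4 learning rate, as in the text
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="mean_squared_logarithmic_error")
    return model
```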

Fig. 5. Validation loss across training epochs for different CNN structures.

Figure 5 shows the validation loss for the tested configurations. From the figure, one can see that structures 1 to 6 converge, whereas structures 7 and 8 fail to learn EVM information. Further investigation indicates that training these structures with mean square error (MSE) as the loss function would enable the network to decrease the loss over the epochs. The MSE is defined as

$${\rm MSE} = \frac{1}{k}\sum_{i=1}^{k} \left({\rm EVM}_{t_i} - {\rm EVM}_{e_i}\right)^2.$$
Yet, even after converging with the MSE loss, structures 7 and 8 still show the worst performance among the tested structures, and their loss does not improve over subsequent epochs. Comparing structures 1 to 6, the loss performance improves with the structure complexity up to a certain extent, while a further increase in complexity degrades the performance. Structures 4 and 5 ensure similarly low validation losses. However, considering the computational complexity, structure 4 is more favorable, as it balances the trade-off between model complexity and EVM estimation accuracy.
Fig. 6. Mean absolute error (MAE) of the estimated EVM values for the 300-symbol/cluster dataset with different layer configurations. The black solid line is the reference (Ref.) of the conventional method.

Fig. 7. Estimation performance for the test dataset of 300 symbols/cluster.

Figure 6 illustrates the estimation MAE for the 300-symbol/cluster dataset. The MAE is defined as

$${\rm MAE}\,[\%] = \frac{1}{n}\sum_{i=1}^{n} \left|{\rm EVM}_{t_i}[\%] - {\rm EVM}_{e_i}[\%]\right|,$$
where $n$ is the number of images in the test dataset. For structures 1 to 5, the estimation MAE steadily decreases with increasing OSNR for all three modulation formats. Figure 6 also reveals the conditions under which structure 6 performs worse than structures 4 and 5: for OSNRs above 30 dB, the EVM of 16QAM and 64QAM signals cannot be estimated as accurately with structure 6 as with structures 4 and 5. Therefore, these four-layer structures best balance the trade-off between model complexity and EVM estimation accuracy. These results also show that the conventional EVM estimation method, shown as “Ref.” curves in Fig. 6, provides comparable performance for QPSK signals. However, its accuracy is worse for 16QAM and 64QAM signals, which can be explained by the higher number of clusters. As the modulation order increases, the CNN-based estimation has more information available for feature extraction and thus achieves more accurate EVM estimation. In contrast, this additional information may be averaged out when the conventional centroid-based EVM estimation is used.
Fig. 8. BER calculated from the true (solid line) and estimated (+) EVMs for a 300-symbol/cluster dataset using the proposed CNN scheme with structure 4.

We further evaluate the deviation of the EVM estimation for the 300-symbol/cluster dataset. In Fig. 7, we show two-layer (structure 1) and four-layer (structures 4 and 5) CNN performance for the test dataset in terms of EVM deviation and its normalized value. The normalized EVM deviation is calculated as follows:

$${\rm Normalized\ EVM\ deviation}\,[\%] = \frac{{\rm EVM}_e - {\rm EVM}_t}{{\rm EVM}_t} \times 100.$$
The results in Fig. 7 show that all three structures ensure an EVM deviation below 4%, which corresponds to a normalized EVM deviation within 15%. Since in practice the BER is a more commonly used measure for signal quality monitoring, we select structure 4 and the 300-symbol/cluster dataset to quantify the estimation accuracy under the assumption of an AWGN channel. The distribution of the BER calculated from the estimated EVM is shown in Fig. 8. The solid lines are the BER calculated from the true EVM (${{\rm EVM}_t}$). For each considered combination of OSNR and modulation format, 25 estimations are obtained. One can see that the BER fluctuations, caused by the EVM estimation error, are not significant enough to trigger a false alarm, especially when operating close to the BER threshold of a certain FEC code. Therefore, the proposed scheme represents an accurate tool for signal quality monitoring relying on EVM estimation.

To investigate how long the observation period should be for accurate EVM estimation, we numerically evaluate the proposed scheme using datasets with 10 to 500 symbols/cluster. We use structure 4 as the best-performing structure and show its estimation accuracy in Fig. 9, expressed as the MAE and normalized MAE. The normalized MAE is defined as

$${\rm Normalized\ MAE}\,[\%] = \frac{\rm MAE}{{\rm EVM}_t} \times 100.$$
The normalized MAE allows us to compare the performance of the proposed scheme across different modulation formats and their true EVM values. From Fig. 9, one can observe that the estimation is more accurate for higher OSNRs, as the symbols are located closer to a reference point. Yet we see that only 100 symbols/cluster is sufficient to ensure a normalized MAE of 3.7% for QPSK, 2.2% for 16QAM, and 1.1% for 64QAM signals. Considering the order of the estimated values, such small errors result in a negligible EVM fluctuation that does not impact the system status even when operating close to the FEC limit.
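The MAE of Eq. (5) and the normalized MAE of Eq. (7) are straightforward to compute; the sketch below is illustrative, with the true EVM taken as the (constant) operating-point value averaged over the test images.

```python
import numpy as np

def mae_percent(evm_true_pct, evm_est_pct):
    """MAE in EVM percentage points, Eq. (5)."""
    evm_true_pct = np.asarray(evm_true_pct, dtype=float)
    evm_est_pct = np.asarray(evm_est_pct, dtype=float)
    return np.mean(np.abs(evm_true_pct - evm_est_pct))

def normalized_mae_percent(evm_true_pct, evm_est_pct):
    """Normalized MAE relative to the true EVM, Eq. (7)."""
    return 100.0 * mae_percent(evm_true_pct, evm_est_pct) / np.mean(evm_true_pct)
```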

Fig. 9. Test performance of the proposed EVM monitoring scheme relying on CNN structure 4 and datasets containing constellation diagrams with 10 to 500 symbols/cluster. The Ref. curves are baselines obtained using the conventional method applied for the 100-symbol/cluster dataset. (a) QPSK, (b) 16QAM, and (c) 64QAM.

Fig. 10. True versus estimated EVM with respect to transmission distance. (a) QPSK, (b) 16QAM, and (c) 64QAM. M, MAE; N, normalized MAE.

To test the performance of the proposed EVM estimation scheme for signal quality monitoring after long-haul transmission, we build an amplified fiber-optic link using 100 km long spans of standard single-mode fiber (SSMF; CD coefficient ${\rm D} = 16 \times 10^{-6}\;{\rm s/m^2}$, attenuation coefficient $\alpha = 0.2\;{\rm dB/km}$, and nonlinear refractive index $n = 2.6 \times 10^{-20}\;{\rm m^2/W}$). We set the OSNR at the transmitter to 45 dB and control it after every two to four 100 km spans using an optical spectrum analyzer. In such a way, we know the exact OSNR value at the specific point of the link where we also collect constellation diagrams for EVM estimation and accuracy analysis. Figure 10 shows how the true EVM, estimated EVM, and MAE evolve with the transmission distance. The proposed scheme provides a normalized MAE below 6.2% (QPSK), 2.6% (16QAM), and 2.8% (64QAM) using structure 4 and only 100 symbols per constellation cluster. These results indicate that the proposed scheme achieves good generalization capability. However, lower errors might be achieved by a structure specialized for long-haul transmission; this will be addressed in future research, as it deserves special attention.

The computation time is also evaluated for structure 4 and the conventional approach on a computer powered by an Intel Xeon E5-2630-v3 processor running at 2.4 GHz with 64 GB of RAM and a GTX TITAN Black graphics card. For the 100-symbol/cluster dataset, the inference time of the conventional estimation method is 11.2 ms, 14.5 ms, and 35.7 ms for QPSK, 16QAM, and 64QAM, respectively. For the proposed EVM estimation scheme, the inference on a single constellation diagram takes approximately 2.7 ms regardless of the modulation format. Such a fast inference time makes the proposed scheme a viable candidate for real-time OPM. Note that the training time for 1500 training samples (constellations) is 1400 s (7 s per epoch), but training is performed offline.

4. CONCLUSION

A CNN-based EVM estimation scheme is proposed for signal quality monitoring in coherent optical communication systems. It relies on images of complex signal constellation diagrams fed into a low-complexity regression model that consists of interleaved convolutional and max-pooling layers. The performance of the proposed scheme is validated with 32 Gbaud QPSK, 16QAM, and 64QAM signals at different OSNR values of practical interest. Also, two different transmission configurations are tested: an optical back-to-back setup, representing an AWGN-impaired optical channel, and a long-haul (${\gt}{1000}\;{\rm km}$) fiber transmission, including both AWGN and fiber-nonlinearity-induced noise. The estimation accuracy is investigated considering the CNN architecture and the number of symbols in the constellation diagrams. The results show that CNN structures consisting of two to five convolutional layers ensure the best performance in terms of computational complexity and EVM estimation accuracy. A further increase in complexity might be inadequate for the specific problem and thus lead to the degradation of estimation accuracy. The four-layer CNN architecture with structure 4 [${8} + {16} + {16} + {8}$ filters per layer and (3,3) kernel] provides the most accurate EVM estimation regardless of the OSNR values in the system. For the AWGN-impaired channel, a normalized MAE of 3.7% for QPSK, 2.2% for 16QAM, and 1.1% for 64QAM is achieved with only 100 symbols per cluster in the complex signal constellation diagram. The corresponding values for long-haul fiber transmission are 6.2% for QPSK after 2000 km, 2.6% for 16QAM after 1500 km, and 2.8% for 64QAM after 1000 km. Such accuracy, together with a short inference time of 2.7 ms, makes it possible to consider the proposed scheme as an enabler for intelligent OPM.

Funding

European Regional Development Fund (1.1.1.2/VIAA/4/20/660); Vetenskapsrådet (2016-04510, 2019-05197); RISE.

Acknowledgment

This work was supported by the ERDF-funded project CARAT (1.1.1.2/VIAA/4/20/660), the Swedish Research Council (Vetenskapsrådet) within the project PHASE (2016-04510) and the project 2019-05197, and the RISE project “AI in optical transmission.”

REFERENCES

1. R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation formats,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).

2. D. C. Kilper, R. Bach, D. J. Blumenthal, D. Einstein, T. Landolsi, L. Ostar, M. Preiss, and A. E. Willner, “Optical performance monitoring,” J. Lightwave Technol. 22, 294–304 (2004).

3. R. Shafik and A. R. Islam, “On the extended relationships among EVM, BER and SNR as performance metrics,” in Proceedings of the International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, December 2006, pp. 408–411.

4. F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).

5. F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “An optical communication’s perspective on machine learning and its applications,” J. Lightwave Technol. 37, 493–516 (2019).

6. F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “Machine learning methods for optical communication systems and networks,” in Optical Fiber Telecommunications, 7th ed. (Academic, 2019), Chap. 21.

7. F. N. Khan, K. Zhong, X. Zhou, W. H. Al-Arashi, C. Yu, C. Lu, and A. P. T. Lau, “Joint OSNR monitoring and modulation format identification in digital coherent receivers using deep neural networks,” Opt. Express 25, 17767–17776 (2017).

8. C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

9. Z. Wang, A. Yang, P. Guo, and P. He, “OSNR and nonlinear noise power estimation for optical fiber communication systems using LSTM based deep learning technique,” Opt. Express 26, 21346–21357 (2018).

10. C. Wang, S. Fu, H. Wu, and M. Luo, “Joint OSNR and CD monitoring in digital coherent receiver using long short-term memory neural network,” Opt. Express 27, 6936–6945 (2019).

11. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

12. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).

13. A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).

14. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.

15. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

16. D. Wang, M. Zhang, J. Li, Z. Li, J. Li, C. Song, and X. Chen, “Intelligent constellation diagram analyzer using convolutional neural network-based deep learning,” Opt. Express 25, 17150–17166 (2017).

17. S. D. Dods and T. B. Anderson, “Optical performance monitoring technique using delay tap asynchronous waveform sampling,” in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference, California, USA (2006), pp. 175–192.

18. Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (1995).

19. Y. Lecun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (2010), pp. 253–256.

20. K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” CoRR abs/1406.2199 (2014).

21. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” J. Mach. Learn. Res. 15, 315–323 (2011).

22. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. (Springer, 2009).

23. VPIphotonics GmbH, “VPItransmissionMaker 10,” 2020, https://www.vpiphotonics.com/.

24. Q. Zhang, Y. Yang, C. Guo, X. Zhou, Y. Yao, A. P. T. Lau, and C. Lu, “Accurate BER estimation scheme based on K-means clustering assisted Gaussian approach for arbitrary modulation format,” J. Lightwave Technol. 38, 2152–2157 (2020).

25. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference for Learning Representations (ICLR), California, USA (2015).

26. F. Chollet, “Keras,” 2020, https://keras.io.

27. M. Abadi, “TensorFlow: large-scale machine learning on heterogeneous distributed systems,” in Proceedings of the Conference on Language Resources and Evaluation, March 2016, pp. 3243–3249.

28. The code and dataset on GitHub: https://github.com/JhoneFan/2020_JOCN_EVM_Estimation_using_CNN.git.

Yuchuan Fan received the M.Sc. degree in electrical engineering from Tampere University of Technology, Finland, in 2017. He is currently pursuing his Ph.D. degree in applied physics from the School of Engineering Sciences at KTH Royal Institute of Technology in Stockholm, Sweden. His research interests include optical communication systems and machine learning application for the physical layer (PHY) aspect in fiber optics and networking.

Aleksejs Udalcovs (M’16) received his M.Sc. degree in telecommunications and the Ph.D. degree (Dr.sc.ing.) in electronic communications from Riga Technical University in 2011 and 2015, respectively. During 2012–2016, he worked at KTH Royal Institute of Technology as a Ph.D. Researcher within the Swedish Institute’s Visby program and then as a Postdoctoral Researcher within the EU project GRIFFON in Stockholm, Sweden. During this period, he became a member of the Kista High Speed Transmission Lab jointly owned and operated by KTH and by RISE Research Institutes of Sweden. In 2016, after receiving the research grant named “SCENE: Spectrum, cost and energy trade-offs in optical networks” from the Swedish ICT TNG consortium, he moved to RISE. Since 2019 he has been a Senior Scientist at RISE, working in cooperation with industry and academia providing his expertise in communication technologies. His main research interests are within the PHY-layer aspects in optical transport and photonic-wireless networks. Dr. Udalcovs is a (co-)author of more than 100 papers in peer-reviewed international journals and conferences. He has participated in numerous experimental activities in a number of research groups (Sweden: KTH, RISE; Germany: VPIphotonics GmbH; Belgium: Ghent University; France: III-V Lab; Denmark: DTU Fotonik) on fiber-optic transmission experiments, modeling of fiber-optic links and (sub-)systems, and optical network planning. Dr. Udalcovs is a Member of IEEE.

Xiaodan Pang (SM’19) received the M.Sc. degree from KTH Royal Institute of Technology, Sweden, in 2010, and the Ph.D. degree from DTU Fotonik, Technical University of Denmark, in 2013. He was a Postdoctoral Researcher at RISE Research Institutes of Sweden (formerly Acreo Swedish ICT) from October 2013 to March 2017 and then a Researcher with the KTH Optical Networks Lab (ONLab) from March 2017 to February 2018. From March 2018 to February 2020, he worked as a Staff Opto Engineer and a Marie Curie Research Fellow at Infinera Corporation. Since March 2020 he has been a Senior Researcher in the Department of Applied Physics, KTH Royal Institute of Technology, Sweden. He is/has been the PI of a Swedish Research Council Starting Grant, the EU H2020 Marie Curie Individual Fellowship Project NEWMAN, and a Swedish SRA ICT-TNG Post Doc project. Dr. Pang’s research focuses on ultrafast communications with MMW/THz, free-space optics, and fiber optics. He has authored or coauthored over 170 publications in journals and conferences. He has been a TPC member of 17 conferences in total, including OFC’20–21, ACP’18–20, and GLOBECOM’20. He is an IEEE Senior Member, an OSA Senior Member, and a Board Member of the IEEE Photonics Society Sweden Chapter.

Carlos Natalino (M’17) received the M.Sc. and Ph.D. degrees in electrical engineering from the Federal University of Pará, Brazil, in 2011 and 2016, respectively. He was a Post-Doctoral Fellow with the KTH Royal Institute of Technology from June 2016 to March 2019, where he was a Visiting Researcher from 2013 to 2014. He is currently a Post-Doctoral Researcher with Chalmers University of Technology. Dr. Natalino has authored/coauthored over 40 papers published in international conferences and journals. His research focuses on the application of machine learning techniques for the optimization and operation of optical networks. He has served as a TPC member in several international conferences and workshops and as a reviewer for several journals.

Marija Furdek (M’09-SM’17) obtained her Ph.D. and Dipl. Ing. degrees in electrical engineering from the University of Zagreb, Croatia, in 2012 and 2008, respectively. She has been an Assistant Professor at Chalmers University of Technology in Gothenburg, Sweden, since 2019. From 2013 to 2019 she was a Postdoc and then Senior Researcher at KTH Royal Institute of Technology in Stockholm, Sweden. She was a visiting researcher at Telecom Italia, Italy; Massachusetts Institute of Technology, USA; and Auckland University of Technology, New Zealand. Her research interests encompass optical network design and automation, with a focus on resiliency and physical-layer security. Dr. Furdek is the PI of the project “Safeguarding optical communication networks from cyber-security attacks,” funded by the Swedish Research Council. As (co)PI, work group leader, and researcher, Dr. Furdek has participated in several European, Swedish, and Croatian research projects with a wide network of collaborators from industry and academia. She has co-authored more than 100 scientific publications in international journals and conferences, five of which received best paper awards. She is currently serving as a General Chair of the Optical Network Design and Modeling (ONDM) conference and was a General Chair of the Photonic Networks and Devices conference, part of the OSA Advanced Photonics Congress 2016–2019. She is an Associate Editor of the IEEE/OSA Journal of Optical Communications and Networking and Photonic Network Communications and a Guest Editor of the IEEE/OSA Journal of Lightwave Technology and IEEE Journal of Selected Topics in Quantum Electronics. She is a Senior Member of IEEE and OSA.

Sergei Popov is a Professor in the Applied Physics Department at KTH Royal Institute of Technology, Stockholm, Sweden. He holds M.Sc. degrees in applied physics (1987) and computer science (1989) from Russia, and a Ph.D. degree in applied physics (1999) from Finland. His expertise covers optical communication, laser physics, plasmonics, and optical materials. He was with Ericsson Telecom AB and Acreo AB (both in Sweden) before joining KTH. Prof. Popov is an OSA Fellow and the Editor-in-Chief of the JEOS:RP journal (EOS), and he has published over 300 papers and conference contributions.

Oskars Ozolins (M’09) received his M.Sc. degree in telecommunications and his degree of Doctor of Engineering Science (Dr.sc.ing.) in electronics and telecommunications from Riga Technical University in Riga, Latvia, in 2009 and 2013, respectively. He is a Senior Scientist and a Technical Lead of the KTH/RISE Kista High-speed Transmission Lab at RISE Research Institutes of Sweden, where he is working under the Swedish Research Council starting grant project “Photonic-assisted signal processing techniques (PHASE).” He is also appointed as an Affiliated Faculty and Senior Researcher on optical communication in the Department of Applied Physics at KTH Royal Institute of Technology. His research interests are in the areas of digital and photonic-assisted signal processing techniques, high-speed short-reach communications and devices, optical and photonic-wireless interconnects, and single-photon quantum communication. In his professional career, Dr. Ozolins has been a guest researcher at III-V Lab (Nokia Bell Labs and Thales, France), Keysight Technologies (Böblingen, Germany), DTU Fotonik (Technical University of Denmark, Denmark), IDLab (Ghent University–imec, Belgium), OFO (KTH Royal Institute of Technology, Sweden), and FOTON laboratory (University of Rennes 1, France). Dr. Ozolins is (co-)author of more than 195 international journal publications, conference contributions, invited talks/tutorials/keynotes/lectures, patents, and book chapters.

References


  1. R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation formats,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
  2. D. C. Kilper, R. Bach, D. J. Blumenthal, D. Einstein, T. Landolsi, L. Ostar, M. Preiss, and A. E. Willner, “Optical performance monitoring,” J. Lightwave Technol. 22, 294–304 (2004).
  3. R. Shafik and A. R. Islam, “On the extended relationships among EVM, BER and SNR as performance metrics,” in Proceedings of the International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, December 2006, pp. 408–411.
  4. F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
  5. F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “An optical communication’s perspective on machine learning and its applications,” J. Lightwave Technol. 37, 493–516 (2019).
  6. F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “Machine learning methods for optical communication systems and networks,” in Optical Fiber Telecommunications, 7th ed. (Academic, 2019), Chap. 21.
  7. F. N. Khan, K. Zhong, X. Zhou, W. H. Al-Arashi, C. Yu, C. Lu, and A. P. T. Lau, “Joint OSNR monitoring and modulation format identification in digital coherent receivers using deep neural networks,” Opt. Express 25, 17767–17776 (2017).
  8. C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.
  9. Z. Wang, A. Yang, P. Guo, and P. He, “OSNR and nonlinear noise power estimation for optical fiber communication systems using LSTM based deep learning technique,” Opt. Express 26, 21346–21357 (2018).
  10. C. Wang, S. Fu, H. Wu, and M. Luo, “Joint OSNR and CD monitoring in digital coherent receiver using long short-term memory neural network,” Opt. Express 27, 6936–6945 (2019).
  11. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.
  12. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
  13. A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
  14. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.
  15. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).
  16. D. Wang, M. Zhang, J. Li, Z. Li, J. Li, C. Song, and X. Chen, “Intelligent constellation diagram analyzer using convolutional neural network-based deep learning,” Opt. Express 25, 17150–17166 (2017).
  17. S. D. Dods and T. B. Anderson, “Optical performance monitoring technique using delay tap asynchronous waveform sampling,” in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference, California, USA (2006), pp. 175–192.
  18. Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (1995).
  19. Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (2010), pp. 253–256.
  20. K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” CoRR abs/1406.2199 (2014).
  21. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” J. Mach. Learn. Res. 15, 315–323 (2011).
  22. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. (Springer, 2009).
  23. VPIphotonics GmbH, “VPItransmissionMaker10,” 2020, https://www.vpiphotonics.com/.
  24. Q. Zhang, Y. Yang, C. Guo, X. Zhou, Y. Yao, A. P. T. Lau, and C. Lu, “Accurate BER estimation scheme based on K-means clustering assisted Gaussian approach for arbitrary modulation format,” J. Lightwave Technol. 38, 2152–2157 (2020).
  25. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference for Learning Representations (ICLR), California, USA (2015).
  26. F. Chollet, “Keras,” 2020, https://keras.io.
  27. M. Abadi, “TensorFlow: large-scale machine learning on heterogeneous distributed systems,” arXiv:1603.04467 (2016).
  28. The code and dataset are available on GitHub: https://github.com/JhoneFan/2020_JOCN_EVM_Estimation_using_CNN.git.

2020 (1)

2019 (3)

2018 (1)

2017 (3)

2015 (1)

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

2012 (1)

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

2011 (1)

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural network,” J. Mach. Learn. Res. 15, 315–323 (2011).

2004 (1)

Abadi, M.

M. Abadi, “TensorFlow: large-scale machine learning on heterogeneous distributed systems,” in Proceedings of the Conference on Language Resources and Evaluation, March2016, pp. 3243–3249.

Al-Arashi, W. H.

Anderson, T. B.

S. D. Dods and T. B. Anderson, “Optical performance monitoring technique using delay tap asynchronous waveform sampling,” in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference, California, USA (2006), pp. 175–192.

Ba, J.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference for Learning Representations (ICLR), California, USA (2015).

Bach, R.

Becker, J.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Bengio, Y.

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural network,” J. Mach. Learn. Res. 15, 315–323 (2011).

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (1995).

Berg, A. C.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Bernstein, M.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Blumenthal, D. J.

Bordes, A.

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural network,” J. Mach. Learn. Res. 15, 315–323 (2011).

Chen, X.

Deng, J.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

Dods, S. D.

S. D. Dods and T. B. Anderson, “Optical performance monitoring technique using delay tap asynchronous waveform sampling,” in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference, California, USA (2006), pp. 175–192.

Dong, W.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

Dreschmann, M.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Einstein, D.

Fan, Q.

F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “An optical communication’s perspective on machine learning and its applications,” J. Lightwave Technol. 37, 493–516 (2019).
[Crossref]

F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “Machine learning methods for optical communication systems and networks,” in Optical Fiber Telecommunications, 7th ed. (Academic, 2019), Chap. 21.

Farabet, C.

Y. Lecun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (2010), pp. 253–256.

Fei-Fei, L.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

Freude, W.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Friedman, J.

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. (Springer, 2009).

Fu, S.

Furdek, M.

C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

Glorot, X.

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural network,” J. Mach. Learn. Res. 15, 315–323 (2011).

Guo, C.

Guo, P.

Hastie, T.

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. (Springer, 2009).

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.

He, P.

Hillerkuss, D.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Hinton, G.

A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

Huang, Z.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Hübner, M.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Islam, A. R.

R. Shafik and A. R. Islam, “On the extended relationships among EVM, BER and SNR as performance metrics,” in Proceedings of the International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, December2006, pp. 408–411.

Josten, A.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Karpathy, A.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Kavukcuoglu, K.

Y. Lecun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (2010), pp. 253–256.

Khan, F. N.

Khosla, A.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Kilper, D. C.

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference for Learning Representations (ICLR), California, USA (2015).

Koenig, S.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Koos, C.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Krause, J.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

Landolsi, T.

Lau, A. P. T.

Lecun, Y.

Y. Lecun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (2010), pp. 253–256.

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (1995).

Leuthold, J.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Li, J.

Li, K.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

Li, L.-J.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

Li, Z.

Lu, C.

Luo, M.

Ma, S.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Macaluso, I.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Meyer, J.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Musumeci, F.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Nag, A.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Natalino, C.

C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

Nebendahl, B.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Ostar, L.

Ozolins, O.

C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

Preiss, M.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.

Rottondi, C.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Ruffini, M.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Russakovsky, O.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Satheesh, S.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Schmogrow, R.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Shafik, R.

R. Shafik and A. R. Islam, “On the extended relationships among EVM, BER and SNR as performance metrics,” in Proceedings of the International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, December2006, pp. 408–411.

Simonyan, K.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” CoRR abs/1406.2199 (2014).

Socher, R.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

Song, C.

Su, H.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

Tibshirani, R.

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. (Springer, 2009).

Tornatore, M.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Udalcovs, A.

C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

Wang, C.

Wang, D.

Wang, Z.

Willner, A. E.

Winter, M.

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Wosinska, L.

C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

Wu, H.

Yang, A.

Yang, Y.

Yao, Y.

Yu, C.

Zhang, M.

Zhang, Q.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.

Zhong, K.

Zhou, X.

Zibar, D.

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

Zisserman, A.

K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” CoRR abs/1406.2199 (2014).

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

Commun. ACM (1)

A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

IEEE Commun. Surv. Tutorials (1)

F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” IEEE Commun. Surv. Tutorials 21, 1383–1408 (2019).
[Crossref]

IEEE Photon. Technol. Lett. (1)

R. Schmogrow, B. Nebendahl, M. Winter, A. Josten, D. Hillerkuss, S. Koenig, J. Meyer, M. Dreschmann, M. Hübner, C. Koos, J. Becker, W. Freude, and J. Leuthold, “Error vector magnitude as a performance measure for advanced modulation,” IEEE Photon. Technol. Lett. 24, 61–63 (2012).
[Crossref]

Int. J. Comput. Vis. (1)

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

J. Lightwave Technol. (3)

J. Mach. Learn. Res. (1)

X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural network,” J. Mach. Learn. Res. 15, 315–323 (2011).

Opt. Express (4)

Other (16)

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 248–255.

C. Natalino, A. Udalcovs, L. Wosinska, O. Ozolins, and M. Furdek, “One-shot learning for modulation format identification in evolving optical networks,” in OSA APC (IPR, Networks, NOMA, SPPCom, PVLED), OSA Technical Digest (Optical Society of America, 2019), paper JW4A.2.

F. N. Khan, Q. Fan, C. Lu, and A. P. T. Lau, “Machine learning methods for optical communication systems and networks,” in Optical Fiber Telecommunications, 7th ed. (Academic, 2019), Chap. 21.

R. Shafik and A. R. Islam, “On the extended relationships among EVM, BER and SNR as performance metrics,” in Proceedings of the International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, December2006, pp. 408–411.

S. D. Dods and T. B. Anderson, “Optical performance monitoring technique using delay tap asynchronous waveform sampling,” in Optical Fiber Communication Conference and the National Fiber Optic Engineers Conference, California, USA (2006), pp. 175–192.

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (1995).

Y. Lecun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (2010), pp. 253–256.

K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” CoRR abs/1406.2199 (2014).

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. (Springer, 2009).

VPIphotonics GmbH, “VPItransmissionMaker 10,” 2020, https://www.vpiphotonics.com/.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nevada, USA (2016), pp. 770–778.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in International Conference for Learning Representations (ICLR), California, USA (2015).

F. Chollet, “Keras,” 2020, https://keras.io.

M. Abadi, “TensorFlow: large-scale machine learning on heterogeneous distributed systems,” arXiv:1603.04467 (2016).

The code and dataset are available on GitHub: https://github.com/JhoneFan/2020_JOCN_EVM_Estimation_using_CNN.git



Figures (10)

Fig. 1. Schematic diagram of the CNN structure (two convolutional layers) for EVM estimation from constellation diagrams. K denotes the kernel size, F the number of filters, and S the stride step.
Fig. 2. Simulation setup for data collection. PRBS, pseudo-random binary sequence; CW, continuous wave; I/Q, in-phase/quadrature; MZM, Mach–Zehnder modulator; B2B, back-to-back; OBPF, optical bandpass filter; DSP, digital signal processing.
Fig. 3. True EVM values of simulated QPSK, 16QAM, and 64QAM signals with respect to the OSNR. (a)–(f) Corresponding constellation diagrams for the end-points of the considered OSNR ranges.
Fig. 4. Schematic diagram of the dataset collection. N is the number of symbols per cluster, M is the number of clusters in a complex signal constellation diagram, and L is the number of constellation diagrams used for training.
Fig. 5. Validation loss across training epochs for different CNN structures.
Fig. 6. Mean absolute error (MAE) of the estimated EVM values for the 300-symbol/cluster dataset with different layer configurations. The black solid line is the reference (Ref.) obtained with the conventional method.
Fig. 7. Estimation performance for the test dataset of 300 symbols/cluster.
Fig. 8. BER calculated from the true (solid line) and estimated (+) EVMs for a 300-symbol/cluster dataset using the proposed CNN scheme with structure 4.
Fig. 9. Test performance of the proposed EVM monitoring scheme relying on CNN structure 4 and datasets containing constellation diagrams with 10 to 500 symbols/cluster. The Ref. curves are baselines obtained using the conventional method applied to the 100-symbol/cluster dataset. (a) QPSK, (b) 16QAM, and (c) 64QAM.
Fig. 10. True versus estimated EVM with respect to transmission distance. (a) QPSK, (b) 16QAM, and (c) 64QAM. M, MAE; N, normalized MAE.

Tables (1)

Table 1. Configuration of Convolutional Layers for the Considered CNN Structures

Equations (7)

$$x_i^l = f\Bigl(\sum_{j=1}^{m^{l-1}} x_j^{l-1} \ast k_{i,j}^l + b_i^l\Bigr),$$

$$f(x) = \max(0, x).$$

$$\mathrm{MSLE} = \frac{1}{k}\sum_{i=1}^{k}\Bigl(\log\bigl(\mathrm{EVM}_t^i + 1\bigr) - \log\bigl(\mathrm{EVM}_e^i + 1\bigr)\Bigr)^2,$$

$$\mathrm{MSE} = \frac{1}{k}\sum_{i=1}^{k}\bigl(\mathrm{EVM}_t^i - \mathrm{EVM}_e^i\bigr)^2,$$

$$\mathrm{MAE}\,[\%] = \frac{1}{n}\sum_{i=1}^{n}\bigl|\mathrm{EVM}_t^i\,[\%] - \mathrm{EVM}_e^i\,[\%]\bigr|,$$

$$\text{Normalized EVM deviation}\,[\%] = \frac{\mathrm{EVM}_e - \mathrm{EVM}_t}{\mathrm{EVM}_t}\times 100.$$

$$\text{Normalized MAE}\,[\%] = \frac{\mathrm{MAE}}{\mathrm{EVM}_t}\times 100.$$
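The activation and evaluation metrics above can be sketched numerically. The following is a minimal NumPy illustration (function and variable names are my own, not from the paper's code); it assumes the EVM values are already expressed in percent:

```python
import numpy as np

def relu(x):
    # Rectified linear unit used in the CNN layers: f(x) = max(0, x)
    return np.maximum(0.0, x)

def msle(evm_true, evm_est):
    # Mean squared logarithmic error (the training loss above)
    return np.mean((np.log(evm_true + 1.0) - np.log(evm_est + 1.0)) ** 2)

def mse(evm_true, evm_est):
    # Mean squared error over the k samples
    return np.mean((evm_true - evm_est) ** 2)

def mae(evm_true, evm_est):
    # Mean absolute error in percent (inputs already in %)
    return np.mean(np.abs(evm_true - evm_est))

def normalized_mae(evm_true, evm_est):
    # MAE normalized by the true EVM level, expressed in percent
    return mae(evm_true, evm_est) / np.mean(evm_true) * 100.0

# Example: true vs. estimated EVM values (in %)
evm_t = np.array([10.0, 20.0, 30.0])
evm_e = np.array([11.0, 19.0, 31.0])
print(mae(evm_t, evm_e))             # 1.0
print(normalized_mae(evm_t, evm_e))  # 5.0
```

Note the +1 inside the logarithms of the MSLE, which keeps the loss finite for EVM values approaching zero.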