Abstract
To circumvent the non-parallelizability of recurrent neural network-based equalizers, we propose knowledge distillation to recast the RNN into a parallelizable feed-forward structure. The latter achieves a 38% latency reduction while degrading the Q-factor by only 0.5 dB.
© 2023 The Author(s)
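The abstract does not specify the architectures or the training recipe, so the following is only a minimal sketch of the core idea: a recurrent teacher equalizer whose outputs supervise a feed-forward student that can process all symbol windows in parallel. It assumes a PyTorch setting; the module names, layer sizes, window length, and the MSE distillation loss are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class TeacherRNN(nn.Module):
    """Hypothetical recurrent equalizer (the non-parallelizable teacher)."""
    def __init__(self, taps=41, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):            # x: (batch, taps, 2) I/Q samples
        h, _ = self.rnn(x)           # sequential state updates along the window
        return self.head(h[:, -1])   # equalized symbol from the final state

class StudentFFNN(nn.Module):
    """Hypothetical feed-forward student: the window is flattened, so every
    symbol is equalized by one parallelizable forward pass."""
    def __init__(self, taps=41, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(taps * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        return self.net(x)

teacher, student = TeacherRNN(), StudentFFNN()
teacher.eval()                       # pretrained teacher is frozen in this sketch
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 41, 2)          # stand-in for received symbol windows
with torch.no_grad():
    soft_targets = teacher(x)        # teacher outputs act as distillation labels
loss = loss_fn(student(x), soft_targets)
loss.backward()
opt.step()
```

In this response-based distillation view, the student never needs the recurrent state: once trained, it equalizes all windows in a batch concurrently, which is the source of the latency reduction the abstract reports.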