Deep-reinforcement-learning-based RMSCA for space division multiplexing networks with multi-core fibers [Invited Tutorial]

Abstract

Escalating demands for network capacity are catalyzing the adoption of space division multiplexing (SDM) technologies. With continuous advances in multi-core fiber (MCF) fabrication, MCF-based SDM networks are positioned as a viable and promising solution for achieving higher transmission capacities in multi-dimensional optical networks. However, the extensive network resources that MCF-based SDM networks offer make it challenging for traditional routing, modulation, spectrum, and core allocation (RMSCA) methods to achieve adequate performance. This paper proposes an RMSCA approach based on deep reinforcement learning (DRL) for MCF-based elastic optical networks (MCF-EONs). Within the solution, a novel state representation carrying essential network information and a fragmentation-aware reward function were designed to direct the agent in learning effective RMSCA policies. Additionally, we adopted a proximal policy optimization (PPO) algorithm featuring an action mask to enhance the sampling efficiency of the DRL agent and speed up the training process. The performance of the proposed algorithm was evaluated on two different network topologies under varying traffic loads and with fibers of different core counts. The results confirm that the proposed algorithm outperforms heuristic algorithms and a state-of-the-art DRL-based RMSCA algorithm, reducing the service blocking probability by around 83% and 51%, respectively. Moreover, the proposed algorithm can be applied to networks with and without core-switching capability, and its inference complexity is compatible with real-world deployment requirements.
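The abstract names two algorithmic ingredients: an action mask that prevents the PPO agent from sampling infeasible (route, core, spectrum) candidates, and a fragmentation-aware reward. The sketch below illustrates both ideas in a minimal, self-contained form; it is not the authors' implementation, and all names, shapes, and weights (masked_softmax, fragmentation_penalty, the 0.5 reward weight) are illustrative assumptions.

```python
# Minimal illustrative sketch, not the authors' code. Assumes a discrete
# action space enumerating (route, core, starting-slot) candidates and a
# boolean spectrum-occupancy array per core; names and weights are
# hypothetical.
import numpy as np

def masked_softmax(logits: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Action masking: drive the logits of infeasible actions to -inf
    so their sampling probability becomes exactly zero."""
    masked = np.where(mask, logits, -np.inf)
    shifted = masked - masked.max()          # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

def fragmentation_penalty(occupied: np.ndarray) -> float:
    """Toy fragmentation measure: 1 - (largest contiguous free block /
    total free slots). 0 means the free spectrum is fully contiguous."""
    free = ~occupied
    total_free = int(free.sum())
    if total_free == 0:
        return 0.0
    longest = run = 0
    for slot_is_free in free:
        run = run + 1 if slot_is_free else 0
        longest = max(longest, run)
    return 1.0 - longest / total_free

# --- usage example -------------------------------------------------
rng = np.random.default_rng(0)
logits = rng.normal(size=6)                  # policy-network outputs
feasible = np.array([1, 0, 1, 1, 0, 1], dtype=bool)
probs = masked_softmax(logits, feasible)
action = rng.choice(len(probs), p=probs)     # never picks a masked action

# Fragmentation-aware reward for an accepted request (the 0.5 weight is
# purely illustrative): higher reward when the chosen core stays compact.
core_occupancy = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=bool)
reward = 1.0 - 0.5 * fragmentation_penalty(core_occupancy)
```

In a full MCF-EON simulator, the mask would be derived from spectrum-contiguity and continuity checks along each candidate path, and the reward would additionally penalize blocked requests, consistent with the fragmentation-aware reward the abstract describes.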

© 2024 Optica Publishing Group

More Like This
Dynamic slicing of multidimensional resources in DCI-EON with penalty-aware deep reinforcement learning

Meng Lian, Yongli Zhao, Yajie Li, Avishek Nag, and Jie Zhang
J. Opt. Commun. Netw. 16(2) 112-126 (2024)

Experimental evaluation of a latency-aware routing and spectrum assignment mechanism based on deep reinforcement learning

C. Hernández-Chulde, R. Casellas, R. Martínez, R. Vilalta, and R. Muñoz
J. Opt. Commun. Netw. 15(11) 925-937 (2023)

Routing in optical transport networks with deep reinforcement learning

José Suárez-Varela, Albert Mestres, Junlin Yu, Li Kuang, Haoyu Feng, Albert Cabellos-Aparicio, and Pere Barlet-Ros
J. Opt. Commun. Netw. 11(11) 547-558 (2019)
