User and resource allocation in latency constrained Xhaul via reinforcement learning

Abstract

Flexible Ethernet (FlexE) is envisioned for provisioning different services and for hard slicing of the Xhaul in 5G and beyond networks. Efficient bandwidth utilization in the Xhaul requires traffic prediction for slot allocation in FlexE calendars. Further, if coordinated multipoint (CoMP) is used, allocating users to remote units (RUs) whose Xhaul path to the distributed unit/central unit has lower latency increases the achievable user bit rate. This paper explores the use of multi-agent deep reinforcement learning (DRL) for optimal slot allocation in a FlexE-enabled Xhaul carrying traffic generated through CoMP, and for offloading users among different RUs. In simulations, the DRL agent learns to predict input traffic patterns and to allocate slots in the FlexE calendar at the required granularity of 5 Gbps. The resulting gains are expressed as the reduction in mean over-allocation of FlexE calendar slots relative to the prediction obtained from an autoregressive integrated moving average (ARIMA) model; simulations indicate that DRL outperforms ARIMA-based prediction by up to 11.6%.
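
To give a concrete feel for the ARIMA comparison baseline and the 5 Gbps slot granularity mentioned in the abstract, the minimal sketch below fits an ARIMA model to a synthetic traffic trace, rounds each forecast up to whole FlexE calendar slots, and reports the mean over-allocation. It is an illustration only: the traffic trace, the ARIMA order (2, 1, 2), and the ceiling-based slot mapping are assumptions, and the paper's multi-agent DRL allocator and Xhaul/CoMP simulation are not reproduced here.

# Illustrative only: ARIMA traffic forecast mapped to 5 Gbps FlexE slots.
# The demand series and ARIMA order are hypothetical placeholders.
import math
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

SLOT_GBPS = 5  # FlexE calendar slot granularity referenced in the abstract

rng = np.random.default_rng(0)
# Hypothetical per-interval Xhaul demand (Gbps): periodic pattern plus noise.
t = np.arange(200)
traffic_gbps = 40 + 20 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 3, t.size)

train, test = traffic_gbps[:150], traffic_gbps[150:]

# Fit an ARIMA(2,1,2) model on the training window and forecast the rest.
fit_result = ARIMA(train, order=(2, 1, 2)).fit()
forecast = fit_result.forecast(steps=test.size)

# Round each forecast up to whole slots; over-allocation is the wasted capacity
# of the reserved slots relative to the actual demand.
slots = np.array([math.ceil(max(f, 0) / SLOT_GBPS) for f in forecast])
over_alloc = np.maximum(slots * SLOT_GBPS - test, 0)
print(f"mean over-allocation: {over_alloc.mean():.2f} Gbps "
      f"({over_alloc.mean() / SLOT_GBPS:.2f} slots)")

A DRL-based allocator, as studied in the paper, would replace the fixed ARIMA forecast with a learned policy that observes recent traffic and chooses the slot count directly, with a reward that penalizes both over-allocation and unmet demand.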

© 2023 Optica Publishing Group

More Like This
Experimental evaluation of a latency-aware routing and spectrum assignment mechanism based on deep reinforcement learning

C. Hernández-Chulde, R. Casellas, R. Martínez, R. Vilalta, and R. Muñoz
J. Opt. Commun. Netw. 15(11) 925-937 (2023)

Dynamic slicing of multidimensional resources in DCI-EON with penalty-aware deep reinforcement learning

Meng Lian, Yongli Zhao, Yajie Li, Avishek Nag, and Jie Zhang
J. Opt. Commun. Netw. 16(2) 112-126 (2024)

Resource-efficient and QoS guaranteed 5G RAN slice migration in elastic metro aggregation networks using heuristic-assisted deep reinforcement learning

Jiahua Gu, Min Zhu, Yunwu Wang, Xiaofeng Cai, Yuancheng Cai, Jiao Zhang, Mingzheng Lei, Bingchang Hua, Pingping Gu, and Guo Zhao
J. Opt. Commun. Netw. 15(11) 854-870 (2023)

Figures (12)

Tables (5)

Equations (13)
