
A Transfer Reinforcement Learning Approach for Capacity Sharing in Beyond 5G Networks

Author

Listed:
  • Irene Vilà

    (Signal Theory and Communications Department (TSC), Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain)

  • Jordi Pérez-Romero

    (Signal Theory and Communications Department (TSC), Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain)

  • Oriol Sallent

    (Signal Theory and Communications Department (TSC), Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain)

Abstract

Reinforcement Learning (RL) techniques have been widely studied in the literature as a means to handle capacity sharing in 5G Radio Access Network (RAN) slicing. These algorithms rely on a training process to learn an optimal capacity sharing decision-making policy, which is then applied to the RAN environment during the inference stage. When relevant changes occur in the RAN, such as the deployment of new cells, RL-based capacity sharing solutions require re-training to update the decision-making policy, which may take a long time. To accelerate this process, this paper proposes a novel Transfer Learning (TL) approach for RL-based capacity sharing in multi-cell scenarios that can be implemented following the Open-RAN (O-RAN) architecture and exploits the computing resources available at the edge for the training/inference processes. The proposed approach transfers the weights of the previously learned policy to initialize the learning of the new policy used after the addition of new cells. The performance assessment of the TL solution highlights its ability to shorten the training of the policies when new cells are added. Considering that the roll-out of 5G networks will continue for several years, TL can contribute to making RL-based capacity sharing solutions more practical and feasible.
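To make the weight-transfer idea concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes a simple linear policy whose state and action dimensions grow with the number of cells; the per-cell sizes FEATURES_PER_CELL and ACTIONS_PER_CELL and both functions are hypothetical. The paper's solution uses a deep RL policy, which would transfer its layer weight tensors in an analogous block-wise way.

```python
# Minimal sketch (assumed, not the authors' code): transferring the weights learned
# for an n_old-cell scenario into a larger policy after new cells are deployed.
import numpy as np

FEATURES_PER_CELL = 4   # hypothetical per-cell state features (e.g., load, SLA metrics)
ACTIONS_PER_CELL = 3    # hypothetical per-cell capacity-share adjustments

def init_policy(n_cells, rng):
    """Randomly initialize a linear policy for an n_cells scenario."""
    in_dim = n_cells * FEATURES_PER_CELL
    out_dim = n_cells * ACTIONS_PER_CELL
    return rng.normal(0.0, 0.1, size=(out_dim, in_dim))

def transfer_policy(old_w, n_old, n_new, rng):
    """Copy the weights learned for the original cells into a larger policy
    for n_new > n_old cells; weights tied to the new cells start random."""
    new_w = init_policy(n_new, rng)
    rows = n_old * ACTIONS_PER_CELL
    cols = n_old * FEATURES_PER_CELL
    new_w[:rows, :cols] = old_w          # reuse what was already learned
    return new_w

rng = np.random.default_rng(0)
w_4_cells = init_policy(4, rng)                    # policy trained for the initial deployment
w_5_cells = transfer_policy(w_4_cells, 4, 5, rng)  # warm start after adding one cell
print(w_4_cells.shape, w_5_cells.shape)            # (12, 16) (15, 20)
```

The intent of such a warm start is that re-training after adding cells begins from a policy that already encodes the behavior learned for the existing cells, rather than from a random initialization, which is what shortens the training process.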

Suggested Citation

  • Irene Vilà & Jordi Pérez-Romero & Oriol Sallent, 2024. "A Transfer Reinforcement Learning Approach for Capacity Sharing in Beyond 5G Networks," Future Internet, MDPI, vol. 16(12), pages 1-17, November.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:12:p:434-:d:1526394

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/12/434/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/12/434/
    Download Restriction: no

    References listed on IDEAS

    1. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    2. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    3. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    4. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    5. Pinciroli, Luca & Baraldi, Piero & Ballabio, Guido & Compare, Michele & Zio, Enrico, 2022. "Optimization of the Operation and Maintenance of renewable energy systems by Deep Reinforcement Learning," Renewable Energy, Elsevier, vol. 183(C), pages 752-763.
    6. Lu, Jing & Meng, Yucan & Timmermans, Harry & Zhang, Anming, 2021. "Modeling hesitancy in airport choice: A comparison of discrete choice and machine learning methods," Transportation Research Part A: Policy and Practice, Elsevier, vol. 147(C), pages 230-250.
    7. Ngo, Vu Minh & Nguyen, Huan Huu & Van Nguyen, Phuc, 2023. "Does reinforcement learning outperform deep learning and traditional portfolio optimization models in frontier and developed financial markets?," Research in International Business and Finance, Elsevier, vol. 65(C).
    8. Yinheng Li & Junhao Wang & Yijie Cao, 2019. "A General Framework on Enhancing Portfolio Management with Reinforcement Learning," Papers 1911.11880, arXiv.org, revised Oct 2023.
    9. Cheng, Haoxin & Li, Haihong & Dai, Qionglin & Yang, Junzhong, 2023. "A deep reinforcement learning method to control chaos synchronization between two identical chaotic systems," Chaos, Solitons & Fractals, Elsevier, vol. 174(C).
    10. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
    11. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & Franc{c}ois Chareyron, 2021. "Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting," Papers 2104.10483, arXiv.org, revised Apr 2021.
    12. Gang Huang & Xiaohua Zhou & Qingyang Song, 2020. "Deep Reinforcement Learning for Long-Short Portfolio Optimization," Papers 2012.13773, arXiv.org, revised Mar 2025.
    13. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    14. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    15. Lixiang Zhang & Yan Yan & Yaoguang Hu, 2024. "Deep reinforcement learning for dynamic scheduling of energy-efficient automated guided vehicles," Journal of Intelligent Manufacturing, Springer, vol. 35(8), pages 3875-3888, December.
    16. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    17. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    18. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    19. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    20. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:16:y:2024:i:12:p:434-:d:1526394. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.