Author
Listed:
- Haibo Chen
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China
These authors contributed equally to this work.)
- Zhongwei Huang
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China
These authors contributed equally to this work.)
- Xiaorong Zhao
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China)
- Xiao Liu
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China)
- Youjun Jiang
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China)
- Pinyong Geng
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China)
- Guang Yang
(College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
- Yewen Cao
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China)
- Deqiang Wang
(School of Information Science and Engineering, Shandong University, Qingdao 266237, China)
Abstract
Deep reinforcement learning (DRL) offers a practical solution to the power allocation problem in ultra-dense small cell networks. Unlike traditional algorithms, DRL methods can achieve low latency and operate without global real-time channel state information (CSI). In this paper, we propose a policy optimization of the power allocation algorithm (POPA) for small cell networks based on the actor–critic framework. The POPA adopts the proximal policy optimization (PPO) algorithm to update the policy, which in our simulations exhibits stable exploration and convergence. Thanks to the proposed actor–critic architecture with distributed execution and centralized exploration training, the POPA meets real-time requirements and offers multi-dimensional scalability. Simulations demonstrate that the POPA outperforms existing methods in terms of spectral efficiency. Our findings suggest that the POPA is of practical value for power allocation in small cell networks.
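The two key ingredients named in the abstract — a PPO policy update and a spectral-efficiency objective — can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function names, the single-interferer SINR model, and the clipping parameter are illustrative assumptions.

```python
import numpy as np

def spectral_efficiency(power, gain, interference, noise):
    """Shannon spectral efficiency (bits/s/Hz) for one link, assuming a
    simple SINR model: received power over interference plus noise."""
    sinr = power * gain / (interference + noise)
    return np.log2(1.0 + sinr)

def ppo_clip_loss(old_log_probs, new_log_probs, advantages, eps=0.2):
    """PPO clipped surrogate loss (the objective the abstract's policy
    update refers to). Returns the loss to minimize, i.e. the negative
    of the clipped objective averaged over a batch of transitions."""
    ratio = np.exp(new_log_probs - old_log_probs)     # importance ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the elementwise minimum bounds the policy update, which is
    # what gives PPO its stable exploration/convergence behavior.
    return -np.mean(np.minimum(unclipped, clipped))
```

In a distributed-execution setup like the one the abstract describes, each small cell would evaluate its local policy (producing `new_log_probs` for its chosen transmit power), while a central trainer aggregates transitions and applies the clipped update.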
Suggested Citation
Haibo Chen & Zhongwei Huang & Xiaorong Zhao & Xiao Liu & Youjun Jiang & Pinyong Geng & Guang Yang & Yewen Cao & Deqiang Wang, 2023.
"Policy Optimization of the Power Allocation Algorithm Based on the Actor–Critic Framework in Small Cell Networks,"
Mathematics, MDPI, vol. 11(7), pages 1-12, April.
Handle:
RePEc:gam:jmathe:v:11:y:2023:i:7:p:1702-:d:1114118