Author
Listed:
- Wang, Can
- Wang, Mingchao
- Wang, Aoqi
- Zhang, Xiaojia
- Zhang, Jiaheng
- Ma, Hui
- Yang, Nan
- Zhao, Zhuoli
- Lai, Chun Sing
- Lai, Loi Lei
Abstract
With the rapid development of smart home technology, residential microgrid (RM) clusters have become an important way to utilize the demand-side resources of large-scale housing. However, existing RM cluster optimization methods face several key problems, such as difficulty in adapting to locally observable environments and poor privacy and scalability. Therefore, this paper proposes a multi-agent deep reinforcement learning (MADRL)-based optimal operation method for RM clusters. First, with the aim of minimizing the energy cost of each residence while satisfying residents' comfort and avoiding transformer overload, the optimal scheduling problem of an RM cluster is formulated as a Markov game with an unknown state transition probability function. Then, a novel MADRL method is proposed to determine the optimal operation strategy of multiple RMs within this game paradigm. Each agent in the proposed method contains a collective strategy model and an independent learner. The collective strategy model simulates the energy consumption of the other RMs in the system and reflects their operating behavior. The independent learner, based on a soft actor-critic (SAC) framework, learns the optimal scheduling strategy by interacting with the environment. The proposed method has a completely decentralized and scalable structure and can handle continuous, high-dimensional state and action spaces while requiring only local observations and approximations during training. Finally, a numerical example verifies that the proposed method not only learns a stable cooperative energy management strategy but also extends to large-scale RM cluster problems, giving it strong scalability and high potential for practical application.
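The abstract describes each agent as an independent SAC learner paired with a collective strategy model that estimates the other residences' behavior from local information. The sketch below is a minimal, hypothetical illustration of that per-agent structure; the network sizes, observation/action dimensions, and module interfaces are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch of one residential-microgrid (RM) agent: an independent
# soft actor-critic (SAC) style learner plus a "collective strategy model" that
# predicts the aggregate behavior of the other RMs from local observations only.
# All dimensions and architectures below are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HIDDEN = 12, 3, 64   # assumed local observation / appliance-action sizes


class CollectiveStrategyModel(nn.Module):
    """Predicts the other RMs' aggregate energy consumption from local observations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))

    def forward(self, obs):
        return self.net(obs)            # estimated aggregate load of the other residences


class GaussianActor(nn.Module):
    """SAC policy: mean and log-std of a tanh-squashed Gaussian over appliance set-points."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(OBS_DIM + 1, HIDDEN), nn.ReLU())
        self.mu = nn.Linear(HIDDEN, ACT_DIM)
        self.log_std = nn.Linear(HIDDEN, ACT_DIM)

    def forward(self, obs, predicted_others):
        h = self.body(torch.cat([obs, predicted_others], dim=-1))
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        pre_tanh = dist.rsample()                       # reparameterised sample
        action = torch.tanh(pre_tanh)                   # bounded appliance actions
        # log-probability with the standard tanh correction used in SAC
        log_prob = (dist.log_prob(pre_tanh) - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
        return action, log_prob


class Critic(nn.Module):
    """Q-network over (local observation, predicted collective load, own action)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + 1 + ACT_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 1))

    def forward(self, obs, predicted_others, act):
        return self.net(torch.cat([obs, predicted_others, act], dim=-1))


if __name__ == "__main__":
    # One forward pass for a single agent, with random local observations standing in
    # for the RM environment (e.g. electricity price, indoor temperature, transformer load).
    strategy_model, actor, critic = CollectiveStrategyModel(), GaussianActor(), Critic()
    obs = torch.randn(1, OBS_DIM)
    others = strategy_model(obs)                 # local estimate of the other RMs' behavior
    action, log_prob = actor(obs, others)
    q_value = critic(obs, others, action)
    print(action.shape, log_prob.shape, q_value.shape)
```

In this reading, decentralization comes from each agent conditioning its policy and critic only on its own observations plus its local estimate of the collective load, rather than on other agents' private states; the actual coupling used in the paper may differ.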
Suggested Citation
Wang, Can & Wang, Mingchao & Wang, Aoqi & Zhang, Xiaojia & Zhang, Jiaheng & Ma, Hui & Yang, Nan & Zhao, Zhuoli & Lai, Chun Sing & Lai, Loi Lei, 2025.
"Multiagent deep reinforcement learning-based cooperative optimal operation with strong scalability for residential microgrid clusters,"
Energy, Elsevier, vol. 314(C).
Handle:
RePEc:eee:energy:v:314:y:2025:i:c:s0360544224039434
DOI: 10.1016/j.energy.2024.134165