
Multi-agent hierarchical reinforcement learning for energy management

Author

Listed:
  • Jendoubi, Imen
  • Bouffard, François

Abstract

Increasingly complex energy systems are turning attention towards model-free control approaches such as reinforcement learning (RL). This work proposes novel RL-based energy management approaches for scheduling the operation of controllable devices within an electric network. The proposed approaches provide a tool for efficiently solving multi-dimensional, multi-objective and partially observable power system problems. The novelty of this work is threefold. First, we implement a hierarchical RL-based control strategy to solve a typical energy scheduling problem. Second, multi-agent reinforcement learning (MARL) is put forward to coordinate different units efficiently with no communication burden. Third, a control strategy that merges hierarchical RL and MARL theory is proposed, yielding a robust control framework that can handle complex power system problems. A comparative performance evaluation of various RL-based and model-based control approaches is also presented. Experimental results for three typical energy dispatch scenarios show the effectiveness of the proposed control framework.
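
The abstract outlines a two-level design: a high-level (hierarchical) policy sets operating goals, while several device-level agents learn their own control actions without exchanging messages. As a rough illustration only, the sketch below expresses that idea with tabular Q-learning on a toy microgrid (a goal-picking manager, one battery agent, one flexible-load agent). The environment dynamics, action sets, goal options and reward are hypothetical placeholders and are not taken from the paper.

```python
# Minimal sketch of a hierarchical multi-agent control loop for energy
# scheduling. Everything here (goals, actions, prices, reward) is an
# illustrative placeholder, not the paper's model or algorithm.
import random
from collections import defaultdict

class QAgent:
    """Tabular Q-learning agent; one per controllable device (low level)."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

class Manager(QAgent):
    """High-level agent: picks an operating goal that conditions the workers."""
    pass

GOALS = ("charge_bias", "discharge_bias")   # hypothetical high-level options
BATTERY_ACTIONS = (-1, 0, 1)                # discharge / idle / charge (toy scale)
LOAD_ACTIONS = (0, 1)                       # defer / run flexible load

def toy_step(soc, hour, batt_a, load_a):
    """Illustrative microgrid transition: price peaks in the evening."""
    price = 1.0 if 17 <= hour <= 21 else 0.3
    soc = min(max(soc + batt_a, 0), 5)
    grid_import = max(load_a + batt_a, 0)   # toy net demand met by the grid
    return soc, -price * grid_import        # reward = negative energy cost

manager, battery, load = Manager(GOALS), QAgent(BATTERY_ACTIONS), QAgent(LOAD_ACTIONS)
for episode in range(200):
    soc = 2
    for hour in range(24):
        goal = manager.act(hour)                        # high level: pick goal
        s = (hour, soc, goal)                           # workers observe the goal
        batt_a, load_a = battery.act(s), load.act(s)    # low level: device actions
        soc_next, reward = toy_step(soc, hour, batt_a, load_a)
        s_next = (hour + 1, soc_next, goal)
        battery.update(s, batt_a, reward, s_next)       # each agent learns locally,
        load.update(s, load_a, reward, s_next)          # with no inter-agent messages
        manager.update(hour, goal, reward, hour + 1)
        soc = soc_next
```

Conditioning the low-level state on the manager's goal is what couples the two levels; each agent still updates only its own Q-table, which mirrors the "no communication burden" idea only at a very coarse level.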

Suggested Citation

  • Jendoubi, Imen & Bouffard, François, 2023. "Multi-agent hierarchical reinforcement learning for energy management," Applied Energy, Elsevier, vol. 332(C).
  • Handle: RePEc:eee:appene:v:332:y:2023:i:c:s0306261922017573
    DOI: 10.1016/j.apenergy.2022.120500

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922017573
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.120500?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Long, Chao & Wu, Jianzhong & Zhou, Yue & Jenkins, Nick, 2018. "Peer-to-peer energy sharing through a two-stage aggregated battery control in a community Microgrid," Applied Energy, Elsevier, vol. 226(C), pages 261-276.
    2. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    3. Christos-Spyridon Karavas & Konstantinos Arvanitis & George Papadakis, 2017. "A Game Theory Approach to Multi-Agent Decentralized Energy Management of Autonomous Polygeneration Microgrids," Energies, MDPI, vol. 10(11), pages 1-22, November.
    4. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    5. Watari, Daichi & Taniguchi, Ittetsu & Goverde, Hans & Manganiello, Patrizio & Shirazi, Elham & Catthoor, Francky & Onoye, Takao, 2021. "Multi-time scale energy management framework for smart PV systems mixing fast and slow dynamics," Applied Energy, Elsevier, vol. 289(C).
    6. Coelho, Vitor N. & Weiss Cohen, Miri & Coelho, Igor M. & Liu, Nian & Guimarães, Frederico Gadelha, 2017. "Multi-agent systems applied for energy systems integration: State-of-the-art applications and trends in microgrids," Applied Energy, Elsevier, vol. 187(C), pages 820-832.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    2. Hua, Min & Zhang, Cetengfei & Zhang, Fanggang & Li, Zhi & Yu, Xiaoli & Xu, Hongming & Zhou, Quan, 2023. "Energy management of multi-mode plug-in hybrid electric vehicle using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 348(C).
    3. Cheng, Xiu & Li, Wenbo & Yang, Jiameng & Zhang, Linling, 2023. "How convenience and informational tools shape waste separation behavior: A social network approach," Resources Policy, Elsevier, vol. 86(PB).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    2. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    3. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    4. Gabriel Santos & Pedro Faria & Zita Vale & Tiago Pinto & Juan M. Corchado, 2020. "Constrained Generation Bids in Local Electricity Markets: A Semantic Approach," Energies, MDPI, vol. 13(15), pages 1-27, August.
    5. Amrutha Raju Battula & Sandeep Vuddanti & Surender Reddy Salkuti, 2021. "Review of Energy Management System Approaches in Microgrids," Energies, MDPI, vol. 14(17), pages 1-32, September.
    6. Ruiqiu Yao & Yukun Hu & Liz Varga, 2023. "Applications of Agent-Based Methods in Multi-Energy Systems—A Systematic Literature Review," Energies, MDPI, vol. 16(5), pages 1-36, March.
    7. Jin, Xiaolong & Wu, Qiuwei & Jia, Hongjie, 2020. "Local flexibility markets: Literature review on concepts, models and clearing methods," Applied Energy, Elsevier, vol. 261(C).
    8. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    9. Yao, Ganzhou & Luo, Zirong & Lu, Zhongyue & Wang, Mangkuan & Shang, Jianzhong & Guerrerob, Josep M., 2023. "Unlocking the potential of wave energy conversion: A comprehensive evaluation of advanced maximum power point tracking techniques and hybrid strategies for sustainable energy harvesting," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    10. Yin, Linfei & Zhang, Bin, 2021. "Time series generative adversarial network controller for long-term smart generation control of microgrids," Applied Energy, Elsevier, vol. 281(C).
    11. Alqahtani, Mohammed & Hu, Mengqi, 2022. "Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning," Energy, Elsevier, vol. 244(PA).
    12. Park, Sung-Won & Zhang, Zhong & Li, Furong & Son, Sung-Yong, 2021. "Peer-to-peer trading-based efficient flexibility securing mechanism to support distribution system stability," Applied Energy, Elsevier, vol. 285(C).
    13. Kirchhoff, Hannes & Strunz, Kai, 2019. "Key drivers for successful development of peer-to-peer microgrids for swarm electrification," Applied Energy, Elsevier, vol. 244(C), pages 46-62.
    14. Lyu, Cheng & Jia, Youwei & Xu, Zhao, 2021. "Fully decentralized peer-to-peer energy sharing framework for smart buildings with local battery system and aggregated electric vehicles," Applied Energy, Elsevier, vol. 299(C).
    15. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    16. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    17. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    18. Anvari-Moghaddam, Amjad & Rahimi-Kian, Ashkan & Mirian, Maryam S. & Guerrero, Josep M., 2017. "A multi-agent based energy management solution for integrated buildings and microgrid system," Applied Energy, Elsevier, vol. 203(C), pages 41-56.
    19. Ning Wang & Weisheng Xu & Weihui Shao & Zhiyu Xu, 2019. "A Q-Cube Framework of Reinforcement Learning Algorithm for Continuous Double Auction among Microgrids," Energies, MDPI, vol. 12(15), pages 1-26, July.
    20. Jani, Ali & Karimi, Hamid & Jadid, Shahram, 2022. "Two-layer stochastic day-ahead and real-time energy management of networked microgrids considering integration of renewable energy resources," Applied Energy, Elsevier, vol. 323(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:332:y:2023:i:c:s0306261922017573. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.