IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v367y2024ics0306261924007979.html

Scalable energy management approach of residential hybrid energy system using multi-agent deep reinforcement learning

Author

Listed:
  • Wang, Zixuan
  • Xiao, Fu
  • Ran, Yi
  • Li, Yanxue
  • Xu, Yang

Abstract

Deploying renewable energy and implementing smart energy management strategies are crucial for decarbonizing Building Energy Systems (BES). Despite recent advancements in data-driven Deep Reinforcement Learning (DRL) for BES optimization, significant challenges remain, such as the time-consuming and data-intensive nature of training DRL controllers and the complexity of environment dynamics in Multi-Agent Reinforcement Learning (MARL). These obstacles impede the synchronization and coordination of multi-agent control, leading to slow DRL convergence. To address these issues, this paper proposes a novel approach to optimizing hybrid building energy systems. We introduce an integrated system combining a multi-stage Proximal Policy Optimization (PPO) on-policy framework with Imitation Learning (IL), interacting with the model environment. To improve the scalability and robustness of Multi-agent Systems (MAS), this approach is designed to enhance training efficiency through centralized training and decentralized execution. Simulation results of case studies demonstrate the effectiveness of the Multi-agent Deep Reinforcement Learning (MADRL) model in optimizing the operation of hybrid building energy systems in terms of indoor thermal comfort and energy efficiency. Results show the proposed framework significantly improves performance, achieving convergence in just 50 episodes for dynamic decision-making. The scalability and robustness of the proposed model have been validated across various scenarios. Compared with the baseline during cold and warm weeks, the proposed control approach achieved improvements of 34.86% and 46.10% in energy self-sufficiency ratio, respectively. Additionally, the developed MADRL effectively improved solar photovoltaic (PV) self-consumption and reduced household energy costs.
Notably, it increased the average indoor temperature closer to the desired set-point by 1.33 °C, and improved the self-consumption ratio by 15.78% in the colder week and 18.47% in the warmer week, compared to baseline measurements. These findings highlight the advantages of the multi-stage PPO on-policy framework, enabling faster learning and reduced training time, resulting in cost-effective solutions and enhanced solar PV self-consumption.
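The abstract combines two ideas: centralized training with decentralized execution (a centralized critic sees all agents' observations during training, while each actor acts only on its local observation at run time), and an imitation-learning warm start that initializes policies from a rule-based expert before reinforcement learning begins. The following is a minimal, purely illustrative numpy sketch of that structure, not the authors' implementation; the agent count, observation sizes, and the thermostat-like expert rule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 household agents (e.g. HVAC, battery, water heater),
# each with a 4-dim local observation. Sizes are illustrative, not from the paper.
N_AGENTS, OBS_DIM = 3, 4

# Decentralized actors: one linear policy per agent over its LOCAL observation only.
actors = [rng.normal(scale=0.1, size=(OBS_DIM,)) for _ in range(N_AGENTS)]

# Centralized critic: sees the concatenation of ALL local observations.
# Used only during training, never at execution time.
critic_w = rng.normal(scale=0.1, size=(N_AGENTS * OBS_DIM,))

def act(local_obs):
    """Execution: each agent uses only its own observation."""
    return [float(w @ o) for w, o in zip(actors, local_obs)]

def critic_value(all_obs):
    """Training: centralized value estimate from the joint observation."""
    return float(critic_w @ np.concatenate(all_obs))

def warm_start(expert_fn, n_samples=256):
    """Imitation-learning warm start: fit each actor to a rule-based
    'expert' by least squares before any reinforcement learning begins."""
    for i in range(N_AGENTS):
        X = rng.normal(size=(n_samples, OBS_DIM))
        y = np.array([expert_fn(x) for x in X])
        actors[i], *_ = np.linalg.lstsq(X, y, rcond=None)

# Toy expert: a proportional thermostat-like rule on the first feature.
warm_start(lambda obs: 0.5 * obs[0])

obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
actions = act(obs)
value = critic_value(obs)
```

In a full pipeline, PPO updates would then refine the warm-started actors against the centralized critic's advantage estimates; the warm start is what lets training converge in few episodes, since the policies begin near sensible rule-based behavior rather than at random.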

Suggested Citation

  • Wang, Zixuan & Xiao, Fu & Ran, Yi & Li, Yanxue & Xu, Yang, 2024. "Scalable energy management approach of residential hybrid energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 367(C).
  • Handle: RePEc:eee:appene:v:367:y:2024:i:c:s0306261924007979
    DOI: 10.1016/j.apenergy.2024.123414

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924007979
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123414?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    2. Joanna Clarke & Justin Searle, 2021. "Active Building demonstrators for a low-carbon future," Nature Energy, Nature, vol. 6(12), pages 1087-1089, December.
    3. Bratislav Svetozarevic & Moritz Begle & Prageeth Jayathissa & Stefan Caranovic & Robert F. Shepherd & Zoltan Nagy & Illias Hischier & Johannes Hofer & Arno Schlueter, 2019. "Publisher Correction: Dynamic photovoltaic building envelopes for adaptive energy and comfort management," Nature Energy, Nature, vol. 4(8), pages 719-719, August.
    4. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    5. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    6. Qin, Haosen & Yu, Zhen & Li, Tailu & Liu, Xueliang & Li, Li, 2023. "Energy-efficient heating control for nearly zero energy residential buildings with deep reinforcement learning," Energy, Elsevier, vol. 264(C).
    7. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    8. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    9. Jiang, C.X. & Jing, Z.X. & Cui, X.R. & Ji, T.Y. & Wu, Q.H., 2018. "Multiple agents and reinforcement learning for modelling charging loads of electric taxis," Applied Energy, Elsevier, vol. 222(C), pages 158-168.
    10. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    11. Xiaoyi Zhang & Weijun Gao & Yanxue Li & Zixuan Wang & Yoshiaki Ushifusa & Yingjun Ruan, 2021. "Operational Performance and Load Flexibility Analysis of Japanese Zero Energy House," IJERPH, MDPI, vol. 18(13), pages 1-19, June.
    12. Wu, Wenbo & Dong, Bing & Wang, Qi (Ryan) & Kong, Meng & Yan, Da & An, Jingjing & Liu, Yapan, 2020. "A novel mobility-based approach to derive urban-scale building occupant profiles and analyze impacts on building energy consumption," Applied Energy, Elsevier, vol. 278(C).
    13. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    14. Arroyo, Javier & Manna, Carlo & Spiessens, Fred & Helsen, Lieve, 2022. "Reinforced model predictive control (RL-MPC) for building energy management," Applied Energy, Elsevier, vol. 309(C).
    15. Bratislav Svetozarevic & Moritz Begle & Prageeth Jayathissa & Stefan Caranovic & Robert F. Shepherd & Zoltan Nagy & Illias Hischier & Johannes Hofer & Arno Schlueter, 2019. "Dynamic photovoltaic building envelopes for adaptive energy and comfort management," Nature Energy, Nature, vol. 4(8), pages 671-682, August.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Gao, Yuan & Hu, Zehuan & Chen, Wei-An & Liu, Mingzhe, 2024. "Solutions to the insufficiency of label data in renewable energy forecasting: A comparative and integrative analysis of domain adaptation and fine-tuning," Energy, Elsevier, vol. 302(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    2. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    3. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    4. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    5. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    6. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    7. Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Application-oriented assessment of grid-connected PV-battery system with deep reinforcement learning in buildings considering electricity price dynamics," Applied Energy, Elsevier, vol. 364(C).
    8. Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
    9. Gao, Yuan & Miyata, Shohei & Akashi, Yasunori, 2023. "Energy saving and indoor temperature control for an office building using tube-based robust model predictive control," Applied Energy, Elsevier, vol. 341(C).
    10. Liang, Shen & Zheng, Hongfei & Wang, Xuanlin & Ma, Xinglong & Zhao, Zhiyong, 2022. "Design and performance validation on a solar louver with concentrating-photovoltaic-thermal modules," Renewable Energy, Elsevier, vol. 191(C), pages 71-83.
    11. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    12. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    13. Keerthana Sivamayil & Elakkiya Rajasekar & Belqasem Aljafari & Srete Nikolovski & Subramaniyaswamy Vairavasundaram & Indragandhi Vairavasundaram, 2023. "A Systematic Study on Reinforcement Learning Based Applications," Energies, MDPI, vol. 16(3), pages 1-23, February.
    14. Deng, Xiangtian & Zhang, Yi & Jiang, Yi & Zhang, Yi & Qi, He, 2024. "A novel operation method for renewable building by combining distributed DC energy system and deep reinforcement learning," Applied Energy, Elsevier, vol. 353(PB).
    15. Huang, Xinyu & Li, Fangfei & Liu, Zhengguang & Gao, Xinyu & Yang, Xiaohu & Yan, Jinyue, 2023. "Design and optimization of a novel phase change photovoltaic thermal utilization structure for building envelope," Renewable Energy, Elsevier, vol. 218(C).
    16. Skandalos, Nikolaos & Wang, Meng & Kapsalis, Vasileios & D'Agostino, Delia & Parker, Danny & Bhuvad, Sushant Suresh & Udayraj, & Peng, Jinqing & Karamanis, Dimitris, 2022. "Building PV integration according to regional climate conditions: BIPV regional adaptability extending Köppen-Geiger climate classification against urban and climate-related temperature increases," Renewable and Sustainable Energy Reviews, Elsevier, vol. 169(C).
    17. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    18. Jiankai Gao & Yang Li & Bin Wang & Haibo Wu, 2023. "Multi-Microgrid Collaborative Optimization Scheduling Using an Improved Multi-Agent Soft Actor-Critic Algorithm," Energies, MDPI, vol. 16(7), pages 1-21, April.
    19. Cui, Can & Xue, Jing, 2024. "Energy and comfort aware operation of multi-zone HVAC system through preference-inspired deep reinforcement learning," Energy, Elsevier, vol. 292(C).
    20. Fang, Xi & Gong, Guangcai & Li, Guannan & Chun, Liang & Peng, Pei & Li, Wenqiang & Shi, Xing, 2023. "Cross temporal-spatial transferability investigation of deep reinforcement learning control strategy in the building HVAC system level," Energy, Elsevier, vol. 263(PB).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:367:y:2024:i:c:s0306261924007979. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.