
Interior-point policy optimization based multi-agent deep reinforcement learning method for secure home energy management under various uncertainties

Author

Listed:
  • Zhang, Yiwen
  • Lin, Rui
  • Mei, Zhen
  • Lyu, Minghao
  • Jiang, Huaiguang
  • Xue, Ying
  • Zhang, Jun
  • Gao, David Wenzhong

Abstract

Improving the efficiency of home energy management (HEM) is of great significance in reducing resource waste and promoting renewable energy consumption. With the development of advanced HEM systems and internet-of-things technology, more residents are willing to participate in the electricity market through demand response mechanisms, which not only reduce electricity bills but also stabilize the operation of the main grid through peak shaving and valley filling. However, uncertainties in household appliance parameters, user behavior, renewable energy penetration, and electricity prices make the HEM problem a non-trivial task. Although previous deep reinforcement learning (DRL) methods require no knowledge of system dynamics thanks to their model-free, data-driven nature, their unrestricted action spaces may violate the physical constraints of various devices, and few prior works have addressed this issue. To tackle these challenges, this paper proposes a novel interior-point policy optimization-based multi-agent DRL algorithm that optimizes home energy scheduling while guaranteeing safe operation. In addition, a time-series prediction model based on a non-stationary Transformer neural network is proposed to forecast solar generation and electricity prices, providing the agents with rich information for better decisions. Extensive numerical experiments demonstrate the superior performance of the proposed predictive-control coupled method, which achieves more accurate predictions, near-zero constraint violations, and a better trade-off between cost and safety than several benchmark methods.
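For readers unfamiliar with the technique named in the title: the core idea behind interior-point policy optimization methods is to augment a standard policy-gradient surrogate with a logarithmic barrier on the expected constraint cost, so that gradient ascent is repelled from the constraint boundary rather than relying on post-hoc action clipping. The following is a minimal sketch of that idea in Python/PyTorch, not the paper's implementation; it assumes a PPO-style clipped surrogate, and every name (ipo_loss, cost_limit, the barrier coefficient t) is illustrative rather than taken from the article.

    import torch

    def ipo_loss(logp_new, logp_old, adv, cost_adv,
                 avg_episode_cost, cost_limit, clip_eps=0.2, t=50.0):
        # PPO-style clipped surrogate for the reward objective (maximized).
        ratio = torch.exp(logp_new - logp_old)
        surrogate = torch.min(
            ratio * adv,
            torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv,
        ).mean()

        # First-order estimate of the constraint cost under the new policy,
        # built from importance ratios and cost advantages.
        cost_estimate = avg_episode_cost + (ratio * cost_adv).mean()

        # Logarithmic barrier: near zero deep inside the feasible region,
        # diverging as the estimated cost approaches the limit from below.
        slack = cost_limit - cost_estimate
        barrier = torch.log(torch.clamp(slack, min=1e-8)) / t

        # Minimize the negated objective: reward surrogate plus barrier term.
        return -(surrogate + barrier)

    # Illustrative call with dummy data; real inputs come from rollout buffers.
    n = 64
    loss = ipo_loss(torch.randn(n), torch.randn(n), torch.randn(n),
                    torch.randn(n), avg_episode_cost=3.0, cost_limit=5.0)

Raising t sharpens the barrier toward a hard indicator of the feasible region, and the same penalty can be replicated once per device constraint, which suggests how a multi-agent variant could keep each appliance within its physical limits.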

Suggested Citation

  • Zhang, Yiwen & Lin, Rui & Mei, Zhen & Lyu, Minghao & Jiang, Huaiguang & Xue, Ying & Zhang, Jun & Gao, David Wenzhong, 2024. "Interior-point policy optimization based multi-agent deep reinforcement learning method for secure home energy management under various uncertainties," Applied Energy, Elsevier, vol. 376(PA).
  • Handle: RePEc:eee:appene:v:376:y:2024:i:pa:s0306261924015381
    DOI: 10.1016/j.apenergy.2024.124155

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924015381
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.124155?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    2. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    3. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    4. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    5. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    6. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    7. Ren, Kezheng & Liu, Jun & Wu, Zeyang & Liu, Xinglei & Nie, Yongxin & Xu, Haitao, 2024. "A data-driven DRL-based home energy management system optimization framework considering uncertain household parameters," Applied Energy, Elsevier, vol. 355(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    3. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    4. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    5. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    6. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    7. Alabi, Tobi Michael & Lu, Lin & Yang, Zaiyue, 2024. "Real-time automatic control of multi-energy system for smart district community: A coupling ensemble prediction model and safe deep reinforcement learning," Energy, Elsevier, vol. 304(C).
    8. Esmaeili Aliabadi, Danial & Chan, Katrina, 2022. "The emerging threat of artificial intelligence on competition in liberalized electricity markets: A deep Q-network approach," Applied Energy, Elsevier, vol. 325(C).
    9. Michael Bachseitz & Muhammad Sheryar & David Schmitt & Thorsten Summ & Christoph Trinkl & Wilfried Zörner, 2024. "PV-Optimized Heat Pump Control in Multi-Family Buildings Using a Reinforcement Learning Approach," Energies, MDPI, vol. 17(8), pages 1-16, April.
    10. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    11. Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
    12. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    13. Prabawa, Panggah & Choi, Dae-Hyun, 2024. "Safe deep reinforcement learning-assisted two-stage energy management for active power distribution networks with hydrogen fueling stations," Applied Energy, Elsevier, vol. 375(C).
    14. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    15. Xu, Xuesong & Xu, Kai & Zeng, Ziyang & Tang, Jiale & He, Yuanxing & Shi, Guangze & Zhang, Tao, 2024. "Collaborative optimization of multi-energy multi-microgrid system: A hierarchical trust-region multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 375(C).
    16. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    17. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    18. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    19. Shi, Zhongtuo & Yao, Wei & Li, Zhouping & Zeng, Lingkang & Zhao, Yifan & Zhang, Runfeng & Tang, Yong & Wen, Jinyu, 2020. "Artificial intelligence techniques for stability analysis and control in smart grids: Methodologies, applications, challenges and future directions," Applied Energy, Elsevier, vol. 278(C).
    20. Xue, Lin & Zhang, Yao & Wang, Jianxue & Li, Haotian & Li, Fangshi, 2024. "Privacy-preserving multi-level co-regulation of VPPs via hierarchical safe deep reinforcement learning," Applied Energy, Elsevier, vol. 371(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:376:y:2024:i:pa:s0306261924015381. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.