Printed from https://ideas.repec.org/a/eee/energy/v305y2024ics0360544224016475.html

Deep reinforcement learning based energy management of a hybrid electricity-heat-hydrogen energy system with demand response

Author

Listed:
  • Ye, Jin
  • Wang, Xianlian
  • Hua, Qingsong
  • Sun, Li

Abstract

A hybrid electricity-heat-hydrogen energy system with demand response (DR) is promising for enhancing flexibility and energy efficiency. However, multi-energy coupling and source-load uncertainties make it challenging to efficiently schedule the energy flows of electricity generation, storage, and DR. To this end, this paper proposes a continuous deep reinforcement learning algorithm, specifically the deep deterministic policy gradient (DDPG), for energy management optimization. Different Markov decision processes are first employed to analyze and compare two kinds of incentive-based electro-thermal DR contracts, i.e., load curtailment and load shifting. Simulation results demonstrate the superiority of the proposed DDPG-based scheduling incorporating electro-thermal DR in terms of economy and sustainability: scheduling costs fall by 16.02 % under the load-curtailment contract and by 8.52 % under the load-shifting contract compared with scheduling without DR. Furthermore, the robustness of DDPG-based scheduling is verified against other algorithms under 60 random source-load scenarios. Compared with the results obtained by DDPG, DDPG-LC and DDPG-LS reduce the mean cost by 22.15 % and 12.84 % respectively, while their error relative to the theoretical optimum is only around 5 %. This near-optimality, together with rapid decision-making, demonstrates DDPG's capability for efficient real-time scheduling, thereby enhancing the system's adaptability to uncertain environments.
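To make the method named in the abstract concrete, the following is a minimal, hypothetical sketch of two DDPG building blocks it relies on: the bootstrapped temporal-difference target computed with target networks, and the Polyak ("soft") update by which target networks slowly track the online networks. The tiny linear stand-in networks and all function names are illustrative assumptions, not the paper's actual electricity-heat-hydrogen implementation.

```python
import numpy as np

GAMMA = 0.95   # discount factor (illustrative value)
TAU = 0.01     # soft target-update rate (illustrative value)

def td_target(r, s_next, actor_tgt, critic_tgt):
    """DDPG critic target: y = r + gamma * Q'(s', mu'(s')),
    where mu' and Q' are the *target* actor and critic."""
    a_next = actor_tgt(s_next)
    return r + GAMMA * critic_tgt(s_next, a_next)

def soft_update(w_target, w_online, tau=TAU):
    """Polyak averaging: target <- (1 - tau) * target + tau * online."""
    return (1.0 - tau) * w_target + tau * w_online

# Tiny linear stand-ins for the target networks (purely for the demo):
actor_tgt = lambda s: 0.5 * s                  # mu'(s)
critic_tgt = lambda s, a: 2.0 * s + 1.0 * a    # Q'(s, a)

# One critic target: y = -1 + 0.95 * (2*0.4 + 0.5*0.4) = -0.05
y = td_target(r=-1.0, s_next=0.4, actor_tgt=actor_tgt, critic_tgt=critic_tgt)

# One soft update of a two-parameter target network
w_tgt = soft_update(np.array([1.0, 0.0]), np.array([0.0, 1.0]))

print(y)       # -0.05
print(w_tgt)   # [0.99 0.01]
```

The slow-moving target networks are what stabilize the bootstrapped target `y`; in a full DDPG loop these two steps would run each iteration alongside minibatch sampling from a replay buffer and gradient updates of the online actor and critic.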

Suggested Citation

  • Ye, Jin & Wang, Xianlian & Hua, Qingsong & Sun, Li, 2024. "Deep reinforcement learning based energy management of a hybrid electricity-heat-hydrogen energy system with demand response," Energy, Elsevier, vol. 305(C).
  • Handle: RePEc:eee:energy:v:305:y:2024:i:c:s0360544224016475
    DOI: 10.1016/j.energy.2024.131874

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544224016475
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2024.131874?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    3. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    4. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    5. Guo, Yuxiang & Qu, Shengli & Wang, Chuang & Xing, Ziwen & Duan, Kaiwen, 2024. "Optimal dynamic thermal management for data center via soft actor-critic algorithm with dynamic control interval and combined-value state space," Applied Energy, Elsevier, vol. 373(C).
    6. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    7. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    8. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    9. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    10. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    11. Mohammad Mahdi Forootan & Iman Larki & Rahim Zahedi & Abolfazl Ahmadi, 2022. "Machine Learning and Deep Learning in Energy Systems: A Review," Sustainability, MDPI, vol. 14(8), pages 1-49, April.
    12. Ahmed Abdelaziz & Vitor Santos & Miguel Sales Dias, 2021. "Machine Learning Techniques in the Energy Consumption of Buildings: A Systematic Literature Review Using Text Mining and Bibliometric Analysis," Energies, MDPI, vol. 14(22), pages 1-31, November.
    13. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    14. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    15. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    16. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    17. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    18. Golmohamadi, Hessam, 2022. "Demand-side management in industrial sector: A review of heavy industries," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
    19. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    20. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:305:y:2024:i:c:s0360544224016475. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.