
Research on energy management of hydrogen electric coupling system based on deep reinforcement learning

Author

Listed:
  • Shi, Tao
  • Xu, Chang
  • Dong, Wenhao
  • Zhou, Hangyu
  • Bokhari, Awais
  • Klemeš, Jiří Jaromír
  • Han, Ning

Abstract

This paper proposes a deep reinforcement learning-based energy management method for a hydrogen-electric coupling system, addressing the conversion, utilization, and jointly optimized operation of hydrogen, wind, and solar energy under demand-side information uncertainty in the smart grid. Based on wind power, photovoltaic generation, and load forecast information, the method uses a deep Q-network (DQN) to model the set of energy management strategies for the hydrogen-electric coupling system and obtains the optimal strategy through reinforcement learning, realizing demand-response-based optimal operation of the system. First, a research framework and equipment models for the integrated energy system are established on the basis of the energy management model. Building on the fundamentals of the reinforcement learning framework, the Q-learning algorithm, and the DQN algorithm, the experience replay and parameter-freezing (target network) mechanisms that improve DQN performance are analyzed, and energy management of the integrated energy system is optimized with economy as the objective. Comparing DQN configurations with different parameters on this energy management task, the simulation results show that algorithm performance improves after inheriting the strategy set, and they verify the feasibility and superiority of deep reinforcement learning over a genetic algorithm for integrated energy system energy management.
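The core machinery the abstract describes, a DQN with experience replay and a periodically frozen target network, can be sketched compactly. The Python/PyTorch sketch below is illustrative only: the toy environment, the four-dimensional state (wind, PV, load, hydrogen storage level), the three dispatch actions, and all hyperparameters are assumptions made for demonstration, not the authors' model, which is available only in the full text.

    # Minimal DQN sketch: experience replay + frozen target network,
    # applied to an assumed toy hydrogen-electric dispatch problem.
    import random
    from collections import deque

    import numpy as np
    import torch
    import torch.nn as nn

    STATE_DIM = 4   # assumed state: [wind output, PV output, load, H2 storage level]
    N_ACTIONS = 3   # assumed actions: 0 = idle, 1 = run electrolyser, 2 = run fuel cell
    GAMMA, LR, BATCH, SYNC_EVERY = 0.95, 1e-3, 32, 200  # illustrative hyperparameters

    def q_net():
        return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                             nn.Linear(64, N_ACTIONS))

    policy, target = q_net(), q_net()
    target.load_state_dict(policy.state_dict())  # "frozen" copy of the policy net
    opt = torch.optim.Adam(policy.parameters(), lr=LR)
    replay = deque(maxlen=10_000)                # experience replay buffer

    def toy_step(state, action):
        """Illustrative stand-in for the coupling-system model: the reward is
        the negative magnitude of any unmet demand after the chosen action."""
        wind, pv, load, h2 = state
        net = wind + pv - load + (0.5 if action == 2 else 0.0) - (0.5 if action == 1 else 0.0)
        h2 = float(np.clip(h2 + (0.4 if action == 1 else -0.6 if action == 2 else 0.0), 0.0, 1.0))
        reward = -abs(min(net, 0.0))
        next_state = np.array([*np.random.rand(3), h2], dtype=np.float32)
        return next_state, reward

    state = np.random.rand(STATE_DIM).astype(np.float32)
    eps = 1.0
    for step in range(5_000):
        # epsilon-greedy action selection over the policy network's Q-values
        if random.random() < eps:
            action = random.randrange(N_ACTIONS)
        else:
            with torch.no_grad():
                action = policy(torch.from_numpy(state)).argmax().item()
        next_state, reward = toy_step(state, action)
        replay.append((state, action, reward, next_state))
        state, eps = next_state, max(0.05, eps * 0.999)

        if len(replay) >= BATCH:
            s, a, r, s2 = zip(*random.sample(replay, BATCH))  # sample replay buffer
            s = torch.from_numpy(np.stack(s))
            s2 = torch.from_numpy(np.stack(s2))
            a = torch.tensor(a, dtype=torch.int64)
            r = torch.tensor(r, dtype=torch.float32)
            q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():  # frozen network supplies stable bootstrap targets
                y = r + GAMMA * target(s2).max(1).values
            loss = nn.functional.mse_loss(q, y)
            opt.zero_grad(); loss.backward(); opt.step()

        if step % SYNC_EVERY == 0:
            target.load_state_dict(policy.state_dict())  # periodic re-freeze

The parameter-freezing mechanism mentioned in the abstract corresponds here to copying the policy weights into the target network only every SYNC_EVERY steps, which keeps the bootstrap targets in the temporal-difference update stable between syncs.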

Suggested Citation

  • Shi, Tao & Xu, Chang & Dong, Wenhao & Zhou, Hangyu & Bokhari, Awais & Klemeš, Jiří Jaromír & Han, Ning, 2023. "Research on energy management of hydrogen electric coupling system based on deep reinforcement learning," Energy, Elsevier, vol. 282(C).
  • Handle: RePEc:eee:energy:v:282:y:2023:i:c:s0360544223015682
    DOI: 10.1016/j.energy.2023.128174

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544223015682
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2023.128174?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Hua, Haochen & Qin, Yuchao & Hao, Chuantong & Cao, Junwei, 2019. "Optimal energy management strategies for energy Internet via deep reinforcement learning approach," Applied Energy, Elsevier, vol. 239(C), pages 598-609.
    2. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman et al., 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    3. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Chen, Jinzhou & He, Hongwen & Wang, Ya-Xiong & Quan, Shengwei & Zhang, Zhendong & Wei, Zhongbao & Han, Ruoyan, 2024. "Research on energy management strategy for fuel cell hybrid electric vehicles based on improved dynamic programming and air supply optimization," Energy, Elsevier, vol. 300(C).
    2. Moiz Ahmad & Muhammad Babar Ramzan & Muhammad Omair & Muhammad Salman Habib, 2024. "Integrating Risk-Averse and Constrained Reinforcement Learning for Robust Decision-Making in High-Stakes Scenarios," Mathematics, MDPI, vol. 12(13), pages 1-32, June.
    3. Li, Ruiqi & Ren, Hongbo & Wu, Qiong & Li, Qifen & Gao, Weijun, 2024. "Cooperative economic dispatch of EV-HV coupled electric-hydrogen integrated energy system considering V2G response and carbon trading," Renewable Energy, Elsevier, vol. 227(C).
    4. Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Application-oriented assessment of grid-connected PV-battery system with deep reinforcement learning in buildings considering electricity price dynamics," Applied Energy, Elsevier, vol. 364(C).
    5. Zhang, Tianhao & Dong, Zhe & Huang, Xiaojin, 2024. "Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning," Energy, Elsevier, vol. 286(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    2. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
    3. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    4. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    5. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    6. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    7. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    8. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    9. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    10. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    11. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    12. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    13. Hamed Khalili, 2024. "Deep Learning Pricing of Processing Firms in Agricultural Markets," Agriculture, MDPI, vol. 14(5), pages 1-14, April.
    14. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    15. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    16. Chengmin Zhou & Bingding Huang & Pasi Fränti, 2022. "A review of motion planning algorithms for intelligent robots," Journal of Intelligent Manufacturing, Springer, vol. 33(2), pages 387-424, February.
    17. Justin P. Johnson & Andrew Rhodes & Matthijs Wildenbeest, 2023. "Platform Design When Sellers Use Pricing Algorithms," Econometrica, Econometric Society, vol. 91(5), pages 1841-1879, September.
    18. Yingfei Wang & Inbal Yahav & Balaji Padmanabhan, 2024. "Smart Testing with Vaccination: A Bandit Algorithm for Active Sampling for Managing COVID-19," Information Systems Research, INFORMS, vol. 35(1), pages 120-144, March.
    19. Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
    20. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:282:y:2023:i:c:s0360544223015682. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.