
Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems

Author

Listed:
  • Wang, Yi
  • Qiu, Dawei
  • Strbac, Goran

Abstract

Extreme events are characterized by high impact and low probability, and can cause severe damage to power systems. Much research has focused on resilience-driven operational problems that incorporate the routing and scheduling of mobile energy storage systems (MESSs), owing to their mobility and flexibility. However, the existing literature relies on model-based optimization approaches to implement the MESS routing process, which can be time consuming and raise privacy issues, since they require global information about both the power and transportation networks. Furthermore, real-time automatic control of MESSs remains a challenging task due to the high variability of the system. As such, this paper develops a model-free, real-time, multi-agent deep reinforcement learning approach featuring parameterized double deep Q-networks, which reformulates the coordinated MESS routing and scheduling process as a Partially Observable Markov Game and is capable of capturing a hybrid policy comprising both discrete and continuous actions. A coupled transportation network and a linearized AC-OPF algorithm are realized as the environment, while the uncertainties associated with renewable energy sources, load profiles, line outages, and traffic volumes are incorporated into the proposed data-driven approach through the learning procedure. Extensive case studies on both 6-bus and 33-bus power networks are developed to evaluate the effectiveness of the proposed approach. In particular, a detailed comparison between different multi-agent reinforcement learning and model-based optimization approaches demonstrates the superior performance of the proposed approach.
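
The hybrid discrete-continuous control described in the abstract can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration of a parameterized double deep Q-network agent, in the spirit of the abstract but not the authors' implementation: the network sizes, the observation interface, and the reading of the discrete action as a routing choice and the continuous parameter as a charge/discharge setpoint are all illustrative assumptions.

    # Hypothetical sketch of one MESS agent's parameterized double deep Q-network
    # (P-DDQN): an actor proposes one continuous parameter per discrete action,
    # and a Q-network scores each (discrete action, parameter) pair.
    import torch
    import torch.nn as nn

    class ParamActor(nn.Module):
        """Maps a local observation to one continuous parameter per discrete action
        (e.g. a charge/discharge setpoint for each candidate destination node)."""
        def __init__(self, obs_dim: int, n_discrete: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, n_discrete), nn.Tanh(),  # parameters scaled to [-1, 1]
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)

    class QNet(nn.Module):
        """Q(s, x_1..x_K): one Q-value per discrete action k, evaluated at its
        associated continuous parameter x_k."""
        def __init__(self, obs_dim: int, n_discrete: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + n_discrete, 128), nn.ReLU(),
                nn.Linear(128, n_discrete),
            )

        def forward(self, obs: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([obs, params], dim=-1))

    def select_action(actor: ParamActor, qnet: QNet, obs: torch.Tensor):
        """Greedy hybrid action: continuous parameters come from the actor,
        the discrete choice from the Q-network."""
        with torch.no_grad():
            params = actor(obs)        # shape (K,): one parameter per action
            q = qnet(obs, params)      # shape (K,): Q-value per action
            k = int(q.argmax())
        return k, params[k].item()     # (discrete action, its continuous parameter)

    def double_q_target(qnet, qnet_target, actor_target, next_obs, reward, done,
                        gamma: float = 0.99) -> torch.Tensor:
        """Double-DQN target: the online Q-network selects the next discrete
        action, the target Q-network evaluates it (reduces overestimation bias)."""
        with torch.no_grad():
            next_params = actor_target(next_obs)                     # (B, K)
            k_star = qnet(next_obs, next_params).argmax(-1, keepdim=True)
            q_next = qnet_target(next_obs, next_params).gather(-1, k_star).squeeze(-1)
            return reward + gamma * (1.0 - done) * q_next

In the multi-agent setting of the paper, each MESS agent would hold its own copy of such networks and act on a partial observation, with the coupled transportation network and linearized AC-OPF serving as the shared environment that returns rewards and next observations.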

Suggested Citation

  • Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
  • Handle: RePEc:eee:appene:v:310:y:2022:i:c:s0306261922000563
    DOI: 10.1016/j.apenergy.2022.118575

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922000563
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.118575?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Van-Hai Bui & Akhtar Hussain & Hak-Man Kim, 2019. "Q-Learning-Based Operation Strategy for Community Battery Energy Storage System (CBESS) in Microgrid System," Energies, MDPI, vol. 12(9), pages 1-17, May.
    2. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    3. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    4. Shang, Yuwei & Wu, Wenchuan & Guo, Jianbo & Ma, Zhao & Sheng, Wanxing & Lv, Zhe & Fu, Chenran, 2020. "Stochastic dispatch of energy storage in microgrids: An augmented reinforcement learning approach," Applied Energy, Elsevier, vol. 261(C).
    5. Gan, Wei & Yan, Mingyu & Yao, Wei & Wen, Jinyu, 2021. "Peer to peer transactive energy for multiple energy hub with the penetration of high-level renewable energy," Applied Energy, Elsevier, vol. 295(C).
    6. Jin-Gyeom Kim & Bowon Lee, 2020. "Automatic P2P Energy Trading Model Based on Reinforcement Learning Using Long Short-Term Delayed Reward," Energies, MDPI, vol. 13(20), pages 1-27, October.
    7. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    8. Wu, Raphael & Sansavini, Giovanni, 2020. "Integrating reliability and resilience to support the transition from passive distribution grids to islanding microgrids," Applied Energy, Elsevier, vol. 272(C).
    9. Tuchnitz, Felix & Ebell, Niklas & Schlund, Jonas & Pruckner, Marco, 2021. "Development and Evaluation of a Smart Charging Strategy for an Electric Vehicle Fleet Based on Reinforcement Learning," Applied Energy, Elsevier, vol. 285(C).
    10. Xie, Shiwei & Hu, Zhijian & Wang, Jueying & Chen, Yuwei, 2020. "The optimal planning of smart multi-energy systems incorporating transportation, natural gas and active distribution networks," Applied Energy, Elsevier, vol. 269(C).
    11. Sayed, Ahmed R. & Wang, Cheng & Bi, Tianshu, 2019. "Resilient operational strategies for power systems considering the interactions with natural gas systems," Applied Energy, Elsevier, vol. 241(C), pages 548-566.
    12. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    13. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    14. Zhou, Bo & Song, Qiankun & Zhao, Zhenjiang & Liu, Tangzhi, 2020. "A reinforcement learning scheme for the equilibrium of the in-vehicle route choice problem based on congestion game," Applied Mathematics and Computation, Elsevier, vol. 371(C).
    15. Hussain, Akhtar & Bui, Van-Hai & Kim, Hak-Man, 2019. "Microgrids as a resilience resource and strategies used by microgrids for enhancing resilience," Applied Energy, Elsevier, vol. 240(C), pages 56-72.
    16. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    17. Dong, Chaoyu & Gao, Qingbin & Xiao, Qiao & Chu, Ronghe & Jia, Hongjie, 2020. "Spectrum-domain stability assessment and intrinsic oscillation for aggregated mobile energy storage in grid frequency regulation," Applied Energy, Elsevier, vol. 276(C).
    18. Han, Gwangwoo & Kwon, YongKeun & Kim, Joong Bae & Lee, Sanghun & Bae, Joongmyeon & Cho, EunAe & Lee, Bong Jae & Cho, Sungbaek & Park, Jinwoo, 2020. "Development of a high-energy-density portable/mobile hydrogen energy storage system incorporating an electrolyzer, a metal hydride and a fuel cell," Applied Energy, Elsevier, vol. 259(C).
    19. Mishra, Sakshi & Anderson, Kate & Miller, Brian & Boyer, Kyle & Warren, Adam, 2020. "Microgrid resilience: A holistic approach for assessing threats, identifying vulnerabilities, and designing corresponding mitigation strategies," Applied Energy, Elsevier, vol. 264(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    2. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    3. Zhuoxin Lu & Xiaoyuan Xu & Zheng Yan & Dong Han & Shiwei Xia, 2024. "Mobile Energy-Storage Technology in Power Grid: A Review of Models and Applications," Sustainability, MDPI, vol. 16(16), pages 1-19, August.
    4. Li, Yutong & Hou, Jian & Yan, Gangfeng, 2024. "Exploration-enhanced multi-agent reinforcement learning for distributed PV-ESS scheduling with incomplete data," Applied Energy, Elsevier, vol. 359(C).
    5. Qiu, Dawei & Wang, Yi & Zhang, Tingqi & Sun, Mingyang & Strbac, Goran, 2023. "Hierarchical multi-agent reinforcement learning for repair crews dispatch control towards multi-energy microgrid resilience," Applied Energy, Elsevier, vol. 336(C).
    6. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    7. Antonio E. C. Momesso & Pedro H. A. Barra & Pedro I. N. Barbalho & Eduardo N. Asada & José C. M. Vieira & Denis V. Coury, 2024. "An Impact Assessment of a Transportable BESS on the Protection of Conventional Distribution Systems," Energies, MDPI, vol. 17(16), pages 1-15, August.
    8. Zhang, Lu & Yu, Shunjiang & Zhang, Bo & Li, Gen & Cai, Yongxiang & Tang, Wei, 2023. "Outage management of hybrid AC/DC distribution systems: Co-optimize service restoration with repair crew and mobile energy storage system dispatch," Applied Energy, Elsevier, vol. 335(C).
    9. Xu, Jiuping & Tian, Yalou & Wang, Fengjuan & Yang, Guocan & Zhao, Chuandang, 2024. "Resilience-economy-environment equilibrium based configuration interaction approach towards distributed energy system in energy intensive industry parks," Renewable and Sustainable Energy Reviews, Elsevier, vol. 191(C).
    10. Kang, Hyuna & Jung, Seunghoon & Kim, Hakpyeong & Jeoung, Jaewon & Hong, Taehoon, 2024. "Reinforcement learning-based optimal scheduling model of battery energy storage system at the building level," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).
    11. Gabriel Pesántez & Wilian Guamán & José Córdova & Miguel Torres & Pablo Benalcazar, 2024. "Reinforcement Learning for Efficient Power Systems Planning: A Review of Operational and Expansion Strategies," Energies, MDPI, vol. 17(9), pages 1-25, May.
    12. Wu, Chuantao & Wang, Tao & Zhou, Dezhi & Cao, Shankang & Sui, Quan & Lin, Xiangning & Li, Zhengtian & Wei, Fanrong, 2023. "A distributed restoration framework for distribution systems incorporating electric buses," Applied Energy, Elsevier, vol. 331(C).
    13. Venkatasubramanian, Balaji V. & Panteli, Mathaios, 2023. "Power system resilience during 2001–2022: A bibliometric and correlation analysis," Renewable and Sustainable Energy Reviews, Elsevier, vol. 188(C).
    14. Zhang, Xi & Dong, Zihang & Huangfu, Fenyu & Ye, Yujian & Strbac, Goran & Kang, Chongqing, 2024. "Strategic dispatch of electric buses for resilience enhancement of urban energy systems," Applied Energy, Elsevier, vol. 361(C).
    15. Pegah Alaee & Julius Bems & Amjad Anvari-Moghaddam, 2023. "A Review of the Latest Trends in Technical and Economic Aspects of EV Charging Management," Energies, MDPI, vol. 16(9), pages 1-28, April.
    16. Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wang, Y. & Rousis, A. Oulis & Strbac, G., 2022. "Resilience-driven optimal sizing and pre-positioning of mobile energy storage systems in decentralized networked microgrids," Applied Energy, Elsevier, vol. 305(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    4. Qiu, Dawei & Wang, Yi & Zhang, Tingqi & Sun, Mingyang & Strbac, Goran, 2023. "Hierarchical multi-agent reinforcement learning for repair crews dispatch control towards multi-energy microgrid resilience," Applied Energy, Elsevier, vol. 336(C).
    5. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    6. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    7. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    8. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    9. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    10. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    11. Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
    12. Tobajas, Javier & Garcia-Torres, Felix & Roncero-Sánchez, Pedro & Vázquez, Javier & Bellatreche, Ladjel & Nieto, Emilio, 2022. "Resilience-oriented schedule of microgrids with hybrid energy storage system using model predictive control," Applied Energy, Elsevier, vol. 306(PB).
    13. Jin, Ruiyang & Zhou, Yuke & Lu, Chao & Song, Jie, 2022. "Deep reinforcement learning-based strategy for charging station participating in demand response," Applied Energy, Elsevier, vol. 328(C).
    14. Younes Zahraoui & Tarmo Korõtko & Argo Rosin & Saad Mekhilef & Mehdi Seyedmahmoudian & Alex Stojcevski & Ibrahim Alhamrouni, 2024. "AI Applications to Enhance Resilience in Power Systems and Microgrids—A Review," Sustainability, MDPI, vol. 16(12), pages 1-35, June.
    15. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    16. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    17. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    18. Chen, Chunyu & Cui, Mingjian & Fang, Xin & Ren, Bixing & Chen, Yang, 2020. "Load altering attack-tolerant defense strategy for load frequency control system," Applied Energy, Elsevier, vol. 280(C).
    19. Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    20. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:310:y:2022:i:c:s0306261922000563. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.