
Computing Day-Ahead Dispatch Plans for Active Distribution Grids Using a Reinforcement Learning Based Algorithm

Authors

  • Eleni Stai

    (EEH—Power Systems Laboratory, ETH Zürich, Physikstrasse 3, 8092 Zürich, Switzerland)

  • Josua Stoffel

    (EEH—Power Systems Laboratory, ETH Zürich, Physikstrasse 3, 8092 Zürich, Switzerland)

  • Gabriela Hug

    (EEH—Power Systems Laboratory, ETH Zürich, Physikstrasse 3, 8092 Zürich, Switzerland)

Abstract

The worldwide aspiration for a sustainable energy future has led to an increasing deployment of variable and intermittent renewable energy sources (RESs). As a result, predicting and planning the operation of power grids has become more complex. Batteries can play a critical role in addressing this problem, as they can absorb the uncertainties introduced by RESs. In this paper, we solve the problem of computing a dispatch plan for a distribution grid with RESs and batteries using a novel approach based on Reinforcement Learning (RL). Although RL is not inherently suited for planning problems that require open-loop policies, we have developed an iterative algorithm that calls a trained RL agent at each iteration to compute the dispatch plan. Because the dispatch plan is computed ahead of real-time operation, the feedback given to the RL agent cannot be directly observed and is therefore estimated. Compared to the conventional approach of scenario-based optimization, our RL-based approach can exploit significantly more prior information on the uncertainty and computes dispatch plans faster. Our evaluation and comparative results demonstrate the accuracy of the computed dispatch plans as well as the adaptability of our agent to input data that diverge from the training data.

Suggested Citation

  • Eleni Stai & Josua Stoffel & Gabriela Hug, 2022. "Computing Day-Ahead Dispatch Plans for Active Distribution Grids Using a Reinforcement Learning Based Algorithm," Energies, MDPI, vol. 15(23), pages 1-22, November.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:23:p:9017-:d:987176

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/23/9017/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/23/9017/
    Download Restriction: no

    References listed on IDEAS

    1. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    2. Paul L. Joskow, 2011. "Comparing the Costs of Intermittent and Dispatchable Electricity Generating Technologies," American Economic Review, American Economic Association, vol. 101(3), pages 238-241, May.
    3. Mi, Yunlong & Quan, Pei & Shi, Yong & Wang, Zongrun, 2022. "Concept-cognitive computing system for dynamic classification," European Journal of Operational Research, Elsevier, vol. 301(1), pages 287-299.
    4. Shang, Yuwei & Wu, Wenchuan & Guo, Jianbo & Ma, Zhao & Sheng, Wanxing & Lv, Zhe & Fu, Chenran, 2020. "Stochastic dispatch of energy storage in microgrids: An augmented reinforcement learning approach," Applied Energy, Elsevier, vol. 261(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Neupane, Deependra & Kafle, Sagar & Karki, Kaji Ram & Kim, Dae Hyun & Pradhan, Prajal, 2022. "Solar and wind energy potential assessment at provincial level in Nepal: Geospatial and economic analysis," Renewable Energy, Elsevier, vol. 181(C), pages 278-291.
    3. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    4. Durmaz, Tunç, 2016. "Precautionary Storage in Electricity Markets," Discussion Papers 2016/5, Norwegian School of Economics, Department of Business and Management Science.
    5. Carsten Helm & Mathias Mier, 2020. "Steering the Energy Transition in a World of Intermittent Electricity Supply: Optimal Subsidies and Taxes for Renewables Storage," ifo Working Paper Series 330, ifo Institute - Leibniz Institute for Economic Research at the University of Munich.
    6. Behrang Shirizadeh & Quentin Perrier & Philippe Quirion, 2022. "How Sensitive are Optimal Fully Renewable Power Systems to Technology Cost Uncertainty?," The Energy Journal, International Association for Energy Economics, vol. 0(Number 1).
    7. Keppler, Jan Horst & Quemin, Simon & Saguan, Marcelo, 2022. "Why the sustainable provision of low-carbon electricity needs hybrid markets," Energy Policy, Elsevier, vol. 171(C).
    8. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    9. Lixiang Zhang & Yan Yan & Yaoguang Hu, 2024. "Deep reinforcement learning for dynamic scheduling of energy-efficient automated guided vehicles," Journal of Intelligent Manufacturing, Springer, vol. 35(8), pages 3875-3888, December.
    10. Simshauser, P., 2019. "On the impact of government-initiated CfD’s in Australia’s National Electricity Market," Cambridge Working Papers in Economics 1901, Faculty of Economics, University of Cambridge.
    11. Karsten Neuhoff & Nils May & Jörn C. Richstein, 2018. "Renewable Energy Policy in the Age of Falling Technology Costs," Discussion Papers of DIW Berlin 1746, DIW Berlin, German Institute for Economic Research.
    12. Emblemsvåg, Jan, 2022. "Wind energy is not sustainable when balanced by fossil energy," Applied Energy, Elsevier, vol. 305(C).
    13. Jeffrey C. Peters & Thomas W. Hertel, 2017. "Achieving the Clean Power Plan 2030 CO2 Target with the New Normal in Natural Gas Prices," The Energy Journal, International Association for Energy Economics, vol. 0(Number 5).
    14. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    15. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    16. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    17. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    18. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    19. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    20. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:23:p:9017-:d:987176. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.