
Adaptive Online-Learning Volt-Var Control for Smart Inverters Using Deep Reinforcement Learning

Authors

Listed:
  • Kirstin Beyer

    (German Aerospace Center (DLR)—Institute for Networked Energy Systems, Carl-von-Ossietzky-Straße 15, 26129 Oldenburg, Germany)

  • Robert Beckmann

    (German Aerospace Center (DLR)—Institute for Networked Energy Systems, Carl-von-Ossietzky-Straße 15, 26129 Oldenburg, Germany)

  • Stefan Geißendörfer

    (German Aerospace Center (DLR)—Institute for Networked Energy Systems, Carl-von-Ossietzky-Straße 15, 26129 Oldenburg, Germany)

  • Karsten von Maydell

    (German Aerospace Center (DLR)—Institute for Networked Energy Systems, Carl-von-Ossietzky-Straße 15, 26129 Oldenburg, Germany)

  • Carsten Agert

    (German Aerospace Center (DLR)—Institute for Networked Energy Systems, Carl-von-Ossietzky-Straße 15, 26129 Oldenburg, Germany)

Abstract

The increasing penetration of the power grid with renewable distributed generation causes significant voltage fluctuations. Providing reactive power helps to balance the voltage in the grid. This paper proposes a novel adaptive volt-var control algorithm based on deep reinforcement learning. The learning agent is an online-learning deep deterministic policy gradient (DDPG) agent that can run under real-time conditions in smart inverters for reactive power management. The algorithm uses only input data from the grid connection point of the inverter itself; thus, no additional communication devices are needed, and it can be applied individually to any inverter in the grid. The proposed volt-var control is successfully simulated at various grid connection points in a 21-bus low-voltage distribution test feeder. The resulting voltage behavior is analyzed, and a systematic voltage reduction is observed in both a static and a dynamic grid environment. Through continuous exploration during the learning process, the algorithm adapts flexibly to changing environments and thus contributes to decentralized, automated voltage control in future power grids.
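To make the abstract's approach concrete, below is a minimal, illustrative sketch of an online-learning DDPG volt-var loop in Python/PyTorch. It is not the authors' implementation: the network sizes, hyperparameters, the reward (penalizing deviation from 1 p.u.), the measure_voltage() placeholder, and the faked grid response to the reactive-power setpoint are all assumptions introduced so the sketch runs standalone; the paper's actual state design and reward shaping are described in the full text.

    # Minimal sketch of an online-learning DDPG volt-var loop. The agent's only
    # observation is the voltage at the inverter's own grid connection point,
    # mirroring the paper's communication-free design. All names and constants
    # here are illustrative assumptions, not taken from the paper.
    import copy
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    def mlp(in_dim, out_dim, tanh=False):
        layers = [nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim)]
        if tanh:
            layers.append(nn.Tanh())  # bound actor output to [-1, 1]
        return nn.Sequential(*layers)

    obs_dim, act_dim, gamma, tau = 1, 1, 0.99, 0.005
    actor = mlp(obs_dim, act_dim, tanh=True)   # voltage -> reactive-power setpoint
    critic = mlp(obs_dim + act_dim, 1)         # (voltage, setpoint) -> Q-value
    actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # target nets
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)  # replay buffer of (v, q, r, v') transitions

    def measure_voltage():
        # Placeholder for the only input the algorithm needs: the locally
        # measured voltage in per unit (here: a noisy 1.0 p.u. reading).
        return 1.0 + random.gauss(0.0, 0.02)

    v = measure_voltage()
    for step in range(5000):
        obs = torch.tensor([[v]])
        with torch.no_grad():
            # Persistent exploration noise keeps the agent adapting online.
            q_set = (actor(obs) + 0.1 * torch.randn(1, 1)).clamp(-1.0, 1.0)
        # In a real inverter, q_set would be applied and the grid would react;
        # here the voltage response is faked so the sketch runs standalone.
        v_next = measure_voltage() - 0.05 * q_set.item()
        reward = -abs(v_next - 1.0)  # penalize deviation from nominal voltage
        buffer.append((v, q_set.item(), reward, v_next))
        v = v_next

        if len(buffer) < 64:
            continue  # wait until a minibatch of experience is available
        batch = random.sample(buffer, 64)
        o, a, r, o2 = (torch.tensor(x, dtype=torch.float32).unsqueeze(1)
                       for x in zip(*batch))

        # Critic update: one-step temporal-difference target (standard DDPG).
        with torch.no_grad():
            target = r + gamma * critic_t(torch.cat([o2, actor_t(o2)], dim=1))
        critic_loss = ((critic(torch.cat([o, a], dim=1)) - target) ** 2).mean()
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        # Actor update: follow the critic's gradient w.r.t. the action.
        actor_loss = -critic(torch.cat([o, actor(o)], dim=1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

        # Polyak-average the target networks for stable learning.
        for net, net_t in ((actor, actor_t), (critic, critic_t)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)

The key property mirrored from the abstract is that the agent consumes only the voltage measured at its own grid connection point and keeps exploring permanently, so it can re-adapt when the surrounding grid changes.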

Suggested Citation

  • Kirstin Beyer & Robert Beckmann & Stefan Geißendörfer & Karsten von Maydell & Carsten Agert, 2021. "Adaptive Online-Learning Volt-Var Control for Smart Inverters Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(7), pages 1-11, April.
  • Handle: RePEc:gam:jeners:v:14:y:2021:i:7:p:1991-:d:529758

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/14/7/1991/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/14/7/1991/
    Download Restriction: no

    References listed on IDEAS

1. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & Amir Sadik & Ioannis Antonoglou & Helen King & Dharshan Kumaran & Daan Wierstra & Shane Legg & Demis Hassabis, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    2. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Ode Bokker & Henning Schlachter & Vanessa Beutel & Stefan Geißendörfer & Karsten von Maydell, 2022. "Reactive Power Control of a Converter in a Hardware-Based Environment Using Deep Reinforcement Learning," Energies, MDPI, vol. 16(1), pages 1-12, December.
    2. Jing Zhang & Yiqi Li & Zhi Wu & Chunyan Rong & Tao Wang & Zhang Zhang & Suyang Zhou, 2021. "Deep-Reinforcement-Learning-Based Two-Timescale Voltage Control for Distribution Systems," Energies, MDPI, vol. 14(12), pages 1-15, June.
    3. Yu Fujimoto & Akihisa Kaneko & Yutaka Iino & Hideo Ishii & Yasuhiro Hayashi, 2023. "Challenges in Smartizing Operational Management of Functionally-Smart Inverters for Distributed Energy Resources: A Review on Machine Learning Aspects," Energies, MDPI, vol. 16(3), pages 1-26, January.
    4. Jarosław Korpikiewicz & Mostefa Mohamed-Seghir, 2022. "Static Analysis and Optimization of Voltage and Reactive Power Regulation Systems in the HV/MV Substation with Electronic Transformer Tap-Changers," Energies, MDPI, vol. 15(13), pages 1-26, June.
    5. Franz Harke & Philipp Otto, 2023. "Solar Self-Sufficient Households as a Driving Factor for Sustainability Transformation," Sustainability, MDPI, vol. 15(3), pages 1-20, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    3. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    4. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    5. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    6. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    7. Luca Pinciroli & Piero Baraldi & Guido Ballabio & Michele Compare & Enrico Zio, 2021. "Deep Reinforcement Learning Based on Proximal Policy Optimization for the Maintenance of a Wind Farm with Multiple Crews," Energies, MDPI, vol. 14(20), pages 1-17, October.
8. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
9. Yao, Ganzhou & Luo, Zirong & Lu, Zhongyue & Wang, Mangkuan & Shang, Jianzhong & Guerrero, Josep M., 2023. "Unlocking the potential of wave energy conversion: A comprehensive evaluation of advanced maximum power point tracking techniques and hybrid strategies for sustainable energy harvesting," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    10. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    11. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    12. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    13. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
• Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
  • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
  • Jacob Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    14. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    15. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    16. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    17. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    18. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    19. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
20. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer; The Psychometric Society, vol. 83(1), pages 67-88, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:14:y:2021:i:7:p:1991-:d:529758. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.