
Reactive Power Control of a Converter in a Hardware-Based Environment Using Deep Reinforcement Learning

Author

Listed:
  • Ode Bokker

    (German Aerospace Center (DLR), Institute of Networked Energy Systems, Carl-von-Ossietzky-Str. 15, 26129 Oldenburg, Germany)

  • Henning Schlachter

    (German Aerospace Center (DLR), Institute of Networked Energy Systems, Carl-von-Ossietzky-Str. 15, 26129 Oldenburg, Germany)

  • Vanessa Beutel

    (German Aerospace Center (DLR), Institute of Networked Energy Systems, Carl-von-Ossietzky-Str. 15, 26129 Oldenburg, Germany)

  • Stefan Geißendörfer

    (German Aerospace Center (DLR), Institute of Networked Energy Systems, Carl-von-Ossietzky-Str. 15, 26129 Oldenburg, Germany)

  • Karsten von Maydell

    (German Aerospace Center (DLR), Institute of Networked Energy Systems, Carl-von-Ossietzky-Str. 15, 26129 Oldenburg, Germany)

Abstract

Due to the increasing penetration of the power grid by renewable, distributed energy resources, new strategies for voltage stabilization in low voltage distribution grids must be developed. One approach to autonomous voltage control is to apply reinforcement learning (RL) for reactive power injection by converters. In this work, to provide a secure test environment that includes real hardware influences for such intelligent algorithms, a power hardware-in-the-loop (PHIL) approach combines a virtually simulated grid with real hardware devices to emulate grid states as realistically as possible. The PHIL environment is validated by identifying system limits and analyzing deviations from a software model of the test grid. Finally, an adaptive volt–var control algorithm using RL is implemented to control the reactive power injection of a real converter within the test environment. Despite facing more difficult conditions in the hardware than in the software environment, the algorithm is successfully integrated to control the voltage at a grid connection point in a low voltage grid. The study thus underlines the potential of RL for voltage stabilization in future power grids.
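The core idea of the abstract — an RL agent choosing a converter's reactive power setpoint to hold the voltage at a grid connection point near nominal — can be sketched in miniature. The linearized voltage model, the dV/dQ sensitivity, the discrete action set, and the tabular update below are all illustrative assumptions for exposition; they are not the paper's actual PHIL setup or deep RL architecture.

```python
import random

# Toy volt-var control loop: an RL-style agent selects a reactive power
# injection to keep the voltage at a grid connection point near 1.0 p.u.
V_NOMINAL = 1.0          # target voltage [p.u.]
SENSITIVITY = 0.05       # assumed dV/dQ sensitivity [p.u. per unit Q] (illustrative)
ACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]  # discrete reactive power setpoints

def grid_voltage(q_inject, disturbance):
    """Linearized voltage response: background disturbance plus converter influence."""
    return V_NOMINAL + disturbance + SENSITIVITY * q_inject

def reward(voltage):
    """Negative squared deviation from nominal voltage."""
    return -(voltage - V_NOMINAL) ** 2

def train(episodes=2000, eps=0.1, alpha=0.2, seed=0):
    """Epsilon-greedy tabular learning over (discretized disturbance, action) pairs."""
    rng = random.Random(seed)
    q_table = {}
    for _ in range(episodes):
        disturbance = rng.choice([-0.04, -0.02, 0.0, 0.02, 0.04])
        state = round(disturbance, 2)
        if rng.random() < eps or state not in q_table:
            action = rng.randrange(len(ACTIONS))        # explore
        else:
            action = max(range(len(ACTIONS)),           # exploit current estimate
                         key=lambda a: q_table[state][a])
        q_table.setdefault(state, [0.0] * len(ACTIONS))
        r = reward(grid_voltage(ACTIONS[action], disturbance))
        # One-step bandit-style update; no bootstrapping needed in this toy setting
        q_table[state][action] += alpha * (r - q_table[state][action])
    return q_table

def best_action(q_table, disturbance):
    """Greedy reactive power setpoint for a given voltage disturbance."""
    state = round(disturbance, 2)
    return ACTIONS[max(range(len(ACTIONS)), key=lambda a: q_table[state][a])]
```

After training, the greedy policy injects reactive power against the disturbance: for an undervoltage it chooses a positive setpoint, for an overvoltage a negative one, which is the qualitative behavior a volt–var controller is expected to learn.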

Suggested Citation

  • Ode Bokker & Henning Schlachter & Vanessa Beutel & Stefan Geißendörfer & Karsten von Maydell, 2022. "Reactive Power Control of a Converter in a Hardware-Based Environment Using Deep Reinforcement Learning," Energies, MDPI, vol. 16(1), pages 1-12, December.
  • Handle: RePEc:gam:jeners:v:16:y:2022:i:1:p:78-:d:1010469

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/1/78/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/1/78/
    Download Restriction: no

    References listed on IDEAS

    1. Moiz Muhammad & Holger Behrends & Stefan Geißendörfer & Karsten von Maydell & Carsten Agert, 2021. "Power Hardware-in-the-Loop: Response of Power Components in Real-Time Grid Simulation Environment," Energies, MDPI, vol. 14(3), pages 1-20, January.
    2. Kirstin Beyer & Robert Beckmann & Stefan Geißendörfer & Karsten von Maydell & Carsten Agert, 2021. "Adaptive Online-Learning Volt-Var Control for Smart Inverters Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(7), pages 1-11, April.
    3. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    4. Falko Ebe & Basem Idlbi & David E. Stakic & Shuo Chen & Christoph Kondzialka & Matthias Casel & Gerd Heilscher & Christian Seitl & Roland Bründlinger & Thomas I. Strasser, 2018. "Comparison of Power Hardware-in-the-Loop Approaches for the Testing of Smart Grid Controls," Energies, MDPI, vol. 11(12), pages 1-29, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Pedro Faria & Zita Vale, 2022. "Realistic Load Modeling for Efficient Consumption Management Using Real-Time Simulation and Power Hardware-in-the-Loop," Energies, MDPI, vol. 16(1), pages 1-15, December.
    2. Annette von Jouanne & Emmanuel Agamloh & Alex Yokochi, 2023. "Power Hardware-in-the-Loop (PHIL): A Review to Advance Smart Inverter-Based Grid-Edge Solutions," Energies, MDPI, vol. 16(2), pages 1-27, January.
    3. Yao, Ganzhou & Luo, Zirong & Lu, Zhongyue & Wang, Mangkuan & Shang, Jianzhong & Guerrerob, Josep M., 2023. "Unlocking the potential of wave energy conversion: A comprehensive evaluation of advanced maximum power point tracking techniques and hybrid strategies for sustainable energy harvesting," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    6. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    7. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    8. Michał Michna & Filip Kutt & Łukasz Sienkiewicz & Roland Ryndzionek & Grzegorz Kostro & Dariusz Karkosiński & Bartłomiej Grochowski, 2020. "Mechanical-Level Hardware-In-The-Loop and Simulation in Validation Testing of Prototype Tower Crane Drives," Energies, MDPI, vol. 13(21), pages 1-25, November.
    9. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    10. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    11. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    12. Cezar-Petre Simion & Cătălin-Alexandru Verdeș & Alexandra-Andreea Mironescu & Florin-Gabriel Anghel, 2023. "Digitalization in Energy Production, Distribution, and Consumption: A Systematic Literature Review," Energies, MDPI, vol. 16(4), pages 1-30, February.
    13. Nebiyu Kedir & Phuong H. D. Nguyen & Citlaly Pérez & Pedro Ponce & Aminah Robinson Fayek, 2023. "Systematic Literature Review on Fuzzy Hybrid Methods in Photovoltaic Solar Energy: Opportunities, Challenges, and Guidance for Implementation," Energies, MDPI, vol. 16(9), pages 1-38, April.
    14. Jing Zhang & Yiqi Li & Zhi Wu & Chunyan Rong & Tao Wang & Zhang Zhang & Suyang Zhou, 2021. "Deep-Reinforcement-Learning-Based Two-Timescale Voltage Control for Distribution Systems," Energies, MDPI, vol. 14(12), pages 1-15, June.
    15. Franz Harke & Philipp Otto, 2023. "Solar Self-Sufficient Households as a Driving Factor for Sustainability Transformation," Sustainability, MDPI, vol. 15(3), pages 1-20, February.
    16. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    17. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    18. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    19. Hao, Zhaojun & Di Maio, Francesco & Zio, Enrico, 2023. "A sequential decision problem formulation and deep reinforcement learning solution of the optimization of O&M of cyber-physical energy systems (CPESs) for reliable and safe power production and supply," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    20. Perera, A.T.D. & Zhao, Bingyu & Wang, Zhe & Soga, Kenichi & Hong, Tianzhen, 2023. "Optimal design of microgrids to improve wildfire resilience for vulnerable communities at the wildland-urban interface," Applied Energy, Elsevier, vol. 335(C).


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.