
Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning

Author

Listed:
  • Se-Heon Lim

    (Department of Electrical Engineering, Soongsil University, Seoul 06978, Republic of Korea)

  • Sung-Guk Yoon

    (Department of Electrical Engineering, Soongsil University, Seoul 06978, Republic of Korea)

Abstract

The conventional volt-VAR control (VVC) in distribution systems has limitations in solving the overvoltage problem caused by massive solar photovoltaic (PV) deployment. As an alternative, VVC using solar PV smart inverters (PVSIs) has come into the limelight: PVSIs can respond quickly and effectively to the overvoltage problem by absorbing reactive power. However, the network power loss, that is, the sum of line losses in the distribution network, increases with reactive power absorption. Dynamic distribution network reconfiguration (DNR), which adjusts the network topology hourly by controlling sectionalizing and tie switches, can also mitigate the overvoltage problem and reduce network loss by changing the power flow in the network. In this study, to improve the voltage profile and minimize the network power loss, we propose a control scheme that integrates dynamic DNR with volt-VAR control of PVSIs. The proposed control scheme is practical for three reasons. First, it is based on a deep reinforcement learning (DRL) algorithm, which does not require accurate distribution system parameters. Second, it uses a heterogeneous multi-agent DRL algorithm to control the switches centrally and the PVSIs locally. Third, it assumes a practical communication network in the distribution system: PVSIs only send their status to the central control center, and there is no communication between PVSIs. A modified 33-bus distribution test feeder reflecting the system conditions of South Korea is used for the case study. The results demonstrate that the proposed control scheme effectively improves the voltage profile of the distribution system. In addition, the proposed scheme reduces the total power loss in the distribution system, defined as the sum of the network power loss and the energy curtailed due to voltage violations of the solar PV output.
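To make the control architecture described above concrete, the following is a minimal, illustrative sketch of a heterogeneous multi-agent control loop: one centralized agent selects the switch configuration (dynamic DNR) each hour, while local agents at each PVSI set reactive power from local measurements only and report their status upward, with no PVSI-to-PVSI communication. All class names, observation contents, and the random placeholder policies and power-flow stub are hypothetical; they stand in for the trained DRL policies and distribution-system simulator used in the paper, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical names and placeholder policies,
# not the authors' trained DRL agents or simulator.
import numpy as np

class CentralSwitchAgent:
    """Centralized agent: picks a network topology (switch configuration) each hour."""
    def __init__(self, n_configs, rng):
        self.n_configs = n_configs          # number of feasible radial topologies (assumed)
        self.rng = rng

    def act(self, global_obs):
        # A trained DRL policy would map the global observation to a topology
        # index; here we simply sample uniformly at random.
        return int(self.rng.integers(self.n_configs))

class LocalPVSIAgent:
    """Local agent at one PV smart inverter: sets reactive power from local data only."""
    def __init__(self, q_max, rng):
        self.q_max = q_max                  # reactive power capability (assumed limit)
        self.rng = rng

    def act(self, local_obs):
        # local_obs: e.g., local voltage and PV active power; no data from other PVSIs.
        return float(self.rng.uniform(-self.q_max, self.q_max))

def step_environment(topology, q_setpoints, rng):
    # Placeholder for a distribution-network power-flow simulator.
    # Returns hypothetical network loss, curtailed energy, and per-bus voltages.
    network_loss = float(rng.uniform(0.0, 1.0))
    curtailed_energy = float(rng.uniform(0.0, 0.5))
    voltages = rng.uniform(0.95, 1.05, size=len(q_setpoints))
    return network_loss, curtailed_energy, voltages

rng = np.random.default_rng(0)
central = CentralSwitchAgent(n_configs=16, rng=rng)
pvsis = [LocalPVSIAgent(q_max=0.3, rng=rng) for _ in range(4)]

for hour in range(24):
    # PVSIs report status upward only; the control center aggregates them.
    local_obs = [{"v": 1.0, "p_pv": 0.8} for _ in pvsis]
    global_obs = {"pvsi_status": local_obs, "hour": hour}

    topology = central.act(global_obs)                           # hourly dynamic DNR
    q_setpoints = [a.act(o) for a, o in zip(pvsis, local_obs)]   # local volt-VAR

    loss, curtail, voltages = step_environment(topology, q_setpoints, rng)
    reward = -(loss + curtail)   # total power loss = network loss + curtailed energy
    print(f"hour {hour}: topology {topology}, reward {reward:.3f}")
```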

Suggested Citation

  • Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:23:p:9220-:d:994308

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/23/9220/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/23/9220/
    Download Restriction: no

    References listed on IDEAS

    1. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    2. Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.
    3. Ji, Haoran & Wang, Chengshan & Li, Peng & Zhao, Jinli & Song, Guanyu & Ding, Fei & Wu, Jianzhong, 2018. "A centralized-based method to determine the local voltage control strategies of distributed generator operation in active distribution networks," Applied Energy, Elsevier, vol. 228(C), pages 2024-2036.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    2. Mak, Davye & Choeum, Daranith & Choi, Dae-Hyun, 2020. "Sensitivity analysis of volt-VAR optimization to data changes in distribution networks with distributed energy resources," Applied Energy, Elsevier, vol. 261(C).
    3. Li, Peng & Ji, Haoran & Yu, Hao & Zhao, Jinli & Wang, Chengshan & Song, Guanyu & Wu, Jianzhong, 2019. "Combined decentralized and local voltage control strategy of soft open points in active distribution networks," Applied Energy, Elsevier, vol. 241(C), pages 613-624.
    4. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    5. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    6. Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing: What Implications for Competition Policy?," Review of Industrial Organization, Springer;The Industrial Organization Society, vol. 55(1), pages 155-171, August.
    7. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    8. Jude Suchithra & Amin Rajabi & Duane A. Robinson, 2024. "Enhancing PV Hosting Capacity of Electricity Distribution Networks Using Deep Reinforcement Learning-Based Coordinated Voltage Control," Energies, MDPI, vol. 17(20), pages 1-28, October.
    9. Yin, Linfei & Lu, Yuejiang, 2021. "Expandable deep width learning for voltage control of three-state energy model based smart grids containing flexible energy sources," Energy, Elsevier, vol. 226(C).
    10. Wang, Xuekai & D’Ariano, Andrea & Su, Shuai & Tang, Tao, 2023. "Cooperative train control during the power supply shortage in metro system: A multi-agent reinforcement learning approach," Transportation Research Part B: Methodological, Elsevier, vol. 170(C), pages 244-278.
    11. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    12. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    13. Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
    14. Jude Suchithra & Duane Robinson & Amin Rajabi, 2023. "Hosting Capacity Assessment Strategies and Reinforcement Learning Methods for Coordinated Voltage Control in Electricity Distribution Networks: A Review," Energies, MDPI, vol. 16(5), pages 1-28, March.
    15. Zhang, Zhengfa & da Silva, Filipe Faria & Guo, Yifei & Bak, Claus Leth & Chen, Zhe, 2021. "Double-layer stochastic model predictive voltage control in active distribution networks with high penetration of renewables," Applied Energy, Elsevier, vol. 302(C).
    16. Li, Xingyu & Epureanu, Bogdan I., 2020. "AI-based competition of autonomous vehicle fleets with application to fleet modularity," European Journal of Operational Research, Elsevier, vol. 287(3), pages 856-874.
    17. Kewei Wang & Yonghong Huang & Junjun Xu & Yanbo Liu, 2024. "A Flexible Envelope Method for the Operation Domain of Distribution Networks Based on “Degree of Squareness” Adjustable Superellipsoid," Energies, MDPI, vol. 17(16), pages 1-19, August.
    18. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun, 2022. "Coordinated load frequency control of multi-area integrated energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    19. Stringer, Naomi & Haghdadi, Navid & Bruce, Anna & Riesz, Jenny & MacGill, Iain, 2020. "Observed behavior of distributed photovoltaic systems during major voltage disturbances and implications for power system security," Applied Energy, Elsevier, vol. 260(C).
    20. Zhao, Jinli & Zhang, Mengzhen & Yu, Hao & Ji, Haoran & Song, Guanyu & Li, Peng & Wang, Chengshan & Wu, Jianzhong, 2019. "An islanding partition method of active distribution networks based on chance-constrained programming," Applied Energy, Elsevier, vol. 242(C), pages 78-91.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:23:p:9220-:d:994308. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.