Printed from https://ideas.repec.org/a/eee/appene/v351y2023ics0306261923011078.html

Physical-assisted multi-agent graph reinforcement learning enabled fast voltage regulation for PV-rich active distribution network

Author

Listed:
  • Chen, Yongdong
  • Liu, Youbo
  • Zhao, Junbo
  • Qiu, Gao
  • Yin, Hang
  • Li, Zhengbo

Abstract

Active distribution networks are encountering serious voltage violations associated with the proliferation of distributed photovoltaics. Cutting-edge research has confirmed that voltage regulation techniques based on deep reinforcement learning deliver superior performance on this issue. However, such techniques are typically tied to specific, fixed network topologies and suffer from insufficient learning efficiency. To address these challenges, a novel edge intelligence, featuring a multi-agent deep reinforcement learning algorithm with a graph attention network and a physical-assisted mechanism, is proposed. The method is unique in that it incorporates the graph attention network into reinforcement learning to capture spatial correlations and topological linkages among nodes, allowing agents to be “aware” of topology variations caused by reconfiguration in real time. Furthermore, a relatively exact physical model is employed to generate reference experiences, which are stored in a replay buffer; this enables agents to identify effective actions faster during training and thus greatly enhances the efficiency of learning voltage regulation laws. All agents are trained centrally to learn a coordinated voltage regulation strategy, which is then executed in a decentralized manner based solely on local observations for fast response. The proposed methodology is evaluated on the IEEE 33-node and 136-node systems, where it outperforms previously implemented approaches in both convergence and control performance.
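The physical-assisted mechanism the abstract describes, seeding a replay buffer with reference experiences produced by an approximate physical model before and during training, can be sketched roughly as follows. This is an illustrative sketch only: the names (`SeededReplayBuffer`, `toy_model_step`) and the toy voltage dynamics are assumptions for demonstration, not the authors' implementation.

```python
import random
from collections import deque

class SeededReplayBuffer:
    """Replay buffer that can be pre-filled with reference transitions
    produced by an approximate physical model, so agents observe
    plausible voltage-regulation actions before any real exploration."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def seed_from_model(self, model_step, initial_states):
        # model_step: approximate physics model, state -> (action, reward, next_state)
        for s in initial_states:
            a, r, s_next = model_step(s)
            self.add(s, a, r, s_next)

    def sample(self, batch_size):
        # Mixed sampling: seeded reference experiences and (later) real
        # agent experiences are drawn from the same pool.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Toy stand-in for the physical model: push node voltage toward 1.0 p.u.
def toy_model_step(v):
    action = 1.0 - v              # reactive-power injection proxy
    v_next = v + 0.5 * action     # simplified voltage response
    reward = -abs(1.0 - v_next)   # penalize deviation from nominal
    return action, reward, v_next

buf = SeededReplayBuffer()
buf.seed_from_model(toy_model_step, [0.92, 0.95, 1.05])
batch = buf.sample(2)
```

In the paper's scheme such reference experiences coexist in the buffer with experiences gathered by the agents themselves, which is what accelerates the discovery of effective voltage-regulation actions.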

Suggested Citation

  • Chen, Yongdong & Liu, Youbo & Zhao, Junbo & Qiu, Gao & Yin, Hang & Li, Zhengbo, 2023. "Physical-assisted multi-agent graph reinforcement learning enabled fast voltage regulation for PV-rich active distribution network," Applied Energy, Elsevier, vol. 351(C).
  • Handle: RePEc:eee:appene:v:351:y:2023:i:c:s0306261923011078
    DOI: 10.1016/j.apenergy.2023.121743

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261923011078
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2023.121743?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wang, Xiaodi & Liu, Youbo & Zhao, Junbo & Liu, Chang & Liu, Junyong & Yan, Jinyue, 2021. "Surrogate model enabled deep reinforcement learning for hybrid energy community operation," Applied Energy, Elsevier, vol. 289(C).
    2. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    3. Gao, Yuanqi & Yu, Nanpeng, 2022. "Model-augmented safe reinforcement learning for Volt-VAR control in power distribution networks," Applied Energy, Elsevier, vol. 313(C).
    4. Xiang, Yue & Lu, Yu & Liu, Junyong, 2023. "Deep reinforcement learning based topology-aware voltage regulation of distribution networks with distributed energy storage," Applied Energy, Elsevier, vol. 332(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    2. Zhao, Yincheng & Zhang, Guozhou & Hu, Weihao & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2023. "Meta-learning based voltage control strategy for emergency faults of active distribution networks," Applied Energy, Elsevier, vol. 349(C).
    3. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    4. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    5. Zhu, Xingxu & Hou, Xiangchen & Li, Junhui & Yan, Gangui & Li, Cuiping & Wang, Dongbo, 2023. "Distributed online prediction optimization algorithm for distributed energy resources considering the multi-periods optimal operation," Applied Energy, Elsevier, vol. 348(C).
    6. Kabir, Farzana & Yu, Nanpeng & Gao, Yuanqi & Wang, Wenyu, 2023. "Deep reinforcement learning-based two-timescale Volt-VAR control with degradation-aware smart inverters in power distribution systems," Applied Energy, Elsevier, vol. 335(C).
    7. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    8. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    9. Rabea Jamil Mahfoud & Nizar Faisal Alkayem & Emmanuel Fernandez-Rodriguez & Yuan Zheng & Yonghui Sun & Shida Zhang & Yuquan Zhang, 2024. "Evolutionary Approach for DISCO Profit Maximization by Optimal Planning of Distributed Generators and Energy Storage Systems in Active Distribution Networks," Mathematics, MDPI, vol. 12(2), pages 1-33, January.
    10. Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
    11. Zhang, Zhengfa & da Silva, Filipe Faria & Guo, Yifei & Bak, Claus Leth & Chen, Zhe, 2021. "Double-layer stochastic model predictive voltage control in active distribution networks with high penetration of renewables," Applied Energy, Elsevier, vol. 302(C).
    12. Grigorios L. Kyriakopoulos, 2022. "Energy Communities Overview: Managerial Policies, Economic Aspects, Technologies, and Models," JRFM, MDPI, vol. 15(11), pages 1-45, November.
    13. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
    14. Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
    15. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    16. Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.
    17. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
    18. Alex Chamba & Carlos Barrera-Singaña & Hugo Arcos, 2023. "Optimal Reactive Power Dispatch in Electric Transmission Systems Using the Multi-Agent Model with Volt-VAR Control," Energies, MDPI, vol. 16(13), pages 1-25, June.
    19. Lee, Minwoo & Han, Changho & Kwon, Soonbum & Kim, Yongchan, 2023. "Energy and cost savings through heat trading between two massive prosumers using solar and ground energy systems connected to district heating networks," Energy, Elsevier, vol. 284(C).
    20. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:351:y:2023:i:c:s0306261923011078. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.