Printed from https://ideas.repec.org/a/gam/jeners/v17y2024i17p4471-d1472458.html

A Deep Reinforcement Learning Optimization Method Considering Network Node Failures

Authors
  • Xueying Ding

    (State Grid Information and Telecommunication Group Co., Ltd., Beijing 100029, China)

  • Xiao Liao

    (State Grid Information and Telecommunication Group Co., Ltd., Beijing 100029, China)

  • Wei Cui

    (State Grid Information and Telecommunication Group Co., Ltd., Beijing 100029, China)

  • Xiangliang Meng

    (State Grid Information and Telecommunication Group Co., Ltd., Beijing 100029, China)

  • Ruosong Liu

    (School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China)

  • Qingshan Ye

    (School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China)

  • Donghe Li

    (School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China)

Abstract

Modern microgrid systems are characterized by diverse power sources and complex network structures. Existing studies of microgrid fault diagnosis and troubleshooting mostly focus on fault detection and operation optimization for a single power device. In increasingly complex microgrid systems, however, it becomes ever harder to contain faults within a limited spatiotemporal range, so power faults can spread and pose great harm to the safety of the microgrid. The deep-reinforcement-learning-based topology optimization proposed in this paper starts from the grid as a whole and minimizes the overall failure rate of the microgrid by optimizing the power grid's topology. This approach confines internal faults to a small region, greatly improving the safety and reliability of microgrid operation. The proposed method optimizes the network topology for both single-node and multi-node faults, reducing the influence range of node faults by 21% and 58%, respectively.

Suggested Citation

  • Xueying Ding & Xiao Liao & Wei Cui & Xiangliang Meng & Ruosong Liu & Qingshan Ye & Donghe Li, 2024. "A Deep Reinforcement Learning Optimization Method Considering Network Node Failures," Energies, MDPI, vol. 17(17), pages 1-13, September.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:17:p:4471-:d:1472458

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/17/4471/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/17/4471/
    Download Restriction: no

    References listed on IDEAS

    1. Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ekaterina V. Orlova, 2023. "Dynamic Regimes for Corporate Human Capital Development Used Reinforcement Learning Methods," Mathematics, MDPI, vol. 11(18), pages 1-22, September.
    2. Andrea Tortorelli & Giulia Sabina & Barbara Marchetti, 2024. "A Cooperative Multi-Agent Q-Learning Control Framework for Real-Time Energy Management in Energy Communities," Energies, MDPI, vol. 17(20), pages 1-27, October.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.