Printed from https://ideas.repec.org/a/gam/jeners/v17y2024i24p6421-d1548226.html

Adaptive Control of VSG Inertia Damping Based on MADDPG

Author

Listed:
  • Demu Zhang

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Jing Zhang

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Yu He

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Tao Shen

    (College of Electrical Engineering, Guizhou University, Guiyang 550025, China)

  • Xingyan Liu

    (Power Grid Planning and Research Center of Guizhou Power Grid Co., Ltd., Guiyang 550002, China)

Abstract

As renewable energy sources become more integrated into the power grid, traditional virtual synchronous generator (VSG) control strategies have become inadequate for today's low-damping, low-inertia power systems. This paper therefore proposes a VSG inertia and damping adaptive control method based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm. The paper first introduces the working principle of the virtual synchronous generator and establishes a corresponding VSG model. Based on this model, the influence of variations in the virtual inertia (J) and damping (D) coefficients on fluctuations in active power output is examined, and the action space for J and D is defined. The proposed method follows the “centralized training and decentralized execution” paradigm. In the centralized training phase, each agent’s critic network shares global observation and action information to guide the actor network in policy optimization. In the decentralized execution phase, each agent observes the frequency deviation and the rate of change of angular frequency and uses the learned policy to adjust the virtual inertia J and damping coefficient D in real time. Finally, the effectiveness of the proposed MADDPG control strategy is validated through comparison with adaptive control and DDPG control methods.
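To make the control loop described in the abstract concrete, the sketch below illustrates only the decentralized execution phase, under stated assumptions: a common per-unit VSG swing-equation form J dω/dt = P_m − P_e − D(ω − ω_n), a placeholder linear policy standing in for the trained MADDPG actor, and illustrative bounds for the J and D action space. All names, bounds, and numerical values are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the decentralized execution phase described in the abstract.
# All bounds, gains, and operating values below are illustrative assumptions,
# not the paper's settings.
J_MIN, J_MAX = 0.1, 2.0      # assumed action-space bounds for virtual inertia J
D_MIN, D_MAX = 5.0, 50.0     # assumed action-space bounds for damping coefficient D
OMEGA_N = 1.0                # nominal angular frequency (per unit)
DT = 1e-3                    # integration step (s)


def actor_policy(obs, weights):
    """Stand-in for a trained MADDPG actor: maps the local observation
    (frequency deviation, rate of change of angular frequency) to (J, D)
    inside the action space. A real actor would be a neural network trained
    with a centralized critic that sees all agents' observations and actions."""
    raw = np.tanh(weights @ obs)                        # squash outputs to [-1, 1]
    j = J_MIN + 0.5 * (raw[0] + 1.0) * (J_MAX - J_MIN)  # rescale to [J_MIN, J_MAX]
    d = D_MIN + 0.5 * (raw[1] + 1.0) * (D_MAX - D_MIN)  # rescale to [D_MIN, D_MAX]
    return j, d


def swing_step(omega, p_m, p_e, j, d, dt=DT):
    """One Euler step of a common per-unit VSG swing-equation form:
    J * d(omega)/dt = P_m - P_e - D * (omega - OMEGA_N)."""
    domega = (p_m - p_e - d * (omega - OMEGA_N)) / j
    return omega + domega * dt


# Decentralized execution: each VSG agent acts on its own local measurements only.
weights = np.zeros((2, 2))   # placeholder for trained actor parameters
omega, prev_omega = OMEGA_N, OMEGA_N
for _ in range(1000):
    obs = np.array([omega - OMEGA_N, (omega - prev_omega) / DT])
    j, d = actor_policy(obs, weights)
    prev_omega = omega
    omega = swing_step(omega, p_m=1.0, p_e=1.2, j=j, d=d)

print(f"J={j:.2f}, D={d:.2f}, frequency deviation={omega - OMEGA_N:.4f} p.u.")
```

In the full MADDPG scheme, the critic that trains such a policy would additionally receive the other agents' observations and actions during training; that centralized training step is what the execution-only sketch above omits.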

Suggested Citation

  • Demu Zhang & Jing Zhang & Yu He & Tao Shen & Xingyan Liu, 2024. "Adaptive Control of VSG Inertia Damping Based on MADDPG," Energies, MDPI, vol. 17(24), pages 1-16, December.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:24:p:6421-:d:1548226

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/24/6421/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/24/6421/
    Download Restriction: no

    References listed on IDEAS

    1. Luqin Fan & Jing Zhang & Yu He & Ying Liu & Tao Hu & Heng Zhang, 2021. "Optimal Scheduling of Microgrid Based on Deep Deterministic Policy Gradient and Transfer Learning," Energies, MDPI, vol. 14(3), pages 1-15, January.
    2. Erico Gurski & Roman Kuiava & Filipe Perez & Raphael A. S. Benedito & Gilney Damm, 2024. "A Novel VSG with Adaptive Virtual Inertia and Adaptive Damping Coefficient to Improve Transient Frequency Response of Microgrids," Energies, MDPI, vol. 17(17), pages 1-22, September.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bingyin Lei & Yue Ren & Huiyu Luan & Ruonan Dong & Xiuyuan Wang & Junli Liao & Shu Fang & Kaiye Gao, 2023. "A Review of Optimization for System Reliability of Microgrid," Mathematics, MDPI, vol. 11(4), pages 1-30, February.
    2. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    3. Bing Liu & Bowen Xu & Tong He & Wei Yu & Fanghong Guo, 2022. "Hybrid Deep Reinforcement Learning Considering Discrete-Continuous Action Spaces for Real-Time Energy Management in More Electric Aircraft," Energies, MDPI, vol. 15(17), pages 1-21, August.
    4. Ying Ji & Jianhui Wang & Jiacan Xu & Donglin Li, 2021. "Data-Driven Online Energy Scheduling of a Microgrid Based on Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-19, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:24:p:6421-:d:1548226. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.