
Operation of Distributed Battery Considering Demand Response Using Deep Reinforcement Learning in Grid Edge Control

Author

Listed:
  • Wenying Li

    (Sichuan Energy Internet Research Institute, Tsinghua University, Chengdu 610042, China)

  • Ming Tang

    (Sichuan Energy Internet Research Institute, Tsinghua University, Chengdu 610042, China)

  • Xinzhen Zhang

    (Sichuan Energy Internet Research Institute, Tsinghua University, Chengdu 610042, China)

  • Danhui Gao

    (Sichuan Energy Internet Research Institute, Tsinghua University, Chengdu 610042, China)

  • Jian Wang

    (Sichuan Energy Internet Research Institute, Tsinghua University, Chengdu 610042, China)

Abstract

Battery energy storage systems (BESSs) can facilitate economical grid operation through demand response (DR) and are regarded as the most significant DR resource. Among them, distributed BESSs integrated with home photovoltaics (PV) have developed rapidly and account for nearly 40% of newly installed capacity. However, the use scenarios and utilization efficiency of distributed BESSs remain insufficient to exploit their load potential and to overcome the uncertainties caused by uncoordinated operation. In this paper, the low-voltage transformer-powered area (LVTPA) is first defined, and a DR grid edge controller based on deep reinforcement learning is then implemented to maximize total DR benefits and promote three-phase balance in the LVTPA. The proposed DR problem is formulated as a Markov decision process (MDP), and the deep deterministic policy gradient (DDPG) algorithm is applied to train the controller so that it learns the optimal DR strategy. Furthermore, a life cycle cost model of the BESS is established and incorporated into the DR scheme to measure the income. Numerical results, compared against deep Q-learning and model-based methods, demonstrate the effectiveness and validity of the proposed method.
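
To make the approach described in the abstract concrete, the sketch below shows how a distributed-BESS demand-response problem can be posed as an MDP and paired with the deterministic actor that DDPG trains. This is a minimal illustrative toy, not the authors' implementation: the battery ratings, tariff levels, load/PV profiles, degradation-cost constant, and network sizes are assumptions, and the full DDPG machinery (critic, replay buffer, target networks, exploration noise, three-phase balance term) is omitted.

```python
# Illustrative sketch only (assumed parameters); not the paper's model or code.
import numpy as np
import torch
import torch.nn as nn

class BessDrEnv:
    """Toy MDP: state = (hour, load, PV, price, SOC); action = charge power in [-1, 1] * P_MAX."""
    P_MAX, CAP_KWH, DT = 5.0, 13.5, 1.0   # assumed battery power (kW), capacity (kWh), step (h)
    DEG_COST = 0.05                        # assumed cycling cost per kWh of throughput

    def reset(self):
        self.hour, self.soc = 0, 0.5
        return self._obs()

    def _obs(self):
        load = 1.0 + 0.5 * np.sin(2 * np.pi * self.hour / 24)        # placeholder load profile (kW)
        pv = max(0.0, 3.0 * np.sin(np.pi * (self.hour - 6) / 12))    # placeholder PV profile (kW)
        price = 0.30 if 17 <= self.hour <= 21 else 0.10              # assumed two-level tariff
        return np.array([self.hour / 23, load, pv, price, self.soc], dtype=np.float32)

    def step(self, action):
        _, load, pv, price, _ = self._obs()
        p = float(np.clip(action, -1.0, 1.0)) * self.P_MAX           # >0 charge, <0 discharge
        p = float(np.clip(p, (0.1 - self.soc) * self.CAP_KWH / self.DT,
                             (0.9 - self.soc) * self.CAP_KWH / self.DT))  # respect SOC limits
        self.soc += p * self.DT / self.CAP_KWH
        grid = load - pv + p                                          # net import from the grid
        reward = -price * max(grid, 0.0) * self.DT - self.DEG_COST * abs(p) * self.DT
        self.hour += 1
        return self._obs(), reward, self.hour >= 24

class Actor(nn.Module):
    """Deterministic policy mu(s) -> action in [-1, 1], the policy network DDPG optimizes."""
    def __init__(self, obs_dim=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

# Roll out one day with an untrained actor (DDPG training loop omitted for brevity).
env, actor = BessDrEnv(), Actor()
obs, total, done = env.reset(), 0.0, False
while not done:
    with torch.no_grad():
        a = actor(torch.from_numpy(obs)).item()
    obs, r, done = env.step(a)
    total += r
print(f"untrained episode reward: {total:.2f}")
```

In a full DDPG setup, a critic Q(s, a) would be fitted to bootstrapped targets from a replay buffer, and the actor above would be updated along the critic's action gradient; the reward would also include the DR incentive and three-phase-imbalance terms the paper optimizes.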

Suggested Citation

  • Wenying Li & Ming Tang & Xinzhen Zhang & Danhui Gao & Jian Wang, 2021. "Operation of Distributed Battery Considering Demand Response Using Deep Reinforcement Learning in Grid Edge Control," Energies, MDPI, vol. 14(22), pages 1-18, November.
  • Handle: RePEc:gam:jeners:v:14:y:2021:i:22:p:7749-:d:682290

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/14/22/7749/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/14/22/7749/
    Download Restriction: no

    References listed on IDEAS

    1. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. James Amankwah Adu & Alberto Berizzi & Francesco Conte & Fabio D’Agostino & Valentin Ilea & Fabio Napolitano & Tadeo Pontecorvo & Andrea Vicario, 2022. "Power System Stability Analysis of the Sicilian Network in the 2050 OSMOSE Project Scenario," Energies, MDPI, vol. 15(10), pages 1-33, May.
    2. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Dominique Barth & Benjamin Cohen-Boulakia & Wilfried Ehounou, 2022. "Distributed Reinforcement Learning for the Management of a Smart Grid Interconnecting Independent Prosumers," Energies, MDPI, vol. 15(4), pages 1-19, February.
    2. Tsoumalis, Georgios I. & Bampos, Zafeirios N. & Biskas, Pandelis N. & Keranidis, Stratos D. & Symeonidis, Polychronis A. & Voulgarakis, Dimitrios K., 2022. "A novel system for providing explicit demand response from domestic natural gas boilers," Applied Energy, Elsevier, vol. 317(C).
    3. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    6. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    7. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    8. Ussama Assad & Muhammad Arshad Shehzad Hassan & Umar Farooq & Asif Kabir & Muhammad Zeeshan Khan & S. Sabahat H. Bukhari & Zain ul Abidin Jaffri & Judit Oláh & József Popp, 2022. "Smart Grid, Demand Response and Optimization: A Critical Review of Computational Methods," Energies, MDPI, vol. 15(6), pages 1-36, March.
    9. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    10. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    11. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    12. Pallonetto, Fabiano & De Rosa, Mattia & Milano, Federico & Finn, Donal P., 2019. "Demand response algorithms for smart-grid ready residential buildings using machine learning models," Applied Energy, Elsevier, vol. 239(C), pages 1265-1282.
    13. Vo-Van Thanh & Wencong Su & Bin Wang, 2022. "Optimal DC Microgrid Operation with Model Predictive Control-Based Voltage-Dependent Demand Response and Optimal Battery Dispatch," Energies, MDPI, vol. 15(6), pages 1-19, March.
    14. Xu, Fangyuan & Zhu, Weidong & Wang, Yi Fei & Lai, Chun Sing & Yuan, Haoliang & Zhao, Yujia & Guo, Siming & Fu, Zhengxin, 2022. "A new deregulated demand response scheme for load over-shifting city in regulated power market," Applied Energy, Elsevier, vol. 311(C).
    15. Kalim Ullah & Sajjad Ali & Taimoor Ahmad Khan & Imran Khan & Sadaqat Jan & Ibrar Ali Shah & Ghulam Hafeez, 2020. "An Optimal Energy Optimization Strategy for Smart Grid Integrated with Renewable Energy Sources and Demand Response Programs," Energies, MDPI, vol. 13(21), pages 1-17, November.
    16. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    17. Lu, Renzhi & Bai, Ruichang & Ding, Yuemin & Wei, Min & Jiang, Junhui & Sun, Mingyang & Xiao, Feng & Zhang, Hai-Tao, 2021. "A hybrid deep learning-based online energy management scheme for industrial microgrid," Applied Energy, Elsevier, vol. 304(C).
    18. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    19. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    20. Boza, Pal & Evgeniou, Theodoros, 2021. "Artificial intelligence to support the integration of variable renewable energy sources to the power system," Applied Energy, Elsevier, vol. 290(C).


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.