
Privacy-preserving multi-level co-regulation of VPPs via hierarchical safe deep reinforcement learning

Author

Listed:
  • Xue, Lin
  • Zhang, Yao
  • Wang, Jianxue
  • Li, Haotian
  • Li, Fangshi

Abstract

Large-scale integration of distributed energy resources has brought challenges to the safe and stable operation of the power grid. The virtual power plant (VPP) is a key technology linking user-side energy resources with the distribution network (DN); how to realize online coordinated scheduling of the VPP and the DN, together with a real-time response strategy for the distributed equipment (DE) within the VPP, is the focus of this study. To this end, the hierarchical deep reinforcement learning (DRL) algorithm Hierarchical-TD3 is designed, based on a unified model of the adjustable space, to achieve real-time economic scheduling of VPPs. In the upper layer, the DN accounts for network security constraints and solves the economic scheduling model of the VPPs with the single-agent TD3 algorithm. Given the scheduling instructions from the upper layer, the lower-layer VPPs respect the requirements of privacy protection and control autonomy and realize real-time response of the DE within each VPP via the multi-agent MATD3 algorithm. Numerical results on the modified 33-node system show that the proposed Hierarchical-TD3 algorithm achieves privacy protection and coordinated scheduling of VPPs and DE, thereby reducing the operating cost. Its result differs from the optimal value by only 1.46%, while enabling online decision-making on the millisecond scale. Compared with traditional centralized and decentralized DRL algorithms, the total cost is reduced by 10.15% and 5.52%, respectively. Compared with the traditional soft-constraint method, no constraint violations occur during the training and testing phases. Finally, an actual 116-node test system validates the scalability of the proposed Hierarchical-TD3 algorithm.
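
The abstract describes a two-level control architecture: an upper-layer single-agent TD3 policy at the DN level issues scheduling set-points to the VPPs, and lower-layer multi-agent MATD3 policies inside each VPP dispatch the distributed equipment in response, with only aggregate quantities crossing the VPP boundary. The following is a minimal sketch of that decision flow under these stated assumptions; it is not the authors' implementation. All class and function names are hypothetical, and the trained TD3/MATD3 actors are stood in by untrained linear maps purely to show how the two layers interact.

# Hypothetical sketch of the hierarchical decision flow described in the abstract.
# The upper layer (stand-in for the DN-level TD3 actor) issues one set-point per VPP;
# each VPP's lower layer (stand-in for its MATD3 actors) dispatches its distributed
# equipment (DE) locally, so device-level data never leaves the VPP.
import numpy as np

rng = np.random.default_rng(0)

class UpperLayerPolicy:
    """Stand-in for the DN-level TD3 actor: maps a DN-level observation
    (e.g. nodal loads, prices, aggregate VPP flexibility) to one power
    set-point per VPP."""
    def __init__(self, obs_dim, n_vpps):
        self.W = rng.normal(scale=0.1, size=(n_vpps, obs_dim))

    def act(self, dn_obs):
        return np.tanh(self.W @ dn_obs)          # normalized set-points in [-1, 1]

class LowerLayerPolicy:
    """Stand-in for one VPP's MATD3 actors: splits the received set-point
    across that VPP's distributed equipment using only local observations,
    reflecting the privacy-preserving, autonomous response in the abstract."""
    def __init__(self, local_obs_dim, n_devices):
        self.W = rng.normal(scale=0.1, size=(n_devices, local_obs_dim + 1))

    def act(self, local_obs, setpoint):
        x = np.append(local_obs, setpoint)       # condition on the DN instruction
        return np.tanh(self.W @ x)               # per-device dispatch actions

def one_scheduling_step(dn_obs, local_obs_list, upper, lowers):
    """One co-regulation step: the DN issues set-points, each VPP responds,
    and only the aggregate response is reported back to the DN."""
    setpoints = upper.act(dn_obs)
    aggregate_response = []
    for sp, obs, lower in zip(setpoints, local_obs_list, lowers):
        device_actions = lower.act(obs, sp)
        aggregate_response.append(device_actions.sum())   # only the total is shared
    return setpoints, np.array(aggregate_response)

if __name__ == "__main__":
    n_vpps, dn_obs_dim, local_obs_dim, n_devices = 3, 8, 5, 4
    upper = UpperLayerPolicy(dn_obs_dim, n_vpps)
    lowers = [LowerLayerPolicy(local_obs_dim, n_devices) for _ in range(n_vpps)]
    sp, agg = one_scheduling_step(rng.normal(size=dn_obs_dim),
                                  [rng.normal(size=local_obs_dim) for _ in range(n_vpps)],
                                  upper, lowers)
    print("DN set-points:", sp)
    print("VPP aggregate responses:", agg)

In the paper's setting, the upper and lower policies would be trained TD3 and MATD3 networks respectively, and the set-point/response exchange would repeat at each scheduling interval; the sketch only illustrates that information flow.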

Suggested Citation

  • Xue, Lin & Zhang, Yao & Wang, Jianxue & Li, Haotian & Li, Fangshi, 2024. "Privacy-preserving multi-level co-regulation of VPPs via hierarchical safe deep reinforcement learning," Applied Energy, Elsevier, vol. 371(C).
  • Handle: RePEc:eee:appene:v:371:y:2024:i:c:s0306261924010377
    DOI: 10.1016/j.apenergy.2024.123654

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924010377
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123654?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Zamani, Ali Ghahgharaee & Zakariazadeh, Alireza & Jadid, Shahram, 2016. "Day-ahead resource scheduling of a renewable energy based virtual power plant," Applied Energy, Elsevier, vol. 169(C), pages 324-340.
    2. Li, Qiang & Wei, Fanchao & Zhou, Yongcheng & Li, Jiajia & Zhou, Guowen & Wang, Zhonghao & Liu, Jinfu & Yan, Peigang & Yu, Daren, 2023. "A scheduling framework for VPP considering multiple uncertainties and flexible resources," Energy, Elsevier, vol. 282(C).
    3. Zhu, Xingxu & Hou, Xiangchen & Li, Junhui & Yan, Gangui & Li, Cuiping & Wang, Dongbo, 2023. "Distributed online prediction optimization algorithm for distributed energy resources considering the multi-periods optimal operation," Applied Energy, Elsevier, vol. 348(C).
    4. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    5. Wang, Xuejie & Zhao, Huiru & Lu, Hao & Zhang, Yuanyuan & Wang, Yuwei & Wang, Jingbo, 2022. "Decentralized coordinated operation model of VPP and P2H systems based on stochastic-bargaining game considering multiple uncertainties and carbon cost," Applied Energy, Elsevier, vol. 312(C).
    6. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    7. Liu, Chunming & Wang, Chunling & Yin, Yujun & Yang, Peihong & Jiang, Hui, 2022. "Bi-level dispatch and control strategy based on model predictive control for community integrated energy system considering dynamic response performance," Applied Energy, Elsevier, vol. 310(C).
    8. Zhou, Huan & Fan, Shuai & Wu, Qing & Dong, Lianxin & Li, Zuyi & He, Guangyu, 2021. "Stimulus-response control strategy based on autonomous decentralized system theory for exploitation of flexibility by virtual power plant," Applied Energy, Elsevier, vol. 285(C).
    9. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    10. Wang, Jiewei & Wei, Ziqing & Zhu, Yikang & Zheng, Chunyuan & Li, Bin & Zhai, Xiaoqiang, 2023. "Demand response via optimal pre-cooling combined with temperature reset strategy for air conditioning system: A case study of office building," Energy, Elsevier, vol. 282(C).
    11. Park, Sung-Won & Son, Sung-Yong, 2020. "Interaction-based virtual power plant operation methodology for distribution system operator’s voltage management," Applied Energy, Elsevier, vol. 271(C).
    12. Xue, Lin & Wang, Jianxue & Zhang, Yao & Yong, Weizhen & Qi, Jie & Li, Haotian, 2023. "Model-data-event based community integrated energy system low-carbon economic scheduling," Renewable and Sustainable Energy Reviews, Elsevier, vol. 182(C).
    13. Xiang, Yue & Lu, Yu & Liu, Junyong, 2023. "Deep reinforcement learning based topology-aware voltage regulation of distribution networks with distributed energy storage," Applied Energy, Elsevier, vol. 332(C).
    14. Chang, Weiguang & Yang, Qiang, 2023. "Low carbon oriented collaborative energy management framework for multi-microgrid aggregated virtual power plant considering electricity trading," Applied Energy, Elsevier, vol. 351(C).
    15. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    16. Zhu, Ziqing & Wing Chan, Ka & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2021. "Real-Time interaction of active distribution network and virtual microgrids: Market paradigm and data-driven stakeholder behavior analysis," Applied Energy, Elsevier, vol. 297(C).
    17. Guo, Guodong & Zhang, Mengfan & Gong, Yanfeng & Xu, Qianwen, 2023. "Safe multi-agent deep reinforcement learning for real-time decentralized control of inverter based renewable energy resources considering communication delay," Applied Energy, Elsevier, vol. 349(C).
    18. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    19. Fan, Shuai & Liu, Jiang & Wu, Qing & Cui, Mingjian & Zhou, Huan & He, Guangyu, 2020. "Optimal coordination of virtual power plant with photovoltaics and electric vehicles: A temporally coupled distributed online algorithm," Applied Energy, Elsevier, vol. 277(C).
    20. Song, Jiancai & Bian, Tianxiang & Xue, Guixiang & Wang, Hanyu & Shen, Xingliang & Wu, Xiangdong, 2023. "Short-term forecasting model for residential indoor temperature in DHS based on sequence generative adversarial network," Applied Energy, Elsevier, vol. 348(C).
    21. Huang, Ruchen & He, Hongwen & Zhao, Xuyang & Wang, Yunlong & Li, Menglin, 2022. "Battery health-aware and naturalistic data-driven energy management for hybrid electric bus based on TD3 deep reinforcement learning algorithm," Applied Energy, Elsevier, vol. 321(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ju, Liwei & Yin, Zhe & Lu, Xiaolong & Yang, Shenbo & Li, Peng & Rao, Rao & Tan, Zhongfu, 2022. "A Tri-dimensional Equilibrium-based stochastic optimal dispatching model for a novel virtual power plant incorporating carbon Capture, Power-to-Gas and electric vehicle aggregator," Applied Energy, Elsevier, vol. 324(C).
    2. Cao, Jinye & Yang, Dechang & Dehghanian, Payman, 2024. "Cooperative operation for multiple virtual power plants considering energy-carbon trading: A Nash bargaining model," Energy, Elsevier, vol. 307(C).
    3. Hou, Guolian & Huang, Ting & Zheng, Fumeng & Huang, Congzhi, 2024. "A hierarchical reinforcement learning GPC for flexible operation of ultra-supercritical unit considering economy," Energy, Elsevier, vol. 289(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    6. Francesco Gulotta & Edoardo Daccò & Alessandro Bosisio & Davide Falabretti, 2023. "Opening of Ancillary Service Markets to Distributed Energy Resources: A Review," Energies, MDPI, vol. 16(6), pages 1-25, March.
    7. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    8. Zhao, Yincheng & Zhang, Guozhou & Hu, Weihao & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2023. "Meta-learning based voltage control strategy for emergency faults of active distribution networks," Applied Energy, Elsevier, vol. 349(C).
    9. Ren, Junzhi & Zeng, Yuan & Qin, Chao & Li, Bao & Wang, Ziqiang & Yuan, Quan & Zhai, Hefeng & Li, Peng, 2024. "Characterization and application of flexible operation region of virtual power plant," Applied Energy, Elsevier, vol. 371(C).
    10. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    11. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    12. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    13. Jiang, Yuzheng & Dong, Jun & Huang, Hexiang, 2024. "Optimal bidding strategy for the price-maker virtual power plant in the day-ahead market based on multi-agent twin delayed deep deterministic policy gradient algorithm," Energy, Elsevier, vol. 306(C).
    14. Chen, Yongdong & Liu, Youbo & Zhao, Junbo & Qiu, Gao & Yin, Hang & Li, Zhengbo, 2023. "Physical-assisted multi-agent graph reinforcement learning enabled fast voltage regulation for PV-rich active distribution network," Applied Energy, Elsevier, vol. 351(C).
    15. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    16. Zhou, Huan & Fan, Shuai & Wu, Qing & Dong, Lianxin & Li, Zuyi & He, Guangyu, 2021. "Stimulus-response control strategy based on autonomous decentralized system theory for exploitation of flexibility by virtual power plant," Applied Energy, Elsevier, vol. 285(C).
    17. Catra Indra Cahyadi & Suwarno Suwarno & Aminah Asmara Dewi & Musri Kona & Muhammad Arif & Muhammad Caesar Akbar, 2023. "Solar Prediction Strategy for Managing Virtual Power Stations," International Journal of Energy Economics and Policy, Econjournals, vol. 13(4), pages 503-512, July.
    18. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    19. Cagnano, A. & De Tuglie, E. & Mancarella, P., 2020. "Microgrids: Overview and guidelines for practical implementations and operation," Applied Energy, Elsevier, vol. 258(C).
    20. Bianca Goia & Tudor Cioara & Ionut Anghel, 2022. "Virtual Power Plant Optimization in Smart Grids: A Narrative Review," Future Internet, MDPI, vol. 14(5), pages 1-22, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:371:y:2024:i:c:s0306261924010377. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.