Reinforcement learning for an enhanced energy flexibility controller incorporating predictive safety filter and adaptive policy updates
Author
Abstract
Suggested Citation
DOI: 10.1016/j.apenergy.2024.123507
Download full text from publisher
As the access to this document is restricted, you may want to search for a different version of it.
References listed on IDEAS
- Zhang, Shulei & Jia, Runda & Pan, Hengxin & Cao, Yankai, 2023. "A safe reinforcement learning-based charging strategy for electric vehicles in residential microgrid," Applied Energy, Elsevier, vol. 348(C).
- Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
- Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
- Muriithi, Grace & Chowdhury, Sunetra, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
- Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Xu, Xuesong & Xu, Kai & Zeng, Ziyang & Tang, Jiale & He, Yuanxing & Shi, Guangze & Zhang, Tao, 2024. "Collaborative optimization of multi-energy multi-microgrid system: A hierarchical trust-region multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 375(C).
- Harri Aaltonen & Seppo Sierla & Rakshith Subramanya & Valeriy Vyatkin, 2021. "A Simulation Environment for Training a Reinforcement Learning Agent Trading a Battery Storage," Energies, MDPI, vol. 14(17), pages 1-20, September.
- Ayman Al-Quraan & Muhannad Al-Qaisi, 2021. "Modelling, Design and Control of a Standalone Hybrid PV-Wind Micro-Grid System," Energies, MDPI, vol. 14(16), pages 1-23, August.
- Omar A. Beg & Asad Ali Khan & Waqas Ur Rehman & Ali Hassan, 2023. "A Review of AI-Based Cyber-Attack Detection and Mitigation in Microgrids," Energies, MDPI, vol. 16(22), pages 1-23, November.
- Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Zhao, Yincheng & Zhang, Guozhou & Hu, Weihao & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2023. "Meta-learning based voltage control strategy for emergency faults of active distribution networks," Applied Energy, Elsevier, vol. 349(C).
- Fathy, Ahmed, 2023. "Bald eagle search optimizer-based energy management strategy for microgrid with renewable sources and electric vehicles," Applied Energy, Elsevier, vol. 334(C).
- He, Wangli & Li, Chengyuan & Cai, Chenhao & Qing, Xiangyun & Du, Wenli, 2024. "Suppressing active power fluctuations at PCC in grid-connection microgrids via multiple BESSs: A collaborative multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 373(C).
- Sebastian, Oliva H. & Carlos, Bahamonde D., 2024. "Trade-off between frequency stability and renewable generation – Studying virtual inertia from solar PV and operating stability constraints," Renewable Energy, Elsevier, vol. 232(C).
- Liu, Yinyan & Ma, Jin & Xing, Xinjie & Liu, Xinglu & Wang, Wei, 2022. "A home energy management system incorporating data-driven uncertainty-aware user preference," Applied Energy, Elsevier, vol. 326(C).
- Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
- Philippe de Bekker & Sho Cremers & Sonam Norbu & David Flynn & Valentin Robu, 2023. "Improving the Efficiency of Renewable Energy Assets by Optimizing the Matching of Supply and Demand Using a Smart Battery Scheduling Algorithm," Energies, MDPI, vol. 16(5), pages 1-26, March.
- Li, Xiangyu & Luo, Fengji & Li, Chaojie, 2024. "Multi-agent deep reinforcement learning-based autonomous decision-making framework for community virtual power plants," Applied Energy, Elsevier, vol. 360(C).
- Spyros Giannelos & Stefan Borozan & Marko Aunedi & Xi Zhang & Hossein Ameli & Danny Pudjianto & Ioannis Konstantelos & Goran Strbac, 2023. "Modelling Smart Grid Technologies in Optimisation Problems for Electricity Grids," Energies, MDPI, vol. 16(13), pages 1-15, June.
- Wang, Can & Zhang, Jiaheng & Wang, Aoqi & Wang, Zhen & Yang, Nan & Zhao, Zhuoli & Lai, Chun Sing & Lai, Loi Lei, 2024. "Prioritized sum-tree experience replay TD3 DRL-based online energy management of a residential microgrid," Applied Energy, Elsevier, vol. 368(C).
- Alabi, Tobi Michael & Lu, Lin & Yang, Zaiyue, 2024. "Real-time automatic control of multi-energy system for smart district community: A coupling ensemble prediction model and safe deep reinforcement learning," Energy, Elsevier, vol. 304(C).
- Akbari, Ehsan & Mousavi Shabestari, Seyed Farzin & Pirouzi, Sasan & Jadidoleslam, Morteza, 2023. "Network flexibility regulation by renewable energy hubs using flexibility pricing-based energy management," Renewable Energy, Elsevier, vol. 206(C), pages 295-308.
- Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
- Cui, Feifei & An, Dou & Xi, Huan, 2024. "Integrated energy hub dispatch with a multi-mode CAES–BESS hybrid system: An option-based hierarchical reinforcement learning approach," Applied Energy, Elsevier, vol. 374(C).
More about this item
Keywords
Energy flexibility control; Safe continual reinforcement learning; Predictive safety filter; Changepoint detection; Policy updating
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:368:y:2024:i:c:s0306261924008900. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.