Centralised rehearsal of decentralised cooperation: Multi-agent reinforcement learning for the scalable coordination of residential energy flexibility
Author
Abstract
Suggested Citation
DOI: 10.1016/j.apenergy.2024.124406
References listed on IDEAS
- Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
- Crozier, Constance & Apostolopoulou, Dimitra & McCulloch, Malcolm, 2018. "Mitigating the impact of personal vehicle electrification: A power generation perspective," Energy Policy, Elsevier, vol. 118(C), pages 474-481.
- Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
- Jacopo Torriti, 2022. "Household electricity demand, the intrinsic flexibility index and UK wholesale electricity market prices," Environmental Economics and Policy Studies, Springer; Society for Environmental Economics and Policy Studies - SEEPS, vol. 24(1), pages 7-27, January.
- Darby, Sarah J., 2020. "Demand response and smart technology in theory and practice: Customer experiences and system actors," Energy Policy, Elsevier, vol. 143(C).
- Guerrero, Jaysson & Gebbran, Daniel & Mhanna, Sleiman & Chapman, Archie C. & Verbič, Gregor, 2020. "Towards a transactive energy system for integration of distributed energy resources: Home energy management, distributed optimal power flow, and peer-to-peer energy trading," Renewable and Sustainable Energy Reviews, Elsevier, vol. 132(C).
- Jin-Gyeom Kim & Bowon Lee, 2020. "Automatic P2P Energy Trading Model Based on Reinforcement Learning Using Long Short-Term Delayed Reward," Energies, MDPI, vol. 13(20), pages 1-27, October.
- Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
- Zhang, Xiaoshun & Bao, Tao & Yu, Tao & Yang, Bo & Han, Chuanjia, 2017. "Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid," Energy, Elsevier, vol. 133(C), pages 348-365.
- Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
- Dufo-López, Rodolfo & Lujano-Rojas, Juan M. & Bernal-Agustín, José L., 2014. "Comparison of different lead–acid battery lifetime prediction models for use in simulation of stand-alone photovoltaic systems," Applied Energy, Elsevier, vol. 115(C), pages 242-253.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
- Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
- Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
- Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
- Tsaousoglou, Georgios & Giraldo, Juan S. & Paterakis, Nikolaos G., 2022. "Market Mechanisms for Local Electricity Markets: A review of models, solution concepts and algorithmic techniques," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
- Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
- Cai, Qiran & Xu, Qingyang & Qing, Jing & Shi, Gang & Liang, Qiao-Mei, 2022. "Promoting wind and photovoltaics renewable energy integration through demand response: Dynamic pricing mechanism design and economic analysis for smart residential communities," Energy, Elsevier, vol. 261(PB).
- Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
- Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
- Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
- Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
- Philipp Wohlgenannt & Sebastian Hegenbart & Elias Eder & Mohan Kolhe & Peter Kepplinger, 2024. "Energy Demand Response in a Food-Processing Plant: A Deep Reinforcement Learning Approach," Energies, MDPI, vol. 17(24), pages 1-19, December.
- Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
- Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
- Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
- Carlos Cruz & Esther Palomar & Ignacio Bravo & Alfredo Gardel, 2020. "Cooperative Demand Response Framework for a Smart Community Targeting Renewables: Testbed Implementation and Performance Evaluation," Energies, MDPI, vol. 13(11), pages 1-20, June.
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Aya Amer & Khaled Shaban & Ahmed Massoud, 2022. "Demand Response in HEMSs Using DRL and the Impact of Its Various Configurations and Environmental Changes," Energies, MDPI, vol. 15(21), pages 1-20, November.
- Ottavia Valentini & Nikoleta Andreadou & Paolo Bertoldi & Alexandre Lucas & Iolanda Saviuc & Evangelos Kotsakis, 2022. "Demand Response Impact Evaluation: A Review of Methods for Estimating the Customer Baseline Load," Energies, MDPI, vol. 15(14), pages 1-36, July.
- Tushar, Wayes & Yuen, Chau & Saha, Tapan K. & Morstyn, Thomas & Chapman, Archie C. & Alam, M. Jan E. & Hanif, Sarmad & Poor, H. Vincent, 2021. "Peer-to-peer energy systems for connected communities: A review of recent advances and emerging challenges," Applied Energy, Elsevier, vol. 282(PA).
More about this item
Keywords
Cooperative systems; Distributed control; Demand-side response; Electric vehicles; Energy management system; Multi-agent reinforcement learning
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:377:y:2025:i:pa:s0306261924017896. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.