Printed from https://ideas.repec.org/a/eee/appene/v310y2022ics0306261921017128.html

Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures

Author

Listed:
  • Pinto, Giuseppe
  • Kathirgamanathan, Anjukan
  • Mangina, Eleni
  • Finn, Donal P.
  • Capozzoli, Alfonso

Abstract

The increasing penetration of renewable energy sources has the potential to contribute towards the decarbonisation of the building energy sector. However, this transition brings its own challenges, including energy integration and potential grid instability issues arising due to the stochastic nature of variable renewable energy sources. One potential approach to address these issues is demand side management, which is increasingly seen as a promising solution to improve grid stability. This is achieved by exploiting demand flexibility and shifting peak demand towards periods of peak renewable energy generation. However, the energy flexibility of a single building needs to be coordinated with other buildings to be used in a flexibility market. In this context, multi-agent systems represent a promising tool for improving the energy management of buildings at the district and grid scale. The present research formulates the energy management of four buildings equipped with thermal energy storage and PV systems as a multi-agent problem. Two multi-agent reinforcement learning methods are explored: a centralised (coordinated) controller and a decentralised (cooperative) controller, both benchmarked against a rule-based controller. The two controllers were tested for three different climates, outperforming the rule-based controller by 3% and 7% respectively for cost, and by 10% and 14% respectively for peak demand. The study shows that the multi-agent cooperative approach may be more suitable for districts with heterogeneous objectives within the individual buildings.
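The distinction the abstract draws between centralised (coordinated) and decentralised (cooperative) control can be illustrated with a minimal sketch. This is not the authors' implementation (they use multi-agent reinforcement learning); it is a toy rule-based analogue in which all function names, the district charging cap, and the PV/demand values are hypothetical, chosen only to show where district-level coordination enters the decision.

```python
# Toy sketch (not the paper's method): each building charges its thermal
# storage when local PV exceeds demand. A decentralised controller applies
# this rule per building; a centralised one additionally caps the district's
# total charging, standing in for coordination toward a shared peak target.

def rule_based(pv, demand):
    """Charge storage (+1) when PV exceeds demand, otherwise discharge (-1)."""
    return 1.0 if pv > demand else -1.0

def decentralised(observations):
    """Each building acts independently on its local (pv, demand) observation."""
    return [rule_based(pv, d) for pv, d in observations]

def centralised(observations, cap=2.0):
    """A single controller sees all buildings and scales down simultaneous
    charging so the district total stays under a hypothetical cap."""
    actions = [rule_based(pv, d) for pv, d in observations]
    total_charge = sum(a for a in actions if a > 0)
    if total_charge > cap:
        scale = cap / total_charge
        actions = [a * scale if a > 0 else a for a in actions]
    return actions

# Four buildings, as in the study; (pv, demand) values are illustrative.
district = [(5.0, 3.0), (4.0, 2.0), (1.0, 3.0), (6.0, 2.0)]
print(decentralised(district))  # [1.0, 1.0, -1.0, 1.0]
print(centralised(district))    # charging scaled so it sums to the cap
```

The design point is where information flows: the decentralised agents never see each other's states, while the centralised controller needs every building's observation before acting, mirroring the coordination/cooperation trade-off the paper evaluates.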

Suggested Citation

  • Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
  • Handle: RePEc:eee:appene:v:310:y:2022:i:c:s0306261921017128
    DOI: 10.1016/j.apenergy.2021.118497

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261921017128
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2021.118497?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Marta Victoria & Kun Zhu & Tom Brown & Gorm B. Andresen & Martin Greiner, 2020. "Early decarbonisation of the European energy system pays off," Nature Communications, Nature, vol. 11(1), pages 1-9, December.
    2. Gianluca Serale & Massimo Fiorentini & Alfonso Capozzoli & Daniele Bernardini & Alberto Bemporad, 2018. "Model Predictive Control (MPC) for Enhancing Building and HVAC System Energy Efficiency: Problem Formulation, Applications and Opportunities," Energies, MDPI, vol. 11(3), pages 1-35, March.
    3. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    4. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    5. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    6. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    7. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    8. Hu, Maomao & Xiao, Fu & Wang, Shengwei, 2021. "Neighborhood-level coordination and negotiation techniques for managing demand-side flexibility in residential microgrids," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
    9. Labeodan, Timilehin & Aduda, Kennedy & Boxem, Gert & Zeiler, Wim, 2015. "On the application of multi-agent systems in buildings for improved building operations, performance and smart grid interaction – A survey," Renewable and Sustainable Energy Reviews, Elsevier, vol. 50(C), pages 1405-1414.
    10. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    11. Lund, Peter D. & Lindgren, Juuso & Mikkola, Jani & Salpakari, Jyri, 2015. "Review of energy system flexibility measures to enable high levels of variable renewable electricity," Renewable and Sustainable Energy Reviews, Elsevier, vol. 45(C), pages 785-807.
    12. Mohamed, Mohamed A. & Jin, Tao & Su, Wencong, 2020. "Multi-agent energy management of smart islands using primal-dual method of multipliers," Energy, Elsevier, vol. 208(C).
    13. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    14. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    15. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    16. Xiong, Linyun & Li, Penghan & Wang, Ziqiang & Wang, Jie, 2020. "Multi-agent based multi objective renewable energy management for diversified community power consumers," Applied Energy, Elsevier, vol. 259(C).
    17. Warren, Peter, 2014. "A review of demand-side management policy in the UK," Renewable and Sustainable Energy Reviews, Elsevier, vol. 29(C), pages 941-951.
    18. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    19. Klein, Konstantin & Herkel, Sebastian & Henning, Hans-Martin & Felsmann, Clemens, 2017. "Load shifting using the heating and cooling system of an office building: Quantitative potential evaluation for different flexibility and storage options," Applied Energy, Elsevier, vol. 203(C), pages 917-937.
    20. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Wu, Long & Yin, Xunyuan & Pan, Lei & Liu, Jinfeng, 2023. "Distributed economic predictive control of integrated energy systems for enhanced synergy and grid response: A decomposition and cooperation strategy," Applied Energy, Elsevier, vol. 349(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    4. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    5. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    6. Ren, Haoshan & Ma, Zhenjun & Fai Norman Tse, Chung & Sun, Yongjun, 2022. "Optimal control of solar-powered electric bus networks with improved renewable energy on-site consumption and reduced grid dependence," Applied Energy, Elsevier, vol. 323(C).
    7. Qiu, Dawei & Xue, Juxing & Zhang, Tingqi & Wang, Jianhong & Sun, Mingyang, 2023. "Federated reinforcement learning for smart building joint peer-to-peer energy and carbon allowance trading," Applied Energy, Elsevier, vol. 333(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    3. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    4. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    5. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    6. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    7. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    8. Silvestri, Alberto & Coraci, Davide & Brandi, Silvio & Capozzoli, Alfonso & Borkowski, Esther & Köhler, Johannes & Wu, Duan & Zeilinger, Melanie N. & Schlueter, Arno, 2024. "Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control," Applied Energy, Elsevier, vol. 368(C).
    9. Guo, Yurun & Wang, Shugang & Wang, Jihong & Zhang, Tengfei & Ma, Zhenjun & Jiang, Shuang, 2024. "Key district heating technologies for building energy flexibility: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 189(PB).
    10. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    11. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    12. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    13. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    14. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    15. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    16. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    17. Kathirgamanathan, Anjukan & De Rosa, Mattia & Mangina, Eleni & Finn, Donal P., 2021. "Data-driven predictive control for unlocking building energy flexibility: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
    18. Seppo Sierla & Heikki Ihasalo & Valeriy Vyatkin, 2022. "A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems," Energies, MDPI, vol. 15(10), pages 1-25, May.
    19. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    20. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:310:y:2022:i:c:s0306261921017128. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.