
A Multi-Agent Reinforcement Learning Approach to Price and Comfort Optimization in HVAC-Systems

Author

Listed:
  • Christian Blad

    (Robotics & Automation Group, Department of Materials and Production, Aalborg University, 9220 Aalborg, Denmark
    Technology and Innovation, Control Department, Grundfos, 8850 Bjerringbro, Denmark
    Current address: Fibigerstræde 16, 9220 Aalborg, Denmark.
    These authors contributed equally to this work.)

  • Simon Bøgh

    (Robotics & Automation Group, Department of Materials and Production, Aalborg University, 9220 Aalborg, Denmark
    These authors contributed equally to this work.)

  • Carsten Kallesøe

    (Technology and Innovation, Control Department, Grundfos, 8850 Bjerringbro, Denmark
    Department of Electronic Systems, Aalborg University, 9220 Aalborg, Denmark
    These authors contributed equally to this work.)

Abstract

This paper addresses the challenge of minimizing training time for the control of Heating, Ventilation, and Air-conditioning (HVAC) systems with online Reinforcement Learning (RL). It does so by developing a novel Multi-Agent Reinforcement Learning (MARL) approach for HVAC systems. The environment formed by the HVAC system is formulated as a Markov Game (MG) in a general-sum setting. The MARL algorithm has a decentralized structure in which only relevant states are shared between agents, and actions are shared in a sequence that is sensible from a system's point of view. The simulation environment is a domestic house in Denmark designed to resemble an average house. The heat source is an air-to-water heat pump, and the HVAC system is an Underfloor Heating (UFH) system. The house is subjected to weather data collected in Copenhagen in 2006, spanning the entire year except June, July, and August, when heating is not required. It is shown that: (1) compared with Single-Agent Reinforcement Learning (SARL), MARL reduces training time by 70% for a four-temperature-zone UFH system; (2) the agents can learn and generalize over seasons; (3) relative to a traditional control method, the cost of heating is reduced by 19%, equivalent to 750 kWh of electric energy per year for an average Danish domestic house; and (4) oscillations in room temperature are reduced by 40% when comparing the RL control methods with a traditional control method.
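To make the decentralized structure described above concrete, the following is a minimal sketch, not the authors' implementation: one agent per temperature zone acts in a fixed sequence, and each agent observes only its own zone temperature plus the actions already chosen by the agents before it. Tabular Q-learning and the toy four-zone environment stand in for the deep RL agents and the simulated Danish house used in the paper; all names (ZoneAgent, ToyUFHEnv, run_episode) and the reward weights are hypothetical.

import random
from collections import defaultdict

class ZoneAgent:
    """One learner per temperature zone. Tabular Q-learning stands in
    for the deep RL used in the paper, purely to keep the sketch short."""

    def __init__(self, n_actions=2, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=self.q[state].__getitem__)

    def update(self, state, action, reward, next_state):
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

class ToyUFHEnv:
    """Crude four-zone underfloor-heating stand-in: each zone cools toward
    the outdoors and heats while its valve (action 1) is open."""

    def __init__(self, n_zones=4, setpoint=21.0, price=0.2):
        self.n_zones, self.setpoint, self.price = n_zones, setpoint, price

    def reset(self):
        self.temps = [19.0] * self.n_zones
        return self._obs()

    def step(self, actions):
        for i, a in enumerate(actions):
            self.temps[i] += 0.5 * a - 0.01 * (self.temps[i] - 10.0)
        # General-sum setting: each agent receives its own reward, trading
        # off comfort (distance to setpoint) against heating cost.
        rewards = [-abs(t - self.setpoint) - self.price * a
                   for t, a in zip(self.temps, actions)]
        return self._obs(), rewards

    def _obs(self):
        return tuple(round(t) for t in self.temps)  # coarse discretization

def run_episode(agents, env, steps=500):
    obs = env.reset()
    for _ in range(steps):
        states, actions = [], []
        for agent, o in zip(agents, obs):
            # Only relevant information is shared: an agent sees its own
            # zone temperature plus the actions already chosen upstream.
            state = (o, tuple(actions))
            a = agent.act(state)
            states.append(state)
            actions.append(a)
        obs, rewards = env.step(actions)
        for agent, s, a, r, o2 in zip(agents, states, actions, rewards, obs):
            # Simplification: the next state is formed before upstream
            # agents have acted again (empty upstream-action tuple).
            agent.update(s, a, r, (o2, ()))
    return rewards

env = ToyUFHEnv()
agents = [ZoneAgent() for _ in range(env.n_zones)]
for episode in range(50):
    last_rewards = run_episode(agents, env)
print("final per-zone rewards:", last_rewards)

The sequential action sharing is the point of the sketch: because agent i's state includes the actions of agents 1..i-1, each agent can condition on upstream decisions without observing the full joint state, which is what keeps the state-action space, and hence the training time, small relative to a single monolithic agent.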

Suggested Citation

  • Christian Blad & Simon Bøgh & Carsten Kallesøe, 2021. "A Multi-Agent Reinforcement Learning Approach to Price and Comfort Optimization in HVAC-Systems," Energies, MDPI, vol. 14(22), pages 1-20, November.
  • Handle: RePEc:gam:jeners:v:14:y:2021:i:22:p:7491-:d:675495

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/14/22/7491/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/14/22/7491/
    Download Restriction: no

    References listed on IDEAS

    1. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    3. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Omar al-Ani & Sanjoy Das & Hongyu Wu, 2023. "Imitation Learning with Deep Attentive Tabular Neural Networks for Environmental Prediction and Control in Smart Home," Energies, MDPI, vol. 16(13), pages 1-19, June.
    2. Blad, Christian & Bøgh, Simon & Kallesøe, Carsten Skovmose, 2022. "Data-driven Offline Reinforcement Learning for HVAC-systems," Energy, Elsevier, vol. 261(PB).
    3. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    3. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    4. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    5. Arroyo, Javier & Manna, Carlo & Spiessens, Fred & Helsen, Lieve, 2022. "Reinforced model predictive control (RL-MPC) for building energy management," Applied Energy, Elsevier, vol. 309(C).
    6. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    7. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    8. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    9. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    10. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    11. Rae-Jun Park & Kyung-Bin Song & Bo-Sung Kwon, 2020. "Short-Term Load Forecasting Algorithm Using a Similar Day Selection Method Based on Reinforcement Learning," Energies, MDPI, vol. 13(10), pages 1-19, May.
    12. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    13. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    14. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    15. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    16. Esmaeili Aliabadi, Danial & Chan, Katrina, 2022. "The emerging threat of artificial intelligence on competition in liberalized electricity markets: A deep Q-network approach," Applied Energy, Elsevier, vol. 325(C).
    17. Blad, Christian & Bøgh, Simon & Kallesøe, Carsten Skovmose, 2022. "Data-driven Offline Reinforcement Learning for HVAC-systems," Energy, Elsevier, vol. 261(PB).
    18. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    19. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    20. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:14:y:2021:i:22:p:7491-:d:675495. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.