
Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control

Author

Listed:
  • Silvestri, Alberto
  • Coraci, Davide
  • Brandi, Silvio
  • Capozzoli, Alfonso
  • Borkowski, Esther
  • Köhler, Johannes
  • Wu, Duan
  • Zeilinger, Melanie N.
  • Schlueter, Arno

Abstract

Deep Reinforcement Learning (DRL) has emerged as a promising approach to address the trade-off between energy efficiency and indoor comfort in buildings, potentially outperforming conventional Rule-Based Controllers (RBC). This paper explores the real-world application of a Soft Actor-Critic (SAC) DRL controller to a building’s Thermally Activated Building System (TABS), focusing on optimising energy consumption while maintaining comfortable indoor temperatures. Our approach pre-trains the DRL agent on a simplified Resistance-Capacitance (RC) model calibrated with real building data. The study first benchmarks the DRL controller in a simulated environment against three RBCs, two Proportional-Integral (PI) controllers, and a Model Predictive Controller (MPC). In simulation, DRL reduces energy consumption by 15% to 50% and decreases temperature violations by 25% compared to the RBCs, and reduces energy consumption and temperature violations by 23% and 5%, respectively, compared to the PI controllers. DRL also achieves temperature control comparable to that of an ideal MPC, while consuming 29% more energy. When implemented in a real building over a two-month cooling season, the DRL controller was compared with the best-performing RBC and improved indoor temperature control by 68% without increasing energy consumption. This research demonstrates an effective strategy for training and deploying DRL controllers in real building energy systems, highlighting the potential of DRL in practical energy management applications.
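The pre-training step described in the abstract uses a simplified Resistance-Capacitance (RC) thermal model as the simulation environment. As a rough illustration only (the function name, parameter values, and time step below are hypothetical, not the calibrated values from the paper), a first-order RC model can be stepped forward like this:

```python
# Minimal sketch of a first-order RC building thermal model of the kind
# used to pre-train a DRL agent; R and C values are illustrative.

def rc_step(T_in, T_out, q_hvac, dt=900.0, R=0.005, C=2.0e6):
    """Advance indoor temperature one time step.

    Dynamics: dT_in/dt = (T_out - T_in) / (R * C) + q_hvac / C
      T_in   : indoor temperature [degC]
      T_out  : outdoor temperature [degC]
      q_hvac : TABS heat input [W] (negative = cooling)
      dt     : time step [s], R : thermal resistance [K/W], C : capacitance [J/K]
    """
    dT = ((T_out - T_in) / (R * C) + q_hvac / C) * dt
    return T_in + dT

# Example: one hour of -2 kW cooling with 30 degC outdoors,
# integrated as four 15-minute steps.
T = 26.0
for _ in range(4):
    T = rc_step(T, 30.0, -2000.0)
```

In a pre-training setup along these lines, the RC step function plays the role of the environment transition, with the SAC agent choosing `q_hvac` and receiving a reward that penalises energy use and temperature violations.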

Suggested Citation

  • Silvestri, Alberto & Coraci, Davide & Brandi, Silvio & Capozzoli, Alfonso & Borkowski, Esther & Köhler, Johannes & Wu, Duan & Zeilinger, Melanie N. & Schlueter, Arno, 2024. "Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control," Applied Energy, Elsevier, vol. 368(C).
  • Handle: RePEc:eee:appene:v:368:y:2024:i:c:s0306261924008304
    DOI: 10.1016/j.apenergy.2024.123447

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924008304
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123447?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Gianluca Serale & Massimo Fiorentini & Alfonso Capozzoli & Daniele Bernardini & Alberto Bemporad, 2018. "Model Predictive Control (MPC) for Enhancing Building and HVAC System Energy Efficiency: Problem Formulation, Applications and Opportunities," Energies, MDPI, vol. 11(3), pages 1-35, March.
    2. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    3. Di Natale, L. & Svetozarevic, B. & Heer, P. & Jones, C.N., 2022. "Physically Consistent Neural Networks for building thermal modeling: Theory and analysis," Applied Energy, Elsevier, vol. 325(C).
    4. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    5. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    6. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    7. Lei, Yue & Zhan, Sicheng & Ono, Eikichi & Peng, Yuzhen & Zhang, Zhiang & Hasama, Takamasa & Chong, Adrian, 2022. "A practical deep reinforcement learning framework for multivariate occupant-centric control in buildings," Applied Energy, Elsevier, vol. 324(C).
    8. Yang, Lei & Nagy, Zoltan & Goffin, Philippe & Schlueter, Arno, 2015. "Reinforcement learning for optimal control of low exergy buildings," Applied Energy, Elsevier, vol. 156(C), pages 577-586.
    9. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    10. Martinopoulos, Georgios & Papakostas, Konstantinos T. & Papadopoulos, Agis M., 2018. "A comparative review of heating systems in EU countries, based on efficiency and fuel cost," Renewable and Sustainable Energy Reviews, Elsevier, vol. 90(C), pages 687-699.
    11. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    12. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    4. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    5. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    6. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    7. Clara Ceccolini & Roozbeh Sangi, 2022. "Benchmarking Approaches for Assessing the Performance of Building Control Strategies: A Review," Energies, MDPI, vol. 15(4), pages 1-30, February.
    8. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    9. Xu, Wenjie & Svetozarevic, Bratislav & Di Natale, Loris & Heer, Philipp & Jones, Colin N., 2024. "Data-driven adaptive building thermal controller tuning with constraints: A primal–dual contextual Bayesian optimization approach," Applied Energy, Elsevier, vol. 358(C).
    10. Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.
    11. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    12. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    13. Charalampos Rafail Lazaridis & Iakovos Michailidis & Georgios Karatzinis & Panagiotis Michailidis & Elias Kosmatopoulos, 2024. "Evaluating Reinforcement Learning Algorithms in Residential Energy Saving and Comfort Management," Energies, MDPI, vol. 17(3), pages 1-33, January.
    14. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    15. Song, Yuguang & Xia, Mingchao & Chen, Qifang & Chen, Fangjian, 2023. "A data-model fusion dispatch strategy for the building energy flexibility based on the digital twin," Applied Energy, Elsevier, vol. 332(C).
    16. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    17. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    18. Lei, Yue & Zhan, Sicheng & Ono, Eikichi & Peng, Yuzhen & Zhang, Zhiang & Hasama, Takamasa & Chong, Adrian, 2022. "A practical deep reinforcement learning framework for multivariate occupant-centric control in buildings," Applied Energy, Elsevier, vol. 324(C).
    19. Seppo Sierla & Heikki Ihasalo & Valeriy Vyatkin, 2022. "A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems," Energies, MDPI, vol. 15(10), pages 1-25, May.
    20. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:368:y:2024:i:c:s0306261924008304. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.