Printed from https://ideas.repec.org/a/eee/appene/v371y2024ics0306261924010717.html

Forecast-based and data-driven reinforcement learning for residential heat pump operation

Author

Listed:
  • Schmitz, Simon
  • Brucke, Karoline
  • Kasturi, Pranay
  • Ansari, Esmail
  • Klement, Peter

Abstract

Electrified residential heating systems have great potential to provide flexibility to the electricity grid, and advanced operation control mechanisms are able to harness it. In this work, we therefore apply a reinforcement learning (RL) approach to the operation of a residential heat pump in a simulation study and compare the results with a classical rule-based approach. We consider an apartment complex with 100 living units and a central heat pump, along with a central hot water tank serving as heat storage. Unlike other studies in the field, we focus on a data-driven approach in which no building model is required and the living comfort of the residents is never compromised. Both factors maximize the applicability to real-world buildings. Additionally, we examine the effects of uncertainty on the heat pump operation by testing four different observation spaces, each with different data visibility and availability to the RL agent. With that we also simulate the heat pump operation under forecast conditions, which to the best of our knowledge has not been done before. We find that the inertia of typical residential heating systems is high enough that missing or uncertain information has only a minor effect on the operation. Compared to the rule-based approach, all RL agents are able to exploit variable electricity prices and the flexibility of the heat storage in such a way that electricity costs and energy consumption are significantly reduced. Additionally, a large proportion of the nominal electrical power of the installed heat pump could be saved with the presented intelligent operation. The robustness of the approach is shown by running ten independent training and testing cycles for all setups with reproducible results.
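The control problem the abstract describes — a heat pump charging a central hot water tank, an RL or rule-based policy choosing electrical power under variable prices, and comfort never compromised — can be illustrated with a toy simulation. The sketch below is not the paper's model: the tank capacity, demand, COP, and price profile are all illustrative assumptions, and the "price-aware" policy merely stands in for what a trained RL agent might learn.

```python
class TankEnv:
    """Toy model of a central hot water tank charged by a heat pump.

    All parameters (500 kWh tank, 20 kWh/h demand, COP 3.5, two-level
    price profile) are illustrative assumptions, not values from the paper.
    """

    def __init__(self, capacity_kwh=500.0, demand_kwh=20.0, cop=3.5):
        self.capacity = capacity_kwh   # thermal storage capacity (kWh_th)
        self.demand = demand_kwh       # heat drawn per hour by residents
        self.cop = cop                 # coefficient of performance
        self.reset()

    def reset(self):
        self.soc = 0.5 * self.capacity  # state of charge (kWh_th)
        self.hour = 0
        return self._obs()

    def _price(self):
        # toy day-ahead price: cheap at night, expensive during the day
        return 0.30 if 8 <= self.hour % 24 <= 20 else 0.10

    def _obs(self):
        # observation: tank fill level and current electricity price
        return (self.soc / self.capacity, self._price())

    def step(self, power_kw):
        """Run one hour with the given electrical heat pump power."""
        heat_in = power_kw * self.cop
        cost = power_kw * self._price()
        self.soc = min(self.capacity, self.soc + heat_in) - self.demand
        done = self.soc < 0  # an empty tank would compromise comfort
        self.hour += 1
        return self._obs(), -cost, done


def rule_based(obs):
    """Hysteresis controller: heat whenever the tank is below 60 % full."""
    soc_frac, _ = obs
    return 10.0 if soc_frac < 0.6 else 0.0


def price_aware(obs):
    """Price-responsive policy: charge hard while electricity is cheap."""
    soc_frac, price = obs
    if price < 0.2:
        return 20.0 if soc_frac < 0.9 else 0.0
    return 10.0 if soc_frac < 0.3 else 0.0


def rollout(policy, hours=168):
    """Total electricity cost of running a policy for one simulated week."""
    env = TankEnv()
    obs, total_cost = env.reset(), 0.0
    for _ in range(hours):
        obs, reward, done = env.step(policy(obs))
        total_cost -= reward
        if done:
            break
    return total_cost
```

Running both policies for a simulated week shows the effect the abstract reports: the price-aware policy shifts charging into cheap hours and meets the same heat demand at lower cost, while neither policy ever lets the tank run empty.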

Suggested Citation

  • Schmitz, Simon & Brucke, Karoline & Kasturi, Pranay & Ansari, Esmail & Klement, Peter, 2024. "Forecast-based and data-driven reinforcement learning for residential heat pump operation," Applied Energy, Elsevier, vol. 371(C).
  • Handle: RePEc:eee:appene:v:371:y:2024:i:c:s0306261924010717
    DOI: 10.1016/j.apenergy.2024.123688

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924010717
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123688?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Xue, Puning & Jiang, Yi & Zhou, Zhigang & Chen, Xin & Fang, Xiumu & Liu, Jing, 2019. "Multi-step ahead forecasting of heat load in district heating systems using machine learning algorithms," Energy, Elsevier, vol. 188(C).
    2. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    3. Yang, Lei & Nagy, Zoltan & Goffin, Philippe & Schlueter, Arno, 2015. "Reinforcement learning for optimal control of low exergy buildings," Applied Energy, Elsevier, vol. 156(C), pages 577-586.
    4. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    5. Frederik Ruelens & Sandro Iacovella & Bert J. Claessens & Ronnie Belmans, 2015. "Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning," Energies, MDPI, vol. 8(8), pages 1-19, August.
    6. Schmeling, Lucas & Schönfeldt, Patrik & Klement, Peter & Vorspel, Lena & Hanke, Benedikt & von Maydell, Karsten & Agert, Carsten, 2022. "A generalised optimal design methodology for distributed energy systems," Renewable Energy, Elsevier, vol. 200(C), pages 1223-1239.
    7. Noye, Sarah & Mulero Martinez, Rubén & Carnieletto, Laura & De Carli, Michele & Castelruiz Aguirre, Amaia, 2022. "A review of advanced ground source heat pump control: Artificial intelligence for autonomous and adaptive control," Renewable and Sustainable Energy Reviews, Elsevier, vol. 153(C).
    8. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    9. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    10. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    3. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    4. Wang, Xuezheng & Dong, Bing, 2024. "Long-term experimental evaluation and comparison of advanced controls for HVAC systems," Applied Energy, Elsevier, vol. 371(C).
    5. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    6. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    7. Silvestri, Alberto & Coraci, Davide & Brandi, Silvio & Capozzoli, Alfonso & Borkowski, Esther & Köhler, Johannes & Wu, Duan & Zeilinger, Melanie N. & Schlueter, Arno, 2024. "Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control," Applied Energy, Elsevier, vol. 368(C).
    8. Chen, Minghao & Xie, Zhiyuan & Sun, Yi & Zheng, Shunlin, 2023. "The predictive management in campus heating system based on deep reinforcement learning and probabilistic heat demands forecasting," Applied Energy, Elsevier, vol. 350(C).
    9. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    10. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    11. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    12. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    13. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    14. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    15. Clara Ceccolini & Roozbeh Sangi, 2022. "Benchmarking Approaches for Assessing the Performance of Building Control Strategies: A Review," Energies, MDPI, vol. 15(4), pages 1-30, February.
    16. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    17. Michael Bachseitz & Muhammad Sheryar & David Schmitt & Thorsten Summ & Christoph Trinkl & Wilfried Zörner, 2024. "PV-Optimized Heat Pump Control in Multi-Family Buildings Using a Reinforcement Learning Approach," Energies, MDPI, vol. 17(8), pages 1-16, April.
    18. Noye, Sarah & Mulero Martinez, Rubén & Carnieletto, Laura & De Carli, Michele & Castelruiz Aguirre, Amaia, 2022. "A review of advanced ground source heat pump control: Artificial intelligence for autonomous and adaptive control," Renewable and Sustainable Energy Reviews, Elsevier, vol. 153(C).
    19. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    20. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:371:y:2024:i:c:s0306261924010717. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.