Printed from https://ideas.repec.org/a/gam/jeners/v17y2024i8p1908-d1377256.html

PV-Optimized Heat Pump Control in Multi-Family Buildings Using a Reinforcement Learning Approach

Authors
  • Michael Bachseitz
  • Muhammad Sheryar
  • David Schmitt
  • Thorsten Summ
  • Christoph Trinkl
  • Wilfried Zörner

    (all authors: Working Group Energy for Buildings & Settlements in the National/European Context, Institute of new Energy Systems (InES), Technische Hochschule Ingolstadt, Esplanade 10, 85049 Ingolstadt, Germany)

Abstract

For the energy transition in the residential sector, heat pumps are a core technology for decarbonizing thermal energy production for space heating and domestic hot water. Electricity generation from on-site photovoltaic (PV) systems can also contribute to a carbon-neutral building stock. However, both technologies increase the stress on the electricity grid, which can be reduced by control strategies that match electricity consumption to on-site production. In recent years, artificial-intelligence-based approaches such as reinforcement learning (RL) have become increasingly popular for energy-system management. However, the literature lacks investigations of RL-based controllers for multi-family building energy systems comprising an air-source heat pump, thermal storage, and a PV system, although this is a common system configuration. In this study, therefore, a physical model of such an energy system was developed, and RL-based controllers were trained in simulation and compared with conventional rule-based approaches. Four RL algorithms were investigated for two objectives, and the soft actor–critic (SAC) algorithm was selected for the annual simulations. The developed RL agent achieved the first objective, maintaining the required temperatures in the thermal storage. The second objective, additionally improving PV self-consumption, was better achieved by the rule-based controller. Further research is therefore suggested on the reward function, the hyperparameters, and advanced methods such as long short-term memory (LSTM) layers, as well as on training over periods longer than six days.
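The rule-based baseline the RL agent is compared against can be pictured as a hysteresis (two-point) controller whose storage set points are raised when surplus PV power is available, so the thermal storage absorbs excess generation. The sketch below is an illustrative assumption: the class name, set-point temperatures, and surplus threshold are hypothetical values, not parameters from the paper.

```python
class RuleBasedController:
    """Hysteresis heat-pump control with a PV-surplus boost (illustrative sketch).

    Without PV surplus: the heat pump switches on when the storage temperature
    falls below `t_low` and off once `t_high` is reached.  With PV surplus,
    both set points are raised, so the pump runs eagerly and charges the
    thermal storage with excess PV generation (improving self-consumption).
    """

    def __init__(self, t_low=45.0, t_high=50.0, t_high_pv=55.0,
                 pv_surplus_threshold=2.0):
        self.t_low = t_low                              # °C, normal switch-on point
        self.t_high = t_high                            # °C, normal switch-off point
        self.t_high_pv = t_high_pv                      # °C, raised switch-off on PV surplus
        self.pv_surplus_threshold = pv_surplus_threshold  # kW of surplus that triggers the boost
        self.on = False                                 # current heat-pump state

    def step(self, storage_temp, pv_surplus_kw):
        """Return True if the heat pump should run in this time step."""
        pv_boost = pv_surplus_kw >= self.pv_surplus_threshold
        t_on = self.t_high if pv_boost else self.t_low      # charge eagerly on surplus
        t_off = self.t_high_pv if pv_boost else self.t_high
        if storage_temp <= t_on:
            self.on = True                 # storage below switch-on point: heat
        elif storage_temp >= t_off:
            self.on = False                # switch-off point reached: stop
        # between the set points the previous state is kept (hysteresis)
        return self.on
```

The deadband between the set points avoids rapid on/off cycling of the compressor; raising both thresholds during PV surplus is one simple way to shift heat-pump electricity demand into hours of on-site generation.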

Suggested Citation

  • Michael Bachseitz & Muhammad Sheryar & David Schmitt & Thorsten Summ & Christoph Trinkl & Wilfried Zörner, 2024. "PV-Optimized Heat Pump Control in Multi-Family Buildings Using a Reinforcement Learning Approach," Energies, MDPI, vol. 17(8), pages 1-16, April.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:8:p:1908-:d:1377256

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/8/1908/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/8/1908/
    Download Restriction: no

    References listed on IDEAS

    1. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    2. Kuboth, Sebastian & Heberle, Florian & König-Haagen, Andreas & Brüggemann, Dieter, 2019. "Economic model predictive control of combined thermal and electric residential building energy systems," Applied Energy, Elsevier, vol. 240(C), pages 372-385.
    3. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    4. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Guo, Yurun & Wang, Shugang & Wang, Jihong & Zhang, Tengfei & Ma, Zhenjun & Jiang, Shuang, 2024. "Key district heating technologies for building energy flexibility: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 189(PB).
    2. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    3. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    6. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    7. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    8. Hindreen Rashid Abdulqadir & Adnan Mohsin Abdulazeez, 2021. "Reinforcement Learning and Modeling Techniques: A Review," International Journal of Science and Business, IJSAB International, vol. 5(3), pages 174-189.
    9. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    10. Davide Fop & Ali Reza Yaghoubi & Alfonso Capozzoli, 2024. "Validation of a Model Predictive Control Strategy on a High Fidelity Building Emulator," Energies, MDPI, vol. 17(20), pages 1-20, October.
    11. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    12. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    13. Clara Ceccolini & Roozbeh Sangi, 2022. "Benchmarking Approaches for Assessing the Performance of Building Control Strategies: A Review," Energies, MDPI, vol. 15(4), pages 1-30, February.
    14. Lilia Tightiz & Joon Yoo, 2022. "A Review on a Data-Driven Microgrid Management System Integrating an Active Distribution Network: Challenges, Issues, and New Trends," Energies, MDPI, vol. 15(22), pages 1-24, November.
    15. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    16. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    17. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    18. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    19. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    20. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.