Printed from https://ideas.repec.org/a/eee/appene/v377y2025ipas0306261924018506.html

Deep reinforcement learning control for co-optimizing energy consumption, thermal comfort, and indoor air quality in an office building

Author

Listed:
  • Guo, Fangzhou
  • Ham, Sang woo
  • Kim, Donghun
  • Moon, Hyeun Jun

Abstract

With the recent demand for decarbonization and energy efficiency, advanced HVAC control using Deep Reinforcement Learning (DRL) has become a promising solution. Owing to its flexible structure, DRL has successfully reduced energy use in many HVAC systems. However, only a few studies have applied DRL agents to manage an entire central HVAC system and control multiple components in both the water loop and the air loop, owing to the complexity of such systems. Moreover, those studies have not extended their applications to indoor air quality, particularly both CO2 and PM2.5 concentrations, on top of energy saving and thermal comfort, as pursuing these objectives simultaneously can cause multiple control conflicts. Furthermore, DRL agents are usually trained in a simulation environment before deployment, so another challenge is to develop an accurate yet relatively simple simulator. Therefore, we propose a DRL algorithm for a central HVAC system that co-optimizes energy consumption, thermal comfort, indoor CO2 level, and indoor PM2.5 level in an office building. To train the controller, we also developed a hybrid simulator that decouples the complex system into multiple simulation models, which are calibrated separately using laboratory test data. The hybrid simulator combines the dynamics of the HVAC system and the building envelope, as well as moisture, CO2, and particulate matter transfer. Three control algorithms (rule-based, MPC, and DRL) are developed, and their performance is evaluated in the hybrid simulator environment under a realistic scenario (i.e., with stochastic noise). The test results showed that the DRL controller saves 21.4 % of energy compared to the rule-based controller while improving thermal comfort and reducing indoor CO2 concentration. The MPC controller showed an 18.6 % energy saving compared to the DRL controller, mainly due to savings from comfort and indoor air quality boundary violations caused by unmeasured disturbances, and it also highlights computational challenges in real-time control due to non-linear optimization. Finally, we provide practical considerations for designing and implementing the DRL and MPC controllers based on their respective pros and cons.
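The abstract describes co-optimizing energy, thermal comfort, CO2, and PM2.5, which in a DRL setting is typically encoded as a weighted multi-objective reward. The sketch below is purely illustrative of that idea and is not the authors' implementation; all function names, weights, and comfort/air-quality bounds are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' method): a weighted multi-objective
# reward of the kind a DRL HVAC agent might be trained against, combining
# energy use, thermal comfort, CO2, and PM2.5 penalty terms.
# All weights and limits are hypothetical placeholders.

def hvac_reward(energy_kwh, temp_c, co2_ppm, pm25_ugm3,
                w_energy=1.0, w_comfort=2.0, w_co2=1.5, w_pm25=1.5,
                comfort_band=(21.0, 24.0), co2_limit=1000.0, pm25_limit=15.0):
    """Return a scalar reward for one control step; higher is better."""
    # Energy term: penalize consumption over the control step.
    r_energy = -w_energy * energy_kwh

    # Comfort term: penalize distance outside the temperature band.
    low, high = comfort_band
    comfort_violation = max(low - temp_c, 0.0) + max(temp_c - high, 0.0)
    r_comfort = -w_comfort * comfort_violation

    # Air-quality terms: penalize normalized exceedance of concentration limits.
    r_co2 = -w_co2 * max(co2_ppm - co2_limit, 0.0) / co2_limit
    r_pm25 = -w_pm25 * max(pm25_ugm3 - pm25_limit, 0.0) / pm25_limit

    return r_energy + r_comfort + r_co2 + r_pm25
```

Because the objectives conflict (e.g. more ventilation lowers CO2 but raises energy use), the relative weights determine which trade-offs the trained agent makes.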

Suggested Citation

  • Guo, Fangzhou & Ham, Sang woo & Kim, Donghun & Moon, Hyeun Jun, 2025. "Deep reinforcement learning control for co-optimizing energy consumption, thermal comfort, and indoor air quality in an office building," Applied Energy, Elsevier, vol. 377(PA).
  • Handle: RePEc:eee:appene:v:377:y:2025:i:pa:s0306261924018506
    DOI: 10.1016/j.apenergy.2024.124467

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924018506
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.124467?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    3. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    4. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    5. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    6. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    7. Liu, Shuo & Liu, Xiaohua & Zhang, Tao & Wang, Chaoliang & Liu, Wei, 2024. "Joint optimization for temperature and humidity independent control system based on multi-agent reinforcement learning with cooperative mechanisms," Applied Energy, Elsevier, vol. 375(C).
    8. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    9. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    10. Guo, Yuxiang & Qu, Shengli & Wang, Chuang & Xing, Ziwen & Duan, Kaiwen, 2024. "Optimal dynamic thermal management for data center via soft actor-critic algorithm with dynamic control interval and combined-value state space," Applied Energy, Elsevier, vol. 373(C).
    11. Fang, Xi & Gong, Guangcai & Li, Guannan & Chun, Liang & Peng, Pei & Li, Wenqiang & Shi, Xing, 2023. "Cross temporal-spatial transferability investigation of deep reinforcement learning control strategy in the building HVAC system level," Energy, Elsevier, vol. 263(PB).
    12. Clara Ceccolini & Roozbeh Sangi, 2022. "Benchmarking Approaches for Assessing the Performance of Building Control Strategies: A Review," Energies, MDPI, vol. 15(4), pages 1-30, February.
    13. Zhou, Xinlei & Du, Han & Xue, Shan & Ma, Zhenjun, 2024. "Recent advances in data mining and machine learning for enhanced building energy management," Energy, Elsevier, vol. 307(C).
    14. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    15. Wang, Zixuan & Xiao, Fu & Ran, Yi & Li, Yanxue & Xu, Yang, 2024. "Scalable energy management approach of residential hybrid energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 367(C).
    16. Homod, Raad Z. & Mohammed, Hayder Ibrahim & Abderrahmane, Aissa & Alawi, Omer A. & Khalaf, Osamah Ibrahim & Mahdi, Jasim M. & Guedri, Kamel & Dhaidan, Nabeel S. & Albahri, A.S. & Sadeq, Abdellatif M. , 2023. "Deep clustering of Lagrangian trajectory for multi-task learning to energy saving in intelligent buildings using cooperative multi-agent," Applied Energy, Elsevier, vol. 351(C).
    17. Jiang, Yuliang & Zhu, Shanying & Xu, Qimin & Yang, Bo & Guan, Xinping, 2023. "Hybrid modeling-based temperature and humidity adaptive control for a multi-zone HVAC system," Applied Energy, Elsevier, vol. 334(C).
    18. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    19. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    20. Qin, Haosen & Meng, Tao & Chen, Kan & Li, Zhengwei, 2024. "A comparative study of DQN and D3QN for HVAC system optimization control," Energy, Elsevier, vol. 307(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:377:y:2025:i:pa:s0306261924018506. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.