
A comparative study of DQN and D3QN for HVAC system optimization control

Author

Listed:
  • Qin, Haosen
  • Meng, Tao
  • Chen, Kan
  • Li, Zhengwei

Abstract

Ensuring the optimal performance of Heating, Ventilation, and Air Conditioning (HVAC) systems is paramount for achieving energy efficiency. This paper investigates the application of deep reinforcement learning algorithms to HVAC system control, aiming to identify the Q-network structure best suited to HVAC optimization and to compare the performance of the Deep Q-Network (DQN) and Double Dueling Deep Q-Network (D3QN) algorithms. The paper first reviews and analyses the existing literature to derive a normalization treatment for the state space. Through systematic simulation and rigorous data analysis, the impact of the Q-network structure on the efficacy of the DQN and D3QN algorithms is evaluated, leading to proposed configurations for the Q-network structures of both algorithms. The two algorithms are then compared in terms of optimization effectiveness, stability, and reliability across diverse engineering projects. The results show that the D3QN algorithm outperforms the DQN algorithm in both optimization effectiveness and stability across all evaluated projects. The proposed efficient Q-network structure comprises two hidden layers with 64 and 12 neurons, respectively. These findings provide insights for HVAC control optimization via reinforcement learning and pave the way for further research and applications.
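
The abstract's headline result, that a two-hidden-layer Q-network with 64 and 12 neurons works well for both algorithms, maps directly onto code. Below is a minimal PyTorch sketch (not the authors' implementation) of that structure as a plain DQN head and as the dueling head used by D3QN, plus the double-Q target that completes D3QN; the state and action dimensions are hypothetical placeholders.

    import torch
    import torch.nn as nn

    STATE_DIM = 6   # hypothetical: e.g. normalized zone temperatures, loads, setpoints
    N_ACTIONS = 5   # hypothetical: a discretized set of HVAC control actions

    class DQNNet(nn.Module):
        """Plain DQN head: 64- and 12-neuron hidden layers -> one Q-value per action."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 12), nn.ReLU(),
                nn.Linear(12, N_ACTIONS),
            )

        def forward(self, s):
            return self.net(s)

    class D3QNNet(nn.Module):
        """Dueling head (the 'dueling' in D3QN): a shared 64/12 body feeds
        separate state-value V(s) and advantage A(s, a) streams, recombined
        as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 12), nn.ReLU(),
            )
            self.value = nn.Linear(12, 1)
            self.advantage = nn.Linear(12, N_ACTIONS)

        def forward(self, s):
            h = self.body(s)
            a = self.advantage(h)
            return self.value(h) + a - a.mean(dim=-1, keepdim=True)

    def double_q_target(online, target, reward, s_next, gamma=0.99):
        """Double-DQN target (the 'double' in D3QN): the online network picks
        the greedy next action; the frozen target network evaluates it."""
        with torch.no_grad():
            a_star = online(s_next).argmax(dim=-1, keepdim=True)
            q_next = target(s_next).gather(-1, a_star).squeeze(-1)
        return reward + gamma * q_next

Separating V(s) from A(s, a) lets the network estimate how good a state is without committing to a per-action ranking, which is consistent with the stability advantage the paper reports for D3QN.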

Suggested Citation

  • Qin, Haosen & Meng, Tao & Chen, Kan & Li, Zhengwei, 2024. "A comparative study of DQN and D3QN for HVAC system optimization control," Energy, Elsevier, vol. 307(C).
  • Handle: RePEc:eee:energy:v:307:y:2024:i:c:s0360544224025143
    DOI: 10.1016/j.energy.2024.132740

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544224025143
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2024.132740?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Blad, Christian & Bøgh, Simon & Kallesøe, Carsten Skovmose, 2022. "Data-driven Offline Reinforcement Learning for HVAC-systems," Energy, Elsevier, vol. 261(PB).
    2. Guo, Xiaokai & Yan, Xianguo & Chen, Zhi & Meng, Zhiyu, 2022. "Research on energy management strategy of heavy-duty fuel cell hybrid vehicles based on dueling-double-deep Q-network," Energy, Elsevier, vol. 260(C).
    3. Liu, Teng & Wang, Bo & Yang, Chenglang, 2018. "Online Markov Chain-based energy management for a hybrid tracked vehicle with speedy Q-learning," Energy, Elsevier, vol. 160(C), pages 544-555.
    4. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    5. Liu, Xiangfei & Ren, Mifeng & Yang, Zhile & Yan, Gaowei & Guo, Yuanjun & Cheng, Lan & Wu, Chengke, 2022. "A multi-step predictive deep reinforcement learning algorithm for HVAC control systems in smart buildings," Energy, Elsevier, vol. 259(C).
    6. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    7. Kusiak, Andrew & Li, Mingyang & Tang, Fan, 2010. "Modeling and optimization of HVAC energy consumption," Applied Energy, Elsevier, vol. 87(10), pages 3092-3102, October.
    8. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    2. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    3. Homod, Raad Z. & Mohammed, Hayder Ibrahim & Abderrahmane, Aissa & Alawi, Omer A. & Khalaf, Osamah Ibrahim & Mahdi, Jasim M. & Guedri, Kamel & Dhaidan, Nabeel S. & Albahri, A.S. & Sadeq, Abdellatif M. , 2023. "Deep clustering of Lagrangian trajectory for multi-task learning to energy saving in intelligent buildings using cooperative multi-agent," Applied Energy, Elsevier, vol. 351(C).
    4. Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    7. Guo, Yuxiang & Qu, Shengli & Wang, Chuang & Xing, Ziwen & Duan, Kaiwen, 2024. "Optimal dynamic thermal management for data center via soft actor-critic algorithm with dynamic control interval and combined-value state space," Applied Energy, Elsevier, vol. 373(C).
    8. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    9. Cui, Can & Xue, Jing, 2024. "Energy and comfort aware operation of multi-zone HVAC system through preference-inspired deep reinforcement learning," Energy, Elsevier, vol. 292(C).
    10. Qin, Haosen & Yu, Zhen & Li, Tailu & Liu, Xueliang & Li, Li, 2023. "Energy-efficient heating control for nearly zero energy residential buildings with deep reinforcement learning," Energy, Elsevier, vol. 264(C).
    11. Park, Jong-Whi & Ju, Young-Min & Kim, You-Gwon & Kim, Hak-Sung, 2023. "50% reduction in energy consumption in an actual cold storage facility using a deep reinforcement learning-based control algorithm," Applied Energy, Elsevier, vol. 352(C).
    12. Wang, Zixuan & Xiao, Fu & Ran, Yi & Li, Yanxue & Xu, Yang, 2024. "Scalable energy management approach of residential hybrid energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 367(C).
    13. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    14. Nyong-Bassey, Bassey Etim & Giaouris, Damian & Patsios, Charalampos & Papadopoulou, Simira & Papadopoulos, Athanasios I. & Walker, Sara & Voutetakis, Spyros & Seferlis, Panos & Gadoue, Shady, 2020. "Reinforcement learning based adaptive power pinch analysis for energy management of stand-alone hybrid energy storage systems considering uncertainty," Energy, Elsevier, vol. 193(C).
    15. Rongjiang Ma & Xianlin Wang & Ming Shan & Nanyang Yu & Shen Yang, 2020. "Recognition of Variable-Speed Equipment in an Air-Conditioning System Using Numerical Analysis of Energy-Consumption Data," Energies, MDPI, vol. 13(18), pages 1-14, September.
    16. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    17. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    18. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    19. Cui, Can & Zhang, Xin & Cai, Wenjian, 2020. "An energy-saving oriented air balancing method for demand controlled ventilation systems with branch and black-box model," Applied Energy, Elsevier, vol. 264(C).
    20. Du, Guodong & Zou, Yuan & Zhang, Xudong & Kong, Zehui & Wu, Jinlong & He, Dingbo, 2019. "Intelligent energy management for hybrid electric tracked vehicles using online reinforcement learning," Applied Energy, Elsevier, vol. 251(C), pages 1-1.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:307:y:2024:i:c:s0360544224025143. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of the provider: http://www.journals.elsevier.com/energy

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.