Printed from https://ideas.repec.org/a/eee/appene/v333y2023ics0306261922018189.html

Deep reinforcement learning towards real-world dynamic thermal management of data centers

Author

Listed:
  • Zhang, Qingang
  • Zeng, Wei
  • Lin, Qinjie
  • Chng, Chin-Boon
  • Chui, Chee-Kong
  • Lee, Poh-Seng

Abstract

Deep Reinforcement Learning has been increasingly researched for Dynamic Thermal Management in Data Centers. However, existing works typically evaluate algorithm performance on a specific task, using models or data trajectories, without discussing in detail their implementation feasibility or their ability to handle diverse work scenarios. This gap limits the real-world deployment of Deep Reinforcement Learning. To this end, this paper comprehensively evaluates the strengths and limitations of state-of-the-art algorithms through analytical and numerical studies. The analysis spans four dimensions: algorithms, tasks, system dynamics, and knowledge transfer. The sensitivity to algorithm settings, an inherent property, is first evaluated in a simulated data center model. The ability to handle various tasks and the sensitivity to reward functions are subsequently studied. The trade-off between constraints and power savings is identified through ablation experiments. Next, performance under different work scenarios is investigated, including various equipment, workload schedules, locations, and power densities. Finally, the transferability of algorithms across tasks and scenarios is evaluated. The results show that actor-critic, off-policy, and model-based algorithms outperform others in optimality, robustness, and transferability. They can reduce violations and achieve around 8.84% power savings in some scenarios compared with the default controller. However, deploying these algorithms in real-world systems remains challenging because they are sensitive to specific hyperparameters, reward functions, and work scenarios. Constraint violations and sample efficiency also remain unsatisfactory. This paper presents our well-structured investigations, new findings, and the challenges of deploying deep reinforcement learning in Data Centers.
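To make the abstract's setup concrete, the sketch below shows the general shape of a reinforcement-learning control loop for data-center cooling: a state (rack inlet temperature), an action (a cooling setpoint), and a reward that trades cooling power against temperature-constraint violations, which is the trade-off the paper studies. Everything here is hypothetical and heavily simplified (the toy heat balance, the setpoint list, the constraint of 27 °C, and the use of tabular Q-learning instead of the deep actor-critic/off-policy methods the paper evaluates); it is not the authors' simulator or algorithm.

```python
import random

# Hypothetical single-zone thermal model (illustrative only, not the paper's
# simulator). State: rack inlet temperature; action: index of a CRAC
# supply-air setpoint. Lower setpoints cool more but cost more power.
SETPOINTS = [16.0, 18.0, 20.0, 22.0]  # candidate supply-air setpoints (deg C)
T_MAX = 27.0                          # assumed inlet-temperature constraint

def step(inlet_temp, action, it_load=0.6):
    """Crude first-order heat balance plus a power/violation reward."""
    setpoint = SETPOINTS[action]
    next_temp = 0.7 * inlet_temp + 0.3 * (setpoint + 10.0 * it_load)
    power = (24.0 - setpoint) * 0.5           # rough cooling-power proxy
    violation = max(0.0, next_temp - T_MAX)   # constraint overshoot
    reward = -power - 10.0 * violation        # weighted penalty, as in many
    return next_temp, reward                  # DTM reward designs

def bucket(temp):
    """Discretize 20-29 deg C into 10 state bins for the tabular table."""
    return min(9, max(0, int(temp - 20)))

# Tabular Q-learning stands in for the deep RL algorithms to keep the
# sketch dependency-free; the loop structure is the same.
q = [[0.0] * len(SETPOINTS) for _ in range(10)]
random.seed(0)
temp = 25.0
for episode in range(2000):
    s = bucket(temp)
    if random.random() < 0.1:                 # epsilon-greedy exploration
        a = random.randrange(len(SETPOINTS))
    else:
        a = max(range(len(SETPOINTS)), key=lambda i: q[s][i])
    temp, r = step(temp, a)
    s2 = bucket(temp)
    q[s][a] += 0.1 * (r + 0.95 * max(q[s2]) - q[s][a])
```

In this toy setting the reward weighting makes aggressive setpoints cheap until the constraint is violated, at which point the penalty dominates, mirroring the constraint-versus-power-savings trade-off the ablation experiments examine.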

Suggested Citation

  • Zhang, Qingang & Zeng, Wei & Lin, Qinjie & Chng, Chin-Boon & Chui, Chee-Kong & Lee, Poh-Seng, 2023. "Deep reinforcement learning towards real-world dynamic thermal management of data centers," Applied Energy, Elsevier, vol. 333(C).
  • Handle: RePEc:eee:appene:v:333:y:2023:i:c:s0306261922018189
    DOI: 10.1016/j.apenergy.2022.120561

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922018189
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.120561?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    2. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    3. Di Natale, L. & Svetozarevic, B. & Heer, P. & Jones, C.N., 2022. "Physically Consistent Neural Networks for building thermal modeling: Theory and analysis," Applied Energy, Elsevier, vol. 325(C).
    4. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    5. Habibi Khalaj, Ali & Halgamuge, Saman K., 2017. "A Review on efficient thermal management of air- and liquid-cooled data centers: From chip to the cooling system," Applied Energy, Elsevier, vol. 205(C), pages 1165-1188.
    6. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan , 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    7. Afroz, Zakia & Shafiullah, GM & Urmee, Tania & Higgins, Gary, 2018. "Modeling techniques used in building HVAC control systems: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 83(C), pages 64-84.

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Han, Ouzhu & Ding, Tao & Yang, Miao & Jia, Wenhao & He, Xinran & Ma, Zhoujun, 2024. "A novel 4-level joint optimal dispatch for demand response of data centers with district autonomy realization," Applied Energy, Elsevier, vol. 358(C).
    2. Feng, Zhiyan & Zhang, Qingang & Zhang, Yiming & Fei, Liangyu & Jiang, Fei & Zhao, Shengdun, 2024. "Practicability analysis of online deep reinforcement learning towards energy management strategy of 4WD-BEVs driven by dual-motor in-wheel motors," Energy, Elsevier, vol. 290(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Fang, Xi & Gong, Guangcai & Li, Guannan & Chun, Liang & Peng, Pei & Li, Wenqiang & Shi, Xing, 2023. "Cross temporal-spatial transferability investigation of deep reinforcement learning control strategy in the building HVAC system level," Energy, Elsevier, vol. 263(PB).
    2. Di Natale, L. & Svetozarevic, B. & Heer, P. & Jones, C.N., 2022. "Physically Consistent Neural Networks for building thermal modeling: Theory and analysis," Applied Energy, Elsevier, vol. 325(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Liang, Xinbin & Zhu, Xu & Chen, Siliang & Jin, Xinqiao & Xiao, Fu & Du, Zhimin, 2023. "Physics-constrained cooperative learning-based reference models for smart management of chillers considering extrapolation scenarios," Applied Energy, Elsevier, vol. 349(C).
    5. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    6. Di Natale, L. & Svetozarevic, B. & Heer, P. & Jones, C.N., 2023. "Towards scalable physically consistent neural networks: An application to data-driven multi-zone thermal building models," Applied Energy, Elsevier, vol. 340(C).
    7. Xiao, Tianqi & You, Fengqi, 2023. "Building thermal modeling and model predictive control with physically consistent deep learning for decarbonization and energy optimization," Applied Energy, Elsevier, vol. 342(C).
    8. Clara Ceccolini & Roozbeh Sangi, 2022. "Benchmarking Approaches for Assessing the Performance of Building Control Strategies: A Review," Energies, MDPI, vol. 15(4), pages 1-30, February.
    9. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    10. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    11. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    12. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    13. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    14. Eva Andrés & Manuel Pegalajar Cuéllar & Gabriel Navarro, 2022. "On the Use of Quantum Reinforcement Learning in Energy-Efficiency Scenarios," Energies, MDPI, vol. 15(16), pages 1-24, August.
    15. Hu, Guoqing & You, Fengqi, 2024. "AI-enabled cyber-physical-biological systems for smart energy management and sustainable food production in a plant factory," Applied Energy, Elsevier, vol. 356(C).
    16. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    17. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    18. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    19. Dongsu Kim & Yongjun Lee & Kyungil Chin & Pedro J. Mago & Heejin Cho & Jian Zhang, 2023. "Implementation of a Long Short-Term Memory Transfer Learning (LSTM-TL)-Based Data-Driven Model for Building Energy Demand Forecasting," Sustainability, MDPI, vol. 15(3), pages 1-23, January.
    20. Xiao, Tianqi & You, Fengqi, 2024. "Physically consistent deep learning-based day-ahead energy dispatching and thermal comfort control for grid-interactive communities," Applied Energy, Elsevier, vol. 353(PB).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:333:y:2023:i:c:s0306261922018189. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.