IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v375y2024ics0306261924013874.html

An adaptive switching control model for air conditioning systems based on information completeness

Author

Listed:
  • Ding, Yan
  • Zhang, Haozheng
  • Yang, Xiaochen
  • Tian, Zhe
  • Huang, Chen

Abstract

As building energy management systems are widely applied, a large amount of operational data can be acquired and used for building load forecasting and energy system control. However, there is a lack of methods for assessing the completeness of operational datasets and the data quality required by different control models. Without such assessments, it becomes difficult to address the loss of model accuracy caused by fluctuations in data quality, resulting in deviations from actual operating conditions and degraded control performance. To bridge these gaps, a clustering method is employed to categorize the load forecasting training dataset into high- and low-completeness categories. By matching a reinforcement learning control model combining transfer learning and imitation learning to low-completeness datasets and a model-based optimization control model to high-completeness datasets, this study proposes an adaptive switching control model for air conditioning systems. A case study demonstrates that employing transfer imitation learning yields an 11.5% higher operational benefit than model-based optimization under low-completeness data conditions. The adaptive switching control model further reduces operational costs by 5.5% and energy consumption by 5.0% compared to single control models using only model-based optimization or transfer imitation learning.
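The switching logic described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the completeness metric (fraction of non-missing samples), the tiny 1-D two-means clustering, and the controller labels are all simplified placeholders for the paper's method of clustering datasets by completeness and dispatching to the matching controller.

```python
from statistics import fmean

def completeness(dataset):
    """Fraction of non-missing entries (None marks a missing sample).

    A stand-in for the paper's richer completeness assessment."""
    return sum(v is not None for v in dataset) / len(dataset)

def two_means_split(scores):
    """Tiny 1-D two-means clustering over completeness scores.

    Returns the midpoint between the two cluster centers, used as
    the high/low completeness threshold."""
    lo, hi = min(scores), max(scores)
    for _ in range(20):  # a few Lloyd iterations are enough in 1-D
        low = [s for s in scores if abs(s - lo) <= abs(s - hi)]
        high = [s for s in scores if abs(s - lo) > abs(s - hi)]
        if low:
            lo = fmean(low)
        if high:
            hi = fmean(high)
    return (lo + hi) / 2

def select_controller(dataset, threshold):
    """Adaptive switch: model-based optimization for high-completeness
    data, transfer imitation learning otherwise (placeholder labels)."""
    if completeness(dataset) >= threshold:
        return "model_based_optimization"
    return "transfer_imitation_learning"
```

In this sketch the threshold is learned from the completeness scores of the available training windows, so the switch adapts as data quality fluctuates rather than relying on a fixed cutoff.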

Suggested Citation

  • Ding, Yan & Zhang, Haozheng & Yang, Xiaochen & Tian, Zhe & Huang, Chen, 2024. "An adaptive switching control model for air conditioning systems based on information completeness," Applied Energy, Elsevier, vol. 375(C).
  • Handle: RePEc:eee:appene:v:375:y:2024:i:c:s0306261924013874
    DOI: 10.1016/j.apenergy.2024.124004

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924013874
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.124004?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ceusters, Glenn & Rodríguez, Román Cantú & García, Alberte Bouso & Franke, Rüdiger & Deconinck, Geert & Helsen, Lieve & Nowé, Ann & Messagie, Maarten & Camargo, Luis Ramirez, 2021. "Model-predictive control and reinforcement learning in multi-energy system case studies," Applied Energy, Elsevier, vol. 303(C).
    2. O’Malley, Cormac & de Mars, Patrick & Badesa, Luis & Strbac, Goran, 2023. "Reinforcement learning and mixed-integer programming for power plant scheduling in low carbon systems: Comparison and hybridisation," Applied Energy, Elsevier, vol. 349(C).
    3. Arroyo, Javier & Manna, Carlo & Spiessens, Fred & Helsen, Lieve, 2022. "Reinforced model predictive control (RL-MPC) for building energy management," Applied Energy, Elsevier, vol. 309(C).
    4. Ma, Deyin & Zhang, Lizhi & Sun, Bo, 2021. "An interval scheduling method for the CCHP system containing renewable energy sources based on model predictive control," Energy, Elsevier, vol. 236(C).
    5. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    7. Lu, Yakai & Tian, Zhe & Zhou, Ruoyu & Liu, Wenjing, 2021. "A general transfer learning-based framework for thermal load prediction in regional energy system," Energy, Elsevier, vol. 217(C).
    8. Fang, Xi & Gong, Guangcai & Li, Guannan & Chun, Liang & Peng, Pei & Li, Wenqiang & Shi, Xing, 2023. "Cross temporal-spatial transferability investigation of deep reinforcement learning control strategy in the building HVAC system level," Energy, Elsevier, vol. 263(PB).
    9. Zhan, Sicheng & Chong, Adrian, 2021. "Data requirements and performance evaluation of model predictive control in buildings: A modeling perspective," Renewable and Sustainable Energy Reviews, Elsevier, vol. 142(C).
    10. Dalala, Zakariya & Al-Omari, Murad & Al-Addous, Mohammad & Bdour, Mathhar & Al-Khasawneh, Yaqoub & Alkasrawi, Malek, 2022. "Increased renewable energy penetration in national electrical grids constraints and solutions," Energy, Elsevier, vol. 246(C).
    11. Yang, Wangwang & Shi, Jing & Li, Shujian & Song, Zhaofang & Zhang, Zitong & Chen, Zexu, 2022. "A combined deep learning load forecasting model of single household resident user considering multi-time scale electricity consumption behavior," Applied Energy, Elsevier, vol. 307(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Liu, Mingzhe & Guo, Mingyue & Fu, Yangyang & O’Neill, Zheng & Gao, Yuan, 2024. "Expert-guided imitation learning for energy management: Evaluating GAIL’s performance in building control applications," Applied Energy, Elsevier, vol. 372(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    4. Hua, Pengmin & Wang, Haichao & Xie, Zichan & Lahdelma, Risto, 2024. "Multi-criteria evaluation of novel multi-objective model predictive control method for indoor thermal comfort," Energy, Elsevier, vol. 289(C).
    5. Wang, Hao & Chen, Xiwen & Vital, Natan & Duffy, Edward & Razi, Abolfazl, 2024. "Energy optimization for HVAC systems in multi-VAV open offices: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 356(C).
    6. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    7. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    8. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    9. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
    10. Lei, Yue & Zhan, Sicheng & Ono, Eikichi & Peng, Yuzhen & Zhang, Zhiang & Hasama, Takamasa & Chong, Adrian, 2022. "A practical deep reinforcement learning framework for multivariate occupant-centric control in buildings," Applied Energy, Elsevier, vol. 324(C).
    11. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    12. Wang, Zixuan & Xiao, Fu & Ran, Yi & Li, Yanxue & Xu, Yang, 2024. "Scalable energy management approach of residential hybrid energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 367(C).
    13. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    14. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    15. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    16. Alexandra L’Heureux & Katarina Grolinger & Miriam A. M. Capretz, 2022. "Transformer-Based Model for Electrical Load Forecasting," Energies, MDPI, vol. 15(14), pages 1-23, July.
    17. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    18. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    19. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    20. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.