
Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control

Author

Listed:
  • Wenya Xu

    (Innovation Institute for Sustainable Maritime Architecture Research and Technology, Qingdao University of Technology, Qingdao 266033, China)

  • Yanxue Li

    (Innovation Institute for Sustainable Maritime Architecture Research and Technology, Qingdao University of Technology, Qingdao 266033, China
    Department of Building Environment and Energy Engineering, The Hong Kong Polytechnic University, Hong Kong, China)

  • Guanjie He

    (Innovation Institute for Sustainable Maritime Architecture Research and Technology, Qingdao University of Technology, Qingdao 266033, China)

  • Yang Xu

    (Innovation Institute for Sustainable Maritime Architecture Research and Technology, Qingdao University of Technology, Qingdao 266033, China
    Faculty of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan)

  • Weijun Gao

    (Innovation Institute for Sustainable Maritime Architecture Research and Technology, Qingdao University of Technology, Qingdao 266033, China
    Faculty of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan)

Abstract

The development of distributed renewable energy resources and smart energy management are efficient approaches to decarbonizing building energy systems. Reinforcement learning (RL) is a data-driven control approach that learns a control policy by training on large amounts of data. However, this learning process generally exhibits low efficiency when trained on stochastic real-world data. To address this challenge, this study proposes a model-based RL approach to optimize the operation of existing zero-energy houses, considering photovoltaic (PV) self-consumption and energy costs. The model-based approach exploits knowledge of the system dynamics, which improves learning efficiency. A reward function is designed that accounts for the physical constraints of battery storage, PV feed-in profit, and energy cost. Measured data from a zero-energy house are used to train and test the proposed RL controllers, including Q-learning, deep Q-network (DQN), and deep deterministic policy gradient (DDPG) agents. The results show that the proposed RL agents converge quickly during training. Compared with a rule-based strategy, test cases verify the cost-effectiveness of the proposed RL approaches in scheduling the hybrid energy system under different scenarios. Comparative analysis over the test periods shows that the DQN agent achieves greater energy cost savings than Q-learning, while the Q-learning agent controls the battery more flexibly in response to real-time electricity price fluctuations. The DDPG agent achieves the highest PV self-consumption ratio, 49.4%, with a self-sufficiency ratio of 36.7%, and it reduces energy cost by 7.2% relative to rule-based operation over the test periods.
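
This page does not reproduce the paper's formulation, so the following is only a minimal sketch, in Python, of a reward of the kind the abstract describes: PV feed-in profit minus energy cost, with a penalty for violating the battery's physical state-of-charge limits. Every name, tariff, and limit below is an illustrative assumption, not the authors' method.

```python
# Hypothetical reward sketch: the abstract states the reward combines battery
# physical constraints, PV feed-in profit, and energy cost, but the exact
# formulation is not given on this page. All names and values here are
# illustrative assumptions.

def step_reward(grid_import_kwh: float,   # electricity bought from the grid this step
                pv_export_kwh: float,     # surplus PV fed into the grid this step
                price: float,             # real-time electricity price (currency/kWh)
                feed_in_tariff: float,    # PV feed-in tariff (currency/kWh)
                soc: float,               # battery state of charge in [0, 1]
                soc_min: float = 0.1,     # assumed lower SOC limit
                soc_max: float = 0.9,     # assumed upper SOC limit
                penalty_weight: float = 1.0) -> float:
    """Reward = feed-in profit - energy cost - penalty for violating SOC limits."""
    energy_cost = price * grid_import_kwh
    feed_in_profit = feed_in_tariff * pv_export_kwh
    # Penalize charge/discharge actions that push the state of charge
    # outside its allowed operating window.
    soc_violation = max(0.0, soc_min - soc) + max(0.0, soc - soc_max)
    return feed_in_profit - energy_cost - penalty_weight * soc_violation
```

Under a reward of this shape, a Q-learning or DQN agent would select a discretized charge/discharge action at each scheduling step, while DDPG outputs a continuous battery power setpoint. The reported PV self-consumption ratio is the share of PV generation used on site, and the self-sufficiency ratio is the share of the building load covered by on-site PV.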

Suggested Citation

  • Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:13:p:4844-:d:1175981

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/13/4844/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/13/4844/
    Download Restriction: no

    References listed on IDEAS

    1. Langer, Lissy & Volling, Thomas, 2022. "A reinforcement learning approach to home energy management for modulating heat pumps and photovoltaic systems," Applied Energy, Elsevier, vol. 327(C).
    2. Eslami, M. & Nahani, P., 2021. "How policies affect the cost-effectiveness of residential renewable energy in Iran: A techno-economic analysis for optimization," Utilities Policy, Elsevier, vol. 72(C).
    3. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    4. Gianluca Serale & Massimo Fiorentini & Alfonso Capozzoli & Daniele Bernardini & Alberto Bemporad, 2018. "Model Predictive Control (MPC) for Enhancing Building and HVAC System Energy Efficiency: Problem Formulation, Applications and Opportunities," Energies, MDPI, vol. 11(3), pages 1-35, March.
    5. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    6. Munankarmi, Prateek & Maguire, Jeff & Balamurugan, Sivasathya Pradha & Blonsky, Michael & Roberts, David & Jin, Xin, 2021. "Community-scale interaction of energy efficiency and demand flexibility in residential buildings," Applied Energy, Elsevier, vol. 298(C).
    7. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    8. Wang, Yu & Xu, Yan & Tang, Yi, 2019. "Distributed aggregation control of grid-interactive smart buildings for power system frequency support," Applied Energy, Elsevier, vol. 251(C), pages 1-1.
    9. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    10. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    11. Jin, Xiaoyu & Xiao, Fu & Zhang, Chong & Chen, Zhijie, 2022. "Semi-supervised learning based framework for urban level building electricity consumption prediction," Applied Energy, Elsevier, vol. 328(C).
    12. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    13. Puranen, Pietari & Kosonen, Antti & Ahola, Jero, 2021. "Techno-economic viability of energy storage concepts combined with a residential solar photovoltaic system: A case study from Finland," Applied Energy, Elsevier, vol. 298(C).
    14. Lee, Heeyun & Kim, Kyunghyun & Kim, Namwook & Cha, Suk Won, 2022. "Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning," Applied Energy, Elsevier, vol. 313(C).
    15. Bay, Christopher J. & Chintala, Rohit & Chinde, Venkatesh & King, Jennifer, 2022. "Distributed model predictive control for coordinated, grid-interactive buildings," Applied Energy, Elsevier, vol. 312(C).
    16. He, Fan & Bo, Renfei & Hu, Chenxi & Meng, Xi & Gao, Weijun, 2023. "Employing spiral fins to improve the thermal performance of phase-change materials in shell-tube latent heat storage units," Renewable Energy, Elsevier, vol. 203(C), pages 518-528.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    2. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    3. Wang, Zixuan & Xiao, Fu & Ran, Yi & Li, Yanxue & Xu, Yang, 2024. "Scalable energy management approach of residential hybrid energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 367(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    5. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    6. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    7. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    8. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    9. Keerthana Sivamayil & Elakkiya Rajasekar & Belqasem Aljafari & Srete Nikolovski & Subramaniyaswamy Vairavasundaram & Indragandhi Vairavasundaram, 2023. "A Systematic Study on Reinforcement Learning Based Applications," Energies, MDPI, vol. 16(3), pages 1-23, February.
    10. Deng, Xiangtian & Zhang, Yi & Jiang, Yi & Zhang, Yi & Qi, He, 2024. "A novel operation method for renewable building by combining distributed DC energy system and deep reinforcement learning," Applied Energy, Elsevier, vol. 353(PB).
    11. Ren, Haoshan & Ma, Zhenjun & Fai Norman Tse, Chung & Sun, Yongjun, 2022. "Optimal control of solar-powered electric bus networks with improved renewable energy on-site consumption and reduced grid dependence," Applied Energy, Elsevier, vol. 323(C).
    12. Wu, Long & Yin, Xunyuan & Pan, Lei & Liu, Jinfeng, 2023. "Distributed economic predictive control of integrated energy systems for enhanced synergy and grid response: A decomposition and cooperation strategy," Applied Energy, Elsevier, vol. 349(C).
    13. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    14. Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Application-oriented assessment of grid-connected PV-battery system with deep reinforcement learning in buildings considering electricity price dynamics," Applied Energy, Elsevier, vol. 364(C).
    15. Liao, Wei & Xiao, Fu & Li, Yanxue & Zhang, Hanbei & Peng, Jinqing, 2024. "A comparative study of demand-side energy management strategies for building integrated photovoltaics-battery and electric vehicles (EVs) in diversified building communities," Applied Energy, Elsevier, vol. 361(C).
    16. Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
    17. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    18. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    19. Anujin Bayasgalan & Yoo Shin Park & Seak Bai Koh & Sung-Yong Son, 2024. "Comprehensive Review of Building Energy Management Models: Grid-Interactive Efficient Building Perspective," Energies, MDPI, vol. 17(19), pages 1-25, September.
    20. Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:16:y:2023:i:13:p:4844-:d:1175981. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.