IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v364y2024ics0306261924005464.html

Application-oriented assessment of grid-connected PV-battery system with deep reinforcement learning in buildings considering electricity price dynamics

Author

Listed:
  • Chen, Qi
  • Kuang, Zhonghong
  • Liu, Xiaohua
  • Zhang, Tao

Abstract

Deep reinforcement learning (DRL) plays a decisive role in addressing uncertainties in intelligent grid-building interactions. Using DRL algorithms, this research optimizes the operational strategy of a building's grid-connected photovoltaic-battery (PV-battery) system and examines the economic impact of battery capacity, rooftop PV penetration, and electricity price volatility. Three algorithms are employed, each clearly outperforming rule-based control. Without rooftop PV, rule-based control achieves a battery cost saving of 0.07 RMB/(d·kWh) with a capacity equal to the average building load, while the three algorithms achieve a more substantial 0.17–0.19 RMB/(d·kWh). Adding PV introduces further complexity to the DRL training process. Incorporating PV radiation information into the state space markedly improves the battery's ability to absorb surplus PV, enhancing the economic benefit of the DRL strategy. Consequently, the battery attains cost savings of approximately 0.46 RMB/(d·kWh) under 50% PV penetration. Finally, the study shows that as electricity price volatility intensifies, the advantage of DRL becomes more pronounced. As grid renewable penetration rises from 24% to 50%, the superiority of DRL over rule-based control in battery cost savings increases from 0.11 to 0.17 RMB/(d·kWh).
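The rule-based baseline that the abstract benchmarks DRL against can be sketched as a simple price-threshold arbitrage loop: charge the battery in cheap hours, serve the load from it in expensive hours, and report the saving in the paper's RMB/(d·kWh) metric. This is an illustrative toy, not the paper's model: the sinusoidal price curve, flat 100 kW load, C/4 charge limit, and 95% one-way efficiency are all assumptions. A DRL agent would replace the fixed thresholds with a policy learned over a state that includes price, load, state of charge and, per the paper's finding, PV radiation.

```python
import math

# --- Hypothetical inputs (illustrative only, not values from the paper) ---
HOURS = 24
price = [0.5 - 0.4 * math.sin(2 * math.pi * t / HOURS) for t in range(HOURS)]  # RMB/kWh
load = [100.0] * HOURS              # kW, flat building load
capacity = sum(load) / HOURS        # kWh: battery sized to the average hourly load
eta = 0.95                          # assumed one-way charge/discharge efficiency

# Fixed thresholds: charge below the ~25th price percentile, discharge above the ~75th
ranked = sorted(price)
lo, hi = ranked[HOURS // 4], ranked[3 * HOURS // 4]

soc = 0.0                           # battery state of charge, kWh
cost_with = cost_without = 0.0
for t in range(HOURS):
    p, d = price[t], load[t]
    charge = discharge = 0.0
    if p <= lo and soc < capacity:          # cheap hour: buy energy into the battery
        charge = min(capacity - soc, capacity / 4)   # C/4 charge-rate limit
        soc += charge * eta
    elif p >= hi and soc > 0.0:             # expensive hour: serve load from the battery
        discharge = min(soc * eta, d)
        soc -= discharge / eta
    cost_with += p * (d + charge - discharge)
    cost_without += p * d

# Daily saving normalized by battery capacity, the metric used in the abstract
saving_per_kwh = (cost_without - cost_with) / capacity   # RMB/(d*kWh)
print(f"rule-based saving: {saving_per_kwh:.3f} RMB/(d*kWh)")
```

Because the cheap hours of this stylized curve precede the expensive ones, the loop yields a positive per-kWh saving; with a flatter price curve the same thresholds save little, which mirrors the abstract's observation that the value of smarter control grows with price volatility.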

Suggested Citation

  • Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Application-oriented assessment of grid-connected PV-battery system with deep reinforcement learning in buildings considering electricity price dynamics," Applied Energy, Elsevier, vol. 364(C).
  • Handle: RePEc:eee:appene:v:364:y:2024:i:c:s0306261924005464
    DOI: 10.1016/j.apenergy.2024.123163

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924005464
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123163?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Yang, Hongxing & Wei, Zhou & Chengzhi, Lou, 2009. "Optimal design and techno-economic analysis of a hybrid solar-wind power generation system," Applied Energy, Elsevier, vol. 86(2), pages 163-169, February.
    2. Raviv, Eran & Bouwman, Kees E. & van Dijk, Dick, 2015. "Forecasting day-ahead electricity prices: Utilizing hourly prices," Energy Economics, Elsevier, vol. 50(C), pages 227-239.
    3. Park, Jong-Whi & Ju, Young-Min & Kim, You-Gwon & Kim, Hak-Sung, 2023. "50% reduction in energy consumption in an actual cold storage facility using a deep reinforcement learning-based control algorithm," Applied Energy, Elsevier, vol. 352(C).
    4. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    5. Joanna Clarke & Justin Searle, 2021. "Active Building demonstrators for a low-carbon future," Nature Energy, Nature, vol. 6(12), pages 1087-1089, December.
    6. Kim, Donghun & Wang, Zhe & Brugger, James & Blum, David & Wetter, Michael & Hong, Tianzhen & Piette, Mary Ann, 2022. "Site demonstration and performance evaluation of MPC for a large chiller plant with TES for renewable energy integration and grid decarbonization," Applied Energy, Elsevier, vol. 321(C).
    7. Yu Qian Ang & Zachary Michael Berzolla & Samuel Letellier-Duchesne & Christoph F. Reinhart, 2023. "Carbon reduction technology pathways for existing buildings in eight cities," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    8. Yin, WanJun & Wen, Tao & Zhang, Chao, 2023. "Cooperative optimal scheduling strategy of electric vehicles based on dynamic electricity price mechanism," Energy, Elsevier, vol. 263(PA).
    9. Shi, Tao & Xu, Chang & Dong, Wenhao & Zhou, Hangyu & Bokhari, Awais & Klemeš, Jiří Jaromír & Han, Ning, 2023. "Research on energy management of hydrogen electric coupling system based on deep reinforcement learning," Energy, Elsevier, vol. 282(C).
    10. He, Gang & Kammen, Daniel M., 2016. "Where, when and how much solar is available? A provincial-scale solar resource assessment for China," Renewable Energy, Elsevier, vol. 85(C), pages 74-82.
    11. Salpakari, Jyri & Lund, Peter, 2016. "Optimal and rule-based control strategies for energy flexibility in buildings with PV," Applied Energy, Elsevier, vol. 161(C), pages 425-436.
    12. Edward A. Byers & Gemma Coxon & Jim Freer & Jim W. Hall, 2020. "Drought and climate change impacts on cooling water shortages and electricity prices in Great Britain," Nature Communications, Nature, vol. 11(1), pages 1-12, December.
    13. Kang, Hyuna & Jung, Seunghoon & Kim, Hakpyeong & Jeoung, Jaewon & Hong, Taehoon, 2024. "Reinforcement learning-based optimal scheduling model of battery energy storage system at the building level," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).
    14. Kang, Dongju & Kang, Doeun & Hwangbo, Sumin & Niaz, Haider & Lee, Won Bo & Liu, J. Jay & Na, Jonggeol, 2023. "Optimal planning of hybrid energy storage systems using curtailed renewable energy through deep reinforcement learning," Energy, Elsevier, vol. 284(C).
    15. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    16. Kuang, Zhonghong & Chen, Qi & Yu, Yang, 2022. "Assessing the CO2-emission risk due to wind-energy uncertainty," Applied Energy, Elsevier, vol. 310(C).
    17. Iain Staffell & Stefan Pfenninger & Nathan Johnson, 2023. "A global model of hourly space heating and cooling demand at multiple spatial scales," Nature Energy, Nature, vol. 8(12), pages 1328-1344, December.
    18. Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2022. "Energy storage to solve the diurnal, weekly, and seasonal mismatch and achieve zero-carbon electricity consumption in buildings," Applied Energy, Elsevier, vol. 312(C).
    19. Ren, Kezheng & Liu, Jun & Wu, Zeyang & Liu, Xinglei & Nie, Yongxin & Xu, Haitao, 2024. "A data-driven DRL-based home energy management system optimization framework considering uncertain household parameters," Applied Energy, Elsevier, vol. 355(C).
    20. Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Optimal sizing and techno-economic analysis of the hybrid PV-battery-cooling storage system for commercial buildings in China," Applied Energy, Elsevier, vol. 355(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project, subscribe to its RSS feed for this item.


    Cited by:

    1. Krystian Janusz Cieślak, 2024. "Profitability Analysis of a Prosumer Photovoltaic Installation in Light of Changing Electricity Billing Regulations in Poland," Energies, MDPI, vol. 17(15), pages 1-16, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Optimal sizing and techno-economic analysis of the hybrid PV-battery-cooling storage system for commercial buildings in China," Applied Energy, Elsevier, vol. 355(C).
    2. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    3. Zhang, Tianhao & Dong, Zhe & Huang, Xiaojin, 2024. "Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning," Energy, Elsevier, vol. 286(C).
    4. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    5. Wang, Hao & Chen, Xiwen & Vital, Natan & Duffy, Edward & Razi, Abolfazl, 2024. "Energy optimization for HVAC systems in multi-VAV open offices: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 356(C).
    6. Yan Yang & Qingyu Wei & Shanke Liu & Liang Zhao, 2022. "Distribution Strategy Optimization of Standalone Hybrid WT/PV System Based on Different Solar and Wind Resources for Rural Applications," Energies, MDPI, vol. 15(14), pages 1-21, July.
    7. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    8. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    9. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    10. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    11. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    12. Jun Maekawa & Koji Shimada, 2019. "A Speculative Trading Model for the Electricity Market: Based on Japan Electric Power Exchange," Energies, MDPI, vol. 12(15), pages 1-15, July.
    13. Liu, Hailiang & Andresen, Gorm Bruun & Greiner, Martin, 2018. "Cost-optimal design of a simplified highly renewable Chinese electricity network," Energy, Elsevier, vol. 147(C), pages 534-546.
    14. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    15. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    17. Jing-Li Fan & Zezheng Li & Xi Huang & Kai Li & Xian Zhang & Xi Lu & Jianzhong Wu & Klaus Hubacek & Bo Shen, 2023. "A net-zero emissions strategy for China’s power sector using carbon-capture utilization and storage," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    18. Shangfeng Han & Baosheng Zhang & Xiaoyang Sun & Song Han & Mikael Höök, 2017. "China’s Energy Transition in the Power and Transport Sectors from a Substitution Perspective," Energies, MDPI, vol. 10(5), pages 1-25, April.
    19. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    20. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    21. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:364:y:2024:i:c:s0306261924005464. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.