Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning
DOI: 10.1016/j.energy.2021.121958
References listed on IDEAS
- Andresen, Gorm B. & Rodriguez, Rolando A. & Becker, Sarah & Greiner, Martin, 2014. "The potential for arbitrage of wind and solar surplus power in Denmark," Energy, Elsevier, vol. 76(C), pages 49-58.
- Díaz, Guzmán & Gómez-Aleixandre, Javier & Coto, José & Conejero, Olga, 2018. "Maximum income resulting from energy arbitrage by battery systems subject to cycle aging and price uncertainty from a dynamic programming perspective," Energy, Elsevier, vol. 156(C), pages 647-660.
- Richard Bellman, 1954. "Some Applications of the Theory of Dynamic Programming---A Review," Operations Research, INFORMS, vol. 2(3), pages 275-288, August.
- Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
- Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
- Totaro, Simone & Boukas, Ioannis & Jonsson, Anders & Cornélusse, Bertrand, 2021. "Lifelong control of off-grid microgrid with model-based reinforcement learning," Energy, Elsevier, vol. 232(C).
- Richard Bellman, 1954. "On some applications of the theory of dynamic programming to logistics," Naval Research Logistics Quarterly, John Wiley & Sons, vol. 1(2), pages 141-153, June.
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Hansen, Kenneth & Breyer, Christian & Lund, Henrik, 2019. "Status and perspectives on 100% renewable energy systems," Energy, Elsevier, vol. 175(C), pages 471-480.
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
- Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
- Carrillo, C. & Obando Montaño, A.F. & Cidrás, J. & Díaz-Dorado, E., 2013. "Review of power curve modelling for wind turbines," Renewable and Sustainable Energy Reviews, Elsevier, vol. 21(C), pages 572-581.
- Bogdanov, Dmitrii & Ram, Manish & Aghahosseini, Arman & Gulagi, Ashish & Oyewo, Ayobami Solomon & Child, Michael & Caldera, Upeksha & Sadovskaia, Kristina & Farfan, Javier & De Souza Noel Simas Barbos, 2021. "Low-cost renewable electricity as the key driver of the global energy transition towards sustainability," Energy, Elsevier, vol. 227(C).
Citations
Cited by:
- Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
- Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
- Shabani, Masoume & Wallin, Fredrik & Dahlquist, Erik & Yan, Jinyue, 2023. "The impact of battery operating management strategies on life cycle cost assessment in real power market for a grid-connected residential battery application," Energy, Elsevier, vol. 270(C).
- Ruisheng Wang & Zhong Chen & Qiang Xing & Ziqi Zhang & Tian Zhang, 2022. "A Modified Rainbow-Based Deep Reinforcement Learning Method for Optimal Scheduling of Charging Station," Sustainability, MDPI, vol. 14(3), pages 1-14, February.
- Lai, Chun Sing & Chen, Dashen & Zhang, Jinning & Zhang, Xin & Xu, Xu & Taylor, Gareth A. & Lai, Loi Lei, 2022. "Profit maximization for large-scale energy storage systems to enable fast EV charging infrastructure in distribution networks," Energy, Elsevier, vol. 259(C).
- Wang, Tianjing & Dong, Zhao Yang, 2024. "Adaptive personalized federated reinforcement learning for multiple-ESS optimal market dispatch strategy with electric vehicles and photovoltaic power generations," Applied Energy, Elsevier, vol. 365(C).
- Zhou, Yanting & Ma, Zhongjing & Zhang, Jinhui & Zou, Suli, 2022. "Data-driven stochastic energy management of multi energy system using deep reinforcement learning," Energy, Elsevier, vol. 261(PA).
- Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
- Sai, Wei & Pan, Zehua & Liu, Siyu & Jiao, Zhenjun & Zhong, Zheng & Miao, Bin & Chan, Siew Hwa, 2023. "Event-driven forecasting of wholesale electricity price and frequency regulation price using machine learning algorithms," Applied Energy, Elsevier, vol. 352(C).
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Esmaeili Aliabadi, Danial & Chan, Katrina, 2022. "The emerging threat of artificial intelligence on competition in liberalized electricity markets: A deep Q-network approach," Applied Energy, Elsevier, vol. 325(C).
- Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
- Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
- Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
- Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
- Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
- Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
- Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
- Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
- Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
- Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
- Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
- Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
- Van-Hai Bui & Akhtar Hussain & Hak-Man Kim, 2019. "Q-Learning-Based Operation Strategy for Community Battery Energy Storage System (CBESS) in Microgrid System," Energies, MDPI, vol. 12(9), pages 1-17, May.
- Oyewo, Ayobami Solomon & Solomon, A.A. & Bogdanov, Dmitrii & Aghahosseini, Arman & Mensah, Theophilus Nii Odai & Ram, Manish & Breyer, Christian, 2021. "Just transition towards defossilised energy systems for developing economies: A case study of Ethiopia," Renewable Energy, Elsevier, vol. 176(C), pages 346-365.
- Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.
More about this item
Keywords
Actor-critic methods; Deep Q-Networks; Demand response; Microgrids; Renewable energy.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:238:y:2022:i:pc:s0360544221022064. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .