Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning
DOI: 10.1016/j.energy.2021.121873
References listed on IDEAS
- Chen, Pengzhan & Liu, Mengchao & Chen, Chuanxi & Shang, Xin, 2019. "A battery management strategy in microgrid for personalized customer requirements," Energy, Elsevier, vol. 189(C).
- Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
- Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
- Vitale, F. & Rispoli, N. & Sorrentino, M. & Rosen, M.A. & Pianese, C., 2021. "On the use of dynamic programming for optimal energy management of grid-connected reversible solid oxide cell-based renewable microgrids," Energy, Elsevier, vol. 225(C).
- Alagoz, B. Baykant & Kaygusuz, Asim & Akcin, Murat & Alagoz, Serkan, 2013. "A closed-loop energy price controlling method for real-time energy balancing in a smart grid energy market," Energy, Elsevier, vol. 59(C), pages 95-104.
- Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
- Zhang, Yan & Meng, Fanlin & Wang, Rui & Kazemtabrizi, Behzad & Shi, Jianmai, 2019. "Uncertainty-resistant stochastic MPC approach for optimal operation of CHP microgrid," Energy, Elsevier, vol. 179(C), pages 1265-1278.
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Bischi, Aldo & Taccari, Leonardo & Martelli, Emanuele & Amaldi, Edoardo & Manzolini, Giampaolo & Silva, Paolo & Campanari, Stefano & Macchi, Ennio, 2014. "A detailed MILP optimization model for combined cooling, heat and power system operation planning," Energy, Elsevier, vol. 74(C), pages 12-26.
- Liu, Yixin & Guo, Li & Wang, Chengshan, 2018. "A robust operation-based scheduling optimization for smart distribution networks with multi-microgrids," Applied Energy, Elsevier, vol. 228(C), pages 130-140.
- Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
- Yang, Jun & Su, Changqi, 2021. "Robust optimization of microgrid based on renewable distributed power generation and load demand uncertainty," Energy, Elsevier, vol. 223(C).
- Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
- Liu, Hui & Yu, Chengqing & Wu, Haiping & Duan, Zhu & Yan, Guangxi, 2020. "A new hybrid ensemble deep reinforcement learning model for wind speed short term forecasting," Energy, Elsevier, vol. 202(C).
- Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
- Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
- Gomes, I.L.R. & Melicio, R. & Mendes, V.M.F., 2021. "A novel microgrid support management system based on stochastic mixed-integer linear programming," Energy, Elsevier, vol. 223(C).
- Wang, Dongxiao & Qiu, Jing & Reedman, Luke & Meng, Ke & Lai, Loi Lei, 2018. "Two-stage energy management for networked microgrids with high renewable penetration," Applied Energy, Elsevier, vol. 226(C), pages 39-48.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Wilson Pavon & Esteban Inga & Silvio Simani & Maddalena Nonato, 2021. "A Review on Optimal Control for the Smart Grid Electrical Substation Enhancing Transition Stability," Energies, MDPI, vol. 14(24), pages 1-15, December.
- Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
- Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
- Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
- Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
- Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
- Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
- Zhengyu Yao & Hwan-Sik Yoon & Yang-Ki Hong, 2023. "Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning," Energies, MDPI, vol. 16(2), pages 1-18, January.
- Grace Muriithi & Sunetra Chowdhury, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
- Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
- Qi, Chunyang & Song, Chuanxue & Xiao, Feng & Song, Shixin, 2022. "Generalization ability of hybrid electric vehicle energy management strategy based on reinforcement learning method," Energy, Elsevier, vol. 250(C).
- Tang, Xiaolin & Zhou, Haitao & Wang, Feng & Wang, Weida & Lin, Xianke, 2022. "Longevity-conscious energy management strategy of fuel cell hybrid electric Vehicle Based on deep reinforcement learning," Energy, Elsevier, vol. 238(PA).
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
- Yang, Jingxian & Liu, Junyong & Qiu, Gao & Liu, Jichun & Jawad, Shafqat & Zhang, Shuai, 2023. "A spatio-temporality-enabled parallel multi-agent-based real-time dynamic dispatch for hydro-PV-PHS integrated power system," Energy, Elsevier, vol. 278(PB).
- Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
- Li, Jie & Wu, Xiaodong & Xu, Min & Liu, Yonggang, 2022. "Deep reinforcement learning and reward shaping based eco-driving control for automated HEVs among signalized intersections," Energy, Elsevier, vol. 251(C).
- Mahdi Khodayar & Jacob Regan, 2023. "Deep Neural Networks in Power Systems: A Review," Energies, MDPI, vol. 16(12), pages 1-38, June.
- Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
- Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
More about this item
Keywords
Microgrid; Optimal energy management; Uncertainties; Deep reinforcement learning
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:238:y:2022:i:pc:s0360544221021216.