
Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning

Authors

  • Guo, Chenyu
  • Wang, Xin
  • Zheng, Yihui
  • Zhang, Feng

Abstract

A microgrid (MG) is an effective way to integrate renewable energy into the power system on the consumer side. Within the MG, an energy management system (EMS) must be deployed to realize efficient utilization and stable operation. To help the EMS make optimal scheduling decisions, we propose a real-time dynamic optimal energy management (OEM) method based on a deep reinforcement learning (DRL) algorithm. Traditionally, the OEM problem is solved by mathematical programming (MP) or heuristic algorithms, which may suffer from low computational accuracy or efficiency. In the proposed DRL approach, the MG-OEM problem is instead formulated as a Markov decision process (MDP) that accounts for environmental uncertainties and is then solved by the proximal policy optimization (PPO) algorithm. PPO is a policy-based DRL algorithm with continuous state and action spaces, and the proposed method comprises two phases: offline training and online operation. During training, PPO learns from historical data to capture the uncertainty characteristics of renewable energy generation and load consumption. Finally, a case study demonstrates the effectiveness and computational efficiency of the proposed method.
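To make the abstract's formulation concrete, the sketch below casts MG-OEM as a toy MDP and shows PPO's core clipped surrogate objective. This is a minimal illustration, not the paper's model: the state layout, battery limits, price model, and probability distributions are all hypothetical assumptions, and a real implementation would train a neural-network policy on historical data.

```python
import random

class MicrogridEnv:
    """Toy MDP for microgrid optimal energy management (illustrative only).

    State:  (PV output kW, load kW, battery state of charge kWh, grid price $/kWh)
    Action: battery charge (+) / discharge (-) power in kW (continuous)
    Reward: negative cost of energy bought from the grid
    All parameters and distributions are hypothetical, not taken from the paper.
    """

    def __init__(self, capacity_kwh=10.0, max_power_kw=3.0, dt_h=1.0, seed=0):
        self.capacity = capacity_kwh
        self.max_power = max_power_kw
        self.dt = dt_h
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.soc = 0.5 * self.capacity  # start half-charged
        self.t = 0
        return self._observe()

    def _observe(self):
        # Uncertain PV and load are drawn from simple distributions here;
        # in the paper's setting the agent learns these patterns from
        # historical data during offline training.
        pv = max(0.0, self.rng.gauss(2.0, 0.8))
        load = max(0.0, self.rng.gauss(3.0, 0.5))
        price = 0.1 + 0.05 * self.rng.random()
        self.pv, self.load, self.price = pv, load, price
        return (pv, load, self.soc, price)

    def step(self, action_kw):
        # Clip the action to the battery's power and energy limits.
        a = max(-self.max_power, min(self.max_power, action_kw))
        a = max(-self.soc / self.dt, min((self.capacity - self.soc) / self.dt, a))
        self.soc += a * self.dt
        grid_kw = self.load - self.pv + a          # net power drawn from the grid
        cost = max(0.0, grid_kw) * self.price * self.dt
        self.t += 1
        done = self.t >= 24                        # one-day scheduling horizon
        return self._observe(), -cost, done

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective for one sample: the policy-probability
    ratio is clipped to [1 - eps, 1 + eps], which bounds each update step."""
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped * advantage)
```

In online operation, the trained policy would map each observed state to a dispatch action in real time, with no optimization solver in the loop.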

Suggested Citation

  • Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
  • Handle: RePEc:eee:energy:v:238:y:2022:i:pc:s0360544221021216
    DOI: 10.1016/j.energy.2021.121873

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544221021216
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2021.121873?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Chen, Pengzhan & Liu, Mengchao & Chen, Chuanxi & Shang, Xin, 2019. "A battery management strategy in microgrid for personalized customer requirements," Energy, Elsevier, vol. 189(C).
    2. Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
    3. Yang, Jun & Su, Changqi, 2021. "Robust optimization of microgrid based on renewable distributed power generation and load demand uncertainty," Energy, Elsevier, vol. 223(C).
    4. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    5. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    6. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    7. Liu, Hui & Yu, Chengqing & Wu, Haiping & Duan, Zhu & Yan, Guangxi, 2020. "A new hybrid ensemble deep reinforcement learning model for wind speed short term forecasting," Energy, Elsevier, vol. 202(C).
    8. Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
    9. Vitale, F. & Rispoli, N. & Sorrentino, M. & Rosen, M.A. & Pianese, C., 2021. "On the use of dynamic programming for optimal energy management of grid-connected reversible solid oxide cell-based renewable microgrids," Energy, Elsevier, vol. 225(C).
    10. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    11. Gomes, I.L.R. & Melicio, R. & Mendes, V.M.F., 2021. "A novel microgrid support management system based on stochastic mixed-integer linear programming," Energy, Elsevier, vol. 223(C).
    12. Alagoz, B. Baykant & Kaygusuz, Asim & Akcin, Murat & Alagoz, Serkan, 2013. "A closed-loop energy price controlling method for real-time energy balancing in a smart grid energy market," Energy, Elsevier, vol. 59(C), pages 95-104.
    13. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    14. Zhang, Yan & Meng, Fanlin & Wang, Rui & Kazemtabrizi, Behzad & Shi, Jianmai, 2019. "Uncertainty-resistant stochastic MPC approach for optimal operation of CHP microgrid," Energy, Elsevier, vol. 179(C), pages 1265-1278.
    15. Wang, Dongxiao & Qiu, Jing & Reedman, Luke & Meng, Ke & Lai, Loi Lei, 2018. "Two-stage energy management for networked microgrids with high renewable penetration," Applied Energy, Elsevier, vol. 226(C), pages 39-48.
    16. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    17. Bischi, Aldo & Taccari, Leonardo & Martelli, Emanuele & Amaldi, Edoardo & Manzolini, Giampaolo & Silva, Paolo & Campanari, Stefano & Macchi, Ennio, 2014. "A detailed MILP optimization model for combined cooling, heat and power system operation planning," Energy, Elsevier, vol. 74(C), pages 12-26.
    18. Liu, Yixin & Guo, Li & Wang, Chengshan, 2018. "A robust operation-based scheduling optimization for smart distribution networks with multi-microgrids," Applied Energy, Elsevier, vol. 228(C), pages 130-140.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
    Cited by:

    1. Alzahrani, Ahmad & Sajjad, Khizar & Hafeez, Ghulam & Murawwat, Sadia & Khan, Sheraz & Khan, Farrukh Aslam, 2023. "Real-time energy optimization and scheduling of buildings integrated with renewable microgrid," Applied Energy, Elsevier, vol. 335(C).
    2. Alireza Gorjian & Mohsen Eskandari & Mohammad H. Moradi, 2023. "Conservation Voltage Reduction in Modern Power Systems: Applications, Implementation, Quantification, and AI-Assisted Techniques," Energies, MDPI, vol. 16(5), pages 1-36, March.
    3. Amine, Hartani Mohamed & Aissa, Benhammou & Rezk, Hegazy & Messaoud, Hamouda & Othmane, Adbdelkhalek & Saad, Mekhilef & Abdelkareem, Mohammad Ali, 2023. "Enhancing hybrid energy storage systems with advanced low-pass filtration and frequency decoupling for optimal power allocation and reliability of cluster of DC-microgrids," Energy, Elsevier, vol. 282(C).
    4. Yang, Jingxian & Liu, Junyong & Qiu, Gao & Liu, Jichun & Jawad, Shafqat & Zhang, Shuai, 2023. "A spatio-temporality-enabled parallel multi-agent-based real-time dynamic dispatch for hydro-PV-PHS integrated power system," Energy, Elsevier, vol. 278(PB).
    5. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    6. Wang, Yijian & Cui, Yang & Li, Yang & Xu, Yang, 2023. "Collaborative optimization of multi-microgrids system with shared energy storage based on multi-agent stochastic game and reinforcement learning," Energy, Elsevier, vol. 280(C).
    7. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    8. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    9. Hong, Yejin & Yoon, Sungmin & Choi, Sebin, 2023. "Operational signature-based symbolic hierarchical clustering for building energy, operation, and efficiency towards carbon neutrality," Energy, Elsevier, vol. 265(C).
    10. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    11. Wilson Pavon & Esteban Inga & Silvio Simani & Maddalena Nonato, 2021. "A Review on Optimal Control for the Smart Grid Electrical Substation Enhancing Transition Stability," Energies, MDPI, vol. 14(24), pages 1-15, December.
    12. Khawaja Haider Ali & Mohammad Abusara & Asif Ali Tahir & Saptarshi Das, 2023. "Dual-Layer Q-Learning Strategy for Energy Management of Battery Storage in Grid-Connected Microgrids," Energies, MDPI, vol. 16(3), pages 1-17, January.
    13. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    14. Zhou, Yanting & Ma, Zhongjing & Zhang, Jinhui & Zou, Suli, 2022. "Data-driven stochastic energy management of multi energy system using deep reinforcement learning," Energy, Elsevier, vol. 261(PA).
    15. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    16. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    17. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    18. Yin, Linfei & Li, Yu, 2022. "Hybrid multi-agent emotional deep Q network for generation control of multi-area integrated energy systems," Applied Energy, Elsevier, vol. 324(C).
    19. Fan, Likang & Wang, Jun & Peng, Yiqiang & Sun, Hongwei & Bao, Xiuchao & Zeng, Baoquan & Wei, Hongqian, 2024. "Real-time energy management strategy with dynamically updating equivalence factor for through-the-road (TTR) hybrid vehicles," Energy, Elsevier, vol. 298(C).
    20. Seyed Hasan Mirbarati & Najme Heidari & Amirhossein Nikoofard & Mir Sayed Shah Danish & Mahdi Khosravy, 2022. "Techno-Economic-Environmental Energy Management of a Micro-Grid: A Mixed-Integer Linear Programming Approach," Sustainability, MDPI, vol. 14(22), pages 1-14, November.
    21. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    22. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    23. Zhang, Bin & Wu, Xuewei & Ghias, Amer M.Y.M. & Chen, Zhe, 2023. "Coordinated carbon capture systems and power-to-gas dynamic economic energy dispatch strategy for electricity–gas coupled systems considering system uncertainty: An improved soft actor–critic approach," Energy, Elsevier, vol. 271(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wilson Pavon & Esteban Inga & Silvio Simani & Maddalena Nonato, 2021. "A Review on Optimal Control for the Smart Grid Electrical Substation Enhancing Transition Stability," Energies, MDPI, vol. 14(24), pages 1-15, December.
    2. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    3. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    4. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    5. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    6. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    7. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    8. Zhengyu Yao & Hwan-Sik Yoon & Yang-Ki Hong, 2023. "Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning," Energies, MDPI, vol. 16(2), pages 1-18, January.
    9. Grace Muriithi & Sunetra Chowdhury, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
    10. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    11. Qi, Chunyang & Song, Chuanxue & Xiao, Feng & Song, Shixin, 2022. "Generalization ability of hybrid electric vehicle energy management strategy based on reinforcement learning method," Energy, Elsevier, vol. 250(C).
    12. Tang, Xiaolin & Zhou, Haitao & Wang, Feng & Wang, Weida & Lin, Xianke, 2022. "Longevity-conscious energy management strategy of fuel cell hybrid electric Vehicle Based on deep reinforcement learning," Energy, Elsevier, vol. 238(PA).
    13. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    14. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    15. Yang, Jingxian & Liu, Junyong & Qiu, Gao & Liu, Jichun & Jawad, Shafqat & Zhang, Shuai, 2023. "A spatio-temporality-enabled parallel multi-agent-based real-time dynamic dispatch for hydro-PV-PHS integrated power system," Energy, Elsevier, vol. 278(PB).
    16. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    17. Li, Jie & Wu, Xiaodong & Xu, Min & Liu, Yonggang, 2022. "Deep reinforcement learning and reward shaping based eco-driving control for automated HEVs among signalized intersections," Energy, Elsevier, vol. 251(C).
    18. Mahdi Khodayar & Jacob Regan, 2023. "Deep Neural Networks in Power Systems: A Review," Energies, MDPI, vol. 16(12), pages 1-38, June.
    19. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    20. Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:238:y:2022:i:pc:s0360544221021216. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.