Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load
Suggested Citation
DOI: 10.1016/j.energy.2023.127087
References listed on IDEAS
- Ifaei, Pouya & Nazari-Heris, Morteza & Tayerani Charmchi, Amir Saman & Asadi, Somayeh & Yoo, ChangKyoo, 2023. "Sustainable energies and machine learning: An organized review of recent applications and challenges," Energy, Elsevier, vol. 266(C).
- Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
- Xiang, Yue & Zhou, Lili & Huang, Yuan & Zhang, Xin & Liu, Youbo & Liu, Junyong, 2021. "Reactive coordinated optimal operation of distributed wind generation," Energy, Elsevier, vol. 218(C).
- Yang, Zhichun & Yang, Fan & Min, Huaidong & Tian, Hao & Hu, Wei & Liu, Jian & Eghbalian, Nasrin, 2023. "Energy management programming to reduce distribution network operating costs in the presence of electric vehicles and renewable energy sources," Energy, Elsevier, vol. 263(PA).
- Ma, Wei & Wang, Wei & Chen, Zhe & Wu, Xuezhi & Hu, Ruonan & Tang, Fen & Zhang, Weige, 2021. "Voltage regulation methods for active distribution networks considering the reactive power optimization of substations," Applied Energy, Elsevier, vol. 284(C).
- Zhou, Yanting & Ma, Zhongjing & Zhang, Jinhui & Zou, Suli, 2022. "Data-driven stochastic energy management of multi energy system using deep reinforcement learning," Energy, Elsevier, vol. 261(PA).
- Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
- Sun, Qirun & Wu, Zhi & Gu, Wei & Zhu, Tao & Zhong, Lei & Gao, Ting, 2021. "Flexible expansion planning of distribution system integrating multiple renewable energy sources: An approximate dynamic programming approach," Energy, Elsevier, vol. 226(C).
- Yao, Haotian & Xiang, Yue & Liu, Junyong, 2022. "Exploring multiple investment strategies for non-utility-owned DGs: A decentralized risked-based approach," Applied Energy, Elsevier, vol. 326(C).
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Tsao, Yu-Chung & Beyene, Tsehaye Dedimas & Thanh, Vo-Van & Gebeyehu, Sisay Geremew & Kuo, Tsai-Chi, 2022. "Power distribution network design considering the distributed generations and differential and dynamic pricing," Energy, Elsevier, vol. 241(C).
- Siqin, Zhuoya & Niu, DongXiao & Wang, Xuejie & Zhen, Hao & Li, MingYu & Wang, Jingbo, 2022. "A two-stage distributionally robust optimization model for P2G-CCHP microgrid considering uncertainty and carbon emission," Energy, Elsevier, vol. 260(C).
- Castillo, Victhalia Zapata & Boer, Harmen-Sytze de & Muñoz, Raúl Maícas & Gernaat, David E.H.J. & Benders, René & van Vuuren, Detlef, 2022. "Future global electricity demand load curves," Energy, Elsevier, vol. 258(C).
- Esmaeili, Mobin & Sedighizadeh, Mostafa & Esmaili, Masoud, 2016. "Multi-objective optimal reconfiguration and DG (Distributed Generation) power allocation in distribution networks using Big Bang-Big Crunch algorithm considering load uncertainty," Energy, Elsevier, vol. 103(C), pages 86-99.
Citations
Cited by:
- Guo, Tianyu & Guo, Qi & Huang, Libin & Guo, Haiping & Lu, Yuanhong & Tu, Liang, 2023. "Microgrid source-network-load-storage master-slave game optimization method considering the energy storage overcharge/overdischarge risk," Energy, Elsevier, vol. 282(C).
- Ebrahimi, Mahoor & Ebrahimi, Mahan & Shafie-khah, Miadreza & Laaksonen, Hannu, 2024. "EV-observing distribution system management considering strategic VPPs and active & reactive power markets," Applied Energy, Elsevier, vol. 364(C).
- Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
- Weicheng Zhou & Ping Zhao & Yifei Lu, 2023. "Collaborative Optimal Configuration of a Mobile Energy Storage System and a Stationary Energy Storage System to Cope with Regional Grid Blackouts in Extreme Scenarios," Energies, MDPI, vol. 16(23), pages 1-17, December.
- Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.
- Sicheng Wang & Weiqing Sun, 2023. "Capacity Value Assessment for a Combined Power Plant System of New Energy and Energy Storage Based on Robust Scheduling Rules," Sustainability, MDPI, vol. 15(21), pages 1-19, October.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
- Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
- Chen, Qi & Kuang, Zhonghong & Liu, Xiaohua & Zhang, Tao, 2024. "Application-oriented assessment of grid-connected PV-battery system with deep reinforcement learning in buildings considering electricity price dynamics," Applied Energy, Elsevier, vol. 364(C).
- Yin, Linfei & Li, Yu, 2022. "Hybrid multi-agent emotional deep Q network for generation control of multi-area integrated energy systems," Applied Energy, Elsevier, vol. 324(C).
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
- Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
- Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
- Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
- Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
- Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
- A.S. Jameel Hassan & Umar Marikkar & G.W. Kasun Prabhath & Aranee Balachandran & W.G. Chaminda Bandara & Parakrama B. Ekanayake & Roshan I. Godaliyadda & Janaka B. Ekanayake, 2021. "A Sensitivity Matrix Approach Using Two-Stage Optimization for Voltage Regulation of LV Networks with High PV Penetration," Energies, MDPI, vol. 14(20), pages 1-24, October.
- Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
- Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
- Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
- Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
- Zhu, Xingxu & Hou, Xiangchen & Li, Junhui & Yan, Gangui & Li, Cuiping & Wang, Dongbo, 2023. "Distributed online prediction optimization algorithm for distributed energy resources considering the multi-periods optimal operation," Applied Energy, Elsevier, vol. 348(C).
- Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
- Ande Chang & Yuting Ji & Chunguang Wang & Yiming Bie, 2024. "CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method," Sustainability, MDPI, vol. 16(5), pages 1-17, March.
- Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
More about this item
Keywords
Active distribution system; Distributed generation; Deep reinforcement learning; Optimal scheduling.
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:271:y:2023:i:c:s0360544223004814. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.