Reactive Power Optimization for Transient Voltage Stability in Energy Internet via Deep Reinforcement Learning Approach
Author
Abstract
Suggested Citation
Download full text from publisher
References listed on IDEAS
- Haochen Hua & Chuantong Hao & Yuchao Qin & Junwei Cao, 2018. "A Class of Control Strategies for Energy Internet Considering System Robustness and Operation Cost Optimization," Energies, MDPI, vol. 11(6), pages 1-20, June.
- Haochen Hua & Yuchao Qin & Jianye Geng & Chuantong Hao & Junwei Cao, 2019. "Robust Mixed H2/H∞ Controller Design for Energy Routers in Energy Internet," Energies, MDPI, vol. 12(3), pages 1-16, January.
- Hua, Haochen & Qin, Yuchao & Hao, Chuantong & Cao, Junwei, 2019. "Optimal energy management strategies for energy Internet via deep reinforcement learning approach," Applied Energy, Elsevier, vol. 239(C), pages 598-609.
- Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
- Yilun Shang, 2018. "Resilient Multiscale Coordination Control against Adversarial Nodes," Energies, MDPI, vol. 11(7), pages 1-17, July.
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Seyed Mahdi Miraftabzadeh & Michela Longo & Federica Foiadelli & Marco Pasetti & Raul Igual, 2021. "Advances in the Application of Machine Learning Techniques for Power System Analytics: A Survey," Energies, MDPI, vol. 14(16), pages 1-24, August.
- Seok-Il Go & Sang-Yun Yun & Seon-Ju Ahn & Joon-Ho Choi, 2020. "Voltage and Reactive Power Optimization Using a Simplified Linear Equations at Distribution Networks with DG," Energies, MDPI, vol. 13(13), pages 1-23, June.
- Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
- Qingle Pang & Lin Ye & Houlei Gao & Xinian Li & Yang Zheng & Chenbin He, 2021. "Penalty Electricity Price-Based Optimal Control for Distribution Networks," Energies, MDPI, vol. 14(7), pages 1-16, March.
- Junyong Wu & Chen Shi & Meiyang Shao & Ran An & Xiaowen Zhu & Xing Huang & Rong Cai, 2019. "Reactive Power Optimization of a Distribution System Based on Scene Matching and Deep Belief Network," Energies, MDPI, vol. 12(17), pages 1-24, August.
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Linan Qu & Shujie Zhang & Hsiung-Cheng Lin & Ning Chen & Lingling Li, 2020. "Multiobjective Reactive Power Optimization of Renewable Energy Power Plants Based on Time-and-Space Grouping Method," Energies, MDPI, vol. 13(14), pages 1-15, July.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one; a rough sketch of this overlap heuristic follows the list below.
- Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
- Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
- Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
- Qi, Chunyang & Song, Chuanxue & Xiao, Feng & Song, Shixin, 2022. "Generalization ability of hybrid electric vehicle energy management strategy based on reinforcement learning method," Energy, Elsevier, vol. 250(C).
- Zhang, Wei & Wang, Jixin & Xu, Zhenyu & Shen, Yuying & Gao, Guangzong, 2022. "A generalized energy management framework for hybrid construction vehicles via model-based reinforcement learning," Energy, Elsevier, vol. 260(C).
- Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
- Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
- Li, Shuangqi & He, Hongwen & Li, Jianwei, 2019. "Big data driven lithium-ion battery modeling method based on SDAE-ELM algorithm and data pre-processing technology," Applied Energy, Elsevier, vol. 242(C), pages 1259-1273.
- Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
- Zhu, Jiaoyiling & Hu, Weihao & Xu, Xiao & Liu, Haoming & Pan, Li & Fan, Haoyang & Zhang, Zhenyuan & Chen, Zhe, 2022. "Optimal scheduling of a wind energy dominated distribution network via a deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 201(P1), pages 792-801.
- Zhang, Yijie & Ma, Tao & Elia Campana, Pietro & Yamaguchi, Yohei & Dai, Yanjun, 2020. "A techno-economic sizing method for grid-connected household photovoltaic battery systems," Applied Energy, Elsevier, vol. 269(C).
- Ahmad, Tanveer & Chen, Huanxin, 2019. "Deep learning for multi-scale smart energy forecasting," Energy, Elsevier, vol. 175(C), pages 98-112.
- Dong-Hui Ko & Jaekwan Chung & Kwang-Soo Lee & Jin-Soon Park & Jin-Hak Yi, 2019. "Current Policy and Technology for Tidal Current Energy in Korea," Energies, MDPI, vol. 12(9), pages 1-15, May.
- Zeyue Sun & Mohsen Eskandari & Chaoran Zheng & Ming Li, 2022. "Handling Computation Hardness and Time Complexity Issue of Battery Energy Storage Scheduling in Microgrids by Deep Reinforcement Learning," Energies, MDPI, vol. 16(1), pages 1-20, December.
- Chen, Zheng & Hu, Hengjie & Wu, Yitao & Zhang, Yuanjian & Li, Guang & Liu, Yonggang, 2020. "Stochastic model predictive control for energy management of power-split plug-in hybrid electric vehicles based on reinforcement learning," Energy, Elsevier, vol. 211(C).
- Kandasamy, Jeevitha & Ramachandran, Rajeswari & Veerasamy, Veerapandiyan & Irudayaraj, Andrew Xavier Raj, 2024. "Distributed leader-follower based adaptive consensus control for networked microgrids," Applied Energy, Elsevier, vol. 353(PA).
- Gökay Bayrak & Davut Ertekin & Hassan Haes Alhelou & Pierluigi Siano, 2021. "A Real-Time Energy Management System Design for a Developed PV-Based Distributed Generator Considering the Grid Code Requirements in Turkey," Energies, MDPI, vol. 14(20), pages 1-21, October.
- Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
- Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
- Fathy, Ahmed, 2023. "Bald eagle search optimizer-based energy management strategy for microgrid with renewable sources and electric vehicles," Applied Energy, Elsevier, vol. 334(C).
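The "most related items" list above is built from citation overlap: candidates are ranked by how many references they share with this item and how many citing items the two have in common. The following is a minimal sketch of that overlap heuristic, assuming hypothetical item handles and citation sets; it is an illustration only, not the CitEc/RePEc implementation.

```python
# Sketch of the "most related items" overlap heuristic described above:
# score each candidate by shared references (bibliographic coupling)
# plus shared citing items (co-citation). All identifiers are hypothetical.
from typing import Dict, Set, List


def relatedness(target_refs: Set[str], target_cited_by: Set[str],
                cand_refs: Set[str], cand_cited_by: Set[str]) -> int:
    """Count references in common plus citing items in common."""
    return len(target_refs & cand_refs) + len(target_cited_by & cand_cited_by)


def most_related(target: str,
                 refs: Dict[str, Set[str]],
                 cited_by: Dict[str, Set[str]],
                 top_n: int = 5) -> List[str]:
    """Rank all other items by citation overlap with the target item."""
    scores = {
        item: relatedness(refs[target], cited_by[target],
                          refs[item], cited_by[item])
        for item in refs if item != target
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]


if __name__ == "__main__":
    # Toy citation graph with made-up handles.
    refs = {"A": {"r1", "r2", "r3"}, "B": {"r2", "r3"}, "C": {"r9"}}
    cited_by = {"A": {"c1", "c2"}, "B": {"c2"}, "C": {"c3"}}
    print(most_related("A", refs, cited_by))  # ['B', 'C']
```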
More about this item
Keywords
energy Internet; convolutional neural network; decision optimization; deep reinforcement learning
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:12:y:2019:i:8:p:1556-:d:225630. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.