Reinforcement learning for electric vehicle applications in power systems: A critical review
DOI: 10.1016/j.rser.2022.113052
References listed on IDEAS
- Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
- Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
- Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
- DeForest, Nicholas & MacDonald, Jason S. & Black, Douglas R., 2018. "Day ahead optimization of an electric vehicle fleet providing ancillary services in the Los Angeles Air Force Base vehicle-to-grid demonstration," Applied Energy, Elsevier, vol. 210(C), pages 987-1001.
- Zhou, Yue & Wu, Jianzhong & Song, Guanyu & Long, Chao, 2020. "Framework design and optimal bidding strategy for ancillary service provision from a peer-to-peer energy trading community," Applied Energy, Elsevier, vol. 278(C).
- Gonzalez Venegas, Felipe & Petit, Marc & Perez, Yannick, 2021. "Active integration of electric vehicles into distribution grids: Barriers and frameworks for flexibility services," Renewable and Sustainable Energy Reviews, Elsevier, vol. 145(C).
- Dowling, Paul, 2013. "The impact of climate change on the European energy system," Energy Policy, Elsevier, vol. 60(C), pages 406-417.
- Bellocchi, Sara & Klöckner, Kai & Manno, Michele & Noussan, Michel & Vellini, Michela, 2019. "On the role of electric vehicles towards low-carbon energy systems: Italy and Germany in comparison," Applied Energy, Elsevier, vol. 255(C).
- Dorokhova, Marina & Martinson, Yann & Ballif, Christophe & Wyrsch, Nicolas, 2021. "Deep reinforcement learning control of electric vehicle charging in the presence of photovoltaic generation," Applied Energy, Elsevier, vol. 301(C).
- Shaukat, N. & Khan, B. & Ali, S.M. & Mehmood, C.A. & Khan, J. & Farid, U. & Majid, M. & Anwar, S.M. & Jawad, M. & Ullah, Z., 2018. "A survey on electric vehicle transportation within smart grid system," Renewable and Sustainable Energy Reviews, Elsevier, vol. 81(P1), pages 1329-1349.
- Ruisheng Wang & Zhong Chen & Qiang Xing & Ziqi Zhang & Tian Zhang, 2022. "A Modified Rainbow-Based Deep Reinforcement Learning Method for Optimal Scheduling of Charging Station," Sustainability, MDPI, vol. 14(3), pages 1-14, February.
- Jiang, C.X. & Jing, Z.X. & Cui, X.R. & Ji, T.Y. & Wu, Q.H., 2018. "Multiple agents and reinforcement learning for modelling charging loads of electric taxis," Applied Energy, Elsevier, vol. 222(C), pages 158-168.
- Peng, Minghong & Liu, Lian & Jiang, Chuanwen, 2012. "A review on the economic dispatch and risk management of the large-scale plug-in electric vehicles (PHEVs)-penetrated power systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 16(3), pages 1508-1515.
- Hussain, Akhtar & Bui, Van-Hai & Kim, Hak-Man, 2019. "Microgrids as a resilience resource and strategies used by microgrids for enhancing resilience," Applied Energy, Elsevier, vol. 240(C), pages 56-72.
- Wang, Xue-Chao & Klemeš, Jiří Jaromír & Dong, Xiaobin & Fan, Weiguo & Xu, Zihan & Wang, Yutao & Varbanov, Petar Sabev, 2019. "Air pollution terrain nexus: A review considering energy generation and consumption," Renewable and Sustainable Energy Reviews, Elsevier, vol. 105(C), pages 71-85.
- Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Ruan, Guangchun & Wu, Jiahan & Zhong, Haiwang & Xia, Qing & Xie, Le, 2021. "Quantitative assessment of U.S. bulk power systems and market operations during the COVID-19 pandemic," Applied Energy, Elsevier, vol. 286(C).
- Wang, Y. & Rousis, A. Oulis & Strbac, G., 2022. "Resilience-driven optimal sizing and pre-positioning of mobile energy storage systems in decentralized networked microgrids," Applied Energy, Elsevier, vol. 305(C).
- Shang, Wen-Long & Chen, Jinyu & Bi, Huibo & Sui, Yi & Chen, Yanyan & Yu, Haitao, 2021. "Impacts of COVID-19 pandemic on user behaviors and environmental benefits of bike sharing: A big-data analysis," Applied Energy, Elsevier, vol. 285(C).
- Lopion, Peter & Markewitz, Peter & Robinius, Martin & Stolten, Detlef, 2018. "A review of current challenges and trends in energy systems modeling," Renewable and Sustainable Energy Reviews, Elsevier, vol. 96(C), pages 156-166.
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
- Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
- Jaehyun Lee & Eunjung Lee & Jinho Kim, 2020. "Electric Vehicle Charging and Discharging Algorithm Based on Reinforcement Learning with Data-Driven Approach in Dynamic Pricing Scheme," Energies, MDPI, vol. 13(8), pages 1-18, April.
- Bhatti, Ghanishtha & Mohan, Harshit & Raja Singh, R., 2021. "Towards the future of smart electric vehicles: Digital twin technology," Renewable and Sustainable Energy Reviews, Elsevier, vol. 141(C).
- Cheng Wang & Zhou Gao & Peng Yang & Zhenpo Wang & Zhiheng Li, 2021. "Electric Vehicle Charging Facility Planning Based on Flow Demand—A Case Study," Sustainability, MDPI, vol. 13(9), pages 1-23, April.
- Tuchnitz, Felix & Ebell, Niklas & Schlund, Jonas & Pruckner, Marco, 2021. "Development and Evaluation of a Smart Charging Strategy for an Electric Vehicle Fleet Based on Reinforcement Learning," Applied Energy, Elsevier, vol. 285(C).
- Alqahtani, Mohammed & Hu, Mengqi, 2022. "Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning," Energy, Elsevier, vol. 244(PA).
- Lee, Sangyoon & Choi, Dae-Hyun, 2021. "Dynamic pricing and energy management for profit maximization in multiple smart electric vehicle charging stations: A privacy-preserving deep reinforcement learning approach," Applied Energy, Elsevier, vol. 304(C).
- Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
- Balali, Yasaman & Stegen, Sascha, 2021. "Review of energy storage systems for vehicles based on technology, environmental impacts, and costs," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
- Yang, Zhile & Li, Kang & Foley, Aoife, 2015. "Computational scheduling methods for integrating plug-in electric vehicles with power systems: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 51(C), pages 396-416.
- Wang, Yi & Rousis, Anastasios Oulis & Strbac, Goran, 2020. "On microgrids and resilience: A comprehensive review on modeling and operational strategies," Renewable and Sustainable Energy Reviews, Elsevier, vol. 134(C).
Citations
Citations are extracted by the CitEc Project.
Cited by:
- Abid, Md. Shadman & Apon, Hasan Jamil & Hossain, Salman & Ahmed, Ashik & Ahshan, Razzaqul & Lipu, M.S. Hossain, 2024. "A novel multi-objective optimization based multi-agent deep reinforcement learning approach for microgrid resources planning," Applied Energy, Elsevier, vol. 353(PA).
- Güven, Aykut Fatih, 2024. "Integrating electric vehicles into hybrid microgrids: A stochastic approach to future-ready renewable energy solutions and management," Energy, Elsevier, vol. 303(C).
- Wang, Yi & Qiu, Dawei & He, Yinglong & Zhou, Quan & Strbac, Goran, 2023. "Multi-agent reinforcement learning for electric vehicle decarbonized routing and scheduling," Energy, Elsevier, vol. 284(C).
- Feng, Zhiyan & Zhang, Qingang & Zhang, Yiming & Fei, Liangyu & Jiang, Fei & Zhao, Shengdun, 2024. "Practicability analysis of online deep reinforcement learning towards energy management strategy of 4WD-BEVs driven by dual-motor in-wheel motors," Energy, Elsevier, vol. 290(C).
- Zhen Huang & Xuechun Xiao & Yuan Gao & Yonghong Xia & Tomislav Dragičević & Pat Wheeler, 2023. "Emerging Information Technologies for the Energy Management of Onboard Microgrids in Transportation Applications," Energies, MDPI, vol. 16(17), pages 1-26, August.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
- Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
- Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
- Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
- Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
- Fescioglu-Unver, Nilgun & Yıldız Aktaş, Melike, 2023. "Electric vehicle charging service operations: A review of machine learning applications for infrastructure planning, control, pricing and routing," Renewable and Sustainable Energy Reviews, Elsevier, vol. 188(C).
- Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
- Qiu, Dawei & Wang, Yi & Zhang, Tingqi & Sun, Mingyang & Strbac, Goran, 2023. "Hierarchical multi-agent reinforcement learning for repair crews dispatch control towards multi-energy microgrid resilience," Applied Energy, Elsevier, vol. 336(C).
- Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
- Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
- Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
- Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
- Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
- Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
- Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
- Paudel, Diwas & Das, Tapas K., 2023. "A deep reinforcement learning approach for power management of battery-assisted fast-charging EV hubs participating in day-ahead and real-time electricity markets," Energy, Elsevier, vol. 283(C).
- Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
- Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
- Ahmad Almaghrebi & Fares Aljuheshi & Mostafa Rafaie & Kevin James & Mahmoud Alahmad, 2020. "Data-Driven Charging Demand Prediction at Public Charging Stations Using Supervised Machine Learning Regression Methods," Energies, MDPI, vol. 13(16), pages 1-21, August.
More about this item
Keywords
Electric vehicles; Vehicle-to-grid; Reinforcement learning; Power systems
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:rensus:v:173:y:2023:i:c:s1364032122009339. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600126/description#description.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.