IDEAS home Printed from https://ideas.repec.org/a/gam/jeners/v13y2020i23p6255-d452210.html

Deep Reinforcement Learning Based Optimal Route and Charging Station Selection

Author

Listed:
  • Ki-Beom Lee

    (Division of Electronic and Information, Department of Computer Engineering, Jeonbuk National University, Jeonju 54896, Korea)

  • Mohamed A. Ahmed

    (Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
    Department of Communications and Electronics, Higher Institute of Engineering & Technology–King Marriott, Alexandria 23713, Egypt)

  • Dong-Ki Kang

    (Division of Electronic and Information, Department of Computer Engineering, Jeonbuk National University, Jeonju 54896, Korea)

  • Young-Chon Kim

    (Division of Electronic and Information, Department of Computer Engineering, Jeonbuk National University, Jeonju 54896, Korea)

Abstract

This paper proposes an optimal route and charging station selection (RCS) algorithm based on model-free deep reinforcement learning (DRL) to overcome the uncertainty of traffic conditions and dynamically arriving charging requests. The proposed DRL-based RCS algorithm aims to minimize the total travel time of electric vehicle (EV) charging requests from origin to destination through the selection of the optimal route and charging station, considering dynamically changing traffic conditions and unknown future requests. We formulate the RCS problem as a Markov decision process with unknown transition probabilities, and adopt a deep Q-network (DQN) with function approximation to find the optimal electric vehicle charging station (EVCS) selection policy. To obtain the feature states for each EVCS, we define a traffic preprocessing module, a charging preprocessing module, and a feature extraction module. The proposed DRL-based RCS algorithm is compared with conventional strategies such as minimum distance, minimum travel time, and minimum waiting time. Performance is evaluated in terms of travel time, waiting time, charging time, driving time, and distance under various distributions and numbers of EV charging requests.
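The selection loop described in the abstract can be sketched in miniature. The snippet below is a hypothetical, heavily simplified illustration, not the paper's implementation: each action is "route to candidate station k", the per-station features stand in for the outputs of the paper's traffic/charging/feature-extraction modules, the reward is the negative travel time, and a linear Q-function replaces the deep Q-network. All class and variable names here are invented for illustration.

```python
import random

class LinearQSelector:
    """Toy stand-in for the paper's DQN-based EVCS selection policy."""

    def __init__(self, n_features, lr=0.01, gamma=0.9, epsilon=0.1):
        self.w = [0.0] * n_features   # one weight per state feature
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon

    def q_value(self, features):
        # linear function approximation: Q(s, a) = w . phi(s, a)
        return sum(w * f for w, f in zip(self.w, features))

    def select_station(self, station_features):
        # epsilon-greedy choice over the candidate charging stations
        if random.random() < self.epsilon:
            return random.randrange(len(station_features))
        qs = [self.q_value(f) for f in station_features]
        return max(range(len(qs)), key=qs.__getitem__)

    def update(self, features, reward, next_station_features):
        # one TD(0) step: target = r + gamma * max_a' Q(s', a')
        target = reward
        if next_station_features:
            target += self.gamma * max(self.q_value(f)
                                       for f in next_station_features)
        td_error = target - self.q_value(features)
        self.w = [w + self.lr * td_error * f
                  for w, f in zip(self.w, features)]

# Toy usage: 3 candidate stations, features = (distance, est. waiting time);
# reward is the negative of their sum, i.e. shorter trips score higher.
random.seed(0)
agent = LinearQSelector(n_features=2)
stations = [(2.0, 5.0), (4.0, 1.0), (1.0, 8.0)]
choice = agent.select_station(stations)
agent.update(stations[choice],
             reward=-sum(stations[choice]),
             next_station_features=[])
```

The paper's actual method replaces the linear weights with a deep network and derives the features from live traffic and charging-request data; this sketch only shows the shape of the epsilon-greedy selection and temporal-difference update.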

Suggested Citation

  • Ki-Beom Lee & Mohamed A. Ahmed & Dong-Ki Kang & Young-Chon Kim, 2020. "Deep Reinforcement Learning Based Optimal Route and Charging Station Selection," Energies, MDPI, vol. 13(23), pages 1-22, November.
  • Handle: RePEc:gam:jeners:v:13:y:2020:i:23:p:6255-:d:452210
    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/13/23/6255/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/13/23/6255/
    Download Restriction: no

    References listed on IDEAS

    1. Zhang, Jing & Yan, Jie & Liu, Yongqian & Zhang, Haoran & Lv, Guoliang, 2020. "Daily electric vehicle charging load profiles considering demographics of vehicle users," Applied Energy, Elsevier, vol. 274(C).
    2. Sunyong Kim & Hyuk Lim, 2018. "Reinforcement Learning Based Energy Management Algorithm for Smart Energy Buildings," Energies, MDPI, vol. 11(8), pages 1-19, August.
    3. Felipe Condon Silva & Mohamed A. Ahmed & José Manuel Martínez & Young-Chon Kim, 2019. "Design and Implementation of a Blockchain-Based Energy Trading Platform for Electric Vehicles in Smart Campus Parking Lots," Energies, MDPI, vol. 12(24), pages 1-25, December.
    4. Luo, Lizi & Gu, Wei & Zhou, Suyang & Huang, He & Gao, Song & Han, Jun & Wu, Zhi & Dou, Xiaobo, 2018. "Optimal planning of electric vehicle charging stations comprising multi-types of charging facilities," Applied Energy, Elsevier, vol. 226(C), pages 1087-1099.
    5. Zhang, Xu & Peng, Linyu & Cao, Yue & Liu, Shuohan & Zhou, Huan & Huang, Keli, 2020. "Towards holistic charging management for urban electric taxi via a hybrid deployment of battery charging and swap stations," Renewable Energy, Elsevier, vol. 155(C), pages 703-716.
    6. Aritra Ghosh, 2020. "Possibilities and Challenges for the Inclusion of the Electric Vehicle (EV) to Reduce the Carbon Footprint in the Transport Sector: A Review," Energies, MDPI, vol. 13(10), pages 1-22, May.
    7. Wangyi Mo & Chao Yang & Xin Chen & Kangjie Lin & Shuaiqi Duan, 2019. "Optimal Charging Navigation Strategy Design for Rapid Charging Electric Vehicles," Energies, MDPI, vol. 12(6), pages 1-18, March.
    8. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
    Cited by:

    1. Yu Feng & Xiaochun Lu, 2021. "Construction Planning and Operation of Battery Swapping Stations for Electric Vehicles: A Literature Review," Energies, MDPI, vol. 14(24), pages 1-19, December.
    2. Walied Alharbi & Abdullah S. Bin Humayd & Praveen R. P. & Ahmed Bilal Awan & Anees V. P., 2022. "Optimal Scheduling of Battery-Swapping Station Loads for Capacity Enhancement of a Distribution System," Energies, MDPI, vol. 16(1), pages 1-12, December.
    3. Ahmed M. Abed & Ali AlArjani, 2022. "The Neural Network Classifier Works Efficiently on Searching in DQN Using the Autonomous Internet of Things Hybridized by the Metaheuristic Techniques to Reduce the EVs’ Service Scheduling Time," Energies, MDPI, vol. 15(19), pages 1-25, September.
    4. Ruisheng Wang & Zhong Chen & Qiang Xing & Ziqi Zhang & Tian Zhang, 2022. "A Modified Rainbow-Based Deep Reinforcement Learning Method for Optimal Scheduling of Charging Station," Sustainability, MDPI, vol. 14(3), pages 1-14, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ahmed M. Abed & Ali AlArjani, 2022. "The Neural Network Classifier Works Efficiently on Searching in DQN Using the Autonomous Internet of Things Hybridized by the Metaheuristic Techniques to Reduce the EVs’ Service Scheduling Time," Energies, MDPI, vol. 15(19), pages 1-25, September.
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    4. Shubham Mishra & Shrey Verma & Subhankar Chowdhury & Ambar Gaur & Subhashree Mohapatra & Gaurav Dwivedi & Puneet Verma, 2021. "A Comprehensive Review on Developments in Electric Vehicle Charging Station Infrastructure and Present Scenario of India," Sustainability, MDPI, vol. 13(4), pages 1-20, February.
    5. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    6. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    7. Ritu Kandari & Neeraj Neeraj & Alexander Micallef, 2022. "Review on Recent Strategies for Integrating Energy Storage Systems in Microgrids," Energies, MDPI, vol. 16(1), pages 1-24, December.
    8. Khalil Bachiri & Ali Yahyaouy & Hamid Gualous & Maria Malek & Younes Bennani & Philippe Makany & Nicoleta Rogovschi, 2023. "Multi-Agent DDPG Based Electric Vehicles Charging Station Recommendation," Energies, MDPI, vol. 16(16), pages 1-17, August.
    9. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    10. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    11. Hassan S. Hayajneh & Xuewei Zhang, 2020. "Logistics Design for Mobile Battery Energy Storage Systems," Energies, MDPI, vol. 13(5), pages 1-14, March.
    12. Harri Aaltonen & Seppo Sierla & Rakshith Subramanya & Valeriy Vyatkin, 2021. "A Simulation Environment for Training a Reinforcement Learning Agent Trading a Battery Storage," Energies, MDPI, vol. 14(17), pages 1-20, September.
    13. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    14. Alqahtani, Mohammed & Hu, Mengqi, 2022. "Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning," Energy, Elsevier, vol. 244(PA).
    15. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    16. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    17. Md. Mosaraf Hossain Khan & Amran Hossain & Aasim Ullah & Molla Shahadat Hossain Lipu & S. M. Shahnewaz Siddiquee & M. Shafiul Alam & Taskin Jamal & Hafiz Ahmed, 2021. "Integration of Large-Scale Electric Vehicles into Utility Grid: An Efficient Approach for Impact Analysis and Power Quality Assessment," Sustainability, MDPI, vol. 13(19), pages 1-18, October.
    18. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    19. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    20. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:13:y:2020:i:23:p:6255-:d:452210. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.