Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework
DOI: 10.1016/j.apenergy.2023.121358
References listed on IDEAS
- Shuo Feng & Haowei Sun & Xintao Yan & Haojie Zhu & Zhengxia Zou & Shengyin Shen & Henry X. Liu, 2023. "Dense reinforcement learning for safety validation of autonomous vehicles," Nature, Nature, vol. 615(7953), pages 620-627, March.
- Chen, Huicui & Pei, Pucheng & Song, Mancun, 2015. "Lifetime prediction and the economic lifetime of Proton Exchange Membrane fuel cells," Applied Energy, Elsevier, vol. 142(C), pages 154-163.
- Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
- Zhou, Jianhao & Liu, Jun & Xue, Yuan & Liao, Yuhui, 2022. "Total travel costs minimization strategy of a dual-stack fuel cell logistics truck enhanced with artificial potential field and deep reinforcement learning," Energy, Elsevier, vol. 239(PA).
- Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
- Wang, Hao & He, Hongwen & Bai, Yunfei & Yue, Hongwei, 2022. "Parameterized deep Q-network based energy management with balanced energy economy and battery life for hybrid electric vehicles," Applied Energy, Elsevier, vol. 320(C).
- Suri, Girish & Onori, Simona, 2016. "A control-oriented cycle-life model for hybrid electric vehicle lithium-ion batteries," Energy, Elsevier, vol. 96(C), pages 644-653.
- Lee, Heeyun & Kim, Kyunghyun & Kim, Namwook & Cha, Suk Won, 2022. "Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning," Applied Energy, Elsevier, vol. 313(C).
- Julian Schrittwieser & Ioannis Antonoglou & Thomas Hubert & Karen Simonyan & Laurent Sifre & Simon Schmitt & Arthur Guez & Edward Lockhart & Demis Hassabis & Thore Graepel & Timothy Lillicrap & David , 2020. "Mastering Atari, Go, chess and shogi by planning with a learned model," Nature, Nature, vol. 588(7839), pages 604-609, December.
- Jonas Degrave & Federico Felici & Jonas Buchli & Michael Neunert & Brendan Tracey & Francesco Carpanese & Timo Ewalds & Roland Hafner & Abbas Abdolmaleki & Diego de las Casas & Craig Donner & Leslie F, 2022. "Magnetic control of tokamak plasmas through deep reinforcement learning," Nature, Nature, vol. 602(7897), pages 414-419, February.
- Peter R. Wurman & Samuel Barrett & Kenta Kawamoto & James MacGlashan & Kaushik Subramanian & Thomas J. Walsh & Roberto Capobianco & Alisa Devlic & Franziska Eckert & Florian Fuchs & Leilani Gilpin & P, 2022. "Outracing champion Gran Turismo drivers with deep reinforcement learning," Nature, Nature, vol. 602(7896), pages 223-228, February.
- Tang, Xiaolin & Zhou, Haitao & Wang, Feng & Wang, Weida & Lin, Xianke, 2022. "Longevity-conscious energy management strategy of fuel cell hybrid electric Vehicle Based on deep reinforcement learning," Energy, Elsevier, vol. 238(PA).
- David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
- Di Giorgio, Paolo & Di Ilio, Giovanni & Jannelli, Elio & Conte, Fiorentino Valerio, 2022. "Innovative battery thermal management system based on hydrogen storage in metal hydrides for fuel cell hybrid electric vehicles," Applied Energy, Elsevier, vol. 315(C).
- Quan, Shengwei & Wang, Ya-Xiong & Xiao, Xuelian & He, Hongwen & Sun, Fengchun, 2021. "Real-time energy management for fuel cell electric vehicle using speed prediction-based model predictive control considering performance degradation," Applied Energy, Elsevier, vol. 304(C).
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Pu, Yuchen & Li, Qi & Zou, Xueli & Li, Ruirui & Li, Luoyi & Chen, Weirong & Liu, Hong, 2021. "Optimal sizing for an integrated energy system considering degradation and seasonal hydrogen storage," Applied Energy, Elsevier, vol. 302(C).
Citations
Cited by:
- Peng, Jiankun & Shen, Yang & Wu, ChangCheng & Wang, Chunhai & Yi, Fengyan & Ma, Chunye, 2023. "Research on energy-saving driving control of hydrogen fuel bus based on deep reinforcement learning in freeway ramp weaving area," Energy, Elsevier, vol. 285(C).
- Hussain, Shahid & Irshad, Reyazur Rashid & Pallonetto, Fabiano & Hussain, Ihtisham & Hussain, Zakir & Tahir, Muhammad & Abimannan, Satheesh & Shukla, Saurabh & Yousif, Adil & Kim, Yun-Su & El-Sayed, H, 2023. "Hybrid coordination scheme based on fuzzy inference mechanism for residential charging of electric vehicles," Applied Energy, Elsevier, vol. 352(C).
- Huang, Ruchen & He, Hongwen & Su, Qicong, 2024. "Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning," Applied Energy, Elsevier, vol. 363(C).
- He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- He, Hongwen & Meng, Xiangfei & Wang, Yong & Khajepour, Amir & An, Xiaowen & Wang, Renguang & Sun, Fengchun, 2024. "Deep reinforcement learning based energy management strategies for electrified vehicles: Recent advances and perspectives," Renewable and Sustainable Energy Reviews, Elsevier, vol. 192(C).
- Huang, Ruchen & He, Hongwen & Su, Qicong, 2024. "Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning," Applied Energy, Elsevier, vol. 363(C).
- Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
- Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2021. "Computational Performance of Deep Reinforcement Learning to find Nash Equilibria," Papers 2104.12895, arXiv.org.
- Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
- Ren, Xiaoxia & Ye, Jinze & Xie, Liping & Lin, Xinyou, 2024. "Battery longevity-conscious energy management predictive control strategy optimized by using deep reinforcement learning algorithm for a fuel cell hybrid electric vehicle," Energy, Elsevier, vol. 286(C).
- Wu, Jie & Li, Dong, 2023. "Modeling and maximizing information diffusion over hypergraphs based on deep reinforcement learning," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 629(C).
- De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
- Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
- Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
- Wang, Yong & Wu, Yuankai & Tang, Yingjuan & Li, Qin & He, Hongwen, 2023. "Cooperative energy management and eco-driving of plug-in hybrid electric vehicle via multi-agent reinforcement learning," Applied Energy, Elsevier, vol. 332(C).
- Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
- Sumitkumar, Rathor & Al-Sumaiti, Ameena Saad, 2024. "Shared autonomous electric vehicle: Towards social economy of energy and mobility from power-transportation nexus perspective," Renewable and Sustainable Energy Reviews, Elsevier, vol. 197(C).
- Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
- Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
- Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Zhang, Tianhao & Dong, Zhe & Huang, Xiaojin, 2024. "Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning," Energy, Elsevier, vol. 286(C).
- Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
- Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
More about this item
Keywords
Fuel cell hybrid electric bus; Energy management strategy; Distributed deep reinforcement learning; Asynchronous advantage actor-critic (A3C); Multi-process parallel computation
Handle: RePEc:eee:appene:v:346:y:2023:i:c:s0306261923007225