
Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning

Authors

  • Huang, Ruchen
  • He, Hongwen
  • Su, Qicong

Abstract

Deep reinforcement learning (DRL) is now a research focus for the energy management of fuel cell vehicles (FCVs) as a way to improve hydrogen utilization efficiency. However, because DRL-based energy management strategies (EMSs) must be retrained whenever the vehicle type changes, developing DRL-based EMSs for different FCVs is a laborious task. To address this, this article introduces transfer learning (TL) into DRL to design a novel deep transfer reinforcement learning (DTRL) method, and then proposes an intelligent transferable energy management framework between two different urban FCVs based on the designed DTRL method, so that well-trained EMSs can be reused. First, an enhanced soft actor-critic (SAC) algorithm integrating prioritized experience replay (PER) is formulated as the base DRL algorithm of this study. Then, an enhanced-SAC-based EMS for a light fuel cell hybrid electric vehicle (FCHEV) is pre-trained on massive real-world driving data. After that, the learned knowledge stored in the FCHEV's well-trained EMS is extracted and transferred into the EMS of a heavy-duty fuel cell hybrid electric bus (FCHEB). Finally, the FCHEB's EMS is fine-tuned in a stochastic environment to ensure adaptability to real driving conditions. Simulation results indicate that, compared to the state-of-the-art baseline EMS, the proposed DTRL-based EMS accelerates convergence by 91.55% and improves fuel economy by 6.78%. This article contributes to shortening the development cycle of DRL-based EMSs and to improving the utilization efficiency of hydrogen energy in the urban transport sector.
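The core of the framework described in the abstract is the reuse of learned knowledge: a policy pre-trained on the light FCHEV is transferred into the heavy-duty FCHEB's agent and then fine-tuned. As a rough illustration of that transfer step only, the following minimal PyTorch sketch copies the weights of a pre-trained actor network into a new agent and freezes its shared feature layers before fine-tuning. The network architecture, state/action dimensions, checkpoint name, and layer-freezing choice are all illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    # Hypothetical Gaussian-policy actor; the paper's actual SAC network is not
    # reproduced here -- this only sketches the mechanics of weight transfer.
    class Actor(nn.Module):
        def __init__(self, state_dim=4, action_dim=1, hidden=256):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.mean = nn.Linear(hidden, action_dim)     # mean of the stochastic policy
            self.log_std = nn.Linear(hidden, action_dim)  # log std of the stochastic policy

        def forward(self, state):
            h = self.backbone(state)
            return self.mean(h), self.log_std(h)

    # 1. Source policy, assumed pre-trained on the light FCHEV's driving data.
    source = Actor()
    # source.load_state_dict(torch.load("fchev_actor.pt"))  # hypothetical checkpoint

    # 2. Target policy for the FCHEB, sharing the same state/action interface.
    target = Actor()
    target.load_state_dict(source.state_dict())  # transfer the learned knowledge

    # 3. Freeze the shared feature extractor and fine-tune only the output heads,
    #    so the bus agent adapts to its own dynamics without relearning from scratch.
    for p in target.backbone.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in target.parameters() if p.requires_grad), lr=3e-4)

In the paper itself the fine-tuning stage runs the full enhanced-SAC update (with prioritized experience replay) in a stochastic driving environment; the layer-freezing shown here is just one common way to retain transferred knowledge during fine-tuning.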

Suggested Citation

  • Huang, Ruchen & He, Hongwen & Su, Qicong, 2024. "Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning," Applied Energy, Elsevier, vol. 363(C).
  • Handle: RePEc:eee:appene:v:363:y:2024:i:c:s030626192400463x
    DOI: 10.1016/j.apenergy.2024.123080

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S030626192400463X
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123080?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Zhou, Quan & Li, Yanfei & Zhao, Dezong & Li, Ji & Williams, Huw & Xu, Hongming & Yan, Fuwu, 2022. "Transferable representation modelling for real-time energy management of the plug-in hybrid vehicle based on k-fold fuzzy learning and Gaussian process regression," Applied Energy, Elsevier, vol. 305(C).
    2. Lin, Zewei & Wang, Peng & Ren, Songyan & Zhao, Daiqing, 2023. "Economic and environmental impacts of EVs promotion under the 2060 carbon neutrality target—A CGE based study in Shaanxi Province of China," Applied Energy, Elsevier, vol. 332(C).
    3. Shuo Feng & Haowei Sun & Xintao Yan & Haojie Zhu & Zhengxia Zou & Shengyin Shen & Henry X. Liu, 2023. "Dense reinforcement learning for safety validation of autonomous vehicles," Nature, Nature, vol. 615(7953), pages 620-627, March.
    4. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    5. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    6. Andersson, Öivind & Börjesson, Pål, 2021. "The greenhouse gas emissions of an electrified vehicle combined with renewable fuels: Life cycle assessment and policy implications," Applied Energy, Elsevier, vol. 289(C).
    7. Zhou, Jianhao & Liu, Jun & Xue, Yuan & Liao, Yuhui, 2022. "Total travel costs minimization strategy of a dual-stack fuel cell logistics truck enhanced with artificial potential field and deep reinforcement learning," Energy, Elsevier, vol. 239(PA).
    8. Peter R. Wurman & Samuel Barrett & Kenta Kawamoto & James MacGlashan & Kaushik Subramanian & Thomas J. Walsh & Roberto Capobianco & Alisa Devlic & Franziska Eckert & Florian Fuchs & Leilani Gilpin & P, 2022. "Outracing champion Gran Turismo drivers with deep reinforcement learning," Nature, Nature, vol. 602(7896), pages 223-228, February.
    9. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    10. Li, Yuecheng & He, Hongwen & Khajepour, Amir & Wang, Hong & Peng, Jiankun, 2019. "Energy management for a power-split hybrid electric bus via deep reinforcement learning with terrain information," Applied Energy, Elsevier, vol. 255(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    2. He, Hongwen & Meng, Xiangfei & Wang, Yong & Khajepour, Amir & An, Xiaowen & Wang, Renguang & Sun, Fengchun, 2024. "Deep reinforcement learning based energy management strategies for electrified vehicles: Recent advances and perspectives," Renewable and Sustainable Energy Reviews, Elsevier, vol. 192(C).
    3. Zhang, Hao & Liu, Shang & Lei, Nuo & Fan, Qinhao & Wang, Zhi, 2022. "Leveraging the benefits of ethanol-fueled advanced combustion and supervisory control optimization in hybrid biofuel-electric vehicles," Applied Energy, Elsevier, vol. 326(C).
    4. Zhang, Hao & Fan, Qinhao & Liu, Shang & Li, Shengbo Eben & Huang, Jin & Wang, Zhi, 2021. "Hierarchical energy management strategy for plug-in hybrid electric powertrain integrated with dual-mode combustion engine," Applied Energy, Elsevier, vol. 304(C).
    5. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    6. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    7. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    8. Zhang, Hao & Lei, Nuo & Wang, Zhi, 2024. "Ammonia-hydrogen propulsion system for carbon-free heavy-duty vehicles," Applied Energy, Elsevier, vol. 369(C).
    9. Jinming Xu & Yuan Lin, 2024. "Energy Management for Hybrid Electric Vehicles Using Safe Hybrid-Action Reinforcement Learning," Mathematics, MDPI, vol. 12(5), pages 1-20, February.
    10. Wu, Jie & Li, Dong, 2023. "Modeling and maximizing information diffusion over hypergraphs based on deep reinforcement learning," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 629(C).
    11. He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
    12. Wang, Yong & Wu, Yuankai & Tang, Yingjuan & Li, Qin & He, Hongwen, 2023. "Cooperative energy management and eco-driving of plug-in hybrid electric vehicle via multi-agent reinforcement learning," Applied Energy, Elsevier, vol. 332(C).
    13. Peng, Jiankun & Shen, Yang & Wu, ChangCheng & Wang, Chunhai & Yi, Fengyan & Ma, Chunye, 2023. "Research on energy-saving driving control of hydrogen fuel bus based on deep reinforcement learning in freeway ramp weaving area," Energy, Elsevier, vol. 285(C).
    14. Tian Zhu & Merry H. Ma, 2022. "Deriving the Optimal Strategy for the Two Dice Pig Game via Reinforcement Learning," Stats, MDPI, vol. 5(3), pages 1-14, August.
    15. Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.
    16. Pedro Afonso Fernandes, 2024. "Forecasting with Neuro-Dynamic Programming," Papers 2404.03737, arXiv.org.
    17. Desreveaux, A. & Bouscayrol, A. & Trigui, R. & Hittinger, E. & Castex, E. & Sirbu, G.M., 2023. "Accurate energy consumption for comparison of climate change impact of thermal and electric vehicles," Energy, Elsevier, vol. 268(C).
    18. Nathan Companez & Aldeida Aleti, 2016. "Can Monte-Carlo Tree Search learn to sacrifice?," Journal of Heuristics, Springer, vol. 22(6), pages 783-813, December.
    19. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    20. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:363:y:2024:i:c:s030626192400463x. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.