Printed from https://ideas.repec.org/a/eee/energy/v307y2024ics0360544224024617.html

A unified benchmark for deep reinforcement learning-based energy management: Novel training ideas with the unweighted reward

Author

Listed:
  • Chen, Jiaxin
  • Tang, Xiaolin
  • Yang, Kai

Abstract

Deep reinforcement learning is a powerful tool for the intelligent control of hybrid power systems, yet learning-based strategies still suffer from shortcomings that call for practical solutions. First, a public, reliable benchmark is needed for hybrid powertrain models and for the optimization results of energy management strategies. To this end, two Python-based standard deep reinforcement learning agents and four Simulink-based hybrid powertrains are combined into a co-simulation training framework. Second, a detailed analysis of range, magnitude, and importance shows that the optimization terms in traditional reward functions can mislead the agent during training and require cumbersome weight tuning. Accordingly, this paper proposes a novel training idea that combines rule-based engine start-stop with an unweighted reward tailored to optimizing engine efficiency and facilitating training progress. Finally, a hardware-in-the-loop test is performed with a P2 hybrid electric vehicle as the target. The two deep reinforcement learning-based energy management strategies achieve fuel economies of 6.537 L/100 km and 6.330 L/100 km, respectively, and their more efficient and reasonable control sequences maintain proper engine operating states and battery state of charge.
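The contrast the abstract draws between weighted and unweighted rewards can be illustrated with a minimal sketch. This is not the authors' code: the function names, term forms, and default weights below are illustrative assumptions about the kind of reward shaping being described.

```python
# Hedged sketch: a conventional multi-term weighted reward versus an
# unweighted reward gated by rule-based engine start-stop, in the
# spirit of the approach the abstract describes. All names and
# numeric values here are hypothetical.

def weighted_reward(fuel_rate, soc, soc_ref=0.6, w_fuel=1.0, w_soc=50.0):
    """Traditional form: optimization terms with different ranges and
    magnitudes are blended via hand-tuned weights, which (per the
    paper's analysis) can mislead the agent and requires cumbersome
    weight tuning."""
    return -(w_fuel * fuel_rate + w_soc * (soc - soc_ref) ** 2)

def unweighted_reward(engine_efficiency, engine_on):
    """Unweighted form: a rule-based start-stop logic decides whether
    the engine runs; while it does, the reward is simply the engine's
    current efficiency (assumed normalized to [0, 1]), so no weights
    need tuning."""
    return engine_efficiency if engine_on else 0.0
```

The design point is that the unweighted reward keeps a single, well-scaled optimization signal, while the start-stop rule handles the discrete on/off decision outside the learned policy.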

Suggested Citation

  • Chen, Jiaxin & Tang, Xiaolin & Yang, Kai, 2024. "A unified benchmark for deep reinforcement learning-based energy management: Novel training ideas with the unweighted reward," Energy, Elsevier, vol. 307(C).
  • Handle: RePEc:eee:energy:v:307:y:2024:i:c:s0360544224024617
    DOI: 10.1016/j.energy.2024.132687

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544224024617
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2024.132687?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Shuo Feng & Haowei Sun & Xintao Yan & Haojie Zhu & Zhengxia Zou & Shengyin Shen & Henry X. Liu, 2023. "Dense reinforcement learning for safety validation of autonomous vehicles," Nature, Nature, vol. 615(7953), pages 620-627, March.
    2. Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
    3. Peng, Jiankun & He, Hongwen & Xiong, Rui, 2017. "Rule based energy management strategy for a series–parallel plug-in hybrid electric bus optimized by dynamic programming," Applied Energy, Elsevier, vol. 185(P2), pages 1633-1643.
    4. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    5. Han, Jie & Liu, Wenxue & Zheng, Yusheng & Khalatbarisoltani, Arash & Yang, Yalian & Hu, Xiaosong, 2023. "Health-conscious predictive energy management strategy with hybrid speed predictor for plug-in hybrid electric vehicles: Investigating the impact of battery electro-thermal-aging models," Applied Energy, Elsevier, vol. 352(C).
    6. Wang, Hanchen & Ye, Yiming & Zhang, Jiangfeng & Xu, Bin, 2023. "A comparative study of 13 deep reinforcement learning based energy management methods for a hybrid electric vehicle," Energy, Elsevier, vol. 266(C).
    7. Peter R. Wurman & Samuel Barrett & Kenta Kawamoto & James MacGlashan & Kaushik Subramanian & Thomas J. Walsh & Roberto Capobianco & Alisa Devlic & Franziska Eckert & Florian Fuchs & Leilani Gilpin & P, 2022. "Outracing champion Gran Turismo drivers with deep reinforcement learning," Nature, Nature, vol. 602(7896), pages 223-228, February.
    8. Tran, Dai-Duong & Vafaeipour, Majid & El Baghdadi, Mohamed & Barrero, Ricardo & Van Mierlo, Joeri & Hegazy, Omar, 2020. "Thorough state-of-the-art analysis of electric and hybrid vehicle powertrains: Topologies and integrated energy management strategies," Renewable and Sustainable Energy Reviews, Elsevier, vol. 119(C).
    9. Liu, Zongwei & Hao, Han & Cheng, Xiang & Zhao, Fuquan, 2018. "Critical issues of energy efficient and new energy vehicles development in China," Energy Policy, Elsevier, vol. 115(C), pages 92-97.
    10. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    11. Li, Yuecheng & He, Hongwen & Khajepour, Amir & Wang, Hong & Peng, Jiankun, 2019. "Energy management for a power-split hybrid electric bus via deep reinforcement learning with terrain information," Applied Energy, Elsevier, vol. 255(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. He, Hongwen & Meng, Xiangfei & Wang, Yong & Khajepour, Amir & An, Xiaowen & Wang, Renguang & Sun, Fengchun, 2024. "Deep reinforcement learning based energy management strategies for electrified vehicles: Recent advances and perspectives," Renewable and Sustainable Energy Reviews, Elsevier, vol. 192(C).
    2. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    3. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    4. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    5. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    6. Huang, Ruchen & He, Hongwen & Su, Qicong & Härtl, Martin & Jaensch, Malte, 2024. "Enabling cross-type full-knowledge transferable energy management for hybrid electric vehicles via deep transfer reinforcement learning," Energy, Elsevier, vol. 305(C).
    7. Diming Lou & Yinghua Zhao & Liang Fang & Yuanzhi Tang & Caihua Zhuang, 2022. "Encoder–Decoder-Based Velocity Prediction Modelling for Passenger Vehicles Coupled with Driving Pattern Recognition," Sustainability, MDPI, vol. 14(17), pages 1-21, August.
    8. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    9. Pang, Kexin & Zhou, Jian & Tsianikas, Stamatis & Coit, David W. & Ma, Yizhong, 2024. "Long-term microgrid expansion planning with resilience and environmental benefits using deep reinforcement learning," Renewable and Sustainable Energy Reviews, Elsevier, vol. 191(C).
    10. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    11. Huang, Ruchen & He, Hongwen & Zhao, Xuyang & Wang, Yunlong & Li, Menglin, 2022. "Battery health-aware and naturalistic data-driven energy management for hybrid electric bus based on TD3 deep reinforcement learning algorithm," Applied Energy, Elsevier, vol. 321(C).
    12. Tang, Wenbin & Wang, Yaqian & Jiao, Xiaohong & Ren, Lina, 2023. "Hierarchical energy management strategy based on adaptive dynamic programming for hybrid electric vehicles in car-following scenarios," Energy, Elsevier, vol. 265(C).
    13. Li, Cheng & Xu, Xiangyang & Zhu, Helong & Gan, Jiongpeng & Chen, Zhige & Tang, Xiaolin, 2024. "Research on car-following control and energy management strategy of hybrid electric vehicles in connected scene," Energy, Elsevier, vol. 293(C).
    14. Feng, Zhiyan & Zhang, Qingang & Zhang, Yiming & Fei, Liangyu & Jiang, Fei & Zhao, Shengdun, 2024. "Practicability analysis of online deep reinforcement learning towards energy management strategy of 4WD-BEVs driven by dual-motor in-wheel motors," Energy, Elsevier, vol. 290(C).
    15. Yaqian Wang & Xiaohong Jiao, 2022. "Dual Heuristic Dynamic Programming Based Energy Management Control for Hybrid Electric Vehicles," Energies, MDPI, vol. 15(9), pages 1-19, April.
    16. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    17. Iqbal, Najam & Wang, Hu & Zheng, Zunqing & Yao, Mingfa, 2024. "Reinforcement learning-based heuristic planning for optimized energy management in power-split hybrid electric heavy duty vehicles," Energy, Elsevier, vol. 302(C).
    18. Huang, Ying & Wang, Shilong & Li, Ke & Fan, Zhuwei & Xie, Haiming & Jiang, Fachao, 2023. "Multi-parameter adaptive online energy management strategy for concrete truck mixers with a novel hybrid powertrain considering vehicle mass," Energy, Elsevier, vol. 277(C).
    19. Alessia Musa & Pier Giuseppe Anselma & Giovanni Belingardi & Daniela Anna Misul, 2023. "Energy Management in Hybrid Electric Vehicles: A Q-Learning Solution for Enhanced Drivability and Energy Efficiency," Energies, MDPI, vol. 17(1), pages 1-20, December.
    20. Geng, Wenran & Lou, Diming & Wang, Chen & Zhang, Tong, 2020. "A cascaded energy management optimization method of multimode power-split hybrid electric vehicles," Energy, Elsevier, vol. 199(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:307:y:2024:i:c:s0360544224024617. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.