Heuristic action execution for energy efficient charge-sustaining control of connected hybrid vehicles with model-free double Q-learning

Author

Listed:
  • Shuai, Bin
  • Zhou, Quan
  • Li, Ji
  • He, Yinglong
  • Li, Ziyang
  • Williams, Huw
  • Xu, Hongming
  • Shuai, Shijin

Abstract

This paper investigates a model-free supervisory control methodology with double Q-learning for hybrid vehicles in charge-sustaining scenarios. It aims to continuously improve the vehicle's energy efficiency while maintaining the battery's state-of-charge in real-world driving. Two new heuristic action execution policies, a max-value-based policy and a random policy, are proposed for the double Q-learning method to reduce overestimation of the merit-function values of each action in the vehicle's power-split control. Experimental studies based on software-in-the-loop (offline learning) and hardware-in-the-loop (online learning) platforms are carried out to explore the energy-saving potential in four driving cycles defined from real-world vehicle operations. Results from 35 rounds of offline undisturbed learning show that the heuristic action execution policies improve the learning performance of conventional double Q-learning, achieving at least 1.09% higher energy efficiency. The proposed methods achieve results similar to those obtained by dynamic programming while remaining capable of real-time online application. The double Q-learning methods are shown to be more robust to disturbances during disturbed learning, achieving at least a threefold improvement in energy efficiency over standard Q-learning. The random execution policy achieves 1.18% higher energy efficiency than the max-value-based policy under the same driving condition, and significance tests show that the deciding factor in the random execution policy has little impact on learning performance. Implemented for online learning, the proposed model-free control method saves more than 4.55% energy in the predefined real-world driving conditions compared with the method using standard Q-learning.
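For readers who want a concrete picture of the algorithmic idea sketched in the abstract, below is a minimal tabular sketch of double Q-learning combined with two heuristic action execution policies. It is an illustration, not the authors' implementation: the exact definitions of the max-value-based and random policies, the role of the deciding factor `sigma`, the `env` interface, and all hyperparameters are assumptions made for the sake of a short runnable example.

```python
# Minimal double Q-learning sketch with two heuristic action execution
# policies in the spirit of the abstract. The tabular setting, the
# environment interface (env.reset / env.step) and all hyperparameters
# are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def select_action(qa, qb, state, policy, sigma=0.5, epsilon=0.1):
    """Pick an action for `state` under one of the two heuristic policies.

    policy == "max":    greedy w.r.t. the element-wise max of both tables
                        (one plausible reading of the max-value-based policy).
    policy == "random": greedy w.r.t. one table chosen at random, where
                        `sigma` stands in for the paper's deciding factor.
    """
    if rng.random() < epsilon:              # epsilon-greedy exploration (assumed)
        return int(rng.integers(qa.shape[1]))
    if policy == "max":
        merit = np.maximum(qa[state], qb[state])
    else:
        merit = qa[state] if rng.random() < sigma else qb[state]
    return int(np.argmax(merit))

def double_q_update(qa, qb, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard double Q-learning update (van Hasselt, 2010): one table
    selects the greedy next action, the other evaluates it."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(qa[s_next]))
        qa[s, a] += alpha * (r + gamma * qb[s_next, a_star] - qa[s, a])
    else:
        a_star = int(np.argmax(qb[s_next]))
        qb[s, a] += alpha * (r + gamma * qa[s_next, a_star] - qb[s, a])

def train(env, n_states, n_actions, episodes=35, policy="random"):
    qa = np.zeros((n_states, n_actions))
    qb = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = select_action(qa, qb, s, policy)
            s_next, r, done = env.step(a)   # assumed environment interface
            double_q_update(qa, qb, s, a, r, s_next)
            s = s_next
    return qa, qb
```

The design point worth noting is that one table chooses the greedy next action while the other evaluates it; decoupling selection from evaluation is the mechanism that suppresses the overestimation of merit-function values mentioned in the abstract.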

Suggested Citation

  • Shuai, Bin & Zhou, Quan & Li, Ji & He, Yinglong & Li, Ziyang & Williams, Huw & Xu, Hongming & Shuai, Shijin, 2020. "Heuristic action execution for energy efficient charge-sustaining control of connected hybrid vehicles with model-free double Q-learning," Applied Energy, Elsevier, vol. 267(C).
  • Handle: RePEc:eee:appene:v:267:y:2020:i:c:s0306261920304128
    DOI: 10.1016/j.apenergy.2020.114900

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261920304128
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2020.114900?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Zhou, Quan & Zhang, Wei & Cash, Scott & Olatunbosun, Oluremi & Xu, Hongming & Lu, Guoxiang, 2017. "Intelligent sizing of a series hybrid electric power-train system based on Chaos-enhanced accelerated particle swarm optimization," Applied Energy, Elsevier, vol. 189(C), pages 588-601.
    2. Zhou, Quan & Li, Ji & Shuai, Bin & Williams, Huw & He, Yinglong & Li, Ziyang & Xu, Hongming & Yan, Fuwu, 2019. "Multi-step reinforcement learning for model-free predictive energy management of an electrified off-highway vehicle," Applied Energy, Elsevier, vol. 255(C).
    3. Wang, Feng & Zhang, Jian & Xu, Xing & Cai, Yingfeng & Zhou, Zhiguang & Sun, Xiaoqiang, 2019. "A comprehensive dynamic efficiency-enhanced energy management strategy for plug-in hybrid electric vehicles," Applied Energy, Elsevier, vol. 247(C), pages 657-669.
    4. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    5. Han, Xuefeng & He, Hongwen & Wu, Jingda & Peng, Jiankun & Li, Yuecheng, 2019. "Energy management based on reinforcement learning with double deep Q-learning for a hybrid electric tracked vehicle," Applied Energy, Elsevier, vol. 254(C).
    6. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Zhou, Quan & Li, Yanfei & Zhao, Dezong & Li, Ji & Williams, Huw & Xu, Hongming & Yan, Fuwu, 2022. "Transferable representation modelling for real-time energy management of the plug-in hybrid vehicle based on k-fold fuzzy learning and Gaussian process regression," Applied Energy, Elsevier, vol. 305(C).
    2. Liu, Teng & Tan, Wenhao & Tang, Xiaolin & Zhang, Jinwei & Xing, Yang & Cao, Dongpu, 2021. "Driving conditions-driven energy management strategies for hybrid electric vehicles: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    3. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    4. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    5. Hua, Min & Zhang, Cetengfei & Zhang, Fanggang & Li, Zhi & Yu, Xiaoli & Xu, Hongming & Zhou, Quan, 2023. "Energy management of multi-mode plug-in hybrid electric vehicle using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 348(C).
    6. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    7. Zhang, Hao & Fan, Qinhao & Liu, Shang & Li, Shengbo Eben & Huang, Jin & Wang, Zhi, 2021. "Hierarchical energy management strategy for plug-in hybrid electric powertrain integrated with dual-mode combustion engine," Applied Energy, Elsevier, vol. 304(C).
    8. Fuwu Yan & Jinhai Wang & Changqing Du & Min Hua, 2022. "Multi-Objective Energy Management Strategy for Hybrid Electric Vehicles Based on TD3 with Non-Parametric Reward Function," Energies, MDPI, vol. 16(1), pages 1-17, December.
    9. Yang, Shaohua & Lao, Keng-Weng & Hui, Hongxun & Chen, Yulin, 2023. "A robustness-enhanced frequency regulation scheme for power system against multiple cyber and physical emergency events," Applied Energy, Elsevier, vol. 350(C).
    10. Zhang, Hao & Liu, Shang & Lei, Nuo & Fan, Qinhao & Wang, Zhi, 2022. "Leveraging the benefits of ethanol-fueled advanced combustion and supervisory control optimization in hybrid biofuel-electric vehicles," Applied Energy, Elsevier, vol. 326(C).
    11. Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
    12. Kong, Yan & Xu, Nan & Liu, Qiao & Sui, Yan & Yue, Fenglai, 2023. "A data-driven energy management method for parallel PHEVs based on action dependent heuristic dynamic programming (ADHDP) model," Energy, Elsevier, vol. 265(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhang, Hailong & Peng, Jiankun & Tan, Huachun & Dong, Hanxuan & Ding, Fan & Ran, Bin, 2020. "Tackling SOC long-term dynamic for energy management of hybrid electric buses via adaptive policy optimization," Applied Energy, Elsevier, vol. 269(C).
    2. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    3. Liu, Huanlong & Chen, Guanpeng & Li, Dafa & Wang, Jiawei & Zhou, Jianyi, 2021. "Energy active adjustment and bidirectional transfer management strategy of the electro-hydrostatic hydraulic hybrid powertrain for battery bus," Energy, Elsevier, vol. 230(C).
    4. Zhang, Hao & Fan, Qinhao & Liu, Shang & Li, Shengbo Eben & Huang, Jin & Wang, Zhi, 2021. "Hierarchical energy management strategy for plug-in hybrid electric powertrain integrated with dual-mode combustion engine," Applied Energy, Elsevier, vol. 304(C).
    5. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    6. Zhou, Quan & Li, Yanfei & Zhao, Dezong & Li, Ji & Williams, Huw & Xu, Hongming & Yan, Fuwu, 2022. "Transferable representation modelling for real-time energy management of the plug-in hybrid vehicle based on k-fold fuzzy learning and Gaussian process regression," Applied Energy, Elsevier, vol. 305(C).
    7. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    8. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    9. Zhang, Cetengfei & Zhou, Quan & Hua, Min & Xu, Hongming & Bassett, Mike & Zhang, Fanggang, 2023. "Cuboid equivalent consumption minimization strategy for energy management of multi-mode plug-in hybrid vehicles considering diverse time scale objectives," Applied Energy, Elsevier, vol. 351(C).
    10. Huang, Ruchen & He, Hongwen & Zhao, Xuyang & Wang, Yunlong & Li, Menglin, 2022. "Battery health-aware and naturalistic data-driven energy management for hybrid electric bus based on TD3 deep reinforcement learning algorithm," Applied Energy, Elsevier, vol. 321(C).
    11. Chen, Zheng & Gu, Hongji & Shen, Shiquan & Shen, Jiangwei, 2022. "Energy management strategy for power-split plug-in hybrid electric vehicle based on MPC and double Q-learning," Energy, Elsevier, vol. 245(C).
    12. Chen, Ruihu & Yang, Chao & Ma, Yue & Wang, Weida & Wang, Muyao & Du, Xuelong, 2022. "Online learning predictive power coordinated control strategy for off-road hybrid electric vehicles considering the dynamic response of engine generator set," Applied Energy, Elsevier, vol. 323(C).
    13. Zhou, Jianhao & Xue, Yuan & Xu, Da & Li, Chaoxiong & Zhao, Wanzhong, 2022. "Self-learning energy management strategy for hybrid electric vehicle via curiosity-inspired asynchronous deep reinforcement learning," Energy, Elsevier, vol. 242(C).
    14. Wu, Jingda & Huang, Chao & He, Hongwen & Huang, Hailong, 2024. "Confidence-aware reinforcement learning for energy management of electrified vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 191(C).
    15. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    16. Liu, Huanlong & Chen, Guanpeng & Xie, Chixin & Li, Dafa & Wang, Jiawei & Li, Shun, 2020. "Research on energy-saving characteristics of battery-powered electric-hydrostatic hydraulic hybrid rail vehicles," Energy, Elsevier, vol. 205(C).
    17. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    18. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    19. Kunyu Wang & Rong Yang & Yongjian Zhou & Wei Huang & Song Zhang, 2022. "Design and Improvement of SD3-Based Energy Management Strategy for a Hybrid Electric Urban Bus," Energies, MDPI, vol. 15(16), pages 1-21, August.
    20. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:267:y:2020:i:c:s0306261920304128. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.