
Quadruple Deep Q-Network-Based Energy Management Strategy for Plug-In Hybrid Electric Vehicles

Author

Listed:
  • Dingyi Guo

    (Institute of Future Lighting, Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
    Research Institute of Fudan University in Ningbo, Ningbo 315336, China
    General Research and Development Institute, China FAW Corporation Limited, Changchun 130011, China)

  • Guangyin Lei

    (Institute of Future Lighting, Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
    Research Institute of Fudan University in Ningbo, Ningbo 315336, China)

  • Huichao Zhao

    (General Research and Development Institute, China FAW Corporation Limited, Changchun 130011, China)

  • Fang Yang

    (General Research and Development Institute, China FAW Corporation Limited, Changchun 130011, China)

  • Qiang Zhang

    (General Research and Development Institute, China FAW Corporation Limited, Changchun 130011, China)

Abstract

This study proposes the use of a Quadruple Deep Q-Network (QDQN) for optimizing the energy management strategy of Plug-in Hybrid Electric Vehicles (PHEVs). The aim of this research is to improve energy utilization efficiency through reinforcement learning, with a focus on reducing energy consumption while maintaining vehicle performance. The methods include training a QDQN model to learn optimal energy management policies from vehicle operating conditions and comparing the results with those obtained from traditional dynamic programming (DP), Double Deep Q-Network (DDQN), and Deep Q-Network (DQN) approaches. The findings demonstrate that the QDQN-based strategy significantly improves energy utilization, achieving a maximum efficiency increase of 11% compared with DP. Additionally, this study highlights that alternating updates between two Q-networks, as in DDQN, help avoid local optima, further enhancing performance where a purely greedy strategy would settle on suboptimal actions. The conclusions suggest that QDQN is an effective and robust approach for optimizing energy management in PHEVs, offering superior energy efficiency over traditional reinforcement learning methods. This approach provides a promising direction for real-time energy optimization in hybrid and electric vehicles.
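The abstract does not spell out how the four Q-networks are combined. The sketch below is a minimal, hypothetical reading, assuming QDQN generalizes the Double DQN idea to an ensemble of four networks: one network selects the greedy next action and all four evaluate it, combined conservatively via the minimum as in clipped double Q-learning. The state dimensions, action discretization, and combination rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a quadruple Q-network target computation (PyTorch).
# Assumption: QDQN extends Double DQN to four networks; the paper's exact
# formulation may differ (e.g., averaging instead of the minimum).
import random
import torch
import torch.nn as nn

STATE_DIM = 4   # e.g., SOC, vehicle speed, power demand, acceleration (assumed)
N_ACTIONS = 5   # discretized engine/motor power-split levels (assumed)
GAMMA = 0.99    # discount factor

def make_q_net():
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

# Four independent Q-networks: the "quadruple".
q_nets = [make_q_net() for _ in range(4)]

def qdqn_target(reward, next_state, done):
    """Bootstrapped TD target from four Q-networks."""
    with torch.no_grad():
        # One randomly chosen network picks the greedy next action...
        selector = random.choice(q_nets)
        a_star = selector(next_state).argmax(dim=1, keepdim=True)
        # ...and all four evaluate it; take the minimum to curb overestimation.
        q_vals = torch.stack(
            [net(next_state).gather(1, a_star) for net in q_nets], dim=0
        )
        q_next = q_vals.min(dim=0).values
        return reward + GAMMA * (1.0 - done) * q_next.squeeze(1)

# Toy batch of 8 transitions with placeholder data.
batch = 8
r = torch.zeros(batch)                  # e.g., negative fuel consumption
s_next = torch.randn(batch, STATE_DIM)
d = torch.zeros(batch)                  # 1.0 where the episode terminated
print(qdqn_target(r, s_next, d).shape)  # torch.Size([8])
```

In a full agent, each network would be trained against such targets (typically with periodically synchronized target copies and an epsilon-greedy behavior policy); the sketch only shows the target computation that the quadruple structure changes relative to DQN and DDQN.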

Suggested Citation

  • Dingyi Guo & Guangyin Lei & Huichao Zhao & Fang Yang & Qiang Zhang, 2024. "Quadruple Deep Q-Network-Based Energy Management Strategy for Plug-In Hybrid Electric Vehicles," Energies, MDPI, vol. 17(24), pages 1-21, December.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:24:p:6298-:d:1543238

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/24/6298/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/24/6298/
    Download Restriction: no

    References listed on IDEAS

    1. Hua, Haochen & Qin, Yuchao & Hao, Chuantong & Cao, Junwei, 2019. "Optimal energy management strategies for energy Internet via deep reinforcement learning approach," Applied Energy, Elsevier, vol. 239(C), pages 598-609.
    2. Sun, Fengchun & Hu, Xiaosong & Zou, Yuan & Li, Siguang, 2011. "Adaptive unscented Kalman filtering for state of charge estimation of a lithium-ion battery for electric vehicles," Energy, Elsevier, vol. 36(5), pages 3531-3540.
    3. Liu, Teng & Wang, Bo & Yang, Chenglang, 2018. "Online Markov Chain-based energy management for a hybrid tracked vehicle with speedy Q-learning," Energy, Elsevier, vol. 160(C), pages 544-555.
    4. Han, Xuefeng & He, Hongwen & Wu, Jingda & Peng, Jiankun & Li, Yuecheng, 2019. "Energy management based on reinforcement learning with double deep Q-learning for a hybrid electric tracked vehicle," Applied Energy, Elsevier, vol. 254(C).
    5. Du, Guodong & Zou, Yuan & Zhang, Xudong & Kong, Zehui & Wu, Jinlong & He, Dingbo, 2019. "Intelligent energy management for hybrid electric tracked vehicles using online reinforcement learning," Applied Energy, Elsevier, vol. 251(C), pages 1-1.
    Full references (including those not matched with items on IDEAS)

    Most related items

These are the items that most often cite the same works as this one and are cited by the same works as this one; a toy sketch of this scoring idea follows the list.
    1. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    2. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    3. Liu, Teng & Tan, Wenhao & Tang, Xiaolin & Zhang, Jinwei & Xing, Yang & Cao, Dongpu, 2021. "Driving conditions-driven energy management strategies for hybrid electric vehicles: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    4. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    5. Zhang, Wei & Wang, Jixin & Liu, Yong & Gao, Guangzong & Liang, Siwen & Ma, Hongfeng, 2020. "Reinforcement learning-based intelligent energy management architecture for hybrid construction machinery," Applied Energy, Elsevier, vol. 275(C).
    6. Chen, Ruihu & Yang, Chao & Ma, Yue & Wang, Weida & Wang, Muyao & Du, Xuelong, 2022. "Online learning predictive power coordinated control strategy for off-road hybrid electric vehicles considering the dynamic response of engine generator set," Applied Energy, Elsevier, vol. 323(C).
    7. Wang, Yue & Li, Keqiang & Zeng, Xiaohua & Gao, Bolin & Hong, Jichao, 2023. "Investigation of novel intelligent energy management strategies for connected HEB considering global planning of fixed-route information," Energy, Elsevier, vol. 263(PB).
    8. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    9. Xiao, Boyi & Yang, Weiwei & Wu, Jiamin & Walker, Paul D. & Zhang, Nong, 2022. "Energy management strategy via maximum entropy reinforcement learning for an extended range logistics vehicle," Energy, Elsevier, vol. 253(C).
    10. Stefan Milićević & Ivan Blagojević & Saša Milojević & Milan Bukvić & Blaža Stojanović, 2024. "Numerical Analysis of Optimal Hybridization in Parallel Hybrid Electric Powertrains for Tracked Vehicles," Energies, MDPI, vol. 17(14), pages 1-19, July.
    11. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    12. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    13. Du, Guodong & Zou, Yuan & Zhang, Xudong & Guo, Lingxiong & Guo, Ningyuan, 2022. "Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework," Energy, Elsevier, vol. 241(C).
    14. Baodi Zhang & Sheng Guo & Xin Zhang & Qicheng Xue & Lan Teng, 2020. "Adaptive Smoothing Power Following Control Strategy Based on an Optimal Efficiency Map for a Hybrid Electric Tracked Vehicle," Energies, MDPI, vol. 13(8), pages 1-25, April.
    15. Liu, Huanlong & Chen, Guanpeng & Li, Dafa & Wang, Jiawei & Zhou, Jianyi, 2021. "Energy active adjustment and bidirectional transfer management strategy of the electro-hydrostatic hydraulic hybrid powertrain for battery bus," Energy, Elsevier, vol. 230(C).
    16. Han, Lijin & Yang, Ke & Ma, Tian & Yang, Ningkang & Liu, Hui & Guo, Lingxiong, 2022. "Battery life constrained real-time energy management strategy for hybrid electric vehicles based on reinforcement learning," Energy, Elsevier, vol. 259(C).
    17. Ming Zhang & Dongfang Yang & Jiaxuan Du & Hanlei Sun & Liwei Li & Licheng Wang & Kai Wang, 2023. "A Review of SOH Prediction of Li-Ion Batteries Based on Data-Driven Algorithms," Energies, MDPI, vol. 16(7), pages 1-28, March.
    18. Nyong-Bassey, Bassey Etim & Giaouris, Damian & Patsios, Charalampos & Papadopoulou, Simira & Papadopoulos, Athanasios I. & Walker, Sara & Voutetakis, Spyros & Seferlis, Panos & Gadoue, Shady, 2020. "Reinforcement learning based adaptive power pinch analysis for energy management of stand-alone hybrid energy storage systems considering uncertainty," Energy, Elsevier, vol. 193(C).
    19. Yin, Linfei & Zhang, Bin, 2021. "Time series generative adversarial network controller for long-term smart generation control of microgrids," Applied Energy, Elsevier, vol. 281(C).
    20. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
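As a purely hypothetical illustration of the relatedness notion described above (shared references plus shared citing works), the following toy function scores a candidate item against the target item. The actual RePEc/CitEc ranking algorithm is not documented on this page; only the general idea is shown.

```python
# Toy relatedness score: count works cited by both items plus works that
# cite both items. Hypothetical; not the actual RePEc/CitEc algorithm.
def relatedness(target_refs, target_citers, item_refs, item_citers):
    """Shared references + shared citing works."""
    shared_refs = len(set(target_refs) & set(item_refs))
    shared_citers = len(set(target_citers) & set(item_citers))
    return shared_refs + shared_citers

# Toy example with made-up RePEc-style handles.
target = (["h1", "h2", "h3"], ["c1", "c2"])
candidate = (["h2", "h3", "h4"], ["c2", "c9"])
print(relatedness(*target, *candidate))  # 3
```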

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:24:p:6298-:d:1543238. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.