
Energy Management for Hybrid Electric Vehicles Using Safe Hybrid-Action Reinforcement Learning

Author

Listed:
  • Jinming Xu

    (Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 510641, China)

  • Yuan Lin

    (Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 510641, China)

Abstract

Reinforcement learning has shown success in solving complex control problems, yet safety remains paramount in engineering applications like energy management systems (EMS), particularly in hybrid electric vehicles (HEVs). An effective EMS is crucial for coordinating power flow while ensuring safety, such as maintaining the battery state of charge within safe limits, which presents a challenging task. Traditional reinforcement learning struggles with safety constraints, and the penalty method often leads to suboptimal performance. This study introduces Lagrangian-based parameterized soft actor–critic (PASACLag), a novel safe hybrid-action reinforcement learning algorithm for HEV energy management. PASACLag utilizes a unique composite action representation to handle continuous actions (e.g., engine torque) and discrete actions (e.g., gear shift and clutch engagement) concurrently. It integrates a Lagrangian method to separately address control objectives and constraints, simplifying the reward function and enhancing safety. We evaluate PASACLag’s performance using the World Harmonized Vehicle Cycle (901 s), with a generalization analysis of four different cycles. The results indicate that PASACLag achieves a less than 10% increase in fuel consumption compared to dynamic programming. Moreover, PASACLag surpasses PASAC, an unsafe counterpart using penalty methods, in fuel economy and constraint satisfaction metrics during generalization. These findings highlight PASACLag’s effectiveness in acquiring complex EMS for control within a hybrid action space while prioritizing safety.
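The abstract's two core ideas can be illustrated with a minimal, hypothetical Python sketch (this is not the authors' code; the SOC band, action names, and update rule below are illustrative assumptions): a composite action combines one discrete choice (e.g., gear) with one continuous command (e.g., normalized engine torque), and a Lagrange multiplier is raised by gradient ascent whenever the battery state of charge (SOC) leaves its safe band, keeping the constraint cost separate from the fuel-economy reward instead of folding it into a penalty term.

```python
import math
import random

# Illustrative safe SOC band; the paper's actual limits may differ.
SOC_MIN, SOC_MAX = 0.4, 0.8


def sample_hybrid_action(gear_logits, torque_mean, torque_std):
    """Sample a (discrete gear, continuous torque) pair - a hybrid action."""
    # Softmax over gear logits, then a categorical sample for the discrete part.
    z = [math.exp(g - max(gear_logits)) for g in gear_logits]
    probs = [v / sum(z) for v in z]
    r, gear, acc = random.random(), 0, 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            gear = i
            break
    # Gaussian sample for the continuous part, clipped to a normalized range.
    torque = random.gauss(torque_mean, torque_std)
    return gear, max(0.0, min(1.0, torque))


def constraint_cost(soc):
    """Distance of SOC outside the safe band; zero when the constraint holds."""
    return max(0.0, SOC_MIN - soc) + max(0.0, soc - SOC_MAX)


def dual_update(lam, soc, lr=0.1):
    """Lagrangian dual ascent: the multiplier grows only on violations."""
    return max(0.0, lam + lr * constraint_cost(soc))
```

In a full actor-critic implementation the multiplier would weight the constraint critic against the reward critic in the policy loss; this sketch only shows why the Lagrangian route simplifies the reward function: the reward stays pure fuel economy, while safety pressure enters through `lam`.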

Suggested Citation

  • Jinming Xu & Yuan Lin, 2024. "Energy Management for Hybrid Electric Vehicles Using Safe Hybrid-Action Reinforcement Learning," Mathematics, MDPI, vol. 12(5), pages 1-20, February.
  • Handle: RePEc:gam:jmathe:v:12:y:2024:i:5:p:663-:d:1345170

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/12/5/663/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/12/5/663/
    Download Restriction: no

    References listed on IDEAS

    1. Marc G. Bellemare & Salvatore Candido & Pablo Samuel Castro & Jun Gong & Marlos C. Machado & Subhodeep Moitra & Sameera S. Ponda & Ziyu Wang, 2020. "Autonomous navigation of stratospheric balloons using reinforcement learning," Nature, Nature, vol. 588(7836), pages 77-82, December.
    2. Elia Kaufmann & Leonard Bauersfeld & Antonio Loquercio & Matthias Müller & Vladlen Koltun & Davide Scaramuzza, 2023. "Champion-level drone racing using deep reinforcement learning," Nature, Nature, vol. 620(7976), pages 982-987, August.
    3. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & M, 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    4. Wang, Hao & He, Hongwen & Bai, Yunfei & Yue, Hongwei, 2022. "Parameterized deep Q-network based energy management with balanced energy economy and battery life for hybrid electric vehicles," Applied Energy, Elsevier, vol. 320(C).
    5. Peter R. Wurman & Samuel Barrett & Kenta Kawamoto & James MacGlashan & Kaushik Subramanian & Thomas J. Walsh & Roberto Capobianco & Alisa Devlic & Franziska Eckert & Florian Fuchs & Leilani Gilpin & P, 2022. "Outracing champion Gran Turismo drivers with deep reinforcement learning," Nature, Nature, vol. 602(7896), pages 223-228, February.
    6. Fengqi Zhang & Lihua Wang & Serdar Coskun & Hui Pang & Yahui Cui & Junqiang Xi, 2020. "Energy Management Strategies for Hybrid Electric Vehicles: Review, Classification, Comparison, and Outlook," Energies, MDPI, vol. 13(13), pages 1-35, June.
    7. Li, Yuecheng & He, Hongwen & Khajepour, Amir & Wang, Hong & Peng, Jiankun, 2019. "Energy management for a power-split hybrid electric bus via deep reinforcement learning with terrain information," Applied Energy, Elsevier, vol. 255(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wang, Yong & Wu, Yuankai & Tang, Yingjuan & Li, Qin & He, Hongwen, 2023. "Cooperative energy management and eco-driving of plug-in hybrid electric vehicle via multi-agent reinforcement learning," Applied Energy, Elsevier, vol. 332(C).
    2. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    3. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    4. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    5. Constantin Waubert de Puiseau & Richard Meyes & Tobias Meisen, 2022. "On reliability of reinforcement learning based production scheduling systems: a comparative survey," Journal of Intelligent Manufacturing, Springer, vol. 33(4), pages 911-927, April.
    6. Raeid Saqur, 2024. "What Teaches Robots to Walk, Teaches Them to Trade too -- Regime Adaptive Execution using Informed Data and LLMs," Papers 2406.15508, arXiv.org.
    7. Malte Reinschmidt & József Fortágh & Andreas Günther & Valentin V. Volchkov, 2024. "Reinforcement learning in cold atom experiments," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    8. He, Hongwen & Meng, Xiangfei & Wang, Yong & Khajepour, Amir & An, Xiaowen & Wang, Renguang & Sun, Fengchun, 2024. "Deep reinforcement learning based energy management strategies for electrified vehicles: Recent advances and perspectives," Renewable and Sustainable Energy Reviews, Elsevier, vol. 192(C).
    9. Huang, Ruchen & He, Hongwen & Su, Qicong & Härtl, Martin & Jaensch, Malte, 2024. "Enabling cross-type full-knowledge transferable energy management for hybrid electric vehicles via deep transfer reinforcement learning," Energy, Elsevier, vol. 305(C).
    10. Zhang, Hao & Liu, Shang & Lei, Nuo & Fan, Qinhao & Wang, Zhi, 2022. "Leveraging the benefits of ethanol-fueled advanced combustion and supervisory control optimization in hybrid biofuel-electric vehicles," Applied Energy, Elsevier, vol. 326(C).
    11. Huang, Ruchen & He, Hongwen & Su, Qicong, 2024. "Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning," Applied Energy, Elsevier, vol. 363(C).
    12. Matteo Vaccargiu & Andrea Pinna & Roberto Tonelli & Luisanna Cocco, 2023. "Blockchain in the Energy Sector for SDG Achievement," Sustainability, MDPI, vol. 15(20), pages 1-23, October.
    13. Yang, Ningkang & Han, Lijin & Xiang, Changle & Liu, Hui & Li, Xunmin, 2021. "An indirect reinforcement learning based real-time energy management strategy via high-order Markov Chain model for a hybrid electric vehicle," Energy, Elsevier, vol. 236(C).
    14. Yi, Zonggen & Luo, Yusheng & Westover, Tyler & Katikaneni, Sravya & Ponkiya, Binaka & Sah, Suba & Mahmud, Sadab & Raker, David & Javaid, Ahmad & Heben, Michael J. & Khanna, Raghav, 2022. "Deep reinforcement learning based optimization for a tightly coupled nuclear renewable integrated energy system," Applied Energy, Elsevier, vol. 328(C).
    15. Selin Engin & Hasan Çınar & İlyas Kandemir, 2024. "A Rule-Based Energy Management Technique Considering Altitude Energy for a Mini UAV with a Hybrid Power System Consisting of Battery and Solar Cell," Energies, MDPI, vol. 17(16), pages 1-16, August.
    16. Liying Xu & Jiadi Zhu & Bing Chen & Zhen Yang & Keqin Liu & Bingjie Dang & Teng Zhang & Yuchao Yang & Ru Huang, 2022. "A distributed nanocluster based multi-agent evolutionary network," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    17. Daphne Cornelisse & Thomas Rood & Mateusz Malinowski & Yoram Bachrach & Tal Kachman, 2022. "Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members," Papers 2208.08798, arXiv.org.
    18. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    19. Yavuz Eray Altun & Osman Akın Kutlar, 2024. "Energy Management Systems’ Modeling and Optimization in Hybrid Electric Vehicles," Energies, MDPI, vol. 17(7), pages 1-39, April.
    20. Weisheng Chiu & Thomas Chun Man Fan & Sang-Back Nam & Ping-Hung Sun, 2021. "Knowledge Mapping and Sustainable Development of eSports Research: A Bibliometric and Visualized Analysis," Sustainability, MDPI, vol. 13(18), pages 1-17, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:12:y:2024:i:5:p:663-:d:1345170. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.