Printed from https://ideas.repec.org/a/eee/appene/v328y2022ics030626192201491x.html

Online energy management strategy considering fuel cell fault for multi-stack fuel cell hybrid vehicle based on multi-agent reinforcement learning

Author

Listed:
  • Shi, Wenzhuo
  • Huangfu, Yigeng
  • Xu, Liangcai
  • Pang, Shengzhao

Abstract

Because of the high power demand of fuel cell hybrid vehicles, a multi-stack fuel cell system (MFCS) composed of multiple low-power fuel cell stacks (FCSs), rather than a single high-power stack, has become an attractive solution: the modularity of the MFCS makes it more reliable and durable. The hybrid power system (MHPSS) of an MFCS hybrid electric vehicle contains not only the MFCS but also a battery, which improves the dynamic performance of the MHPSS. Because the MFCS and the battery have different characteristics, the energy management strategy (EMS) of the MHPSS is key to ensuring its safe and efficient operation. However, most existing MHPSS EMSs are complicated to design and computationally expensive to run online. Moreover, they do not consider the robustness of the EMS, that is, its ability to guarantee safe operation of the MHPSS when the MFCS fails. To solve these problems, this paper proposes an EMS based on independent Q-learning (IQL), a multi-agent reinforcement learning algorithm, to maintain the battery state of charge (SOC) and minimize hydrogen consumption. The proposed EMS is not only simple to design but also guarantees normal operation of the MHPSS under MFCS faults, and it can be ported to a microcontroller unit or a field-programmable gate array for online execution. The various parts of the MHPSS model are first built; the IQL strategy (IQLS) is then trained offline in the established model environment; and finally the IQLS is ported to a hardware-in-the-loop platform for validation. To verify the effectiveness of the proposed EMS, the solution of a dimensionality-reduced dynamic programming (DP) problem is also used as a fuel-economy benchmark.
Experimental verification under different initial SOCs and driving cycles shows that the proposed IQLS achieves the goals of maintaining battery SOC and minimizing hydrogen consumption, and it also exhibits good generalization ability and safe operation under fault conditions.
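The core idea described in the abstract, independent Q-learning with one agent per stack optimizing a shared reward that penalizes hydrogen consumption and SOC deviation, can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the state/action discretization, the battery and hydrogen-consumption models, the reward weights, and all parameter values below are assumptions made for demonstration only.

```python
import random
from collections import defaultdict

# Hypothetical sketch of independent Q-learning (IQL) for a two-stack MFCS.
N_AGENTS = 2                 # one agent per fuel cell stack
ACTIONS = [0.0, 0.5, 1.0]    # assumed normalized stack power levels
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
SOC_REF = 0.6                # assumed battery SOC reference

def soc_bucket(soc):
    """Discretize battery SOC into 10 coarse state bins (assumed granularity)."""
    return min(int(soc * 10), 9)

def step(soc, demand, powers):
    """Toy plant: the battery covers the gap between demand and total stack power."""
    battery_power = demand - sum(powers)
    soc = min(max(soc - 0.01 * battery_power, 0.0), 1.0)
    h2 = sum(p + 0.2 * p * p for p in powers)  # convex hydrogen-consumption proxy
    # Shared cooperative reward: penalize hydrogen use and SOC deviation.
    reward = -h2 - 5.0 * abs(soc - SOC_REF)
    return soc, reward

def train(episodes=200, horizon=50, seed=0):
    rng = random.Random(seed)
    # Each agent keeps its own Q-table over the shared (SOC-bin) state.
    Q = [defaultdict(float) for _ in range(N_AGENTS)]
    for _ in range(episodes):
        soc = 0.6
        for _ in range(horizon):
            s = soc_bucket(soc)
            # Epsilon-greedy action selection, independently per agent.
            acts = []
            for q in Q:
                if rng.random() < EPS:
                    acts.append(rng.choice(ACTIONS))
                else:
                    acts.append(max(ACTIONS, key=lambda a: q[(s, a)]))
            soc, r = step(soc, demand=rng.uniform(0.5, 1.5), powers=acts)
            s2 = soc_bucket(soc)
            # Standard Q-learning update, applied to each agent independently.
            for q, a in zip(Q, acts):
                best_next = max(q[(s2, b)] for b in ACTIONS)
                q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return Q
```

A stack fault could be emulated in such a sketch by forcing one agent's action to zero at run time; because each agent learned its own table, the remaining agent can still act, which is the robustness property the abstract highlights.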

Suggested Citation

  • Shi, Wenzhuo & Huangfu, Yigeng & Xu, Liangcai & Pang, Shengzhao, 2022. "Online energy management strategy considering fuel cell fault for multi-stack fuel cell hybrid vehicle based on multi-agent reinforcement learning," Applied Energy, Elsevier, vol. 328(C).
  • Handle: RePEc:eee:appene:v:328:y:2022:i:c:s030626192201491x
    DOI: 10.1016/j.apenergy.2022.120234

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S030626192201491X
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.120234?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    2. Shuxian Li & Minghui Hu & Changchao Gong & Sen Zhan & Datong Qin, 2018. "Energy Management Strategy for Hybrid Electric Vehicle Based on Driving Condition Identification Using KGA-Means," Energies, MDPI, vol. 11(6), pages 1-16, June.
    3. Peng, Jiankun & He, Hongwen & Xiong, Rui, 2017. "Rule based energy management strategy for a series–parallel plug-in hybrid electric bus optimized by dynamic programming," Applied Energy, Elsevier, vol. 185(P2), pages 1633-1643.
    4. Zou, Yuan & Liu, Teng & Liu, Dexing & Sun, Fengchun, 2016. "Reinforcement learning-based real-time energy management for a hybrid tracked vehicle," Applied Energy, Elsevier, vol. 171(C), pages 372-382.
    5. Shen, Peihong & Zhao, Zhiguo & Zhan, Xiaowen & Li, Jingwei & Guo, Qiuyi, 2018. "Optimal energy management strategy for a plug-in hybrid electric commercial vehicle based on velocity prediction," Energy, Elsevier, vol. 155(C), pages 838-852.
    6. Xiong, Rui & Cao, Jiayi & Yu, Quanqing, 2018. "Reinforcement learning-based real-time power management for hybrid energy storage system in the plug-in hybrid electric vehicle," Applied Energy, Elsevier, vol. 211(C), pages 538-548.
    7. Zachary P. Cano & Dustin Banham & Siyu Ye & Andreas Hintennach & Jun Lu & Michael Fowler & Zhongwei Chen, 2018. "Batteries and fuel cells for emerging electric vehicle markets," Nature Energy, Nature, vol. 3(4), pages 279-289, April.
    8. Huang, Yanjun & Wang, Hong & Khajepour, Amir & Li, Bin & Ji, Jie & Zhao, Kegang & Hu, Chuan, 2018. "A review of power management strategies and component sizing methods for hybrid vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 96(C), pages 132-144.
    9. Amjad, Shaik & Neelakrishnan, S. & Rudramoorthy, R., 2010. "Review of design considerations and technological challenges for successful development and deployment of plug-in hybrid electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 14(3), pages 1104-1110, April.
    10. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Jia, Chunchun & Li, Kunang & He, Hongwen & Zhou, Jiaming & Li, Jianwei & Wei, Zhongbao, 2023. "Health-aware energy management strategy for fuel cell hybrid bus considering air-conditioning control based on TD3 algorithm," Energy, Elsevier, vol. 283(C).
    2. Hua, Min & Zhang, Cetengfei & Zhang, Fanggang & Li, Zhi & Yu, Xiaoli & Xu, Hongming & Zhou, Quan, 2023. "Energy management of multi-mode plug-in hybrid electric vehicle using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 348(C).
    3. Jia, Chunchun & He, Hongwen & Zhou, Jiaming & Li, Jianwei & Wei, Zhongbao & Li, Kunang, 2024. "Learning-based model predictive energy management for fuel cell hybrid electric bus with health-aware control," Applied Energy, Elsevier, vol. 355(C).
    4. Jia, Chunchun & He, Hongwen & Zhou, Jiaming & Li, Jianwei & Wei, Zhongbao & Li, Kunang, 2023. "A novel health-aware deep reinforcement learning energy management for fuel cell bus incorporating offline high-quality experience," Energy, Elsevier, vol. 282(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    2. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    3. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    4. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    5. Nie, Zhigen & Jia, Yuan & Wang, Wanqiong & Chen, Zheng & Outbib, Rachid, 2022. "Co-optimization of speed planning and energy management for intelligent fuel cell hybrid vehicle considering complex traffic conditions," Energy, Elsevier, vol. 247(C).
    6. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    7. Zhou, Jianhao & Xue, Siwu & Xue, Yuan & Liao, Yuhui & Liu, Jun & Zhao, Wanzhong, 2021. "A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning," Energy, Elsevier, vol. 224(C).
    8. Liu, Teng & Tan, Wenhao & Tang, Xiaolin & Zhang, Jinwei & Xing, Yang & Cao, Dongpu, 2021. "Driving conditions-driven energy management strategies for hybrid electric vehicles: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    9. Du, Guodong & Zou, Yuan & Zhang, Xudong & Kong, Zehui & Wu, Jinlong & He, Dingbo, 2019. "Intelligent energy management for hybrid electric tracked vehicles using online reinforcement learning," Applied Energy, Elsevier, vol. 251(C), pages 1-1.
    10. Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
    11. Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
    12. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    13. Fengqi Zhang & Lihua Wang & Serdar Coskun & Hui Pang & Yahui Cui & Junqiang Xi, 2020. "Energy Management Strategies for Hybrid Electric Vehicles: Review, Classification, Comparison, and Outlook," Energies, MDPI, vol. 13(13), pages 1-35, June.
    14. López-Ibarra, Jon Ander & Gaztañaga, Haizea & Saez-de-Ibarra, Andoni & Camblong, Haritza, 2020. "Plug-in hybrid electric buses total cost of ownership optimization at fleet level based on battery aging," Applied Energy, Elsevier, vol. 280(C).
    15. Zhou, Jianhao & Xue, Yuan & Xu, Da & Li, Chaoxiong & Zhao, Wanzhong, 2022. "Self-learning energy management strategy for hybrid electric vehicle via curiosity-inspired asynchronous deep reinforcement learning," Energy, Elsevier, vol. 242(C).
    16. Xiao, B. & Ruan, J. & Yang, W. & Walker, P.D. & Zhang, N., 2021. "A review of pivotal energy management strategies for extended range electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 149(C).
    17. Marouane Adnane & Ahmed Khoumsi & João Pedro F. Trovão, 2023. "Efficient Management of Energy Consumption of Electric Vehicles Using Machine Learning—A Systematic and Comprehensive Survey," Energies, MDPI, vol. 16(13), pages 1-39, June.
    18. Yaqian Wang & Xiaohong Jiao, 2022. "Dual Heuristic Dynamic Programming Based Energy Management Control for Hybrid Electric Vehicles," Energies, MDPI, vol. 15(9), pages 1-19, April.
    19. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    20. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:328:y:2022:i:c:s030626192201491x. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.