Printed from https://ideas.repec.org/a/eee/reensy/v224y2022ics0951832022001831.html

A prescriptive Dirichlet power allocation policy with deep reinforcement learning

Author

Listed:
  • Tian, Yuan
  • Han, Minghao
  • Kulkarni, Chetan
  • Fink, Olga

Abstract

Prescribing optimal operation based on the condition of the system, and thereby potentially prolonging its remaining useful lifetime, has tremendous potential in terms of actively managing the availability, maintenance, and costs of complex systems. Reinforcement learning (RL) algorithms are particularly suitable for this type of problem given their learning capabilities. A special case of prescriptive operation is the power allocation task, which can be considered a sequential allocation problem in which the action space is bounded by a simplex constraint. A general continuous action-space solution of such sequential allocation problems remains an open research question for RL algorithms. In continuous action spaces, the standard Gaussian policy applied in reinforcement learning does not support simplex constraints, while the Gaussian-softmax policy introduces a bias during training. In this work, we propose the Dirichlet policy for continuous allocation tasks and analyze the bias and variance of its policy gradients. We demonstrate that the Dirichlet policy is bias-free and provides significantly faster convergence, better performance, and better robustness to hyperparameter changes than the Gaussian-softmax policy. Moreover, we demonstrate the applicability of the proposed algorithm on a prescriptive operation case: we propose the Dirichlet power allocation policy and evaluate its performance on a case study of multiple lithium-ion (Li-ion) battery systems. The experimental results demonstrate the potential to prescribe optimal operation, improving the efficiency and sustainability of multi-power-source systems.
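To make the simplex constraint concrete, the sketch below (PyTorch; not the authors' implementation, and names such as DirichletPolicy, state_dim, and n_sources are illustrative assumptions) shows a policy network that outputs Dirichlet concentration parameters, samples an allocation that sums to one by construction, and exposes the log-probability needed for a policy-gradient update. A Gaussian-softmax policy would instead sample an unconstrained Gaussian vector and squash it through a softmax, which is the construction the abstract identifies as biased.

    import torch
    import torch.nn as nn
    from torch.distributions import Dirichlet

    class DirichletPolicy(nn.Module):
        """Maps a system state to a Dirichlet distribution over power allocations."""
        def __init__(self, state_dim: int, n_sources: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_sources),
            )

        def forward(self, state: torch.Tensor) -> Dirichlet:
            # Softplus keeps the concentration parameters strictly positive; the
            # +1 offset (an assumption, not necessarily the authors' choice)
            # discourages extremely sparse allocations early in training.
            alpha = nn.functional.softplus(self.net(state)) + 1.0
            return Dirichlet(alpha)

    # Example: split power across 3 battery systems for a batch of 4 states.
    policy = DirichletPolicy(state_dim=8, n_sources=3)
    states = torch.randn(4, 8)
    dist = policy(states)
    allocation = dist.sample()            # each row lies on the simplex (sums to 1)
    log_prob = dist.log_prob(allocation)  # differentiable w.r.t. the network weights
    advantage = torch.ones(4)             # placeholder for a learned advantage/critic
    loss = -(log_prob * advantage).mean() # REINFORCE-style policy-gradient loss
    loss.backward()

Compared with squashing a Gaussian sample through a softmax, parameterising the Dirichlet directly keeps every sampled action feasible by construction, which is the property the paper exploits for the power allocation task.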

Suggested Citation

  • Tian, Yuan & Han, Minghao & Kulkarni, Chetan & Fink, Olga, 2022. "A prescriptive Dirichlet power allocation policy with deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 224(C).
  • Handle: RePEc:eee:reensy:v:224:y:2022:i:c:s0951832022001831
    DOI: 10.1016/j.ress.2022.108529

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0951832022001831
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ress.2022.108529?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Kristen A. Severson & Peter M. Attia & Norman Jin & Nicholas Perkins & Benben Jiang & Zi Yang & Michael H. Chen & Muratahan Aykol & Patrick K. Herring & Dimitrios Fraggedakis & Martin Z. Bazant & Step, 2019. "Data-driven prediction of battery cycle life before capacity degradation," Nature Energy, Nature, vol. 4(5), pages 383-391, May.
    2. Meissner, Robert & Rahn, Antonia & Wicke, Kai, 2021. "Developing prescriptive maintenance strategies in the aviation industry based on a discrete-event simulation framework for post-prognostics decision making," Reliability Engineering and System Safety, Elsevier, vol. 214(C).
    3. Nagulapati, Vijay Mohan & Lee, Hyunjun & Jung, DaWoon & Brigljevic, Boris & Choi, Yunseok & Lim, Hankwon, 2021. "Capacity estimation of batteries: Influence of training dataset size and diversity on data driven prognostic models," Reliability Engineering and System Safety, Elsevier, vol. 216(C).
    4. Guofeng Sun & Zhiqiang Tian & Renhua Liu & Yun Jing & Yawen Ma, 2020. "Research on Coordination and Optimization of Order Allocation and Delivery Route Planning in Take-Out System," Mathematical Problems in Engineering, Hindawi, vol. 2020, pages 1-16, July.
    5. Wang, Wei & Lin, Mingqiang & Fu, Yongnian & Luo, Xiaoping & Chen, Hanghang, 2020. "Multi-objective optimization of reliability-redundancy allocation problem for multi-type production systems considering redundancy strategies," Reliability Engineering and System Safety, Elsevier, vol. 193(C).
    6. Zhang, Shuo & Xiong, Rui & Cao, Jiayi, 2016. "Battery durability and longevity based power management for plug-in hybrid electric vehicle with hybrid energy storage system," Applied Energy, Elsevier, vol. 179(C), pages 316-328.
    7. Wang, Yue & Zeng, Xiaohua & Song, Dafeng & Yang, Nannan, 2019. "Optimal rule design methodology for energy management strategy of a power-split hybrid electric bus," Energy, Elsevier, vol. 185(C), pages 1086-1099.
    8. Xiong, Rui & Cao, Jiayi & Yu, Quanqing, 2018. "Reinforcement learning-based real-time power management for hybrid energy storage system in the plug-in hybrid electric vehicle," Applied Energy, Elsevier, vol. 211(C), pages 538-548.
    9. Sabri-Laghaie, Kamyar & Karimi-Nasab, Mehdi, 2019. "Random search algorithms for redundancy allocation problem of a queuing system with maintenance considerations," Reliability Engineering and System Safety, Elsevier, vol. 185(C), pages 144-162.
    10. Xu, Yue & Pi, Dechang & Yang, Shengxiang & Chen, Yang, 2021. "A novel discrete bat algorithm for heterogeneous redundancy allocation of multi-state systems subject to probabilistic common-cause failure," Reliability Engineering and System Safety, Elsevier, vol. 208(C).
    11. Arias Chao, Manuel & Kulkarni, Chetan & Goebel, Kai & Fink, Olga, 2022. "Fusing physics-based and deep learning models for prognostics," Reliability Engineering and System Safety, Elsevier, vol. 217(C).
    12. Zhang, Xiaoxiong & Ding, Song & Ge, Bingfeng & Xia, Boyuan & Pedrycz, Witold, 2021. "Resource allocation among multiple targets for a defender-attacker game with false targets consideration," Reliability Engineering and System Safety, Elsevier, vol. 211(C).
    13. Wang, Yujie & Sun, Zhendong & Chen, Zonghai, 2019. "Development of energy management system based on a rule-based power distribution strategy for hybrid power sources," Energy, Elsevier, vol. 175(C), pages 1055-1066.
    14. Liu, Xinyang & Zheng, Zhuoyuan & Büyüktahtakın, İ. Esra & Zhou, Zhi & Wang, Pingfeng, 2021. "Battery asset management with cycle life prognosis," Reliability Engineering and System Safety, Elsevier, vol. 216(C).
    15. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    16. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    17. Yu Sui & Shiming Song, 2020. "A Multi-Agent Reinforcement Learning Framework for Lithium-ion Battery Scheduling Problems," Energies, MDPI, vol. 13(8), pages 1-13, April.
    18. Xu, Xiaodong & Tang, Shengjin & Yu, Chuanqiang & Xie, Jian & Han, Xuebing & Ouyang, Minggao, 2021. "Remaining Useful Life Prediction of Lithium-ion Batteries Based on Wiener Process Under Time-Varying Temperature Condition," Reliability Engineering and System Safety, Elsevier, vol. 214(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Yan, Dongyang & Li, Keping & Zhu, Qiaozhen & Liu, Yanyan, 2023. "A railway accident prevention method based on reinforcement learning – Active preventive strategy by multi-modal data," Reliability Engineering and System Safety, Elsevier, vol. 234(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ardeshiri, Reza Rouhi & Liu, Ming & Ma, Chengbin, 2022. "Multivariate stacked bidirectional long short term memory for lithium-ion battery health management," Reliability Engineering and System Safety, Elsevier, vol. 224(C).
    2. da Silva, Samuel Filgueira & Eckert, Jony Javorski & Corrêa, Fernanda Cristina & Silva, Fabrício Leonardo & Silva, Ludmila C.A. & Dedini, Franco Giuseppe, 2022. "Dual HESS electric vehicle powertrain design and fuzzy control based on multi-objective optimization to increase driving range and battery life cycle," Applied Energy, Elsevier, vol. 324(C).
    3. Xu, Xiaodong & Tang, Shengjin & Han, Xuebing & Lu, Languang & Wu, Yu & Yu, Chuanqiang & Sun, Xiaoyan & Xie, Jian & Feng, Xuning & Ouyang, Minggao, 2023. "Fast capacity prediction of lithium-ion batteries using aging mechanism-informed bidirectional long short-term memory network," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    4. Xiong, Rui & Duan, Yanzhou & Cao, Jiayi & Yu, Quanqing, 2018. "Battery and ultracapacitor in-the-loop approach to validate a real-time power management method for an all-climate electric vehicle," Applied Energy, Elsevier, vol. 217(C), pages 153-165.
    5. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    6. Zhang, Jiusi & Jiang, Yuchen & Li, Xiang & Huo, Mingyi & Luo, Hao & Yin, Shen, 2022. "An adaptive remaining useful life prediction approach for single battery with unlabeled small sample data and parameter uncertainty," Reliability Engineering and System Safety, Elsevier, vol. 222(C).
    7. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    8. Wei, Yupeng & Wu, Dazhong, 2023. "Prediction of state of health and remaining useful life of lithium-ion battery using graph convolutional network with dual attention mechanisms," Reliability Engineering and System Safety, Elsevier, vol. 230(C).
    9. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    10. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    11. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    12. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    13. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    14. Lyu, Dongzhen & Niu, Guangxing & Liu, Enhui & Zhang, Bin & Chen, Gang & Yang, Tao & Zio, Enrico, 2022. "Time space modelling for fault diagnosis and prognosis with uncertainty management: A general theoretical formulation," Reliability Engineering and System Safety, Elsevier, vol. 226(C).
    15. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    16. Wang, Yue & Zeng, Xiaohua & Song, Dafeng, 2020. "Hierarchical optimal intelligent energy management strategy for a power-split hybrid electric bus based on driving information," Energy, Elsevier, vol. 199(C).
    17. Chi T. P. Nguyen & Bảo-Huy Nguyễn & Minh C. Ta & João Pedro F. Trovão, 2023. "Dual-Motor Dual-Source High Performance EV: A Comprehensive Review," Energies, MDPI, vol. 16(20), pages 1-28, October.
    18. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    19. Nagulapati, Vijay Mohan & Lee, Hyunjun & Jung, DaWoon & Brigljevic, Boris & Choi, Yunseok & Lim, Hankwon, 2021. "Capacity estimation of batteries: Influence of training dataset size and diversity on data driven prognostic models," Reliability Engineering and System Safety, Elsevier, vol. 216(C).
    20. Lv, Haichao & Kang, Lixia & Liu, Yongzhong, 2023. "Analysis of strategies to maximize the cycle life of lithium-ion batteries based on aging trajectory prediction," Energy, Elsevier, vol. 275(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:reensy:v:224:y:2022:i:c:s0951832022001831. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/reliability-engineering-and-system-safety.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.