
A Multi-Agent Reinforcement Learning Framework for Lithium-ion Battery Scheduling Problems

Author

Listed:
  • Yu Sui

    (1812 Seville Way, San Jose, CA 95131, USA)

  • Shiming Song

    (1812 Seville Way, San Jose, CA 95131, USA)

Abstract

This paper presents a reinforcement learning framework for solving battery scheduling problems in order to extend the lifetime of batteries used in electric vehicles (EVs), cellular phones, and embedded systems. Battery pack lifetime has often been the limiting factor in many of today’s smart systems, from mobile devices and wireless sensor networks to EVs. Smart charge-discharge scheduling of battery packs is essential to obtain a superlinear gain in overall system lifetime, owing to the recovery effect and the nonlinearities in battery characteristics. Smart scheduling has also been shown to be beneficial for optimizing the system’s thermal profile and minimizing the chances of irreversible battery damage. The rapidly growing community and development infrastructure have recently added deep reinforcement learning (DRL) to the tools available for designing battery management systems. By leveraging the representational power of deep neural networks and the flexibility and versatility of reinforcement learning, DRL offers a powerful solution for both roofline analysis and real-world deployment in complicated use cases. This work presents a DRL-based framework for solving battery scheduling problems, with high flexibility to fit various battery models and application scenarios. In discussing this framework, comparisons are also made between conventional heuristics-based methods and DRL. The experiments demonstrate that the DRL-based scheduling framework achieves battery lifetime comparable to that of the best weighted-k round-robin (kRR) heuristic scheduling algorithm, while offering much greater flexibility in accommodating a wide range of battery models and use cases, including thermal control and imbalanced batteries.
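The weighted-k round-robin (kRR) heuristic that the abstract uses as a baseline can be sketched roughly as follows. This is a minimal illustrative sketch only: the function name, the choice to weight cells by remaining charge, and the parameters are assumptions for exposition, not the paper's implementation.

```python
def krr_schedule(charges, k):
    """Select the k cells with the highest remaining charge to discharge
    this step; the remaining cells idle, which lets them benefit from the
    recovery effect mentioned in the abstract."""
    ranked = sorted(range(len(charges)), key=lambda i: charges[i], reverse=True)
    return ranked[:k]

# Usage: a four-cell pack where the two fullest cells serve the load each step.
charges = [0.9, 0.4, 0.7, 0.6]
active = krr_schedule(charges, k=2)  # indices of cells chosen to discharge
```

A DRL scheduler, by contrast, would replace the fixed charge-based ranking with a learned policy, which is what gives it the flexibility across battery models and thermal constraints that the abstract highlights.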

Suggested Citation

  • Yu Sui & Shiming Song, 2020. "A Multi-Agent Reinforcement Learning Framework for Lithium-ion Battery Scheduling Problems," Energies, MDPI, vol. 13(8), pages 1-13, April.
  • Handle: RePEc:gam:jeners:v:13:y:2020:i:8:p:1982-:d:346625

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/13/8/1982/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/13/8/1982/
    Download Restriction: no

    References listed on IDEAS

    1. Brida V. Mbuwir & Frederik Ruelens & Fred Spiessens & Geert Deconinck, 2017. "Battery Energy Management in a Microgrid Using Batch Reinforcement Learning," Energies, MDPI, vol. 10(11), pages 1-19, November.
    2. Alex Graves & Greg Wayne & Malcolm Reynolds & Tim Harley & Ivo Danihelka & Agnieszka Grabska-Barwińska & Sergio Gómez Colmenarejo & Edward Grefenstette & Tiago Ramalho & John Agapiou & Adrià Puigdomèn, 2016. "Hybrid computing using a neural network with dynamic external memory," Nature, Nature, vol. 538(7626), pages 471-476, October.
    3. Yalian Yang & Xiaosong Hu & Datong Qing & Fangyuan Chen, 2013. "Arrhenius Equation-Based Cell-Health Assessment: Application to Thermal Energy Management Design of a HEV NiMH Battery Pack," Energies, MDPI, vol. 6(5), pages 1-17, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Jorge Varela Barreras & Ricardo de Castro & Yihao Wan & Tomislav Dragicevic, 2021. "A Consensus Algorithm for Multi-Objective Battery Balancing," Energies, MDPI, vol. 14(14), pages 1-25, July.
    2. Harri Aaltonen & Seppo Sierla & Rakshith Subramanya & Valeriy Vyatkin, 2021. "A Simulation Environment for Training a Reinforcement Learning Agent Trading a Battery Storage," Energies, MDPI, vol. 14(17), pages 1-20, September.
    3. Tian, Yuan & Han, Minghao & Kulkarni, Chetan & Fink, Olga, 2022. "A prescriptive Dirichlet power allocation policy with deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 224(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Harasis, Salman & Khan, Irfan & Massoud, Ahmed, 2024. "Enabling large-scale integration of electric bus fleets in harsh environments: Possibilities, potentials, and challenges," Energy, Elsevier, vol. 300(C).
    2. Harri Aaltonen & Seppo Sierla & Rakshith Subramanya & Valeriy Vyatkin, 2021. "A Simulation Environment for Training a Reinforcement Learning Agent Trading a Battery Storage," Energies, MDPI, vol. 14(17), pages 1-20, September.
    3. Ning Wang & Weisheng Xu & Weihui Shao & Zhiyu Xu, 2019. "A Q-Cube Framework of Reinforcement Learning Algorithm for Continuous Double Auction among Microgrids," Energies, MDPI, vol. 12(15), pages 1-26, July.
    4. Chen, Pengzhan & Liu, Mengchao & Chen, Chuanxi & Shang, Xin, 2019. "A battery management strategy in microgrid for personalized customer requirements," Energy, Elsevier, vol. 189(C).
    5. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    6. Mayank Jha & Frede Blaabjerg & Mohammed Ali Khan & Varaha Satya Bharath Kurukuru & Ahteshamul Haque, 2019. "Intelligent Control of Converter for Electric Vehicles Charging Station," Energies, MDPI, vol. 12(12), pages 1-25, June.
    7. Alexander N. Kozlov & Nikita V. Tomin & Denis N. Sidorov & Electo E. S. Lora & Victor G. Kurbatsky, 2020. "Optimal Operation Control of PV-Biomass Gasifier-Diesel-Hybrid Systems Using Reinforcement Learning Techniques," Energies, MDPI, vol. 13(10), pages 1-20, May.
    8. Yujie Wu & Bizhao Shi & Zhong Zheng & Hanle Zheng & Fangwen Yu & Xue Liu & Guojie Luo & Lei Deng, 2024. "Adaptive spatiotemporal neural networks through complementary hybridization," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    9. Daniel Philps & Tillman Weyde & Artur d'Avila Garcez & Roy Batchelor, 2018. "Continual Learning Augmented Investment Decisions," Papers 1812.02340, arXiv.org, revised Jan 2019.
    10. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    11. Martin Henke & Getu Hailu, 2020. "Thermal Management of Stationary Battery Systems: A Literature Review," Energies, MDPI, vol. 13(16), pages 1-16, August.
    12. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    13. Garza-González, E. & Posadas-Castillo, C. & López-Mancilla, D. & Soriano-Sánchez, A.G., 2020. "Increasing synchronizability in a scale-free network via edge elimination," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 174(C), pages 233-243.
    14. Juan D. Velásquez & Lorena Cadavid & Carlos J. Franco, 2023. "Intelligence Techniques in Sustainable Energy: Analysis of a Decade of Advances," Energies, MDPI, vol. 16(19), pages 1-45, October.
    15. Grace Muriithi & Sunetra Chowdhury, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
    16. Khawaja Haider Ali & Mohammad Abusara & Asif Ali Tahir & Saptarshi Das, 2023. "Dual-Layer Q-Learning Strategy for Energy Management of Battery Storage in Grid-Connected Microgrids," Energies, MDPI, vol. 16(3), pages 1-17, January.
    17. Tsianikas, Stamatis & Yousefi, Nooshin & Zhou, Jian & Rodgers, Mark D. & Coit, David, 2021. "A storage expansion planning framework using reinforcement learning and simulation-based optimization," Applied Energy, Elsevier, vol. 290(C).
    18. Álex Omar Topa Gavilema & José Domingo Álvarez & José Luis Torres Moreno & Manuel Pérez García, 2021. "Towards Optimal Management in Microgrids: An Overview," Energies, MDPI, vol. 14(16), pages 1-25, August.
    19. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    20. Xiaobin Hong & Nianzhi Li & Jinheng Feng & Qingzhao Kong & Guixiong Liu, 2015. "Multi-Electrode Resistivity Probe for Investigation of Local Temperature Inside Metal Shell Battery Cells via Resistivity: Experiments and Evaluation of Electrical Resistance Tomography," Energies, MDPI, vol. 8(2), pages 1-23, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:13:y:2020:i:8:p:1982-:d:346625. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.