
Reinforcement Learning for Fair and Efficient Charging Coordination for Smart Grid

Author

Listed:
  • Amr A. Elshazly

    (Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA)

  • Mahmoud M. Badr

    (Department of Network and Computer Security, College of Engineering, SUNY Polytechnic Institute, Utica, NY 13502, USA
    Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt)

  • Mohamed Mahmoud

    (Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, TN 38505, USA)

  • William Eberle

    (Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA)

  • Maazen Alsabaan

    (Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia)

  • Mohamed I. Ibrahem

    (Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
    School of Computer and Cyber Sciences, Augusta University, Augusta, GA 30912, USA)

Abstract

The integration of renewable energy sources, such as rooftop solar panels, into smart grids poses significant challenges for managing customer-side battery storage. In response, this paper introduces a novel reinforcement learning (RL) approach aimed at optimizing the coordination of these batteries. Our approach utilizes a single-agent, multi-environment RL system designed to balance power saving, customer satisfaction, and fairness in power distribution. The RL agent dynamically allocates charging power while accounting for individual battery levels and grid constraints, employing an actor–critic algorithm. The actor determines the optimal charging power based on real-time conditions, while the critic iteratively refines the policy to enhance overall performance. The key advantages of our approach include: (1) Adaptive Power Allocation: The RL agent effectively reduces overall power consumption by optimizing grid power allocation, leading to more efficient energy use. (2) Enhanced Customer Satisfaction: By increasing the total available power from the grid, our approach significantly reduces instances of battery levels falling below the critical state of charge (SoC), thereby improving customer satisfaction. (3) Fair Power Distribution: Fairness improvements are notable, with the highest fair reward rising by 173.7% across different scenarios, demonstrating the effectiveness of our method in minimizing discrepancies in power distribution. (4) Improved Total Reward: The total reward also shows a significant increase, up by 94.1%, highlighting the efficiency of our RL-based approach. Experimental results using a real-world dataset confirm that our RL approach markedly improves fairness, power efficiency, and customer satisfaction, underscoring its potential for optimizing smart grid operations and energy management systems.
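The abstract describes the actor–critic loop only at a high level. As a rough illustration of that structure, the sketch below implements a minimal single-agent actor–critic in Python (NumPy) that picks which home battery receives the charging slot each step. Everything here is an assumption for illustration: the constants, the softmax policy over a linear state encoding, the random demand model, and the satisfaction/fairness reward terms are placeholders, not the paper's actual formulation.

```python
import numpy as np

# All constants below are illustrative placeholders, not values from the paper.
N_BATTERIES = 4
CHARGE_STEP = 0.10      # SoC gained by the battery selected for charging
CRITICAL_SOC = 0.20     # satisfaction penalty applies below this state of charge
GAMMA = 0.95            # discount factor
ALPHA_ACTOR, ALPHA_CRITIC = 0.01, 0.05
EPISODES, STEPS = 500, 48

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros((N_BATTERIES, N_BATTERIES))  # actor: linear logits over the SoC state
w = np.zeros(N_BATTERIES)                     # critic: linear value function V(s) = w . s

for ep in range(EPISODES):
    soc = rng.uniform(0.1, 0.9, N_BATTERIES)  # initial states of charge
    for t in range(STEPS):
        s = soc.copy()
        probs = softmax(theta @ s)             # policy over "which battery charges now"
        a = rng.choice(N_BATTERIES, p=probs)
        # Environment step: stochastic household demand drains every battery,
        # while the selected battery receives the grid charging slot.
        soc = soc - rng.uniform(0.02, 0.08, N_BATTERIES)
        soc[a] += CHARGE_STEP
        soc = np.clip(soc, 0.0, 1.0)
        # Reward: customer satisfaction (penalize batteries below critical SoC)
        # plus fairness (penalize spread in SoC across customers).
        reward = -np.sum(soc < CRITICAL_SOC) - np.std(soc)
        # One-step actor-critic (TD(0)) update.
        delta = reward + GAMMA * (w @ soc) - (w @ s)             # TD error
        w += ALPHA_CRITIC * delta * s                            # critic step
        grad_log = np.outer(np.eye(N_BATTERIES)[a] - probs, s)   # grad of log pi(a|s)
        theta += ALPHA_ACTOR * delta * grad_log                  # policy gradient step
```

In the paper itself the agent allocates continuous charging power across multiple environments subject to grid constraints; this discrete, linear variant conveys only the shape of the actor–critic update, in which the critic's TD error drives both the value and policy steps.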

Suggested Citation

  • Amr A. Elshazly & Mahmoud M. Badr & Mohamed Mahmoud & William Eberle & Maazen Alsabaan & Mohamed I. Ibrahem, 2024. "Reinforcement Learning for Fair and Efficient Charging Coordination for Smart Grid," Energies, MDPI, vol. 17(18), pages 1-28, September.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:18:p:4557-:d:1476145

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/18/4557/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/18/4557/
    Download Restriction: no

    References listed on IDEAS

    1. Ceusters, Glenn & Rodríguez, Román Cantú & García, Alberte Bouso & Franke, Rüdiger & Deconinck, Geert & Helsen, Lieve & Nowé, Ann & Messagie, Maarten & Camargo, Luis Ramirez, 2021. "Model-predictive control and reinforcement learning in multi-energy system case studies," Applied Energy, Elsevier, vol. 303(C).
    2. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    3. Wang, Kang & Wang, Haixin & Yang, Zihao & Feng, Jiawei & Li, Yanzhen & Yang, Junyou & Chen, Zhe, 2023. "A transfer learning method for electric vehicles charging strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 343(C).
    4. Verschae, Rodrigo & Kawashima, Hiroaki & Kato, Takekazu & Matsuyama, Takashi, 2016. "Coordinated energy management for inter-community imbalance minimization," Renewable Energy, Elsevier, vol. 87(P2), pages 922-935.

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Jozsef Menyhart, 2025. "Electric Vehicles and Energy Communities: Vehicle-to-Grid Opportunities and a Sustainable Future," Energies, MDPI, vol. 18(4), pages 1-17, February.
    2. Amr A. Elshazly & Islam Elgarhy & Mohamed Mahmoud & Mohamed I. Ibrahem & Maazen Alsabaan, 2025. "A Privacy-Preserving RL-Based Secure Charging Coordinator Using Efficient FL for Smart Grid Home Batteries," Energies, MDPI, vol. 18(4), pages 1-34, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    2. Amr A. Elshazly & Islam Elgarhy & Mohamed Mahmoud & Mohamed I. Ibrahem & Maazen Alsabaan, 2025. "A Privacy-Preserving RL-Based Secure Charging Coordinator Using Efficient FL for Smart Grid Home Batteries," Energies, MDPI, vol. 18(4), pages 1-34, February.
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    5. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    6. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    7. Rodrigo Verschae & Takekazu Kato & Takashi Matsuyama, 2016. "Energy Management in Prosumer Communities: A Coordinated Approach," Energies, MDPI, vol. 9(7), pages 1-27, July.
    8. Jordi de la Hoz & Àlex Alonso & Sergio Coronas & Helena Martín & José Matas, 2020. "Impact of Different Regulatory Structures on the Management of Energy Communities," Energies, MDPI, vol. 13(11), pages 1-26, June.
    9. Zhang, Yijie & Ma, Tao & Yang, Hongxing, 2022. "Grid-connected photovoltaic battery systems: A comprehensive review and perspectives," Applied Energy, Elsevier, vol. 328(C).
    10. Hou, Guolian & Huang, Ting & Zheng, Fumeng & Huang, Congzhi, 2024. "A hierarchical reinforcement learning GPC for flexible operation of ultra-supercritical unit considering economy," Energy, Elsevier, vol. 289(C).
    11. Kim, Donghun & Wang, Zhe & Brugger, James & Blum, David & Wetter, Michael & Hong, Tianzhen & Piette, Mary Ann, 2022. "Site demonstration and performance evaluation of MPC for a large chiller plant with TES for renewable energy integration and grid decarbonization," Applied Energy, Elsevier, vol. 321(C).
    12. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    13. Homod, Raad Z. & Munahi, Basil Sh. & Mohammed, Hayder Ibrahim & Albadr, Musatafa Abbas Abbood & Abderrahmane, AISSA & Mahdi, Jasim M. & Ben Hamida, Mohamed Bechir & Alhasnawi, Bilal Naji & Albahri, A., 2024. "Deep clustering of reinforcement learning based on the bang-bang principle to optimize the energy in multi-boiler for intelligent buildings," Applied Energy, Elsevier, vol. 356(C).
    14. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    15. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
    16. Schmitz, Simon & Brucke, Karoline & Kasturi, Pranay & Ansari, Esmail & Klement, Peter, 2024. "Forecast-based and data-driven reinforcement learning for residential heat pump operation," Applied Energy, Elsevier, vol. 371(C).
    17. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    18. Kyritsis, A. & Voglitsis, D. & Papanikolaou, N. & Tselepis, S. & Christodoulou, C. & Gonos, I. & Kalogirou, S.A., 2017. "Evolution of PV systems in Greece and review of applicable solutions for higher penetration levels," Renewable Energy, Elsevier, vol. 109(C), pages 487-499.
    19. Hashemipour, Naser & Crespo del Granado, Pedro & Aghaei, Jamshid, 2021. "Dynamic allocation of peer-to-peer clusters in virtual local electricity markets: A marketplace for EV flexibility," Energy, Elsevier, vol. 236(C).
    20. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:18:p:4557-:d:1476145. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.