Printed from https://ideas.repec.org/a/gam/jeners/v17y2024i18p4557-d1476145.html

Reinforcement Learning for Fair and Efficient Charging Coordination for Smart Grid

Authors

Listed:
  • Amr A. Elshazly

    (Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA)

  • Mahmoud M. Badr

    (Department of Network and Computer Security, College of Engineering, SUNY Polytechnic Institute, Utica, NY 13502, USA
    Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt)

  • Mohamed Mahmoud

    (Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, TN 38505, USA)

  • William Eberle

    (Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA)

  • Maazen Alsabaan

    (Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia)

  • Mohamed I. Ibrahem

    (Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
    School of Computer and Cyber Sciences, Augusta University, Augusta, GA 30912, USA)

Abstract

The integration of renewable energy sources, such as rooftop solar panels, into smart grids poses significant challenges for managing customer-side battery storage. In response, this paper introduces a novel reinforcement learning (RL) approach aimed at optimizing the coordination of these batteries. Our approach utilizes a single-agent, multi-environment RL system designed to balance power saving, customer satisfaction, and fairness in power distribution. The RL agent dynamically allocates charging power while accounting for individual battery levels and grid constraints, employing an actor–critic algorithm. The actor determines the optimal charging power based on real-time conditions, while the critic iteratively refines the policy to enhance overall performance. The key advantages of our approach include: (1) Adaptive Power Allocation: The RL agent effectively reduces overall power consumption by optimizing grid power allocation, leading to more efficient energy use. (2) Enhanced Customer Satisfaction: By increasing the total available power from the grid, our approach significantly reduces instances of battery levels falling below the critical state of charge (SoC), thereby improving customer satisfaction. (3) Fair Power Distribution: Fairness improvements are notable, with the highest fair reward rising by 173.7% across different scenarios, demonstrating the effectiveness of our method in minimizing discrepancies in power distribution. (4) Improved Total Reward: The total reward also shows a significant increase, up by 94.1%, highlighting the efficiency of our RL-based approach. Experimental results using a real-world dataset confirm that our RL approach markedly improves fairness, power efficiency, and customer satisfaction, underscoring its potential for optimizing smart grid operations and energy management systems.
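The fairness and allocation ideas described in the abstract can be sketched in a few lines. The snippet below is an illustrative heuristic only, not the authors' learned actor–critic policy: it allocates limited grid power in proportion to each battery's state-of-charge (SoC) deficit and scores the result with Jain's fairness index. All names (`allocate_charging`, `jain_fairness`) and the proportional-deficit rule are assumptions for illustration.

```python
import numpy as np

def allocate_charging(soc, capacity, grid_power):
    """Split limited grid power across batteries in proportion to each
    battery's SoC deficit (illustrative heuristic, not the paper's policy).

    soc      -- per-battery state of charge in [0, 1]
    capacity -- per-battery capacity (kWh)
    grid_power -- total power available this step (kW, 1 h step assumed)
    """
    deficit = np.maximum(capacity * (1.0 - np.asarray(soc, dtype=float)), 0.0)
    if deficit.sum() == 0.0:
        return np.zeros_like(deficit)        # everyone is full
    share = deficit / deficit.sum()          # proportional-fairness weights
    # Each battery gets its share, capped at what it can actually absorb.
    return np.minimum(share * grid_power, deficit)

def jain_fairness(x):
    """Jain's fairness index: 1.0 = perfectly even, 1/n = one battery gets all."""
    x = np.asarray(x, dtype=float)
    if np.allclose(x, 0.0):
        return 1.0
    return (x.sum() ** 2) / (len(x) * (x ** 2).sum())
```

For example, two 10 kWh batteries at 50% and 90% SoC sharing 4 kW would receive roughly 3.33 kW and 0.67 kW under this rule; an RL agent such as the one in the paper would instead learn the allocation that maximizes a combined saving/satisfaction/fairness reward.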

Suggested Citation

  • Amr A. Elshazly & Mahmoud M. Badr & Mohamed Mahmoud & William Eberle & Maazen Alsabaan & Mohamed I. Ibrahem, 2024. "Reinforcement Learning for Fair and Efficient Charging Coordination for Smart Grid," Energies, MDPI, vol. 17(18), pages 1-28, September.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:18:p:4557-:d:1476145
    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/18/4557/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/18/4557/
    Download Restriction: no

    References listed on IDEAS

    1. Ceusters, Glenn & Rodríguez, Román Cantú & García, Alberte Bouso & Franke, Rüdiger & Deconinck, Geert & Helsen, Lieve & Nowé, Ann & Messagie, Maarten & Camargo, Luis Ramirez, 2021. "Model-predictive control and reinforcement learning in multi-energy system case studies," Applied Energy, Elsevier, vol. 303(C).
    2. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    3. Wang, Kang & Wang, Haixin & Yang, Zihao & Feng, Jiawei & Li, Yanzhen & Yang, Junyou & Chen, Zhe, 2023. "A transfer learning method for electric vehicles charging strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 343(C).
    4. Verschae, Rodrigo & Kawashima, Hiroaki & Kato, Takekazu & Matsuyama, Takashi, 2016. "Coordinated energy management for inter-community imbalance minimization," Renewable Energy, Elsevier, vol. 87(P2), pages 922-935.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    3. Yi, Zonggen & Luo, Yusheng & Westover, Tyler & Katikaneni, Sravya & Ponkiya, Binaka & Sah, Suba & Mahmud, Sadab & Raker, David & Javaid, Ahmad & Heben, Michael J. & Khanna, Raghav, 2022. "Deep reinforcement learning based optimization for a tightly coupled nuclear renewable integrated energy system," Applied Energy, Elsevier, vol. 328(C).
    4. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    5. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    6. Zhang, Tianren & Huang, Yuping & Liao, Hui & Liang, Yu, 2023. "A hybrid electric vehicle load classification and forecasting approach based on GBDT algorithm and temporal convolutional network," Applied Energy, Elsevier, vol. 351(C).
    7. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    8. Charalampos Rafail Lazaridis & Iakovos Michailidis & Georgios Karatzinis & Panagiotis Michailidis & Elias Kosmatopoulos, 2024. "Evaluating Reinforcement Learning Algorithms in Residential Energy Saving and Comfort Management," Energies, MDPI, vol. 17(3), pages 1-33, January.
    9. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    10. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    11. Khaki, Behnam & Chu, Chicheng & Gadh, Rajit, 2019. "Hierarchical distributed framework for EV charging scheduling using exchange problem," Applied Energy, Elsevier, vol. 241(C), pages 461-471.
    12. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    13. Rodrigo Verschae & Takekazu Kato & Takashi Matsuyama, 2016. "Energy Management in Prosumer Communities: A Coordinated Approach," Energies, MDPI, vol. 9(7), pages 1-27, July.
    14. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    15. Chen, Minghao & Sun, Yi & Xie, Zhiyuan & Lin, Nvgui & Wu, Peng, 2023. "An efficient and privacy-preserving algorithm for multiple energy hubs scheduling with federated and matching deep reinforcement learning," Energy, Elsevier, vol. 284(C).
    16. Machado, Diogo Ortiz & Chicaiza, William D. & Escaño, Juan M. & Gallego, Antonio J. & de Andrade, Gustavo A. & Normey-Rico, Julio E. & Bordons, Carlos & Camacho, Eduardo F., 2023. "Digital twin of a Fresnel solar collector for solar cooling," Applied Energy, Elsevier, vol. 339(C).
    17. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    18. Gao, Yuan & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-step solar irradiation prediction based on weather forecast and generative deep learning model," Renewable Energy, Elsevier, vol. 188(C), pages 637-650.
    19. Jordi de la Hoz & Àlex Alonso & Sergio Coronas & Helena Martín & José Matas, 2020. "Impact of Different Regulatory Structures on the Management of Energy Communities," Energies, MDPI, vol. 13(11), pages 1-26, June.
    20. Zhang, Yijie & Ma, Tao & Yang, Hongxing, 2022. "Grid-connected photovoltaic battery systems: A comprehensive review and perspectives," Applied Energy, Elsevier, vol. 328(C).


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.