
Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning

Author

Listed:
  • Harrold, Daniel J.B.
  • Cao, Jun
  • Fan, Zhong

Abstract

To reduce global greenhouse gas emissions, the world must find intelligent solutions to maximise the utilisation of carbon-free renewable energy sources. In this paper, multi-agent reinforcement learning is used to control a microgrid in a mixed cooperative and competitive setting. The agents observe fluctuating energy demand, dynamic wholesale energy prices, and intermittent renewable generation, and they control a hybrid energy storage system to maximise the utilisation of the renewables and reduce the grid's energy costs. In addition, an aggregator agent trades with external microgrids, which compete against one another and against the aggregator to reduce their own energy bills. The algorithms deep deterministic policy gradient (DDPG) and multi-agent DDPG (MADDPG) are used to compare a single global controller with multiple distributed agents, along with the single- and multi-agent variants of distributional DDPG (D3PG) and twin delayed DDPG (TD3). The research found that it is significantly more profitable for the primary microgrid to sell energy on its own terms than to sell back to the utility grid, and that this arrangement also benefits the external microgrids, which reduce their own energy bills. The methods that produced the greatest profits were the multi-agent approaches, in which each agent has its own reward function based on the principle of marginal contribution from game theory. These agents were better able to evaluate their performance in controlling individual components of the environment, which allowed them to develop distinct policies for the different types of energy storage system.
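
The reward design highlighted in the abstract, where each agent is scored by its marginal contribution to the collective outcome, can be illustrated with a minimal sketch. This is not the authors' implementation: the agent names, the value function grid_value, and all numbers below are hypothetical stand-ins for whatever cost model the microgrid environment defines.

    # Sketch of a marginal-contribution reward from cooperative game theory:
    # agent i receives v(N) - v(N \ {i}), i.e. the drop in the grand
    # coalition's value when i is removed. Names and numbers are
    # illustrative only, not taken from the paper.
    from typing import Callable, Dict, FrozenSet

    def marginal_contribution_rewards(
        agents: FrozenSet[str],
        value: Callable[[FrozenSet[str]], float],
    ) -> Dict[str, float]:
        grand = value(agents)
        return {i: grand - value(agents - {i}) for i in agents}

    # Hypothetical value function: negative energy cost of the microgrid
    # when only the given subset of storage/trading controllers is active.
    def grid_value(active: FrozenSet[str]) -> float:
        baseline_cost = 100.0  # cost with no controller active (made up)
        savings = {"battery": 30.0, "supercapacitor": 10.0, "aggregator": 25.0}
        return -(baseline_cost - sum(savings[a] for a in active))

    agents = frozenset({"battery", "supercapacitor", "aggregator"})
    print(marginal_contribution_rewards(agents, grid_value))
    # {'battery': 30.0, 'supercapacitor': 10.0, 'aggregator': 25.0} (order may vary)

With an additive value function like this toy one, the marginal terms simply recover each agent's own savings; in the paper's setting the agents interact through shared storage and market prices, so an agent's marginal contribution differs from an independently computed reward, which is what lets each agent evaluate its own effect on the grid.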

Suggested Citation

  • Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
  • Handle: RePEc:eee:appene:v:318:y:2022:i:c:s0306261922005256
    DOI: 10.1016/j.apenergy.2022.119151

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922005256
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.119151?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can access this item with your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.
    2. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    3. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    4. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    5. Aneke, Mathew & Wang, Meihong, 2016. "Energy storage technologies and real life applications – A state of the art review," Applied Energy, Elsevier, vol. 179(C), pages 350-377.
    6. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    7. Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
    8. Martin J. Osborne & Ariel Rubinstein, 1994. "A Course in Game Theory," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262650401, April.
    9. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    10. Elia, A. & Kamidelivand, M. & Rogan, F. & Ó Gallachóir, B., 2021. "Impacts of innovation on renewable energy technology cost reductions," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    11. Carrillo, C. & Obando Montaño, A.F. & Cidrás, J. & Díaz-Dorado, E., 2013. "Review of power curve modelling for wind turbines," Renewable and Sustainable Energy Reviews, Elsevier, vol. 21(C), pages 572-581.
    12. Jing, Wenlong & Lai, Chean Hung & Wong, Wallace S.H. & Wong, M.L. Dennis, 2018. "A comprehensive study of battery-supercapacitor hybrid energy storage system for standalone PV power system in rural electrification," Applied Energy, Elsevier, vol. 224(C), pages 340-356.
    13. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    14. Wenying Li & Ming Tang & Xinzhen Zhang & Danhui Gao & Jian Wang, 2021. "Operation of Distributed Battery Considering Demand Response Using Deep Reinforcement Learning in Grid Edge Control," Energies, MDPI, vol. 14(22), pages 1-18, November.
    15. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun, 2022. "Coordinated load frequency control of multi-area integrated energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    16. Li, Xiangke & Dong, Chaoyu & Jiang, Wentao & Wu, Xiaohua, 2021. "An improved coordination control for a novel hybrid AC/DC microgrid architecture with combined energy storage system," Applied Energy, Elsevier, vol. 292(C).
    17. Bogdanov, Dmitrii & Ram, Manish & Aghahosseini, Arman & Gulagi, Ashish & Oyewo, Ayobami Solomon & Child, Michael & Caldera, Upeksha & Sadovskaia, Kristina & Farfan, Javier & De Souza Noel Simas Barbosa et al., 2021. "Low-cost renewable electricity as the key driver of the global energy transition towards sustainability," Energy, Elsevier, vol. 227(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    2. Paudel, Diwas & Das, Tapas K., 2023. "A deep reinforcement learning approach for power management of battery-assisted fast-charging EV hubs participating in day-ahead and real-time electricity markets," Energy, Elsevier, vol. 283(C).
    3. Shi, Linjun & Lao, Wenjie & Wu, Feng & Lee, Kwang Y. & Li, Yang & Lin, Keman, 2023. "DDPG-based load frequency control for power systems with renewable energy by DFIM pumped storage hydro unit," Renewable Energy, Elsevier, vol. 218(C).
    4. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    5. Ren, Kezheng & Liu, Jun & Liu, Xinglei & Nie, Yongxin, 2023. "Reinforcement Learning-Based Bi-Level strategic bidding model of Gas-fired unit in integrated electricity and natural gas markets preventing market manipulation," Applied Energy, Elsevier, vol. 336(C).
    6. Romain Mannini & Julien Eynard & Stéphane Grieu, 2022. "A Survey of Recent Advances in the Smart Management of Microgrids and Networked Microgrids," Energies, MDPI, vol. 15(19), pages 1-37, September.
    7. Cephas Samende & Zhong Fan & Jun Cao & Renzo Fabián & Gregory N. Baltas & Pedro Rodriguez, 2023. "Battery and Hydrogen Energy Storage Control in a Smart Energy Network with Flexible Energy Demand Using Deep Reinforcement Learning," Energies, MDPI, vol. 16(19), pages 1-20, September.
    8. Anis ur Rehman & Muhammad Ali & Sheeraz Iqbal & Aqib Shafiq & Nasim Ullah & Sattam Al Otaibi, 2022. "Artificial Intelligence-Based Control and Coordination of Multiple PV Inverters for Reactive Power/Voltage Control of Power Distribution Networks," Energies, MDPI, vol. 15(17), pages 1-13, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    4. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    5. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    6. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    7. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    8. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    9. Van-Hai Bui & Akhtar Hussain & Hak-Man Kim, 2019. "Q-Learning-Based Operation Strategy for Community Battery Energy Storage System (CBESS) in Microgrid System," Energies, MDPI, vol. 12(9), pages 1-17, May.
    10. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    11. Nyong-Bassey, Bassey Etim & Giaouris, Damian & Patsios, Charalampos & Papadopoulou, Simira & Papadopoulos, Athanasios I. & Walker, Sara & Voutetakis, Spyros & Seferlis, Panos & Gadoue, Shady, 2020. "Reinforcement learning based adaptive power pinch analysis for energy management of stand-alone hybrid energy storage systems considering uncertainty," Energy, Elsevier, vol. 193(C).
    12. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    13. Parwal, Arvind & Fregelius, Martin & Temiz, Irina & Göteman, Malin & Oliveira, Janaina G. de & Boström, Cecilia & Leijon, Mats, 2018. "Energy management for a grid-connected wave energy park through a hybrid energy storage system," Applied Energy, Elsevier, vol. 231(C), pages 399-411.
    14. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    15. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    16. Emilio Calvano & Giacomo Calzolari & Vincenzo Denicolò & Sergio Pastorello, 2019. "Algorithmic Pricing: What Implications for Competition Policy?," Review of Industrial Organization, Springer; The Industrial Organization Society, vol. 55(1), pages 155-171, August.
    17. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    18. Arroyo, Javier & Manna, Carlo & Spiessens, Fred & Helsen, Lieve, 2022. "Reinforced model predictive control (RL-MPC) for building energy management," Applied Energy, Elsevier, vol. 309(C).
    19. Liu, Shuai & Wei, Li & Wang, Huai, 2020. "Review on reliability of supercapacitors in energy storage applications," Applied Energy, Elsevier, vol. 278(C).
    20. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:318:y:2022:i:c:s0306261922005256. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.