
Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid

Author

Listed:
  • Zhang, Xiaoshun
  • Bao, Tao
  • Yu, Tao
  • Yang, Bo
  • Han, Chuanjia

Abstract

This paper proposes a novel deep transfer Q-learning (DTQ) method, combined with a virtual leader-follower pattern, for the supply-demand Stackelberg game of the smart grid. Each generator and each load is regarded as a supplier agent and a demander agent, respectively, so that economic dispatch (ED) and demand response (DR) can be solved simultaneously by DTQ. To maximize the total payoff of all the agents, a virtual leader-follower pattern is employed to achieve reliable collaboration among the agents. Q-learning with a cooperative swarm is then adopted for each agent's knowledge learning, balancing exploration and exploitation in an unknown environment. Furthermore, the original extremely large-scale knowledge matrix is efficiently decomposed into several small-scale knowledge matrices through a binary state-action chain, which also allows continuous actions to be generated for continuous variables. Lastly, a deep belief network (DBN) is used for knowledge transfer, so that DTQ can effectively exploit prior knowledge from source tasks and rapidly obtain an optimal solution for a new task. Case studies on a 94-agent system and the practical Shenzhen power grid of southern China evaluate the performance of DTQ for the supply-demand Stackelberg game.
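
The binary state-action chain mentioned in the abstract can be made concrete with a short sketch. The following is a minimal illustration and not the authors' implementation: the class name BinaryChainQAgent, the per-bit dictionaries, the one-step reward credit, and the toy target-tracking reward are all assumptions made for exposition. It shows only the core idea, namely that a continuous action can be assembled from a chain of small binary Q-decisions instead of one very large knowledge matrix.

```python
import random

class BinaryChainQAgent:
    """Tabular Q-learning with a binary state-action chain: a continuous
    action is built from n_bits sequential binary decisions, so one huge
    Q-table over a finely discretized action space is replaced by n_bits
    small per-bit tables (illustrative sketch, not the authors' code)."""

    def __init__(self, n_bits=8, a_min=0.0, a_max=1.0, alpha=0.1, epsilon=0.1):
        self.n_bits, self.a_min, self.a_max = n_bits, a_min, a_max
        self.alpha, self.epsilon = alpha, epsilon
        # One small table per bit; keys are ((state, earlier_bits), bit).
        self.q = [dict() for _ in range(n_bits)]

    def _qval(self, k, key, bit):
        return self.q[k].get((key, bit), 0.0)

    def select_bits(self, state):
        # Choose bits sequentially (the "chain"): each decision is
        # conditioned on the environment state and the bits chosen so far.
        bits = []
        for k in range(self.n_bits):
            key = (state, tuple(bits))
            if random.random() < self.epsilon:
                bit = random.randint(0, 1)
            else:
                bit = max((0, 1), key=lambda a: self._qval(k, key, a))
            bits.append(bit)
        return bits

    def decode(self, bits):
        # Map the bit string to a continuous action in [a_min, a_max].
        level = sum(b << (self.n_bits - 1 - k) for k, b in enumerate(bits))
        return self.a_min + (self.a_max - self.a_min) * level / (2 ** self.n_bits - 1)

    def update(self, state, bits, reward):
        # One-step episodic simplification of the Q update: the shared
        # reward is credited to every link in the chain.
        for k in range(self.n_bits):
            key = (state, tuple(bits[:k]))
            old = self._qval(k, key, bits[k])
            self.q[k][(key, bits[k])] = old + self.alpha * (reward - old)


# Toy usage: one agent learns an output level that tracks a target of 0.7
# (a stand-in reward; the paper derives the actual payoff from ED and DR).
agent = BinaryChainQAgent()
for _ in range(5000):
    bits = agent.select_bits("peak_hour")
    agent.update("peak_hour", bits, reward=-abs(agent.decode(bits) - 0.7))
agent.epsilon = 0.0  # act greedily after training
print(round(agent.decode(agent.select_bits("peak_hour")), 3))
```

Because each bit decision is conditioned only on the state and the bits already chosen, every per-bit table stays small while the chain still covers 2^n_bits distinct action levels, which is the scaling benefit the abstract attributes to the decomposition.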

Suggested Citation

  • Zhang, Xiaoshun & Bao, Tao & Yu, Tao & Yang, Bo & Han, Chuanjia, 2017. "Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid," Energy, Elsevier, vol. 133(C), pages 348-365.
  • Handle: RePEc:eee:energy:v:133:y:2017:i:c:p:348-365
    DOI: 10.1016/j.energy.2017.05.114

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S036054421730871X
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2017.05.114?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Lijing Zhu & Jingzhou Wang & Arash Farnoosh & Xunzhang Pan, 2021. "A Game-Theory Analysis of Electric Vehicle Adoption in Beijing under License Plate Control Policy," Working Papers hal-03500766, HAL.
    2. Hu, Chenxi & Zhang, Jun & Yuan, Hongxia & Gao, Tianlu & Jiang, Huaiguang & Yan, Jing & Wenzhong Gao, David & Wang, Fei-Yue, 2022. "Black swan event small-sample transfer learning (BEST-L) and its case study on electrical power prediction in COVID-19," Applied Energy, Elsevier, vol. 309(C).
    3. Zhang, Xiaoshun & Guo, Zhengxun & Pan, Feng & Yang, Yuyao & Li, Chuansheng, 2023. "Dynamic carbon emission factor based interactive control of distribution network by a generalized regression neural network assisted optimization," Energy, Elsevier, vol. 283(C).
    4. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    5. Qian, Fanyue & Gao, Weijun & Yang, Yongwen & Yu, Dan, 2020. "Potential analysis of the transfer learning model in short and medium-term forecasting of building HVAC energy consumption," Energy, Elsevier, vol. 193(C).
    6. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    7. Luqin Fan & Jing Zhang & Yu He & Ying Liu & Tao Hu & Heng Zhang, 2021. "Optimal Scheduling of Microgrid Based on Deep Deterministic Policy Gradient and Transfer Learning," Energies, MDPI, vol. 14(3), pages 1-15, January.
    8. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    9. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    10. Xue Zhou & Jianan Shou & Weiwei Cui, 2022. "A Game-Theoretic Approach to Design Solar Power Generation/Storage Microgrid System for the Community in China," Sustainability, MDPI, vol. 14(16), pages 1-21, August.
    11. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    12. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
    13. Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    14. Zhang, Xiaoshun & Li, Shengnan & He, Tingyi & Yang, Bo & Yu, Tao & Li, Haofei & Jiang, Lin & Sun, Liming, 2019. "Memetic reinforcement learning based maximum power point tracking design for PV systems under partial shading condition," Energy, Elsevier, vol. 174(C), pages 1079-1090.
