
Short-Term Electricity Futures Investment Strategies for Power Producers Based on Multi-Agent Deep Reinforcement Learning

Author

Listed:
  • Yizheng Wang

    (Economic Research Institute of State Grid, Zhejiang Electric Power Company, Hangzhou 310000, China
    These authors contributed equally to this work.)

  • Enhao Shi

    (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
    These authors contributed equally to this work.)

  • Yang Xu

    (State Grid Zhejiang Electric Power Co., Ltd., Hangzhou 310000, China)

  • Jiahua Hu

    (State Grid Zhejiang Electric Power Co., Ltd., Hangzhou 310000, China)

  • Changsen Feng

    (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)

Abstract

The global development and enhancement of electricity financial markets aim to mitigate price risk in the electricity spot market. Power producers utilize financial derivatives for both hedging and speculation, necessitating careful selection of portfolio strategies. Current research on investment strategies for power financial derivatives primarily emphasizes risk management, resulting in a lack of a comprehensive investment framework. This study analyzes six short-term electricity futures contracts: base day, base week, base weekend, peak day, peak week, and peak weekend. A multi-agent deep reinforcement learning algorithm, Dual-Q MADDPG, is employed to learn from interactions with both the spot and futures market environments, considering the hedging and speculative behaviors of power producers. Upon completion of model training, the algorithm enables power producers to derive optimal portfolio strategies. Numerical experiments conducted in the Nordic electricity spot and futures markets indicate that the proposed Dual-Q MADDPG algorithm effectively reduces price risk in the spot market while generating substantial speculative returns. This study contributes to lowering barriers for power generators in the power finance market, thereby facilitating the widespread adoption of financial instruments, which enhances market liquidity and stability.
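To make the dual-objective idea in the abstract concrete, the snippet below is a minimal, hypothetical sketch: a producer-agent's actor proposes long/short positions over the six short-term contract types (base day, base week, base weekend, peak day, peak week, peak weekend), while two separate critics score the hedging objective and the speculative objective. It does not reproduce the paper's Dual-Q MADDPG (the multi-agent coupling, reward design, replay buffer, and critic training loop are omitted); the network sizes, the risk_aversion weight, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch of a dual-critic actor for futures position selection.
# Not the paper's implementation; hyperparameters and names are assumptions.
import torch
import torch.nn as nn

CONTRACTS = ["base_day", "base_week", "base_weekend",
             "peak_day", "peak_week", "peak_weekend"]

class Actor(nn.Module):
    """Maps a market-state vector to position weights over the six contracts."""
    def __init__(self, state_dim: int, n_contracts: int = len(CONTRACTS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_contracts), nn.Tanh(),  # positions in [-1, 1]: short/long
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class Critic(nn.Module):
    """Q(state, action) for a single objective (hedging or speculation)."""
    def __init__(self, state_dim: int, action_dim: int = len(CONTRACTS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

# One producer-agent: the actor is updated against a weighted combination of
# the two Q-values (the weighting scheme here is an assumption).
state_dim = 16
actor = Actor(state_dim)
q_hedge, q_spec = Critic(state_dim), Critic(state_dim)
optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)

state = torch.randn(32, state_dim)   # a batch of spot/futures market states
action = actor(state)
risk_aversion = 0.7                  # illustrative hedging/speculation trade-off
actor_loss = -(risk_aversion * q_hedge(state, action)
               + (1.0 - risk_aversion) * q_spec(state, action)).mean()
optimizer.zero_grad()
actor_loss.backward()
optimizer.step()
print("positions:", dict(zip(CONTRACTS, action[0].detach().tolist())))
```

In a full MADDPG-style setup, each producer would hold its own actor while the critics are trained centrally on the joint state and actions of all agents; the sketch above only shows how one agent's positions could be scored against two separate objectives.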

Suggested Citation

  • Yizheng Wang & Enhao Shi & Yang Xu & Jiahua Hu & Changsen Feng, 2024. "Short-Term Electricity Futures Investment Strategies for Power Producers Based on Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 17(21), pages 1-23, October.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:21:p:5350-:d:1508027

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/21/5350/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/21/5350/
    Download Restriction: no

    References listed on IDEAS

    1. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    2. Yucekaya, A., 2022. "Electricity trading for coal-fired power plants in Turkish power market considering uncertainty in spot, derivatives and bilateral contract market," Renewable and Sustainable Energy Reviews, Elsevier, vol. 159(C).
    3. Jaeck, Edouard & Lautier, Delphine, 2016. "Volatility in electricity derivative markets: The Samuelson effect revisited," Energy Economics, Elsevier, vol. 59(C), pages 300-313.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Thomas Deschatre & Xavier Warin, 2023. "A Common Shock Model for multidimensional electricity intraday price modelling with application to battery valuation," Papers 2307.16619, arXiv.org.
    2. Moradi, Amir & Salehi, Javad & Shafie-khah, Miadreza, 2024. "An interactive framework for strategic participation of a price-maker energy hub in the local gas and power markets based on the MPEC method," Energy, Elsevier, vol. 307(C).
    3. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    4. Asghari, M. & Afshari, H. & Jaber, M.Y. & Searcy, C., 2023. "Credibility-based cascading approach to achieve net-zero emissions in energy symbiosis networks using an Organic Rankine Cycle," Applied Energy, Elsevier, vol. 340(C).
    5. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    6. Delphine H. Lautier & Franck Raynaud & Michel A. Robe, 2019. "Shock Propagation Across the Futures Term Structure: Evidence from Crude Oil Prices," The Energy Journal, International Association for Energy Economics, vol. 0(Number 3).
    7. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    8. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    9. Thomas Deschatre & Pierre Gruet, 2021. "Electricity intraday price modeling with marked Hawkes processes," Papers 2103.07407, arXiv.org, revised Mar 2021.
    10. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    11. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    12. Zhou, Dequn & Zhang, Yining & Wang, Qunwei & Ding, Hao, 2024. "How do uncertain renewable energy induced risks evolve in a two-stage deregulated wholesale power market," Applied Energy, Elsevier, vol. 353(PB).
    13. Hao, Zhaojun & Di Maio, Francesco & Zio, Enrico, 2023. "A sequential decision problem formulation and deep reinforcement learning solution of the optimization of O&M of cyber-physical energy systems (CPESs) for reliable and safe power production and supply," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    14. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    15. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    16. Pinciroli, Luca & Baraldi, Piero & Ballabio, Guido & Compare, Michele & Zio, Enrico, 2022. "Optimization of the Operation and Maintenance of renewable energy systems by Deep Reinforcement Learning," Renewable Energy, Elsevier, vol. 183(C), pages 752-763.
    17. Piccirilli, Marco & Schmeck, Maren Diane & Vargiolu, Tiziano, 2021. "Capturing the power options smile by an additive two-factor model for overlapping futures prices," Energy Economics, Elsevier, vol. 95(C).
    18. Xie, Haonan & Jiang, Meihui & Zhang, Dongdong & Goh, Hui Hwang & Ahmad, Tanveer & Liu, Hui & Liu, Tianhao & Wang, Shuyao & Wu, Thomas, 2023. "IntelliSense technology in the new power systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 177(C).
    19. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    20. Zhang, Dayong, 2017. "Oil shocks and stock markets revisited: Measuring connectedness from a global perspective," Energy Economics, Elsevier, vol. 62(C), pages 323-333.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:21:p:5350-:d:1508027. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.