Printed from https://ideas.repec.org/a/gam/jeners/v17y2024i21p5350-d1508027.html

Short-Term Electricity Futures Investment Strategies for Power Producers Based on Multi-Agent Deep Reinforcement Learning

Authors

Listed:
  • Yizheng Wang

    (Economic Research Institute of State Grid, Zhejiang Electric Power Company, Hangzhou 310000, China
    These authors contributed equally to this work.)

  • Enhao Shi

    (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
    These authors contributed equally to this work.)

  • Yang Xu

    (State Grid Zhejiang Electric Power Co., Ltd., Hangzhou 310000, China)

  • Jiahua Hu

    (State Grid Zhejiang Electric Power Co., Ltd., Hangzhou 310000, China)

  • Changsen Feng

    (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)

Abstract

The global development and enhancement of electricity financial markets aim to mitigate price risk in the electricity spot market. Power producers utilize financial derivatives for both hedging and speculation, necessitating careful selection of portfolio strategies. Current research on investment strategies for power financial derivatives primarily emphasizes risk management, resulting in a lack of a comprehensive investment framework. This study analyzes six short-term electricity futures contracts: base day, base week, base weekend, peak day, peak week, and peak weekend. A multi-agent deep reinforcement learning algorithm, Dual-Q MADDPG, is employed to learn from interactions with both the spot and futures market environments, considering the hedging and speculative behaviors of power producers. Upon completion of model training, the algorithm enables power producers to derive optimal portfolio strategies. Numerical experiments conducted in the Nordic electricity spot and futures markets indicate that the proposed Dual-Q MADDPG algorithm effectively reduces price risk in the spot market while generating substantial speculative returns. This study contributes to lowering barriers for power generators in the power finance market, thereby facilitating the widespread adoption of financial instruments, which enhances market liquidity and stability.
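The abstract describes a dual-critic design in which one value estimate scores hedging (spot-price risk reduction) and another scores speculative return, with the two combined to choose futures positions. The paper's actual Dual-Q MADDPG uses neural actor-critic agents; as a much-simplified illustration of the two-critic idea only, the sketch below uses tabular Q-learning with a hypothetical discretized price state, three hypothetical position actions, and synthetic rewards. All names, state/action sizes, and reward shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PRICE_STATES = 5   # discretized spot-price states (hypothetical)
N_ACTIONS = 3        # futures position: 0 = none, 1 = hedge half, 2 = hedge full
ALPHA, GAMMA, LAMBDA = 0.1, 0.9, 0.5  # learning rate, discount, risk weight

# Two critics, echoing the "dual-Q" idea: one values hedging (risk
# reduction against the spot market), one values speculative profit.
q_hedge = np.zeros((N_PRICE_STATES, N_ACTIONS))
q_spec = np.zeros((N_PRICE_STATES, N_ACTIONS))

def select_action(state, eps=0.1):
    """Epsilon-greedy over a risk-weighted blend of the two Q-values."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    combined = LAMBDA * q_hedge[state] + (1 - LAMBDA) * q_spec[state]
    return int(np.argmax(combined))

def update(state, action, r_hedge, r_spec, next_state):
    """Independent TD(0) update for each critic against its own reward."""
    for q, r in ((q_hedge, r_hedge), (q_spec, r_spec)):
        target = r + GAMMA * q[next_state].max()
        q[state, action] += ALPHA * (target - q[state, action])

# Toy episode on synthetic rewards, purely to exercise the update rule:
# hedging pays more for larger positions in volatile (high-index) states,
# speculation pays more for staying unhedged.
state = 0
for _ in range(500):
    action = select_action(state)
    r_hedge = action * state * 0.1
    r_spec = (N_ACTIONS - 1 - action) * 0.05
    next_state = int(rng.integers(N_PRICE_STATES))
    update(state, action, r_hedge, r_spec, next_state)
    state = next_state
```

The weight LAMBDA plays the role of the producer's risk preference: pushing it toward 1 makes the blended policy favor the hedging critic, toward 0 the speculative one. In the paper this trade-off is learned by interacting multi-agent actor-critic networks rather than fixed by hand.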

Suggested Citation

  • Yizheng Wang & Enhao Shi & Yang Xu & Jiahua Hu & Changsen Feng, 2024. "Short-Term Electricity Futures Investment Strategies for Power Producers Based on Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 17(21), pages 1-23, October.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:21:p:5350-:d:1508027

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/21/5350/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/21/5350/
    Download Restriction: no

    References listed on IDEAS

    1. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    2. Yucekaya, A., 2022. "Electricity trading for coal-fired power plants in Turkish power market considering uncertainty in spot, derivatives and bilateral contract market," Renewable and Sustainable Energy Reviews, Elsevier, vol. 159(C).
    3. Jaeck, Edouard & Lautier, Delphine, 2016. "Volatility in electricity derivative markets: The Samuelson effect revisited," Energy Economics, Elsevier, vol. 59(C), pages 300-313.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Han, Lin & Kordzakhia, Nino & Trück, Stefan, 2020. "Volatility spillovers in Australian electricity markets," Energy Economics, Elsevier, vol. 90(C).
    2. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    3. Thomas Deschatre & Xavier Warin, 2023. "A Common Shock Model for multidimensional electricity intraday price modelling with application to battery valuation," Papers 2307.16619, arXiv.org.
    4. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    5. Adebayo Oshingbesan & Eniola Ajiboye & Peruth Kamashazi & Timothy Mbaka, 2022. "Model-Free Reinforcement Learning for Asset Allocation," Papers 2209.10458, arXiv.org.
    6. Asghari, M. & Afshari, H. & Jaber, M.Y. & Searcy, C., 2023. "Credibility-based cascading approach to achieve net-zero emissions in energy symbiosis networks using an Organic Rankine Cycle," Applied Energy, Elsevier, vol. 340(C).
    7. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    8. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    9. Delphine H. Lautier & Franck Raynaud & Michel A. Robe, 2019. "Shock Propagation Across the Futures Term Structure: Evidence from Crude Oil Prices," The Energy Journal, International Association for Energy Economics, vol. 0(Number 3).
    10. Yoshiharu Sato, 2019. "Model-Free Reinforcement Learning for Financial Portfolios: A Brief Survey," Papers 1904.04973, arXiv.org, revised May 2019.
    11. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    12. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    13. Piccirilli, Marco & Schmeck, Maren Diane & Vargiolu, Tiziano, 2021. "Capturing the power options smile by an additive two-factor model for overlapping futures prices," Energy Economics, Elsevier, vol. 95(C).
    14. Thomas Deschatre & Pierre Gruet, 2021. "Electricity intraday price modeling with marked Hawkes processes," Papers 2103.07407, arXiv.org, revised Mar 2021.
    15. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    16. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    17. Zhou, Dequn & Zhang, Yining & Wang, Qunwei & Ding, Hao, 2024. "How do uncertain renewable energy induced risks evolve in a two-stage deregulated wholesale power market," Applied Energy, Elsevier, vol. 353(PB).
    18. Jiwon Kim & Moon-Ju Kang & KangHun Lee & HyungJun Moon & Bo-Kwan Jeon, 2023. "Deep Reinforcement Learning for Asset Allocation: Reward Clipping," Papers 2301.05300, arXiv.org.
    19. Hao, Zhaojun & Di Maio, Francesco & Zio, Enrico, 2023. "A sequential decision problem formulation and deep reinforcement learning solution of the optimization of O&M of cyber-physical energy systems (CPESs) for reliable and safe power production and supply," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    20. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:21:p:5350-:d:1508027. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.