
Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network

Author

Listed:
  • Frensi Zejnullahu
  • Maurice Moser
  • Joerg Osterrieder

Abstract

This paper presents a Double Deep Q-Network (DDQN) algorithm for trading a single asset, namely the E-mini S&P 500 continuous futures contract. We use a proven setup as the foundation for our environment and extend it in several directions. The trading agent's feature set is progressively expanded to include additional assets, such as commodities, resulting in four models. We also account for environmental conditions, including trading costs and crises. The agent is first trained on a specific time period, then tested on new data and compared with a long-and-hold strategy as the market benchmark. We analyze the differences between the various models and their in-sample versus out-of-sample performance with respect to the environment. The experimental results show that the trading agent behaves appropriately: it adjusts its policy to different circumstances, for example making greater use of the neutral position when trading costs are present. Furthermore, its net asset value exceeded that of the benchmark, and the agent outperformed the market on the test set. We provide initial insights into the behavior of a DDQN agent in a financial domain; the results of this study can serve as a basis for further development.
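Since only the abstract is reproduced here, the following is a minimal sketch of the Double DQN update rule (van Hasselt et al., 2016) at the core of such an agent. It is written in PyTorch; the network architecture, the three-action space (long/neutral/short, suggested by the abstract's mention of a neutral position), the feature count, and all hyperparameters are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    N_FEATURES, N_ACTIONS = 32, 3   # hypothetical feature count; long / neutral / short
    GAMMA = 0.99                    # discount factor (assumed)

    def make_qnet() -> nn.Sequential:
        # Small illustrative architecture; the paper's network is not reproduced here.
        return nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                             nn.Linear(64, N_ACTIONS))

    online_net, target_net = make_qnet(), make_qnet()
    target_net.load_state_dict(online_net.state_dict())
    optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-4)

    def ddqn_step(state, action, reward, next_state, done):
        # Double DQN: the online network *selects* the greedy next action,
        # while the (slowly updated) target network *evaluates* it. This
        # decoupling reduces the overestimation bias of vanilla DQN.
        with torch.no_grad():
            next_action = online_net(next_state).argmax(dim=1, keepdim=True)
            next_q = target_net(next_state).gather(1, next_action).squeeze(1)
            target = reward + GAMMA * (1.0 - done) * next_q
        q = online_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
        loss = nn.functional.smooth_l1_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In a trading environment, `state` would carry the market features and `reward` a (cost-adjusted) position return per step; both are left abstract in this sketch.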

Suggested Citation

  • Frensi Zejnullahu & Maurice Moser & Joerg Osterrieder, 2022. "Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network," Papers 2206.14267, arXiv.org.
  • Handle: RePEc:arx:papers:2206.14267

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2206.14267
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Kumar Yashaswi, 2021. "Deep Reinforcement Learning for Portfolio Optimization using Latent Feature State Space (LFSS) Module," Papers 2102.06233, arXiv.org.
    2. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    3. Uta Pigorsch & Sebastian Schafer, 2021. "High-Dimensional Stock Portfolio Trading with Deep Reinforcement Learning," Papers 2112.04755, arXiv.org.
    4. Dat Thanh Tran & Juho Kanniainen & Moncef Gabbouj & Alexandros Iosifidis, 2020. "Data Normalization for Bilinear Structures in High-Frequency Financial Time-series," Papers 2003.00598, arXiv.org, revised Jul 2020.
    5. Chien Yi Huang, 2018. "Financial Trading as a Game: A Deep Reinforcement Learning Approach," Papers 1807.02787, arXiv.org.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Adrian Millea & Abbas Edalat, 2022. "Using Deep Reinforcement Learning with Hierarchical Risk Parity for Portfolio Optimization," IJFS, MDPI, vol. 11(1), pages 1-16, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.
    2. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    3. Ilia Zaznov & Julian Kunkel & Alfonso Dufour & Atta Badii, 2022. "Predicting Stock Price Changes Based on the Limit Order Book: A Survey," Mathematics, MDPI, vol. 10(8), pages 1-33, April.
    4. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    5. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    6. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    7. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    8. Matteo Prata & Giuseppe Masi & Leonardo Berti & Viviana Arrigoni & Andrea Coletta & Irene Cannistraci & Svitlana Vyetrenko & Paola Velardi & Novella Bartolini, 2023. "LOB-Based Deep Learning Models for Stock Price Trend Prediction: A Benchmark Study," Papers 2308.01915, arXiv.org, revised Sep 2023.
    9. Jonas Hanetho, 2023. "Deep Policy Gradient Methods in Commodity Markets," Papers 2308.01910, arXiv.org.
    10. Hui Niu & Siyuan Li & Jian Li, 2022. "MetaTrader: An Reinforcement Learning Approach Integrating Diverse Policies for Portfolio Optimization," Papers 2210.01774, arXiv.org.
    11. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    12. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2020. "Deep Learning for Portfolio Optimization," Papers 2005.13665, arXiv.org, revised Jan 2021.
    13. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
    14. Dat Thanh Tran & Juho Kanniainen & Moncef Gabbouj & Alexandros Iosifidis, 2021. "Bilinear Input Normalization for Neural Networks in Financial Forecasting," Papers 2109.00983, arXiv.org.
    15. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models," Working Papers hal-03202431, HAL.
    16. Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
    17. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    18. Alexandre Carbonneau & Frédéric Godin, 2023. "Deep Equal Risk Pricing of Financial Derivatives with Non-Translation Invariant Risk Measures," Risks, MDPI, vol. 11(8), pages 1-27, August.
    19. Jie Zou & Jiashu Lou & Baohua Wang & Sixue Liu, 2022. "A Novel Deep Reinforcement Learning Based Automated Stock Trading System Using Cascaded LSTM Networks," Papers 2212.02721, arXiv.org, revised Jul 2023.
    20. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting," Papers 2104.10483, arXiv.org, revised Apr 2021.

    More about this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2206.14267. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.