
Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets

Author

Listed:
  • Peer Nagy
  • Jan-Peter Calliess
  • Stefan Zohren

Abstract

We employ deep reinforcement learning (RL) to train an agent to translate a high-frequency trading signal into a trading strategy that places individual limit orders. Based on the ABIDES limit order book simulator, we build an OpenAI Gym reinforcement learning environment and use it to simulate a realistic trading environment for NASDAQ equities based on historic order book messages. To train a trading agent that learns to maximise its trading return in this environment, we use Deep Duelling Double Q-learning with the APEX (asynchronous prioritised experience replay) architecture. The agent observes the current limit order book state, its recent history, and a short-term directional forecast. To investigate the performance of RL for adaptive trading independently of a concrete forecasting algorithm, we study the performance of our approach using synthetic alpha signals obtained by perturbing forward-looking returns with varying levels of noise. We find that the RL agent learns an effective trading strategy for inventory management and order placement that outperforms a heuristic benchmark strategy with access to the same signal.
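
Illustrative sketch. Two ingredients of the setup above lend themselves to a short code illustration: the duelling double Q-learning update and the synthetic alpha signal obtained by perturbing forward-looking returns with noise. The Python sketch below is a minimal rendering under assumed conventions, not the authors' implementation: the network sizes, the noise scaling, and all identifiers are illustrative, and the asynchronous APEX replay machinery is omitted.

    import numpy as np
    import torch
    import torch.nn as nn

    def synthetic_alpha(prices, horizon=10, noise_level=0.5):
        """Noisy directional signal: forward-looking returns perturbed by
        Gaussian noise. The noise scaling is an assumption; the paper only
        states that noise levels are varied."""
        prices = np.asarray(prices, dtype=float)
        fwd_ret = np.zeros_like(prices)
        fwd_ret[:-horizon] = prices[horizon:] / prices[:-horizon] - 1.0
        noise = np.random.normal(0.0, noise_level * fwd_ret.std(),
                                 size=fwd_ret.shape)
        return fwd_ret + noise

    class DuellingQNet(nn.Module):
        """Duelling head: Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')."""
        def __init__(self, obs_dim, n_actions, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)
            self.advantage = nn.Linear(hidden, n_actions)

        def forward(self, obs):
            h = self.trunk(obs)
            adv = self.advantage(h)
            return self.value(h) + adv - adv.mean(dim=1, keepdim=True)

    def double_q_target(online, target, reward, next_obs, done, gamma=0.99):
        """Double DQN target: the online network selects the next action,
        the target network evaluates it."""
        with torch.no_grad():
            next_action = online(next_obs).argmax(dim=1, keepdim=True)
            next_q = target(next_obs).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

In this sketch the observation would concatenate limit order book features with the synthetic signal, and the discrete actions would map to order placements (for example buy, hold, sell); both mappings are simplifications of the environment described in the abstract.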

Suggested Citation

  • Peer Nagy & Jan-Peter Calliess & Stefan Zohren, 2023. "Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets," Papers 2301.08688, arXiv.org, revised Sep 2023.
  • Handle: RePEc:arx:papers:2301.08688

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2301.08688
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ilia Zaznov & Julian Kunkel & Alfonso Dufour & Atta Badii, 2022. "Predicting Stock Price Changes Based on the Limit Order Book: A Survey," Mathematics, MDPI, vol. 10(8), pages 1-33, April.
    2. Zihao Zhang & Stefan Zohren, 2021. "Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units," Papers 2105.10430, arXiv.org, revised Aug 2021.
    3. Hong Guo & Jianwu Lin & Fanlin Huang, 2023. "Market Making with Deep Reinforcement Learning from Limit Order Books," Papers 2305.15821, arXiv.org.
    4. Antonio Briola & Silvia Bartolucci & Tomaso Aste, 2024. "HLOB -- Information Persistence and Structure in Limit Order Books," Papers 2405.18938, arXiv.org, revised Jun 2024.
    5. Jian Guo & Heung-Yeung Shum, 2024. "Large Investment Model," Papers 2408.10255, arXiv.org, revised Aug 2024.
    6. Jin Fang & Jiacheng Weng & Yi Xiang & Xinwen Zhang, 2022. "Imitate then Transcend: Multi-Agent Optimal Execution with Dual-Window Denoise PPO," Papers 2206.10736, arXiv.org.
    7. Konark Jain & Nick Firoozye & Jonathan Kochems & Philip Treleaven, 2024. "Limit Order Book Simulations: A Review," Papers 2402.17359, arXiv.org, revised Mar 2024.
    8. Antonio Briola & Jeremy Turiel & Riccardo Marcaccioli & Alvaro Cauderan & Tomaso Aste, 2021. "Deep Reinforcement Learning for Active High Frequency Trading," Papers 2101.07107, arXiv.org, revised Aug 2023.
    9. Eghbal Rahimikia & Stefan Zohren & Ser-Huang Poon, 2021. "Realised Volatility Forecasting: Machine Learning via Financial Word Embedding," Papers 2108.00480, arXiv.org, revised Nov 2024.
    10. Xianfeng Jiao & Zizhong Li & Chang Xu & Yang Liu & Weiqing Liu & Jiang Bian, 2023. "Microstructure-Empowered Stock Factor Extraction and Utilization," Papers 2308.08135, arXiv.org.
    11. Wang, Yuanrong & Aste, Tomaso, 2023. "Dynamic portfolio optimization with inverse covariance clustering," LSE Research Online Documents on Economics 117701, London School of Economics and Political Science, LSE Library.
    12. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    13. Kriebel, Johannes & Stitz, Lennart, 2022. "Credit default prediction from user-generated text in peer-to-peer lending using deep learning," European Journal of Operational Research, Elsevier, vol. 302(1), pages 309-323.
    14. Lorenzo Lucchese & Mikko Pakkanen & Almut Veraart, 2022. "The Short-Term Predictability of Returns in Order Book Markets: a Deep Learning Perspective," Papers 2211.13777, arXiv.org, revised Oct 2023.
    15. Xiao-Yang Liu & Jingyang Rui & Jiechao Gao & Liuqing Yang & Hongyang Yang & Zhaoran Wang & Christina Dan Wang & Jian Guo, 2021. "FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance," Papers 2112.06753, arXiv.org, revised Mar 2022.
    16. Alvaro Arroyo & Alvaro Cartea & Fernando Moreno-Pino & Stefan Zohren, 2023. "Deep Attentive Survival Analysis in Limit Order Books: Estimating Fill Probabilities with Convolutional-Transformers," Papers 2306.05479, arXiv.org.
    17. Jingyang Wu & Xinyi Zhang & Fangyixuan Huang & Haochen Zhou & Rohitash Chandra, 2024. "Review of deep learning models for crypto price prediction: implementation and evaluation," Papers 2405.11431, arXiv.org, revised Jun 2024.
    18. Zijian Shi & John Cartlidge, 2023. "Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid Methodology," Papers 2303.00080, arXiv.org.
    19. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    20. Cong Zheng & Jiafa He & Can Yang, 2023. "Optimal Execution Using Reinforcement Learning," Papers 2306.17178, arXiv.org.

