Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets
References listed on IDEAS
- Zihao Zhang & Bryan Lim & Stefan Zohren, 2021. "Deep Learning for Market by Order Data," Applied Mathematical Finance, Taylor & Francis Journals, vol. 28(1), pages 79-95, January.
- Antonio Briola & Jeremy Turiel & Riccardo Marcaccioli & Alvaro Cauderan & Tomaso Aste, 2021. "Deep Reinforcement Learning for Active High Frequency Trading," Papers 2101.07107, arXiv.org, revised Aug 2023.
- Michael Karpe & Jin Fang & Zhongyao Ma & Chen Wang, 2020. "Multi-Agent Reinforcement Learning in a Realistic Limit Order Book Market Simulation," Papers 2006.05574, arXiv.org, revised Sep 2020.
- Zihao Zhang & Bryan Lim & Stefan Zohren, 2021. "Deep Learning for Market by Order Data," Papers 2102.08811, arXiv.org, revised Jul 2021.
- Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
- Zihao Zhang & Stefan Zohren, 2021. "Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units," Papers 2105.10430, arXiv.org, revised Aug 2021.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Ilia Zaznov & Julian Kunkel & Alfonso Dufour & Atta Badii, 2022. "Predicting Stock Price Changes Based on the Limit Order Book: A Survey," Mathematics, MDPI, vol. 10(8), pages 1-33, April.
- Zihao Zhang & Stefan Zohren, 2021. "Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units," Papers 2105.10430, arXiv.org, revised Aug 2021.
- Hong Guo & Jianwu Lin & Fanlin Huang, 2023. "Market Making with Deep Reinforcement Learning from Limit Order Books," Papers 2305.15821, arXiv.org.
- Antonio Briola & Silvia Bartolucci & Tomaso Aste, 2024. "HLOB -- Information Persistence and Structure in Limit Order Books," Papers 2405.18938, arXiv.org, revised Jun 2024.
- Jian Guo & Heung-Yeung Shum, 2024. "Large Investment Model," Papers 2408.10255, arXiv.org, revised Aug 2024.
- Jin Fang & Jiacheng Weng & Yi Xiang & Xinwen Zhang, 2022. "Imitate then Transcend: Multi-Agent Optimal Execution with Dual-Window Denoise PPO," Papers 2206.10736, arXiv.org.
- Konark Jain & Nick Firoozye & Jonathan Kochems & Philip Treleaven, 2024. "Limit Order Book Simulations: A Review," Papers 2402.17359, arXiv.org, revised Mar 2024.
- Antonio Briola & Jeremy Turiel & Riccardo Marcaccioli & Alvaro Cauderan & Tomaso Aste, 2021. "Deep Reinforcement Learning for Active High Frequency Trading," Papers 2101.07107, arXiv.org, revised Aug 2023.
- Eghbal Rahimikia & Stefan Zohren & Ser-Huang Poon, 2021. "Realised Volatility Forecasting: Machine Learning via Financial Word Embedding," Papers 2108.00480, arXiv.org, revised Nov 2024.
- Xianfeng Jiao & Zizhong Li & Chang Xu & Yang Liu & Weiqing Liu & Jiang Bian, 2023. "Microstructure-Empowered Stock Factor Extraction and Utilization," Papers 2308.08135, arXiv.org.
- Wang, Yuanrong & Aste, Tomaso, 2023. "Dynamic portfolio optimization with inverse covariance clustering," LSE Research Online Documents on Economics 117701, London School of Economics and Political Science, LSE Library.
- Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
- Kriebel, Johannes & Stitz, Lennart, 2022. "Credit default prediction from user-generated text in peer-to-peer lending using deep learning," European Journal of Operational Research, Elsevier, vol. 302(1), pages 309-323.
- Lorenzo Lucchese & Mikko Pakkanen & Almut Veraart, 2022. "The Short-Term Predictability of Returns in Order Book Markets: a Deep Learning Perspective," Papers 2211.13777, arXiv.org, revised Oct 2023.
- Xiao-Yang Liu & Jingyang Rui & Jiechao Gao & Liuqing Yang & Hongyang Yang & Zhaoran Wang & Christina Dan Wang & Jian Guo, 2021. "FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance," Papers 2112.06753, arXiv.org, revised Mar 2022.
- Alvaro Arroyo & Alvaro Cartea & Fernando Moreno-Pino & Stefan Zohren, 2023. "Deep Attentive Survival Analysis in Limit Order Books: Estimating Fill Probabilities with Convolutional-Transformers," Papers 2306.05479, arXiv.org.
- Jingyang Wu & Xinyi Zhang & Fangyixuan Huang & Haochen Zhou & Rohtiash Chandra, 2024. "Review of deep learning models for crypto price prediction: implementation and evaluation," Papers 2405.11431, arXiv.org, revised Jun 2024.
- Zijian Shi & John Cartlidge, 2023. "Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid Methodology," Papers 2303.00080, arXiv.org.
- Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
- Cong Zheng & Jiafa He & Can Yang, 2023. "Optimal Execution Using Reinforcement Learning," Papers 2306.17178, arXiv.org.
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-BIG-2023-02-13 (Big Data)
- NEP-CMP-2023-02-13 (Computational Economics)
- NEP-MST-2023-02-13 (Market Microstructure)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2301.08688. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.