Practical Application of Deep Reinforcement Learning to Optimal Trade Execution
Author
Abstract
Suggested Citation
References listed on IDEAS
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie, et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Xiaodong Li & Pangjing Wu & Chenxin Zou & Qing Li, 2022. "Hierarchical Deep Reinforcement Learning for VWAP Strategy Optimization," Papers 2212.14670, arXiv.org.
- Dieter Hendricks & Diane Wilcox, 2014. "A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution," Papers 1403.2229, arXiv.org.
- Bertsimas, Dimitris & Lo, Andrew W., 1998. "Optimal control of execution costs," Journal of Financial Markets, Elsevier, vol. 1(1), pages 1-50, April.
- Berkowitz, Stephen A. & Logue, Dennis E. & Noser, Eugene A. Jr., 1988. "The Total Cost of Transactions on the New York Stock Exchange," Journal of Finance, American Finance Association, vol. 43(1), pages 97-112, March. (This item is not listed on IDEAS.)
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
- Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
- Yuchen Fang & Kan Ren & Weiqing Liu & Dong Zhou & Weinan Zhang & Jiang Bian & Yong Yu & Tie-Yan Liu, 2021. "Universal Trading for Order Execution with Oracle Policy Distillation," Papers 2103.10860, arXiv.org.
- Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
- Dieter Hendricks, 2016. "Using real-time cluster configurations of streaming asynchronous features as online state descriptors in financial markets," Papers 1603.06805, arXiv.org, revised May 2017.
- Yuchao Dong, 2022. "Randomized Optimal Stopping Problem in Continuous time and Reinforcement Learning Algorithm," Papers 2208.02409, arXiv.org, revised Sep 2023.
- Söhnke M. Bartram & Jürgen Branke & Mehrshad Motahari, 2020. "Artificial intelligence in asset management," Working Papers 20202001, Cambridge Judge Business School, University of Cambridge.
- Bartram, Söhnke & Branke, Jürgen & Motahari, Mehrshad, 2020. "Artificial Intelligence in Asset Management," CEPR Discussion Papers 14525, C.E.P.R. Discussion Papers.
- Ayman Chaouki & Stephen Hardiman & Christian Schmidt & Emmanuel Sérié & Joachim de Lataillade, 2020. "Deep Deterministic Portfolio Optimization," Papers 2003.06497, arXiv.org, revised Apr 2020.
- Dicks, Matthew & Paskaramoorthy, Andrew & Gebbie, Tim, 2024. "A simple learning agent interacting with an agent-based market model," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 633(C).
- Feiyang Pan & Tongzhe Zhang & Ling Luo & Jia He & Shuoling Liu, 2022. "Learn Continuously, Act Discretely: Hybrid Action-Space Reinforcement Learning For Optimal Execution," Papers 2207.11152, arXiv.org.
- Soohan Kim & Jimyeong Kim & Hong Kee Sul & Youngjoon Hong, 2023. "An Adaptive Dual-level Reinforcement Learning Approach for Optimal Trade Execution," Papers 2307.10649, arXiv.org.
- Schnaubelt, Matthias, 2020. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," FAU Discussion Papers in Economics 05/2020, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
- Haoran Wang & Xun Yu Zhou, 2019. "Continuous-Time Mean-Variance Portfolio Selection: A Reinforcement Learning Framework," Papers 1904.11392, arXiv.org, revised May 2019.
- Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
- Xiaodong Li & Pangjing Wu & Chenxin Zou & Qing Li, 2022. "Hierarchical Deep Reinforcement Learning for VWAP Strategy Optimization," Papers 2212.14670, arXiv.org.
- Gianbiagio Curato & Jim Gatheral & Fabrizio Lillo, 2014. "Optimal execution with nonlinear transient market impact," Papers 1412.4839, arXiv.org.
- Curatola, Giuliano, 2022. "Price impact, strategic interaction and portfolio choice," The North American Journal of Economics and Finance, Elsevier, vol. 59(C).
- Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
- Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.
- Hong, Harrison & Rady, Sven, 2002. "Strategic trading and learning about liquidity," Journal of Financial Markets, Elsevier, vol. 5(4), pages 419-450, October.
- Harrison Hong & Sven Rady, 2000. "Strategic Trading and Learning About Liquidity," FMG Discussion Papers dp356, Financial Markets Group.
- Rady, Sven & Hong, Harrison G, 2000. "Strategic Trading And Learning About Liquidity," CEPR Discussion Papers 2416, C.E.P.R. Discussion Papers.
- Hong, Harrison & Rady, Sven, 2001. "Strategic Trading and Learning about Liquidity," Discussion Papers in Economics 15, University of Munich, Department of Economics.
- Harrison Hong & Sven Rady, 2000. "Strategic Trading and Learning about Liquidity," Econometric Society World Congress 2000 Contributed Papers 1351, Econometric Society.
- Hong, Harrison & Rady, Sven, 2000. "Strategic trading and learning about liquidity," LSE Research Online Documents on Economics 119102, London School of Economics and Political Science, LSE Library.
More about this item
Keywords
deep reinforcement learning; optimal trade execution; artificial intelligence; market microstructure; financial application.
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jfinte:v:2:y:2023:i:3:p:23-429:d:1182401. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references, in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.