Reinforcement Learning for automatic financial trading: Introduction and some applications
Author
Abstract
Suggested Citation
Citations
Citations are extracted by the CitEc Project; you can subscribe to its RSS feed for this item.
Cited by:
- Hyungjun Park & Min Kyu Sim & Dong Gu Choi, 2019. "An intelligent financial portfolio trading strategy using deep Q-learning," Papers 1907.03665, arXiv.org, revised Nov 2019.
- Caiyu Jiang & Jianhua Wang, 2022. "A Portfolio Model with Risk Control Policy Based on Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(1), pages 1-16, December.
- Haoqian Li & Thomas Lau, 2019. "Reinforcement Learning: Prediction, Control and Value Function Approximation," Papers 1908.10771, arXiv.org.
- Petrus Strydom, 2017. "Funding optimization for a bank integrating credit and liquidity risk," Journal of Applied Finance & Banking, SCIENPRESS Ltd, vol. 7(2), pages 1-1.
- Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
- Ariel Neufeld & Julian Sester & Mario Šikić, 2022. "Markov Decision Processes under Model Uncertainty," Papers 2206.06109, arXiv.org, revised Jan 2023.
- Ariel Neufeld & Julian Sester & Mario Šikić, 2023. "Markov decision processes under model uncertainty," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 618-665, July.
- Zihao Zhang & Stefan Zohren & Stephen Roberts, 2020. "Deep Learning for Portfolio Optimization," Papers 2005.13665, arXiv.org, revised Jan 2021.
- Marco Corazza & Andrea Sangalli, 2015. "Q-Learning and SARSA: a comparison between two intelligent stochastic control approaches for financial trading," Working Papers 2015:15, Department of Economics, University of Venice "Ca' Foscari", revised 2015.
- Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
- Xiao-Yang Liu & Zhuoran Xiong & Shan Zhong & Hongyang Yang & Anwar Walid, 2018. "Practical Deep Reinforcement Learning Approach for Stock Trading," Papers 1811.07522, arXiv.org, revised Jul 2022.
More about this item
Keywords
Financial Trading System; Reinforcement Learning; Stochastic control; Q-learning algorithm; Kernel-based Reinforcement Learning
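Since the keywords name the Q-learning algorithm as one of the paper's tools, the following minimal sketch illustrates the tabular Q-learning update on a toy two-action trading task. It is an illustrative aid only, not taken from the paper: the state discretization, reward model, and all parameter values are assumptions.

```python
# Minimal tabular Q-learning sketch on a toy "stay flat or hold long" task.
# The environment, reward model, and parameters are illustrative assumptions.
import random

N_STATES = 5          # discretized market states (toy assumption)
ACTIONS = [0, 1]      # 0 = stay flat, 1 = hold a long position
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: random next state; reward favors holding in 'up' states."""
    next_state = random.randrange(N_STATES)
    reward = (1.0 if state >= 3 else -0.5) * action
    return next_state, reward

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# Greedy policy learned per state (1 = hold long)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```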
JEL classification:
- C61 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Optimization Techniques; Programming Models; Dynamic Analysis
- C63 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Computational Techniques
- D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness
- G11 - Financial Economics - - General Financial Markets - - - Portfolio Choice; Investment Decisions
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-CMP-2013-01-07 (Computational Economics)
- NEP-ORE-2013-01-07 (Operations Research)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ven:wpaper:2012:33. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Geraldine Ludbrook (email available below). General contact details of provider: https://edirc.repec.org/data/dsvenit.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.