FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance
References listed on IDEAS
- Xinyi Li & Yinchuan Li & Yuancheng Zhan & Xiao-Yang Liu, 2019. "Optimistic Bull or Pessimistic Bear: Adaptive Deep Reinforcement Learning for Stock Portfolio Allocation," Papers 1907.01503, arXiv.org.
- Xiao-Yang Liu & Zhuoran Xiong & Shan Zhong & Hongyang Yang & Anwar Walid, 2018. "Practical Deep Reinforcement Learning Approach for Stock Trading," Papers 1811.07522, arXiv.org, revised Jul 2022.
- Bekiros, Stelios D., 2010. "Fuzzy adaptive decision-making for boundedly rational traders in speculative stock markets," European Journal of Operational Research, Elsevier, vol. 202(1), pages 285-293, April.
- Marco Corazza & Francesco Bertoluzzo, 2014. "Q-Learning-based financial trading systems with applications," Working Papers 2014:15, Department of Economics, University of Venice "Ca' Foscari".
- Wenhang Bao & Xiao-Yang Liu, 2019. "Multi-Agent Deep Reinforcement Learning for Liquidation Strategy Analysis," Papers 1906.11046, arXiv.org.
- Chien Yi Huang, 2018. "Financial Trading as a Game: A Deep Reinforcement Learning Approach," Papers 1807.02787, arXiv.org.
- Hans Buehler & Lukas Gonon & Josef Teichmann & Ben Wood & Baranidharan Mohan & Jonathan Kochems, 2019. "Deep Hedging: Hedging Derivatives Under Generic Market Frictions Using Reinforcement Learning," Swiss Finance Institute Research Paper Series 19-80, Swiss Finance Institute.
- Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
- David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman et al., 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
- Bryan Foltice & Thomas Langer, 2015. "Profitable momentum trading strategies for individual investors," Financial Markets and Portfolio Management, Springer;Swiss Society for Financial Market Research, vol. 29(2), pages 85-113, May.
- Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Yong Zhang & Xingyu Yang, 2017. "Online Portfolio Selection Strategy Based on Combining Experts’ Advice," Computational Economics, Springer;Society for Computational Economics, vol. 50(1), pages 141-159, June.
- Burton G. Malkiel, 2003. "Passive Investment Strategies and Efficient Markets," European Financial Management, European Financial Management Association, vol. 9(1), pages 1-10, March.
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
- Karush Suri & Xiao Qi Shi & Konstantinos Plataniotis & Yuri Lawryshyn, 2021. "TradeR: Practical Deep Hierarchical Reinforcement Learning for Trade Execution," Papers 2104.00620, arXiv.org.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
- Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
- Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
- Zechu Li & Xiao-Yang Liu & Jiahao Zheng & Zhaoran Wang & Anwar Walid & Jian Guo, 2021. "FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance," Papers 2111.05188, arXiv.org.
- Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
- Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
- Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
- Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
- Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
- Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
- Xiao-Yang Liu & Ziyi Xia & Jingyang Rui & Jiechao Gao & Hongyang Yang & Ming Zhu & Christina Dan Wang & Zhaoran Wang & Jian Guo, 2022. "FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning," Papers 2211.03107, arXiv.org.
- Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
- Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
- Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
- Xinyi Li & Yinchuan Li & Yuancheng Zhan & Xiao-Yang Liu, 2019. "Optimistic Bull or Pessimistic Bear: Adaptive Deep Reinforcement Learning for Stock Portfolio Allocation," Papers 1907.01503, arXiv.org.
- Brini, Alessio & Tantari, Daniele, 2023. "Deep reinforcement trading with predictable returns," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 622(C).
- MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.
- Ali Hirsa & Joerg Osterrieder & Branka Hadji-Misheva & Jan-Alexander Posth, 2021. "Deep reinforcement learning on a multi-asset environment for trading," Papers 2106.08437, arXiv.org.
- Jinan Zou & Qingying Zhao & Yang Jiao & Haiyao Cao & Yanxi Liu & Qingsen Yan & Ehsan Abbasnejad & Lingqiao Liu & Javen Qinfeng Shi, 2022. "Stock Market Prediction via Deep Learning Techniques: A Survey," Papers 2212.12717, arXiv.org, revised Feb 2023.
- Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-BIG-2020-12-14 (Big Data)
- NEP-CMP-2020-12-14 (Computational Economics)
- NEP-FMK-2020-12-14 (Financial Markets)
- NEP-MST-2020-12-14 (Market Microstructure)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2011.09607. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.