
TradeR: Practical Deep Hierarchical Reinforcement Learning for Trade Execution

Author

Listed:
  • Karush Suri
  • Xiao Qi Shi
  • Konstantinos Plataniotis
  • Yuri Lawryshyn

Abstract

Advances in Reinforcement Learning (RL) span a wide variety of applications that motivate development in this area. While application tasks serve as suitable benchmarks for real-world problems, RL is seldom used in practical scenarios with abrupt dynamics. This allows one to rethink the problem setup in light of practical challenges. We present Trade Execution using Reinforcement Learning (TradeR), which aims to address two such practical challenges, catastrophe and surprise minimization, by formulating trading as a real-world hierarchical RL problem. Through this lens, TradeR uses hierarchical RL to execute trade bids on high-frequency real market experiences comprising abrupt price variations during the fiscal-year-2019 COVID-19 stock market crash. The framework combines an energy-based scheme with a surprise value function to estimate and minimize surprise. In a large-scale study of 35 stock symbols from the S&P 500 index, TradeR demonstrates robustness to abrupt price changes and catastrophic losses while maintaining profitable outcomes. We hope that our work serves as a motivating example for the application of RL to practical problems.
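
The approach sketched in the abstract, a hierarchical execution policy whose reward is penalized by an estimate of surprise, can be illustrated with a short, self-contained example. The two-level decision rule, the running-Gaussian surprise estimator, and the synthetic price path below are illustrative assumptions, not the authors' implementation of TradeR or its energy-based scheme.

    import numpy as np

    class SurpriseEstimator:
        """Running Gaussian model of returns; surprise = negative log-likelihood (Welford updates)."""
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update_and_score(self, ret):
            var = max(self.m2 / self.n, 1e-8) if self.n > 1 else 1.0
            nll = 0.5 * ((ret - self.mean) ** 2 / var + np.log(2.0 * np.pi * var))
            self.n += 1
            delta = ret - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (ret - self.mean)
            return nll

    def run_episode(prices, inventory=100, surprise_weight=0.1):
        """High level picks an order-size goal; low level executes it; reward is penalized by surprise."""
        estimator, cash, total_reward = SurpriseEstimator(), 0.0, 0.0
        for t in range(1, len(prices)):
            if inventory == 0:
                break
            ret = (prices[t] - prices[t - 1]) / prices[t - 1]
            surprise = estimator.update_and_score(ret)
            goal = 1 if surprise > 2.0 else min(2, inventory)  # high level: scale back when the market is surprising
            executed = min(goal, inventory)                    # low level: fill the goal at the current price
            inventory -= executed
            cash += executed * prices[t]
            total_reward += executed * prices[t] - surprise_weight * surprise
        return cash, total_reward

    rng = np.random.default_rng(0)
    # Synthetic price path with an abrupt drop, standing in for a market shock.
    prices = np.concatenate([100 + np.cumsum(rng.normal(0, 0.5, 50)),
                             80 + np.cumsum(rng.normal(0, 0.5, 50))])
    cash, reward = run_episode(prices)
    print(f"cash raised: {cash:.2f}, surprise-penalized reward: {reward:.2f}")

In the paper's framework the surprise estimate and the bid-execution hierarchy are learned; here both are hand-coded placeholders to keep the example minimal.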

Suggested Citation

  • Karush Suri & Xiao Qi Shi & Konstantinos Plataniotis & Yuri Lawryshyn, 2021. "TradeR: Practical Deep Hierarchical Reinforcement Learning for Trade Execution," Papers 2104.00620, arXiv.org.
  • Handle: RePEc:arx:papers:2104.00620

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2104.00620
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

1. Ioannis Boukas & Damien Ernst & Thibaut Théate & Adrien Bolland & Alexandre Huynen & Martin Buchwald & Christelle Wynants & Bertrand Cornélusse, 2020. "A Deep Reinforcement Learning Framework for Continuous Intraday Market Bidding," Papers 2004.05940, arXiv.org.
    2. Ziming Gao & Yuan Gao & Yi Hu & Zhengyong Jiang & Jionglong Su, 2020. "Application of Deep Q-Network in Portfolio Management," Papers 2003.06365, arXiv.org.
    3. Xiao-Yang Liu & Zhuoran Xiong & Shan Zhong & Hongyang Yang & Anwar Walid, 2018. "Practical Deep Reinforcement Learning Approach for Stock Trading," Papers 1811.07522, arXiv.org, revised Jul 2022.
4. Ayman Chaouki & Stephen Hardiman & Christian Schmidt & Emmanuel Sérié & Joachim de Lataillade, 2020. "Deep Deterministic Portfolio Optimization," Papers 2003.06497, arXiv.org, revised Apr 2020.
    5. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Supriya Bajpai, 2021. "Application of deep reinforcement learning for Indian stock trading automation," Papers 2106.16088, arXiv.org.
    2. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
3. Benedikt Finnah, 2022. "Optimal bidding functions for renewable energies in sequential electricity markets," OR Spectrum: Quantitative Approaches in Management, Springer; Gesellschaft für Operations Research e.V., vol. 44(1), pages 1-27, March.
    4. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    5. Francisco Peñaranda & Enrique Sentana, 2024. "Portfolio management with big data," Working Papers wp2024_2411, CEMFI.
    6. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    7. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    8. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    9. Panda, Saunak Kumar & Xiang, Yisha & Liu, Ruiqi, 2024. "Dynamic resource matching in manufacturing using deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 318(2), pages 408-423.
    10. Zhengyong Jiang & Jeyan Thiayagalingam & Jionglong Su & Jinjun Liang, 2023. "CAD: Clustering And Deep Reinforcement Learning Based Multi-Period Portfolio Management Strategy," Papers 2310.01319, arXiv.org.
    11. Xinyi Li & Yinchuan Li & Xiao-Yang Liu & Christina Dan Wang, 2019. "Risk Management via Anomaly Circumvent: Mnemonic Deep Learning for Midterm Stock Prediction," Papers 1908.01112, arXiv.org.
    12. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    13. Yizhuo Li & Peng Zhou & Fangyi Li & Xiao Yang, 2021. "An Improved Reinforcement Learning Model Based on Sentiment Analysis," Papers 2111.15354, arXiv.org.
    14. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    15. Priyanka Shinde & Ioannis Boukas & David Radu & Miguel Manuel de Villena & Mikael Amelin, 2021. "Analyzing Trade in Continuous Intra-Day Electricity Market: An Agent-Based Modeling Approach," Energies, MDPI, vol. 14(13), pages 1-31, June.
    16. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    17. Huifang Huang & Ting Gao & Yi Gui & Jin Guo & Peng Zhang, 2022. "Stock Trading Optimization through Model-based Reinforcement Learning with Resistance Support Relative Strength," Papers 2205.15056, arXiv.org.
18. Ayman Chaouki & Stephen Hardiman & Christian Schmidt & Emmanuel Sérié & Joachim de Lataillade, 2020. "Deep Deterministic Portfolio Optimization," Papers 2003.06497, arXiv.org, revised Apr 2020.
    19. Xinyi Li & Yinchuan Li & Yuancheng Zhan & Xiao-Yang Liu, 2019. "Optimistic Bull or Pessimistic Bear: Adaptive Deep Reinforcement Learning for Stock Portfolio Allocation," Papers 1907.01503, arXiv.org.
    20. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2104.00620. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.