
Hierarchical Reinforced Trader (HRT): A Bi-Level Approach for Optimizing Stock Selection and Execution

Authors
  • Zijie Zhao
  • Roy E. Welsch

Abstract

Leveraging Deep Reinforcement Learning (DRL) in automated stock trading has shown promising results, yet its application faces significant challenges, including the curse of dimensionality, inertia in trading actions, and insufficient portfolio diversification. To address these challenges, we introduce the Hierarchical Reinforced Trader (HRT), a novel trading strategy employing a bi-level Hierarchical Reinforcement Learning framework. HRT integrates a Proximal Policy Optimization (PPO)-based High-Level Controller (HLC) for strategic stock selection with a Deep Deterministic Policy Gradient (DDPG)-based Low-Level Controller (LLC) that optimizes trade execution to enhance portfolio value. In our empirical analysis, comparing the HRT agent with standalone DRL models and the S&P 500 benchmark under both bullish and bearish market conditions, the HRT agent achieves a Sharpe ratio that is both positive and higher than those of the baselines. This result underscores the efficacy of incorporating hierarchical structures into DRL strategies, mitigates the aforementioned challenges, and paves the way for designing more profitable and robust trading algorithms in complex markets.
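The bi-level structure described in the abstract can be illustrated with a minimal sketch: a high-level controller emits a trade direction per stock (buy, hold, or sell) and a low-level controller emits a trade magnitude, and the two are composed into a signed order. This is a hypothetical illustration only — the class names, the stub linear/sigmoid policies, and the `max_shares` cap are assumptions for exposition, not the authors' PPO/DDPG implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class HighLevelController:
    """Strategic stock selection (PPO-based in the paper; here a stub
    linear policy). Emits a direction in {-1, 0, +1} per stock."""
    def __init__(self, n_features):
        # Logits over three actions: sell / hold / buy.
        self.W = rng.normal(scale=0.1, size=(n_features, 3))

    def select(self, state):
        # state: (n_stocks, n_features) -> one direction per stock.
        logits = state @ self.W
        return logits.argmax(axis=1) - 1  # map {0, 1, 2} -> {-1, 0, +1}

class LowLevelController:
    """Trade execution (DDPG-based in the paper; here a stub deterministic
    policy). Emits a trade-size fraction in [0, 1] per stock."""
    def __init__(self, n_features):
        self.w = rng.normal(scale=0.1, size=n_features)

    def size(self, state):
        return 1.0 / (1.0 + np.exp(-(state @ self.w)))  # sigmoid -> [0, 1]

def hrt_action(hlc, llc, state, max_shares=100):
    """Compose the bi-level action: HLC picks direction, LLC picks magnitude."""
    direction = hlc.select(state)   # {-1, 0, +1} per stock
    fraction = llc.size(state)      # [0, 1] per stock
    return direction * np.round(fraction * max_shares)

n_stocks, n_features = 5, 8
state = rng.normal(size=(n_stocks, n_features))
hlc = HighLevelController(n_features)
llc = LowLevelController(n_features)
orders = hrt_action(hlc, llc, state)
print(orders)  # one signed share count per stock
```

In a full implementation, both controllers would be trained neural policies and the composed order vector would be applied to a market environment whose reward (e.g., change in portfolio value) drives learning at both levels.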

Suggested Citation

  • Zijie Zhao & Roy E. Welsch, 2024. "Hierarchical Reinforced Trader (HRT): A Bi-Level Approach for Optimizing Stock Selection and Execution," Papers 2410.14927, arXiv.org.
  • Handle: RePEc:arx:papers:2410.14927

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2410.14927
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Boyu Zhang & Hongyang Yang & Xiao-Yang Liu, 2023. "Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models," Papers 2306.12659, arXiv.org.
    2. Xinyi Li & Yinchuan Li & Yuancheng Zhan & Xiao-Yang Liu, 2019. "Optimistic Bull or Pessimistic Bear: Adaptive Deep Reinforcement Learning for Stock Portfolio Allocation," Papers 1907.01503, arXiv.org.
    3. Taylan Kabbani & Ekrem Duman, 2022. "Deep Reinforcement Learning Approach for Trading Automation in The Stock Market," Papers 2208.07165, arXiv.org.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    2. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    3. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    4. Xiao-Yang Liu & Guoxuan Wang & Hongyang Yang & Daochen Zha, 2023. "FinGPT: Democratizing Internet-scale Data for Financial Large Language Models," Papers 2307.10485, arXiv.org, revised Nov 2023.
    5. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    6. Alonso-Robisco, Andres & Carbó, José Manuel, 2023. "Analysis of CBDC narrative by central banks using large language models," Finance Research Letters, Elsevier, vol. 58(PC).
    7. Xinyi Li & Yinchuan Li & Xiao-Yang Liu & Christina Dan Wang, 2019. "Risk Management via Anomaly Circumvent: Mnemonic Deep Learning for Midterm Stock Prediction," Papers 1908.01112, arXiv.org.
    8. Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.
    9. Costola, Michele & Hinz, Oliver & Nofer, Michael & Pelizzon, Loriana, 2023. "Machine learning sentiment analysis, COVID-19 news and stock market reactions," Research in International Business and Finance, Elsevier, vol. 64(C).
    10. Berend Jelmer Dirk Gort & Xiao-Yang Liu & Xinghang Sun & Jiechao Gao & Shuaiyu Chen & Christina Dan Wang, 2022. "Deep Reinforcement Learning for Cryptocurrency Trading: Practical Approach to Address Backtest Overfitting," Papers 2209.05559, arXiv.org, revised Jan 2023.
    11. Jingyi Gu & Sarvesh Shukla & Junyi Ye & Ajim Uddin & Guiling Wang, 2023. "Deep learning model with sentiment score and weekend effect in stock price prediction," SN Business & Economics, Springer, vol. 3(7), pages 1-20, July.
    12. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    13. Huifang Huang & Ting Gao & Yi Gui & Jin Guo & Peng Zhang, 2022. "Stock Trading Optimization through Model-based Reinforcement Learning with Resistance Support Relative Strength," Papers 2205.15056, arXiv.org.
    14. Xinyi Li & Yinchuan Li & Hongyang Yang & Liuqing Yang & Xiao-Yang Liu, 2019. "DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News," Papers 1912.10806, arXiv.org.
    15. Alejandra de la Rica Escudero & Eduardo C. Garrido-Merchan & Maria Coronado-Vaca, 2024. "Explainable Post hoc Portfolio Management Financial Policy of a Deep Reinforcement Learning agent," Papers 2407.14486, arXiv.org.
    16. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    17. Ekaterina V. Orlova, 2023. "Dynamic Regimes for Corporate Human Capital Development Used Reinforcement Learning Methods," Mathematics, MDPI, vol. 11(18), pages 1-22, September.
    18. Masanori Hirano & Kentaro Imajo, 2024. "The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging," Papers 2409.19854, arXiv.org.
    19. Wang, Jia & Wang, Xinyi & Wang, Xu, 2024. "International oil shocks and the volatility forecasting of Chinese stock market based on machine learning combination models," The North American Journal of Economics and Finance, Elsevier, vol. 70(C).
    20. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2410.14927. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.