
Learning the Market: Sentiment-Based Ensemble Trading Agents

Author

Listed:
  • Andrew Ye
  • James Xu
  • Vidyut Veedgav
  • Yi Wang
  • Yifan Yu
  • Daniel Yan
  • Ryan Chen
  • Vipin Chaudhary
  • Shuai Xu

Abstract

We propose and study the integration of sentiment analysis with deep reinforcement learning ensemble algorithms for stock trading, evaluating strategies that can dynamically switch their active agent in response to the concurrent market environment. In particular, we design a simple yet effective method for extracting financial sentiment and combine it with improvements to existing trading agents, yielding a strategy that considers both qualitative market factors and quantitative stock data. We show that our approach produces a strategy that is profitable, robust, and risk-minimal, outperforming the traditional ensemble strategy as well as single-agent algorithms and market benchmarks. Our findings suggest that the conventional practice of switching and reevaluating the agents in an ensemble every fixed number of months is sub-optimal, and that a dynamic, sentiment-based framework unlocks substantial additional performance. Furthermore, because we designed our algorithm with simplicity and efficiency in mind, we expect the transition of our method from historical evaluation to real-time trading with live data to be relatively straightforward.
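
As a concrete illustration of the switching idea, below is a minimal Python sketch of a sentiment-gated ensemble. This is a hypothetical toy, not the paper's implementation: the agent names, the regime thresholds, and the lexicon-based scorer are illustrative stand-ins for the paper's trained deep reinforcement learning agents and its sentiment-extraction method.

    # Hypothetical sketch of a sentiment-gated ensemble; NOT the paper's code.
    # Agents, thresholds, and the scorer are illustrative stand-ins.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Agent:
        """Stand-in for a trained deep RL trading agent (e.g., PPO, A2C, DDPG)."""
        name: str
        act: Callable[[List[float]], float]  # state features -> position in [-1, 1]

    def sentiment_score(headlines: List[str]) -> float:
        """Toy lexicon scorer in [-1, 1]; the paper's extractor would replace this."""
        pos = {"beat", "surge", "upgrade", "growth"}
        neg = {"miss", "plunge", "downgrade", "default"}
        words = " ".join(headlines).lower().split()
        raw = sum(w in pos for w in words) - sum(w in neg for w in words)
        return max(-1.0, min(1.0, raw / max(len(words), 1) * 10))

    def select_agent(agents: Dict[str, Agent], score: float) -> Agent:
        """Switch on the sentiment regime instead of a fixed monthly rotation."""
        if score > 0.2:    # bullish regime: assumed to favor a momentum-style agent
            return agents["aggressive"]
        if score < -0.2:   # bearish regime: assumed to favor a defensive agent
            return agents["defensive"]
        return agents["neutral"]

    if __name__ == "__main__":
        agents = {
            "aggressive": Agent("aggressive", lambda s: 1.0),
            "defensive": Agent("defensive", lambda s: -0.5),
            "neutral": Agent("neutral", lambda s: 0.0),
        }
        score = sentiment_score(["Earnings beat forecasts, shares surge on upgrade"])
        active = select_agent(agents, score)
        print(f"sentiment={score:+.2f} -> active agent: {active.name}")

The contrast with the traditional ensemble lies in select_agent: the active agent changes whenever the sentiment regime changes, rather than on a fixed monthly schedule.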

Suggested Citation

  • Andrew Ye & James Xu & Vidyut Veedgav & Yi Wang & Yifan Yu & Daniel Yan & Ryan Chen & Vipin Chaudhary & Shuai Xu, 2024. "Learning the Market: Sentiment-Based Ensemble Trading Agents," Papers 2402.01441, arXiv.org, revised Nov 2024.
  • Handle: RePEc:arx:papers:2402.01441

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2402.01441
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jiahua Xu & Daniel Perez & Yebo Feng & Benjamin Livshits, 2023. "Auto.gov: Learning-based On-chain Governance for Decentralized Finance (DeFi)," Papers 2302.09551, arXiv.org, revised May 2023.
    2. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
3. Alexandre Carbonneau & Frédéric Godin, 2021. "Deep equal risk pricing of financial derivatives with non-translation invariant risk measures," Papers 2107.11340, arXiv.org.
    4. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    5. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    6. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    7. Martino Banchio & Giacomo Mantegazza, 2022. "Artificial Intelligence and Spontaneous Collusion," Papers 2202.05946, arXiv.org, revised Sep 2023.
    8. Miquel Noguer i Alonso & Sonam Srivastava, 2020. "Deep Reinforcement Learning for Asset Allocation in US Equities," Papers 2010.04404, arXiv.org.
    9. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    10. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    11. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    12. Nymisha Bandi & Theja Tulabandhula, 2020. "Off-Policy Optimization of Portfolio Allocation Policies under Constraints," Papers 2012.11715, arXiv.org.
    13. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    14. Jian Guo & Heung-Yeung Shum, 2024. "Large Investment Model," Papers 2408.10255, arXiv.org, revised Aug 2024.
    15. Carbonneau, Alexandre, 2021. "Deep hedging of long-term financial derivatives," Insurance: Mathematics and Economics, Elsevier, vol. 99(C), pages 327-340.
    16. Tian, Yuan & Han, Minghao & Kulkarni, Chetan & Fink, Olga, 2022. "A prescriptive Dirichlet power allocation policy with deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 224(C).
    17. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    18. Haoren Zhu & Pengfei Zhao & Wilfred Siu Hung NG & Dik Lun Lee, 2024. "Financial Assets Dependency Prediction Utilizing Spatiotemporal Patterns," Papers 2406.11886, arXiv.org.
    19. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    20. Hans Buhler & Lukas Gonon & Josef Teichmann & Ben Wood, 2018. "Deep Hedging," Papers 1802.03042, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2402.01441. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.