
FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design

Author

Listed:
  • Yangyang Yu
  • Haohang Li
  • Zhi Chen
  • Yuechen Jiang
  • Yang Li
  • Denghui Zhang
  • Rong Liu
  • Jordan W. Suchow
  • Khaldoun Khashanah

Abstract

Recent advancements in Large Language Models (LLMs) have exhibited notable efficacy in question-answering (QA) tasks across diverse domains. Their prowess in integrating extensive web knowledge has fueled interest in developing LLM-based autonomous agents. While LLMs are efficient in decoding human instructions and deriving solutions by holistically processing historical inputs, transitioning to purpose-driven agents requires a supplementary rational architecture to process multi-source information, establish reasoning chains, and prioritize critical tasks. Addressing this, we introduce FinMem, a novel LLM-based agent framework devised for financial decision-making. It encompasses three core modules: Profiling, to customize the agent's characteristics; Memory, with layered message processing, to aid the agent in assimilating hierarchical financial data; and Decision-making, to convert insights gained from memories into investment decisions. Notably, FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability and real-time tuning. Its adjustable cognitive span allows for the retention of critical information beyond human perceptual limits, thereby enhancing trading outcomes. This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions in the volatile financial environment. We first compare FinMem with various algorithmic agents on a scalable real-world financial dataset, underscoring its leading trading performance in stocks. We then fine-tune the agent's perceptual span and character setting to achieve significantly enhanced trading performance. Collectively, FinMem presents a cutting-edge LLM agent framework for automated trading, boosting cumulative investment returns.
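
The architecture the abstract describes — a profiling module, a layered memory with an adjustable cognitive span, and a decision-making module — can be pictured with a minimal sketch. All class and method names below are hypothetical and not drawn from the paper's code; the keyword-based decision rule is a trivial placeholder standing in for the LLM call.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MemoryEvent:
    """One piece of market information with an importance score."""
    text: str
    importance: float  # higher scores are recalled first


@dataclass
class LayeredMemory:
    """Hypothetical layered memory: shallow for daily news, intermediate for
    weekly analyses, deep for long-horizon filings -- echoing the abstract's
    'layered message processing' of hierarchical financial data."""
    shallow: List[MemoryEvent] = field(default_factory=list)
    intermediate: List[MemoryEvent] = field(default_factory=list)
    deep: List[MemoryEvent] = field(default_factory=list)

    def retrieve(self, top_k: int) -> List[MemoryEvent]:
        # Pool all layers and rank by importance; top_k plays the role of
        # the agent's adjustable "cognitive span".
        pooled = self.shallow + self.intermediate + self.deep
        return sorted(pooled, key=lambda e: e.importance, reverse=True)[:top_k]


@dataclass
class FinAgent:
    """Profiling + Memory + Decision-making, per the abstract's three modules."""
    profile: str              # e.g. risk appetite and trading character
    memory: LayeredMemory
    cognitive_span: int = 5   # retention beyond human perceptual limits

    def decide(self, observation: str) -> str:
        recalled = self.memory.retrieve(self.cognitive_span)
        # In the real framework, the profile, recalled memories, and the new
        # observation would be assembled into an LLM prompt; this keyword
        # rule is only a stub for illustration.
        signal = " ".join([observation] + [e.text for e in recalled]).lower()
        if "upgrade" in signal or "beats" in signal:
            return "BUY"
        if "downgrade" in signal:
            return "SELL"
        return "HOLD"


# Example usage with made-up data:
memory = LayeredMemory(
    shallow=[MemoryEvent("TSLA beats Q3 delivery estimates", 0.8)],
    deep=[MemoryEvent("TSLA 10-K: margin pressure from price cuts", 0.6)],
)
agent = FinAgent(profile="risk-seeking momentum trader", memory=memory)
print(agent.decide("Analyst upgrade on TSLA"))  # -> BUY
```

The design choice mirrored here is that retrieval ranks events across all memory layers by importance, so the cognitive-span parameter, rather than human working memory, bounds how much context reaches the decision step.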

Suggested Citation

  • Yangyang Yu & Haohang Li & Zhi Chen & Yuechen Jiang & Yang Li & Denghui Zhang & Rong Liu & Jordan W. Suchow & Khaldoun Khashanah, 2023. "FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design," Papers 2311.13743, arXiv.org, revised Dec 2023.
  • Handle: RePEc:arx:papers:2311.13743

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2311.13743
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

1. Eero Pätäri & Mika Vilska, 2014. "Performance of moving average trading strategies over varying stock market conditions: the Finnish evidence," Applied Economics, Taylor & Francis Journals, vol. 46(24), pages 2851-2872, August.
    2. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    2. Xiangyu Li & Yawen Zeng & Xiaofen Xing & Jin Xu & Xiangmin Xu, 2025. "HedgeAgents: A Balanced-aware Multi-agent Financial Trading System," Papers 2502.13165, arXiv.org.
    3. Han Ding & Yinheng Li & Junhao Wang & Hang Chen, 2024. "Large Language Model Agent in Financial Trading: A Survey," Papers 2408.06361, arXiv.org.
    4. Yupeng Cao & Zhi Chen & Qingyun Pei & Fabrizio Dimino & Lorenzo Ausiello & Prashant Kumar & K. P. Subbalakshmi & Papa Momar Ndiaye, 2024. "RiskLabs: Predicting Financial Risk Using Large Language Model Based on Multi-Sources Data," Papers 2404.07452, arXiv.org.
    5. Tao Ren & Ruihan Zhou & Jinyang Jiang & Jiafeng Liang & Qinghao Wang & Yijie Peng, 2024. "RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search," Papers 2402.07080, arXiv.org, revised Feb 2024.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Weiguang Han & Boyi Zhang & Qianqian Xie & Min Peng & Yanzhao Lai & Jimin Huang, 2023. "Select and Trade: Towards Unified Pair Trading with Hierarchical Reinforcement Learning," Papers 2301.10724, arXiv.org, revised Feb 2023.
    2. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    3. Longbing Cao, 2021. "AI in Finance: Challenges, Techniques and Opportunities," Papers 2107.09051, arXiv.org.
    4. Suyeol Yun, 2024. "Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading," Papers 2411.17900, arXiv.org.
    5. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    6. Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
    7. Jiwon Kim & Moon-Ju Kang & KangHun Lee & HyungJun Moon & Bo-Kwan Jeon, 2023. "Deep Reinforcement Learning for Asset Allocation: Reward Clipping," Papers 2301.05300, arXiv.org.
    8. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    9. Federico Cornalba & Constantin Disselkamp & Davide Scassola & Christopher Helf, 2022. "Multi-Objective reward generalization: Improving performance of Deep Reinforcement Learning for applications in single-asset trading," Papers 2203.04579, arXiv.org, revised Feb 2023.
    10. Weiguang Han & Jimin Huang & Qianqian Xie & Boyi Zhang & Yanzhao Lai & Min Peng, 2023. "Mastering Pair Trading with Risk-Aware Recurrent Reinforcement Learning," Papers 2304.00364, arXiv.org.
    11. Lohrmann, Christoph & Luukka, Pasi, 2019. "Classification of intraday S&P500 returns with a Random Forest," International Journal of Forecasting, Elsevier, vol. 35(1), pages 390-407.
    12. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    13. Metghalchi, Massoud & Chen, Chien-Ping & Hayes, Linda A., 2015. "History of share prices and market efficiency of the Madrid general stock index," International Review of Financial Analysis, Elsevier, vol. 40(C), pages 178-184.
    14. Valeriy Zakamulin & Javier Giner, 2020. "Trend following with momentum versus moving averages: a tale of differences," Quantitative Finance, Taylor & Francis Journals, vol. 20(6), pages 985-1007, June.
    15. Shi, Huai-Long & Zhou, Wei-Xing, 2017. "Wax and wane of the cross-sectional momentum and contrarian effects: Evidence from the Chinese stock markets," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 486(C), pages 397-407.
    16. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    17. Jingyuan Wang & Yang Zhang & Ke Tang & Junjie Wu & Zhang Xiong, 2019. "AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks," Papers 1908.02646, arXiv.org.
    18. Kropiński, Paweł & Bosek, Bartłomiej & Pudo, Mikołaj, 2024. "State ownership, probability of informed trading, and profitability potential: Evidence from the Warsaw Stock Exchange," International Review of Financial Analysis, Elsevier, vol. 95(PB).
    19. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    20. Philippe Bergault & Olivier Gu'eant & Hamza Bodor, 2025. "To Hedge or Not to Hedge: Optimal Strategies for Stochastic Trade Flow Management," Papers 2503.02496, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2311.13743. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.