FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading

Author

Listed:
  • Guojun Xiong
  • Zhiyang Deng
  • Keyi Wang
  • Yupeng Cao
  • Haohang Li
  • Yangyang Yu
  • Xueqing Peng
  • Mingquan Lin
  • Kaleb E Smith
  • Xiao-Yang Liu
  • Jimin Huang
  • Sophia Ananiadou
  • Qianqian Xie

Abstract

Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities on a variety of financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose FLAG-Trader, a unified architecture that integrates linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization. In this architecture, a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy-gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves results on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements.
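
The architecture described in the abstract, an LLM whose next-token distribution over a small set of trading actions serves as the policy, with only lightweight adapter weights updated by a policy-gradient signal, can be illustrated with a short sketch. The code below is illustrative only and is not the authors' implementation: it assumes gpt2 as a stand-in base model, LoRA adapters from the peft library for parameter-efficient fine-tuning, a three-action space (BUY/SELL/HOLD), and a toy reward function in place of realized trading profit and loss.

```python
# Illustrative sketch (not the paper's code): an LLM acting as a trading policy,
# fine-tuned with LoRA adapters and updated by a REINFORCE-style policy gradient.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

ACTIONS = ["BUY", "SELL", "HOLD"]          # assumed discrete action space

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # stand-in base model
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Parameter-efficient fine-tuning: the backbone stays frozen and only the
# low-rank adapter matrices receive gradients.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      task_type="CAUSAL_LM")
policy = get_peft_model(base_model, lora_cfg)
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-5)

# One identifying token per action (the first sub-token of " BUY", " SELL", " HOLD").
action_token_ids = torch.tensor([tokenizer.encode(" " + a)[0] for a in ACTIONS])

def action_distribution(prompt: str) -> torch.distributions.Categorical:
    """Treat the LLM as a policy: restrict next-token logits to the action tokens."""
    inputs = tokenizer(prompt, return_tensors="pt")
    next_token_logits = policy(**inputs).logits[0, -1]
    return torch.distributions.Categorical(logits=next_token_logits[action_token_ids])

def trading_reward(action: str) -> float:
    """Toy placeholder for the environment's trading reward (e.g., realized P&L)."""
    return {"BUY": 1.0, "SELL": -0.5, "HOLD": 0.0}[action]

# One REINFORCE-style update on a single textual market state.
prompt = "Market state: price up 2%, volume high. Action (BUY/SELL/HOLD):"
dist = action_distribution(prompt)
action = dist.sample()
reward = trading_reward(ACTIONS[action.item()])

loss = -dist.log_prob(action) * reward     # policy-gradient objective (negated for descent)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the full framework, the reward would come from a trading environment evaluated over whole episodes, and the single REINFORCE-style step shown here would be replaced by a more stable policy-gradient procedure.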

Suggested Citation

  • Guojun Xiong & Zhiyang Deng & Keyi Wang & Yupeng Cao & Haohang Li & Yangyang Yu & Xueqing Peng & Mingquan Lin & Kaleb E Smith & Xiao-Yang Liu & Jimin Huang & Sophia Ananiadou & Qianqian Xie, 2025. "FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading," Papers 2502.11433, arXiv.org, revised Feb 2025.
  • Handle: RePEc:arx:papers:2502.11433

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2502.11433
    File Function: Latest version
    Download Restriction: no

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    2. Bouyaddou, Youssef & Jebabli, Ikram, 2025. "Integration of investor behavioral perspective and climate change in reinforcement learning for portfolio optimization," Research in International Business and Finance, Elsevier, vol. 73(PB).
    3. Wu, Bo & Li, Lingfei, 2024. "Reinforcement learning for continuous-time mean-variance portfolio selection in a regime-switching market," Journal of Economic Dynamics and Control, Elsevier, vol. 158(C).
    4. Konrad Mueller & Amira Akkari & Lukas Gonon & Ben Wood, 2024. "Fast Deep Hedging with Second-Order Optimization," Papers 2410.22568, arXiv.org.
    5. Haoren Zhu & Pengfei Zhao & Wilfred Siu Hung NG & Dik Lun Lee, 2024. "Financial Assets Dependency Prediction Utilizing Spatiotemporal Patterns," Papers 2406.11886, arXiv.org.
    6. Daniil Karzanov & Rubén Garzón & Mikhail Terekhov & Caglar Gulcehre & Thomas Raffinot & Marcin Detyniecki, 2025. "Regret-Optimized Portfolio Enhancement through Deep Reinforcement Learning and Future Looking Rewards," Papers 2502.02619, arXiv.org.
    7. Horikawa, Hiroaki & Nakagawa, Kei, 2024. "Relationship between deep hedging and delta hedging: Leveraging a statistical arbitrage strategy," Finance Research Letters, Elsevier, vol. 62(PA).
    8. Yuheng Zheng & Zihan Ding, 2024. "Reinforcement Learning in High-frequency Market Making," Papers 2407.21025, arXiv.org, revised Aug 2024.
    9. David Kuo Chuen Lee & Chong Guan & Yinghui Yu & Qinxu Ding, 2024. "A Comprehensive Review of Generative AI in Finance," FinTech, MDPI, vol. 3(3), pages 1-19, September.
    10. Woosung Koh & Insu Choi & Yuntae Jang & Gimin Kang & Woo Chang Kim, 2023. "Curriculum Learning and Imitation Learning for Model-free Control on Financial Time-series," Papers 2311.13326, arXiv.org, revised Jan 2024.
    11. Xianhua Peng & Chenyin Gong & Xue Dong He, 2023. "Reinforcement Learning for Financial Index Tracking," Papers 2308.02820, arXiv.org, revised Nov 2024.
    12. Pascal François & Geneviève Gauthier & Frédéric Godin & Carlos Octavio Pérez Mendoza, 2024. "Is the difference between deep hedging and delta hedging a statistical arbitrage?," Papers 2407.14736, arXiv.org, revised Oct 2024.
    13. Reilly Pickard & Yuri Lawryshyn, 2023. "Deep Reinforcement Learning for Dynamic Stock Option Hedging: A Review," Mathematics, MDPI, vol. 11(24), pages 1-19, December.
    14. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    15. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    16. Adebayo Oshingbesan & Eniola Ajiboye & Peruth Kamashazi & Timothy Mbaka, 2022. "Model-Free Reinforcement Learning for Asset Allocation," Papers 2209.10458, arXiv.org.
    17. Yichen Luo & Yebo Feng & Jiahua Xu & Paolo Tasca & Yang Liu, 2025. "LLM-Powered Multi-Agent System for Automated Crypto Portfolio Management," Papers 2501.00826, arXiv.org, revised Jan 2025.
    18. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    19. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    20. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2502.11433. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.