
Shai: A large language model for asset management

Authors

Listed:
  • Zhongyang Guo
  • Guanran Jiang
  • Zhongdan Zhang
  • Peng Li
  • Zhefeng Wang
  • Yinchun Wang

Abstract

This paper introduces "Shai", a 10B-scale large language model specifically designed for the asset management industry, built upon an open-source foundation model. Through continued pre-training and fine-tuning on a targeted corpus, Shai demonstrates enhanced performance on tasks relevant to its domain, outperforming baseline models. Our research includes the development of an innovative evaluation framework that integrates professional qualification exams, tailored tasks, open-ended question answering, and safety assessments to comprehensively assess Shai's capabilities. Furthermore, we discuss the challenges and implications of using large language models such as GPT-4 for performance assessment in asset management, and suggest combining automated evaluation with human judgment. By showcasing the potential and versatility of 10B-scale large language models in the financial sector, with strong performance and modest computational requirements, Shai's development aims to provide practical insights and methodologies that assist industry peers in similar endeavors.
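
The recipe the abstract describes, continued pre-training of an open-source base model on a domain corpus followed by fine-tuning, is a standard one. Below is a minimal sketch of the continued pre-training step using the Hugging Face transformers Trainer; the base-model name, corpus file, and hyperparameters are hypothetical placeholders, since the paper's actual base model and training configuration are not stated in this record.

    # Minimal sketch: domain-adaptive continued pre-training of a causal LM.
    # BASE_MODEL and CORPUS are hypothetical placeholders, not the paper's setup.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    BASE_MODEL = "open-source-10b-base"      # hypothetical ~10B foundation model
    CORPUS = "asset_management_corpus.txt"   # hypothetical domain corpus, one document per line

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # Tokenize the raw domain text; the collator below builds next-token labels.
    raw = load_dataset("text", data_files=CORPUS)["train"]
    tokenized = raw.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
        batched=True,
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="shai-cpt",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=16,   # effective batch of 16 sequences per device
            learning_rate=2e-5,
            num_train_epochs=1,
            bf16=True,
        ),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
    )
    trainer.train()

Task-specific fine-tuning and the paper's exam-based evaluation would follow this stage; neither is detailed in this record.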

Suggested Citation

  • Zhongyang Guo & Guanran Jiang & Zhongdan Zhang & Peng Li & Zhefeng Wang & Yinchun Wang, 2023. "Shai: A large language model for asset management," Papers 2312.14203, arXiv.org.
  • Handle: RePEc:arx:papers:2312.14203

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2312.14203
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.
Full references (including those not matched with items on IDEAS) are available on the IDEAS site.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Carolina Camassa, 2023. "Legal NLP Meets MiCAR: Advancing the Analysis of Crypto White Papers," Papers 2310.10333, arXiv.org, revised Oct 2023.
    2. Shengkun Wang & Taoran Ji & Linhan Wang & Yanshen Sun & Shang-Ching Liu & Amit Kumar & Chang-Tien Lu, 2024. "StockTime: A Time Series Specialized Large Language Model Architecture for Stock Price Prediction," Papers 2409.08281, arXiv.org.
    3. Tao Ren & Ruihan Zhou & Jinyang Jiang & Jiafeng Liang & Qinghao Wang & Yijie Peng, 2024. "RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search," Papers 2402.07080, arXiv.org, revised Feb 2024.
    4. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    5. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    6. Masanori Hirano & Kentaro Imajo, 2024. "The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging," Papers 2409.19854, arXiv.org.
    7. Yupeng Cao & Zhi Chen & Qingyun Pei & Fabrizio Dimino & Lorenzo Ausiello & Prashant Kumar & K. P. Subbalakshmi & Papa Momar Ndiaye, 2024. "RiskLabs: Predicting Financial Risk Using Large Language Model Based on Multi-Sources Data," Papers 2404.07452, arXiv.org.
    8. Yang Li & Yangyang Yu & Haohang Li & Zhi Chen & Khaldoun Khashanah, 2023. "TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance," Papers 2309.03736, arXiv.org.
    9. Kelvin J. L. Koa & Yunshan Ma & Ritchie Ng & Tat-Seng Chua, 2024. "Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models," Papers 2402.03659, arXiv.org, revised Feb 2024.
    10. Neng Wang & Hongyang Yang & Christina Dan Wang, 2023. "FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets," Papers 2310.04793, arXiv.org, revised Nov 2023.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2312.14203. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.