Shai: A large language model for asset management
References listed on IDEAS
- Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Carolina Camassa, 2023. "Legal NLP Meets MiCAR: Advancing the Analysis of Crypto White Papers," Papers 2310.10333, arXiv.org, revised Oct 2023.
- Shengkun Wang & Taoran Ji & Linhan Wang & Yanshen Sun & Shang-Ching Liu & Amit Kumar & Chang-Tien Lu, 2024. "StockTime: A Time Series Specialized Large Language Model Architecture for Stock Price Prediction," Papers 2409.08281, arXiv.org.
- Tao Ren & Ruihan Zhou & Jinyang Jiang & Jiafeng Liang & Qinghao Wang & Yijie Peng, 2024. "RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search," Papers 2402.07080, arXiv.org, revised Feb 2024.
- Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
- Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
- Masanori Hirano & Kentaro Imajo, 2024. "The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging," Papers 2409.19854, arXiv.org.
- Yupeng Cao & Zhi Chen & Qingyun Pei & Fabrizio Dimino & Lorenzo Ausiello & Prashant Kumar & K. P. Subbalakshmi & Papa Momar Ndiaye, 2024. "RiskLabs: Predicting Financial Risk Using Large Language Model Based on Multi-Sources Data," Papers 2404.07452, arXiv.org.
- Yang Li & Yangyang Yu & Haohang Li & Zhi Chen & Khaldoun Khashanah, 2023. "TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance," Papers 2309.03736, arXiv.org.
- Kelvin J. L. Koa & Yunshan Ma & Ritchie Ng & Tat-Seng Chua, 2024. "Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models," Papers 2402.03659, arXiv.org, revised Feb 2024.
- Neng Wang & Hongyang Yang & Christina Dan Wang, 2023. "FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets," Papers 2310.04793, arXiv.org, revised Nov 2023.
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-AIN-2024-01-22 (Artificial Intelligence)
- NEP-BIG-2024-01-22 (Big Data)
- NEP-CMP-2024-01-22 (Computational Economics)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2312.14203. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the arXiv administrators. General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.