
Improving Portfolio Optimization Results with Bandit Networks

Author

Listed:
  • Gustavo de Freitas Fonseca
  • Lucas Coelho e Silva
  • Paulo André Lima de Castro

Abstract

In Reinforcement Learning (RL), multi-armed bandit (MAB) problems have found applications across diverse domains such as recommender systems, healthcare, and finance. Traditional MAB algorithms typically assume stationary reward distributions, which limits their effectiveness in real-world scenarios characterized by non-stationary dynamics. This paper addresses this limitation by introducing and evaluating novel bandit algorithms designed for non-stationary environments. First, we present the Adaptive Discounted Thompson Sampling (ADTS) algorithm, which enhances adaptability through relaxed discounting and sliding-window mechanisms to better respond to changes in reward distributions. We then extend this approach to the portfolio optimization problem by introducing the Combinatorial Adaptive Discounted Thompson Sampling (CADTS) algorithm, which addresses the computational challenges of combinatorial bandits and improves dynamic asset allocation. Additionally, we propose a novel architecture called Bandit Networks, which integrates the outputs of ADTS and CADTS, thereby mitigating computational limitations in stock selection. Through extensive experiments on real financial market data, we demonstrate the potential of these algorithms and architectures to adapt to dynamic environments and optimize decision-making. For instance, the proposed Bandit Network instances outperform classical portfolio optimization approaches such as the capital asset pricing model, equal weights, risk parity, and Markowitz, with the best network presenting an out-of-sample Sharpe ratio 20% higher than the best-performing classical model.
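As background for the abstract's description of ADTS, the sketch below illustrates the general mechanism it builds on: Thompson Sampling in which past observations are down-weighted by a discount factor and only a recent window of observations is retained, so the posterior can track a non-stationary reward distribution. This is a minimal illustration under an assumed Bernoulli-style reward model; the class name, parameter values, and update rule are illustrative, not the authors' exact formulation.

    # Minimal sketch of discounted, sliding-window Thompson Sampling
    # (illustrative; not the paper's ADTS implementation).
    import numpy as np
    from collections import deque

    class DiscountedSlidingWindowTS:
        def __init__(self, n_arms, gamma=0.95, window=200, seed=0):
            self.n_arms = n_arms
            self.gamma = gamma                    # discount on past evidence
            self.history = deque(maxlen=window)   # keeps only recent (arm, reward)
            self.rng = np.random.default_rng(seed)

        def _posteriors(self):
            # Rebuild Beta(1, 1) posteriors from the window, weighting each
            # observation by gamma**age so older evidence fades away.
            alpha = np.ones(self.n_arms)  # prior + discounted successes
            beta = np.ones(self.n_arms)   # prior + discounted failures
            for age, (arm, r) in enumerate(reversed(self.history)):
                w = self.gamma ** age
                alpha[arm] += w * r
                beta[arm] += w * (1.0 - r)
            return alpha, beta

        def select_arm(self):
            # Sample one plausible mean per arm and play the best sample.
            alpha, beta = self._posteriors()
            return int(np.argmax(self.rng.beta(alpha, beta)))

        def update(self, arm, reward):
            self.history.append((arm, float(reward)))

    # Toy non-stationary environment: the better arm switches at t = 500,
    # a change a stationary bandit would be slow to notice.
    agent = DiscountedSlidingWindowTS(n_arms=2, gamma=0.97, window=150)
    env_rng = np.random.default_rng(1)
    for t in range(1000):
        probs = [0.7, 0.3] if t < 500 else [0.3, 0.7]
        arm = agent.select_arm()
        agent.update(arm, env_rng.binomial(1, probs[arm]))

The out-of-sample Sharpe ratio cited in the abstract is the standard metric: the mean of the portfolio's excess returns divided by their standard deviation, computed on data not used to fit the strategy.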

Suggested Citation

  • Gustavo de Freitas Fonseca & Lucas Coelho e Silva & Paulo André Lima de Castro, 2024. "Improving Portfolio Optimization Results with Bandit Networks," Papers 2410.04217, arXiv.org, revised Oct 2024.
  • Handle: RePEc:arx:papers:2410.04217

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2410.04217
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Arthur Charpentier & Romuald Élie & Carl Remlinger, 2023. "Reinforcement Learning in Economics and Finance," Computational Economics, Springer; Society for Computational Economics, vol. 62(1), pages 425-462, June.
    2. Xiaoguang Huo & Feng Fu, 2017. "Risk-Aware Multi-Armed Bandit Problem with Application to Portfolio Selection," Papers 1709.04415, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Preil, Deniz & Krapp, Michael, 2022. "Bandit-based inventory optimisation: Reinforcement learning in multi-echelon supply chains," International Journal of Production Economics, Elsevier, vol. 252(C).
    2. Roujia Li & Jia Liu, 2022. "Online Portfolio Selection with Long-Short Term Forecasting," SN Operations Research Forum, Springer, vol. 3(4), pages 1-15, December.
    3. Chen, Zengjing & Epstein, Larry G. & Zhang, Guodong, 2023. "A central limit theorem, loss aversion and multi-armed bandits," Journal of Economic Theory, Elsevier, vol. 209(C).
    4. Samuel N. Cohen & Tanut Treetanthiploet, 2019. "Gittins' theorem under uncertainty," Papers 1907.05689, arXiv.org, revised Jun 2021.
    5. Malekipirbazari, Milad & Çavuş, Özlem, 2024. "Index policy for multiarmed bandit problem with dynamic risk measures," European Journal of Operational Research, Elsevier, vol. 312(2), pages 627-640.
    6. Guangsheng Yu & Qin Wang & Caijun Sun & Lam Duc Nguyen & H. M. N. Dilum Bandara & Shiping Chen, 2024. "Maximizing NFT Incentives: References Make You Rich," Papers 2402.06459, arXiv.org.
    7. Dylan Troop & Frédéric Godin & Jia Yuan Yu, 2022. "Best-Arm Identification Using Extreme Value Theory Estimates of the CVaR," JRFM, MDPI, vol. 15(4), pages 1-15, April.


