Printed from https://ideas.repec.org/p/arx/papers/2301.00790.html

Online learning techniques for prediction of temporal tabular datasets with regime changes

Authors

  • Thomas Wong
  • Mauricio Barahona

Abstract

The application of deep learning to non-stationary temporal datasets can lead to overfitted models that underperform under regime changes. In this work, we propose a modular machine learning pipeline for ranking predictions on temporal panel datasets that is robust under regime changes. The modularity of the pipeline allows the use of different models, including Gradient Boosting Decision Trees (GBDTs) and Neural Networks, with and without feature engineering. We evaluate our framework on financial data for stock portfolio prediction, and find that GBDT models with dropout display high performance, robustness and generalisability with reduced complexity and computational cost. We then demonstrate how online learning techniques, which require no retraining of models, can be used post-prediction to enhance the results. First, we show that dynamic feature projection improves robustness by reducing drawdown during regime changes. Second, we demonstrate that dynamical model ensembling, based on the selection of models with good recent performance, leads to improved Sharpe and Calmar ratios of out-of-sample predictions. Finally, we evaluate the robustness of our pipeline across different data splits and random seeds, and find good reproducibility.
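The post-prediction online learning step described in the abstract can be pictured as a rolling model-selection rule. The sketch below is a minimal illustration only, not the authors' implementation; the Sharpe and Calmar ratios follow their standard definitions, while the window length, the number of models retained, and all function and variable names are assumptions made for the example.

    # Minimal sketch (not the authors' code): dynamic model ensembling by
    # selecting, at each era, the models with the best recent Sharpe ratio.
    # Window length, top_k and all names are illustrative assumptions.
    import numpy as np

    def sharpe(returns):
        """Mean divided by standard deviation of per-era returns
        (annualisation omitted for brevity)."""
        r = np.asarray(returns, dtype=float)
        return r.mean() / (r.std() + 1e-12)

    def calmar(returns):
        """Mean per-era return divided by the maximum drawdown of the
        cumulative return path."""
        r = np.asarray(returns, dtype=float)
        cum = np.cumsum(r)
        drawdown = np.maximum.accumulate(cum) - cum
        return r.mean() / (drawdown.max() + 1e-12)

    def select_models(model_returns, lookback=52, top_k=3):
        """model_returns: dict mapping model name -> list of past per-era
        returns. Returns the names of the top_k models ranked by Sharpe
        ratio over the last `lookback` eras."""
        recent = {m: sharpe(r[-lookback:]) for m, r in model_returns.items()}
        return sorted(recent, key=recent.get, reverse=True)[:top_k]

At each new era, the predictions of the selected models would be averaged to form the ensemble output; the paper's actual selection criterion, window and weighting scheme may differ.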

Suggested Citation

  • Thomas Wong & Mauricio Barahona, 2022. "Online learning techniques for prediction of temporal tabular datasets with regime changes," Papers 2301.00790, arXiv.org, revised Aug 2023.
  • Handle: RePEc:arx:papers:2301.00790

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2301.00790
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    2. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    3. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    4. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    5. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    6. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    7. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    8. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    9. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    10. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
11. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer; The Psychometric Society, vol. 83(1), pages 67-88, March.
    12. Zichen Lu & Ying Yan, 2024. "Temperature Control of Fuel Cell Based on PEI-DDPG," Energies, MDPI, vol. 17(7), pages 1-19, April.
    13. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    14. Wang, Xuan & Shu, Gequn & Tian, Hua & Wang, Rui & Cai, Jinwen, 2020. "Operation performance comparison of CCHP systems with cascade waste heat recovery systems by simulation and operation optimisation," Energy, Elsevier, vol. 206(C).
    15. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    16. Parvez Farazi, Nahid & Zou, Bo & Tulabandhula, Theja, 2022. "Dynamic On-Demand Crowdshipping Using Constrained and Heuristics-Embedded Double Dueling Deep Q-Network," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 166(C).
    17. Louback, Eduardo & Biswas, Atriya & Machado, Fabricio & Emadi, Ali, 2024. "A review of the design process of energy management systems for dual-motor battery electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 193(C).
    18. Brammer, Janis & Lutz, Bernhard & Neumann, Dirk, 2022. "Permutation flow shop scheduling with multiple lines and demand plans using reinforcement learning," European Journal of Operational Research, Elsevier, vol. 299(1), pages 75-86.
    19. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    20. Tri-Hai Nguyen & Laihyuk Park, 2023. "HAP-Assisted RSMA-Enabled Vehicular Edge Computing: A DRL-Based Optimization Framework," Mathematics, MDPI, vol. 11(10), pages 1-23, May.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2301.00790. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.