Robust offline reinforcement learning with heavy-tailed rewards

Author

Listed:
  • Zhu, Jin
  • Wan, Runzhe
  • Qi, Zhengling
  • Luo, Shikai
  • Shi, Chengchun

Abstract

This paper endeavors to augment the robustness of offline reinforcement learning (RL) in scenarios laden with heavy-tailed rewards, a prevalent circumstance in real-world applications. We propose two algorithmic frameworks, ROAM and ROOM, for robust off-policy evaluation (OPE) and offline policy optimization (OPO), respectively. Central to our frameworks is the strategic incorporation of the median-of-means method with offline RL, enabling straightforward uncertainty estimation for the value function estimator. This not only adheres to the principle of pessimism in OPO but also adeptly manages heavy-tailed rewards. Theoretical results and extensive experiments demonstrate that our two frameworks outperform existing methods when the logged dataset exhibits heavy-tailed reward distributions. The implementation of the proposal is available at https://github.com/Mamba413/ROOM.
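
To make the median-of-means (MoM) idea in the abstract concrete, below is a minimal Python sketch: the logged rewards are split into disjoint folds, a value estimate is computed on each fold, and the fold estimates are aggregated by their median, whose spread also yields a plug-in uncertainty that can serve as a pessimism penalty. The function name `median_of_means`, the Student-t toy data, and the spread-based penalty are illustrative assumptions, not the paper's ROAM/ROOM implementation (see the linked repository for that).

```python
# Minimal median-of-means (MoM) sketch in the spirit of the abstract:
# split logged samples into K disjoint folds, estimate the value on each
# fold, and aggregate with the median. The per-fold plug-in estimator and
# the pessimism penalty below are illustrative assumptions only.
import numpy as np

def median_of_means(samples: np.ndarray, n_folds: int, rng=None) -> tuple[float, float]:
    """Return the MoM estimate and a simple spread-based uncertainty."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(samples))          # shuffle before splitting
    folds = np.array_split(samples[idx], n_folds)
    fold_means = np.array([f.mean() for f in folds])
    estimate = np.median(fold_means)
    # Median absolute deviation of the fold means as a crude uncertainty proxy.
    uncertainty = np.median(np.abs(fold_means - estimate))
    return float(estimate), float(uncertainty)

# Toy example: heavy-tailed rewards (Student-t, df=2, so infinite variance)
# collected under a fixed behavior policy; the true mean reward is 0.
rng = np.random.default_rng(0)
rewards = rng.standard_t(df=2, size=10_000)

naive = rewards.mean()                            # sensitive to extreme rewards
mom, unc = median_of_means(rewards, n_folds=20, rng=0)
pessimistic = mom - unc                           # pessimism: lower-bound the value
print(f"naive={naive:.3f}  MoM={mom:.3f}  pessimistic={pessimistic:.3f}")
```

The intuition is that the median of the fold means ignores the few folds contaminated by extreme rewards, so it concentrates around the true value even under heavy tails, whereas the naive sample mean can be dragged far off by a single outlier; subtracting the fold spread then gives a conservative lower bound on the value, in the spirit of pessimism in OPO.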

Suggested Citation

  • Zhu, Jin & Wan, Runzhe & Qi, Zhengling & Luo, Shikai & Shi, Chengchun, 2024. "Robust offline reinforcement learning with heavy-tailed rewards," LSE Research Online Documents on Economics 122740, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:122740

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/122740/
    File Function: Open access version.
    Download Restriction: no

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gao, Yuhe & Shi, Chengchun & Song, Rui, 2023. "Deep spectral Q-learning with application to mobile health," LSE Research Online Documents on Economics 119445, London School of Economics and Political Science, LSE Library.
2. Luo, Lan & Shi, Chengchun & Wang, Jitao & Wu, Zhenke & Li, Lexin, 2025. "Multivariate dynamic mediation analysis under a reinforcement learning framework," LSE Research Online Documents on Economics 127112, London School of Economics and Political Science, LSE Library.
    3. Zhang, Yingying & Shi, Chengchun & Luo, Shikai, 2023. "Conformal off-policy prediction," LSE Research Online Documents on Economics 118250, London School of Economics and Political Science, LSE Library.
    4. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust decision-making under risk and ambiguity," Papers 2104.12573, arXiv.org, revised Oct 2021.
    5. Luo, Shikai & Yang, Ying & Shi, Chengchun & Yao, Fang & Ye, Jieping & Zhu, Hongtu, 2024. "Policy evaluation for temporal and/or spatial dependent experiments," LSE Research Online Documents on Economics 122741, London School of Economics and Political Science, LSE Library.
    6. Andrew J. Keith & Darryl K. Ahner, 2021. "A survey of decision making and optimization under uncertainty," Annals of Operations Research, Springer, vol. 300(2), pages 319-353, May.
    7. Bakker, Hannah & Dunke, Fabian & Nickel, Stefan, 2020. "A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice," Omega, Elsevier, vol. 96(C).
    8. Maximilian Blesch & Philipp Eisenhauer, 2023. "Robust Decision-Making under Risk and Ambiguity," Rationality and Competition Discussion Paper Series 463, CRC TRR 190 Rationality and Competition.
    9. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust Decision-Making Under Risk and Ambiguity," ECONtribute Discussion Papers Series 104, University of Bonn and University of Cologne, Germany.
    10. Hao, Meiling & Su, Pingfan & Hu, Liyuan & Szabo, Zoltan & Zhao, Qianyu & Shi, Chengchun, 2024. "Forward and backward state abstractions for off-policy evaluation," LSE Research Online Documents on Economics 124074, London School of Economics and Political Science, LSE Library.
    11. Rasouli, Mohammad & Saghafian, Soroush, 2018. "Robust Partially Observable Markov Decision Processes," Working Paper Series rwp18-027, Harvard University, John F. Kennedy School of Government.
    12. Varagapriya, V & Singh, Vikas Vikram & Lisser, Abdel, 2024. "Rank-1 transition uncertainties in constrained Markov decision processes," European Journal of Operational Research, Elsevier, vol. 318(1), pages 167-178.
13. Shie Mannor & Ofir Mebel & Huan Xu, 2016. "Robust MDPs with k-Rectangular Uncertainty," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1484-1509, November.
    14. Arthur Flajolet & Sébastien Blandin & Patrick Jaillet, 2018. "Robust Adaptive Routing Under Uncertainty," Operations Research, INFORMS, vol. 66(1), pages 210-229, January.
    15. Saghafian, Soroush, 2018. "Ambiguous partially observable Markov decision processes: Structural results and applications," Journal of Economic Theory, Elsevier, vol. 178(C), pages 1-35.
    16. Bren, Austin & Saghafian, Soroush, 2018. "Data-Driven Percentile Optimization for Multi-Class Queueing Systems with Model Ambiguity: Theory and Application," Working Paper Series rwp18-008, Harvard University, John F. Kennedy School of Government.
    17. Daido Kido, 2023. "Locally Asymptotically Minimax Statistical Treatment Rules Under Partial Identification," Papers 2311.08958, arXiv.org.
    18. Michael Jong Kim, 2016. "Robust Control of Partially Observable Failing Systems," Operations Research, INFORMS, vol. 64(4), pages 999-1014, August.
    19. Nicole Bauerle & Alexander Glauner, 2020. "Distributionally Robust Markov Decision Processes and their Connection to Risk Measures," Papers 2007.13103, arXiv.org.
    20. Eli Gutin & Daniel Kuhn & Wolfram Wiesemann, 2015. "Interdiction Games on Markovian PERT Networks," Management Science, INFORMS, vol. 61(5), pages 999-1017, May.

    More about this item

    Keywords

    Rights Retention;

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ehl:lserod:122740. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: LSERO Manager (email available below). General contact details of provider: https://edirc.repec.org/data/lsepsuk.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.