STEEL: Singularity-aware Reinforcement Learning

Author

Listed:
  • Xiaohong Chen
  • Zhengling Qi
  • Runzhe Wan

Abstract

Batch reinforcement learning (RL) aims to leverage pre-collected data to find an optimal policy that maximizes the expected total reward in a dynamic environment. Existing methods require an absolute continuity assumption (i.e., that there are no non-overlapping regions) on the distribution induced by target policies with respect to the data distribution, over the states, the actions, or both. We propose a new batch RL algorithm that allows for singularity in both the state and action spaces (i.e., the existence of non-overlapping regions between the offline data distribution and the distribution induced by the target policies) in the setting of an infinite-horizon Markov decision process with continuous states and actions. We call our algorithm STEEL: SingulariTy-awarE rEinforcement Learning. STEEL is motivated by a new error analysis of off-policy evaluation, in which we use the maximum mean discrepancy, together with distributionally robust optimization, to characterize the off-policy evaluation error caused by possible singularity and to enable model extrapolation. By leveraging the idea of pessimism, and under some technical conditions, we derive the first finite-sample regret guarantee for our proposed algorithm under singularity. Compared with existing algorithms, STEEL requires only a minimal data-coverage assumption, which improves the applicability and robustness of batch RL. In addition, we propose a two-step adaptive version of STEEL that is nearly tuning-free. Extensive simulation studies and a (semi-)real experiment on personalized pricing demonstrate the superior performance of our methods in the presence of possible singularity in batch RL.
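
The key device in the abstract, the maximum mean discrepancy (MMD), is what lets the error analysis quantify distribution mismatch even where the target-policy distribution has no overlap with the offline data. The sketch below is a minimal Python illustration, not the authors' implementation: it computes an unbiased estimate of the squared MMD between an offline sample and a sample induced by a hypothetical target policy. The Gaussian-kernel choice, the bandwidth, and the toy data are illustrative assumptions.

    # A minimal sketch, NOT the authors' implementation: an unbiased estimate
    # of the squared maximum mean discrepancy (MMD^2) between two samples,
    # using a Gaussian (RBF) kernel. Kernel, bandwidth, and the toy data
    # below are illustrative assumptions.
    import numpy as np

    def rbf_kernel(x, y, bandwidth=1.0):
        # Pairwise Gaussian kernel matrix between rows of x and rows of y.
        sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    def mmd_squared(sample_p, sample_q, bandwidth=1.0):
        # Unbiased U-statistic estimate of MMD^2(P, Q).
        n, m = len(sample_p), len(sample_q)
        k_pp = rbf_kernel(sample_p, sample_p, bandwidth)
        k_qq = rbf_kernel(sample_q, sample_q, bandwidth)
        k_pq = rbf_kernel(sample_p, sample_q, bandwidth)
        # Drop diagonal (self-similarity) terms in the within-sample averages.
        term_pp = (k_pp.sum() - np.trace(k_pp)) / (n * (n - 1))
        term_qq = (k_qq.sum() - np.trace(k_qq)) / (m * (m - 1))
        return term_pp + term_qq - 2.0 * k_pq.mean()

    # Hypothetical usage: an offline state-action sample vs. a sample induced
    # by a target policy whose support barely overlaps the data.
    rng = np.random.default_rng(0)
    offline_sample = rng.normal(0.0, 1.0, size=(200, 2))
    target_sample = rng.normal(3.0, 1.0, size=(200, 2))
    print(mmd_squared(offline_sample, target_sample))

Because MMD compares kernel mean embeddings rather than density ratios, it remains finite and well defined even when the two samples occupy non-overlapping regions, which is exactly the singular regime the paper targets.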

Suggested Citation

  • Xiaohong Chen & Zhengling Qi & Runzhe Wan, 2023. "STEEL: Singularity-aware Reinforcement Learning," Papers 2301.13152, arXiv.org, revised Jun 2024.
  • Handle: RePEc:arx:papers:2301.13152

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2301.13152
    File Function: Latest version
    Download Restriction: no

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Susan Athey & Stefan Wager, 2021. "Policy Learning With Observational Data," Econometrica, Econometric Society, vol. 89(1), pages 133-161, January.
    2. Chunrong Ai & Yue Fang & Haitian Xie, 2024. "Data-driven Policy Learning for a Continuous Treatment," Papers 2402.02535, arXiv.org.
    3. Eric Mbakop & Max Tabord‐Meehan, 2021. "Model Selection for Treatment Choice: Penalized Welfare Maximization," Econometrica, Econometric Society, vol. 89(2), pages 825-848, March.
    4. Anders Bredahl Kock & Martin Thyrsgaard, 2017. "Optimal sequential treatment allocation," Papers 1705.09952, arXiv.org, revised Aug 2018.
    5. Garbero, Alessandra & Sakos, Grayson & Cerulli, Giovanni, 2023. "Towards data-driven project design: Providing optimal treatment rules for development projects," Socio-Economic Planning Sciences, Elsevier, vol. 89(C).
    6. Firpo, Sergio & Galvao, Antonio F. & Kobus, Martyna & Parker, Thomas & Rosa-Dias, Pedro, 2020. "Loss Aversion and the Welfare Ranking of Policy Interventions," IZA Discussion Papers 13176, Institute of Labor Economics (IZA).
    7. Undral Byambadalai, 2022. "Identification and Inference for Welfare Gains without Unconfoundedness," Papers 2207.04314, arXiv.org.
    8. Giovanni Cerulli, 2020. "Optimal Policy Learning: From Theory to Practice," Papers 2011.04993, arXiv.org.
    9. Toru Kitagawa & Hugo Lopez & Jeff Rowley, 2022. "Stochastic Treatment Choice with Empirical Welfare Updating," Papers 2211.01537, arXiv.org, revised Feb 2023.
    10. Kock, Anders Bredahl & Preinerstorfer, David & Veliyev, Bezirgen, 2023. "Treatment recommendation with distributional targets," Journal of Econometrics, Elsevier, vol. 234(2), pages 624-646.
    11. Shosei Sakaguchi, 2021. "Estimation of Optimal Dynamic Treatment Assignment Rules under Policy Constraints," Papers 2106.05031, arXiv.org, revised Aug 2024.
    12. Nygaard, Vegard M. & Sørensen, Bent E. & Wang, Fan, 2022. "Optimal allocations to heterogeneous agents with an application to stimulus checks," Journal of Economic Dynamics and Control, Elsevier, vol. 138(C).
    13. Toru Kitagawa & Weining Wang & Mengshan Xu, 2022. "Policy Choice in Time Series by Empirical Welfare Maximization," Papers 2205.03970, arXiv.org, revised Jun 2023.
    14. Yu-Chang Chen & Haitian Xie, 2022. "Personalized Subsidy Rules," Papers 2202.13545, arXiv.org, revised Mar 2022.
    15. Johannes Haushofer & Paul Niehaus & Carlos Paramo & Edward Miguel & Michael W. Walker, 2022. "Targeting Impact versus Deprivation," NBER Working Papers 30138, National Bureau of Economic Research, Inc.
    16. Huber, Martin, 2019. "An introduction to flexible methods for policy evaluation," FSES Working Papers 504, Faculty of Economics and Social Sciences, University of Freiburg/Fribourg Switzerland.
    17. Juliano Assunção & Robert McMillan & Joshua Murphy & Eduardo Souza-Rodrigues, 2019. "Optimal Environmental Targeting in the Amazon Rainforest," NBER Working Papers 25636, National Bureau of Economic Research, Inc.
    18. Yuya Sasaki & Takuya Ura, 2020. "Welfare Analysis via Marginal Treatment Effects," Papers 2012.07624, arXiv.org.
    19. Karun Adusumilli & Friedrich Geiecke & Claudio Schilter, 2019. "Dynamically Optimal Treatment Allocation using Reinforcement Learning," Papers 1904.01047, arXiv.org, revised May 2022.
    20. Marianne Bertrand & Bruno Crépon & Alicia Marguerie & Patrick Premand, 2021. "Do Workfare Programs Live Up to Their Promises? Experimental Evidence from Cote D’Ivoire," NBER Working Papers 28664, National Bureau of Economic Research, Inc.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2301.13152. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.