
Batch mode reinforcement learning based on the synthesis of artificial trajectories

Author

Listed:
  • Raphael Fonteneau
  • Susan Murphy
  • Louis Wehenkel
  • Damien Ernst

Abstract

In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of “artificial trajectories” from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning. Copyright Springer Science+Business Media New York 2013
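
The abstract above only sketches the approach. As a rough, hypothetical illustration of what synthesizing an "artificial trajectory" from a batch of one-step transitions can look like, the Python sketch below stitches together sampled transitions (state, action, reward, next state) that best match a candidate policy, and scores the policy by the cumulated reward along the rebuilt trajectory. The function name, the Euclidean distance and the single-use rule for transitions are assumptions made for illustration; they are not taken from the article.

# Illustrative sketch (not from the article itself): synthesizing one
# "artificial trajectory" by stitching together one-step transitions
# (x, u, r, x_next) drawn from a batch sample, without building a model
# or fitting a value function. The Euclidean metric and the
# "use each transition at most once" rule are illustrative assumptions.
import numpy as np

def synthesize_trajectory(sample, policy, x0, horizon):
    """Greedily concatenate sampled transitions into an artificial trajectory.

    sample  : list of tuples (x, u, r, x_next) with x, u, x_next as 1-D arrays
    policy  : function mapping a state to an action (1-D array)
    x0      : initial state (1-D array)
    horizon : number of steps of the artificial trajectory
    Returns the cumulated reward along the synthesized trajectory.
    """
    used = set()                      # each sampled transition is used at most once
    x = np.asarray(x0, dtype=float)
    total_reward = 0.0
    for _ in range(horizon):
        u = np.asarray(policy(x), dtype=float)
        # pick the unused transition whose (state, action) pair is closest
        # to the current artificial state and the policy's action
        best_i, best_d = None, np.inf
        for i, (xs, us, r, xn) in enumerate(sample):
            if i in used:
                continue
            d = np.linalg.norm(x - xs) + np.linalg.norm(u - us)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is None:
            break                     # sample exhausted
        xs, us, r, xn = sample[best_i]
        used.add(best_i)
        total_reward += r             # accumulate the reward of the chosen transition
        x = np.asarray(xn, dtype=float)  # jump to its successor state
    return total_reward

Averaging the returns of several such synthesized trajectories would give a Monte Carlo style estimate of the policy's performance without any function approximator, which is the spirit of the approach described in the abstract.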

Suggested Citation

  • Raphael Fonteneau & Susan Murphy & Louis Wehenkel & Damien Ernst, 2013. "Batch mode reinforcement learning based on the synthesis of artificial trajectories," Annals of Operations Research, Springer, vol. 208(1), pages 383-416, September.
  • Handle: RePEc:spr:annopr:v:208:y:2013:i:1:p:383-416:10.1007/s10479-012-1248-5
    DOI: 10.1007/s10479-012-1248-5

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s10479-012-1248-5
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s10479-012-1248-5?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. S. A. Murphy, 2003. "Optimal dynamic treatment regimes," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 65(2), pages 331-355, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Ruitu Xu & Yifei Min & Tianhao Wang & Zhaoran Wang & Michael I. Jordan & Zhuoran Yang, 2023. "Finding Regularized Competitive Equilibria of Heterogeneous Agent Macroeconomic Models with Reinforcement Learning," Papers 2303.04833, arXiv.org.
    2. Stefano Bromuri, 2019. "Dynamic heuristic acceleration of linearly approximated SARSA(λ): using ant colony optimization to learn heuristics dynamically," Journal of Heuristics, Springer, vol. 25(6), pages 901-932, December.
    3. Shosei Sakaguchi, 2024. "Robust Learning for Optimal Dynamic Treatment Regimes with Observational Data," Papers 2404.00221, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jin Wang & Donglin Zeng & D. Y. Lin, 2022. "Semiparametric single-index models for optimal treatment regimens with censored outcomes," Lifetime Data Analysis: An International Journal Devoted to Statistical Methods and Applications for Time-to-Event Data, Springer, vol. 28(4), pages 744-763, October.
    2. Ji Liu, 2024. "Education legislations that equalize: a study of compulsory schooling law reforms in post-WWII United States," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-12, December.
    3. Durlauf, Steven N. & Navarro, Salvador & Rivers, David A., 2016. "Model uncertainty and the effect of shall-issue right-to-carry laws on crime," European Economic Review, Elsevier, vol. 81(C), pages 32-67.
    4. Yusuke Narita, 2018. "Toward an Ethical Experiment," Cowles Foundation Discussion Papers 2127, Cowles Foundation for Research in Economics, Yale University.
    5. Xin Qiu & Donglin Zeng & Yuanjia Wang, 2018. "Estimation and evaluation of linear individualized treatment rules to guarantee performance," Biometrics, The International Biometric Society, vol. 74(2), pages 517-528, June.
    6. Yiwang Zhou & Peter X.K. Song & Haoda Fu, 2021. "Net benefit index: Assessing the influence of a biomarker for individualized treatment rules," Biometrics, The International Biometric Society, vol. 77(4), pages 1254-1264, December.
    7. Ruoqing Zhu & Ying-Qi Zhao & Guanhua Chen & Shuangge Ma & Hongyu Zhao, 2017. "Greedy outcome weighted tree learning of optimal personalized treatment rules," Biometrics, The International Biometric Society, vol. 73(2), pages 391-400, June.
    8. Zeyu Bian & Erica E. M. Moodie & Susan M. Shortreed & Sahir Bhatnagar, 2023. "Variable selection in regression‐based estimation of dynamic treatment regimes," Biometrics, The International Biometric Society, vol. 79(2), pages 988-999, June.
    9. Thomas A. Murray & Peter F. Thall & Ying Yuan & Sarah McAvoy & Daniel R. Gomez, 2017. "Robust Treatment Comparison Based on Utilities of Semi-Competing Risks in Non-Small-Cell Lung Cancer," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(517), pages 11-23, January.
    10. Yusuke Narita, 2018. "Experiment-as-Market: Incorporating Welfare into Randomized Controlled Trials," Cowles Foundation Discussion Papers 2127r, Cowles Foundation for Research in Economics, Yale University, revised May 2019.
    11. Michael Lechner & Stephan Wiehler, 2013. "Does the Order and Timing of Active Labour Market Programmes Matter?," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 75(2), pages 180-212, April.
    12. Shuze Chen & David Simchi-Levi & Chonghuan Wang, 2024. "Experimenting on Markov Decision Processes with Local Treatments," Papers 2407.19618, arXiv.org, revised Oct 2024.
    13. Vasilis Syrgkanis & Ruohan Zhan, 2023. "Post Reinforcement Learning Inference," Papers 2302.08854, arXiv.org, revised May 2024.
    14. Benjamin Rich & Erica E. M. Moodie & David A. Stephens, 2016. "Influence Re-weighted G-Estimation," The International Journal of Biostatistics, De Gruyter, vol. 12(1), pages 157-177, May.
    15. Alisa Stephens & Luke Keele & Marshall Joffe, 2016. "Generalized Structural Mean Models for Evaluating Depression as a Post-treatment Effect Modifier of a Jobs Training Intervention," Journal of Causal Inference, De Gruyter, vol. 4(2), pages 1-17, September.
    16. Peng Wu & Donglin Zeng & Haoda Fu & Yuanjia Wang, 2020. "On using electronic health records to improve optimal treatment rules in randomized trials," Biometrics, The International Biometric Society, vol. 76(4), pages 1075-1086, December.
    17. Alexander R. Luedtke & Mark J. van der Laan, 2016. "Optimal Individualized Treatments in Resource-Limited Settings," The International Journal of Biostatistics, De Gruyter, vol. 12(1), pages 283-303, May.
    18. Weibin Mo & Yufeng Liu, 2022. "Efficient learning of optimal individualized treatment rules for heteroscedastic or misspecified treatment‐free effect models," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(2), pages 440-472, April.
    19. Stephen Chick & Martin Forster & Paolo Pertile, 2017. "A Bayesian decision theoretic model of sequential experimentation with delayed response," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(5), pages 1439-1462, November.
    20. Peter Biernot & Erica E. M. Moodie, 2010. "A Comparison of Variable Selection Approaches for Dynamic Treatment Regimes," The International Journal of Biostatistics, De Gruyter, vol. 6(1), pages 1-20, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:annopr:v:208:y:2013:i:1:p:383-416:10.1007/s10479-012-1248-5. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.