Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales
Author
Masahiro Kato
Abstract
Suggested Citation
Masahiro Kato, 2020. "Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales," Papers 2006.06982, arXiv.org.
Download full text from publisher
References listed on IDEAS
- Jinyong Hahn & Keisuke Hirano & Dean Karlan, 2011. "Adaptive Experimental Design Using the Propensity Score," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 29(1), pages 96-108, January.
- Hahn, Jinyong & Hirano, Keisuke & Karlan, Dean, 2011. "Adaptive Experimental Design Using the Propensity Score," Journal of Business & Economic Statistics, American Statistical Association, vol. 29(1), pages 96-108.
- Hahn, Jinyong & Hirano, Keisuke & Karlan, Dean, 2008. "Adaptive Experimental Design Using the Propensity Score," MPRA Paper 8315, University Library of Munich, Germany.
- Jinyong Hahn & Keisuke Hirano & Dean Karlan, 2009. "Adaptive Experimental Design Using the Propensity Score," Working Papers 969, Economic Growth Center, Yale University.
- Hahn, Jinyong & Hirano, Keisuke & Karlan, Dean, 2009. "Adaptive Experimental Design Using the Propensity Score," Working Papers 59, Yale University, Department of Economics.
- Hahn, Jinyong & Hirano, Keisuke & Karlan, Dean S., 2009. "Adaptive Experimental Design Using the Propensity Score," Center Discussion Papers 47107, Yale University, Economic Growth Center.
- Zhengyuan Zhou & Susan Athey & Stefan Wager, 2023. "Offline Multi-Action Policy Learning: Generalization and Optimization," Operations Research, INFORMS, vol. 71(1), pages 148-183, January.
- Zhou, Zhengyuan & Athey, Susan & Wager, Stefan, 2018. "Offline Multi-Action Policy Learning: Generalization and Optimization," Research Papers 3734, Stanford University, Graduate School of Business.
- Zhengyuan Zhou & Susan Athey & Stefan Wager, 2018. "Offline Multi-Action Policy Learning: Generalization and Optimization," Papers 1810.04778, arXiv.org, revised Nov 2018.
- Yingqi Zhao & Donglin Zeng & A. John Rush & Michael R. Kosorok, 2012. "Estimating Individualized Treatment Rules Using Outcome Weighted Learning," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 107(499), pages 1106-1118, September.
- Victor Chernozhukov & Denis Chetverikov & Mert Demirer & Esther Duflo & Christian Hansen & Whitney Newey & James Robins, 2018. "Double/debiased machine learning for treatment and structural parameters," Econometrics Journal, Royal Economic Society, vol. 21(1), pages 1-68, February.
- Victor Chernozhukov & Denis Chetverikov & Mert Demirer & Esther Duflo & Christian Hansen & Whitney Newey & James Robins, 2017. "Double/Debiased Machine Learning for Treatment and Structural Parameters," NBER Working Papers 23564, National Bureau of Economic Research, Inc.
- Victor Chernozhukov & Denis Chetverikov & Mert Demirer & Esther Duflo & Christian Hansen & Whitney K. Newey & James Robins, 2017. "Double/debiased machine learning for treatment and structural parameters," CeMMAP working papers CWP28/17, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
- Victor Chernozhukov & Denis Chetverikov & Mert Demirer & Esther Duflo & Christian Hansen & Whitney K. Newey & James Robins, 2017. "Double/debiased machine learning for treatment and structural parameters," CeMMAP working papers 28/17, Institute for Fiscal Studies.
- Masahiro Kato & Masatoshi Uehara & Shota Yasui, 2020. "Off-Policy Evaluation and Learning for External Validity under a Covariate Shift," Papers 2002.11642, arXiv.org, revised Oct 2020.
- Keisuke Hirano & Guido W. Imbens & Geert Ridder, 2003. "Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score," Econometrica, Econometric Society, vol. 71(4), pages 1161-1189, July.
- Keisuke Hirano & Guido W. Imbens & Geert Ridder, 2000. "Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score," NBER Technical Working Papers 0251, National Bureau of Economic Research, Inc.
- Guido Imbens, 2000. "Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score," Econometric Society World Congress 2000 Contributed Papers 1166, Econometric Society.
- Yusuke Narita & Shota Yasui & Kohei Yata, 2018. "Efficient Counterfactual Learning from Bandit Feedback," Cowles Foundation Discussion Papers 2155, Cowles Foundation for Research in Economics, Yale University.
- Athey, Susan & Wager, Stefan, 2017. "Efficient Policy Learning," Research Papers 3506, Stanford University, Graduate School of Business.
- Mert Demirer & Vasilis Syrgkanis & Greg Lewis & Victor Chernozhukov, 2019. "Semi-Parametric Efficient Policy Learning with Continuous Actions," Papers 1905.10116, arXiv.org, revised Jul 2019.
- Mert Demirer & Vasilis Syrgkanis & Greg Lewis & Victor Chernozhukov, 2019. "Semi-Parametric Efficient Policy Learning with Continuous Actions," CeMMAP working papers CWP34/19, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Masahiro Kato & Shota Yasui & Kenichiro McAlinn, 2020. "The Adaptive Doubly Robust Estimator for Policy Evaluation in Adaptive Experiments and a Paradox Concerning Logging Policy," Papers 2010.03792, arXiv.org, revised Jun 2021.
- Masahiro Kato & Kenshi Abe & Kaito Ariu & Shota Yasui, 2020. "A Practical Guide of Off-Policy Evaluation for Bandit Problems," Papers 2010.12470, arXiv.org.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Masahiro Kato & Masatoshi Uehara & Shota Yasui, 2020. "Off-Policy Evaluation and Learning for External Validity under a Covariate Shift," Papers 2002.11642, arXiv.org, revised Oct 2020.
- Masahiro Kato & Shota Yasui & Kenichiro McAlinn, 2020. "The Adaptive Doubly Robust Estimator for Policy Evaluation in Adaptive Experiments and a Paradox Concerning Logging Policy," Papers 2010.03792, arXiv.org, revised Jun 2021.
- Masahiro Kato, 2021. "Adaptive Doubly Robust Estimator from Non-stationary Logging Policy under a Convergence of Average Probability," Papers 2102.08975, arXiv.org, revised Mar 2021.
- Andrew Bennett & Nathan Kallus, 2020. "Efficient Policy Learning from Surrogate-Loss Classification Reductions," Papers 2002.05153, arXiv.org.
- Rahul Singh & Liyuan Xu & Arthur Gretton, 2020. "Kernel Methods for Causal Functions: Dose, Heterogeneous, and Incremental Response Curves," Papers 2010.04855, arXiv.org, revised Oct 2022.
- Masahiro Kato & Yusuke Kaneko, 2020. "Off-Policy Evaluation of Bandit Algorithm from Dependent Samples under Batch Update Policy," Papers 2010.13554, arXiv.org.
- Michael C. Knaus, 2021. "A double machine learning approach to estimate the effects of musical practice on student’s skills," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 184(1), pages 282-300, January.
- Knaus, Michael C., 2018. "A Double Machine Learning Approach to Estimate the Effects of Musical Practice on Student's Skills," IZA Discussion Papers 11547, Institute of Labor Economics (IZA).
- Michael C. Knaus, 2018. "A Double Machine Learning Approach to Estimate the Effects of Musical Practice on Student's Skills," Papers 1805.10300, arXiv.org, revised Jan 2019.
- Masahiro Kato & Kenshi Abe & Kaito Ariu & Shota Yasui, 2020. "A Practical Guide of Off-Policy Evaluation for Bandit Problems," Papers 2010.12470, arXiv.org.
- Michael C Knaus, 2022. "Double machine learning-based programme evaluation under unconfoundedness [Econometric methods for program evaluation]," The Econometrics Journal, Royal Economic Society, vol. 25(3), pages 602-627.
- Knaus, Michael C., 2020. "Double Machine Learning based Program Evaluation under Unconfoundedness," Economics Working Paper Series 2004, University of St. Gallen, School of Economics and Political Science.
- Knaus, Michael C., 2020. "Double Machine Learning Based Program Evaluation under Unconfoundedness," IZA Discussion Papers 13051, Institute of Labor Economics (IZA).
- Michael C. Knaus, 2020. "Double Machine Learning based Program Evaluation under Unconfoundedness," Papers 2003.03191, arXiv.org, revised Jun 2022.
- Susan Athey & Stefan Wager, 2021. "Policy Learning With Observational Data," Econometrica, Econometric Society, vol. 89(1), pages 133-161, January.
- Susan Athey & Stefan Wager, 2017. "Policy Learning with Observational Data," Papers 1702.02896, arXiv.org, revised Sep 2020.
- Huber, Martin, 2019. "An introduction to flexible methods for policy evaluation," FSES Working Papers 504, Faculty of Economics and Social Sciences, University of Freiburg/Fribourg Switzerland.
- Martin Huber, 2019. "An introduction to flexible methods for policy evaluation," Papers 1910.00641, arXiv.org.
- Nathan Kallus, 2022. "Treatment Effect Risk: Bounds and Inference," Papers 2201.05893, arXiv.org, revised Jul 2022.
- Davide Viviano, 2019. "Policy Targeting under Network Interference," Papers 1906.10258, arXiv.org, revised Apr 2024.
- Kenshi Abe & Yusuke Kaneko, 2020. "Off-Policy Exploitability-Evaluation in Two-Player Zero-Sum Markov Games," Papers 2007.02141, arXiv.org, revised Dec 2020.
- Chunrong Ai & Yue Fang & Haitian Xie, 2024. "Data-driven Policy Learning for Continuous Treatments," Papers 2402.02535, arXiv.org, revised Nov 2024.
- Mert Demirer & Vasilis Syrgkanis & Greg Lewis & Victor Chernozhukov, 2019. "Semi-Parametric Efficient Policy Learning with Continuous Actions," CeMMAP working papers CWP34/19, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
- Mert Demirer & Vasilis Syrgkanis & Greg Lewis & Victor Chernozhukov, 2019. "Semi-Parametric Efficient Policy Learning with Continuous Actions," Papers 1905.10116, arXiv.org, revised Jul 2019.
- Harrison H. Li & Art B. Owen, 2023. "Double machine learning and design in batch adaptive experiments," Papers 2309.15297, arXiv.org.
- Masahiro Kato & Akihiro Oga & Wataru Komatsubara & Ryo Inokuchi, 2024. "Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices," Papers 2403.03589, arXiv.org, revised Jun 2024.
- Alexandre Belloni & Victor Chernozhukov & Denis Chetverikov & Christian Hansen & Kengo Kato, 2018. "High-dimensional econometrics and regularized GMM," CeMMAP working papers CWP35/18, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
- Alexandre Belloni & Victor Chernozhukov & Denis Chetverikov & Christian Hansen & Kengo Kato, 2018. "High-Dimensional Econometrics and Regularized GMM," Papers 1806.01888, arXiv.org, revised Jun 2018.
- Kyle Colangelo & Ying-Ying Lee, 2019. "Double debiased machine learning nonparametric inference with continuous treatments," CeMMAP working papers CWP72/19, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
More about this item
NEP fields
This paper has been announced in the following NEP Reports:
- NEP-ECM-2020-07-27 (Econometrics)
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2006.06982. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.