Printed from https://ideas.repec.org/p/ehl/lserod/113310.html

Dynamic causal effects evaluation in A/B testing with a reinforcement learning framework

Author

Listed:
  • Shi, Chengchun
  • Wang, Xiaoyu
  • Luo, Shikai
  • Zhu, Hongtu
  • Ye, Jieping
  • Song, Rui

Abstract

A/B testing, or online experimentation, is a standard business strategy for comparing a new product with an old one in the pharmaceutical, technological, and traditional industries. Major challenges arise in online experiments on two-sided marketplace platforms (e.g., Uber), where there is only one unit that receives a sequence of treatments over time. In such experiments, the treatment at a given time impacts the current outcome as well as future outcomes. The aim of this article is to introduce a reinforcement learning framework for carrying out A/B testing in these experiments while characterizing the long-term treatment effects. Our proposed testing procedure allows for sequential monitoring and online updating. It is generally applicable to a variety of treatment designs in different industries. In addition, we systematically investigate the theoretical properties (e.g., size and power) of our testing procedure. Finally, we apply our framework to both simulated data and a real-world data example obtained from a technological company to illustrate its advantage over current practice. A Python implementation of our test is available at https://github.com/callmespring/CausalRL. Supplementary materials for this article are available online.
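The authors' actual test is in the linked repository. As a toy illustration only of the carryover problem the abstract describes (not the paper's method), the sketch below simulates a single unit whose treatment today shifts tomorrow's state, then compares the long-run average reward under the old and new product with a naive two-sample z-statistic; all names, dynamics, and effect sizes here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-unit experiment: one marketplace receives a sequence
# of treatments, and today's treatment carries over into tomorrow's state.
def simulate(policy, T=5000, carry=0.5):
    state, rewards = 0.0, []
    for t in range(T):
        a = policy(t)                        # 0 = old product, 1 = new product
        reward = state + 0.2 * a + rng.normal(0, 1)
        state = carry * state + 0.1 * a      # treatment affects future states too
        rewards.append(reward)
    return np.array(rewards)

old = simulate(lambda t: 0)  # always old product
new = simulate(lambda t: 1)  # always new product

# Naive long-run (average-reward) treatment effect and z-statistic.
diff = new.mean() - old.mean()
se = np.sqrt(new.var(ddof=1) / len(new) + old.var(ddof=1) / len(old))
z = diff / se
```

In this toy model the long-run effect (about 0.4) exceeds the immediate effect (0.2) precisely because of the carryover term; a test that ignores the state dynamics would capture only part of the effect, which is the gap the paper's reinforcement learning framework is designed to close.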

Suggested Citation

  • Shi, Chengchun & Wang, Xiaoyu & Luo, Shikai & Zhu, Hongtu & Ye, Jieping & Song, Rui, 2022. "Dynamic causal effects evaluation in A/B testing with a reinforcement learning framework," LSE Research Online Documents on Economics 113310, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:113310

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/113310/
    File Function: Open access version.
    Download Restriction: no

    References listed on IDEAS

    1. Ying-Qi Zhao & Donglin Zeng & Eric B. Laber & Michael R. Kosorok, 2015. "New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(510), pages 583-598, June.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Shi, Chengchun & Wan, Runzhe & Song, Ge & Luo, Shikai & Zhu, Hongtu & Song, Rui, 2023. "A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets," LSE Research Online Documents on Economics 117174, London School of Economics and Political Science, LSE Library.
    2. Li, Ting & Shi, Chengchun & Lu, Zhaohua & Li, Yi & Zhu, Hongtu, 2024. "Evaluating dynamic conditional quantile treatment effects with applications in ridesharing," LSE Research Online Documents on Economics 122488, London School of Economics and Political Science, LSE Library.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jingxiang Chen & Yufeng Liu & Donglin Zeng & Rui Song & Yingqi Zhao & Michael R. Kosorok, 2016. "Comment," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(515), pages 942-947, July.
    2. Xin Qiu & Donglin Zeng & Yuanjia Wang, 2018. "Estimation and evaluation of linear individualized treatment rules to guarantee performance," Biometrics, The International Biometric Society, vol. 74(2), pages 517-528, June.
    3. Shosei Sakaguchi, 2021. "Estimation of Optimal Dynamic Treatment Assignment Rules under Policy Constraints," Papers 2106.05031, arXiv.org, revised Aug 2024.
    4. Qingxia Chen & Fan Zhang & Ming-Hui Chen & Xiuyu Julie Cong, 2020. "Estimation of treatment effects and model diagnostics with two-way time-varying treatment switching: an application to a head and neck study," Lifetime Data Analysis: An International Journal Devoted to Statistical Methods and Applications for Time-to-Event Data, Springer, vol. 26(4), pages 685-707, October.
    5. Kara E. Rudolph & Iván Díaz, 2022. "When the ends do not justify the means: Learning who is predicted to have harmful indirect effects," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 185(S2), pages 573-589, December.
    6. Baqun Zhang & Min Zhang, 2018. "C‐learning: A new classification framework to estimate optimal dynamic treatment regimes," Biometrics, The International Biometric Society, vol. 74(3), pages 891-899, September.
    7. Michael C. Knaus & Michael Lechner & Anthony Strittmatter, 2022. "Heterogeneous Employment Effects of Job Search Programs: A Machine Learning Approach," Journal of Human Resources, University of Wisconsin Press, vol. 57(2), pages 597-636.
    8. Toru Kitagawa & Guanyi Wang, 2021. "Who should get vaccinated? Individualized allocation of vaccines over SIR network," CeMMAP working papers CWP28/21, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    9. Toru Kitagawa & Shosei Sakaguchi & Aleksey Tetenov, 2021. "Constrained Classification and Policy Learning," Papers 2106.12886, arXiv.org, revised Jul 2023.
    10. Qian Guan & Eric B. Laber & Brian J. Reich, 2016. "Comment," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(515), pages 936-942, July.
    11. Chengchun Shi & Sheng Zhang & Wenbin Lu & Rui Song, 2022. "Statistical inference of the value function for reinforcement learning in infinite‐horizon settings," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(3), pages 765-793, July.
    12. Kitagawa, Toru & Wang, Guanyi, 2023. "Who should get vaccinated? Individualized allocation of vaccines over SIR network," Journal of Econometrics, Elsevier, vol. 232(1), pages 109-131.
    13. Zhang, Haixiang & Huang, Jian & Sun, Liuquan, 2020. "A rank-based approach to estimating monotone individualized two treatment regimes," Computational Statistics & Data Analysis, Elsevier, vol. 151(C).
    14. Yunan Wu & Lan Wang, 2021. "Resampling‐based confidence intervals for model‐free robust inference on optimal treatment regimes," Biometrics, The International Biometric Society, vol. 77(2), pages 465-476, June.
    15. Shi, Chengchun & Luo, Shikai & Le, Yuan & Zhu, Hongtu & Song, Rui, 2022. "Statistically efficient advantage learning for offline reinforcement learning in infinite horizons," LSE Research Online Documents on Economics 115598, London School of Economics and Political Science, LSE Library.
    16. Zhou, Yunzhe & Qi, Zhengling & Shi, Chengchun & Li, Lexin, 2023. "Optimizing pessimism in dynamic treatment regimes: a Bayesian learning approach," LSE Research Online Documents on Economics 118233, London School of Economics and Political Science, LSE Library.
    17. Toru Kitagawa & Guanyi Wang, 2020. "Who should get vaccinated? Individualized allocation of vaccines over SIR network," CeMMAP working papers CWP59/20, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    18. Shi, Chengchun & Wan, Runzhe & Song, Ge & Luo, Shikai & Zhu, Hongtu & Song, Rui, 2023. "A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets," LSE Research Online Documents on Economics 117174, London School of Economics and Political Science, LSE Library.
    19. Kristin A. Linn & Eric B. Laber & Leonard A. Stefanski, 2017. "Interactive -Learning for Quantiles," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(518), pages 638-649, April.
    20. Hyung Park & Eva Petkova & Thaddeus Tarpey & R. Todd Ogden, 2023. "Functional additive models for optimizing individualized treatment rules," Biometrics, The International Biometric Society, vol. 79(1), pages 113-126, March.

    More about this item

    Keywords

    A/B testing; online experiment; reinforcement learning; causal inference; sequential testing; online updating; Research Support Fund; NSF-DMS-1555244; NSF-DMS-2113637;

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ehl:lserod:113310. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Doing so allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: LSERO Manager (email available below). General contact details of provider: https://edirc.repec.org/data/lsepsuk.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.