Printed from https://ideas.repec.org/p/osf/osfxxx/8nsw4_v1.html

Uncovering Individualised Treatment Effect: Evidence from Educational Trials

Authors

Listed:
  • Xiao, ZhiMin (University of Exeter)
  • Hauser, Oliver P (University of Exeter)
  • Kirkwood, Charlie (University of Exeter)
  • Li, Daniel Z.
  • Jones, Benjamin
  • Higgins, Steve

Abstract

The use of large-scale Randomised Controlled Trials (RCTs) is fast becoming "the gold standard" of testing the causal effects of policy, social, and educational interventions. RCTs are typically evaluated — and ultimately judged — by the economic, educational, and statistical significance of the Average Treatment Effect (ATE) in the study sample. However, many interventions have heterogeneous treatment effects across different individuals, not captured by the ATE. One way to identify heterogeneous treatment effects is to conduct subgroup analyses, such as focusing on low-income Free School Meal pupils as required for projects funded by the Education Endowment Foundation (EEF) in England. These subgroup analyses, as we demonstrate in 48 EEF-funded RCTs involving over 200,000 students, are usually not standardised across studies and offer flexible degrees of freedom to researchers, potentially leading to mixed results. Here, we develop and deploy a machine-learning and regression-based framework for systematic estimation of Individualised Treatment Effect (ITE), which can show where a seemingly ineffective and uninformative intervention worked, for whom, and by how much. Our findings have implications for decision-makers in education, public health, and medical trials.
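The abstract describes a machine-learning and regression-based framework for estimating Individualised Treatment Effects rather than a single ATE. The paper's exact pipeline is not reproduced here, but the general idea can be sketched with a minimal T-learner on simulated trial data: fit separate outcome models for the treated and control arms, then take the difference of their predictions as each individual's estimated effect. All variable names and the data-generating process below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated RCT: the treatment effect varies with a covariate x
# (true effect = 2*x), so the ATE alone (~0 here) hides who benefits.
n = 2000
x = rng.uniform(-1, 1, n)
t = rng.integers(0, 2, n)                         # randomised treatment arm
y = 1.0 + 0.5 * x + t * (2.0 * x) + rng.normal(0, 0.1, n)

# T-learner sketch: one outcome model per arm, difference = ITE estimate.
X = np.column_stack([np.ones(n), x])              # design matrix [1, x]
beta_treat, *_ = np.linalg.lstsq(X[t == 1], y[t == 1], rcond=None)
beta_ctrl, *_ = np.linalg.lstsq(X[t == 0], y[t == 0], rcond=None)
ite = X @ beta_treat - X @ beta_ctrl              # individualised effect per pupil

# The mean of the ITEs recovers the (near-zero) ATE, while the
# individual estimates track the true heterogeneous effect 2*x.
```

In practice, richer learners (e.g. causal forests, as in the Wager & Athey reference below) replace the linear fits, but the logic of contrasting arm-specific predictions is the same.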

Suggested Citation

  • Xiao, ZhiMin & Hauser, Oliver P & Kirkwood, Charlie & Li, Daniel Z. & Jones, Benjamin & Higgins, Steve, 2020. "Uncovering Individualised Treatment Effect: Evidence from Educational Trials," OSF Preprints 8nsw4_v1, Center for Open Science.
  • Handle: RePEc:osf:osfxxx:8nsw4_v1
    DOI: 10.31219/osf.io/8nsw4_v1

    Download full text from publisher

    File URL: https://osf.io/download/5e276f30edceab024282db78/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/8nsw4_v1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Patrick M. Schnell & Qi Tang & Walter W. Offen & Bradley P. Carlin, 2016. "A Bayesian credible subgroups approach to identifying patient subgroups with positive treatment effects," Biometrics, The International Biometric Society, vol. 72(4), pages 1026-1036, December.
    2. Stefan Wager & Susan Athey, 2018. "Estimation and Inference of Heterogeneous Treatment Effects using Random Forests," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 113(523), pages 1228-1242, July.
    3. Susan Athey & Guido W. Imbens, 2017. "The State of Applied Econometrics: Causality and Policy Evaluation," Journal of Economic Perspectives, American Economic Association, vol. 31(2), pages 3-32, Spring.
    4. Charles F. Manski, 2019. "Treatment Choice With Trial Data: Statistical Decision Theory Should Supplant Hypothesis Testing," The American Statistician, Taylor & Francis Journals, vol. 73(S1), pages 296-304, March.
    5. Susan Athey & Guido W. Imbens, 2019. "Machine Learning Methods That Economists Should Know About," Annual Review of Economics, Annual Reviews, vol. 11(1), pages 685-725, August.
    6. Todd Rogers & Avi Feller, 2018. "Reducing student absences at scale by targeting parents’ misbeliefs," Nature Human Behaviour, Nature, vol. 2(5), pages 335-342, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Michael C Knaus, 2022. "Double machine learning-based programme evaluation under unconfoundedness [Econometric methods for program evaluation]," The Econometrics Journal, Royal Economic Society, vol. 25(3), pages 602-627.
    2. Aysegül Kayaoglu & Ghassan Baliki & Tilman Brück & Melodie Al Daccache & Dorothee Weiffen, 2023. "How to conduct impact evaluations in humanitarian and conflict settings," HiCN Working Papers 387, Households in Conflict Network.
    3. Yamin Ahmad & Adam Check & Ming Chien Lo, 2024. "Unit Roots in Macroeconomic Time Series: A Comparison of Classical, Bayesian and Machine Learning Approaches," Computational Economics, Springer;Society for Computational Economics, vol. 63(6), pages 2139-2173, June.
    4. Anna Baiardi & Andrea A. Naghi, 2021. "The Value Added of Machine Learning to Causal Inference: Evidence from Revisited Studies," Papers 2101.00878, arXiv.org.
    5. Daniel Goller & Tamara Harrer & Michael Lechner & Joachim Wolff, 2021. "Active labour market policies for the long-term unemployed: New evidence from causal machine learning," Papers 2106.10141, arXiv.org, revised May 2023.
    6. Joshua B. Gilbert & Zachary Himmelsbach & James Soland & Mridul Joshi & Benjamin W. Domingue, 2024. "Estimating Heterogeneous Treatment Effects with Item-Level Outcome Data: Insights from Item Response Theory," Papers 2405.00161, arXiv.org, revised Jan 2025.
    7. Raval, Devesh & Rosenbaum, Ted & Wilson, Nathan E., 2021. "How do machine learning algorithms perform in predicting hospital choices? evidence from changing environments," Journal of Health Economics, Elsevier, vol. 78(C).
    8. Carlos Fernández-Loría & Foster Provost & Jesse Anderton & Benjamin Carterette & Praveen Chandar, 2023. "A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation," Information Systems Research, INFORMS, vol. 34(2), pages 786-803, June.
    9. Gabriel Okasa, 2022. "Meta-Learners for Estimation of Causal Effects: Finite Sample Cross-Fit Performance," Papers 2201.12692, arXiv.org.
    10. Carlos Fernández-Loría & Foster Provost & Jesse Anderton & Benjamin Carterette & Praveen Chandar, 2020. "A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation," Papers 2004.11532, arXiv.org, revised Apr 2022.
    11. Anna Baiardi & Andrea A. Naghi, 2021. "The Value Added of Machine Learning to Causal Inference: Evidence from Revisited Studies," Tinbergen Institute Discussion Papers 21-001/V, Tinbergen Institute.
    12. Harsh Parikh & Carlos Varjao & Louise Xu & Eric Tchetgen Tchetgen, 2022. "Validating Causal Inference Methods," Papers 2202.04208, arXiv.org, revised Jul 2022.
    13. Netta Barak‐Corren & Yoav Kan‐Tor & Nelson Tebbe, 2022. "Examining the effects of antidiscrimination laws on children in the foster care and adoption systems," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 19(4), pages 1003-1066, December.
    14. Gabriel Okasa & Kenneth A. Younge, 2022. "Sample Fit Reliability," Papers 2209.06631, arXiv.org.
    15. Lechner, Michael, 2018. "Modified Causal Forests for Estimating Heterogeneous Causal Effects," IZA Discussion Papers 12040, Institute of Labor Economics (IZA).
    16. Hayakawa, Kazunobu & Keola, Souknilanh & Silaphet, Korrakoun & Yamanouchi, Kenta, 2022. "Estimating the impacts of international bridges on foreign firm locations: a machine learning approach," IDE Discussion Papers 847, Institute of Developing Economies, Japan External Trade Organization(JETRO).
    17. Labro, Eva & Lang, Mark & Omartian, James D., 2023. "Predictive analytics and centralization of authority," Journal of Accounting and Economics, Elsevier, vol. 75(1).
    18. Tsang, Andrew, 2021. "Uncovering Heterogeneous Regional Impacts of Chinese Monetary Policy," MPRA Paper 110703, University Library of Munich, Germany.
    19. Kyle Colangelo & Ying-Ying Lee, 2019. "Double debiased machine learning nonparametric inference with continuous treatments," CeMMAP working papers CWP54/19, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    20. Daniel Goller, 2023. "Analysing a built-in advantage in asymmetric darts contests using causal machine learning," Annals of Operations Research, Springer, vol. 325(1), pages 649-679, June.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:osfxxx:8nsw4_v1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: OSF (email available below). General contact details of provider: https://osf.io/preprints/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.