
Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions

Author

Listed:
  • Jaime Thomas
  • Sarah A. Avellar
  • John Deke
  • Philip Gleason

Abstract

Background: Systematic reviews assess the quality of research on program effectiveness to help decision makers choose among many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method.

Objective: To help systematic reviews develop and refine quality standards, and to support researchers using nonexperimental designs to estimate program effects, we address two questions: (1) How well do the variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups?

Methods: To address Question 1, we examined correlations between baseline characteristics and key outcomes using data from the Early Childhood Longitudinal Study-Birth Cohort. For Question 2, we used simulations to compare two methods of accounting for preexisting differences between intervention and comparison groups: matching and regression adjustment.

Results: A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables and highlights the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates when all relevant covariates are included, even if the model is misspecified and preexisting differences exist between the intervention and comparison groups.
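The regression-adjustment result can be illustrated with a minimal Monte Carlo sketch. This is not the authors' simulation design (which drew on ECLS-B data); the data-generating process, selection rule, and effect size below are illustrative assumptions. Treatment assignment depends on an observed baseline covariate, the outcome depends on that covariate nonlinearly, and the adjustment model is deliberately misspecified as linear in that covariate:

    # Minimal Monte Carlo sketch (illustrative only; not the authors' design).
    # Treatment assignment depends on an observed baseline covariate x; the
    # outcome depends on x nonlinearly; the adjustment model is linear in x.
    import numpy as np

    rng = np.random.default_rng(0)
    true_effect, n, reps = 0.5, 2000, 500
    naive, adjusted = [], []

    for _ in range(reps):
        x = rng.normal(size=n)                # observed baseline covariate
        p = 1.0 / (1.0 + np.exp(-1.5 * x))    # selection: higher x -> more likely treated
        t = rng.binomial(1, p)
        y = true_effect * t + x + 0.5 * x**2 + rng.normal(size=n)

        # Naive difference in means ignores the baseline difference in x.
        naive.append(y[t == 1].mean() - y[t == 0].mean())

        # Regression adjustment: OLS of y on [1, t, x], omitting the x**2 term.
        X = np.column_stack([np.ones(n), t, x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        adjusted.append(beta[1])

    print(f"true effect:               {true_effect:.3f}")
    print(f"naive difference in means: {np.mean(naive):.3f}")     # biased upward by selection
    print(f"regression-adjusted:       {np.mean(adjusted):.3f}")  # close to the true effect

In this deliberately symmetric setup, the naive comparison is biased upward because treated cases start with higher x, while the linear adjustment recovers an estimate close to the true effect even though the squared term is omitted. The paper's companion caution still applies: adjustment can only remove bias arising from covariates that are actually measured and included in the model.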

Suggested Citation

  • Jaime Thomas & Sarah A. Avellar & John Deke & Philip Gleason, 2017. "Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions," Evaluation Review, vol. 41(3), pages 240-279, June.
  • Handle: RePEc:sae:evarev:v:41:y:2017:i:3:p:240-279
    DOI: 10.1177/0193841X17708721

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0193841X17708721
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0193841X17708721?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version of this item that you can access through your library subscription

    References listed on IDEAS

    1. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    2. John Deke & Hanley Chiang, 2017. "The WWC Attrition Standard," Evaluation Review, vol. 41(2), pages 130-154, April.
    3. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, 2015. "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Whether Nonexperimental Estimators are Biased," Mathematica Policy Research Reports 88154a3523cc492dbca5bcb47, Mathematica Policy Research.
    4. Heejung Bang & James M. Robins, 2005. "Doubly Robust Estimation in Missing Data and Causal Inference Models," Biometrics, The International Biometric Society, vol. 61(4), pages 962-973, December.
    5. Zhao, Zhong, 2008. "Sensitivity of propensity score methods to the specifications," Economics Letters, Elsevier, vol. 98(3), pages 309-319, March.
    6. Joshua D. Angrist, 1998. "Estimating the Labor Market Impact of Voluntary Military Service Using Social Security Data on Military Applicants," Econometrica, Econometric Society, vol. 66(2), pages 249-288, March.
    7. Kenneth A. Couch & Robert Bifulco, 2012. "Can Nonexperimental Estimates Replicate Estimates Based on Random Assignment in Evaluations of School Choice? A Within‐Study Comparison," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 31(3), pages 729-751, June.
    8. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
    9. Neil Seftor, "undated". "Raising the Bar," Mathematica Policy Research Reports 135e82100e784dad803fe9c89, Mathematica Policy Research.
    10. Peter Z. Schochet, "undated". "Statistical Power for Random Assignment Evaluations of Education Programs," Mathematica Policy Research Reports 6749d31ad72d4acf988f7dce5, Mathematica Policy Research.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    2. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    3. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    4. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    5. Huber, Martin & Lechner, Michael & Wunsch, Conny, 2013. "The performance of estimators based on the propensity score," Journal of Econometrics, Elsevier, vol. 175(1), pages 1-21.
    6. Frölich, Markus & Huber, Martin & Wiesenfarth, Manuel, 2017. "The finite sample performance of semi- and non-parametric estimators for treatment effects and policy evaluation," Computational Statistics & Data Analysis, Elsevier, vol. 115(C), pages 91-102.
    7. Graham, Bryan S. & Pinto, Cristine Campos de Xavier, 2022. "Semiparametrically efficient estimation of the average linear regression function," Journal of Econometrics, Elsevier, vol. 226(1), pages 115-138.
    8. Jacob Alex Klerman, 2017. "Special Issue Editor’s Overview Essay," Evaluation Review, vol. 41(3), pages 175-182, June.
    9. William Rhodes, 2010. "Heterogeneous Treatment Effects: What Does a Regression Estimate?," Evaluation Review, vol. 34(4), pages 334-361, August.
    10. Yang Tang & Thomas D. Cook, 2018. "Statistical Power for the Comparative Regression Discontinuity Design With a Pretest No-Treatment Control Function: Theory and Evidence From the National Head Start Impact Study," Evaluation Review, vol. 42(1), pages 71-110, February.
    11. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    12. Huber, Martin & Lechner, Michael & Wunsch, Conny, 2010. "How to Control for Many Covariates? Reliable Estimators Based on the Propensity Score," IZA Discussion Papers 5268, Institute of Labor Economics (IZA).
    13. Emma Persson & Sofie Persson & Ulf-G. Gerdtham & Katarina Steen Carlsson, 2019. "Effect of type 1 diabetes on school performance in a dynamic world: new analysis exploring Swedish register data," Applied Economics, Taylor & Francis Journals, vol. 51(24), pages 2606-2622, May.
    14. Caitlin Kearns & Douglas Lee Lauen & Bruce Fuller, 2020. "Competing With Charter Schools: Selection, Retention, and Achievement in Los Angeles Pilot Schools," Evaluation Review, vol. 44(2-3), pages 111-144, April.
    15. Christina Clark Tuttle & Philip Gleason & Virginia Knechtel & Ira Nichols-Barrer & Kevin Booker & Gregory Chojnacki & Thomas Coen & Lisbeth Goble, "undated". "Understanding the Effect of KIPP as it Scales: Volume I, Impacts on Achievement and Other Outcomes," Mathematica Policy Research Reports 7d8e94c5e77a4a9c8bf09000d, Mathematica Policy Research.
    16. Alberto Abadie & Anish Agarwal & Raaz Dwivedi & Abhin Shah, 2024. "Doubly Robust Inference in Causal Latent Factor Models," Papers 2402.11652, arXiv.org, revised Apr 2024.
    17. Robin Jacob & Marie-Andree Somers & Pei Zhu & Howard Bloom, 2016. "The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions," Evaluation Review, vol. 40(3), pages 167-198, June.
    18. Naihobe Gonzalez & Johanna Lacoe & Armando Yañez & Alicia Demers & Sarah Crissey & Natalie Larkin, "undated". "Oakland Unite 2017-2018 Strategy Evaluation: Life Coaching and Employment and Education Support for Youth at Risk of Violence," Mathematica Policy Research Reports 75d308710973407d8f2a3f25c, Mathematica Policy Research.
    19. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    20. Cousineau, Martin & Verter, Vedat & Murphy, Susan A. & Pineau, Joelle, 2023. "Estimating causal effects with optimization-based methods: A review and empirical comparison," European Journal of Operational Research, Elsevier, vol. 304(2), pages 367-380.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:evarev:v:41:y:2017:i:3:p:240-279. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact SAGE Publications.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.