
Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates

Author

Listed:
  • Kenneth Fortson
  • Natalya Verbitsky-Savitz
  • Emma Kopa
  • Philip Gleason

Abstract

Using data from Mathematica's experimental evaluation of charter schools, this methodological study tests whether four nonexperimental comparison group designs can replicate the impact estimates produced by a well-implemented random assignment study.
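
The study's design is a within-study comparison: a nonexperimental estimator is applied to the same intervention for which a lottery-based experimental benchmark exists, and the two impact estimates are compared. The sketch below is a minimal, hypothetical illustration of that logic on simulated data, using one stand-in estimator (nearest-neighbor matching on the propensity score); it is not the paper's code, and the data-generating process, variable names, and choice of estimator are all assumptions made for illustration.

```python
# Illustrative within-study comparison on simulated data (hypothetical;
# not the paper's analysis). An experimental benchmark is compared with
# a naive nonexperimental estimate and a propensity-score-matched one.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
true_effect = 0.15

# Baseline covariate, e.g. a pre-treatment test score.
pre_score = rng.normal(0.0, 1.0, n)

# Experimental benchmark: random assignment makes treatment independent
# of pre_score, so a simple difference in means is unbiased.
z = rng.integers(0, 2, n)
post_rct = 0.6 * pre_score + true_effect * z + rng.normal(0.0, 1.0, n)
benchmark = post_rct[z == 1].mean() - post_rct[z == 0].mean()

# Nonexperimental setting: selection into treatment depends on pre_score,
# so a naive treated-vs-untreated comparison is biased.
p_treat = 1.0 / (1.0 + np.exp(-pre_score))  # true propensity score
d = rng.binomial(1, p_treat)
post_obs = 0.6 * pre_score + true_effect * d + rng.normal(0.0, 1.0, n)
naive = post_obs[d == 1].mean() - post_obs[d == 0].mean()

# One-to-one nearest-neighbor matching (with replacement) on the
# propensity score; a real study would estimate the score first.
treated = np.where(d == 1)[0]
controls = np.where(d == 0)[0]
nearest = np.abs(p_treat[treated][:, None]
                 - p_treat[controls][None, :]).argmin(axis=1)
matched = post_obs[treated].mean() - post_obs[controls[nearest]].mean()

print(f"true effect        : {true_effect:.3f}")
print(f"experimental       : {benchmark:.3f}")
print(f"naive comparison   : {naive:.3f}")
print(f"matched comparison : {matched:.3f}")
```

On data like these the matched estimate should land close to the experimental benchmark while the naive comparison overstates the effect; the paper asks whether this kind of agreement holds for real charter school data across several comparison group designs.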

Suggested Citation

  • Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a5933cdfb1b, Mathematica Policy Research.
  • Handle: RePEc:mpr:mprres:27f871b5b7b94f3a80278a5933cdfb1b

    Download full text from publisher

    File URL: https://www.mathematica.org/-/media/publications/pdfs/education/expereval_charterschools.pdf
    Download Restriction: no

    References listed on IDEAS

    1. LaLonde, Robert J., 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    2. James J. Heckman & Petra E. Todd, 2009. "A note on adapting propensity score matching and selection models to choice based samples," Econometrics Journal, Royal Economic Society, vol. 12(s1), pages 230-234, January.
    3. Smith, Jeffrey A. & Todd, Petra E., 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    4. repec:mpr:mprres:5699 is not listed on IDEAS
    5. Steven Glazerman & Dan M. Levy & David Myers, 2003. "Nonexperimental Versus Experimental Estimates of Earnings Impacts," The ANNALS of the American Academy of Political and Social Science, vol. 589(1), pages 63-93, September.
    6. Michael Lechner, 2000. "An Evaluation of Public-Sector-Sponsored Continuous Vocational Training Programs in East Germany," Journal of Human Resources, University of Wisconsin Press, vol. 35(2), pages 347-375.
    7. Black, Dan A. & Smith, Jeffrey A., 2004. "How robust is the evidence on the effects of college quality? Evidence from matching," Journal of Econometrics, Elsevier, vol. 121(1-2), pages 99-124.
    8. Peter Z. Schochet, "undated". "Statistical Power for Random Assignment Evaluations of Education Programs (Journal Article)," Mathematica Policy Research Reports a110ff428db043f4a6f55d349, Mathematica Policy Research.
    9. Peikes, Deborah N. & Moreno, Lorenzo & Orzol, Sean Michael, 2008. "Propensity Score Matching: A Note of Caution for Evaluators of Social Programs," The American Statistician, American Statistical Association, vol. 62, pages 222-231, August.
    10. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    11. Abadie, Alberto & Imbens, Guido W., 2011. "Bias-Corrected Matching Estimators for Average Treatment Effects," Journal of Business & Economic Statistics, American Statistical Association, vol. 29(1), pages 1-11.
    12. repec:mpr:mprres:5863 is not listed on IDEAS
    13. Roberto Agodini & Mark Dynarski, 2004. "Are Experiments the Only Option? A Look at Dropout Prevention Programs," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 180-194, February.
    14. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    15. Justin McCrary, 2002. "Using Electoral Cycles in Police Hiring to Estimate the Effect of Police on Crime: Comment," American Economic Review, American Economic Association, vol. 92(4), pages 1236-1243, September.
    16. Alberto Abadie & Guido W. Imbens, 2008. "On the Failure of the Bootstrap for Matching Estimators," Econometrica, Econometric Society, vol. 76(6), pages 1537-1557, November.
    17. repec:mpr:mprres:6210 is not listed on IDEAS
    18. Ron Zimmer & Brian Gill & Kevin Booker & Stephane Lavertu & Tim Sass & John Witte, "undated". "Charter Schools in Eight States: Effects on Achievement, Attainment, Integration, and Competition," Mathematica Policy Research Reports 6f5b31366ca2499e87ac2c1d0, Mathematica Policy Research.
    19. Peter Z. Schochet, "undated". "Is Regression Adjustment Supported by the Neyman Model for Causal Inference? (Presentation)," Mathematica Policy Research Reports abfc39d59c714499b2fe42f68, Mathematica Policy Research.
    20. Robert Bifulco & Helen F. Ladd, 2006. "The Impacts of Charter Schools on Student Achievement: Evidence from North Carolina," Education Finance and Policy, MIT Press, vol. 1(1), pages 50-90, January.
    21. Peter Z. Schochet, "undated". "Is Regression Adjustment Supported By the Neyman Model for Causal Inference?," Mathematica Policy Research Reports 782da2242fba458eb61752f96, Mathematica Policy Research.
    22. repec:mpr:mprres:3694 is not listed on IDEAS
    23. Jesse Rothstein, 2004. "Does Competition Among Public Schools Benefit Students and Taxpayers? A Comment on Hoxby," Working Papers 10, Princeton University, School of Public and International Affairs, Education Research Section.
    24. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.
    25. Robert Bifulco, 2010. "Can Propensity Score Analysis Replicate Estimates Based on Random Assignment in Evaluations of School Choice? A Within-Study Comparison," Center for Policy Research Working Papers 124, Center for Policy Research, Maxwell School, Syracuse University.
    26. Jesse Rothstein, 2007. "Does Competition Among Public Schools Benefit Students and Taxpayers? Comment," American Economic Review, American Economic Association, vol. 97(5), pages 2026-2037, December.
    27. Peter Z. Schochet, "undated". "Statistical Power for Random Assignment Evaluations of Education Programs," Mathematica Policy Research Reports 6749d31ad72d4acf988f7dce5, Mathematica Policy Research.
    28. Thomas Fraker & Rebecca Maynard, 1987. "The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 194-227.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Helen F. Ladd & Charles T. Clotfelter & John B. Holbein, 2017. "The Growing Segmentation of the Charter School Sector in North Carolina," Education Finance and Policy, MIT Press, vol. 12(4), pages 536-563, Fall.
    2. Philip M. Gleason & Christina Clark Tuttle & Brian Gill & Ira Nichols-Barrer & Bing-ru Teh, 2014. "Do KIPP Schools Boost Student Achievement?," Education Finance and Policy, MIT Press, vol. 9(1), pages 36-58, January.
    3. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    4. repec:mpr:mprres:7821 is not listed on IDEAS
    5. Patrick L. Baude & Marcus Casey & Eric A. Hanushek & Gregory R. Phelan & Steven G. Rivkin, 2020. "The Evolution of Charter School Quality," Economica, London School of Economics and Political Science, vol. 87(345), pages 158-189, January.
    6. Caitlin Kearns & Douglas Lee Lauen & Bruce Fuller, 2020. "Competing With Charter Schools: Selection, Retention, and Achievement in Los Angeles Pilot Schools," Evaluation Review, vol. 44(2-3), pages 111-144, April.
    7. repec:mpr:mprres:7927 is not listed on IDEAS
    8. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    9. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    10. Christina Clark Tuttle & Brian Gill & Philip Gleason & Virginia Knechtel & Ira Nichols-Barrer & Alexandra Resch, "undated". "KIPP Middle Schools: Impacts on Achievement and Other Outcomes," Mathematica Policy Research Reports 4e2030d4eef1429395a8dd457, Mathematica Policy Research.
    11. Vivian C. Wong & Peter M. Steiner, 2018. "Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings," Evaluation Review, vol. 42(2), pages 176-213, April.
    12. Ira Nichols-Barrer & Joshua Haimson, 2013. "Impacts of Five Expeditionary Learning Middle Schools on Academic Achievement," Mathematica Policy Research Reports e4330aa3795e4e87a89ea4b52, Mathematica Policy Research.
    13. repec:mpr:mprres:7680 is not listed on IDEAS
    14. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    15. Brian Goesling & Joanne Lee, 2015. "Improving the Rigor of Quasi-Experimental Impact Evaluations: Lessons for Teen Pregnancy Prevention Researchers," Mathematica Policy Research Reports f4f8b24cbf874a3989d87cf89, Mathematica Policy Research.
    16. Brian Gill & Joshua Furgeson & Hanley S. Chiang & Bing-Ru Teh & Joshua Haimson & Natalya Verbitsky-Savitz, "undated". "Replicating Experimental Impact Estimates with Nonexperimental Methods in the Context of Control Crossover," Mathematica Policy Research Reports 2798055510274fa9b4fdfa54b, Mathematica Policy Research.
    17. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    18. Robin Jacob & Marie-Andree Somers & Pei Zhu & Howard Bloom, 2016. "The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions," Evaluation Review, vol. 40(3), pages 167-198, June.
    19. Sarah R. Cohodes, 2016. "Teaching to the Student: Charter School Effectiveness in Spite of Perverse Incentives," Education Finance and Policy, MIT Press, vol. 11(1), pages 1-42, Winter.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. repec:mpr:mprres:7443 is not listed on IDEAS
    2. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    3. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, "undated". "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators," Mathematica Policy Research Reports 1c24988cd5454dd3be51fbc2c, Mathematica Policy Research.
    4. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    5. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    6. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, vol. 40(3), pages 241-276, June.
    7. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, vol. 40(3), pages 199-240, June.
    8. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    9. Heinrich, Carolyn J. & Mueser, Peter R. & Troske, Kenneth & Jeon, Kyung-Seong & Kahvecioglu, Daver C., 2009. "New Estimates of Public Employment and Training Program Net Impacts: A Nonexperimental Evaluation of the Workforce Investment Act Program," IZA Discussion Papers 4569, Institute of Labor Economics (IZA).
    10. Lechner, Michael & Wunsch, Conny, 2013. "Sensitivity of matching-based program evaluations to the availability of control variables," Labour Economics, Elsevier, vol. 21(C), pages 111-121.
    11. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    12. Tymon Słoczyński, 2015. "The Oaxaca–Blinder Unexplained Component as a Treatment Effects Estimator," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 77(4), pages 588-604, August.
    13. Carlos A. Flores & Oscar A. Mitnik, 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," Working Papers 2010-10, University of Miami, Department of Economics.
    14. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    15. Robin Jacob & Marie-Andree Somers & Pei Zhu & Howard Bloom, 2016. "The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions," Evaluation Review, vol. 40(3), pages 167-198, June.
    16. Peter R. Mueser & Kenneth R. Troske & Alexey Gorislavsky, 2007. "Using State Administrative Data to Measure Program Performance," The Review of Economics and Statistics, MIT Press, vol. 89(4), pages 761-783, November.
    17. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    18. Steven Lehrer & Gregory Kordas, 2013. "Matching using semiparametric propensity scores," Empirical Economics, Springer, vol. 44(1), pages 13-45, February.
    19. Ferman, Bruno, 2021. "Matching estimators with few treated and many control observations," Journal of Econometrics, Elsevier, vol. 225(2), pages 295-307.
    20. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    21. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:mpr:mprres:27f871b5b7b94f3a80278a5933cdfb1b. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Joanne Pfleiderer or Cindy George (email available below). General contact details of provider: https://edirc.repec.org/data/mathius.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.