
Testing the Validity of the Single Interrupted Time Series Design

Author

Listed:
  • Katherine Baicker
  • Theodore Svoronos

Abstract

Given the complex relationships between patients’ demographics, underlying health needs, and outcomes, establishing the causal effects of health policy and delivery interventions on health outcomes is often empirically challenging. The single interrupted time series (SITS) design has become a popular evaluation method in contexts where a randomized controlled trial is not feasible. In this paper, we formalize the structure and assumptions underlying the single ITS design and show that it is significantly more vulnerable to confounding than is often acknowledged and, as a result, can produce misleading results. We illustrate this empirically using the Oregon Health Insurance Experiment, showing that an evaluation using a single interrupted time series design instead of the randomized controlled trial would have produced large and statistically significant results of the wrong sign. We discuss the pitfalls of the SITS design, and suggest circumstances in which it is and is not likely to be reliable.
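
For context on the design being tested, the following is a minimal sketch of the canonical single-group segmented regression underlying SITS analyses: a level shift and a slope change at the interruption point, estimated by OLS with Newey-West (HAC) standard errors to allow for serial correlation. The simulated data, variable names, and lag length are illustrative assumptions for this sketch, not the authors' specification or the Oregon Health Insurance Experiment data.

    # Minimal SITS sketch (illustrative only; not the paper's code or data):
    #   Y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t + e_t
    # where post_t = 1 for periods at or after the interruption t0.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    t = np.arange(48)                      # 48 periods of simulated data
    t0 = 24                                # hypothetical interruption point
    post = (t >= t0).astype(int)
    # Outcome with a pre-existing trend and smoothed (autocorrelated) noise
    noise = np.convolve(rng.normal(size=59), np.ones(12) / 12, mode="valid")
    y = 10 + 0.05 * t + 0.8 * post + 0.02 * (t - t0) * post + noise

    df = pd.DataFrame({"y": y, "t": t, "post": post,
                       "t_since": (t - t0) * post})

    # OLS with Newey-West standard errors for autocorrelated disturbances
    fit = smf.ols("y ~ t + post + t_since", data=df).fit(
        cov_type="HAC", cov_kwds={"maxlags": 3})
    print(fit.params)      # 'post' = level shift, 't_since' = slope change

    # The causal interpretation rests on the untestable assumption that the
    # pre-period trend would have continued unchanged absent the intervention;
    # the paper shows this assumption can fail badly in practice.

The comparative (multiple-group) variant adds a control series and group interactions; the Linden (2015) Stata Journal article in the reference list below covers both cases.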

Suggested Citation

  • Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:26080
    Note: EH

    Download full text from publisher

    File URL: http://www.nber.org/papers/w26080.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Steven Glazerman & Dan M. Levy & David Myers, 2003. "Nonexperimental Versus Experimental Estimates of Earnings Impacts," The ANNALS of the American Academy of Political and Social Science, , vol. 589(1), pages 63-93, September.
    2. Ashenfelter, Orley C, 1978. "Estimating the Effect of Training Programs on Earnings," The Review of Economics and Statistics, MIT Press, vol. 60(1), pages 47-57, February.
    3. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    4. Cumby, Robert E & Huizinga, John, 1992. "Testing the Autocorrelation Structure of Disturbances in Ordinary Least Squares and Instrumental Variables Regressions," Econometrica, Econometric Society, vol. 60(1), pages 185-195, January.
    5. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    6. repec:mpr:mprres:2956 is not listed on IDEAS
    7. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, 2015. "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Whether Nonexperimental Estimators are Biased," Mathematica Policy Research Reports 88154a3523cc492dbca5bcb47, Mathematica Policy Research.
    8. Imbens, Guido W. & Lemieux, Thomas, 2008. "Regression discontinuity designs: A guide to practice," Journal of Econometrics, Elsevier, vol. 142(2), pages 615-635, February.
    9. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
    10. Smith, Jeffrey A. & Todd, Petra E., 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    11. Newey, Whitney K. & West, Kenneth D., 1987. "A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, Econometric Society, vol. 55(3), pages 703-708, May.
    12. Espen Bratberg & Astrid Grasdal & Alf Erling Risa, 2002. "Evaluating Social Policy by Experimental and Nonexperimental Methods," Scandinavian Journal of Economics, Wiley Blackwell, vol. 104(1), pages 147-171, March.
    13. Elizabeth Ty Wilde & Robinson Hollister, 2007. "How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 26(3), pages 455-477.
    14. James J. Heckman & Jeffrey A. Smith, 1999. "The Pre-Program Earnings Dip and the Determinants of Participation in a Social Program: Implications for Simple Program Evaluation Strategies," NBER Working Papers 6983, National Bureau of Economic Research, Inc.
    15. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
    16. Amy Finkelstein & Sarah Taubman & Bill Wright & Mira Bernstein & Jonathan Gruber & Joseph P. Newhouse & Heidi Allen & Katherine Baicker, 2012. "The Oregon Health Insurance Experiment: Evidence from the First Year," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 127(3), pages 1057-1106.
    17. repec:mpr:mprres:7443 is not listed on IDEAS
    18. David H. Greenberg & Charles Michalopoulos & Philip K. Robins, 2006. "Do experimental and nonexperimental evaluations give different answers about the effectiveness of government-funded training programs?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 25(3), pages 523-552.
    19. Heckman, James J & Smith, Jeffrey A, 1999. "The Pre-programme Earnings Dip and the Determinants of Participation in a Social Programme. Implications for Simple Programme Evaluation Strategies," Economic Journal, Royal Economic Society, vol. 109(457), pages 313-348, July.
    20. Thomas Fraker & Rebecca Maynard, 1987. "The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 194-227.
    21. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    22. repec:mpr:mprres:2953 is not listed on IDEAS
    23. Sebastian Calonico & Matias D. Cattaneo & Rocío Titiunik, 2015. "Optimal Data-Driven Regression Discontinuity Plots," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(512), pages 1753-1769, December.
    24. Leona S. Aiken & Stephen G. West & David E. Schwalm & James L. Carroll & Shenghwa Hsiung, 1998. "Comparison of a Randomized and Two Quasi-Experimental Designs in a Single Outcome Evaluation," Evaluation Review, , vol. 22(2), pages 207-244, April.
    25. repec:bla:scandj:v:104:y:2002:i:1:p:147-71 is not listed on IDEAS
    26. Andrews, Donald W K, 1993. "Tests for Parameter Instability and Structural Change with Unknown Change Point," Econometrica, Econometric Society, vol. 61(4), pages 821-856, July.
    27. West, S.G. & Duan, N. & Pequegnat, W. & Gaist, P. & Des Jarlais, D.C. & Holtgrave, D. & Szapocznik, J. & Fishbein, M. & Rapkin, B. & Clatts, M. & Mullen, P.D., 2008. "Alternatives to the randomized controlled trial," American Journal of Public Health, American Public Health Association, vol. 98(8), pages 1359-1366.
    28. Stephen H. Bell & Larry L. Orr & John D. Blomquist & Glen G. Cain, 1995. "Program Applicants as a Comparison Group in Evaluating Training Programs: Theory and a Test," Books from Upjohn Press, W.E. Upjohn Institute for Employment Research, number pacg, December.
    29. Gillings, D. & Makuc, D. & Siegel, E., 1981. "Analysis of interrupted time series mortality trends: An example to evaluate regionalized perinatal care," American Journal of Public Health, American Public Health Association, vol. 71(1), pages 38-46.
    30. Ariel Linden, 2015. "Conducting interrupted time-series analysis for single- and multiple-group comparisons," Stata Journal, StataCorp LP, vol. 15(2), pages 480-500, June.
    31. repec:mpr:mprres:3694 is not listed on IDEAS
    32. Anup Malani & Julian Reif, 2010. "Accounting for Anticipation Effects: An Application to Medical Malpractice Tort Reform," NBER Working Papers 16593, National Bureau of Economic Research, Inc.
    33. Andersson, Karolina & Petzold, Max Gustav & Sonesson, Christian & Lonnroth, Knut & Carlsten, Anders, 2006. "Do policy changes in the pharmaceutical reimbursement schedule affect drug expenditures?: Interrupted time series analysis of cost, volume and cost per volume trends in Sweden 1986-2002," Health Policy, Elsevier, vol. 79(2-3), pages 231-243, December.
    34. Buddelmeyer, Hielke & Skoufias, Emmanuel, 2003. "An Evaluation of the Performance of Regression Discontinuity Design on PROGRESA," IZA Discussion Papers 827, Institute of Labor Economics (IZA).
    35. Imbens,Guido W. & Rubin,Donald B., 2015. "Causal Inference for Statistics, Social, and Biomedical Sciences," Cambridge Books, Cambridge University Press, number 9780521885881.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Carrie E. Fry & Laura A. Hatfield, 2020. "Do Methodological Birds of a Feather Flock Together?," Papers 2006.11346, arXiv.org, revised Jul 2020.
    2. Santolini, Raffaella, 2023. "The COVID-19 green certificate’s effect on vaccine uptake in French and Italian regions," Journal of Policy Modeling, Elsevier, vol. 45(5), pages 1036-1057.
    3. Munerah Almulhem & Rasiah Thayakaran & Shahjehan Hanif & Tiffany Gooden & Neil Thomas & Jonathan Hazlehurst & Abd A Tahrani & Wasim Hanif & Krishnarajah Nirantharakumar, 2022. "Ramadan is not associated with increased infection risk in Pakistani and Bangladeshi populations: Findings from controlled interrupted time series analysis of UK primary care data," PLOS ONE, Public Library of Science, vol. 17(1), pages 1-15, January.
    4. Raffaella Santolini, 2022. "The Covid-19 Green Certificate's Effect on Vaccine Uptake in Italian Regions," Working Papers 468, Universita' Politecnica delle Marche (I), Dipartimento di Scienze Economiche e Sociali.
    5. Rau, Tomás & Sarzosa, Miguel & Urzúa, Sergio, 2021. "The children of the missed pill," Journal of Health Economics, Elsevier, vol. 79(C).
    6. Fiorentini, Gianluca & Bruni, Matteo Lippi & Mammi, Irene, 2022. "The same old medicine but cheaper: The impact of patent expiry on physicians’ prescribing behaviour," Journal of Economic Behavior & Organization, Elsevier, vol. 204(C), pages 37-68.
    7. Peter Z. Schochet, 2021. "Statistical Power for Estimating Treatment Effects Using Difference-in-Differences and Comparative Interrupted Time Series Designs with Variation in Treatment Timing," Papers 2102.06770, arXiv.org, revised Oct 2021.
    8. Andreana, Gianmarco & Gualini, Andrea & Martini, Gianmaria & Porta, Flavio & Scotti, Davide, 2021. "The disruptive impact of COVID-19 on air transportation: An ITS econometric analysis," Research in Transportation Economics, Elsevier, vol. 90(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    2. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, , vol. 42(2), pages 147-175, April.
    3. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.
    4. Anders Stenberg & Olle Westerlund, 2015. "The long-term earnings consequences of general vs. specific training of the unemployed," IZA Journal of European Labor Studies, Springer; Forschungsinstitut zur Zukunft der Arbeit GmbH (IZA), vol. 4(1), pages 1-26, December.
    5. Fortson, Kenneth & Gleason, Philip & Kopa, Emma & Verbitsky-Savitz, Natalya, 2015. "Horseshoes, hand grenades, and treatment effects? Reassessing whether nonexperimental estimators are biased," Economics of Education Review, Elsevier, vol. 44(C), pages 100-113.
    6. Henrik Hansen & Ninja Ritter Klejnstrup & Ole Winckler Andersen, 2011. "A Comparison of Model-based and Design-based Impact Evaluations of Interventions in Developing Countries," IFRO Working Paper 2011/16, University of Copenhagen, Department of Food and Resource Economics.
    7. Andrew P. Jaciw, 2016. "Applications of a Within-Study Comparison Approach for Evaluating Bias in Generalized Causal Inferences From Comparison Groups Studies," Evaluation Review, , vol. 40(3), pages 241-276, June.
    8. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, , vol. 40(3), pages 199-240, June.
    9. Robin Jacob & Marie-Andree Somers & Pei Zhu & Howard Bloom, 2016. "The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions," Evaluation Review, , vol. 40(3), pages 167-198, June.
    10. Kenneth Fortson & Natalya Verbitsky-Savitz & Emma Kopa & Philip Gleason, 2012. "Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates," Mathematica Policy Research Reports 27f871b5b7b94f3a80278a593, Mathematica Policy Research.
    11. Richard Blundell & Monica Costa Dias, 2009. "Alternative Approaches to Evaluation in Empirical Microeconomics," Journal of Human Resources, University of Wisconsin Press, vol. 44(3).
    12. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    13. Lechner, Michael & Wunsch, Conny, 2013. "Sensitivity of matching-based program evaluations to the availability of control variables," Labour Economics, Elsevier, vol. 21(C), pages 111-121.
    14. Kenneth Fortson & Philip Gleason & Emma Kopa & Natalya Verbitsky-Savitz, "undated". "Horseshoes, Hand Grenades, and Treatment Effects? Reassessing Bias in Nonexperimental Estimators," Mathematica Policy Research Reports 1c24988cd5454dd3be51fbc2c, Mathematica Policy Research.
    15. Fatih Unlu & Douglas Lee Lauen & Sarah Crittenden Fuller & Tiffany Berglund & Elc Estrera, 2021. "Can Quasi‐Experimental Evaluations That Rely On State Longitudinal Data Systems Replicate Experimental Results?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(2), pages 572-613, March.
    16. Raaum, Oddbjorn & Torp, Hege, 2002. "Labour market training in Norway--effect on earnings," Labour Economics, Elsevier, vol. 9(2), pages 207-247, April.
    17. Burt S. Barnow & Jeffrey Smith, 2015. "Employment and Training Programs," NBER Chapters, in: Economics of Means-Tested Transfer Programs in the United States, Volume 2, pages 127-234, National Bureau of Economic Research, Inc.
    18. Anders Stenberg & Xavier Luna & Olle Westerlund, 2014. "Does Formal Education for Older Workers Increase Earnings? — Evidence Based on Rich Data and Long-term Follow-up," LABOUR, CEIS, vol. 28(2), pages 163-189, June.
    19. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    20. Ari Hyytinen & Jaakko Meriläinen & Tuukka Saarimaa & Otto Toivanen & Janne Tukiainen, 2018. "When does regression discontinuity design work? Evidence from random election outcomes," Quantitative Economics, Econometric Society, vol. 9(2), pages 1019-1051, July.

    More about this item

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General
    • I1 - Health, Education, and Welfare - - Health
    • I13 - Health, Education, and Welfare - - Health - - - Health Insurance, Public and Private

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nbr:nberwo:26080. See general information about how to correct material in RePEc.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.