IDEAS home Printed from https://ideas.repec.org/a/sae/evarev/v45y2021i5p195-227.html

The Role of Sample Size to Attain Statistically Comparable Groups – A Required Data Preprocessing Step to Estimate Causal Effects With Observational Data

Author

Listed:
  • Ana Kolar
  • Peter M. Steiner

Abstract

Background: Propensity score methods provide data preprocessing tools to remove selection bias and attain statistically comparable groups - the first requirement when attempting to estimate causal effects with observational data. Although guidelines exist on how to remove selection bias when the groups being compared are large, little is known about how to proceed when one of the groups, for example, the treated group, is particularly small, or when the study also includes many observed covariates relative to the treated group's sample size. Objectives: This article investigates whether propensity score methods can help us to remove selection bias in studies with small treated groups and a large number of observed covariates. Measures: We perform a series of simulation studies examining factors such as the sample size ratio of control to treated units, the number of observed covariates, and the initial imbalances in observed covariates between the groups being compared, that is, selection bias. Results: The results demonstrate that selection bias can be removed with small treated samples, but under different conditions than in studies with large treated samples. For example, a study design with 10 observed covariates and eight treated units will require the control group to be at least 10 times larger than the treated group, whereas a study with 500 treated units will require a control group only at least twice as large. Conclusions: To confirm the usefulness of the simulation study results for practice, we carry out an empirical evaluation with real data. The study provides insights for practice and directions for future research.
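The preprocessing pipeline the abstract describes - estimate propensity scores, match each treated unit to a similar control, and check covariate balance - can be sketched as follows. This is a minimal illustration, not the authors' simulation design: the covariate distributions, the 8-treated / 80-control sample sizes (the 1:10 ratio mentioned in the abstract), and the greedy 1:1 nearest-neighbor matching rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small treated group and a larger control pool, with
# selection bias: treated units tend to have higher covariate values.
n_t, n_c, k = 8, 80, 3  # 1:10 treated-to-control ratio, 3 covariates
X_c = rng.normal(0.0, 1.0, size=(n_c, k))
X_t = rng.normal(0.5, 1.0, size=(n_t, k))  # shifted mean -> initial imbalance
X = np.vstack([X_t, X_c])
z = np.concatenate([np.ones(n_t), np.zeros(n_c)])  # treatment indicator

# Estimate propensity scores with a plain logistic regression fitted by
# gradient ascent (kept dependency-free; any logit fitter would do).
Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # add intercept column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w += 0.05 * Xb.T @ (z - p) / len(z)
ps = 1.0 / (1.0 + np.exp(-Xb @ w))

# Greedy 1:1 nearest-neighbor matching on the propensity score, without
# replacement: each treated unit takes the closest unused control.
ps_t, ps_c = ps[:n_t], ps[n_t:]
used, matches = set(), []
for i in np.argsort(-ps_t):  # match hardest-to-match (high-score) units first
    order = np.argsort(np.abs(ps_c - ps_t[i]))
    j = next(j for j in order if j not in used)
    used.add(j)
    matches.append((i, j))

def smd(a, b):
    """Absolute standardized mean difference for one covariate."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled

# Balance check: standardized mean differences before vs. after matching.
X_cm = X_c[[j for _, j in matches]]
before = [smd(X_t[:, d], X_c[:, d]) for d in range(k)]
after = [smd(X_t[:, d], X_cm[:, d]) for d in range(k)]
print("SMD before matching:", np.round(before, 2))
print("SMD after matching: ", np.round(after, 2))
```

The balance check is the decision point the paper emphasizes: only if the post-matching standardized mean differences are small are the groups "statistically comparable," and with very small treated groups this typically requires a much larger control reservoir to draw matches from.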

Suggested Citation

  • Ana Kolar & Peter M. Steiner, 2021. "The Role of Sample Size to Attain Statistically Comparable Groups – A Required Data Preprocessing Step to Estimate Causal Effects With Observational Data," Evaluation Review, vol. 45(5), pages 195-227, October.
  • Handle: RePEc:sae:evarev:v:45:y:2021:i:5:p:195-227
    DOI: 10.1177/0193841X211053937

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0193841X211053937
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0193841X211053937?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to a page where you can use your library subscription to access this item.

    References listed on IDEAS

    1. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    2. Petra E. Todd & Jeffrey A. Smith, 2001. "Reconciling Conflicting Evidence on the Performance of Propensity-Score Matching Methods," American Economic Review, American Economic Association, vol. 91(2), pages 112-118, May.
    3. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    4. Donald B. Rubin, 2005. "Causal Inference Using Potential Outcomes: Design, Modeling, Decisions," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 322-331, March.
    5. Imbens,Guido W. & Rubin,Donald B., 2015. "Causal Inference for Statistics, Social, and Biomedical Sciences," Cambridge Books, Cambridge University Press, number 9780521885881, October.
    6. Ho, Daniel & Imai, Kosuke & King, Gary & Stuart, Elizabeth A., 2011. "MatchIt: Nonparametric Preprocessing for Parametric Causal Inference," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 42(i08).
    7. Zhong Zhao, 2004. "Using Matching to Estimate Treatment Effects: Data Requirements, Matching Metrics, and Monte Carlo Evidence," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 91-107, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Harsh Parikh & Cynthia Rudin & Alexander Volfovsky, 2018. "MALTS: Matching After Learning to Stretch," Papers 1811.07415, arXiv.org, revised Jun 2023.
    2. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    3. Arun Advani & Toru Kitagawa & Tymon Słoczyński, 2019. "Mostly harmless simulations? Using Monte Carlo studies for estimator selection," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 34(6), pages 893-910, September.
    4. Ferman, Bruno, 2021. "Matching estimators with few treated and many control observations," Journal of Econometrics, Elsevier, vol. 225(2), pages 295-307.
    5. Gary King & Christopher Lucas & Richard A. Nielsen, 2017. "The Balance‐Sample Size Frontier in Matching Methods for Causal Inference," American Journal of Political Science, John Wiley & Sons, vol. 61(2), pages 473-489, April.
    6. Hainmueller, Jens, 2012. "Entropy Balancing for Causal Effects: A Multivariate Reweighting Method to Produce Balanced Samples in Observational Studies," Political Analysis, Cambridge University Press, vol. 20(1), pages 25-46, January.
    7. Timothy B. Armstrong & Michal Kolesár, 2021. "Finite‐Sample Optimal Estimation and Inference on Average Treatment Effects Under Unconfoundedness," Econometrica, Econometric Society, vol. 89(3), pages 1141-1177, May.
    8. Advani, Arun & Sloczynski, Tymon, 2013. "Mostly Harmless Simulations? On the Internal Validity of Empirical Monte Carlo Studies," IZA Discussion Papers 7874, Institute of Labor Economics (IZA).
    9. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," CID Working Papers 364, Center for International Development at Harvard University.
    10. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    11. Harsh Parikh & Carlos Varjao & Louise Xu & Eric Tchetgen Tchetgen, 2022. "Validating Causal Inference Methods," Papers 2202.04208, arXiv.org, revised Jul 2022.
    12. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    13. Augurzky, Boris & Kluve, Jochen, 2004. "Assessing the performance of matching algorithms when selection into treatment is strong," RWI Discussion Papers 21, RWI - Leibniz-Institut für Wirtschaftsforschung.
    14. Tymon Słoczyński, 2015. "The Oaxaca–Blinder Unexplained Component as a Treatment Effects Estimator," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 77(4), pages 588-604, August.
    15. Sloczynski, Tymon, 2018. "A General Weighted Average Representation of the Ordinary and Two-Stage Least Squares Estimands," IZA Discussion Papers 11866, Institute of Labor Economics (IZA).
    16. A. Smith, Jeffrey & E. Todd, Petra, 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    17. Guido W. Imbens, 2022. "Causality in Econometrics: Choice vs Chance," Econometrica, Econometric Society, vol. 90(6), pages 2541-2566, November.
    18. Sergio Firpo, 2007. "Efficient Semiparametric Estimation of Quantile Treatment Effects," Econometrica, Econometric Society, vol. 75(1), pages 259-276, January.
    19. Jochen Kluve & Boris Augurzky, 2007. "Assessing the performance of matching algorithms when selection into treatment is strong," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 22(3), pages 533-557.
    20. Mark Kattenberg & Bas Scheer & Jurre Thiel, 2023. "Causal forests with fixed effects for treatment effect heterogeneity in difference-in-differences," CPB Discussion Paper 452, CPB Netherlands Bureau for Economic Policy Analysis.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:evarev:v:45:y:2021:i:5:p:195-227. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.