
How often does random assignment fail? Estimates and recommendations

Author

Listed:
  • Goldberg, Matthew H.

Abstract

A fundamental goal of the scientific process is to make causal inferences. Random assignment to experimental conditions is widely regarded as the gold-standard technique for establishing causality. Despite this, it is unclear how often random assignment fails to eliminate non-trivial differences between experimental conditions, and it is unknown to what extent larger sample sizes mitigate this issue. Chance differences between experimental conditions may be especially important when investigating topics that are highly sample-dependent, such as climate change and other politicized issues. Three studies examine simulated data (Study 1), three real datasets from original environmental psychology experiments (Study 2), and a nationally representative dataset (Study 3), and find that differences between conditions that remain after random assignment are surprisingly common at sample sizes typical of social-psychological experiments. Methods and practices for identifying and mitigating such differences are discussed, with implications that are especially relevant to experiments in social and environmental psychology.
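To make the question concrete, the following is a minimal simulation sketch in the spirit of Study 1; it is not the paper's code. For a range of sample sizes, it repeatedly splits a simulated baseline covariate into two random conditions and records how often the chance imbalance exceeds a "small" standardized effect. The standard-normal covariate, the 10,000 replications, and the |d| > 0.2 cutoff are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    def nontrivial_imbalance_rate(n, n_sims=10_000, threshold=0.2):
        """Fraction of random assignments in which a baseline covariate
        shows a standardized mean difference (Cohen's d) larger than
        `threshold` between two equal-size conditions."""
        count = 0
        for _ in range(n_sims):
            covariate = rng.standard_normal(n)          # simulated baseline variable
            in_treatment = rng.permutation(n) < n // 2  # random split into two conditions
            a, b = covariate[in_treatment], covariate[~in_treatment]
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            d = (a.mean() - b.mean()) / pooled_sd
            count += abs(d) > threshold
        return count / n_sims

    for n in (50, 100, 200, 500):
        print(f"n = {n:>3}: share of randomizations with |d| > 0.2 = {nontrivial_imbalance_rate(n):.3f}")

Under these assumptions, the rate of non-trivial chance imbalance on a single covariate shrinks as n grows, whereas the false-positive rate of a fixed significance test stays near its alpha level regardless of n; with many baseline covariates, the chance that at least one shows a noticeable imbalance is correspondingly higher.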

Suggested Citation

  • Goldberg, Matthew H., 2019. "How often does random assignment fail? Estimates and recommendations," OSF Preprints s2j4r_v1, Center for Open Science.
  • Handle: RePEc:osf:osfxxx:s2j4r_v1
    DOI: 10.31219/osf.io/s2j4r_v1

    Download full text from publisher

    File URL: https://osf.io/download/5ca7670f2aa0d90016d1972c/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/s2j4r_v1?utm_source=ideas
LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    References listed on IDEAS

    1. James N. Druckman & Thomas J. Leeper, 2012. "Learning More from Political Communication Experiments: Pretreatment and Its Effects," American Journal of Political Science, John Wiley & Sons, vol. 56(4), pages 875-896, October.
    2. Donald B. Rubin, 2005. "Causal Inference Using Potential Outcomes: Design, Modeling, Decisions," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 322-331, March.
    3. Jacob M. Montgomery & Brendan Nyhan & Michelle Torres, 2018. "How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It," American Journal of Political Science, John Wiley & Sons, vol. 62(3), pages 760-775, July.
    4. Charness, Gary & Gneezy, Uri & Kuhn, Michael A., 2012. "Experimental methods: Between-subject and within-subject design," Journal of Economic Behavior & Organization, Elsevier, vol. 81(1), pages 1-8.
    5. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    6. Rubin, Donald B., 2008. "Comment: The Design and Analysis of Gold Standard Randomized Experiments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1350-1353.
    7. Sander van der Linden & Anthony Leiserowitz & Edward Maibach, 2018. "Scientific agreement can neutralize politicization of facts," Nature Human Behaviour, Nature, vol. 2(1), pages 2-3, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Goldberg, Matthew H., 2019. "How often does random assignment fail? Estimates and recommendations," OSF Preprints s2j4r, Center for Open Science.
    2. Kari Lock Morgan & Donald B. Rubin, 2015. "Rerandomization to Balance Tiers of Covariates," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(512), pages 1412-1421, December.
    3. Leandro de Magalhaes & Salomo Hirvonen, 2019. "The Incumbent-Challenger Advantage and the Winner-Runner-up Advantage," Bristol Economics Discussion Papers 19/710, School of Economics, University of Bristol, UK.
    4. Yasemin Kisbu-Sakarya & Thomas D. Cook & Yang Tang & M. H. Clark, 2018. "Comparative Regression Discontinuity: A Stress Test With Small Samples," Evaluation Review, vol. 42(1), pages 111-143, February.
    5. Jordan H. Rickles, 2011. "Using Interviews to Understand the Assignment Mechanism in a Nonexperimental Study," Evaluation Review, vol. 35(5), pages 490-522, October.
    6. Jordan H. Rickles & Michael Seltzer, 2014. "A Two-Stage Propensity Score Matching Strategy for Treatment Effect Estimation in a Multisite Observational Study," Journal of Educational and Behavioral Statistics, vol. 39(6), pages 612-636, December.
    7. Jinglong Zhao, 2024. "Experimental Design For Causal Inference Through An Optimization Lens," Papers 2408.09607, arXiv.org, revised Aug 2024.
    8. Cousineau, Martin & Verter, Vedat & Murphy, Susan A. & Pineau, Joelle, 2023. "Estimating causal effects with optimization-based methods: A review and empirical comparison," European Journal of Operational Research, Elsevier, vol. 304(2), pages 367-380.
    9. Martin Cousineau & Vedat Verter & Susan A. Murphy & Joelle Pineau, 2022. "Estimating causal effects with optimization-based methods: A review and empirical comparison," Papers 2203.00097, arXiv.org.
    10. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    11. Ana Kolar & Peter M. Steiner, 2021. "The Role of Sample Size to Attain Statistically Comparable Groups – A Required Data Preprocessing Step to Estimate Causal Effects With Observational Data," Evaluation Review, vol. 45(5), pages 195-227, October.
    12. Kazi Iqbal & Asad Islam & John List & Vy Nguyen, 2021. "Myopic Loss Aversion and Investment Decisions: From the Laboratory to the Field," Framed Field Experiments 000730, The Field Experiments Website.
    13. Salvatore Bimonte & Antonella D’Agostino, 2021. "Tourism development and residents’ well-being: Comparing two seaside destinations in Italy," Tourism Economics, vol. 27(7), pages 1508-1525, November.
    14. Jonas Schmidt & Tammo H. A. Bijmolt, 2020. "Accurately measuring willingness to pay for consumer goods: a meta-analysis of the hypothetical bias," Journal of the Academy of Marketing Science, Springer, vol. 48(3), pages 499-518, May.
    15. Christoph Dworschak, 2024. "Bias mitigation in empirical peace and conflict studies: A short primer on posttreatment variables," Journal of Peace Research, Peace Research Institute Oslo, vol. 61(3), pages 462-476, May.
    16. Jie, Yun, 2020. "Responding to requests for help: Effects of payoff schemes with small monetary units," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 88(C).
    17. Dylan Bugden & Jesse Brazil, 2024. "The role of geostrategic interests in motivating public support for foreign climate aid," Journal of Environmental Studies and Sciences, Springer;Association of Environmental Studies and Sciences, vol. 14(4), pages 803-813, December.
    18. Ederer, Florian & Stremitzer, Alexander, 2017. "Promises and expectations," Games and Economic Behavior, Elsevier, vol. 106(C), pages 161-178.
    19. Zhenzhen Xu & John D. Kalbfleisch, 2013. "Repeated Randomization and Matching in Multi-Arm Trials," Biometrics, The International Biometric Society, vol. 69(4), pages 949-959, December.
    20. Margot Racat & Antonin Ricard & René Mauer, 2024. "Effectuation and causation models: an integrative theoretical framework," Small Business Economics, Springer, vol. 62(3), pages 879-893, March.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:osfxxx:s2j4r_v1. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: OSF (email available below). General contact details of provider: https://osf.io/preprints/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.