
Assessing the overall validity of randomised controlled trials

Author

Listed:
  • Krauss, Alexander

Abstract

In the biomedical, behavioural and social sciences, the leading method for estimating causal effects is the randomised controlled trial (RCT), generally viewed as both the source and the justification of the most valid evidence. In studying the foundations and theory behind RCTs, the existing literature analyses important individual issues and biases that influence causal outcomes in trials (such as randomisation, statistical probabilities and placebos) largely in isolation. The common account of biased causal inference is described in general terms of probabilistic imbalances between trial groups. This paper expands that account by distinguishing the range of biases arising not only between trial groups but also within one of the groups or across the entire sample during trial design, implementation and analysis, illustrated with concrete examples from highly influential RCT studies. In going beyond the existing RCT literature, the paper provides a broader, practice-based account of causal bias that specifies the between-group, within-group and across-group biases affecting the estimated causal results of trials, impacting both the effect size and the statistical significance. Within this expanded framework, we can better identify the range of biases faced in practice and address the central question of the overall validity of the RCT method and its causal claims. A study can face several smaller biases (related simultaneously to a smaller sample, a smaller estimated effect, greater unblinding, etc.) that generally add up to a greater aggregate bias. Though difficult to measure precisely, it is important for studies to assess and report how much of the estimated causal effect the different sources of bias, combined, could explain. The RCT method is nonetheless often the best we have to inform policy decisions, and the evidence is strengthened when combined with multiple studies and other methods. Yet there is room to continually improve trials, to identify ways to reduce the biases they face and to increase their overall validity. Implications are discussed.
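
To make the point about aggregation concrete, the following is a minimal simulation sketch (in Python, not taken from the paper) of how a few individually small, assumed biases can jointly inflate a trial's estimated effect and its apparent statistical significance. The attrition rate, the 0.10 standard-deviation unblinding shift and the reduced analysed sample are illustrative assumptions only, not quantities from the study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_per_arm = 200
    true_effect = 0.20          # assumed true treatment effect, in standard deviations

    # An ideal trial: the two arms differ only by the true effect.
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)

    def summarise(label, c, t):
        diff = t.mean() - c.mean()
        p = stats.ttest_ind(t, c, equal_var=False).pvalue
        print(f"{label:30s} estimated effect = {diff:5.2f}, p = {p:.4f}, n = {len(c)}+{len(t)}")

    summarise("Unbiased trial", control, treated)

    # Bias 1 (within-group): differential attrition -- assume the treated
    # participants with the worst 15% of outcomes drop out before analysis.
    treated_kept = treated[treated > np.quantile(treated, 0.15)]

    # Bias 2 (between-group): partial unblinding -- assume assessors score the
    # treated arm 0.10 standard deviations more favourably on average.
    treated_kept = treated_kept + 0.10

    # Bias 3 (across the sample): a smaller analysed sample than planned,
    # here a random 120 participants per arm.
    control_small = rng.choice(control, size=120, replace=False)
    treated_small = rng.choice(treated_kept, size=120, replace=False)

    summarise("Trial with combined biases", control_small, treated_small)

Under these assumptions the second summary line typically shows an estimated effect well above the true 0.20 standard deviations, with a much smaller p-value: the kind of aggregate distortion that the paper argues studies should assess and report.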

Suggested Citation

  • Krauss, Alexander, 2021. "Assessing the overall validity of randomised controlled trials," LSE Research Online Documents on Economics 112576, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:112576

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/112576/
    File Function: Open access version.
    Download Restriction: no

    References listed on IDEAS

    1. Abhijit Vinayak Banerjee & Alice H. Amsden & Robert H. Bates & Jagdish Bhagwati & Angus Deaton & Nicholas Stern, 2007. "Making Aid Work," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262026155, April.
    2. Cartwright, Nancy, 2007. "Hunting Causes and Using Them," Cambridge Books, Cambridge University Press, number 9780521860819, January.
    3. Angus Deaton, 2009. "Instruments of development: Randomization in the tropics, and the search for the elusive keys to economic development," Working Papers 1128, Princeton University, Woodrow Wilson School of Public and International Affairs, Center for Health and Wellbeing.
    4. Martin Ravallion, 2009. "Evaluation in the Practice of Development," The World Bank Research Observer, World Bank, vol. 24(1), pages 29-53, March.
    5. James J. Heckman, 1991. "Randomization and Social Policy Evaluation Revisited," NBER Technical Working Papers 0107, National Bureau of Economic Research, Inc.
    6. Judith Favereau & Michiru Nagatsu, 2020. "Holding back from theory: limits and methodological alternatives of randomized field experiments in development economics," Post-Print halshs-02934195, HAL.
    7. Abhijit V. Banerjee & Esther Duflo, 2009. "The Experimental Approach to Development Economics," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 151-178, May.
    8. Glenn W. Harrison, 2011. "Randomisation and Its Discontents," Journal of African Economies, Centre for the Study of African Economies, vol. 20(4), pages 626-652, August.
    9. Judith Favereau & Michiru Nagatsu, 2020. "Holding back from theory: limits and methodological alternatives of randomized field experiments in development economics," Journal of Economic Methodology, Taylor & Francis Journals, vol. 27(3), pages 191-211, July.
    10. Cartwright, Nancy, 2007. "Hunting Causes and Using Them," Cambridge Books, Cambridge University Press, number 9780521677981, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Florent Bédécarrats & Isabelle Guérin & François Roubaud, 2015. "The gold standard for randomized evaluations: from discussion of method to political economy," Working Papers DT/2015/01, DIAL (Développement, Institutions et Mondialisation).
    2. William Easterly, 2009. "Can the West Save Africa?," Journal of Economic Literature, American Economic Association, vol. 47(2), pages 373-447, June.
    3. Florent Bédécarrats & Isabelle Guérin & François Roubaud, 2015. "The gold standard for randomised evaluations: from discussion of method to political economics," Working Papers CEB 15-009, ULB -- Universite Libre de Bruxelles.
    4. Florent Bedecarrats & Isabelle Guérin & François Roubaud, 2017. "L'étalon-or des évaluations randomisées : du discours de la méthode à l'économie politique," Working Papers ird-01445209, HAL.
    5. Basu, Kaushik, 2013. "The method of randomization and the role of reasoned intuition," Policy Research Working Paper Series 6722, The World Bank.
    6. Florent Bédécarrats & Isabelle Guérin & François Roubaud, 2019. "All that Glitters is not Gold. The Political Economy of Randomized Evaluations in Development," Development and Change, International Institute of Social Studies, vol. 50(3), pages 735-762, May.
    7. George F. DeMartino, 2021. "The specter of irreparable ignorance: counterfactuals and causality in economics," Review of Evolutionary Political Economy, Springer, vol. 2(2), pages 253-276, July.
    8. Guido W. Imbens, 2010. "Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009)," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 399-423, June.
    9. Angus Deaton, 2009. "Instruments of development: Randomization in the tropics, and the search for the elusive keys to economic development," Working Papers 1128, Princeton University, Woodrow Wilson School of Public and International Affairs, Center for Health and Wellbeing.
    10. Reidpath, Daniel D. & Allotey, Pascale & Barker, S. Fiona & Clasen, Thomas & French, Matthew & Leder, Karin & Ramirez-Lovering, Diego & Rhule, Emma L.M. & Siri, José, 2022. "Implementing “from here to there”: A case study of conceptual and practical challenges in implementation science," Social Science & Medicine, Elsevier, vol. 301(C).
    11. Abhijit V. Banerjee & Esther Duflo, 2010. "Giving Credit Where It Is Due," Journal of Economic Perspectives, American Economic Association, vol. 24(3), pages 61-80, Summer.
    12. Olofsgård, Anders, 2012. "The Politics of Aid Effectiveness: Why Better Tools can Make for Worse Outcomes," SITE Working Paper Series 16, Stockholm School of Economics, Stockholm Institute of Transition Economics.
    13. Martin Prowse & Laura Camfield, 2013. "Improving the quality of development assistance," Progress in Development Studies, , vol. 13(1), pages 51-61, January.
    14. Donovan, Kevin P., 2018. "The rise of the randomistas: on the experimental turn in international aid," SocArXiv xygzb, Center for Open Science.
    15. Florent BEDECARRATS & Isabelle GUERIN & François ROUBAUD, 2017. "L'étalon-or des évaluations randomisées : économie politique des expérimentations aléatoires dans le domaine du développement," Working Paper 753120cd-506f-4c5f-80ed-7, Agence française de développement.
    16. Andrew Gelman & Guido Imbens, 2013. "Why ask Why? Forward Causal Inference and Reverse Causal Questions," NBER Working Papers 19614, National Bureau of Economic Research, Inc.
    17. Gisselquist, Rachel & Niño-Zarazúa, Miguel, 2013. "What can experiments tell us about how to improve governance?," MPRA Paper 49300, University Library of Munich, Germany.
    18. Mouchart, Michel & Russo, Federica & Wunsch, Guillaume, 2011. "Inferring causal relations by modelling structures : Article de recherche," LIDAM Discussion Papers ISBA 2011007, Université catholique de Louvain, Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA).
    19. Arnab Acharya & Giulia Greco & Edoardo Masset, 2010. "The economics approach to evaluation of health interventions in developing countries through randomised field trial," Journal of Development Effectiveness, Taylor & Francis Journals, vol. 2(4), pages 401-420.

    More about this item

    Keywords

    philosophy of science; philosophy of medicine; randomised controlled trials; RCTs; bias; validity; internal validity;

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ehl:lserod:112576. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: LSERO Manager (email available below). General contact details of provider: https://edirc.repec.org/data/lsepsuk.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.