Author
Listed:
- Su Golder
- Yoon K Loke
- Martin Bland
Abstract
Su Golder and colleagues carry out an overview of meta-analyses to assess whether estimates of the risk of harm outcomes differ between randomized trials and observational studies. They find that, on average, there is no difference in the estimates of risk between overviews of observational studies and overviews of randomized trials.

Background: There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies.

Methods and Findings: Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93–1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%) the results agreed (both studies showing a significant increase or significant decrease, or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance.

Conclusions: Empirical evidence from this overview indicates that there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types. Please see later in the article for the Editors' Summary.

Editors' Summary

Background: Whenever patients consult a doctor, they expect the treatments they receive to be effective and to have minimal adverse effects (side effects). To ensure that this is the case, all treatments now undergo exhaustive clinical research—carefully designed investigations that test new treatments and therapies in people. Clinical investigations fall into two main groups—randomized controlled trials (RCTs) and observational, or non-randomized, studies. In RCTs, groups of patients with a specific disease or condition are randomly assigned to receive the new treatment or a control treatment, and the outcomes (for example, improvements in health and the occurrence of specific adverse effects) of the two groups of patients are compared. Because the patients are randomly chosen, differences in outcomes between the two groups are likely to be treatment-related. In observational studies, patients who are receiving a specific treatment are enrolled and outcomes in this group are compared to those in a similar group of untreated patients. Because the patient groups are not randomly chosen, differences in outcomes between cases and controls may be the result of a hidden shared characteristic among the cases rather than treatment-related (so-called confounding variables).

Why Was This Study Done?: Although data from individual trials and studies are valuable, much more information about a potential new treatment can be obtained by systematically reviewing all the evidence and then doing a meta-analysis (so-called evidence-based medicine). A systematic review uses predefined criteria to identify all the research on a treatment; meta-analysis is a statistical method for combining the results of several studies to yield “pooled estimates” of the treatment effect (the efficacy of a treatment) and the risk of harm. Treatment effect estimates can differ between RCTs and observational studies, but what about adverse effect estimates? Can different study designs provide a consistent picture of the risk of harm, or are the results from different study designs so disparate that it would be meaningless to combine them in a single review? In this methodological overview, which comprises a systematic review and meta-analyses, the researchers assess the level of agreement in the estimates of harm derived from meta-analysis of RCTs with estimates derived from meta-analysis of observational studies.

What Did the Researchers Do and Find?: The researchers searched literature databases and reference lists, consulted experts, and hand-searched various other sources for studies in which the pooled estimate of an adverse effect from RCTs could be directly compared to the pooled estimate for the same adverse effect from observational studies. They identified 19 studies that together covered 58 separate adverse effects. In almost all instances, the estimates of harm obtained from meta-analyses of RCTs and observational studies had overlapping 95% confidence intervals; that is, in statistical terms, the estimates of harm were similar. Moreover, in nearly two-thirds of cases, there was agreement between RCTs and observational studies about whether a treatment caused a significant increase in adverse effects, a significant decrease, or no significant change (a significant change is one unlikely to have occurred by chance). Finally, the researchers used meta-analysis to calculate that the pooled ratio of the odds ratios (a statistical measurement of risk) of RCTs compared to observational studies was 1.03. This figure suggests that there was no consistent difference between risk estimates obtained from meta-analysis of RCT data and those obtained from meta-analysis of observational study data.

What Do These Findings Mean?: The findings of this methodological overview suggest that there is no difference on average in the risk estimate of an intervention's adverse effects obtained from meta-analyses of RCTs and from meta-analyses of observational studies. Although limited by some aspects of its design, this overview has several important implications for the conduct of systematic reviews of adverse effects. In particular, it suggests that, rather than limiting systematic reviews to certain study designs, it might be better to evaluate a broad range of studies. In this way, it might be possible to build a more complete, more generalizable picture of potential harms associated with an intervention, without any loss of validity, than by evaluating a single type of study. Such a picture, in combination with estimates of treatment effects also obtained from systematic reviews and meta-analyses, would help clinicians decide the best treatment for their patients.

Additional Information: Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001026.
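The comparison described in the abstract rests on the ratio of odds ratios (ROOR): the pooled odds ratio from RCTs divided by the pooled odds ratio from observational studies for the same adverse effect. The sketch below is illustrative only, with entirely hypothetical numbers and function names: it shows one way a ROOR and an approximate 95% confidence interval can be derived from the two pooled estimates, and how several ROORs might be combined by inverse-variance weighting. The paper's own pooling model may differ (for example, a random-effects rather than a fixed-effect model).

```python
import math

def ratio_of_odds_ratios(or_rct, ci_rct, or_obs, ci_obs, z=1.96):
    """Ratio of odds ratios (RCT vs. observational) with an approximate 95% CI.

    or_rct, or_obs : pooled odds ratios from each study design
    ci_rct, ci_obs : (lower, upper) 95% confidence limits for each pooled OR
    The standard error of each log-OR is recovered from the CI width;
    the two pooled estimates are treated as independent (an assumption).
    """
    se_rct = (math.log(ci_rct[1]) - math.log(ci_rct[0])) / (2 * z)
    se_obs = (math.log(ci_obs[1]) - math.log(ci_obs[0])) / (2 * z)
    log_roor = math.log(or_rct) - math.log(or_obs)
    se_roor = math.sqrt(se_rct**2 + se_obs**2)
    lo, hi = log_roor - z * se_roor, log_roor + z * se_roor
    return math.exp(log_roor), (math.exp(lo), math.exp(hi)), se_roor

def pool_fixed_effect(log_effects, ses, z=1.96):
    """Inverse-variance fixed-effect pooling of log ratios of odds ratios."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            (math.exp(pooled - z * se_pooled), math.exp(pooled + z * se_pooled)))

# Hypothetical example: one adverse effect, pooled ORs from each design.
roor, ci, se = ratio_of_odds_ratios(or_rct=1.20, ci_rct=(0.90, 1.60),
                                    or_obs=1.10, ci_obs=(0.95, 1.27))
print(f"ROOR = {roor:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")

# Hypothetical pooling of ROORs from three adverse-effect comparisons.
log_roors = [math.log(r) for r in (1.09, 0.97, 1.05)]
ses = [0.16, 0.12, 0.20]
print(pool_fixed_effect(log_roors, ses))
```

A pooled ROOR close to 1, as reported in the abstract (1.03, 95% CI 0.93–1.15), indicates no systematic difference between the two study designs' risk estimates.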
Suggested Citation
Su Golder & Yoon K Loke & Martin Bland, 2011.
"Meta-analyses of Adverse Effects Data Derived from Randomised Controlled Trials as Compared to Observational Studies: Methodological Overview,"
PLOS Medicine, Public Library of Science, vol. 8(5), pages 1-13, May.
Handle:
RePEc:plo:pmed00:1001026
DOI: 10.1371/journal.pmed.1001026
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Valérie Seegers & Ludovic Trinquart & Isabelle Boutron & Philippe Ravaud, 2013.
"Comparison of Treatment Effect Estimates for Pharmacological Randomized Controlled Trials Enrolling Older Adults Only and Those including Adults: A Meta-Epidemiological Study,"
PLOS ONE, Public Library of Science, vol. 8(5), pages 1-5, May.
- Mingkwan Na Takuathung & Wannachai Sakuludomkan & Rapheephorn Khatsri & Nahathai Dukaew & Napatsorn Kraivisitkul & Balqis Ahmadmusa & Chollada Mahakkanukrauh & Kachathip Wangthaweesap & Jirakit Onin &, 2022.
"Adverse Effects of Angiotensin-Converting Enzyme Inhibitors in Humans: A Systematic Review and Meta-Analysis of 378 Randomized Controlled Trials,"
IJERPH, MDPI, vol. 19(14), pages 1-13, July.
- Amy Lanza & Philippe Ravaud & Carolina Riveros & Agnes Dechartres, 2016.
"Comparison of Estimates between Cohort and Case–Control Studies in Meta-Analyses of Therapeutic Interventions: A Meta-Epidemiological Study,"
PLOS ONE, Public Library of Science, vol. 11(5), pages 1-12, May.
- Rockers, Peter C. & Røttingen, John-Arne & Shemilt, Ian & Tugwell, Peter & Bärnighausen, Till, 2015.
"Inclusion of quasi-experimental studies in systematic reviews of health systems research,"
Health Policy, Elsevier, vol. 119(4), pages 511-521.
- Su Golder & Yoon K Loke & Martin Bland, 2013.
"Comparison of Pooled Risk Estimates for Adverse Effects from Different Observational Study Designs: Methodological Overview,"
PLOS ONE, Public Library of Science, vol. 8(8), pages 1-9, August.
- Guillermo Prada-Ramallal & Bahi Takkouche & Adolfo Figueiras, 2017.
"Summarising the Evidence for Drug Safety: A Methodological Discussion of Different Meta-Analysis Approaches,"
Drug Safety, Springer, vol. 40(7), pages 547-558, July.
- Mathur, Maya B & VanderWeele, Tyler, 2021.
"Methods to address confounding and other biases in meta-analyses: Review and recommendations,"
OSF Preprints
v7dtq, Center for Open Science.
- Sauman Singh-Phulgenda & Prabin Dahal & Roland Ngu & Brittany J Maguire & Alice Hawryszkiewycz & Sumayyah Rashan & Matthew Brack & Christine M Halleux & Fabiana Alves & Kasia Stepniewska & Piero L Oll, 2021.
"Serious adverse events following treatment of visceral leishmaniasis: A systematic review and meta-analysis,"
PLOS Neglected Tropical Diseases, Public Library of Science, vol. 15(3), pages 1-21, March.
- Tina Ljungberg & Emma Bondza & Connie Lethin, 2020.
"Evidence of the Importance of Dietary Habits Regarding Depressive Symptoms and Depression,"
IJERPH, MDPI, vol. 17(5), pages 1-18, March.
- Ying Wu & Hong-Bing Liu & Xue-Fei Shi & Yong Song, 2014.
"Conventional Hypoglycaemic Agents and the Risk of Lung Cancer in Patients with Diabetes: A Meta-Analysis,"
PLOS ONE, Public Library of Science, vol. 9(6), pages 1-10, June.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pmed00:1001026. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosmedicine (email available below). General contact details of provider: https://journals.plos.org/plosmedicine/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.