For lecture on 3 June 2021
Using the results of up-to-date systematic reviews of research
- Format Texts
- Language/s English
- Target Audience Further education, Self-directed learning
- EBM Stage 4 - Decision making
- Duration <5 mins
- Difficulty Intermediate
Key Concepts addressed
- 3-2a Do the outcomes measured matter to you?
- 2-2a Reviews of fair comparisons should be systematic
- 1-2f Consider all of the relevant fair comparisons
All research has been done in the past, but its results need to be used today and tomorrow to inform decisions in health care. Trustworthy evidence from research is necessary, but not sufficient, to improve the quality of health care.
Over recent years it has become increasingly clear that systematic reviews of research are needed to provide fair tests of treatments. This trend has been reflected in a rapid increase in the number of reports of systematic reviews being published on paper and electronically (Bastian et al. 2010). Sometimes reviews will show that no reliable evidence exists, and this is one of their most important functions. Similarly, they may sometimes confirm that reliable evidence is limited to a single study, and here, too, it is important to make this situation explicit.
Systematic reviews of research are being used widely (i) to inform clinical practice, often through clinical practice guidelines; (ii) to assess which medical treatments are cost-effective; (iii) to shape the agenda for additional research; and (iv) to meet the needs of patients for reliable information about the effects of treatments.
Although these developments show that the importance of systematic reviews has been accepted by those who are trying to improve access to the evidence needed to inform choices in health care (Smith and Chalmers 2001), there is still a long way to go. Many thousands of systematic reviews will be needed to cover existing research evidence, and these will then need to be kept up to date as new evidence emerges. Indeed, one journal editor has suggested that there should be a moratorium on all new research until we’ve caught up with what existing evidence can tell us (Bausell 1993).
Those responsible for disbursing funds for research must ensure that resources are provided to cope with this backlog, and that new studies are only supported if systematic reviews of existing evidence have shown that additional studies are necessary, and that they have been designed to take account of the lessons from previous research. If journal editors are to serve their readers better, they must follow the lead of The Lancet and ensure that reports of new studies make clear what contribution new evidence has made to an up-to-date systematic review of all the relevant evidence (Young and Horton 2005).
The increased availability of up-to-date, systematic reviews is improving the quality of information about the effects of treatments, but the conclusions of systematic reviews should not be accepted uncritically. Different reviews purportedly addressing the same question about treatments sometimes arrive at different conclusions. Their authors are human and we need to be aware that they may select, analyse and present evidence in ways that support their prejudices and interests. The continuing evolution of reliable methods for preparing and maintaining systematic reviews will help to address this problem, but they cannot be expected to abolish it.
Systematic reviews are necessary but insufficient for informing decisions about treatments for individual patients and policies. Other important factors – needs, resources and priorities – need to be taken into account (Chalmers 1993; Rothwell 2007). And this is the point at which the art as well as the science of health care needs to be deployed for the benefit of patients and the public (Evans et al. 2011; en.testingtreatments.org).
The text in these essays may be copied and used for non-commercial purposes on condition that explicit acknowledgement is made to The James Lind Library (www.jameslindlibrary.org).
Bastian H, Glasziou P, Chalmers I (2010). Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLoS Medicine 7:e1000326.
Bausell BB (1993). After the meta-analytic revolution. Evaluation and the Health Professions 16:3-12.
Chalmers I (1993). The Cochrane Collaboration: preparing, maintaining and disseminating systematic reviews of the effects of health care. In: Warren KS, Mosteller F, eds. Doing more good than harm: the evaluation of health care interventions. Annals of the New York Academy of Sciences 703:156-163.
Evans I, Thornton H, Chalmers I, Glasziou P (2011). Testing Treatments. London: Pinter and Martin.
Rothwell P (2007). Treating individuals: from randomized controlled trials to personalized medicine. London: Lancet.
Smith R, Chalmers I (2001). Britain’s gift: a ‘Medline’ of synthesized evidence. BMJ 323:1437-1438.
Young C, Horton R (2005). Putting clinical trials into context. Lancet 366:107-108.