In practice, healthcare professionals commonly develop clinical questions, build a search strategy from them, search for primary studies, and then find that the five most relevant primary studies yield contradictory conclusions. Similarly, a quick PubMed search on a "popular" topic is very likely to retrieve five highly relevant systematic reviews that also present different conclusions. Although it is well accepted that the main purpose of a systematic review is to better inform decision making in clinical or healthcare practice, encountering conflicting conclusions may deter healthcare professionals who are inclined to use research evidence from taking the next step of assessing systematic reviews for their quality and for risk of bias in their conduct.
Systematic reviews have become a popular study design, and many journals are eager to publish them. However, many journals may not have a pool of methodologists among their peer reviewers or on their editorial boards who can provide expertise in secondary research concepts. It is therefore not uncommon to find published "systematic reviews" that are in fact merely literature reviews incorporating, for example, a "systematized search"; rapid reviews; or systematic reviews with important (and often defining) steps, such as critical appraisal, omitted.
So how do we address this? One option could be to conduct an umbrella review to sort the wheat from the chaff, but is an umbrella review (also called a review of reviews or an overview of reviews) the most practical solution? The answer is probably "yes" for a period of time, that is, until the enterprising evidence-based practitioner again finds that the five most relevant umbrella reviews yield contradictory conclusions. What next? A review of umbrella reviews? Where does it end?
A wiser approach would be to concentrate on the quality of the conduct of legitimate systematic reviews. Just as the Consolidated Standards of Reporting Trials (CONSORT) guidelines have helped improve the reporting quality of randomized controlled trials (RCTs),1 the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines have helped improve the quality of systematic reviews. More recently, the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines have been introduced, highlighting the importance of protocols.2,3
What is the most important part in the conduct of a systematic review? The answer, in short, is that all parts are important. Consider, however, primary research using the most robust study design for questions of effectiveness, the RCT: a vital element of this design is randomization. Accordingly, "randomization" is the first item on the standardized critical appraisal tools and checklists used in systematic reviews to assess the quality of RCTs, and all leading organizations in evidence-based health care, evidence-based medicine, evidence-based practice and systematic reviewing place it first in their critical appraisal tools for RCTs.4-6 Why randomization? Because assessing it first saves time in the critical appraisal process: properly conducted randomization has the potential to significantly lower the risk of bias, in other words, to raise the methodological quality of the whole study.
In the conduct of a systematic review, establishing that an a priori published protocol has been followed prevents bias and provides evidence to the reader that some thought went into planning the systematic review before its conduct. Protocols of systematic reviews guard against selective reporting and arbitrary decision making; they guide review methods and allow planning and the minimization of meta-biases.7 The majority of so-called independent systematic reviews (those published outside the JBI Database of Systematic Reviews and Implementation Reports, the Cochrane Library and the Campbell Collaboration) are published in the absence of a previously published, a priori, peer-reviewed protocol.3 The Joanna Briggs Institute (JBI) is a global leader in evidence-based health care and highlights the importance of referencing an a priori published systematic review protocol in the subsequent systematic review report. Therefore, it may be essential to include reference to an a priori published protocol as the first item on the JBI Critical Appraisal Checklist for Systematic Reviews and Research Synthesis.8 This item can save reviewers time in appraising systematic reviews, just as the randomization item does in appraising RCTs.
Publishing systematic review protocols is not very "lucrative" for most journals, given the few citations protocols generate and the fact that a protocol on its own cannot change practice or research. Yet just as most journals understand the importance of publishing studies that yield negative as well as positive results, the same principle should extend to systematic review protocols. A peer-reviewed protocol with a robust and rigorous methodological design should always be published before the systematic review report itself; this is the first important step toward ensuring that high-quality research is reported.
References