Authors

  1. Stern, Cindy


Over the past few years, the Joanna Briggs Institute (JBI) has worked tirelessly to expand its methodological guidance for the conduct of systematic reviews. As a result, the JBI now offers guidance on conducting reviews of diagnostic test accuracy, prevalence and incidence, etiology and risk, mixed methods, umbrella reviews and scoping reviews, and has further refined its existing recommendations for reviews of effectiveness, meaningfulness, text and opinion, and economic evaluations.


Correspondingly, the JBI Database of Systematic Reviews and Implementation Reports has seen, and will continue to see, a rapid rise in the diversity of reviews published following these different methodological approaches. Regardless of the type of review undertaken, however, once a topic has been identified, a specific, answerable question needs to be developed. Formulating the review question is the first step in the systematic review process; the question sets the review in motion, provides the foundation for the development of the search strategy and forms the basis of the inclusion and exclusion criteria.


A variety of mnemonics exist to help reviewers structure their review question and inclusion criteria. For quantitative reviews, PICO (Population, Intervention, Comparator and Outcome) is the most frequently used mnemonic, while qualitative reviews typically use PICo (Population, phenomena of Interest and Context).


The emergence of these new methodologies has required JBI to shift from its traditional PICO/PICo frameworks to a range of mnemonics specific to the type of review undertaken. For example, JBI recommends that prevalence and incidence reviews follow the CoCoPop mnemonic (Condition, Context and Population),1 diagnostic test accuracy reviews use the PIRD framework (Population, Index Test, Reference Test and Diagnosis of Interest),2 etiology and risk reviews follow the PEO mnemonic (Population, Exposure[s] and Outcome[s])3 and scoping reviews use the PCC framework (Population, Concept and Context).4


These developments have implications for review authors. Each of these review types has different components to consider when developing the review question(s), search strategy and inclusion criteria. Outlining a clear review question (or series of questions) that is specific and describes each element of the planned review (using the appropriate mnemonic) is essential to a good quality systematic review. Irrespective of the type of review, a good question will show a clear relationship to the inclusion criteria. The inclusion and exclusion criteria determine which papers will be selected, and their clarity ensures the replicability of the review. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement, a 27-item checklist developed by an international group of experts, outlines the minimum set of items to report in systematic reviews and meta-analyses.5 One of its items specifies the need to provide an explicit statement of the questions being addressed with reference to participants, interventions, comparisons, outcomes and study design.5


These methodological developments also have implications for peer reviewers, whose expertise needs to develop in step with them: we are no longer looking solely for adequately defined and described PICO/PICo elements, but must now be aware of the differing approaches across each methodology, whether the elements relate to a condition, a concept or a diagnosis, for example. As an editor of the JBI Database of Systematic Reviews and Implementation Reports, when conducting editorial review of submitted protocol manuscripts, the principal characteristic I assess is whether the review question(s) and the inclusion criteria align. If there is incongruity between these elements, a review can easily go "off course", setting the scene for a disjointed review in which the review question(s) fundamentally cannot be answered. Identifying the appropriate methodology for a review and using the corresponding mnemonic to guide development of the review question are therefore essential. It is the question design that has the most significant impact on the conduct of a review, as the subsequent inclusion criteria are drawn from the question and provide the operational framework for the review.


Senior Research Fellow, Evidence Transfer, The Joanna Briggs Institute


References


1. Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. International Journal of Evidence-Based Healthcare 2015; 13(3): 147-153.

2. Campbell J, Klugar M, Ding S, Carmody D, Hakonsen S, Jadotte YT, et al. Diagnostic test accuracy: methods for systematic review and meta-analysis. International Journal of Evidence-Based Healthcare 2015; 13(3): 154-162.

3. Moola S, Munn Z, Sears K, Sfetcu R, Currie M, Lisy K, et al. Conducting systematic reviews of association (etiology): The Joanna Briggs Institute's approach. International Journal of Evidence-Based Healthcare 2015; 13(3): 163-169.

4. Peters M, Godfrey C, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. International Journal of Evidence-Based Healthcare 2015; 13(3): 141-146.

5. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 2009; 6(6): e1000097.