Does a systematic review of diagnostic tests begin and end with accuracy?
Reviews assessing the effects of interventions in health care are the most common type of systematic review published in the JBI Database of Systematic Reviews and Implementation Reports (JBISRIR). They provide the best available evidence of the effectiveness of interventions for particular conditions and of their potential harms. However, before any intervention can be selected, the condition must first be accurately diagnosed.
Diagnostic tests include imaging and biochemical examinations, pathological and psychological investigations, and the signs and symptoms observed during history taking and clinical evaluation. Without an accurate diagnosis, any treatment administered to a patient is likely to be futile and may even cause unnecessary harm, to say nothing of wasted time and clinical resources. This makes the accurate diagnosis of a condition and, by implication, the use of an accurate and appropriate diagnostic test a matter of great importance. New tests are continuously being developed, driven by the competing demands of accuracy, safety and cost. However, even within the concept of "accuracy" there are trade-offs, because test sensitivity (the probability that a person with the condition is correctly diagnosed) and specificity (the probability that a person without the condition is correctly diagnosed) tend to move in opposite directions: adjusting a test's positivity threshold to raise one typically lowers the other (the formal definitions are restated below). No definitive answer can be given on which property is the more important or where the balance should be struck, because the potential harm caused by false positive and false negative diagnoses varies enormously with the condition of interest and the interventions that will follow from the diagnosis.

This highlights the need for systematic reviews that synthesize the best available evidence on the accuracy of diagnostic tests and so provide high quality evidence for this important area of clinical practice. Guidance for the conduct of systematic reviews of diagnostic test accuracy does exist,1 and several such reviews have been published in the JBISRIR;2-4 however, in the absence of a set methodology their approaches have been disparate. To facilitate the conduct of these reviews, the Joanna Briggs Institute has now released methodological guidance for systematic reviews of diagnostic test accuracy.5,6 These guidelines bring together existing standards for studies of diagnostic test accuracy, such as QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) for critical appraisal7 and STARD (Standards for Reporting of Diagnostic Accuracy) for data extraction.8
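For readers less familiar with these measures, a brief formal restatement may help. Writing the familiar 2×2 cross-classification of test result against true disease status as true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN), the notation being ours rather than part of the JBI guidance, the standard definitions are:

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}
\]

As a hypothetical illustration, a test that correctly identifies 90 of 100 people with the condition and 80 of 100 people without it has a sensitivity of 90/100 = 0.90 and a specificity of 80/100 = 0.80. Relaxing its positivity threshold would typically convert some of the 10 false negatives into true positives, raising sensitivity, while also converting some of the 80 true negatives into false positives, lowering specificity. This is the trade-off referred to above.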
It is well recognized, however, that when it comes to diagnostic tests there is more to consider than simply their accuracy. Most commonly discussed is the need to consider a test's clinical benefit:1 does carrying out the test and obtaining a diagnosis actually result in better outcomes for the patient compared with an alternative test, usual practice, or no formal diagnostic testing? A controversial example is the diagnosis of prostate cancer in older men whose health status makes it likely that they will die of other causes before the cancer can progress to the point where it impacts or threatens their lives.9 In such cases it has been argued that diagnostic testing, however accurate, may not be appropriate, as it provides no clinical benefit, increases patient stress, and creates the risk of inappropriate, harmful treatment options being pursued. Furthermore, some diagnostic tests, such as prostate examination and biopsy, are invasive and may cause great discomfort or carry significant risks. Depending on the severity of symptoms and the potential outcomes of the suspected condition, these risks may overshadow the expected benefits of an accurate diagnosis. Indeed, some conditions, such as Alzheimer's disease, can be diagnosed with complete confidence only through post-mortem examination.10 Other, albeit less dramatic, considerations include: the difficulty of the test (a process may be highly accurate when carried out expertly but completely misleading in the hands of the less experienced), speed (when time is of the essence, the need to do something right will compete with the imperative to do something right now), and, unfortunately, the inescapable influence of cost.
Evidence of accuracy is essential for a diagnostic test to be implemented in clinical practice. However, for patient diagnoses to be informed by the best available evidence, that evidence must relate to more than just the accuracy of the test. Evidence of effectiveness, safety, cost and reliability is needed to fully inform clinical practice and ensure optimal patient outcomes. Beyond the realm of quantitative evaluation, the meaning patients give to their diagnosis, and to the diagnostic testing process itself, deserves investigation through qualitative review. By no means an afterthought, accuracy is just the tip of the iceberg for diagnostic tests, and is neither the beginning nor the end of the systematic review.
Research Fellow, Implementation Science, The Joanna Briggs Institute
References