
Diagnostic and Screening Studies Module 9






Presentation Transcript


  1. Diagnostic and Screening Studies Module 9

  2. Objectives At the completion of this module participants should • Describe the diagnostic process • Be able to describe these concepts: • Pre-test and post-test probability • Sensitivity and specificity • ROC curve • Positive and negative predictive values • Likelihood ratios • Be able to evaluate the quality of a study that evaluates the diagnostic accuracy of a test • Index test & reference standard • Sources of error in studies of the accuracy of diagnostic tests

  3. Screening • Different from a diagnostic test • Brings the time of diagnosis forward: diagnosis prior to the onset of symptoms • Casts a broad net to identify those at high risk for disease • Often applied to a broader population with a lower a priori probability of disease • Minimizes false negatives, even at the cost of false positives • Screening tests are used to “rule out” disease (i.e., they have a very high sensitivity), whereas the goal of a diagnostic test is to “rule in” disease (i.e., high specificity)

  4. Screening • Screening tests are NOT used to diagnose any condition. • Any positive screening test MUST be followed by a more SPECIFIC test in order to make a diagnosis • For example: • VDRL (screening) • Very Sensitive • But many False Positives • FTA (diagnostic)

  5. Diagnostic Methods • Method of exhaustion • Inefficient and impractical • Pattern recognition • Fails in complex and atypical situations • Hypothetico-deductive reasoning • Formulation of a probabilistic differential diagnosis • Continuous refinement by incorporation of new data • Requires coping with residual uncertainty

  6. Diagnostic Process • Estimate a pre-test probability • Decide whether a diagnostic test is required • Has the ‘treatment threshold’ been crossed? • Has the ‘test threshold’ been crossed? • Apply the test • Estimate a post-test probability

  7. Diagnostic Testing as an ‘Intervention’ • Initial clinical data: rapidly progressive dementia, ?CJD → assign pre-test probability → test (brain MRI) → assign post-test probability • Negative: revisit the differential • Positive: Dx = CJD

  8. Pre-Test Probability • The estimated likelihood of a specific diagnosis prior to application of a diagnostic test • How do we put a number on it? • Experience • Prevalence data

  9. Pre-test probability of CJD • Case A: 80-year-old man with 2-year history of memory decline and twitching • Case B: 60-year-old woman with 1-year history of cognitive decline and imbalance • Case C: 45-year-old man with 4-month history of cognitive decline, myoclonus, and ataxia [Figure: cases A, B, and C placed on a low-to-high scale of probability of CJD]

  10. Treatment Threshold Do you need a diagnostic test? • Has the “treatment threshold” been crossed? • Is the provisional diagnosis so likely that we can move on to treatment/management? • If yes, further testing is not necessary

  11. Test Threshold Do you need a diagnostic test? • Has the “test threshold” been crossed? • Is a specific diagnosis deemed so unlikely that we can comfortably dismiss it? • If no, then further testing is necessary
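The two threshold questions above amount to a simple decision rule. A minimal sketch in Python, with illustrative (not clinical) threshold values:

```python
def next_step(p, test_threshold=0.05, treatment_threshold=0.85):
    """Map an estimated disease probability to a next action.

    The threshold values are hypothetical, for illustration only.
    """
    if p < test_threshold:
        return "dismiss"  # below the test threshold: too unlikely to pursue
    if p >= treatment_threshold:
        return "treat"    # above the treatment threshold: testing adds little
    return "test"         # in between: a test may move p across a threshold

print(next_step(0.02))  # dismiss
print(next_step(0.50))  # test
print(next_step(0.90))  # treat
```

A diagnostic test is only worth ordering when its result could move the probability across one of the two thresholds.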

  12. Post-Test Probability • Start with a reasonable estimate of pre-test probability • Apply an accurate diagnostic test • Use the combined information from the pre-test probability and the accuracy of the diagnostic test, to estimate the post-test probability • Fagan’s nomogram illustrates this concept

  13. Fagan’s Nomogram

  14. Post-Test Decision-Making • How does the post-test probability influence clinical decision-making? • Is the test precise enough? • Can you be confident of the test interpretation (agreement)? • Do you need another test? • Can you move on to treatment?

  15. Development / Evaluation of Diagnostic Test PICO Model • P - patient description • I - intervention (index/diagnostic test) • C - comparison (gold or reference standard) • O - outcome (final diagnosis)

  16. P.I.C.O. - CJD as an example P - Patient with rapidly progressive dementia I - “Diagnostic Test” or “Index Test”: • Basal ganglia hyperintensity on brain MRI C - “Reference Standard” or “Gold Standard”: • WHO criteria for CJD O - Diagnosis of CJD

  17. P.I.C.O. - An Answerable Question In patients with rapidly progressive dementia, how accurate is assessment of basal ganglia MRI hyperintensity, compared with WHO diagnostic criteria, for diagnosis of Creutzfeldt-Jakob disease (CJD)?

  18. Intervention: Index Test • The test that will be used in clinical practice to differentiate the condition of interest from some other state • Ideal characteristics of the index test • Accurate (compared with a reference standard) • Precise • Available • Convenient • Low risk • Inexpensive • Reproducible • Should be independent of the reference standard

  19. Reference Standard • Gold standard or the ‘truth’ • The best available procedure, method or criteria used to establish the presence or absence of the condition of interest • Should be independent of the index test • Why not just use the reference standard? • Unavailable (e.g. autopsy) • Risky (e.g. invasive procedure) • Expensive (e.g. new technology)

  20. Diagnostic Test Metrics • How do we quantify / measure diagnostic test accuracy? • Magnitude of the effect • Sensitivity and Specificity • Positive & Negative Predictive Values • Positive & Negative Likelihood Ratios • Precision • Confidence Intervals • Reproducibility • Most can be derived from our old friend … the 2x2 table

  21. 2 x 2 Table

  22. Sensitivity • Sensitivity = positivity in disease = TP / (TP + FN) • A negative result on a test with high sensitivity (i.e., almost no false negatives) helps rule out the disease

  23. Sensitivity: Examples • CSF oligoclonal banding for MS: 85-90% • Head CT for detection of acute SAH: 90-95% • Jolt accentuation of headache in acute bacterial meningitis: 100%

  24. Specificity • Specificity = negativity in non-disease = TN / (TN + FP) • A positive result on a test with high specificity (i.e., almost no false positives) helps rule in the disease
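Both metrics fall straight out of the 2 x 2 table. A minimal sketch with hypothetical counts:

```python
def sensitivity(tp, fn):
    # positivity in disease: TP / (TP + FN)
    return tp / (tp + fn)

def specificity(tn, fp):
    # negativity in non-disease: TN / (TN + FP)
    return tn / (tn + fp)

# Hypothetical 2x2 counts: 90 TP, 10 FN, 80 TN, 20 FP
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 20))  # 0.8
```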

  25. Specificity: Examples • MRI for acute ischemic stroke <3h: 92% • Acetylcholine receptor antibodies in MG: 99% • MRI for acute hemorrhagic stroke: 99-100% • Oculomasticatory myorhythmia in Whipple’s disease: 100%

  26. Receiver Operating Characteristic (ROC) Curves • Plot of sensitivity vs (1 − specificity) • ‘Trade-off’ between sensitivity and specificity, i.e., between true positives and false positives • The 45° diagonal represents a test with no discriminative value; optimal accuracy lies toward the upper-left corner
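An ROC curve is generated by sweeping the cutoff value and recording the (false-positive rate, sensitivity) pair at each step. A minimal sketch with made-up test values, assuming higher values indicate disease:

```python
def roc_points(diseased, healthy, cutoffs):
    """Sensitivity vs (1 - specificity) at each cutoff value."""
    points = []
    for c in cutoffs:
        sens = sum(x >= c for x in diseased) / len(diseased)  # true-positive rate
        fpr = sum(x >= c for x in healthy) / len(healthy)     # false-positive rate
        points.append((fpr, sens))
    return points

# Hypothetical test values for diseased and non-diseased patients
diseased = [4, 5, 6, 7, 8]
healthy = [1, 2, 3, 4, 5]
for fpr, sens in roc_points(diseased, healthy, cutoffs=[2, 4, 6]):
    print(fpr, sens)
```

Lowering the cutoff raises sensitivity but also raises the false-positive rate; that is the trade-off the curve displays.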

  27.–29. [Figures: overlapping distributions of test values in diseased and non-diseased groups; sliding the cutoff value trades true positives against false positives]

  30. Positive & Negative Predictive Value • PPV = disease among those with a positive index test = TP / (TP + FP) • NPV = no disease among those with a negative index test = TN / (TN + FN)
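Predictive values read across the rows of the same 2 x 2 table. A minimal sketch with the same hypothetical counts as before:

```python
def ppv(tp, fp):
    # disease among those with a positive index test
    return tp / (tp + fp)

def npv(tn, fn):
    # no disease among those with a negative index test
    return tn / (tn + fn)

# Hypothetical 2x2 counts: 90 TP, 20 FP, 80 TN, 10 FN
print(round(ppv(90, 20), 3))  # 0.818
print(round(npv(80, 10), 3))  # 0.889
```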

  31. Bayes’ Theorem and the Predictive Value of a Positive Test • The probability of a test demonstrating a true positive depends not only on the sensitivity and specificity of a test, but also on the prevalence of the disease in the population being studied. • The chance of a positive test being a true positive is markedly higher in a population with a high prevalence of disease. • In contrast, if a very sensitive and specific test is applied to a population with a very low prevalence of disease, most positive tests will actually be false positives.

  32. Bayes’ Theorem

  Prevalence of Condition (%)    Positive Predictive Value of a Positive Test (%)*
  75                             98
  50                             95
  20                             83
  10                             68
  5                              50
  1                              16
  0.1                            2

  * Assuming 95% sensitivity and 95% specificity

  33. Bayes’ Theorem • In the example where the disease prevalence is 1% and the test has 95% sensitivity and 95% specificity, the probability that a positive test is a true positive = 16.1%. • This means that 83.9% of the positive results will actually be false! In this setting, even a highly sensitive and specific test is of no practical value. • Positive predictive values derived in one setting will likely not be valid in another setting with a different disease prevalence. • This can be better addressed using likelihood ratios.
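The table above can be reproduced directly from Bayes' theorem: PPV = (sens x prev) / (sens x prev + (1 − spec) x (1 − prev)). A minimal sketch:

```python
def ppv_from_prevalence(sens, spec, prev):
    """Bayes' theorem: P(disease | positive test)."""
    tp = sens * prev               # true positives per unit population
    fp = (1 - spec) * (1 - prev)   # false positives per unit population
    return tp / (tp + fp)

# Reproduce the slide's table (95% sensitivity, 95% specificity)
for prev in [0.75, 0.50, 0.20, 0.10, 0.05, 0.01, 0.001]:
    print(f"{prev:>6.1%}  PPV = {ppv_from_prevalence(0.95, 0.95, prev):.0%}")
```

At 1% prevalence the PPV drops to about 16%, matching the slide's worked example.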

  34. Likelihood Ratios • The most clinically useful accuracy measures • LR+ = sensitivity / (1 − specificity): how much a positive result raises the odds of disease • LR− = (1 − sensitivity) / specificity: how much a negative result lowers the odds of disease • Unlike predictive values, likelihood ratios do not depend on disease prevalence
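Using the standard definitions LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity, both ratios follow from the accuracy figures alone. A minimal sketch with hypothetical values:

```python
def likelihood_ratios(sens, spec):
    """Return (LR+, LR-) from sensitivity and specificity."""
    lr_pos = sens / (1 - spec)        # positive result: odds multiplier up
    lr_neg = (1 - sens) / spec        # negative result: odds multiplier down
    return lr_pos, lr_neg

# Hypothetical test: 90% sensitivity, 95% specificity
lr_pos, lr_neg = likelihood_ratios(0.90, 0.95)
print(lr_pos)  # ~18: a positive result is ~18x more likely in disease
print(lr_neg)  # ~0.105
```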

  35. Likelihood Ratio Interpretation

  36. Estimate a Post-Test Probability
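The arithmetic behind Fagan's nomogram is a single odds multiplication: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. A minimal sketch with hypothetical numbers:

```python
def post_test_probability(pre_test_p, lr):
    """Fagan's nomogram as arithmetic: odds in, odds out."""
    pre_odds = pre_test_p / (1 - pre_test_p)  # probability -> odds
    post_odds = pre_odds * lr                 # apply the likelihood ratio
    return post_odds / (1 + post_odds)        # odds -> probability

# Hypothetical: 30% pre-test probability, positive test with LR+ = 10
print(round(post_test_probability(0.30, 10), 2))  # 0.81
```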

  37. Sources of Bias • Spectrum Bias • Verification Bias • Independence • Incorporation bias • Blinding

  38. Precision (Random Error) • Random error → insufficiently precise estimates of test accuracy • Random error may be quantified statistically with confidence intervals • 95% is standard • Smaller interval (more precision) with larger sample size
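The sample-size effect can be shown with the simple normal-approximation (Wald) interval for a proportion; a sketch with hypothetical counts, keeping the estimated sensitivity fixed at 90% while the sample grows tenfold:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Approximate 95% CI for a proportion (normal approximation)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 90% sensitivity estimated from 30 vs 300 diseased patients
print(wald_ci(27, 30))    # wide interval
print(wald_ci(270, 300))  # narrower interval: more precision
```

More refined intervals (e.g., Wilson score) behave better near 0% or 100%, but the same shrinkage with sample size holds.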

  39. Agreement • Many tests require observer interpretation • Clinical utility and generalizability are affected by the inter-observer agreement • Agreement above chance • Measured by kappa (κ) statistic
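The kappa statistic compares observed agreement with the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters with hypothetical reads:

```python
def cohens_kappa(rater_a, rater_b):
    """Agreement above chance between two raters' categorical reads."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # chance agreement: product of each rater's marginal frequencies
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical reads of six scans by two observers
a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Here the raters agree on 5 of 6 reads (83%), but since 50% agreement is expected by chance, kappa is 0.67.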

  40. STARD • STAndards for the Reporting of Diagnostic accuracy studies • Consensus document summarizing reporting requirements for diagnostic accuracy studies • 25-item checklist and flow diagram • http://www.stard-statement.org/

  41. STARD Checklist

  42. STARD Checklist

  43. Summary • Recognize that diagnosis is usually achieved using hypothetico-deductive methods • Formulate an appropriate diagnostic question when considering use of a test • The clinical importance of a test result is determined by both the pre-test probability of the disease and test accuracy • Diagnostic test accuracy is best expressed using likelihood ratios with 95% confidence intervals • The STARD criteria can assist in appraisal of the methods used to evaluate a diagnostic test

  44. References • http://www.stard-statement.org/ • Fagan TJ. Nomogram for Bayes’s theorem. N Engl J Med 1975;293:257. • Ransohoff DF, Feinstein AR. Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. N Engl J Med 1978;299:926-930. • Reid MC, Lachs MS, Feinstein AR. Use of methodological standards in diagnostic test research: getting better but still not good. JAMA 1995;274:645-651.
