
Diagnostic tests



Presentation Transcript


  1. Diagnostic tests Subodh S Gupta MGIMS, Sewagram

  2. Standard 2 X 2 table (For Diagnostic Tests) Gold Standard

  3. Standard 2 X 2 table (For Diagnostic Tests) Gold Standard: rows give the test result (positive/negative), columns give the gold-standard disease status (present/absent); the cells are a (true positives), b (false positives), c (false negatives) and d (true negatives), with n = a + b + c + d

  4. Gold standard • In any study of diagnosis, the method being evaluated has to be compared to something • The best available test used as the comparison is called the GOLD STANDARD • Remember that gold standards are not always gold; the new test may be better than the gold standard

  5. Test parameters Gold Standard • Sensitivity = Pr(T+|D+) = a/(a+c) --Sensitivity is PID (Positive In Disease) • Specificity = Pr(T-|D-) = d/(b+d) --Specificity is NIH (Negative In Health)

  6. Test parameters Gold Standard • False Positive Rate (FP rate) = Pr(T+|D-) = b/(b+d) • False Negative Rate (FN rate) = Pr(T-|D+) = c/(a+c) • Diagnostic Accuracy = (a+d)/n

  7. Test parameters Gold Standard • Positive Predictive Value (PPV) = Pr(D+|T+) = a/(a+b) • Negative Predictive Value (NPV) = Pr(D-|T-) = d/(c+d)

  8. Test parameters: Example Gold Standard Sensitivity = 90/(90+10); Specificity = 95/(95+5); FP rate = 5/(95+5); FN rate = 10/(90+10); Diagnostic Accuracy = (90+95)/(90+10+5+95); PPV = 90/(90+5); NPV = 95/(95+10)

  9. PPV & NPV with Prevalence

  10. Healthy population vs sick population (figure)

  11. Predictive Values in hospital-based data

  12. Predictive Values in population-based data

  13. Test Parameters: Example Gold Standard Prevalence = 50%; PPV = 94.7%; NPV = 90.5%; Diagnostic Accuracy = 92.5%

  14. Test Parameters: Example Gold Standard Prevalence = 5%; PPV = 48.6%; NPV = 99.4%; Diagnostic Accuracy = 94.8%

  15. Test Parameters: Example Gold Standard Prevalence = 0.5%; PPV = 8.3%; NPV = 99.9%; Diagnostic Accuracy = 95%

  16. Test Parameters: Example Gold Standard Prevalence = 0.05%; PPV = 0.9%; NPV = 100%; Diagnostic Accuracy = 95%
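The prevalence dependence shown in slides 13 to 16 follows from Bayes' theorem applied to a fixed test (sensitivity 0.90, specificity 0.95). A sketch, with function names of my own choosing:

```python
# Sketch (my own function names): PPV, NPV and accuracy from sensitivity,
# specificity and prevalence via Bayes' theorem, for the slides' test with
# sensitivity 0.90 and specificity 0.95.

def predictive_values(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    accuracy = sens * prev + spec * (1 - prev)
    return ppv, npv, accuracy

# The four prevalences used in slides 13-16.
for prev in (0.5, 0.05, 0.005, 0.0005):
    ppv, npv, acc = predictive_values(0.90, 0.95, prev)
    print(f"prevalence {prev:7.2%}: PPV {ppv:5.1%}  NPV {npv:6.1%}  accuracy {acc:5.1%}")
```

As the prevalence falls, PPV collapses while NPV and accuracy stay high, which is exactly the pattern in the slides.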

  17. PPV & NPV with Prevalence

  18. Trade-offs between Sensitivity and Specificity

  19. Sensitivity and Specificity solve the wrong problem!!! • When we use a diagnostic test clinically, we do not know who actually has and who does not have the target disorder; if we did, we would not need the diagnostic test. • Our clinical concern is not the vertical one of sensitivity and specificity, but the horizontal one of the meaning of positive and negative test results. BE-Workshop-DT-July2007

  20. When a clinician uses a test, which question is important? • If I obtain a positive test result, what is the probability that this person actually has the disease? • If I obtain a negative test result, what is the probability that the person does not have the disease?

  21. Test parameters Gold Standard • Sensitivity = Pr(T+|D+) = a/(a+c) • Specificity = Pr(T-|D-) = d/(b+d) • PPV = Pr(D+|T+) = a/(a+b) • NPV = Pr(D-|T-) = d/(c+d)

  22. Likelihood Ratios • A likelihood ratio is a ratio of two probabilities • Likelihood ratios state how many times more (or less) likely a particular test result is observed in patients with the disease than in those without it. • LR+ tells how much the odds of the disease increase when a test is positive. • LR- tells how much the odds of the disease decrease when a test is negative

  23. The likelihood ratio for a positive result (LR+) tells how much the odds of the disease increase when a test is positive. • The likelihood ratio for a negative result (LR-) tells you how much the odds of the disease decrease when a test is negative

  24. Likelihood Ratios The LR for a positive test is defined as: LR(+) = Prob(T+|D) / Prob(T+|ND) = [TP/(TP+FN)] / [FP/(FP+TN)] = Sensitivity / (1 - Specificity)

  25. Likelihood Ratios The LR for a negative test is defined as: LR(-) = Prob(T-|D) / Prob(T-|ND) = [FN/(TP+FN)] / [TN/(FP+TN)] = (1 - Sensitivity) / Specificity

  26. What is a good 'Likelihood Ratio'? • An LR(+) greater than 10 or an LR(-) less than 0.1 provides convincing diagnostic evidence. • An LR(+) greater than 5 or an LR(-) less than 0.2 is considered to give strong diagnostic evidence.

  27. Likelihood Ratio: Example Gold Standard Likelihood ratio for a positive test = (90/100) / (5/100) = 90/5 = 18; Likelihood ratio for a negative test = (10/100) / (95/100) = 10/95 ≈ 0.11
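The slide-27 figures follow directly from the sensitivity/specificity forms of the two definitions. A minimal sketch (function name is mine):

```python
# Sketch: LR+ and LR- from sensitivity and specificity, checked against the
# slide-27 example (sensitivity 0.90, specificity 0.95).

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)   # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative result lowers the odds
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.90, 0.95)
print(round(lr_pos, 1), round(lr_neg, 2))
```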

  28. Exercise • In a hypothetical example of a diagnostic test, serum levels of a biochemical marker of a particular disease were compared with the known diagnosis of the disease. A level of 100 international units of the marker or greater was taken as an arbitrary positive test result:

  29. Example

  30. Exercise • Initial creatine phosphokinase (CK) levels were related to the subsequent diagnosis of acute myocardial infarction (MI) in a group of patients with suspected MI. Four ranges of CK result were chosen for the study:

  31. Exercise

  32. Odds and Probability Probability of disease = (number with disease) / (number with disease + number without disease); Odds of disease = (number with disease) / (number without disease); Probability = Odds / (Odds + 1); Odds = Probability / (1 - Probability)
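The two slide-32 conversions are each other's inverses, which a tiny sketch makes easy to verify (function names are mine):

```python
# Sketch of the slide-32 conversions between probability and odds.

def prob_to_odds(p):
    """Odds = Probability / (1 - Probability)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Probability = Odds / (Odds + 1)."""
    return odds / (odds + 1)

# e.g. a probability of 0.2 corresponds to odds of 0.25 (1 to 4), and back.
print(prob_to_odds(0.2), odds_to_prob(0.25))
```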

  33. Use of Likelihood Ratio Use the following three-step procedure: 1. Identify the pre-test probability and convert it to pre-test odds. 2. Determine the post-test odds using the formula Post-test odds = Pre-test odds × Likelihood ratio. 3. Convert the post-test odds into a post-test probability.

  34. Likelihood Ratio: Example • A 52-year-old woman presents after detecting a 1.5 cm breast lump on self-examination. On clinical exam, the lump is not freely movable. If the pre-test probability is 20% and the LR for a non-movable breast lump is 4, calculate the probability that this woman has breast cancer.

  35. Likelihood Ratio: Solution First step • Pre-test probability = 0.2 • Pre-test odds = Pre-test probability / (1 - pre-test probability) = 0.2/0.8 = 0.25 Second step • Post-test odds = Pre-test odds × LR = 0.25 × 4 = 1 Third step • Post-test probability = Post-test odds / (1 + Post-test odds) = 1/(1+1) = 0.5
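The three steps of the slide-35 solution can be sketched as a single function (the name is mine, not from the presentation):

```python
# Sketch of the slide-33 three-step procedure, applied to the slide-34/35
# example (pre-test probability 0.20, LR = 4).

def post_test_probability(pre_test_prob, lr):
    pre_odds = pre_test_prob / (1 - pre_test_prob)  # step 1: probability -> odds
    post_odds = pre_odds * lr                        # step 2: apply the likelihood ratio
    return post_odds / (1 + post_odds)               # step 3: odds -> probability

print(post_test_probability(0.20, 4))
```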

  36. Receiver Operating Characteristic (ROC) • Finding the best test • Finding the best cut-off • Finding the best combination (rating scale: definite positive, probably positive, equivocal, probably negative)

  37. ROC curve constructed from multiple test thresholds

  38. Receiver Operating Characteristic (ROC) • ROC Curve allows comparison of different tests for the same condition without (before) specifying a cut-off point. • The test with the largest AUC (Area under the curve) is the best.
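The threshold sweep of slide 37 and the AUC comparison of slide 38 can be sketched in a few lines of Python; the scores below are made up for illustration and the helper names are my own:

```python
# Sketch (made-up scores, my own helper names): an ROC curve built by sweeping
# a cut-off over the test values, with AUC from the trapezoidal rule.

def roc_points(scores_diseased, scores_healthy, thresholds):
    """(FP rate, sensitivity) for each cut-off 'value >= t counts as positive'."""
    pts = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        fpr = sum(s >= t for s in scores_healthy) / len(scores_healthy)
        pts.append((fpr, sens))
    return pts

def auc(points):
    """Trapezoidal area under the (FP rate, sensitivity) curve."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# A test that separates diseased from healthy perfectly has AUC = 1.0;
# a test no better than chance sits on the diagonal with AUC = 0.5.
perfect = roc_points([5, 6, 7], [1, 2, 3], range(9))
print(auc(perfect))
```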

  39. Features of a good diagnosis study • Comparative (compares the new test against the old test). • Should use a "gold standard". • Should include both positive and negative results. • Usually involves "blinding" of the patient, tester, and investigator.

  40. USERS GUIDES TO THE MEDICAL LITERATURE How to use an Article about a Diagnostic Test? • Are the results of the study valid? • What are the results and will they help me in caring for my patients?

  41. Methodological Questions for Appraising Journal Articles about Diagnostic Tests 1. Was there an independent, 'blind' comparison with a 'gold standard' of diagnosis? 2. Was the setting for the study, as well as the filter through which the study patients passed, adequately described? 3. Did the patient sample include an appropriate spectrum of disease? 4. Was an analysis of the pertinent subgroups done? 5. Were the tactics for carrying out the test described in sufficient detail to permit their exact replication?

  42. 6. Was the reproducibility of the test result (precision) and its interpretation (observer variation) determined? 7. Was the term 'normal' defined sensibly? 8. Was the precision of the test statistics given? 9. Were indeterminate test results presented? 10. If the test is advocated as part of a cluster or sequence of tests, was its contribution to the overall validity of the cluster or sequence determined? 11. Was the 'utility' of the test determined?
