
6F5Z1003 Research Design and Analysis (Ed) Lecture 04


Presentation Transcript


  1. 6F5Z1003 Research Design and Analysis (Ed) Lecture 04 sensitivity, predictive value and repeatability

  2. Today: Sensitivity and specificity; Predictive values; Measuring observer reliability. For each topic: What are they? Why do we need them? How do we do them?

  3. Sensitivity and Specificity Sensitivity analysis is primarily used in medical diagnosis (but there are other applications). A typical use is to compare diagnostic techniques on their ability to detect a disease, e.g., comparing a new diagnostic tool with the established gold-standard technique.

  4. Sensitivity and Specificity Data summary (a 2 x 2 table of counts, with the cells labelled as used in the formulas that follow):

                        Condition present     Condition absent
     Test positive      a (true positive)     b (false positive)
     Test negative      c (false negative)    d (true negative)

  5. Sensitivity and Specificity From the previous data one can calculate: Sensitivity = a / (a + c) – the proportion of positive diagnoses when the condition is present; Specificity = d / (b + d) – the proportion of negative diagnoses when the condition is absent.
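As an illustration (not part of the original slides), a minimal Python sketch of these two calculations, assuming the 2 x 2 counts a, b, c and d defined above; the example numbers are hypothetical:

     def sensitivity(a, c):
         # Proportion of people who have the condition and test positive: a / (a + c)
         return a / (a + c)

     def specificity(b, d):
         # Proportion of people without the condition who test negative: d / (b + d)
         return d / (b + d)

     # Hypothetical counts: a = true positives, b = false positives,
     # c = false negatives, d = true negatives
     a, b, c, d = 45, 5, 10, 40
     print(sensitivity(a, c))  # 45 / 55, roughly 0.82
     print(specificity(b, d))  # 40 / 45, roughly 0.89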

  6. Predictive Values Predictive values are used as a measure of how accurate diagnostic tests are. Positive predictive value (PPV) – this is the proportion of those with a positive test result who DO have the condition. Negative predictive value (NPV) – this is the proportion of those with a negative test result who DO NOT have the condition.

  7. Predictive Values Data summary: the same 2 x 2 table of counts a, b, c and d as before.

  8. Predictive Values From the previous data one can calculate: Positive predictive value (PPV) = a / (a + b) – the proportion of positive diagnoses who actually have the condition; Negative predictive value (NPV) = d / (c + d) – the proportion of negative diagnoses who actually do NOT have the condition.
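Continuing the hypothetical sketch above (same assumed counts, not from the original slides):

     def ppv(a, b):
         # Proportion of positive test results that are true positives: a / (a + b)
         return a / (a + b)

     def npv(c, d):
         # Proportion of negative test results that are true negatives: d / (c + d)
         return d / (c + d)

     print(ppv(45, 5))    # 45 / 50 = 0.90
     print(npv(10, 40))   # 40 / 50 = 0.80

Note that, unlike sensitivity and specificity, PPV and NPV depend on how common the condition is in the group being tested.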

  9. Measuring observer reliability Observer reliability is generally used to control error, e.g., when more than one person is collecting data and it is important that they measure the same thing! There are a few ways to do this – one is called Cohen’s Kappa. It is sometimes called “repeatability”, but there are several different ways to calculate it.

  10. Measuring observer reliability Data summary (NB the raw data are categorical – count data here). Two readers each rated the same 50 proposals as granted ("Yes") or rejected ("No"):

                          Reader B: Yes    Reader B: No    Total
     Reader A: Yes              20               5            25
     Reader A: No               10              15            25
     Total                      30              20            50

  11. Measuring observer reliability

  12. Measuring observer reliability Note that there were 20 proposals that were granted by both reader A and reader B, and 15 proposals that were rejected by both readers. Thus, the observed proportion of agreement is Pr(a) = (20 + 15) / 50 = 0.70

  13. Measuring observer reliability Reader A said "Yes" to 25 applicants and "No" to 25 applicants. Thus reader A said "Yes" 50% of the time. Reader B said "Yes" to 30 applicants and "No" to 20 applicants. Thus reader B said "Yes" 60% of the time. Therefore the probability that both of them would say "Yes" randomly is 0.50 · 0.60 = 0.30 and the probability that both of them would say "No" is 0.50 · 0.40 = 0.20.  Thus the overall probability of random agreement is Pr(e) = 0.3 + 0.2 = 0.5.

  14. Measuring observer reliability Pr(a) = (20 + 15) / 50 = 0.70 Pr(e) = 0.3 + 0.2 = 0.5

  15. Measuring observer reliability Cohen’s Kappa corrects the observed agreement for the agreement expected by chance: Kappa = (Pr(a) − Pr(e)) / (1 − Pr(e)) = (0.70 − 0.50) / (1 − 0.50) = 0.40.
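A Python sketch of the whole calculation (not part of the original slides); the counts are those from the worked example, and the function name is illustrative:

     def cohens_kappa(both_yes, a_yes_b_no, a_no_b_yes, both_no):
         # Cohen's Kappa for two raters giving binary Yes/No ratings
         n = both_yes + a_yes_b_no + a_no_b_yes + both_no
         # Observed agreement: proportion of cases where the raters agree
         pr_a = (both_yes + both_no) / n
         # Chance agreement from each rater's marginal Yes/No proportions
         a_yes = (both_yes + a_yes_b_no) / n
         b_yes = (both_yes + a_no_b_yes) / n
         pr_e = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)
         return (pr_a - pr_e) / (1 - pr_e)

     # 20 both Yes, 5 A-Yes/B-No, 10 A-No/B-Yes, 15 both No
     print(cohens_kappa(20, 5, 10, 15))  # (0.70 - 0.50) / (1 - 0.50) = 0.40

In practice the same value can also be obtained from the two raters' raw label vectors with scikit-learn's sklearn.metrics.cohen_kappa_score.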
