Reliability of diagnostic tests

Presentation Transcript


  1. Reliability of diagnostic tests October 29, 2014 O. Paltiel and R. Calderon

  2. Different combinations of high and low precision/reliability and validity

  3. The challenge of diagnosis • Appearances to the mind are of four kinds. Things either are what they appear to be; or they neither are, nor appear to be, or they are, and do not appear to be, or they are not, yet appear to be. Rightly to aim in all these cases is the wise man’s task. Epictetus, 2nd century

  4. Reliability (מהימנות) • Definition: “the extent to which repeated measurements of a stable phenomenon – by different people and instruments, at different times and places – get similar results” • = reproducibility or precision

  5. Validity and reliability High reliability means that repeated measurements fall very close to one another; conversely, low reliability means that they are scattered. Validity reflects how close the mean of repeated measurements lies to the true value. Low validity causes more problems in interpreting results than low reliability.
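
A minimal sketch of this distinction (all numbers are made up for illustration): bias away from the true value degrades validity, while spread across repeated measurements degrades reliability.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0  # hypothetical true value of the stable phenomenon

# Four hypothetical instruments, as (bias, spread) pairs:
# bias harms validity, spread harms reliability/precision.
instruments = {
    "valid + reliable":     (0.0, 1.0),
    "valid + unreliable":   (0.0, 10.0),
    "invalid + reliable":   (15.0, 1.0),
    "invalid + unreliable": (15.0, 10.0),
}

for name, (bias, spread) in instruments.items():
    x = rng.normal(true_value + bias, spread, size=1000)  # repeated measurements
    print(f"{name:22s}  mean={x.mean():6.1f} (validity)  sd={x.std():5.2f} (reliability)")
```

The mean stays near 100 only for the unbiased instruments (validity), and the standard deviation is small only for the low-spread ones (reliability), matching the four combinations on slide 2.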

  6. Sources of variation Measurement • Instrument: the means of making the measurement • Observer: the person making the measurement Biologic • Within individuals: changes in people over time and across situations • Among individuals: biologic differences from person to person

  7. Sources of variation: the measurement of diastolic blood pressure. (Fletcher RH. Clinical Epidemiology: The Essentials.)

  8. Biological Variability

  9. Clinical disagreement in interpreting diagnostic materials

  10. Clinical disagreement in interpreting diagnostic materials (cont.)

  11. The etiology of clinical disagreement
  The examiner • Biologic variation in the senses • The tendency to record inference rather than evidence • Ensnarement by diagnostic classification schemes • Entrapment by prior expectation • Simple incompetency
  The examined • Biologic variation in the system being examined • Effects of illness and medications • Memory and rumination • Toss-ups
  The examination • Disruptive environments for the examination • Disruptive interactions between examiners and examined • Incorrect function or use of diagnostic tools

  12. Alvan Feinstein “To advance art and science in clinical examination, the equipment a clinician most needs to improve is himself”

  13. Effect of lack of sleep on clinical acumen

  14. Observers are biased: e.g., “context bias”.

  15. Observers are biased • Comparison of fetal heart rates obtained by auscultation with rates obtained by fetal monitoring: • When the true fetal heart rate was in the normal range, rates by auscultation were evenly distributed around the true value (i.e., random error only). • When the true fetal heart rate was abnormally high or low, rates by auscultation were biased toward normal. Day. BMJ 1968;4:422

  16. A measure of agreement between two clinical assessments

                            Observer I
                       Positive   Negative   Total
  Observer II Positive     a          b       a+b
              Negative     c          d       c+d
              Total       a+c        b+d       N

  17. Definitions • Ao = a + d = observed agreement • N = a + b + c + d = maximum potential agreement • Ae = (a+b)(a+c)/N + (c+d)(b+d)/N = expected agreement by chance (assuming independence)

  18. Developing a useful index of clinical agreement

  19. The Kappa statistic K = (Ao − Ae) / (N − Ae) = actual agreement beyond chance / potential agreement beyond chance
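
A minimal Python sketch of this computation (the function name kappa_2x2 is my own; the cell labels a, b, c, d follow the 2×2 table on slide 16):

```python
def kappa_2x2(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table (cells labelled as on slide 16)."""
    n = a + b + c + d                                 # maximum potential agreement
    ao = a + d                                        # observed agreement
    ae = ((a + b) * (a + c) + (c + d) * (b + d)) / n  # expected agreement by chance
    return (ao - ae) / (n - ae)                       # agreement beyond chance / potential beyond chance
```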

  20. Example I

                            Observer I
                       Positive   Negative   Total
  Observer II Positive    46         10        56
              Negative    12         32        44
              Total       58         42       100

  Ao = 78, Ae = (58×56)/100 + (42×44)/100 ≈ 51
  K = (78 − 51)/(100 − 51) ≈ 0.55

  21. Example II

                            Observer I
                       Positive   Negative   Total
  Observer II Positive    82          8        90
              Negative     5          5        10
              Total       87         13       100

  Ao = 87, Ae = (87×90)/100 + (13×10)/100 = 79.6
  K = (87 − 79.6)/(100 − 79.6) ≈ 0.36
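
Feeding the two worked examples into the sketch above reproduces the slides' values:

```python
print(round(kappa_2x2(46, 10, 12, 32), 2))  # Example I  -> 0.55
print(round(kappa_2x2(82, 8, 5, 5), 2))     # Example II -> 0.36
```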

  22. Properties of Kappa • Can be negative • Cannot be larger than 1 • Excellent agreement: Kappa > 0.75 • Good agreement: Kappa 0.40 to 0.75 • Poor agreement: Kappa < 0.40
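
The cut-offs above as a small helper (interpret_kappa is a hypothetical name; the thresholds are the slide's):

```python
def interpret_kappa(k):
    """Label agreement using the cut-offs on slide 22."""
    if k > 0.75:
        return "excellent"
    if k >= 0.40:
        return "good"
    return "poor"

print(interpret_kappa(0.55))  # Example I  -> good
print(interpret_kappa(0.36))  # Example II -> poor
```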

  23. Continuous traits • For continuous traits, e.g., cholesterol, a correlation coefficient can be used
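
A minimal sketch for the continuous case, with made-up paired cholesterol readings from two hypothetical observers; a Pearson coefficient is assumed, since the slide does not specify which correlation coefficient is meant (numpy.corrcoef computes Pearson's r):

```python
import numpy as np

# Hypothetical paired cholesterol measurements (mg/dL) by two observers
obs1 = np.array([180, 210, 195, 250, 230, 205])
obs2 = np.array([184, 205, 200, 246, 236, 210])

r = np.corrcoef(obs1, obs2)[0, 1]  # Pearson correlation coefficient
print(f"r = {r:.2f}")
```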
