
Statistics and Image Quality Evaluation III


Presentation Transcript


  1. Statistics and Image Quality Evaluation III. Oleh Tretiak, Medical Imaging Systems, Fall 2002

  2. This Lecture • Mean square error as quality measure • Shortcomings • How to do ROC by hand • Visual ROC experiment • This and other files for today's lecture are at • http://www.ece.drexel.edu/courses/ECE-S684/Notes/IQ3/

  3. Analytic Image Fidelity • Mean square difference between images, where u(m, n) is the perfect image and u'(m, n) is the defective image
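
The equation itself did not survive the transcript. The standard mean square error definition for M x N images, which is presumably what the slide showed, is

\[ \varepsilon^2 = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[u(m,n) - u'(m,n)\bigr]^2 \]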

  4. ‘White Noise’ Pattern

  5. Noise Patterns • White, low-frequency, and high-frequency noise; all have the same standard deviation
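
For concreteness, a minimal sketch of how three such patterns could be generated with matched standard deviation, assuming NumPy; the image size and box-filter width are illustrative choices, not the ones used for the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)   # illustrative image size
sigma = 8.0          # common target standard deviation

def normalize(noise, sigma):
    """Rescale to zero mean and the requested standard deviation."""
    noise = noise - noise.mean()
    return noise * (sigma / noise.std())

# White noise: independent Gaussian samples.
white = normalize(rng.standard_normal(shape), sigma)

# Low-frequency noise: white noise smoothed by a box filter (circular convolution via FFT).
k = 9
kernel = np.ones((k, k)) / (k * k)
smoothed = np.real(np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(kernel, s=shape)))
low = normalize(smoothed, sigma)

# High-frequency noise: the residual left after smoothing.
high = normalize(white - smoothed, sigma)
```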

  6. Effect of noise on image quality: UL (upper left): original 8-bit image; UR (upper right): white noise; LL (lower left): low-pass noise; LR (lower right): high-pass noise. The noise standard deviation is 8, so the peak-to-peak signal to rms noise ratio is 32.

  7. Effect of noise on image quality: UL (upper left): original 8-bit image; UR (upper right): white noise; LL (lower left): low-pass noise; LR (lower right): high-pass noise. The noise standard deviation is 32, so the peak-to-peak signal to rms noise ratio is 8.

  8. Conclusions • Same SNR does not produce the same image degradation when noises are different • ‘Low frequency’ noise is most visible • ‘High frequency’ noise is most damaging

  9. “The mean square error criterion is not without limitations, especially when used as a global measure of image fidelity. The prime justification for its common use is the relative ease with which it can be handled mathematically…” Anil Jain, in Fundamentals of Digital Image Processing.

  10. Method • Rating on a 1 to 5 scale (5 is best) • Ratings performed by 21 subjects • Statistics: • Average, maximum, minimum, standard deviation, and standard error for Case A and Case B • Difference per viewer between Case A and Case B, and the above statistics on the difference
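
A short NumPy sketch of the statistics listed above, assuming the 21 ratings for each case are held in simple arrays; the data below are randomly generated placeholders, not the class results.

```python
import numpy as np

def rating_summary(ratings):
    """Average, maximum, minimum, sample standard deviation, and standard error."""
    r = np.asarray(ratings, dtype=float)
    return {
        "mean": r.mean(),
        "max": r.max(),
        "min": r.min(),
        "std": r.std(ddof=1),
        "sem": r.std(ddof=1) / np.sqrt(r.size),
    }

rng = np.random.default_rng(1)
case_a = rng.integers(1, 6, size=21)   # placeholder ratings, 1 to 5
case_b = rng.integers(1, 6, size=21)

print(rating_summary(case_a))
print(rating_summary(case_b))
print(rating_summary(case_a - case_b))  # per-viewer difference, as on the slide
```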

  11. Two Kinds of Errors • In a decision task with two alternatives, there are two kinds of errors • Suppose the alternatives are 'healthy' and 'sick' • Type I error: say sick if healthy • Type II error: say healthy if sick

  12. X is the observation, t is the threshold. α = Pr[X > t | H0] (Type I error), β = Pr[X < t | H1] (Type II error). By choosing t, we can trade off between the two types of errors
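
As a worked example (numbers chosen for illustration, not taken from the lecture): if X ~ N(0, 1) under H0 and X ~ N(2, 1) under H1, then with t = 1 we get α = Pr[X > 1 | H0] = 1 - Φ(1) ≈ 0.16 and β = Pr[X < 1 | H1] = Φ(1 - 2) ≈ 0.16; raising t lowers α at the cost of a larger β, and lowering t does the opposite.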

  13. ROC Terminology • ROC: receiver operating characteristic • H0: friend, negative; H1: enemy, positive • Pr[X > t | H0] = probability of false alarm = probability of false positive = PFP = α • Pr[X > t | H1] = probability of detection = probability of true positive = PTP = 1 - β

  14. The ROC • The ROC shows the tradeoff between PFP and PTP as the threshold is varied

  15. Binormal Model • Negative: Normal, mean = 0, st. dev. = 1 • Positive: Normal, mean = a, st. dev. = b
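
A minimal sketch of the theoretical ROC implied by this model, assuming SciPy for the normal CDF; sweeping the threshold t traces out the curve, and the particular a and b values are only examples.

```python
import numpy as np
from scipy.stats import norm

def binormal_roc(a, b, num=200):
    """Theoretical ROC for negatives ~ N(0, 1) and positives ~ N(a, b)."""
    t = np.linspace(-5.0, a + 5.0 * b, num)   # thresholds to sweep
    pfp = 1.0 - norm.cdf(t)                   # Pr[X > t | negative]
    ptp = 1.0 - norm.cdf(t, loc=a, scale=b)   # Pr[X > t | positive]
    return pfp, ptp

pfp, ptp = binormal_roc(a=2.0, b=1.5)   # example parameters only
```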

  16. Show Excel Spreadsheet

  17. ROC from Experimental Data. Starting data: 9 samples of an N(0, 1) pseudorandom variable, simulating the negative population. Middle: the data sorted, with the cumulative distribution. Right: a plot of the middle columns.

  18. ROC from Experimental Data 2. Starting data: 7 samples of an N(2, 1.5) pseudorandom variable, simulating the positive population. Middle: the data sorted, with the cumulative distribution. Right: a plot of the middle columns.

  19. ROC from Experimental Data 3. Merged sorted data and 'Empirical ROC' plot. Formula in the FPF column: =IF(N4="N",O4,P3); formula in the TPF column: =IF(N4="P",O4,Q3)
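
The same sorted-merge construction can be done outside Excel. Below is a rough NumPy sketch under the convention that larger scores are more "positive"; steps of 1/(number of negatives) and 1/(number of positives) play the role of the spreadsheet's cumulative columns.

```python
import numpy as np

def empirical_roc(negatives, positives):
    """Empirical ROC from raw scores, built by merging and sorting the two samples."""
    scores = np.concatenate([negatives, positives])
    labels = np.array(["N"] * len(negatives) + ["P"] * len(positives))
    fpf, tpf = [1.0], [1.0]   # threshold below every score: everything called positive
    for lab in labels[np.argsort(scores)]:
        if lab == "N":
            fpf.append(fpf[-1] - 1.0 / len(negatives))
            tpf.append(tpf[-1])
        else:
            fpf.append(fpf[-1])
            tpf.append(tpf[-1] - 1.0 / len(positives))
    return np.array(fpf), np.array(tpf)   # plot tpf against fpf
```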

  20. Comparison of Finite and Infinite Sample • Theoretical ROC: what would have been obtained from a very large sample • Empirical ROC: 9 negative and 7 positive cases

  21. Metz • University of Chicago ROC project: http://www-radiology.uchicago.edu/krl/toppage11.htm • Software for estimating Az, plus its sample standard deviation and confidence intervals • Versatile

  22. Ordinal Dominance Theory • Donald Bamber, The Area above the Ordinal Dominance Graph and the Area below the Receiver Operating Characteristic Graph, J. Math. Psych. 12: 387-415 (1975) • Sample Az is an unbiased, efficient, asymptotically normal estimate of the population Az • Bamber gives a formula for the variance of Az • Worst-case (overbound) estimate of the variance
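
Bamber's identity relates the area under the empirical ROC to the Mann-Whitney statistic, so sample Az can be computed by counting pairs; a minimal sketch, with ties counted as one half:

```python
import numpy as np

def sample_az(negatives, positives):
    """Fraction of (negative, positive) score pairs in which the positive scores
    higher, with ties counted as 1/2: the Mann-Whitney form of the ROC area."""
    neg = np.asarray(negatives, dtype=float)[:, None]
    pos = np.asarray(positives, dtype=float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))
```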

  23. Visual ROC Target • The target is 9x16 pixels: Gaussian noise plus a 4x4 dark rectangle on one side • Protocol: answer L or R according to whether you think the target is on the left or the right • In this example, the target is on the left
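
A rough sketch of how such a stimulus could be generated, assuming NumPy; the mean level, contrast, noise standard deviation, and exact placement of the square are guesses for illustration, not the actual experiment parameters.

```python
import numpy as np

rng = np.random.default_rng()

def make_stimulus(side="L", mean=128.0, noise_sigma=20.0, contrast=30.0):
    """9x16 Gaussian-noise image with a 4x4 dark square on the left or right half.
    All intensity values here are illustrative, not the lecture's settings."""
    img = mean + noise_sigma * rng.standard_normal((9, 16))
    col = 2 if side == "L" else 10        # upper-left column of the dark square
    img[3:7, col:col + 4] -= contrast     # darken a 4x4 block
    return img
```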

  24. Experiment Goal • Test the processing power of your (human) visual system • The target is easy to see in the absence of noise • This sort of stimulus can be analyzed, and the theoretical best ROC found • Use the paper form for your answers, then enter your results on a computer and mail them to me • Alternate method: download the file ROC-test-form.xls, enter your data on the form, and send it to me by e-mail

  25. Group 1

  26. Group 2
