
Image Quality in Digital Pathology






Presentation Transcript


  1. Image Quality in Digital Pathology (from a pathologist’s perspective) Jonhan Ho, MD, MS

  2. Disclosure

  3. Image Quality: define/measure

  4. Image quality is good enough if: • It has a resolution of 0.12345 µm/pixel • It is captured in XYZ color space/pixel depth • It has an MTF curve that looks perfect • It has a focus quality score of 123 • It has a high/wide dynamic range

  5. What is “resolution”? • Spatial resolution • Sampling period • Optical resolution • Sensor resolution • Monitor resolution • New Year’s resolution???????

  6. Optical resolution • Theoretical maximum resolution of a 0.75 NA lens is 0.41 µm; of a 1.30 NA lens, 0.23 µm. • Has NOTHING to do with magnification! (we will get to that later.)
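The slide's figures follow from the Rayleigh criterion, d = 0.61 λ / NA. A quick sketch, assuming green light at roughly 0.50 µm (an assumption, but one that reproduces the 0.41 µm and 0.23 µm numbers above):

```python
# Rayleigh criterion for optical resolution: d = 0.61 * wavelength / NA.
# Wavelength of 0.50 um (green light) is an assumed value chosen to match
# the slide's figures; it is not stated in the talk.

def rayleigh_resolution_um(na, wavelength_um=0.50):
    """Smallest resolvable separation (um) for a lens of numerical aperture `na`."""
    return 0.61 * wavelength_um / na

print(round(rayleigh_resolution_um(0.75), 2))  # 0.41
print(round(rayleigh_resolution_um(1.30), 2))  # 0.23
```

Note that magnification appears nowhere in the formula, which is the slide's point.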

  7. Depth of Field • As aperture widens • Resolution improves • Depth of field narrows • Less tissue will be in focus
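The trade-off on this slide can be made quantitative. A sketch using only the wave-optical term of the standard microscopy depth-of-field formula, DOF ≈ n·λ/NA² (the geometric/detector terms are omitted, and the 0.50 µm wavelength is an assumption):

```python
# Sketch of the resolution vs. depth-of-field trade-off.
# DOF here is only the wave-optical term, DOF ~ n * wavelength / NA^2;
# wavelength 0.50 um and the refractive indices are assumed values.

def rayleigh_resolution_um(na, wavelength_um=0.50):
    return 0.61 * wavelength_um / na          # improves linearly with NA

def wave_optical_dof_um(na, wavelength_um=0.50, n=1.0):
    return n * wavelength_um / na ** 2        # collapses quadratically with NA

# NA 1.30 requires oil immersion (n ~ 1.515); the dry lenses use n = 1.0.
for na, n in ((0.30, 1.0), (0.75, 1.0), (1.30, 1.515)):
    print(f"NA {na:.2f}: resolution {rayleigh_resolution_um(na):.2f} um, "
          f"DOF {wave_optical_dof_um(na, n=n):.2f} um")
```

Opening the aperture (raising NA) buys resolution linearly but costs depth of field quadratically, which is why less tissue stays in focus.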

  8. Image quality is good enough if: • It has a resolution of 0.12345 µm/pixel • It is captured in XYZ color space/pixel depth • It has an MTF curve that looks perfect • It has a focus quality score of 123 • It has a high/wide dynamic range

  9. Image quality is good enough if it is: • “Sharp” • “Clear” • “Crisp” • “True” • “Easy on the eyes”

  10. Image quality is good enough if it is: • “Sharp” • “Clear” • “True”

  11. Image quality is good enough if: • You can see everything you can see on a glass slide

  12. Image quality is good enough if: • I can make a diagnosis from it

  13. Image quality is good enough if: • I can make as good a diagnosis from it as I can from glass slides. • This is a concordance study • OK, but how do you measure this?!?!?!

  14. Gold standard = Another Diagnosis

  15. Concordance validation • Some intra-observer variability • Even more inter-observer variability • Order effect • “great case” effect

  16. Concordance validation • Case selection • Random, from all benches? • Enriched, with difficult cases? • Presented with only initial H&E? • Allow ordering of levels, IHC, special stains? • If so, how can you compare with the original diagnosis? • Presented with all previously ordered stains? • If so, what about diagnosis bias? • How old of a case to allow?

  17. Concordance validation • Subject selection • Subspecialists? Generalists? • Do all observers read all cases, even if they are not accustomed to reading those types of cases? • Multi-institutional study • Do observers read cases from other institutions? • Staining/cutting protocol bias

  18. Concordance validation • Measuring concordance • Force pathologist to report in discrete data elements? • This is not natural! (especially in inflammatory processes!) • What happens if 1 data element is minimally discordant? • Allow pathologist to report as they normally do? • Free text – who decides if they are concordant? How much discordance to allow? What are the criteria?

  19. Concordance study bottom line • Very difficult to do, with lots of noise • Will probably conclude that we can make equivalent diagnoses • At the end, we will have identified cases that are discordant, but what does that mean? • What caused the discordances? • Bad images? If so, what made them bad? • Familiarity with digital? • Lack of coffee?!?!?! • Still doesn’t feel like we’ve done our due diligence – what exactly are the differences between glass and digital?

  20. PERCEPTION = REALITY

  21. PERCEPTION = QUALITY “Sharp, clear, true”

  22. Psychophysics • The study of the relationship between the physical attributes of the stimulus and the psychological response of the observer

  23. What we need is -

  24. Images, image quality and observer performance: new horizons in radiology lecture. Kundel HL. Radiology. 1979 Aug;132(2):265-71

  25. Kundel on image quality • “The highest quality image is one that enables the observer to most accurately report diagnostically relevant structures and features.”

  26. Receiver Operating Characteristic (ROC) curve
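As a minimal illustration of what an ROC analysis measures: the area under the curve equals the probability that a randomly chosen abnormal case receives a higher observer-confidence score than a randomly chosen normal one (ties counting half). The scores below are invented for illustration, not data from the talk:

```python
# AUC via pairwise ranking: the fraction of (positive, negative) case pairs
# in which the positive case scores higher (ties count 0.5).

def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [0.9, 0.8, 0.7, 0.6]   # hypothetical confidence on truly abnormal slides
negatives = [0.5, 0.4, 0.7, 0.2]   # hypothetical confidence on truly normal slides

print(auc(positives, negatives))   # 0.90625
```

An AUC of 0.5 is chance performance; 1.0 is perfect separation of abnormal from normal.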

  27. Conspicuity index formula • K = f(size, contrast, edge gradient / surround complexity) • Probability of detection = f(K)
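The slide gives the conspicuity index only abstractly. The sketch below is purely hypothetical: the product form for K and the logistic link for P(detection) are illustrative choices, not Kundel's published formulation:

```python
import math

# Hypothetical conspicuity sketch. The slide states only:
#   K = f(size, contrast, edge gradient / surround complexity)
#   P(detection) = f(K)
# The concrete functional forms below are invented for illustration.

def conspicuity(size, contrast, edge_gradient, surround_complexity):
    # Larger, higher-contrast targets with sharper edges in simpler
    # surrounds are more conspicuous.
    return size * contrast * edge_gradient / surround_complexity

def p_detection(k, k0=1.0, slope=2.0):
    # Logistic link: detection probability rises smoothly with K.
    return 1.0 / (1.0 + math.exp(-slope * (k - k0)))

print(p_detection(conspicuity(2.0, 0.8, 1.5, 1.0)))  # conspicuous target: high
print(p_detection(conspicuity(0.5, 0.2, 1.0, 3.0)))  # subtle target: low
```

The point of the formalism is that detectability becomes a measurable function of image properties rather than a matter of opinion.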

  28. Kundel, 1979 • “Just as a limited alphabet generates an astonishing variety of words, an equally limited number of features may generate an equally astonishing number of pictures.”

  29. Can this apply to pathology? • What is our alphabet? MORPHOLOGY! • Red blood cells • Identify inflammation by features • Eosinophils • Plasma cells • Hyperchromasia, pleomorphism, NC ratio • Build features into microstructures and macrostructures • Put features and structures into clinical context and compare to normal context • Formulate an opinion

  30. Advantages of feature-based evaluation • Better alleviates experience bias and context bias • Can better measure inter-observer concordance • Connects pathologist-based tasks with measurable output understandable by engineers • Precedent in image interpretability (NIIRS)

  31. NIIRS 1 “Distinguish between major land use classes (agricultural, commercial, residential)”

  32. NIIRS 5 “Identify Christmas tree plantations”

  33. Disadvantages of feature-based evaluation • Doesn’t eliminate the “representative ROI” problem • Still a difficult study to do • How to select features? How many? • How to determine the gold standard? • What about features that are difficult to discretely characterize? (“hyperchromasia”, “pleomorphism”)

  34. Bottom line for validation • All of these methods must be explored as they each have their advantages and disadvantages • Technical • Diagnostic concordance • Feature vocabulary comparison

  35. Image perception - Magnification • Ratio • Microscope • Lens • Oculars • Scanner • Lens • Sensor resolution • Monitor resolution • Monitor distance

  36. Magnification at the monitor • Pixel pitch of the monitor: 270 µm, so 1 pixel = 270 µm at the monitor • 1 pixel = 10 µm at the sensor: 270 / 10 = ~27X magnification from sensor to monitor • 1 pixel = 0.25 µm at the sample: 10 / 0.25 = 40X magnification from object to sensor • 27X × 40X = ~1080X TOTAL magnification from object to monitor • This is the equivalent of a 108X objective on a microscope!!??
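The arithmetic on this slide, step by step (all pixel-pitch values are the slide's own):

```python
# Magnification chain from tissue to monitor, using the slide's numbers.
# Total magnification is the ratio of how big one pixel's footprint appears
# on the monitor to how big it is on the tissue.

monitor_pixel_um = 270.0   # pixel pitch of the monitor
sensor_pixel_um  = 10.0    # pixel pitch of the camera sensor
sample_pixel_um  = 0.25    # one pixel's footprint on the tissue

sensor_to_monitor = monitor_pixel_um / sensor_pixel_um   # 27x
object_to_sensor  = sensor_pixel_um / sample_pixel_um    # 40x
total = sensor_to_monitor * object_to_sensor             # 1080x

print(f"{sensor_to_monitor:.0f}x * {object_to_sensor:.0f}x = {total:.0f}x total")
# With 10x oculars, a conventional microscope would need a 108x objective
# to deliver the same total magnification.
```

The slide's punchline: viewing at 1:1 pixels on a typical monitor already delivers far more magnification than pathologists use at the scope.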

  37. Near point = 10” • What if the sensor were obscenely high resolution?

  38. Other things that cause bad images • Tissue detection • Focus
