
Use of a ‘Mathematical Microscope’ to Understand Radiologists’ Errors in Breast Cancer Detection



  1. Use of a ‘Mathematical Microscope’ to Understand Radiologists’ Errors in Breast Cancer Detection Claudia Mello-Thoms, MSEE, PhD Department of Biomedical Informatics University of Pittsburgh mellothomsc@upmc.edu Slide 1

  2. Motivation • Breast cancer is the most common type of non-skin cancer among women in the US, and the 2nd most deadly. • Many factors affect the correct detection of breast cancer: • low disease prevalence in screening population; • high variability in the appearance of cancer; • experience level of the radiologist; • and many others. Slide 2

  3. It has been shown that 10-30% of the cancers detected at mammography screening are visible in retrospect on prior examinations, i.e., they were initially missed. • Given that in 2007 an estimated 178,480 new breast cancers were expected to be detected in the US, and assuming that all of these cancers were found at mammography screening, this corresponds to: • 10% = 17,848 missed cancers; • 30% = 53,544 missed cancers. • For a long time it was assumed that radiologists only missed lesions that they did not visually inspect. • Eye-position tracking has been used to test this assumption. Slide 3

  4. Eye-position tracking showed that these misses could be divided into three categories: • ‘search’ errors: the lesion did not attract any visual attention; • ‘perceptual’ errors: the lesion attracted visual attention, but not long enough for object recognition to occur; • ‘decision making’ errors: the lesion attracted visual attention long enough for object recognition to occur, but was incorrectly identified as being either benign or a variation of normalcy. • Eye-position tracking studies have shown that 70% of the missed lesions attracted visual attention; thus, most errors were either ‘perceptual’ or ‘decision making’. • A model of medical image perception states that the decision to report or to dismiss a perceived finding is based upon: • the local characteristics of the finding; • a comparison strategy of the finding with selected areas of the background. Slide 4
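The three-way error taxonomy above can be sketched as a simple rule on cumulative dwell time at the lesion. This is an illustrative sketch only: the 1000 ms recognition threshold is an assumption, not a value from the study.

```python
# Sketch: classifying a missed lesion by cumulative visual dwell.
# The taxonomy (search / perceptual / decision making) is from the slide;
# the recognition-dwell threshold below is an assumed value for illustration.

RECOGNITION_DWELL_MS = 1000  # assumed dwell needed for object recognition

def classify_miss(total_dwell_ms):
    """Assign a missed (unreported) lesion to one of the three error categories."""
    if total_dwell_ms == 0:
        return "search"           # lesion never attracted visual attention
    if total_dwell_ms < RECOGNITION_DWELL_MS:
        return "perceptual"       # fixated, but too briefly for recognition
    return "decision_making"      # recognized, then incorrectly dismissed

print(classify_miss(0))      # search
print(classify_miss(400))    # perceptual
print(classify_miss(1500))   # decision_making
```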

  5. Research Questions • 1. Can we differentiate malignant breast lesions that were correctly reported from the ones that were visually inspected but not reported? • 2. Are there any differences between malignant breast lesions that lead to ‘perceptual’ errors and the ones that lead to ‘decision making’ errors? Slide 5

  6. Methods • Four MQSA-certified expert breast radiologists from the Breast Imaging Division, Department of Radiology, University of Pittsburgh, participated in this experiment. • The radiologists read a case set of 40 two-view (CC and MLO) digitized mammograms. From these: • 30 cases contained a biopsy-verified malignant breast mass; • 10 cases were lesion-free and had been stable for 2 years. Slide 6

  7. The experiment had 2 phases: • 1. Visual Search: • Eye tracking; • This phase ended once the radiologist indicated their initial impression of the case, that is, whether it was ‘normal’ or ‘abnormal’. • 2. Report and Localization: • No eye tracking; • The radiologists used a mouse-controlled cursor to mark the (x,y) locations of all malignant masses that they wished to report in each case, in both views. Slide 7

  8. Data Analysis: i) Segmentation; ii) Wavelet filtering; iii) Statistical Analyses. • i) Segmentation: • Each segmented area corresponded to 5° of visual angle (i.e., the ‘useful visual field’); in this setting, 256 x 256 pixels. • Two types of areas were segmented: 1) areas that yielded a local response; and 2) areas that were used in the radiologist’s sampling of the background. • Local Responses: • Areas that attracted visual attention and yielded marks by the observer (True Positive and False Positive decision outcomes); • Areas that contained a malignant mass that attracted visual attention but was not reported by the observer (False Negative decision outcomes). Slide 8
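The segmentation step can be sketched as follows: convert 5° of visual angle into a pixel window, then crop that window around each fixation or mark. The viewing distance and display pixel pitch below are assumptions chosen for illustration (they happen to reproduce a window close to 256 pixels); the study itself only states the resulting 256 x 256 size.

```python
import math

# Sketch: size of the 'useful visual field' (5 deg of visual angle) in pixels.
# distance_mm and pixel_pitch_mm are assumed values, not from the study.

def useful_field_pixels(deg=5.0, distance_mm=500.0, pixel_pitch_mm=0.17):
    width_mm = 2.0 * distance_mm * math.tan(math.radians(deg / 2.0))
    return round(width_mm / pixel_pitch_mm)

def segment(image, x, y, half=128):
    """Crop a (2*half) x (2*half) window centred on a fixation/mark (x, y)."""
    rows = image[max(0, y - half): y + half]
    return [row[max(0, x - half): x + half] for row in rows]

print(useful_field_pixels())  # ~257, consistent with the 256 x 256 windows

img = [[r * 10 + c for c in range(10)] for r in range(10)]
patch = segment(img, 5, 5, half=2)   # 4 x 4 window around (5, 5)
print(len(patch), len(patch[0]))
```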

  9. [Figure: examples of segmented local-response areas, labeled TP (True Positive), FP (False Positive) and FN (False Negative).] Slide 9

  10. Data Analysis: i) Segmentation; ii) Wavelet filtering; iii) Statistical Analyses. • i) Segmentation (continued): • Each segmented area corresponded to 5° of visual angle (i.e., the ‘useful visual field’); in this setting, 256 x 256 pixels. • Background Sampling: • Lesion-free areas that attracted prolonged visual dwell (>330 ms) but did not yield a mark by the observer (that is, areas that were correctly interpreted to be lesion-free). Slide 10
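The background-sampling rule above can be sketched as a filter over the eye-position record: keep fixations whose dwell exceeds 330 ms and that are not near any reported mark. The record layout and the 128-pixel "near a mark" radius (half the useful visual field) are assumptions for illustration.

```python
# Sketch: selecting background-sampling areas from an eye-position record.
# The 330 ms dwell threshold is from the slide; the data layout and the
# mark-exclusion radius are assumptions for illustration.

DWELL_THRESHOLD_MS = 330
FIELD_HALF_PX = 128  # half of the 256-pixel useful visual field

def background_samples(fixations, marks):
    """fixations: [(x, y, dwell_ms)]; marks: [(x, y)] reported by the observer."""
    def near_mark(x, y):
        return any(abs(x - mx) <= FIELD_HALF_PX and abs(y - my) <= FIELD_HALF_PX
                   for mx, my in marks)
    return [(x, y) for x, y, dwell in fixations
            if dwell > DWELL_THRESHOLD_MS and not near_mark(x, y)]

fixes = [(100, 100, 500), (900, 400, 200), (600, 700, 800)]
marks = [(610, 690)]
print(background_samples(fixes, marks))  # [(100, 100)]
```

The second fixation is dropped for short dwell, and the third because it falls on a reported (marked) area rather than the background.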

  11. [Figure: examples of background sampling areas, labeled ‘Bkg’.] Slide 11

  12. Data Analysis: i) Segmentation; ii) Wavelet filtering; iii) Statistical Analyses. [Diagram: wavelet filter bank. Each segmented area passes through a cascade of low-pass (G) and high-pass (H) filters across Levels 0-2, splitting it into spatial frequency bands sfb 1 … sfb N; the energy of each band is computed, log-transformed (le1 … leN) and normalized, yielding a scalar vector.] Slide 12
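The filter-bank pipeline in the diagram can be sketched with a Haar filter pair standing in for the study's G/H filters (an assumption; the actual wavelet, and the configuration yielding 20 bands, are not specified in the transcript). This sketch produces three detail bands per level plus one final approximation band, each reduced to a normalized log-energy scalar.

```python
import math

# Sketch: separable Haar filter bank (G = low-pass, H = high-pass) applied
# recursively to the approximation band, then log-energy per spatial
# frequency band and normalization into a scalar vector.
# The Haar pair is a stand-in for the study's actual wavelet (assumption).

def haar_1d(row):
    lo = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
    hi = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
    return lo, hi

def haar_2d(img):
    # filter rows, then columns -> LL, LH, HL, HH subbands
    lo_rows, hi_rows = zip(*(haar_1d(r) for r in img))
    def cols(mat):
        t = list(zip(*mat))
        lo, hi = zip(*(haar_1d(list(c)) for c in t))
        return [list(r) for r in zip(*lo)], [list(r) for r in zip(*hi)]
    ll, lh = cols(lo_rows)
    hl, hh = cols(hi_rows)
    return ll, lh, hl, hh

def log_energy(band):
    e = sum(v * v for row in band for v in row)
    return math.log(e + 1e-12)          # small constant avoids log(0)

def features(img, levels=2):
    feats, approx = [], img
    for _ in range(levels):
        approx, lh, hl, hh = haar_2d(approx)
        feats += [log_energy(b) for b in (lh, hl, hh)]
    feats.append(log_energy(approx))
    m = max(abs(f) for f in feats) or 1.0
    return [f / m for f in feats]       # normalized scalar vector

vec = features([[2.0] * 4 for _ in range(4)], levels=2)
print(len(vec))  # 3 detail bands per level + 1 approximation = 7
```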

  13. Data Analysis: i) Segmentation; ii) Wavelet filtering; iii) Statistical Analyses. • For the statistical analyses, the segmented areas were represented as: • Local responses: 20 scalar values representing the TPs, FPs and FNs in different spatial frequency bands (sfbs); • Background sampling: 20 scalar values representing each area of the background fixated by the radiologist for 330 ms or longer, in the same spatial frequency bands. Slide 13

  14. Results • Research Question 1. Can we differentiate malignant breast lesions that were correctly reported (TP) from the ones that were visually inspected but not reported? Slide 14

  15. Using Analysis of Variance • Differences between lesions visually inspected and correctly reported (TP) and those not reported due to: • ‘Search’ errors: 10 spatial frequency bands (sfbs); • ‘Perceptual’ errors: borderline in 2 sfbs (p = 0.0548); • ‘Decision making’ errors: none. • Differences in background sampling between lesions visually inspected and correctly reported (TP) and those not reported due to: • ‘Perceptual’ errors: none; • ‘Decision making’ errors: 10 sfbs. Slide 15
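The per-band comparisons behind these ANOVA results can be sketched as a one-way F statistic on a single spatial frequency band's log-energy across two decision-outcome groups. The sample values below are made up for illustration; they are not data from the study.

```python
# Sketch: one-way ANOVA F statistic for one spatial frequency band,
# comparing e.g. TP lesions vs. lesions missed through 'search' errors.
# The group values are hypothetical log-energy features.

def f_statistic(groups):
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_b = len(groups) - 1                  # between-groups degrees of freedom
    df_w = len(all_vals) - len(groups)      # within-groups degrees of freedom
    return (ss_between / df_b) / (ss_within / df_w)

tp = [1.2, 1.4, 1.3, 1.5]   # hypothetical log-energies in one sfb
fn = [0.6, 0.8, 0.7, 0.9]
print(f_statistic([tp, fn]))  # large F -> the band separates the groups
```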

  16. Results • Research Question 2. Are there any differences between malignant breast lesions that lead to ‘perceptual’ errors and the ones that lead to ‘decision making’ errors? Slide 16

  17. Using Analysis of Variance • Differences between lesions visually inspected but not reported due to ‘perceptual’ errors vs. those not reported due to ‘decision making’ errors: • None!! • Differences in background sampling between lesions visually inspected but not reported due to ‘perceptual’ errors vs. those not reported due to ‘decision making’ errors: • Statistically significant differences in 10 sfbs!! Slide 17

  18. Using Correlation Analysis Slide 18
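The correlation analysis can be sketched as a Pearson correlation between the scalar-vector representation of a perceived lesion and that of a background area the radiologist sampled; a drop in this correlation is what slide 22 associates with 'decision making' errors. The vectors below are made up for illustration (and shortened from the 20 values used in the study).

```python
import math

# Sketch: Pearson correlation between a lesion's feature vector and a
# background area's feature vector. Values are hypothetical.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)          # assumes neither vector is constant

lesion = [0.9, 0.7, 0.5, 0.3]       # hypothetical sfb features
bkg    = [0.8, 0.6, 0.4, 0.2]
print(round(pearson(lesion, bkg), 3))  # 1.0: identical band-to-band shape
```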

  19. Conclusions • 1. We used a ‘mathematical microscope’ to study the differences between malignant breast masses that were visually inspected and correctly reported (TP) and those that (i) did not attract any amount of visual attention (‘search errors’); (ii) did attract visual attention but not long enough for object recognition to occur (‘perceptual errors’); or (iii) attracted visual attention long enough for object recognition to occur, but were ultimately dismissed by the radiologist (‘decision making errors’). Slide 19

  20. 2. Our data suggest that most local differences existed between the representation of the cancers that were correctly reported (TP) and those that did not attract any visual attention (‘search’ errors). • 3. We have previously shown that radiologists adhere to specific error patterns, and that these error patterns are characteristic of each radiologist. • 4. Taken together, these findings suggest that local image processing, aiming to increase the conspicuity of certain lesions, may be useful, as long as it takes into account the individual variability of the different radiologists. Slide 20

  21. 5. In addition, the current results also suggest that most differences between correctly reported lesions (TP) and visually inspected but dismissed lesions (FN) can be found in the radiologists’ comparison strategy of the perceived finding with selected areas of the background. • 6. In this case, most differences were found between the TP and the ‘decision making’ errors, but also a significant number of differences appeared between the ‘perceptual’ and ‘decision making’ errors themselves. Slide 21

  22. 7. Finally, the missed lesions that were visually inspected but not reported could be completely characterized by: • ‘Perceptual’ errors: NO object recognition = NO change in search strategy • Correlation between background sampling and the lesion area: no change; • ‘Decision making’ errors: an ACTUAL decision = an ACTUAL change in search strategy • Correlation between background sampling and the lesion area: decreased. Slide 22

  23. Take Home Message • Local image processing may be useful to increase the conspicuity of certain lesions, but there is no ‘one-size-fits-all’: it needs to be tailored to each radiologist. • In the radiologist’s decision-making process about a perceived finding (‘report it?’, ‘dismiss it?’), the comparison strategy used is as important as the local characteristics of the lesion. • Often, no local differences exist that can explain why certain lesions were reported while others, virtually identical, were dismissed; however, if one looks at how the radiologist searched the parenchyma, the explanation is right there. Slide 23

  24. Thank You! Questions? Slide 24
