
Medical Imaging Informatics Lecture #10: Clinical Perspective: Single Subject Classification



  1. Medical Imaging Informatics Lecture #10: Clinical Perspective: Single Subject Classification Susanne Mueller M.D. Center for Imaging of Neurodegenerative Diseases Dept. of Radiology and Biomedical Imaging susanne.mueller@ucsf.edu

  2. Overview • Single Subject Classification/Characterization: Motivation and Problems • Bayesian Networks for Single Subject Classification/Characterization

  3. 1. Single Subject Classification/Characterization

  4. Quantitative Neuroimaging: Group Comparisons [Example group-comparison maps: Posttraumatic Stress Disorder, Temporal Lobe Epilepsy, Major Depression] • Implicit assumptions of group comparisons: • Abnormal regions are relevant for/characteristic of the disease process. • Abnormalities are present in all patients, i.e., a subject showing abnormalities with a disease-specific distribution is likely to have the disease.

  5. Quantitative Neuroimaging: Do the Assumptions Hold Up? [Example single-subject maps: Posttraumatic Stress Disorder, Temporal Lobe Epilepsy, Major Depression]

  6. Motivation • Identification of different variants and/or degrees of the disease process. • Translation into clinical application.

  7. Requirements • 1. Identification and extraction of the discriminating feature: • - Single region. • - Combination of regions. • 2. Definition of a threshold for “abnormality”. Goal: high sensitivity and specificity.

  8. Sensitivity and Specificity: Definitions I Sensitivity: probability that the test is positive if the patient indeed has the disease. P(Test positive | Patient has disease). An ideal test always detects the disease.

  9. Sensitivity and Specificity: Definitions II Specificity: probability that the test is negative if the patient does not have the disease. P(Test negative | Patient does not have disease). An ideal test detects only this disease and not some other disease or non-disease-related state.

  10. Sensitivity and Specificity Sensitivity and specificity provide information about a test result given that the patient’s disease state is known. In the clinic, however, the patient’s disease state is unknown; that is why the test was done in the first place. => positive and negative predictive value of the test

  11. Positive and Negative Predictive Value: Definition Positive predictive value (PPV): P(Patient has disease | Test positive). Negative predictive value (NPV): P(Patient does not have disease | Test negative).

  12. Example Sensitivity: 0.80 Specificity: 0.94 PPV: 0.90 NPV: 0.86
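PPV and NPV depend on the disease prevalence in the tested population, not only on sensitivity and specificity. A minimal sketch of the underlying Bayes' rule calculation, assuming a hypothetical prevalence of 0.42 (not stated on the slide) that approximately reproduces the numbers above:

```python
# Minimal sketch: PPV/NPV from sensitivity, specificity, and an assumed
# disease prevalence via Bayes' rule. The prevalence of 0.42 is a
# hypothetical value chosen to roughly match the slide's numbers.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary test."""
    p_pos_given_d = sensitivity          # P(T+ | D)
    p_pos_given_nd = 1.0 - specificity   # P(T+ | not D)
    p_pos = p_pos_given_d * prevalence + p_pos_given_nd * (1.0 - prevalence)
    ppv = p_pos_given_d * prevalence / p_pos                # P(D | T+)
    npv = specificity * (1.0 - prevalence) / (1.0 - p_pos)  # P(not D | T-)
    return ppv, npv

ppv, npv = predictive_values(0.80, 0.94, 0.42)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
# -> PPV = 0.91, NPV = 0.87 (close to the slide's 0.90 / 0.86)
```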

  13. Receiver Operating Characteristic (ROC) Curve Sensitivity and specificity are good candidates to assess test accuracy. However, they vary with the threshold (test positive/test negative) used. The ROC is a means to compare the accuracy of diagnostic tests over a range of thresholds: it plots the sensitivity vs. 1 − specificity of the test.

  14. Example: ROC [Points along an example ROC curve:] High threshold: good specificity (0.92), medium sensitivity (0.52). Medium threshold: medium specificity (0.70), medium sensitivity (0.83). Low threshold: low specificity (0.45), good sensitivity (0.95). Extremely low threshold: no specificity (0), perfect sensitivity (1).

  15. Example: ROC II [Figure: example ROC with chance line; the optimal threshold is indicated by an arrow.] The ROC of a good test approaches the upper left corner of the plot.
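A minimal sketch of how such an ROC curve can be traced empirically by sweeping the decision threshold over hypothetical patient and control scores (the scores and their distributions are assumptions, not data from the lecture):

```python
# Minimal sketch: empirical ROC from hypothetical test scores.
import numpy as np

def roc_curve(scores_disease, scores_control):
    """Trace (1 - specificity, sensitivity) pairs over all score thresholds."""
    thresholds = np.sort(np.concatenate([scores_disease, scores_control]))[::-1]
    points = []
    for t in thresholds:
        sensitivity = np.mean(scores_disease >= t)   # true positive rate
        specificity = np.mean(scores_control < t)    # true negative rate
        points.append((1.0 - specificity, sensitivity))
    return np.array(points)

rng = np.random.default_rng(0)
patients = rng.normal(1.0, 1.0, 200)   # hypothetical patient test scores
controls = rng.normal(0.0, 1.0, 200)   # hypothetical control test scores
pts = roc_curve(patients, controls)

# Area under the curve by the trapezoid rule; ~0.5 is chance, 1.0 is perfect.
auc = np.sum(np.diff(pts[:, 0]) * (pts[:, 1][1:] + pts[:, 1][:-1]) / 2)
print(f"AUC = {auc:.2f}")
```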

  16. Feature Definition: information extracted from the image. The usefulness of a feature to detect the disease is determined by: 1. Convenience of measurement. 2. Accuracy of measurement. 3. Specificity for the disease (e.g., CK-MB). 4. Number of features (single < several/feature map).

  17. Features and Thresholds used in Imaging for Single-Subject Analyses I A. Single feature = region of interest (ROI) analysis. Prior knowledge that the ROI is affected by the disease comes either from previous imaging studies or from other sources, e.g., histopathology. Approaches used to detect abnormality in ROI analyses, where x_s is the subject’s value, mean_c and SD_c the control mean and standard deviation, and n the number of controls: z-score: z = (x_s − mean_c)/SD_c. t-score*: t = (x_s − mean_c)/(SD_c · sqrt((n+1)/n)). Bayesian estimate**: z* = (x_s − mean_c)/sqrt(q). Crawford and Howell 1998*; Crawford and Garthwaite 2006**

  18. Example: ROI Analyses and Thresholds Hippocampal volumes corrected for intracranial volume obtained from T1 images of 49 age-matched healthy controls (mean: 3.92 ± 0.60) and the hippocampal volume of a patient with medial temporal lobe epilepsy (3.29). z-score: −1.05, corresponding to a one-tailed p = 0.147. t-score: −1.04, corresponding to a one-tailed p = 0.152. Bayesian one-tailed probability: 0.152, i.e., 15% of the control hippocampal volumes fall below the patient’s volume.
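A minimal sketch reproducing the z- and t-statistics above, using the control summary values from the slide (the use of scipy for the p-values is an assumption of this sketch, not part of the lecture):

```python
# Minimal sketch: single-subject ROI statistics for the slide's example.
from math import sqrt
from scipy import stats

n, mean_c, sd_c = 49, 3.92, 0.60   # control sample (from the slide)
x_s = 3.29                         # patient's hippocampal volume

z = (x_s - mean_c) / sd_c
p_z = stats.norm.cdf(z)            # one-tailed p from the normal distribution

# Crawford & Howell (1998): treat the single case as a sample of one.
t = (x_s - mean_c) / (sd_c * sqrt((n + 1) / n))
p_t = stats.t.cdf(t, df=n - 1)     # one-tailed p with n-1 degrees of freedom

print(f"z = {z:.2f} (p = {p_z:.3f}), t = {t:.2f} (p = {p_t:.3f})")
# -> z = -1.05 (p = 0.147), t = -1.04 (p = 0.152)
```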

  19. Features and Thresholds used in Imaging for Single-Subject Analyses II B. Multiple features from the same source = map that encodes the severity and distribution of the disease-associated abnormality. Prior knowledge about the distribution/severity of the abnormalities is not mandatory to generate an “abnormality” map, i.e., typically a whole-brain search strategy is employed. However, prior knowledge can be helpful for the correct interpretation. Approaches used to generate abnormality maps: z-score maps (continuous or thresholded); single-case modification of the General Linear Model used for group analyses.
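A minimal sketch of a continuous and a thresholded voxel-wise z-score map; the array shapes and intensity values are hypothetical stand-ins for spatially normalized volumes:

```python
# Minimal sketch: voxel-wise z-score abnormality map from a single subject
# compared to a stack of control images (all arrays hypothetical).
import numpy as np

rng = np.random.default_rng(1)
controls = rng.normal(100.0, 10.0, size=(30, 64, 64, 64))  # 30 control volumes
subject = rng.normal(100.0, 10.0, size=(64, 64, 64))       # single subject

mean_c = controls.mean(axis=0)            # voxel-wise control mean
sd_c = controls.std(axis=0, ddof=1)       # voxel-wise control SD

z_map = (subject - mean_c) / sd_c         # continuous z-score map
abnormal = z_map < -2.0                   # thresholded map, e.g. z < -2
print(f"{abnormal.mean():.1%} of voxels flagged as abnormal")
```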

  20. Features and Thresholds used in Imaging for Single-Subject Analyses III • Problems: • 1. Differences may reflect normal individual variability rather than disease effects. • 2. Assumption that the single subject represents the mean of a hypothetical population with the same variance as observed in the control group. • 3. Higher numbers of comparisons (multiple ROIs/voxel-wise) require: • a. Correction for multiple comparisons. • b. Adjustment of the result at the ROI/voxel level for results in the immediate neighborhood, e.g., correction at the cluster level. • 4. Interpretation of the resulting maps.

  21. Influence of Correction for Multiple Comparisons [Figure: single-subject maps of increases and decreases thresholded at FWE p < 0.05, p < 0.01, and p < 0.001.] Scarpazza et al. Neuroimage 2013; 70: 175-188

  22. Interpretation of Single Subject Maps • Potential strategies for map interpretation: • Visual inspection using knowledge about the typical distribution of abnormalities in group comparisons. • Quantitative comparison with known abnormalities from group comparisons, e.g., calculation of the Dice coefficient for the whole map (see the sketch below). • Problems: • Requires the existence of a “disease-typical pattern”. • Requires selection of a “threshold” indicating whether or not a map matches the typical pattern. • Severe abnormalities that do not match the typical pattern are difficult to interpret. Atypical representation? A different disease?
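A minimal sketch of the Dice coefficient mentioned above, computed between a subject's binarized abnormality map and a hypothetical group-level "disease-typical" pattern:

```python
# Minimal sketch: Dice coefficient between two binary maps of equal shape.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A and B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical overlapping cubes standing in for real abnormality maps.
subject_map = np.zeros((64, 64, 64), dtype=bool)
group_map = np.zeros((64, 64, 64), dtype=bool)
subject_map[20:40, 20:40, 20:40] = True
group_map[25:45, 25:45, 25:45] = True
print(f"Dice = {dice(subject_map, group_map):.2f}")
```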

  23. Examples Gray matter loss in TLE compared to controls

  24. 2. Bayesian Networks for Single Subject Classification/Characterization

  25. Characteristics of an Ideal Classification System • Uses non-parametric, non-linear statistics. • Identifies characteristic severe and mild brain abnormalities distinguishing between two groups based on their spatial proximity and the strength of their association with a clinical variable (e.g., group membership). • Weights abnormal regions according to their ability to discriminate between the two groups. • Provides a probability of group membership and an objective threshold based on the congruence of individual abnormalities with group-specific abnormalities. • Uses expert a priori knowledge to combine information from different sources (other imaging modalities, clinical information) for the determination of the final group membership.

  26. Bayesian Networks: Basics Definition: probabilistic graphical model defined as B = (G, Q). G is a directed acyclic graph (DAG) defined as G = (n, e), where n represents the set of nodes in the network and e the set of directed edges that describe the probabilistic associations among the nodes. Q is the set of all conditional probability distributions over the states q that the nodes in the network can assume.

  27. Bayesian Networks: Basics: Simple Network [Figure: DAG G with two nodes, Event A → Event B, and the corresponding joint probability distribution Q.]
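A minimal sketch of such a two-node network in code; the probability values are hypothetical, and the joint distribution factorizes as P(A, B) = P(A) · P(B | A):

```python
# Minimal sketch: two-node Bayesian network (A -> B) as conditional
# probability tables; all numbers are hypothetical.
p_a = {True: 0.3, False: 0.7}                      # P(A)
p_b_given_a = {True: {True: 0.9, False: 0.1},      # P(B | A=true)
               False: {True: 0.2, False: 0.8}}     # P(B | A=false)

def joint(a: bool, b: bool) -> float:
    """Joint probability via the chain rule of the DAG."""
    return p_a[a] * p_b_given_a[a][b]

# Marginal of B obtained by summing A out of the joint:
p_b_true = sum(joint(a, True) for a in (True, False))
print(f"P(B=true) = {p_b_true:.2f}")   # 0.3*0.9 + 0.7*0.2 = 0.41
```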

  28. Bayesian Networks: Basics: Slightly More Complex Network [Figure: DAG with three nodes, Event A, Event B, and Event C.]

  29. Bayesian Networks: Basics: It is Getting More Complicated [Figure: six-node DAG with nodes A, B, C, D, E, F.] Markovian assumptions of the DAG: I(V, Parents(V), Non-Descendants), where V is any variable in the DAG, i.e., every variable is independent of its non-descendants given its parents.

  30. Bayesian Networks: Basics: It is Getting More Complicated [Figure: five-node DAG with nodes A, B, C, D, E.]

  31. Bayesian Networks: Basics: It is Getting More Complicated [Figure: the same five-node DAG, continued.]

  32. Bayesian Networks: Inference I: Probability of Evidence Query [Figure: five-node network (A, B, C, D, E) with two nodes observed as true; the probability of this evidence is 0.30.]

  33. Bayesian Networks: Inference II: Prior and Posterior Marginal Query Definition: marginal = projection of the joint distribution onto a smaller set of variables. If the joint probability distribution is Pr(x1,…,xn), then the marginal distribution Pr(x1,…,xm), m ≤ n, is defined as: Pr(x1,…,xm) = Σ over xm+1,…,xn of Pr(x1,…,xn). [Figure: five-node network (A, B, C, D, E) showing each node’s prior marginal and its posterior marginal after evidence (one node observed as true).]
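A minimal sketch of a posterior marginal query by enumeration, reusing the hypothetical two-node network from the earlier sketch: observing B = true updates the prior marginal of A via Bayes' rule:

```python
# Minimal sketch: posterior marginal by conditioning on evidence and
# renormalizing (enumeration over the hypothetical A -> B network).
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def joint(a, b):
    return p_a[a] * p_b_given_a[a][b]

evidence_b = True
p_evidence = sum(joint(a, evidence_b) for a in (True, False))
posterior_a = {a: joint(a, evidence_b) / p_evidence for a in (True, False)}
print(f"prior P(A=true) = {p_a[True]:.2f}, "
      f"posterior P(A=true | B=true) = {posterior_a[True]:.2f}")  # 0.30 -> 0.66
```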

  34. Bayesian Networks: Inference III: Most Probable Explanation (MPE) and Maximum a Posteriori Hypothesis (MAP) Definition: MPE = given evidence for one network variable, the instantiation of all other network variables for which the probability of the given variable is maximal. MAP = given evidence for one network variable, the instantiation of a subset of the network variables for which the probability of the given variable is maximal. [Figure: five-node network with evidence D = true.]

  35. Bayesian Networks: Inference IV: Different algorithms have been developed to update the remaining network after observation of other network variables. Examples for exact inference algorithms: Variable or factor elimination Recursive conditioning Clique tree propagation Belief propagation Examples for approximate inference algorithms: Generalized belief propagation Loopy belief propagation Importance sampling Mini-bucket elimination
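As a toy illustration of the sampling-based family, here is a simple forward-sampling-with-rejection scheme (a crude relative of the importance-sampling methods listed above, not one of the named algorithms), again on the hypothetical two-node network:

```python
# Minimal sketch: approximate inference by forward sampling with rejection
# on the hypothetical A -> B network from the earlier sketches.
import random

random.seed(0)
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: 0.9, False: 0.2}   # P(B=true | A)

accepted = kept_true = 0
for _ in range(100_000):
    a = random.random() < p_a[True]          # sample A from its prior
    b = random.random() < p_b_given_a[a]     # sample B given A
    if b:                                    # keep only samples matching B=true
        accepted += 1
        kept_true += a
print(f"P(A=true | B=true) ~ {kept_true / accepted:.2f}")  # ~0.66
```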

  36. Bayesian Networks: Learning I: Parameter/Structure [Figure: five-node DAG (A, B, C, D, E) whose structure (edges) and parameters (conditional probabilities) must be learned.]

  37. Bayesian Networks: Learning II: Parameter Learning 1. Expert knowledge. 2. Data driven: a. Maximum likelihood (complete data). b. Expectation maximization (incomplete data). c. Bayesian approach. Structure Learning 1. Expert knowledge. 2. Data driven: a. Local search approach. b. Constraint-based approach: greedy search (K2, K3), optimal search. c. Bayesian approach.
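A minimal sketch of maximum-likelihood parameter learning from complete data: with the structure fixed (here the hypothetical A → B network), each conditional probability table entry is just a conditional relative frequency:

```python
# Minimal sketch: maximum-likelihood CPT estimation from complete
# (hypothetical) data for the fixed structure A -> B.
from collections import Counter

data = [(True, True), (True, True), (True, False),
        (False, True), (False, False), (False, False)]  # (a, b) cases

counts = Counter(data)                       # joint (a, b) counts
a_counts = Counter(a for a, _ in data)       # marginal counts of A

p_a_true = a_counts[True] / len(data)
p_b_given_a = {a: counts[(a, True)] / a_counts[a] for a in (True, False)}
print(f"P(A=true) = {p_a_true:.2f}")
print(f"P(B=true | A=true) = {p_b_given_a[True]:.2f}, "
      f"P(B=true | A=false) = {p_b_given_a[False]:.2f}")
```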

  38. Bayesian Networks: Application to Image Analysis? YES 1. Identification of features distinguishing between groups. 2. Combination of different distinguishing imaging features, e.g., volumetric and functional imaging.

  39. Bayesian Network: Basics: Feature Identification I Characterization of the problem: 1. Parameter and structure learning (preparatory steps, structure learning, parameter learning). a. Representative training data set. b. Information reduction. c. Definition of network nodes. d. Definition of possible node states. e. Calculation of the strength of association between image feature and variable of interest. 2. Network query. a. Calculation of group affiliation based on concordance with the feature set that was identified during the learning process.

  40. Bayesian Network: Basics: Feature Identification II GAMMA: Graphical Model-Based Morphometric Analysis* *Chen R, Herskovits E. IEEE Transactions on Medical Imaging 2005; 24: 1237-1248

  41. GAMMA: Preparatory Steps I 1. Identification of the training set: images of patients and controls, or of subjects with and without the functional variable of interest for the Bayesian network. The set must be representative of the population, i.e., encompass the variability typically found in each of the populations.

  42. GAMMA: Preparatory Steps II 2. Data Reduction. Use of prior knowledge regarding the nature of the feature, e.g., reduction of the information in the image to regions with relative volume loss if the disease is associated with atrophy. Creation of binary images: each individual image is compared to a mean image, and voxels with intensities below a predefined threshold, e.g., 1 SD below the control mean, are set to 1; all other voxels are set to 0.

  43. GAMMA: Preparatory Steps II: Data Reduction [Figure: original and binarized (1 SD below mean) images of a control and a patient, together with the control mean and SD images.] Each binary map can be represented as {F, V1, V2, V3, …, Vm}, where F represents the state, i.e., patient or control, and Vi represents the voxel at location i. Given this definition, a voxel Vi with the value 1 means that there is volume loss at location i. The choice of images used to generate the mean/SD image and of the threshold for binarization is crucial for performance (see the sketch below).
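A minimal sketch of this binarization step, assuming hypothetical registered image arrays in place of real gray matter maps:

```python
# Minimal sketch of the binarization described above: voxels more than
# 1 SD below the control mean are set to 1 (volume loss), others to 0.
import numpy as np

rng = np.random.default_rng(2)
controls = rng.normal(0.5, 0.1, size=(20, 32, 32, 32))  # 20 control images
image = rng.normal(0.5, 0.1, size=(32, 32, 32))         # one subject image

mean_img = controls.mean(axis=0)          # voxel-wise mean image
sd_img = controls.std(axis=0, ddof=1)     # voxel-wise SD image

binary_map = (image < mean_img - 1.0 * sd_img).astype(np.uint8)
print(f"{binary_map.mean():.1%} of voxels marked as volume loss")
```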

  44. GAMMA: Structure Learning Theoretical Steps 1. Generate a Bayesian network that identifies the probabilistic relationships among {Vi} and F. 2. Generate cluster(s) of representative voxels (R; output: label map) such that all voxels in a cluster have similar probabilistic associations with F (output: belief map). All clusters are independent of each other, and each cluster corresponds to a node.

  45. GAMMA: Structure Learning Practical I Step 1. a. Definition of the search space V, e.g., all voxels where at least one subject has a value that differs from every other subject’s value for that voxel. b. Identification of the first search-space voxel(s) that provide optimal distinction between states F, e.g., all controls 0, all patients 1. Assign the voxel to the putative group of representative voxels A.

  46. GAMMA: Structure Learning Practical I [Figure, 1st iteration: Group A (n = 10, “Controls”) and Group B (n = 10, “Patients”); the disease is characterized by atrophy, i.e., “1” voxels compared to controls; the search space and the representative voxels are highlighted.]

  47. GAMMA: Structure Learning Practical II Step 1 cont. c. Identification of voxel(s) whose addition to A increases the ability of A to correctly distinguish between states F. The process is repeated until no voxel is left that fulfills that condition. d. Identification of all those voxels Rn in A that maximize the distinction between states F. The Rn of the first iteration corresponds to R (the Rn of later iterations are added to R). Voxels belonging to Rn are removed from the search space V.

  48. GAMMA: Structure Learning Practical II

  49. GAMMA: Structure Learning Practical III Step 2 (iteration 2 and higher). a. Calculation of the similarity s between the voxels in A and the voxels in Rn-1. The similarity s for one voxel Vi in A is defined as s(Vi, Rn-1) = P(Vi = 1, Rn-1 = 1) + P(Vi = 0, Rn-1 = 0). The similarity for all n voxels in A is expressed as a similarity map S = {s(V1, Rn-1), s(V2, Rn-1), …, s(Vn, Rn-1)} (see the sketch below).
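A minimal sketch of this similarity measure, with hypothetical binary vectors (one entry per subject) standing in for voxel values:

```python
# Minimal sketch of s(V, R) = P(V=1, R=1) + P(V=0, R=0): the empirical
# probability that a candidate voxel agrees with the representative voxel
# across subjects (all vectors hypothetical).
import numpy as np

def similarity(v: np.ndarray, r: np.ndarray) -> float:
    """Estimate s(V, R) from the sample as the fraction of agreeing subjects."""
    return np.mean((v == 1) & (r == 1)) + np.mean((v == 0) & (r == 0))

r = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # representative voxel
v1 = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])  # candidate voxel, similar
v2 = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 1])  # candidate voxel, dissimilar
print(similarity(v1, r), similarity(v2, r))    # 0.8 vs. 0.3
```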

  50. GAMMA: Structure Learning Practical III
