
Basics of fMRI Inference


Presentation Transcript


  1. Basics of fMRI Inference Douglas N. Greve

  2. fMRI Analysis Over-review • Each subject's Raw Data passes through Preprocessing (MC, STC, B0, Smoothing, Normalization) and a First-Level GLM Analysis (design matrix X, contrast C) • The first-level results from all subjects feed a Higher-Level GLM • Data Reduction: 1 value per subject per voxel, then 1 value per voxel • Need one more reduction to one number: “Thumbs up” or “Thumbs down”

  3. Overview • False Positives and False Negatives • Problem of Multiple Comparisons • Bonferroni Correction • Cluster Correction (voxel-wise threshold) • False Discovery Rate • Selection Bias

  4. Truth Table (Conclusion vs. Reality) • Conclude Effect when Reality has an Effect: True Positive • Conclude Effect when Reality has No Effect: False Positive • Conclude No Effect when Reality has an Effect: False Negative • Conclude No Effect when Reality has No Effect: True Negative

  5. Error Rates (Conclusion vs. Reality) • False Positive Rate (FPR, α) – probability that you declare an effect to be present when there is no effect • False Negative Rate (FNR, β) – probability that you declare no effect to be present when there is an effect

  6. Noise Causes Uncertainty [Figure: example noisy time courses for Voxel 1 and Voxel 2]

  7. GLM Inference [Figure: GLM fits yielding T=8 vs. T=1]

  8. False Positive Rate and the Student’s t Distribution • The “NULL” distribution of the test statistic is Student’s t distribution • Noise assumptions: Gaussian, independent, homoskedastic (equal variances) • The p-value (FPR) is the area under the curve to the right of T • For T = 3.4, FPR = p = .01 • For T = 8, FPR = p = 10^-11 • For T = 1, FPR = p = 0.1 • Violations of the assumptions change the FPR (see the sketch below)
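
A minimal sketch of the p-value computation described above, using scipy's Student's t distribution; the T values and the 20 degrees of freedom are hypothetical choices for illustration only.

```python
from scipy import stats

dof = 20  # hypothetical degrees of freedom (depends on the actual design)
for T in (1.0, 3.4, 8.0):
    # one-tailed p-value: area under the t distribution to the right of T
    p = stats.t.sf(T, df=dof)
    print(f"T = {T:4.1f}  ->  p = {p:.3g}")
```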

  9. What does False Positive Rate Mean? FPR=.10

  10. False Negative Rate • Need to know what the effect size is • How big will the signal be? • How big will the noise be? • Previous data • Guess

  11. Trade-Off of Error Rates [Figure: thresholded maps at FPR = .10, .01, 10^-7] • Inverse relationship between the error rates: as False Positives (α) are reduced, False Negatives (β) increase • Increasing the sample size decreases β • Which error is more important? Depends… • Science? FPR ≈ .05, FNR (β) < 0.2, TPR (1-β) > 0.8 • Pre-operative surgery?

  12. Power Analysis • Given any 3 of the following, you can compute the 4th: • Desired False Positive Rate (α, usually .05) • Desired False Negative Rate (β, usually .20) • Number of subjects • Effect size, i.e., the ratio of Signal (e.g., Angry-vs-Neutral, Schizophrenia-vs-Normal) to Noise, obtained from previous data or a guess • Grants require a power analysis! (see the sketch below)
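
A minimal power-analysis sketch, assuming a two-sample t-test and the statsmodels package; the effect size of 0.8 is a hypothetical stand-in for the signal-to-noise ratio estimated from previous data or a guess.

```python
from statsmodels.stats.power import TTestIndPower

# Given alpha, power (1 - beta), and effect size, solve for the number of subjects per group
n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,  # hypothetical signal/noise ratio (Cohen's d)
    alpha=0.05,       # desired False Positive Rate
    power=0.80,       # desired 1 - False Negative Rate (1 - beta)
    ratio=1.0,        # equal group sizes
)
print(round(n_per_group))  # about 26 subjects per group under these assumptions
```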

  13. Voxel-wise vs. “Family-wise” Error Rate [Figure: Rand(0,1) image, 100x100 = 10,000 voxels, thresholded at p < 0.1 (~1000 vox), p < 0.01 (~100 vox), p < 0.001 (~10 vox)] • A voxel-wise p < .01 means one expects 1% of voxels to be active purely by chance • What if you say that if even a single voxel has p < .01, you declare a “thumbs up”? • What is the probability that at least one voxel has p < .01? (see the sketch below)
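
A small simulation in the spirit of the slide, assuming independent voxels: a 100x100 image of Uniform(0,1) "p-values" under the null is thresholded at several voxel-wise levels, and the probability that at least one of the 10,000 voxels falls below p < .01 is computed.

```python
import numpy as np

rng = np.random.default_rng(0)
pvals = rng.uniform(size=(100, 100))  # under the null, p-values are Uniform(0,1)

for a in (0.1, 0.01, 0.001):
    print(f"p < {a}: {(pvals < a).sum()} voxels survive")  # roughly 1000, 100, 10

N = pvals.size                      # 10,000 voxels
alpha_vox = 0.01
p_any = 1 - (1 - alpha_vox) ** N    # P(at least one false positive in the whole image)
print(f"P(at least one voxel with p < {alpha_vox}) = {p_any}")  # essentially 1
```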

  14. The “Problem of Multiple Comparisons” (N = 10,000) • αVox = voxel-wise threshold (p < αVox) • αFWE = “Thumbs up” False Positive Rate (FWE = Family-Wise Error) • N = number of voxels (“Search Space”) • [Figure: αFWE as a function of N for αVox = .10, .01, 10^-7]

  15. Bonferroni Correction • Compute the voxel-wise threshold needed to achieve a desired family-wise FPR: αVox = αFWE / N • To achieve αFWE = 0.01 with N = 10,000 voxels, need αVox = 0.000001 (10^-6)
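
The same arithmetic written out as a minimal sketch, using the slide's numbers:

```python
alpha_fwe = 0.01           # desired family-wise false positive rate
N = 10_000                 # number of voxels in the search space
alpha_vox = alpha_fwe / N  # Bonferroni voxel-wise threshold
print(alpha_vox)           # 1e-06
```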

  16. Search Space • The set of voxels over which positives are searched • The severity of the correction increases with the size of the search space (regardless of method) • Ways to reduce the search space: • Restrict the analysis to an ROI (e.g., superior temporal gyrus) • Increase the voxel size (cover the same volume with fewer voxels) • Spatial smoothing

  17. Spatial Smoothing • Spatially convolve the image with a Gaussian kernel (the kernel sums to 1) • Full-Width/Half-Max: FWHM = σ·sqrt(8·ln(2)) = σ·sqrt(ln(256)) ≈ 2.355·σ, where σ = standard deviation of the Gaussian (equivalently σ = FWHM/2.355) • [Figure: Gaussian kernel showing Full Max and Half Max; maps smoothed at 0, 2, 5, and 10 mm FWHM] • Smoothing causes irreversible loss of information/resolution (see the sketch below)
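
A minimal smoothing sketch, assuming scipy is available; the volume shape, voxel size, and FWHM are hypothetical. The FWHM-to-sigma conversion follows the relation above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(vol, fwhm_mm, voxsize_mm):
    # FWHM = sigma * sqrt(8 * ln 2) ~= 2.355 * sigma, so sigma = FWHM / 2.355
    sigma_vox = (fwhm_mm / np.sqrt(8 * np.log(2))) / voxsize_mm
    return gaussian_filter(vol, sigma=sigma_vox)  # Gaussian kernel sums to 1

vol = np.random.default_rng(1).standard_normal((64, 64, 32))  # hypothetical volume
smoothed = smooth_fwhm(vol, fwhm_mm=5.0, voxsize_mm=2.0)
```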

  18. Spatial Smoothing vs. Voxel Size [Figure: smoothing at 0, 5, and 10 mm compared with voxel size increased to 1, 4, and 8 mm] • Smoothing causes irreversible loss of information (resolution), similar to increasing the voxel size.

  19. Resel • Pixel = picture element; Voxel = volume element; Resel = resolution element (depends on the smoothing level) • Resel = (FWHM)^3 for volumes; Resel = (FWHM)^2 for surfaces • If FWHM > voxel size, there are fewer Resels than Voxels • Correct based on the number of Resels instead of the number of Voxels, as in Bonferroni (the math is more complicated and needs Random Field Theory) • (see the worked example below)
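
A worked example of the resel count, using hypothetical numbers (10,000 voxels of 2 mm isotropic, smoothed to 8 mm FWHM):

```python
n_vox, voxsize_mm, fwhm_mm = 10_000, 2.0, 8.0
search_volume_mm3 = n_vox * voxsize_mm ** 3  # 80,000 mm^3 of search space
resels = search_volume_mm3 / fwhm_mm ** 3    # Resel = FWHM^3 for volumes
print(resels)                                # ~156 resels, far fewer than 10,000 voxels
```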

  20. Clusters [Figure: thresholded maps at αVox = .10, .01, 10^-7] • True signal tends to be clustered • False Positives tend to be randomly distributed in space • Cluster – a set of spatially contiguous voxels that are above a given threshold (see the sketch below)
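
A minimal sketch of cluster formation on a 2D map, assuming scipy; the Z map here is pure noise and the cluster-forming threshold of 2.3 matches the next slide.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(2)
zmap = rng.standard_normal((100, 100))
mask = zmap > 2.3                        # cluster-forming threshold
labels, n_clusters = label(mask)         # groups of spatially contiguous supra-threshold voxels
sizes = np.bincount(labels.ravel())[1:]  # size of each cluster (index 0 is background)
print(n_clusters, sizes.max() if n_clusters else 0)
```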

  21. Smoothing Increases the Size of Random Clusters [Figure: Z maps thresholded at Z > 2.3 (p < .01) for FWHM 0, 2, 4, and 6]

  22. Gaussian Random Field Theory • αFWE = f(αVox, N, FWHM, ClusterSize) • αVox – voxel-wise, cluster-forming threshold • N – search space • FWHM – smoothing level • ClusterSize – size of the cluster to be tested • αFWE – cluster p-value

  23. Cluster Images [Figure: significance map thresholded at pVox < .001 next to a cluster map thresholded at pCluster < .05] • Some small clusters do not “survive”

  24. Cluster Table [Figure: cluster table with ROI atlas labels; R/L shown in radiological orientation]

  25. Cluster Correction Summary • Cluster – a set of supra-threshold voxels (with a size) • The critical size threshold is given by Random Field Theory and depends on: • Search Space • Voxel-wise threshold (arbitrary) • FWHM (smoothing level) • Assumptions on each • Lose small clusters (False Negatives)

  26. Cluster Data Extraction • Spatial average over the cluster of each subject’s contrast • Can correlate with other measures (age, test score, etc.) (see the sketch below)
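
A minimal sketch of the extraction, using hypothetical arrays: per-subject first-level contrast maps and a boolean mask defining one group-level cluster.

```python
import numpy as np

rng = np.random.default_rng(3)
contrast_maps = rng.standard_normal((20, 64, 64, 32))   # hypothetical: 20 subjects
cluster_mask = rng.standard_normal((64, 64, 32)) > 2.0  # hypothetical cluster mask

# one number per subject: the spatial average of the contrast over the cluster
per_subject = contrast_maps[:, cluster_mask].mean(axis=1)
# per_subject can now be correlated with age, test score, etc.
# (but not re-tested on the contrast that defined the cluster; see the next slide)
```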

  27. Selection Bias: Cluster Data Extraction • “Voodoo Correlations”: running a “circular” test on the extracted data • E.g., if the cluster represents a voxel-wise test of AD-vs-Normal, then you cannot perform an AD-vs-Normal test on the extracted data • If you do: p-values will be much too significant and will not reflect the actual false positive rate, and correlation coefficients will be much too high • Subtle and easy to do • Vul, Edward, et al. "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition." Perspectives on Psychological Science 4.3 (2009)

  28. Permutation: Recall the Two-Group GLM Analysis [Design matrix: a column of 1s for Group 1 rows, a column of 1s for Group 2 rows; parameters βG1, βG2] • Does Group 1 differ from Group 2? • C = [1 -1], Contrast = C·β = βG1 - βG2 • Compute T from a t-test • The t-test assumes: Gaussian, independent, homoscedastic noise • If not, then the p-values are not accurate

  29. Permutation • Under the NULL, the group labelings in the design matrix are irrelevant • 1. Permute the rows of the design matrix • 2. Run the analysis • 3. Compute the simulation test statistic Ts • 4. Go back to step 1 • Repeat a large number of times (~10k), giving 10k values of Ts • Analyze your true data and compute the test statistic T • αFWE ≈ how often Ts ≥ T (see the sketch below)
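
A minimal permutation sketch for a two-group comparison at a single voxel, assuming numpy only; the group sizes, the 0.8 effect, and the pooled-variance t statistic are hypothetical illustrations of the recipe above. For a whole image, the maximum statistic over voxels is typically kept at each permutation so that the resulting p-value is family-wise.

```python
import numpy as np

rng = np.random.default_rng(0)
g1 = rng.standard_normal(10) + 0.8  # hypothetical Group 1 values at one voxel
g2 = rng.standard_normal(10)        # hypothetical Group 2 values

def tstat(a, b):
    # two-sample t statistic with pooled variance
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

T = tstat(g1, g2)                    # statistic from the true labeling
pooled = np.concatenate([g1, g2])
Ts = np.empty(10_000)
for i in range(Ts.size):             # relabel under the null and recompute
    perm = rng.permutation(pooled)
    Ts[i] = tstat(perm[:10], perm[10:])
p_perm = (np.sum(Ts >= T) + 1) / (Ts.size + 1)  # how often Ts >= T
print(T, p_perm)
```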

  30. False Discovery Rate (FDR) [Figure: thresholded maps at p < 0.1 (~1000 vox), p < 0.01 (~100 vox), p < 0.001 (~10 vox)] • Given the voxel-wise threshold, you know the expected number of False Positives • If there are more Positives than this, then some of them must be True Positives

  31. False Discovery Rate (FDR) • Expected number of False Positives = N·αVox • Total number of Positives = count from the image • αVox = f(FDR, N, Data) (see the sketch below)
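
One common way to obtain the data-dependent voxel-wise threshold is the Benjamini-Hochberg procedure; the sketch below is a minimal illustration with simulated p-values, not the specific algorithm used by any particular package.

```python
import numpy as np

def fdr_threshold(pvals, q=0.05):
    # Benjamini-Hochberg: the largest p(k) (sorted ascending) with p(k) <= q * k / N
    p = np.sort(np.ravel(pvals))
    N = p.size
    below = p <= q * np.arange(1, N + 1) / N
    return p[below].max() if below.any() else 0.0

rng = np.random.default_rng(4)
pvals = np.concatenate([rng.uniform(size=9_000),         # null voxels
                        rng.uniform(size=1_000) * 1e-3]) # "active" voxels with small p-values
print(fdr_threshold(pvals, q=0.05))  # voxel-wise threshold alpha_vox for FDR = .05 on these data
```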

  32. False Discovery Rate (FDR) • FDR = .05 means that 5% of the Positives are False Positives • Which 5%? No one knows • How to interpret? [Figure: maps at FDR = .05 (αVox = .0070) and FDR = .01 (αVox = .00080)]

  33. False Discovery Rate (FDR) • FDR = .05 means that 5% of the Positives are False Positives • Which 5%? No one knows • How to interpret? Would you change your opinion of this blob if 50 of the voxels were False Positives? [Figure: maps at FDR = .05 (αVox = .0070) and FDR = .01 (αVox = .00080)]


  35. False Discovery Rate Summary • Controls False Discoveries • FDR does not control the FPR (False Positive Rate) • Be careful when interpreting • The voxel-wise threshold is data dependent

  36. Summary • Final data reduction: thumbs up or thumbs down • Truth Table: False Positives (α) and False Negatives (β) • Trade-off in error rates • Problem of Multiple Comparisons (Family-wise Error) • Search Space and search-space reduction: larger voxels (less resolution), smoothing (Resels) • Bonferroni Correction • Cluster Correction (voxel-wise threshold) • Permutation (combine with cluster-wise) • False Discovery Rate (FDR) • Selection Bias – Voodoo Correlations
