
Random Field Theory


Presentation Transcript


  1. Random Field Theory. Will Penny, SPM short course, London, May 2005. David Carmichael, MfD 2006.

  2. [Figure: the SPM analysis pipeline. Image data pass through realignment & motion correction, normalisation and smoothing; the General Linear Model (design matrix, kernel) is fitted to give parameter estimates and a statistic image; Random Field Theory converts the Statistical Parametric Map into corrected p-values, displayed on an anatomical reference.]

  3. Overview: 1. Terminology 2. Random Field Theory 3. Cluster level inference 4. SPM Results 5. FDR

  4. Overview: 1. Terminology 2. Random Field Theory 3. Cluster level inference 4. SPM Results 5. FDR

  5. Inference at a single voxel. NULL hypothesis, H: activation is zero. The test statistic t = (effect size)/std(effect size) follows a t-distribution under H. α = p(t > u | H) is the p-value: the probability of getting a value of t at least as extreme as the threshold u (e.g. u = 2). If α is small we reject the null hypothesis.
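
A minimal sketch of this computation (not part of the original slides): the single-voxel p-value is the upper tail of the t-distribution at the threshold u. The degrees of freedom here are an assumed value for illustration; in practice they come from the GLM error degrees of freedom.

```python
# Single-voxel p-value alpha = P(t > u | H) for a t-distribution.
from scipy import stats

u = 2.0                     # voxel-wise threshold on the t statistic (as on the slide)
df = 20                     # assumed error degrees of freedom (illustrative)
alpha = stats.t.sf(u, df)   # survival function = P(t > u)
print(f"P(t > {u} | H) = {alpha:.3f}")   # ~0.030 for df=20
```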

  6. Sensitivity and Specificity. TRUTH versus ACTION (Don't Reject / Reject): if H is true, a non-rejection is a True Negative (TN) and a rejection is a False Positive (FP); if H is false, a non-rejection is a False Negative (FN) and a rejection is a True Positive (TP). Specificity = TN/(# H True) = TN/(TN+FP) = 1 − α. Sensitivity = TP/(# H False) = TP/(TP+FN) = power. α = FP/(# H True) = FP/(TN+FP) = false positive rate = significance level (the p-value threshold).
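
The same quantities can be computed directly from the counts in the 2x2 decision table; the numbers below are the u1 example from the next slide.

```python
# Specificity, sensitivity and alpha from a 2x2 decision table (u1 example: TN=7, FP=3, FN=0, TP=10).
TN, FP, FN, TP = 7, 3, 0, 10

specificity = TN / (TN + FP)   # fraction of truly null regions not rejected
sensitivity = TP / (TP + FN)   # fraction of truly active regions rejected (power)
alpha = FP / (TN + FP)         # observed false positive rate = 1 - specificity

print(specificity, sensitivity, alpha)   # 0.7 1.0 0.3
```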

  7. Sensitivity and Specificity, example. Consider t-scores from 10 regions that truly activate (x) and 10 that do not (o). At threshold u1: TN=7, FP=3, FN=0, TP=10, so Specificity = TN/(# H True) = 7/10 = 70% and Sensitivity = TP/(# H False) = 10/10 = 100%. [Figure: the twenty t-scores plotted against the threshold u1.]

  8. Sensitivity and Specificity, example continued. At a higher threshold u2: TN=9, FP=1, FN=3, TP=7, so Specificity = 9/10 = 90% and Sensitivity = 7/10 = 70%. [Figure: the same t-scores plotted against the threshold u2.]

  9. Inference at a single voxel. NULL hypothesis, H: activation is zero. α = p(t > u | H). We can choose u to ensure a voxel-wise significance level of α. This is called an 'uncorrected' p-value, for reasons we'll see later. We can then plot a map of above-threshold voxels. [Figure: t-distribution with threshold u = 2.]

  10. Inference for Images. [Figure: three panels - Signal, Noise, and Signal+Noise.]

  11. Use of an 'uncorrected' p-value, α = 0.1. [Figure: ten thresholded null images; the percentage of null pixels that are false positives in each ranges from 9.5% to 12.5%.] Using an 'uncorrected' p-value of 0.1 will lead us to conclude, on average, that 10% of voxels are active when they are not. This is clearly undesirable. To correct for this we can define a null hypothesis for images of statistics.
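
A small simulation (an illustrative sketch, not from the slides) reproduces this behaviour: thresholding a pure-noise image at an uncorrected α = 0.1 marks roughly 10% of null voxels as active.

```python
# Thresholding a pure-noise image at an uncorrected alpha = 0.1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.1
z = rng.standard_normal((100, 100))   # null image of z-scores, no true signal
u = stats.norm.isf(alpha)             # uncorrected threshold, ~1.28
false_positive_rate = (z > u).mean()
print(f"{100 * false_positive_rate:.1f}% of null voxels are above threshold")
```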

  12. Family-wise Null Hypothesis. FAMILY-WISE NULL HYPOTHESIS: activation is zero everywhere. If we reject the voxel null hypothesis at any voxel, we reject the family-wise null hypothesis. A false positive anywhere in the image gives a Family-Wise Error (FWE). The Family-Wise Error (FWE) rate = the 'corrected' p-value.

  13. [Figure: thresholded null images comparing use of an 'uncorrected' p-value, α = 0.1, with use of a 'corrected' p-value, FWE α = 0.1.]

  14. The Bonferroni correction. The Family-Wise Error rate (FWE), α, for a family of N independent voxels is α ≈ Nv, where v is the voxel-wise error rate. Therefore, to ensure a particular FWE, set v = α / N. BUT ...
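
A minimal sketch of the correction (the image size N = 100,000 voxels is an assumption for illustration):

```python
# Bonferroni-corrected voxel-wise threshold for N independent voxels.
from scipy import stats

alpha_fwe = 0.05          # desired family-wise error rate
N = 100_000               # number of voxels in the image (assumed)
v = alpha_fwe / N         # voxel-wise error rate after correction
u = stats.norm.isf(v)     # corresponding z threshold, ~4.9
print(f"voxel-wise alpha = {v:.2e}, z threshold = {u:.2f}")
```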

  15. The Bonferroni correction Assume Independent Voxels

  16. Independent voxels: a good assumption? Several factors introduce spatial correlation: • Voxel Point Spread Function (PSF): the continuous signal is sampled over a discrete period, which imposes a filter whose Fourier transform gives a PSF, spreading signal through the image from a point source (worse in PET) • Physiological noise • Smoothing • Normalisation

  17. The Bonferroni correction. [Figure: independent voxels versus spatially correlated voxels.] Bonferroni is too conservative for brain images.

  18. Random Field Theory. Consider a statistic image as a discretisation of a continuous underlying random field, and use results from continuous random field theory. [Figure: discretisation of a continuous random field.]

  19. Overview: 1. Terminology 2. Random Field Theory 3. Cluster level inference 4. SPM Results 5. FDR

  20. Euler Characteristic (EC), a topological measure. Threshold an image at u; the EC is the number of blobs above threshold. At high u, Prob(blob) ≈ average (expected) EC, so FWE α ≈ E[EC].
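
A sketch of the idea (assuming smoothed 2D Gaussian noise; at high thresholds blobs have no holes, so the EC is simply the count of connected blobs above threshold):

```python
# Euler characteristic of a thresholded image, approximated as the blob count.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
z = ndimage.gaussian_filter(rng.standard_normal((100, 100)), sigma=4)
z /= z.std()                       # roughly unit-variance smooth null field

u = 2.5                            # illustrative threshold
_, n_blobs = ndimage.label(z > u)  # connected components above threshold
print(f"EC at u={u}: {n_blobs} blob(s)")
```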

  21. Example: 2D Gaussian images. α = E[EC] = R (4 ln 2) (2π)^(-3/2) u exp(-u²/2), where u is the voxel-wise threshold and R is the number of Resolution Elements (RESELs). For N = 100×100 voxels and smoothness FWHM = 10 voxels, R = 10×10 = 100.

  22. Example: 2D Gaussian images. α = R (4 ln 2) (2π)^(-3/2) u exp(-u²/2). For R = 100 and α = 0.05, RFT gives u = 3.8.
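
The threshold can be found numerically by solving the expression above for u; a minimal sketch that reproduces u ≈ 3.8 for R = 100 and α = 0.05:

```python
# Solve the 2D Gaussian RFT expression for the threshold u giving FWE rate alpha.
import numpy as np
from scipy.optimize import brentq

def expected_ec(u, R):
    """Expected Euler characteristic of a 2D Gaussian field thresholded at u."""
    return R * (4 * np.log(2)) * (2 * np.pi) ** (-1.5) * u * np.exp(-u**2 / 2)

R, alpha = 100, 0.05
u = brentq(lambda x: expected_ec(x, R) - alpha, 1.0, 10.0)
print(f"RFT threshold u = {u:.2f}")   # ~3.8
```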

  23. How do we know the number of RESELs? 1. We can simply use the FWHM of the applied smoothing kernel, but processes such as normalisation mean smoothness will vary across the image. 2. Estimate the FWHM at each voxel using the residuals at each voxel (Worsley 1998).

  24. Resel Counts for Brain Structures (FWHM = 20 mm). [Table: resel counts broken down by volume, surface area, diameter and Euler characteristic of the search space.] (1) The threshold depends on the search volume. (2) Surface area makes a large contribution.

  25. Overview 1. Terminology 2. Theory 3. Imaging Data 4. Levels of Inference 5. SPM Results

  26. Smoothness and Applied Smoothing. RFT assumes smoothness >> voxel size; practically, FWHM ≥ 3 × voxel dimension. Typical applied smoothing: single-subject fMRI 6 mm, PET 12 mm; multi-subject fMRI 8-12 mm, PET 16 mm.
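
A sketch of applying such smoothing, using the standard FWHM-to-sigma conversion sigma = FWHM / (2√(2 ln 2)); the 3 mm voxel size and the volume dimensions are assumptions for illustration, with the 6 mm kernel taken from the slide's single-subject fMRI row:

```python
# Gaussian smoothing with a given FWHM, expressed in voxel units.
import numpy as np
from scipy import ndimage

fwhm_mm = 6.0
voxel_mm = 3.0                                   # assumed isotropic voxel size
sigma_vox = (fwhm_mm / voxel_mm) / (2 * np.sqrt(2 * np.log(2)))

rng = np.random.default_rng(2)
volume = rng.standard_normal((64, 64, 30))       # stand-in for an fMRI volume
smoothed = ndimage.gaussian_filter(volume, sigma=sigma_vox)
```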

  27. Overview 1. Terminology 2. Theory 3. Imaging Data 4. Levels of Inference 5. SPM Results

  28. Cluster Level Inference • We can increase sensitivity by trading off anatomical specificity. • Given a voxel-level threshold u, we can compute the probability (under the null hypothesis) of getting a cluster containing at least n voxels: CLUSTER-LEVEL INFERENCE. • Similarly, we can compute the probability of getting c clusters each having at least n voxels: SET-LEVEL INFERENCE.
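
A sketch of the quantity cluster-level inference works with: the sizes of suprathreshold clusters at a cluster-forming threshold u (the threshold and image here are illustrative, not from the slides):

```python
# Cluster sizes of a thresholded statistic image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
z = ndimage.gaussian_filter(rng.standard_normal((100, 100)), sigma=3)
z /= z.std()                                      # smooth, roughly unit-variance field

u = 2.3                                           # illustrative cluster-forming threshold
labels, n_clusters = ndimage.label(z > u)         # label connected suprathreshold clusters
cluster_sizes = np.bincount(labels.ravel())[1:]   # voxel count per cluster (skip background)
print(n_clusters, sorted(cluster_sizes, reverse=True)[:5])
```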

  29. Levels of inference. [Figure: three clusters of n = 12, 82 and 32 voxels above threshold.] Voxel-level: P(c ≥ 1 | n > 0, t ≥ 4.37) = 0.048 (corrected), i.e. at least one cluster with an unspecified number of voxels above threshold. Set-level: P(c ≥ 3 | n ≥ 12, u ≥ 3.09) = 0.019, i.e. at least 3 clusters above threshold. Cluster-level: P(c ≥ 1 | n ≥ 82, t ≥ 3.09) = 0.029 (corrected), i.e. at least one cluster with at least 82 voxels above threshold.

  30. Overview 1. Terminology 2. Theory 3. Imaging Data 4. Levels of Inference 5. SPM Results

  31. SPM results I Activations Significant at Cluster level But not at Voxel Level

  32. SPM results II Activations Significant at Voxel and Cluster level

  33. SPM results...

  34. False Discovery Rate, example. Consider t-scores from 10 regions that truly activate (x) and 10 that do not (o). FDR = FP/(# rejected); α = FP/(# H True). At threshold u1: TN=7, FP=3, FN=0, TP=10, so FDR = 3/13 = 23% and α = 3/10 = 30%. [Figure: the twenty t-scores plotted against the threshold u1.]

  35. False Discovery Rate, example continued. At the higher threshold u2: TN=9, FP=1, FN=3, TP=7, so FDR = 1/8 = 13% and α = 1/10 = 10%. [Figure: the same t-scores plotted against the threshold u2.]
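
A sketch of one common way to control FDR, the Benjamini-Hochberg step-up procedure (the slides discuss FDR control without naming a specific procedure; the p-values below are simulated for illustration):

```python
# Benjamini-Hochberg step-up procedure for FDR control at level q.
import numpy as np

def bh_threshold(p_values, q=0.1):
    """Largest p-value threshold such that rejecting all p <= threshold controls FDR at q."""
    p = np.sort(np.asarray(p_values))
    n = p.size
    below = p <= q * np.arange(1, n + 1) / n
    return p[below].max() if below.any() else 0.0

rng = np.random.default_rng(4)
# A few very small p-values (signal) mixed with uniform noise.
p_values = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 95)])
thr = bh_threshold(p_values, q=0.1)
print(f"reject voxels with p <= {thr:.4f}")
```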

  36. False Discovery Rate. [Figure: three panels - Signal, Noise, and Signal+Noise.]

  37. Control of Familywise Error Rate at 10% versus control of False Discovery Rate at 10%. [Figure: FWE control shows the occurrence of a familywise error across images; under FDR control, the percentage of activated pixels that are false positives ranges from 6.7% to 16.2% across ten example images.]

  38. Summary • We should not use uncorrected p-values • We can use Random Field Theory (RFT) to ‘correct’ p-values • RFT requires FWHM > 3 voxels • We only need to correct for the volume of interest • Cluster-level inference • False Discovery Rate is a viable alternative

  39. Functional Imaging Data • The random fields are the component fields of the GLM, Y = Xw + E, with e = E/σ. • We can only estimate the component fields, using estimates of w and σ. • To apply RFT we need the RESEL count, which requires smoothness estimates.

  40. ^  Estimated component fields voxels ? ? =  + parameters design matrix errors data matrix scans parameterestimates • estimate   residuals  estimated variance = Each row is an estimated component field estimatedcomponentfields
