
Causal Modeling of FMRI Data



Presentation Transcript


  1. Causal Modeling of FMRI Data Joe Ramsey With enormous help from Clark Glymour as well as Russell Poldrack, Stephen and Catherine Hanson, Stephen Smith, Erich Kummerfeld, Ruben Sanchez, and others.

  2. Goals • From imaging data, to extract as much information as we can, as accurately as we can, about which brain regions influence which others in the course of psychological tasks. • To generalize over tasks. • To specialize over groups of people.

  3. What Are the Brain Variables? • In current studies, from 20,000+ voxels down to a handful of ROIs (ROI = region of interest). • Question: How sensitive are causal inferences to brain variable selection?

  4. How are ROIs constructed (FSL)? • Define an experimental variable (box function). • Use a generalized linear model to determine which voxels “light up” in correlation with the experimental variable. • Add a group level step if voxels lighting up for the group is desired. • Cluster the resulting voxels into connected clusters. • Small clusters are eliminated. • Remaining clusters become the ROIs. • Symmetry constraints may be imposed.
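The clustering steps above (connected clusters, small clusters eliminated) can be sketched in Python. This is a toy stand-in, not FSL: the GLM and group-level steps are assumed to have already produced a binary map of "lit-up" voxels, here a small 2D grid, and the names and threshold are illustrative.

```python
from collections import deque

def cluster_rois(active, min_size=3):
    """Group 'lit-up' voxels (2D grid of 0/1) into 4-connected clusters
    and drop clusters smaller than min_size; survivors become the ROIs."""
    rows, cols = len(active), len(active[0])
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if active[r][c] and not seen[r][c]:
                # Breadth-first search over one connected component.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and active[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                if len(comp) >= min_size:  # eliminate small clusters
                    clusters.append(comp)
    return clusters

grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 0, 0],
        [1, 1, 1, 0]]
rois = cluster_rois(grid, min_size=3)  # the lone voxel at (1, 3) is dropped
```

Real pipelines work on 3D volumes with 6-, 18-, or 26-connectivity, but the logic is the same.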

  5. Search Complexity: How Big is the Set of Possible Explanations? [Figure: the eight possible causal relations between two variables X and Y once feedback and latent common causes are allowed; for graphical models over N variables, the number of possible structures grows super-exponentially.]
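As a concrete illustration of this growth (not from the slides): even restricted to DAGs, with no feedback or latents, the number of structures over N labeled variables is super-exponential in N. Robinson's recurrence counts them:

```python
from math import comb

def num_dags(n):
    """Number of directed acyclic graphs on n labeled nodes
    (Robinson's recurrence, OEIS A003024)."""
    a = [1]  # a[0] = 1: the empty graph
    for m in range(1, n + 1):
        total = 0
        for k in range(1, m + 1):
            # Inclusion-exclusion over the k nodes with no incoming edges.
            total += (-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
        a.append(total)
    return a[n]

print(num_dags(5))   # 29281 DAGs on just 5 nodes
print(num_dags(10))  # roughly 4.2e18 on 10 nodes
```

Allowing cycles and latent common causes, as fMRI requires, only makes the space larger.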

  6. Statistical Complexity • Graphical models are untestable unless parameterized into statistical models. • Incomplete models of associations are likely to fail tests. • Multiple testing problems. • Multiple subjects/Missing ROIs. • No fast scoring method for mixed ancestral graphs that model feedback and latent common causes. • Weak time lag information.

  7. Measurement Complexity • Sampling rate is slower than causal interaction speed. • Indirect measurement creates spurious associations of measured variables. [Figure: a chain of neural variables N1 → N2 → N3, each indirectly observed as a measured variable X1, X2, X3; regression of X3 on X1 and X2.]
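A minimal simulation of the indirect-measurement problem, in pure Python; the chain coefficients (2.0) and noise scales (0.5) are illustrative choices, not values from the slides. Because X2 measures N2 with error, conditioning on X2 does not fully block the N1 → N2 → N3 path, so the regression coefficient of X3 on X1 is clearly nonzero even though N1 influences N3 only through N2:

```python
import random

random.seed(0)
n = 50000
# Hypothetical neural chain N1 -> N2 -> N3, measured as Xi = Ni + noise.
X1, X2, X3 = [], [], []
for _ in range(n):
    N1 = random.gauss(0, 1)
    N2 = 2.0 * N1 + random.gauss(0, 1)
    N3 = 2.0 * N2 + random.gauss(0, 1)
    X1.append(N1 + random.gauss(0, 0.5))
    X2.append(N2 + random.gauss(0, 0.5))
    X3.append(N3 + random.gauss(0, 0.5))

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# OLS of X3 on X1 and X2 via the 2x2 normal equations.
s11, s12, s22 = cov(X1, X1), cov(X1, X2), cov(X2, X2)
c1, c2 = cov(X1, X3), cov(X2, X3)
det = s11 * s22 - s12 ** 2
b1 = (s22 * c1 - s12 * c2) / det  # spurious: ~0.39, not 0
b2 = (s11 * c2 - s12 * c1) / det
```

Regressing N3 on N1 and N2 directly (no measurement noise) would give a coefficient of exactly 0 on N1.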

  8. Specification Strategies • Guess a model and test it. • Search the model space or some restriction of it: a. Search for the full parameterized structure. b. Search for graphical structure alone. c. Search for graphical features (e.g., adjacencies).

  9. What Evidence of What Works, and What Doesn't? • Theory. • Limiting correctness of algorithms (PC, FCI, GES, LiNGAM, etc., under assumptions that are usually incorrect for fMRI). • Prior knowledge: do automated search results conform with established relationships? • Animal experiments (limited). • Simulation studies.

  10. Brief Review: Smith’s Simulation Study • 5 to 50 variables • 28 simulation conditions, 50 subjects/condition. • 38 search methods • Search 1 subject at a time.

  11. Methods tested by Smith • DCM, SEM excluded; no search. (Not completely true.) • Full correlation in various frequency bands • Partial correlation • Lasso(ICOV) • Mutual Information, Partial MI • Granger Causality • Coherence • Generalized Synchronization • Patel’s Conditional Dependence Measures • P(x|y) vs P(y|x) • Bayes Net Methods • CCD, CPC, FCI, PC, GES • LiNGAM

  12. Smith’s Results • Adjacencies: • Partial Correlation methods (GLASSO) and several “Bayes Net” methods from CMU get ~ 90% correct in most simulations. • Edge Directions • Smith: “None of the methods is very accurate, with Patel's τ performing best at estimating directionality, reaching nearly 65% d-accuracy, all other methods being close to chance.” (p. 883) • Most of the adjacencies for Patel’s τ are false.

  13. Simulation conditions (see handout)…

  14. Simulation 2 (10 variables, 11 edges)

  15. Simulation 4 (50 variables, 61 edges)

  16. Simulation 7: 250 minutes, 5 variables

  17. Simulation 8: Shared Inputs

  18. Simulation 14: 5-Cycle

  19. Simulation 15: Stronger Connections

  20. Simulation 16: More Connections

  21. Simulation 22: Nonstationary Connection Strengths

  22. Simulation 24: One Strong External Input

  23. Take Away Conclusion? • Nothing works! • Methods that get adjacencies (90%) cannot get directions of influence. • Methods that get directions (60% - 70%) for normal session lengths cannot tell true adjacencies from false adjacencies. • Even with unrealistically long sessions (4 hours), the best method gets 90% accuracy for directions but finds very few adjacencies.

  24. Idea… • If we could: • Increase sample size (effectively) by using data from multiple subjects • Focus on a method with strong adjacencies • Combine this with a method with strong orientations • We may be able to do better (Ramsey, Hanson and Glymour, NeuroImage). • This is the strategy of the PC-LiNGAM algorithm of Hoyer and several of us, though there are other ways to pursue the same strategy.

  25. Reminder: If noises are non-Gaussian, we can learn more than a pattern. • (1) Linear models + covariance data → pattern/CPDAG. • (2) Linear models + non-Gaussian noises (LiNGAM) → directed graph.

  26. Are noises for FMRI models non-Gaussian? • Yes. This is controversial but shouldn’t be. • For word/pseudoword data of Xue and Poldrack (Task 3), kurtosis ranges up to 39.3 for residuals. • There is a view in the literature that noises are distributed (empirically) as Gamma—say, with shape 19 and scale 20.
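Sample excess kurtosis is one quick check of non-Gaussianity. A pure-Python sketch with illustrative parameters: a Gamma with shape k has excess kurtosis 6/k (about 0.32 for the shape-19 case mentioned above); shape 2 is used below only to make the contrast with Gaussian noise obvious.

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: zero for a Gaussian."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

random.seed(1)
n = 100000
gauss_noise = [random.gauss(0, 1) for _ in range(n)]
gamma_noise = [random.gammavariate(2.0, 1.0) for _ in range(n)]  # illustrative shape
kg = excess_kurtosis(gauss_noise)    # near 0
kgam = excess_kurtosis(gamma_noise)  # near 3 (= 6/shape)
```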

  27. Are connection functions linear for FMRI data? • You tell me: • I’ve not done a thorough survey of studies.

  28. Coefficients? • One expects them to be positive. • Empirically, in linear models of fMRI data, there are very few negative coefficients (1 in 200, say), and they are only slightly negative when they occur. • This is consistent with negative coefficients arising from small-sample regression estimation errors. • For the most part, coefficients need to be less than 1: brain activations are cyclic and evolve over time. • Empirically, in linear models of fMRI, most coefficients are less than 1; to the extent that they are greater than 1, one suspects nonlinearity.

  29. The IMaGES algorithm • Adaptation for multiple subjects of GES, a Bayes net method tested by Smith, et al. • Iterative model construction using Bayesian scores separately on each subject at each step; edge with best average score added. • Tolerates ROIs missing in various subjects. • Seeks feed forward structure only. • Finds adjacencies between variables with latent common causes. • Forces sparsity by penalized BIC score to avoid triangulated variables (see Measurement Complexity)

  30. IMaGES/LOFS • Smith (2011): “Future work might look to optimize the use of higher-order statistics specifically for the scenario of estimating directionality from fMRI data.” • LiNGAM orients edges by non-Normality of higher moments of the distributions of adjacent variables. • LOFS uses the IMaGES adjacencies, and the LiNGAM idea for directing edges (with a different score for non-Normality, and without independent components). • Unlike IMaGES, LOFS can find cycles. • LOFS (from our paper) is R1 and/or R2…

  31. Procedure R1(S) You don’t have to read these—I’ll describe them!
  • G <- empty graph over the variables of S
  • For each variable V:
    • Find the combination C of adj(V, S) that maximizes NG(eV | C).
    • For each W in C:
      • Add W -> V to G
  • Return G
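A minimal runnable reading of R1, with two loudly labeled assumptions: the residual helper is a plain pure-Python OLS, and |excess kurtosis| stands in for the NG score (the actual LOFS score is the tail-weighted Anderson-Darling/EDF score of slide 33). The toy data and names are illustrative.

```python
import random
from itertools import combinations

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def residual(y, preds):
    """OLS residual of y on predictor columns, via the normal equations."""
    if not preds:
        return y[:]
    yc, P = center(y), [center(p) for p in preds]
    k = len(P)
    A = [[sum(a * b for a, b in zip(P[i], P[j])) for j in range(k)] for i in range(k)]
    b = [sum(a * c for a, c in zip(P[i], yc)) for i in range(k)]
    for i in range(k):                    # Gaussian elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [x - f * z for x, z in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):          # back-substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return [yi - sum(coef[j] * P[j][t] for j in range(k)) for t, yi in enumerate(yc)]

def ng(xs):
    """Stand-in NG score: |excess kurtosis| (LOFS itself uses an EDF score)."""
    m = sum(xs) / len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m4 = sum((x - m) ** 4 for x in xs) / len(xs)
    return abs(m4 / m2 ** 2 - 3.0)

def r1(data, adj):
    """R1: for each V, pick the subset C of adj(V) (including the empty set)
    maximizing NG(e_V | C); direct each W in C into V."""
    edges = set()
    for v in data:
        best, best_c = ng(data[v]), ()
        for size in range(1, len(adj[v]) + 1):
            for c in combinations(adj[v], size):
                s = ng(residual(data[v], [data[w] for w in c]))
                if s > best:
                    best, best_c = s, c
        for w in best_c:
            edges.add((w, v))
    return edges

# Toy example: X -> Y with uniform (hence non-Gaussian) noises.
random.seed(2)
X = [random.uniform(-1, 1) for _ in range(20000)]
Y = [x + random.uniform(-1, 1) for x in X]
edges_found = r1({"X": X, "Y": Y}, {"X": ["Y"], "Y": ["X"]})
print(edges_found)  # {('X', 'Y')}
```

The direction is recoverable because the residual of Y on X is the pure uniform noise (strongly non-Gaussian), while the residual of X on Y mixes two noises and is closer to Gaussian.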

  32. Procedure R2(S)
  • G <- empty graph over the variables of S
  • For each pair of variables X, Y:
    • Scores <- empty
    • For each combination of adjacents C for X and Y:
      • If NG(eX|Y) < NG(X) & NG(eY|X) > NG(Y):
        • score <- NG(X) + NG(eY|X)
        • Add <X -> Y, score> to Scores
      • If NG(eX|Y) > NG(X) & NG(eY|X) < NG(Y):
        • score <- NG(eX|Y) + NG(Y)
        • Add <X <- Y, score> to Scores
    • If Scores is empty:
      • Add X -- Y to G.
    • Else:
      • Add to G the edge in Scores with the highest score.
  • Return G

  33. Non-Gaussianity Scores • Log cosh – used in ICA • Exp = -e^(-x^2/2) – used in ICA • Kurtosis – ICA (one of the first tried, not great) • Mean absolute – PC-LiNGAM • E(e^X) – cumulant arithmetic: e^(κ1(X) + κ2(X)/2! + κ3(X)/3! + …) • Anderson-Darling A^2 – LOFS: an empirical distribution function (EDF) score with heavy weighting on the tails. We’re using this one!
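The Anderson-Darling A^2 score can be sketched directly from its computational formula. This pure-Python version tests against a standard normal after standardizing the sample; it is an illustration of the statistic, not the exact LOFS implementation.

```python
import math
import random

def anderson_darling(xs):
    """A^2 statistic against a standard normal, after standardizing the sample.
    Larger values mean stronger departure from Gaussianity; the 1/(F(1-F))
    weighting inside the statistic emphasizes the tails."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    z = sorted((x - m) / s for x in xs)
    cdf = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    total = sum((2 * i + 1) * (math.log(cdf(z[i])) + math.log(1.0 - cdf(z[n - 1 - i])))
                for i in range(n))
    return -n - total / n

random.seed(3)
normal = [random.gauss(0, 1) for _ in range(2000)]
uniform = [random.uniform(-1, 1) for _ in range(2000)]
ad_normal = anderson_darling(normal)    # small: consistent with Gaussian
ad_uniform = anderson_darling(uniform)  # large: clearly non-Gaussian
```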

  34. Mixing Residuals • We are assuming that residuals for ROIs from different subjects are drawn from the same population, so that they can be mixed. • Sometimes we center residuals from different subjects before mixing, sometimes not. • For the Smith study it doesn’t matter—the data is already centered!
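Per-subject centering before mixing can be sketched as follows; the residual values and subject count are illustrative, not real data.

```python
def center(xs):
    """Subtract the within-subject mean so residuals can be pooled across subjects."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

# Hypothetical per-subject residuals for one ROI, with different baselines.
subjects = [
    [1.2, 0.8, 1.1, 0.9],     # subject 1: baseline near 1.0
    [-0.5, 0.1, -0.2, -0.6],  # subject 2: baseline near -0.3
]
pooled = [x for subj in subjects for x in center(subj)]  # mixed residuals
```

Centering removes between-subject offset differences that would otherwise masquerade as distributional structure in the pooled sample.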

  35. Precision and Recall • Precision = True positives / all positives • What fraction of the guys you found were correct? • Recall = True positives / all true guys • What fraction of the correct guys did you find?
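The definitions above, applied to sets of directed edges (the edge names below are made up for illustration):

```python
def precision_recall(found, true):
    """Precision and recall for sets of directed edges (parent, child)."""
    tp = len(found & true)  # true positives: edges found with the right direction
    precision = tp / len(found) if found else 1.0
    recall = tp / len(true) if true else 1.0
    return precision, recall

true_edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")}
found_edges = {("A", "B"), ("B", "C"), ("D", "C")}  # ("D", "C") is a reversed edge
p, r = precision_recall(found_edges, true_edges)
print(p, r)  # 0.666..., 0.5
```

Note that a reversed edge counts against both scores here; adjacency precision/recall would instead treat edges as unordered pairs.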

  36. Some Further Problems • Discovering nearly canceling 2-cycles is hard (but we will try anyway…) • Identifying latent variables for acyclic models • Reliability of search may be worse with event designs than with block designs • Subjects that differ in causal structure will yield poor results for multi-subject methods.

  37. Hands On • Download task3extracted.tet and task3extracted.zip. Make sure the zip file is extracted, if it did not extract automatically. • Open the session. • Place a new Data box on the workbench. • Select Load Data from the File menu. • Using the shift key, select all of the files in the zip. • Load them.

  38. Hands On • Attach a Search box to the data and run IMaGES. • Copy the layout from the layout graph provided into the search box (using menus). • Attach another Search box with IMaGES and Data as input and run LOFS.

  39. Thanks! • S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, M. W. Woolrich (2011), Network modelling methods for fMRI, NeuroImage. • J. D. Ramsey, S. J. Hanson, C. Hanson, Y. O. Halchenko, R. A. Poldrack, and C. Glymour (2010), Six problems for causal inference from fMRI, NeuroImage. • J. D. Ramsey, S. J. Hanson, C. Glymour, Multi-subject search correctly identifies causal connections and most causal directions in the DCM models of the Smith et al. simulation study, NeuroImage. • G. Xue, R. Poldrack (2007), The neural substrates of visual perceptual learning of words: implications for the visual word form area hypothesis, J. Cogn. Neurosci. • Thanks to the James S. McDonnell Foundation.
