
Machine Learning on fMRI Data


Presentation Transcript


  1. Machine Learning on fMRI Data Rebecca Hutchinson January 28, 2008

  2. Neurosemantics Research Team. Marcel Just and Tom Mitchell (faculty); postdoctoral fellows: Svetlana Shinkareva, Francisco Pereira, Vincente Malave; professional staff: Vladimir Cherkassky, Rob Mason; PhD students: Kai Min Chang, Rebecca Hutchinson, Mark Palatucci, Indra Rustandi.

  3. fMRI Data. [Figure: hemodynamic response, signal amplitude over time (seconds) following neural activity.] Features: ~10,000 voxels, imaged every second. Training examples: 10-40 trials (task repetitions).

  4. Brain Imaging and Machine Learning. ML case study: high-dimensional, sparse data. "Learning to Decode Cognitive States from Brain Images," T. M. Mitchell et al., Machine Learning, 57(1), pp. 145-175, 2004. "The Support Vector Decomposition Machine," F. Pereira and G. Gordon, ICML 2006. "Classification in Very High Dimensional Problems with Handfuls of Examples," M. Palatucci and T. Mitchell, ECML 2007. (Francisco Pereira's PhD thesis topic.)

  5. Brain Imaging and Machine Learning. ML case study: learning models of individuals, of the population, and of individual variation. "Training fMRI Classifiers to Discriminate Cognitive States across Multiple Subjects," X. Wang, R. Hutchinson, and T. M. Mitchell, NIPS 2003. "Classifying Multiple-Subject fMRI Data Using the Hierarchical Gaussian Naïve Bayes Classifier," Indrayana Rustandi, 13th Conference on Human Brain Mapping, June 2007. (Indra Rustandi's PhD thesis topic.)

  6. Brain Imaging and Machine Learning. ML case study: complex time series generated by hidden processes. "Hidden Process Models," R. Hutchinson, T. Mitchell, and I. Rustandi, ICML 2006. "Learning to Identify Overlapping and Hidden Cognitive Processes from fMRI Data," R. Hutchinson, T. M. Mitchell, and I. Rustandi, 11th Conference on Human Brain Mapping, 2005. (Rebecca Hutchinson's PhD thesis topic.)

  7. Study: Pictures and Sentences. • Task: decide whether the sentence describes the picture correctly, and indicate with a button press. • 13 normal subjects, 40 trials per subject. • Sentences and pictures describe 3 symbols (*, +, and $) using 'above', 'below', 'not above', 'not below'. • Images are acquired every 0.5 seconds. [Trial timeline: fixation; first stimulus (read sentence or view picture) at t=0; the other stimulus at 4 sec.; rest at 8 sec.]

  8. Goals for fMRI. • Track cognitive processes over time: estimate each process's hemodynamic response, and estimate each process's timing. (Allowing processes that do not directly correspond to the stimulus timing is a key contribution of HPMs!) • Compare hypotheses of cognitive behavior.

  9. Processes of the HPM. Process 1 (ReadSentence): response signature W; duration d: 11 sec.; allowable offsets Ω: {0, 1}; P(Ω): {θ0, θ1}. Process 2 (ViewPicture): response signature W; duration d: 11 sec.; allowable offsets Ω: {0, 1}; P(Ω): {θ0, θ1}. The input stimuli (sentence, picture) define timing landmarks λ1 and λ2. One configuration c is a set of process instances i1, i2, …, ik; for example, instance i2 has process π = 2 (ViewPicture), timing landmark λ2, and offset O = 1, giving start time λ2 + O. The predicted mean at each time point is the sum of the active instances' response signatures, plus per-voxel noise: N(0, σ1²) for voxel v1, N(0, σ2²) for voxel v2, and so on.
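
This generative story can be sketched in a few lines of code. The following is a minimal illustration under our own naming (predicted_mean, sample_trial, and all example numbers are assumptions, not the authors' code): each instance adds its response signature starting at landmark + offset, and each voxel receives independent Gaussian noise.

```python
import numpy as np

def predicted_mean(instances, signatures, T, V):
    """instances: list of (process_id, landmark, offset);
    signatures: dict process_id -> (d x V) response signature W.
    Returns the (T x V) predicted mean: responses summed linearly."""
    mu = np.zeros((T, V))
    for proc, landmark, offset in instances:
        W = signatures[proc]
        start = landmark + offset          # instance start time
        if start >= T:
            continue
        stop = min(start + W.shape[0], T)
        mu[start:stop] += W[: stop - start]
    return mu

def sample_trial(instances, signatures, sigma2, T):
    """One noisy trial: predicted mean plus N(0, sigma_v^2) noise per voxel."""
    mu = predicted_mean(instances, signatures, T, len(sigma2))
    return mu + np.random.randn(T, len(sigma2)) * np.sqrt(sigma2)

# Hypothetical example: ReadSentence (1) at landmark 0 and ViewPicture (2)
# at landmark 8, both with offset 0; 11-image signatures, 3 voxels, T = 24.
sigs = {1: 0.5 * np.ones((11, 3)), 2: 0.8 * np.ones((11, 3))}
Y = sample_trial([(1, 0, 0), (2, 8, 0)], sigs, np.array([1.0, 1.0, 1.0]), T=24)
```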

  10. HPM Formalism. HPM = ⟨P, C, Σ⟩. P = ⟨p1, …, pP⟩, a set of processes (e.g., ReadSentence); each process p = ⟨W, d, Ω, Θ⟩, where W = response signature, d = process duration, Ω = allowable offsets, and Θ = multinomial parameters over the values in Ω. C = ⟨c1, …, cC⟩, a set of configurations; each configuration c = ⟨i1, …, iL⟩ is a set of process instances, and each instance i = ⟨p, λ, O⟩ (e.g., ReadSentence(S1)), where p = process ID, λ = timing landmark (e.g., the stimulus presentation of S1), and O = offset (taking values in Ωp). Σ = ⟨σ1², …, σV²⟩, a variance for each voxel.
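
These tuples translate directly into data structures. A minimal sketch in Python, with hypothetical class and field names of our own choosing:

```python
from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class Process:                    # p = <W, d, Omega, Theta>
    W: np.ndarray                 # response signature, shape (d, V)
    d: int                        # process duration (in images)
    omega: List[int]              # allowable offsets
    theta: Dict[int, float]       # multinomial P(offset) over omega

@dataclass
class Instance:                   # i = <p, lambda, O>
    process_id: int               # p, the process this instantiates
    landmark: int                 # lambda, timing landmark (stimulus onset)
    offset: int                   # O, drawn from the process's omega

@dataclass
class HPM:                        # HPM = <P, C, Sigma>
    processes: Dict[int, Process]
    configurations: List[List[Instance]]  # candidate instance sets C
    sigma2: np.ndarray                    # per-voxel noise variances
```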

  11. Encoding Experiment Design. Processes: ReadSentence = 1, ViewPicture = 2, Decide = 3. The input stimuli define timing landmarks λ1 and λ2. Constraints encoded: p(i1) ∈ {1, 2}; p(i2) ∈ {1, 2}; p(i1) ≠ p(i2); O(i1) = 0; O(i2) = 0; p(i3) = 3; O(i3) ∈ {1, 2}. These constraints admit exactly four configurations (sentence-first or picture-first, crossed with the two possible Decide offsets), as enumerated in the sketch below.
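
The constraint list is small enough to enumerate by brute force. A sketch, where anchoring i1 at λ1, i2 at λ2, and the Decide instance at λ2 are our assumptions about the slide's intent:

```python
from itertools import product

READ_SENTENCE, VIEW_PICTURE, DECIDE = 1, 2, 3

configs = []
for p1, p2, o3 in product([READ_SENTENCE, VIEW_PICTURE],   # p(i1) in {1, 2}
                          [READ_SENTENCE, VIEW_PICTURE],   # p(i2) in {1, 2}
                          [1, 2]):                         # O(i3) in {1, 2}
    if p1 == p2:                                           # p(i1) != p(i2)
        continue
    # i1 and i2 have fixed offsets of 0; i3 (Decide) has an uncertain offset.
    configs.append([(p1, "lambda1", 0), (p2, "lambda2", 0), (DECIDE, "lambda2", o3)])

assert len(configs) == 4   # Configurations 1-4 on the slide
```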

  12. Inference. Inference is over configurations: choose the most likely configuration ĉ = argmax_c P(C = c | Y, HPM), where C is the configuration, Y is the observed data, and HPM is the model.
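
Under the per-voxel Gaussian noise model, the posterior over configurations follows by scoring each candidate's predicted mean against the data. A sketch reusing the hypothetical predicted_mean() above; the uniform prior is our simplifying assumption (the HPM's Θ would supply the real one):

```python
import numpy as np
from scipy.stats import norm

def config_posterior(Y, configs, signatures, sigma2, prior=None):
    """Return P(C = c | Y, HPM) for each candidate configuration c."""
    T, V = Y.shape
    if prior is None:
        prior = np.full(len(configs), 1.0 / len(configs))   # assumed uniform
    log_post = np.log(prior)
    for k, instances in enumerate(configs):
        mu = predicted_mean(instances, signatures, T, V)
        # independent N(mu_{t,v}, sigma_v^2) noise at every (t, v)
        log_post[k] += norm.logpdf(Y, mu, np.sqrt(sigma2)).sum()
    log_post -= log_post.max()            # guard against underflow
    post = np.exp(log_post)
    return post / post.sum()

# The MAP configuration of this slide: configs[np.argmax(config_posterior(...))]
```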

  13. Learning. • Parameters to learn: the response signature W for each process, the timing distribution Θ for each process, and the variance σv² for each voxel. • Expectation-Maximization (EM) algorithm. • The latent variable z is an indicator of which configuration is correct. • E step: estimate a probability distribution over z. • M step: update the estimates of W (using weighted least squares) and of Θ and σv² (using standard MLEs) based on the E step.
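
Written out in the notation of slide 10, the two steps amount to the following updates (our reconstruction of the formulas the slide describes in words, with X_c the design matrix implied by configuration c):

```latex
\begin{aligned}
\text{E step:}\quad & z_c = P(C = c \mid Y, W^{\mathrm{old}}, \Theta^{\mathrm{old}}, \Sigma^{\mathrm{old}})
  \propto P(Y \mid c, W^{\mathrm{old}}, \Sigma^{\mathrm{old}})\, P(c \mid \Theta^{\mathrm{old}}) \\
\text{M step:}\quad & W = \arg\min_{W} \sum_{c} z_c \, \lVert Y - X_c W \rVert^2
  \quad \text{(weighted least squares)} \\
& \theta_{p,o} \propto \sum_{c} z_c \cdot \#\{\text{instances of } p \text{ in } c \text{ with offset } o\} \\
& \sigma_v^2 = \frac{1}{T} \sum_{c} z_c \sum_{t} \bigl( Y_{t,v} - (X_c W)_{t,v} \bigr)^2
\end{aligned}
```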

  14. Cognitive Processes

  15. Comparing Cognitive Models. 5-fold cross-validation: average data log-likelihood of the held-out fold (see the sketch below).
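
A sketch of that protocol, with fit and loglik standing in (as hypothetical callables) for the EM training routine and the per-trial held-out log-likelihood:

```python
import numpy as np

def cross_validated_score(trials, fit, loglik, n_folds=5, seed=0):
    """Average held-out log-likelihood over n_folds folds; higher is better."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(trials)), n_folds)
    scores = []
    for k in range(n_folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != k])
        model = fit([trials[i] for i in train_idx])          # e.g., EM training
        scores.append(np.mean([loglik(model, trials[i]) for i in folds[k]]))
    return float(np.mean(scores))
```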

  16. Learned Temporal Response. [Figure: process response time course for a single voxel of Subject K for process R under HPM13; y-axis: process response parameter value; x-axis: images (each image is 0.5 seconds).]

  17. Learned Offset Distributions. [Figure: learned offset distributions for Subject K.]

  18. Related Work. • fMRI: the General Linear Model (Dale99) must assume the timing of process onsets in order to estimate the hemodynamic response; computer models of human cognition (Just99, Anderson04) predict fMRI data rather than learning the parameters of processes from the data. • Machine Learning: classification of windows of fMRI data (Cox03, Haxby01, Mitchell04) does not typically model overlapping hemodynamic responses; HPM assumptions/constraints are difficult to encode in Dynamic Bayes Networks (Murphy02, Ghahramani97).

  19. Future Work. • Regularization for process responses: temporal smoothness, spatial smoothness, spatial sparsity, spatial priors. • Improve the computational complexity of the algorithms. • Apply HPMs to open problems in cognitive science.

  20. Conclusions • Take-away messages: • fMRI data is an interesting case study for a number of machine learning challenges. • HPMs are a probabilistic model for time series data generated by a latent collection of processes. • In the fMRI domain, HPMs can simultaneously estimate the hemodynamic response and localize the timing of cognitive processes.

  21. References
John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind. Psychological Review, 111(4):1036–1060, 2004. http://act-r.psy.cmu.edu/about/.
David D. Cox and Robert L. Savoy. Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage, 19:261–270, 2003.
Anders M. Dale. Optimal experimental design for event-related fMRI. Human Brain Mapping, 8:109–114, 1999.
Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245–275, 1997.
James V. Haxby, M. Ida Gobbini, Maura L. Furey, Alumit Ishai, Jennifer L. Schouten, and Pietro Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293:2425–2430, September 2001.
Marcel Adam Just, Patricia A. Carpenter, and Sashank Varma. Computational modeling of high-level cognition and brain function. Human Brain Mapping, 8:128–136, 1999. http://www.ccbi.cmu.edu/project 10modeling4CAPS.htm.
Tom M. Mitchell et al. Learning to decode cognitive states from brain images. Machine Learning, 57:145–175, 2004.
Kevin P. Murphy. Dynamic Bayesian networks. To appear in M. Jordan (ed.), Probabilistic Graphical Models, November 2002.

  22. Simple Case: Known Timing. [Figure: the data matrix Y (T rows) written as a known 0/1 design matrix, whose column blocks indicate when processes p1, p2, p3 are active, multiplied by the stacked response signatures W(1), W(2), W(3).] With timing known, the signatures follow by least squares, as in the sketch below.
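
A runnable sketch of that least-squares estimate under our own naming and toy dimensions (build_design and all numbers are illustrative assumptions), exactly as in the GLM:

```python
import numpy as np

def build_design(onsets, durations, T):
    """onsets: dict process_id -> list of start times; durations: dict
    process_id -> d. Each process contributes one indicator column per
    within-response time index, so Y = X @ W_stacked + noise."""
    blocks = []
    for p, d in sorted(durations.items()):
        block = np.zeros((T, d))
        for t0 in onsets[p]:
            for j in range(d):
                if t0 + j < T:
                    block[t0 + j, j] = 1.0    # j-th point of p's signature
        blocks.append(block)
    return np.hstack(blocks)

T, V = 40, 3
X = build_design({1: [0, 20], 2: [8, 28]}, {1: 11, 2: 11}, T)
W_true = np.random.randn(X.shape[1], V)
Y = X @ W_true + 0.1 * np.random.randn(T, V)
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least-squares signatures
```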

  23. Challenge: Unknown Timing. [Figure: the same Y = design matrix × stacked signatures W(1), W(2), W(3) picture, but with the 0/1 entries marking when p1, p2, p3 occur replaced by unknowns.] Uncertainty about the processes essentially makes the design matrix a random variable.

  24. Uncertain Timings. • The design matrix models several choices for each time point. [Figure: a stacked design matrix with column blocks P, S, D; each time point contributes one row per set of consistent configurations, e.g. t=1 appears once for configurations {3,4} and once for {1,2}, while t=18 appears separately for configurations 3, 4, 1, and 2, so the stacked matrix has T' > T rows.]

  25. Uncertain Timings. • Weight each row with the probabilities from the E-step. [Figure: the stacked design matrix Y = XW with per-row weights e1, e2, e3, e4, …; the weight of a row belonging to configurations 3 and 4 is e1 = P(C=3 | Y, Wold, Θold, σold) + P(C=4 | Y, Wold, Θold, σold).]
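
The resulting M-step solves one large weighted least squares problem: stack a copy of the design rows for each configuration, weight each copy by that configuration's E-step posterior, and solve. A sketch with our own function name (equivalent to weighting shared rows by summed posteriors, as on the slide):

```python
import numpy as np

def m_step_signatures(Y, design_blocks, post):
    """Y: (T x V) data; design_blocks: one (T x K) 0/1 design matrix per
    configuration; post: E-step weights P(C = c | Y, old parameters).
    Returns the (K x V) stacked response-signature estimate W."""
    X = np.vstack(design_blocks)                   # T' > T rows, as on slide 24
    Y_rep = np.vstack([Y] * len(design_blocks))    # the data, repeated per block
    w = np.concatenate([np.full(Y.shape[0], p) for p in post])
    sw = np.sqrt(w)[:, None]
    # weighted least squares = ordinary least squares on sqrt-weighted rows
    W, *_ = np.linalg.lstsq(X * sw, Y_rep * sw, rcond=None)
    return W
```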

  26. Our Approach. • The model of each process contains a probability distribution over when it can occur. • Use prior knowledge about timing to limit the hypothesis space.

  27. HPM Modeling Assumptions. • Model the latent time series at the process level. • Process instances share parameters based on their process types. • Use prior knowledge from the experiment design. • Sum process responses linearly.

  28. HPMs: the graphical model. [Figure: graphical model with nodes Configuration c, Timing Landmark λ, Process Type p, Offset O, and Start Time s for each process instance i1, …, ik, generating the data Y(t,v) for t = 1..T, v = 1..V; shading distinguishes observed from unobserved nodes.] The set C of configurations constrains the joint distribution on {p(k), O(k)} for all k.
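
Combining slides 9 and 10, the data likelihood given a configuration factorizes over time points and voxels. The following is our reconstruction of the implied model, with response-signature terms outside [1, d] taken to be zero:

```latex
P(Y \mid C = c, \mathrm{HPM}) \;=\; \prod_{t=1}^{T} \prod_{v=1}^{V}
  \mathcal{N}\!\bigl( Y_{t,v};\; \mu_{t,v}(c),\; \sigma_v^2 \bigr),
\qquad
\mu_{t,v}(c) \;=\; \sum_{i \in c} W^{(p(i))}_{\,t - \lambda(i) - O(i),\; v}
```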

  29. [Figure: candidate response signatures d1 … dN for Process 1 through Process P, aligned against windows of the data time series.] Prior knowledge: there are a total of 6 processes in this window of data; an instance of Process 1 begins in this window; an instance of Process P begins in this window; an instance of either Process 1 OR Process P begins in this window.

  30. [Figure: the candidate signatures d1 … dN again, now with the possible Process 1 timings and Process P timings marked along the data.] More questions: Can we learn the parameters of these processes from the data, even when we don't know when they occur? Would a different set of processes model the data better?

  31. Approach and Assumptions. • Model the latent time series at the process level. • Processes contain probability distributions over when they occur. • Process instances inherit parameters from their process types. • Use prior knowledge from the experiment design to limit complexity. • Sum process responses linearly.
