
Ising Models for Neural Data

John Hertz, Niels Bohr Institute and Nordita
Work done with Yasser Roudi (Nordita) and Joanna Tyrcha (SU)
Math Bio Seminar, SU, 26 March 2009. arXiv:0902.2885v1 (2009)



Presentation Transcript


  1. Ising Models for Neural Data John Hertz, Niels Bohr Institute and Nordita work done with Yasser Roudi (Nordita) and Joanna Tyrcha (SU) Math Bio Seminar, SU, 26 March 2009 arXiv:0902.2885v1 (2009)

  5. Background and basic idea:
  • New recording technology makes it possible to record from hundreds of neurons simultaneously
  • But what to make of all these data?
  • Construct a model of the spike pattern distribution: find “functional connectivity” between neurons
  • Here: results for model networks

  10. Outline
  • Data
  • Model and methods, exact and approximate
  • Results: accuracy of approximations, scaling of functional connections
  • Quality of the fit to the data distribution

  15. Get Spike Data from Simulations of Model Network
  [Network diagram: Excitatory Population, Inhibitory Population, External Input (Exc.)]
  • 2 populations in network: Excitatory, Inhibitory
  • Excitatory external drive
  • HH-like neurons, conductance-based synapses
  • Random connectivity: probability of connection between any two neurons is c = K/N, where N is the size of the population and K is the average number of presynaptic neurons
  • Results here for c = 0.1, N = 1000

  16. Tonic input [raster figure]: inhibitory (100 neurons, 16.1 Hz); excitatory (400 neurons, 7.9 Hz)

  17. Rapidly-varying input: stimulus modulation of the external rate Rext by filtered white noise with correlation time τ = 100 ms [plot of Rext vs. t (sec)]

  18. Rasters [raster figure]: inhibitory (100 neurons, 15.1 Hz); excitatory (400 neurons, 8.6 Hz)

  19. Correlation coefficients (data in 10-ms bins): tonic data, cc ≈ 0.0052 ± 0.0328

  20. Correlation coefficients: “stimulus” data, cc ≈ 0.0086 ± 0.0278. Experiments: cited values of cc ≈ 0.01 [Schneidman et al., Nature (2006)]
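Numbers like these can be reproduced from binned spike data with a few lines of NumPy. This is an illustrative sketch (the function name, the toy data, and the 8 Hz rate are my choices, not from the talk):

```python
import numpy as np

def pairwise_cc(spikes):
    """Pairwise correlation coefficients of binned spike trains.

    spikes: (T, N) array of 0/1 spike counts, one row per 10-ms bin,
    one column per neuron. Returns each pair's cc once (off-diagonal,
    upper triangle of the correlation matrix).
    """
    C = np.corrcoef(spikes.T)            # (N, N) correlation-coefficient matrix
    iu = np.triu_indices_from(C, k=1)    # each pair once, excluding the diagonal
    return C[iu]

# Toy data: 10 independent neurons at ~8 Hz (spike prob. 0.08 per 10-ms bin),
# so the cc values scatter around zero, as for the tonic data above.
rng = np.random.default_rng(0)
spikes = (rng.random((5000, 10)) < 0.08).astype(int)
cc = pairwise_cc(spikes)
```

For real recordings the same call applies once the spike times are histogrammed into 10-ms bins.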

  23. Modeling the distribution of spike patterns
  • Have sets of spike patterns {Si}k, Si = ±1 for spike/no spike (we use 10-ms bins; temporal order irrelevant)
  • Construct a distribution P[S] that generates the observed patterns (i.e., has the same correlations)
  • Simplest nontrivial model (Schneidman et al., Nature 440, 1007 (2006); Tkačik et al., arXiv:q-bio.NC/0611072): Ising model, parametrized by Jij, hi:
  P[S] ∝ exp( Σi hi Si + ½ Σi≠j Jij Si Sj )
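For a handful of neurons the Ising distribution can be evaluated exactly by enumerating all 2^N patterns. A minimal sketch, with made-up illustrative values of hi and Jij (not from the talk):

```python
import numpy as np
from itertools import product

def ising_distribution(h, J):
    """Exact Ising distribution P[S] ∝ exp(Σ h_i S_i + ½ Σ_{i≠j} J_ij S_i S_j)
    by brute-force enumeration (feasible only for small N; J has zero diagonal)."""
    N = len(h)
    states = np.array(list(product([-1, 1], repeat=N)))       # (2^N, N) patterns
    logw = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    w = np.exp(logw)
    return states, w / w.sum()

# Hypothetical 3-neuron example: negative fields give low firing rates,
# one positive coupling makes neurons 0 and 1 fire together more often.
h = np.array([-1.0, -1.0, -1.0])
J = np.zeros((3, 3))
J[0, 1] = J[1, 0] = 0.5
states, P = ising_distribution(h, J)
m = P @ states                                                # model means <S_i>
```

From `states` and `P` one can read off any moment of the model, which is what the fitting methods below try to match to the data.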

  27. An inverse problem: have the statistics <Si>, <SiSj>; want hi, Jij
  Exact method: Boltzmann learning,
  δhi = η( <Si>data − <Si>model ),  δJij = η( <SiSj>data − <SiSj>model )
  Requires long Monte Carlo runs to compute the model statistics
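Boltzmann learning nudges hi and Jij in proportion to the mismatch between data and model statistics, re-estimating the model statistics by Monte Carlo at every step. A toy sketch (function names, learning rate η, and sweep counts are my choices; real runs need far more sampling, which is exactly the cost noted above):

```python
import numpy as np

def metropolis_stats(h, J, n_sweeps=1000, burn=200, seed=0):
    """Estimate <S_i> and <S_i S_j> of P[S] ∝ exp(Σ h_i S_i + ½ Σ J_ij S_i S_j)
    by single-spin-flip Metropolis sampling (J must have a zero diagonal)."""
    rng = np.random.default_rng(seed)
    N = len(h)
    S = rng.choice([-1.0, 1.0], size=N)
    m = np.zeros(N)
    C = np.zeros((N, N))
    kept = 0
    for sweep in range(n_sweeps):
        for i in range(N):
            dH = -2.0 * S[i] * (h[i] + J[i] @ S)   # change in log-probability
            if dH >= 0 or rng.random() < np.exp(dH):
                S[i] = -S[i]
        if sweep >= burn:
            m += S
            C += np.outer(S, S)
            kept += 1
    return m / kept, C / kept

def boltzmann_learning(m_data, C_data, eta=0.2, n_iter=40):
    """Boltzmann learning: nudge h, J until model statistics match the data."""
    N = len(m_data)
    h = np.zeros(N)
    J = np.zeros((N, N))
    for it in range(n_iter):
        m_mod, C_mod = metropolis_stats(h, J, seed=it)
        h += eta * (m_data - m_mod)       # δh_i = η(<S_i>_data − <S_i>_model)
        dJ = eta * (C_data - C_mod)       # δJ_ij = η(<S_iS_j>_data − <S_iS_j>_model)
        np.fill_diagonal(dJ, 0.0)         # keep self-couplings out
        J += dJ
    return h, J
```

The updates are the gradient of the log-likelihood, which is concave in (h, J), so the loop converges up to Monte Carlo noise; the expensive part is the inner sampling, motivating the approximations that follow.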

  31. 1. (Naïve) mean field theory
  Mean field equations: mi = tanh( hi + Σj Jij mj ), or hi = tanh⁻¹mi − Σj Jij mj
  Inverse susceptibility (inverse correlation) matrix: (C⁻¹)ij = ∂hi/∂mj = δij/(1 − mi²) − Jij
  So, given the correlation matrix, invert it, and read off Jij = −(C⁻¹)ij for i ≠ j
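The nMFT recipe is a one-liner once the correlation matrix is measured: invert C and read off the couplings. A sketch assuming the standard identification Jij = −(C⁻¹)ij off the diagonal (function name mine):

```python
import numpy as np

def nmft_couplings(S):
    """Naïve-mean-field inference from ±1 spike patterns.

    S: (T, N) array of ±1 patterns (rows are time bins).
    Returns (h, J) with J = −(C⁻¹) off the diagonal and
    h_i = tanh⁻¹ m_i − Σ_j J_ij m_j.
    """
    m = S.mean(axis=0)              # <S_i>
    C = np.cov(S.T)                 # connected correlations <S_iS_j> − m_i m_j
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)        # no self-couplings
    h = np.arctanh(m) - J @ m       # invert the mean-field equation for h_i
    return h, J
```

No sampling at all: a single matrix inversion replaces the Monte Carlo runs of Boltzmann learning.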

  37. 2. TAP approximation
  Thouless, Anderson, Palmer, Phil Mag 35 (1977); Kappen & Rodriguez, Neural Comp 10 (1998); Tanaka, PRE 58, 2302 (1998): “TAP equations” (improved MFT for spin glasses)
  mi = tanh( hi + Σj Jij mj − mi Σj Jij²(1 − mj²) )
  The last term is the Onsager “reaction term”.
  Inverting as before gives (C⁻¹)ij = −Jij − 2Jij² mi mj for i ≠ j: a quadratic equation to solve for Jij
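With the reaction term, inversion gives one quadratic per pair. A sketch assuming the standard TAP inversion (C⁻¹)ij = −Jij − 2Jij²mimj, taking the root of the quadratic that reduces to the nMFT answer when the magnetizations are small:

```python
import numpy as np

def tap_couplings(m, C):
    """TAP inference: solve 2 m_i m_j J² + J + (C⁻¹)_ij = 0 for each pair.

    m: (N,) means <S_i>;  C: (N, N) connected correlation matrix.
    The '+' root is kept so that J → −(C⁻¹)_ij as m → 0 (the nMFT limit).
    A negative discriminant would signal breakdown of the approximation.
    """
    Cinv = np.linalg.inv(C)
    mm = np.outer(m, m)
    disc = 1.0 - 8.0 * mm * Cinv                  # discriminant of the quadratic
    J = np.where(np.abs(mm) > 1e-12,
                 (np.sqrt(disc) - 1.0) / (4.0 * mm),
                 -Cinv)                            # m → 0: plain nMFT
    np.fill_diagonal(J, 0.0)
    return J
```

Still just linear algebra plus an elementwise square root, so the cost is the same order as nMFT.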

  41. 3. Independent-pair approximation
  Solve the two-spin problem: P(Si, Sj) ∝ exp( hi Si + hj Sj + Jij Si Sj )
  Solve for J: Jij = ¼ ln[ P(↑,↑)P(↓,↓) / (P(↑,↓)P(↓,↑)) ]
  Low-rate limit: Jij ≈ ¼ ln[ pij / (pi pj) ], with pi the spike probability of neuron i and pij the joint spike probability
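The two-spin formula can be evaluated directly from empirical pair frequencies, one pair at a time. A sketch (the pseudocount `eps` is my addition, to keep the logarithm finite when a joint state is never observed):

```python
import numpy as np

def pair_coupling(Si, Sj):
    """Independent-pair estimate from two ±1 spike trains of equal length:
    J = ¼ ln[ P(+,+) P(−,−) / (P(+,−) P(−,+)) ].

    In the low-rate limit P(−,−) ≈ 1, P(+,−) ≈ p_i, P(−,+) ≈ p_j,
    so this reduces to ≈ ¼ ln[ p_ij / (p_i p_j) ].
    """
    up_i, up_j = Si == 1, Sj == 1
    eps = 0.5 / len(Si)                  # pseudocount: no probability exactly 0
    p_pp = (up_i & up_j).mean() + eps
    p_mm = (~up_i & ~up_j).mean() + eps
    p_pm = (up_i & ~up_j).mean() + eps
    p_mp = (~up_i & up_j).mean() + eps
    return 0.25 * np.log(p_pp * p_mm / (p_pm * p_mp))
```

Because each pair is treated in isolation, this ignores all network effects; that is exactly what the comparisons below probe.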

  45. 4. Sessak-Monasson approximation
  A combination of naïve mean field theory and independent-pair approximations:
  Jij(SM) = Jij(nMFT) + Jij(pair) − Cij/[(1 − mi²)(1 − mj²)]
  (The last term, the leading-order piece common to both contributions, is subtracted to avoid double-counting)
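A schematic version of the combination. This sketch assumes the subtracted double-counting term is the leading-order piece Cij/[(1−mi²)(1−mj²)], which is the first-order expansion of both the nMFT and the independent-pair couplings in Cij (the exact expression is in the Sessak-Monasson reference); the function and argument names are mine:

```python
import numpy as np

def sm_couplings(m, C, J_pair):
    """Schematic Sessak-Monasson combination:
    nMFT term + independent-pair term − the piece counted in both,
    taken here (an assumption) as C_ij / [(1−m_i²)(1−m_j²)].

    m: (N,) means; C: (N, N) connected correlations;
    J_pair: (N, N) independent-pair couplings (e.g. from the two-spin formula).
    """
    L = 1.0 - m**2                        # single-site variances 1 − m_i²
    double_counted = C / np.outer(L, L)   # leading order of both contributions
    J = -np.linalg.inv(C) + J_pair - double_counted
    np.fill_diagonal(J, 0.0)
    return J
```

By construction, when J_pair itself is only accurate to leading order, the result collapses back to plain nMFT, so the gain comes entirely from the higher-order part of the pair term.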

  48. Comparing approximations: scatter plots of inferred vs. true couplings for N = 20 and N = 200 [figure panels: nMFT, ind pair, low-rate, TAP, SM, TAP/SM], with TAP/SM the winner

  49. Error measures [figure: reconstruction error for SM/TAP, SM, TAP, nMFT, ind pair, and low-rate approximations]

  50. N-dependence: How do the inferred couplings depend on the size of the set of neurons used in the inference algorithm?
