
Linear Network Theory and Sloppy Models


Presentation Transcript


  1. Linear Network Theory and Sloppy Models Mark Goldman Center for Neuroscience UC Davis

  2. Outline • Linear network theory essentials • Nonlinear networks • Fitting network models and the “sloppy models” problem

  3. Issue: How do neurons accumulate & store signals in working memory? • In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds: a stimulus is accumulated and then stored (working memory) in the neuronal firing rates over time. • Puzzle: most neurons intrinsically have brief memories; a neuron receiving synaptic input filters the input stimulus into its firing rate r with a time constant τ_neuron of only ~10-100 ms.
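
A minimal sketch of the "brief memory" point above, assuming the standard leaky rate-model form of the intrinsic dynamics (the slide shows this only graphically):

```latex
\tau_{\mathrm{neuron}}\,\frac{dr}{dt} = -r + \text{(synaptic input)},
\qquad
r(t) = r(0)\,e^{-t/\tau_{\mathrm{neuron}}}\ \text{once the input ends},
```

so with τ_neuron ~ 10-100 ms an isolated neuron forgets its input within a fraction of a second, far shorter than the ~1-10 s needed for working memory.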

  4. The Oculomotor Neural Integrator: "tuning curve" persistent activity stores the running total of input commands. Eye-velocity-coding command neurons (excitatory and inhibitory) drive integrator neurons, whose firing rates track eye position. [Figure: integrator-neuron firing rate and eye position over time (1 sec scale bar); data from Aksay et al., Nature Neuroscience, 2001; picture of eye from MarinEyes]

  5. Network Architecture: 4 neuron populations (left-side and right-side, excitatory and inhibitory), connected by recurrent excitation within each side and recurrent (dis)inhibition across the midline, and driven by background inputs & eye movement commands. [Figure: firing rate (0-100 Hz) vs. eye position (L to R) for left-side and right-side neurons; Aksay et al., 2000]

  6. Standard Model: Network Positive Feedback, via 1) recurrent excitation and 2) recurrent (dis)inhibition (Machens et al., Science, 2005). [Figure: in response to a command input, a typical isolated single neuron's firing rate decays with time constant τ_neuron, whereas a neuron receiving network positive feedback maintains its firing rate over time.]
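
A minimal sketch of how positive feedback extends the neuronal time constant, assuming the standard linear rate model behind this slide (w, the net recurrent feedback gain, is an assumed symbol not shown in the transcript):

```latex
\tau\,\frac{dr}{dt} = -r + w\,r + I(t) = -(1-w)\,r + I(t)
\quad\Longrightarrow\quad
\tau_{\mathrm{eff}} = \frac{\tau}{1-w},
```

so as w approaches 1 the effective memory time τ_eff diverges, allowing a ~100 ms neuron embedded in the network to hold activity for seconds.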

  7. Many-Neuron Patterns of Activity Represent Eye Position: eye position is represented by location along a low-dimensional manifold ("line attractor"). [Figure: joint activity of 2 neurons during saccades traces out the attractor line; H.S. Seung, D. Lee]

  8. “Chalk talk” section: Linear network theory

  9. Line Attractor Picture of the Neural Integrator. Geometrical picture of the eigenvectors in the (r1, r2) firing-rate plane: no decay along the direction of the eigenvector with eigenvalue = 1; decay along the directions of eigenvectors with eigenvalue < 1. The resulting set of stable states is a "Line Attractor" or "Line of Fixed Points".
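
A minimal numerical sketch of this geometrical picture, using a hypothetical 2-neuron weight matrix (not the fitted integrator network):

```python
import numpy as np

# Minimal 2-neuron line-attractor sketch (hypothetical weights, not the fitted model).
# Dynamics: tau * dr/dt = -r + W r.  This W has eigenvalue 1 along (1,1) (no decay:
# the attractor line) and eigenvalue 0 along (1,-1) (fast decay back onto the line).
tau = 0.1                          # intrinsic time constant (s)
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
print(np.linalg.eigvals(W))        # eigenvalues 1 and 0 (order may vary)

dt, T = 0.001, 2.0
r = np.array([1.0, 0.0])           # start off the attractor line
for _ in range(int(T / dt)):
    r = r + dt / tau * (-r + W @ r)
print(r)                           # ~[0.5, 0.5]: the (1,1) component persists,
                                   # the (1,-1) component has decayed away
```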

  10. Next up… • 1) A few comments on linear networks: complex eigenvalues; non-identical time constants • 2) Nonlinear networks & network fitting • 3) The problem of “sloppy models”: how to determine what was important in one’s model fits

  11. General Case: Unequal τ's, Complex λ's. Re-write the equations in matrix form, then: • Calculate the eigenvectors and eigenvalues. • Eigenvalues have a typical (generally complex) form. • The corresponding eigenvector component has simple exponential/oscillatory dynamics (a sketch of these expressions follows below).
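
The equations on this slide appear only as images; a plausible reconstruction of their typical form, assuming the linear rate model τ dr/dt = −r + W r used throughout and expanding r(t) = Σ_k c_k(t) v_k over the eigenvectors v_k of W:

```latex
\tau\,\frac{dc_k}{dt} = (\lambda_k - 1)\,c_k
\quad\Longrightarrow\quad
c_k(t) = c_k(0)\,e^{(\lambda_k - 1)t/\tau},
\qquad
\lambda_k = \operatorname{Re}\lambda_k + i\,\operatorname{Im}\lambda_k,
```

so each eigenvector component decays (or grows) with time constant τ/(1 − Re λ_k) and oscillates at angular frequency Im λ_k/τ; unequal intrinsic time constants can be handled by rewriting the system with a diagonal matrix of τ's before diagonalizing.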

  12. Nonlinear networks & Fitting networks to data

  13. Nonlinear Network Model. The firing rate dynamics of each neuron are coupled nonlinear equations (mathematically intractable?): the firing rate change of neuron i is the sum of an intrinsic leak, same-side excitation (weights W_ipsi), opposite-side inhibition (weights W_contra), a tonic background input, and a burst command input. W_ij = weight of the connection from neuron j to neuron i. [Figure: network outputs driven by burst commands & tonic background inputs.]

  14. Nonlinear Network Model (continued): the same coupled nonlinear equations as above. For persistent activity, the intrinsic leak, same-side excitation, opposite-side inhibition, and background input must sum to 0 at the stored firing rates; when they do, the network acts as an integrator of its burst command inputs.

  15. Fitting the Model. Fitting condition for neuron i: the current needed to maintain firing rate r must balance the total excitatory current received, the total inhibitory current received, and the background input. Knowns: the rates r, known at each eye position (tuning curves), and f(r), known from single-neuron experiments (not shown). Unknowns: the synaptic weights W_ij > 0 and external inputs T_i, and the synaptic nonlinearities s_exc(r), s_inh(r). Assuming a form for s_exc,inh(r) turns the fit into a constrained linear regression for W_ij and T_i (data = rates r_i at different eye positions), as sketched below.
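
A minimal sketch of this constrained-regression step for one neuron, with hypothetical placeholder choices for f(r), s_exc(r), s_inh(r), and the tuning-curve data (none of these are the published values):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Placeholder current-rate relation and synaptic nonlinearities (assumed forms).
def f(r):      return r
def s_exc(r):  return np.tanh(r / 50.0)
def s_inh(r):  return np.tanh(r / 50.0)

n_pos, n_neurons = 20, 8
rng = np.random.default_rng(0)
r = 100 * rng.random((n_pos, n_neurons))      # stand-in tuning-curve rates r[k, j]
i = 0                                         # fit neuron i

# Design matrix: one column per presynaptic weight (for brevity all presynaptic neurons
# are treated as excitatory; inhibitory ones would enter through s_inh with the opposite
# sign), plus a final column of ones for the background input T_i.
A = np.hstack([s_exc(r), np.ones((n_pos, 1))])
b = f(r[:, i])                                # current needed at each eye position

# Constrain the weights to be >= 0; leave T_i unconstrained.
lb = np.concatenate([np.zeros(n_neurons), [-np.inf]])
ub = np.full(n_neurons + 1, np.inf)
fit = lsq_linear(A, b, bounds=(lb, ub))
W_i, T_i = fit.x[:-1], fit.x[-1]
print(W_i, T_i)
```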

  16. Fitting the Model. Same fitting condition for neuron i as above: current needed to maintain firing rate r = total excitatory current received + total inhibitory current received + background input. Combining the recurrent inputs into one term gives …the form of a standard regression problem!

  17. Fitting the Model (build slide): repeats the fitting condition above, with the recurrent inputs combined into one term.

  18. Fitting the Model. The same fitting condition (current needed to maintain firing rate r, balanced by the excitatory, inhibitory, and background inputs) extends to the general time-varying problem, assuming τ is known.

  19. Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron. The network integrates its inputs, and all neurons precisely match the tuning curve data. [Figure: example model neuron voltage trace and firing rate (Hz) vs. time (sec); gray: raw firing rate, black: smoothed rate, green: perfect integral. Tuning-curve panel: solid lines: experimental tuning curves, boxes: model rates (& variability).]

  20. …But Many Very Different Networks Give Near-Perfect Model Performance. Circuits with very different connectivity (e.g., local excitation vs. global excitation) give nearly identical performance. [Figure: connectivity matrices over the left/right excitatory and inhibitory populations for two such circuits, and their matching left-side and right-side outputs.]

  21. “Sloppy” Models (Models with poorly constrained parameters)

  22. Motivation: “Sloppy” Behavior in Identified Neurons. Puzzle: highly variable data from different instances of a crustacean bursting neuron with highly stereotypical output. Golowasch et al., J. Neurophys., 2002

  23. “Sloppy” Behavior in a Single-Neuron Bursting Model. [Figure: traces from model neuron 1 and model neuron 2 (time in ms, ~1000 ms shown); the highlighted models produce 3 spikes per burst, other colors show models with different numbers of spikes per burst.] Goldman et al., J. Neurosci., 2001

  24. Sensitivity Analysis: Which features of the connectivity are most critical? The curvature of the cost function is described by the “Hessian” matrix of 2nd derivatives: diagonal elements give the sensitivity to varying a single parameter; off-diagonal elements give interactions. [Figure: cost function surface C over weights W1, W2, with a sensitive direction (high curvature) and an insensitive direction (low curvature).] (Reference: see the work of the J.P. Sethna group)

  25. Sensitivity Analysis (continued): diagonal elements of the Hessian give the sensitivity to varying a single parameter; off-diagonal elements give interactions between pairs of parameters; the eigenvectors/eigenvalues identify the patterns of weight changes to which the network is most (and least) sensitive. A numerical sketch follows below. [Figure: same cost-function surface C over W1, W2, with sensitive (high-curvature) and insensitive (low-curvature) directions.] (Reference: see the work of the J.P. Sethna group)
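
A minimal numerical sketch of this sensitivity analysis: a finite-difference Hessian of a toy cost function (a hypothetical stand-in for the model-fitting cost) and its eigen-decomposition into sensitive and sloppy directions:

```python
import numpy as np

def cost(p):
    # Toy cost: very sensitive to p[0] + p[1], nearly insensitive to p[0] - p[1].
    return 100.0 * (p[0] + p[1] - 1.0) ** 2 + 1e-3 * (p[0] - p[1]) ** 2

def hessian(C, p, eps=1e-4):
    # Central-difference estimate of H[a, b] = d^2 C / dp_a dp_b.
    n = len(p)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            pp = p.copy(); pp[a] += eps; pp[b] += eps
            pm = p.copy(); pm[a] += eps; pm[b] -= eps
            mp = p.copy(); mp[a] -= eps; mp[b] += eps
            mm = p.copy(); mm[a] -= eps; mm[b] -= eps
            H[a, b] = (C(pp) - C(pm) - C(mp) + C(mm)) / (4 * eps ** 2)
    return H

p0 = np.array([0.5, 0.5])
H = hessian(cost, p0)
evals, evecs = np.linalg.eigh(H)
print(evals)    # large eigenvalue -> sensitive (stiff) direction, small -> sloppy direction
print(evecs)    # columns: the corresponding patterns of parameter changes
```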

  26. Sensitive & Insensitive Directions in the Connectivity Matrix (of the model-fitting cost function). Sensitive directions: Eigenvector 1: make all connections more excitatory; Eigenvector 2: weaken excitation & inhibition; Eigenvector 3: vary low- vs. high-threshold neurons. Insensitive direction: Eigenvector 10: offsetting changes in weights. [Figure: effect of perturbing along each eigenvector (e.g., more exc./less inh. vs. less exc./less inh.) on the right-side average activity.] Fisher et al., Neuron, 2013

  27. Diversity of Solutions: Example Circuits Differing Only in Insensitive Components. Two circuits with different connectivity give near-identical performance and differ only in their insensitive components. [Figure: the two connectivities, their near-identical left-side and right-side outputs, and the log(difference) between the circuits along each component.]

  28. Exercises: • 1) Symmetric mutual-inhibitory/self-excitatory network • 2) Sensitivity analysis of the autapse: sketch the model’s sloppy & sensitive directions in (w, τ), assuming the experimental data is an exponential decay

  29. Extra Slides: Functionally feedforward & non-normal networks

  30. Recent data: “time cells” observed in rat hippocampal recordings during a delayed-comparison task show a feedforward progression of activity (MacDonald et al., Neuron, 2011; cf. Goldman, Neuron, 2009). (See also similar data during spatial-navigation memory tasks: Pastalkova et al. 2008; Harvey et al. 2012)

  31. Response of Individual Neurons in Line Attractor Networks. All neurons exhibit a similar slow decay, due to the strong coupling that mediates the positive feedback. Problem: this does not reproduce the heterogeneity in neuronal activity seen experimentally! Problem 2: to generate stable activity for 2 seconds (+/- 5%) requires a 10-second-long exponential decay. [Figure: neuronal firing rates and summed output vs. time (sec).]

  32. Feedforward Networks Can Integrate! (Goldman, Neuron, 2009) Simplest example: Chain of neuron clusters that successively filter an input

  33. Feedforward Networks Can Integrate! (Goldman, Neuron, 2009) Simplest example: a chain of neuron clusters that successively filter an input. The summed output is the integral of the input (up to durations ~Nτ), and one can prove this works analytically; see the sketch below.
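
A minimal simulation sketch of this claim, with an assumed form for the chain and illustrative parameters (not those of the original model):

```python
import numpy as np

# Feedforward chain: each cluster low-pass filters the previous one,
#   tau * dr_1/dt = -r_1 + u(t),   tau * dr_k/dt = -r_k + r_{k-1}   (k = 2..N).
# For times much less than N*tau, tau * sum_k r_k(t) approximates the integral of u.
tau, N = 0.1, 20                    # cluster time constant (s), number of clusters
dt, T = 0.001, 1.0
r = np.zeros(N)
true_integral = 0.0

for step in range(int(T / dt)):
    t = step * dt
    u = 1.0 if t < 0.05 else 0.0    # brief input pulse (area 0.05)
    drive = np.concatenate(([u], r[:-1]))
    r += dt / tau * (-r + drive)
    true_integral += u * dt

print(true_integral, tau * r.sum())  # ~0.05 vs ~0.05: the scaled, summed activity still
                                     # holds the integral ~1 s after the pulse; it only
                                     # decays away as t approaches N*tau = 2 s
```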

  34. Generalization to Coupled Networks: feedforward transitions between patterns of activity. Map each neuron of a recurrent (coupled) network to a combination of neurons by applying a coordinate rotation matrix R (Schur decomposition), yielding a functionally feedforward network; see the sketch below. [Figure: connectivity matrix W_ij and geometric picture of the rotation.] (Math of Schur: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008)
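
A minimal sketch of the Schur-rotation idea, applied to a small arbitrary (nonnormal) weight matrix rather than the fitted network:

```python
import numpy as np
from scipy.linalg import schur

# Any connectivity matrix W can be rotated, W = R T R^T (R orthogonal for the real Schur
# form), into a (quasi-)triangular matrix T: a purely feedforward "functional" connectivity
# between the rotated activity patterns.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4)) * 0.3
T, R = schur(W)                      # real Schur form: T quasi-upper-triangular, R orthogonal
print(np.allclose(W, R @ T @ R.T))   # True: the same network in rotated coordinates
print(np.round(T, 2))                # (block-)triangular: feedforward between Schur patterns
```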

  35. Responses of Functionally Feedforward Networks. [Figure: functionally feedforward activity patterns, the corresponding feedforward-network activity patterns & neuronal firing rates, and the effect of stimulating pattern 1.]

  36. Math Puzzle: Eigenvalue analysis does not predict the long time scale of the response! Line attractor networks show persistent neuronal responses and have a persistent mode (an eigenvalue at 1). Feedforward networks show similarly long-lasting responses, yet have no persistent mode in their eigenvalue spectrum??? [Figure: neuronal responses and eigenvalue spectra, Imag(λ) vs. Real(λ), for the two network types.] (Goldman, Neuron, 2009; see also: Murphy & Miller, Neuron, 2009; Ganguli & Sompolinsky, PNAS, 2008)

  37. Math Puzzle: Schur vs. Eigenvector Decompositions

  38. Answer to Math Puzzle: Pseudospectral analysis (Trefethen & Embree, Spectra & Pseudospectra, 2005). Eigenvalues λ: • satisfy the equation (W − λI)v = 0 • govern long-time asymptotic behavior. Pseudoeigenvalues λ_ε: • the set of all values λ_ε that satisfy the inequality ||(W − λ_ε I)v|| < ε • govern transient responses • can differ greatly from the eigenvalues when the eigenvectors are highly non-orthogonal (nonnormal matrices). A numerical sketch follows below. [Figure: black dots: eigenvalues; surrounding contours: colored boundaries of the sets of pseudoeigenvalues for different values of ε.] (from the Supplement to Goldman, Neuron, 2009)
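
A minimal sketch of a pseudospectrum computation based on the definition above (z lies in the ε-pseudospectrum when the smallest singular value of zI − W is below ε), using a small feedforward chain as a stand-in for the network:

```python
import numpy as np

N = 10
W = np.diag(np.full(N - 1, 0.9), k=-1)       # pure feedforward chain: all eigenvalues are 0

def min_singular_value(z, W):
    # Smallest singular value of (z*I - W); z is in the eps-pseudospectrum when this < eps.
    return np.linalg.svd(z * np.eye(len(W)) - W, compute_uv=False)[-1]

eps = 1e-2
xs = np.linspace(-1.5, 1.5, 61)
grid = np.array([[min_singular_value(x + 1j * y, W) for x in xs] for y in xs])
print((grid < eps).sum(), "grid points lie in the eps =", eps, "pseudospectrum")
# Although every eigenvalue of W is 0, the pseudospectrum extends far from 0, which is
# what produces the long-lived transient responses discussed above.
```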

  39. Answer to Math Puzzle: Pseudo-eigenvalues. Normal networks: persistent neuronal responses with a persistent mode (eigenvalue at 1). Feedforward networks: persistent-looking neuronal responses, but no persistent mode among the eigenvalues??? [Figure: neuronal responses and eigenvalue spectra, Imag(λ) vs. Real(λ), for the two cases.]

  40. Answer to Math Puzzle: Pseudo-eigenvalues (continued). Normal networks: persistent mode at eigenvalue 1. Feedforward networks: their pseudoeigenvalues include a mode that transiently acts like a persistent mode. [Figure: neuronal responses, eigenvalues, and pseudoeigenvalue contours for the two cases.] (Goldman, Neuron, 2009)

  41. Practical program for approaching equations coupled through a term Mx: • Step 1: Find the eigenvalues and eigenvectors of M (eig(M) in MATLAB). • Step 2: Decompose x into its eigenvector components. • Step 3: Stretch/scale each eigenvector component by its eigenvalue. • Step 4: (Solve for c and) transform back to the original coordinates. A worked sketch follows below.
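
A minimal worked sketch of the four steps for a hypothetical 2x2 matrix M, taking the coupled equations to be dx/dt = Mx and using numpy.linalg.eig in place of MATLAB's eig:

```python
import numpy as np
from scipy.linalg import expm

M = np.array([[-0.5, 0.3],
              [0.2, -0.8]])         # hypothetical coupling matrix
x0 = np.array([1.0, 0.5])           # hypothetical initial condition
t = 2.0

# Step 1: eigenvalues and eigenvectors of M.
lams, V = np.linalg.eig(M)
# Step 2: decompose x0 into eigenvector components, i.e. solve V c = x0.
c = np.linalg.solve(V, x0)
# Step 3: stretch/scale each eigenvector component: c_k(t) = c_k(0) * exp(lambda_k * t).
c_t = c * np.exp(lams * t)
# Step 4: transform back to the original coordinates.
x_t = V @ c_t
print(np.real_if_close(x_t))
print(expm(M * t) @ x0)              # sanity check against the matrix-exponential solution
```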

  42.-46. (Slides 42-46 repeat the four-step program of slide 41.)
