
On Bubbles and Drifts: Continuous attractor networks in brain models


Presentation Transcript


  1. On Bubbles and Drifts: Continuous attractor networks in brain models. Thomas Trappenberg, Dalhousie University, Canada

  2. Once upon a time ... (my CANN shortlist) • Wilson & Cowan (1973) • Grossberg (1973) • Amari (1977) • … • Sompolinsky & Hansel (1996) • Zhang (1997) • … • Stringer et al. (2002)

  3. It's just a 'Hopfield' net … [Figure: recurrent architecture with synaptic weights]

  4. In mathematical terms … updating network states (network dynamics), gain function, and weight kernel. [Equations shown as images on the slide; see the sketch below and the full model equations on slide 30.]
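
A plausible reconstruction of the slide's equations, assuming the standard Amari-style formulation and using the symbol definitions from slide 30 (A and σ are assumed symbols for the excitatory amplitude and width):

  \tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x-y)\, r(y,t)\, dy + I^{\mathrm{ext}}(x,t)

  r = g(u) = \frac{1}{1 + e^{-\beta (u - \theta)}}

  w(x-y) = A\, e^{-(x-y)^2 / 2\sigma^2} - C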

  5. Weights describe the effective interaction profile in the superior colliculus. TT, Dorris, Klein & Munoz, J. Cog. Neuro. 13 (2001)

  6. The network can form bubbles of persistent activity (in Oxford English: activity packets). [Figure: end states; a simulation sketch follows below.]
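
A minimal simulation sketch of packet formation in a 1-D ring CANN; all parameter values are illustrative assumptions, not those used in the talk:

import numpy as np

# Bubble (activity-packet) formation on a ring of N nodes.
N = 100
x = np.arange(N)
dist = np.minimum(np.abs(x[:, None] - x[None, :]),
                  N - np.abs(x[:, None] - x[None, :]))
w = np.exp(-dist**2 / (2 * 5.0**2)) - 0.5      # local excitation minus global inhibition

def g(u, beta=10.0, theta=0.0):
    # Sigmoidal gain function (slope beta, threshold theta)
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

tau, dt = 10.0, 1.0
u = np.zeros(N)
cue = np.exp(-dist[50]**2 / (2 * 5.0**2))      # transient input centred on node 50

for t in range(500):
    I_ext = cue if t < 100 else 0.0            # remove the cue after 100 steps
    u += dt / tau * (-u + w @ g(u) + I_ext)    # leaky-integrator dynamics

# A localized packet of activity should persist after the cue is removed.
print("peak node:", int(np.argmax(u)), "peak rate:", round(float(g(u).max()), 2))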

  7. Space is represented with activity packets in the hippocampal system. From Samsonovich & McNaughton, Path integration and cognitive mapping in a continuous attractor neural network model, J. Neurosci. 17 (1997)

  8. There are phase transitions in the weight-parameter space

  9. CANNs work with spiking neurons. Xiao-Jing Wang, Trends in Neurosci. 24 (2001)

  10. Shutting off also works in the rate model. [Figure: node activity over time]

  11. Various gain functions are used. [Figure: end states for different gain functions]
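
Typical choices of gain function, as a sketch (β and θ are the slope and threshold defined on slide 30):

  g(u) = \frac{1}{1 + e^{-\beta(u-\theta)}} \ \text{(sigmoid)}, \qquad g(u) = [u - \theta]_{+} \ \text{(threshold-linear)}, \qquad g(u) = \Theta(u - \theta) \ \text{(step, as in Amari 1977)}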

  12. CANNs can be trained with Hebbian learning. [Equations shown on the slide: Hebb rule and training pattern; see slide 30 and the training sketch after the next slide.]

  13. Normalization is important for a convergent method. • Random initial states • Weight normalization. [Figure: w(x,y) and w(x,50) over training time; a training sketch follows below.]
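
A sketch of the training procedure as described (random initial weights, Hebbian updates on localized training patterns, weight normalization); the parameter values and the row-wise form of the normalization are assumptions:

import numpy as np

# Hebbian training of the recurrent weights from Gaussian training
# patterns on a ring, with weight normalization for convergence.
N, sigma, eps = 100, 5.0, 0.01
x = np.arange(N)
w = 0.01 * np.random.randn(N, N)                   # random initial state

for epoch in range(100):
    for c in range(N):                             # one pattern per location
        d = np.minimum(np.abs(x - c), N - np.abs(x - c))
        r = np.exp(-d**2 / (2 * sigma**2))         # Gaussian training pattern
        w += eps * np.outer(r, r)                  # Hebb: dw_ij ~ r_i * r_j
    w /= np.linalg.norm(w, axis=1, keepdims=True)  # weight normalization

# w converges to a translation-invariant interaction profile w(x - y).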

  14. Gradient-descent learning is also possible (Kechen Zhang). Gradient descent with regularization = Hebb + weight decay
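
Read as an update rule, the stated equivalence can be sketched as follows (ε and λ are assumed symbols for the learning rate and regularization strength):

  \Delta w_{ij} = \epsilon \left( r_i\, r_j - \lambda\, w_{ij} \right)

i.e. gradient descent on a quadratic objective with an L2 regularizer contributes a correlational (Hebbian) term plus weight decay.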

  15. CANNs have a continuum of point attractors. [Figure: point attractors with basins of attraction; a line of point attractors.] The two can be mixed: Rolls, Stringer & Trappenberg, A unified model of spatial and episodic memory, Proceedings of the Royal Society B 269:1087-1093 (2002)

  16. Neuroscience applications of CANNs • Persistent activity (memory) and winner-takes-all (competition) • Working memory (e.g. Compte, Wang, Brunel, etc.) • Place and head-direction cells (e.g. Zhang, Redish, Touretzky, Samsonovich, McNaughton, Skaggs, Stringer et al.) • Attention (e.g. Olshausen, Salinas & Abbott, etc.) • Population decoding (e.g. Wu et al., Pouget, Zhang, Deneve, etc.) • Oculomotor programming (e.g. Kopecz & Schoener, Trappenberg) • etc.

  17. [Diagram of the oculomotor circuit: LIP, SEF, FEF, Thalamus, CN, SNpr, SC, RF, Cerebellum.] The superior colliculus integrates exogenous and endogenous inputs

  18. The superior colliculus is a CANN. TT, Dorris, Klein & Munoz, J. Cog. Neuro. 13 (2001)

  19. A CANN with adaptive input strength explains express saccades

  20. CANNs are great for population decoding (a fast implementation of pattern matching); see the sketch below
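
A minimal readout sketch: the network relaxation itself performs the pattern matching, and the packet position is then decoded, here with a population vector on the ring (the relaxation step is omitted; all values are illustrative assumptions):

import numpy as np

# Population decoding on a ring: noisy tuning-curve responses are read
# out as the angle of the population vector (the packet position).
N = 100
phi = 2 * np.pi * np.arange(N) / N

def decode(r):
    # Population-vector estimate of the encoded angle
    return float(np.angle(np.sum(r * np.exp(1j * phi))))

true_angle = 1.2
r = np.exp((np.cos(phi - true_angle) - 1) / 0.1)   # tuning-curve response
r += 0.05 * np.random.randn(N)                     # additive noise
print("decoded angle:", round(decode(r), 2))       # close to 1.2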

  21. CANNs (integrators) are stiff

  22. … and drift and jump. TT, ICONIP'98

  23. A modified CANN solves path integration (one common form of the modification is sketched below)
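
A common form of such a modification, following Zhang's mechanism for shifting the packet (an assumption about this slide, not a confirmed detail): a velocity signal gates an asymmetric component of the weights,

  \tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int \left[ w(x-y) + v(t)\, w'(x-y) \right] r(y,t)\, dy + I^{\mathrm{ext}}(x,t)

where v(t) is the movement-velocity signal and w' is the spatial derivative of the symmetric kernel; the packet then moves at a speed proportional to v.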

  24. CANNs can learn dynamic motor primitives. Stringer, Rolls, TT, de Araujo, Neural Networks 16 (2003).

  25. NMDA stabilization: drift is caused by asymmetries, which the NMDA-style stabilization (slide 30) counteracts

  26. CANNs can support multiple packets. Stringer, Rolls & TT, Neural Networks 17 (2004)

  27. How many activity packets can be stable? T.T., Neural Information Processing-Letters and Reviews, Vol. 1 (2003)

  28. Stabilization can be too strong. TT & Standage, CNS'04

  29. CANNs can discover dimensionality

  30. The model equations. Continuous dynamics (leaky integrator) with sigmoidal gain:

  \tau \frac{du_i(t)}{dt} = -u_i(t) + \frac{k}{N_w} \sum_j w_{ij}\, r_j(t) - C + I_i^{\mathrm{ext}}(t), \qquad r_i = \frac{1}{1 + e^{-\beta (u_i - \theta)}}

where u_i: activity of node i; r_i: firing rate; w_{ij}: synaptic efficacy matrix; C: global inhibition; I^{\mathrm{ext}}: visual input; \tau: time constant; k: scaling factor; N_w: #connections per node; \beta: slope; \theta: threshold. NMDA-style stabilization: firing rates above a threshold are clamped to the maximal rate. Hebbian learning: \delta w_{ij} \propto r_i\, r_j.
