
Models of Learning


Presentation Transcript


  1. Models of Learning • Hebbian ~ coincidence • Recruitment ~ one trial • Supervised ~ correction (backprop) • Reinforcement ~ delayed reward • Unsupervised ~ similarity

  2. Hebb’s Rule • The key idea underlying theories of neural learning goes back to the Canadian psychologist Donald Hebb and is called Hebb’s rule. • From an information processing perspective, the goal of the system is to increase the strength of the neural connections that are effective.

  3. Hebb (1949) “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased” From The Organization of Behavior.

  4. Hebb’s rule • Each time that a particular synaptic connection is active, see if the receiving cell also becomes active. If so, the connection contributed to the success (firing) of the receiving cell and should be strengthened. If the receiving cell was not active in this time period, our synapse did not contribute to that success and should be weakened.
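The rule on this slide translates directly into a weight update. Below is a minimal sketch in Python; the learning rate eta, the binary activations, and the symmetric weakening step are illustrative assumptions, not details from the slides.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen synapses whose presynaptic activity coincided with
    postsynaptic firing; weaken active synapses when the cell stayed silent."""
    if post > 0:          # receiving cell fired: credit the active inputs
        return w + eta * pre
    else:                 # receiving cell silent: penalize the active inputs
        return w - eta * pre

# Toy usage: three synapses, with inputs 1 and 3 active.
w = np.array([0.5, 0.5, 0.5])
pre = np.array([1.0, 0.0, 1.0])
w = hebbian_update(w, pre, post=1)   # cell fired, so the active synapses strengthen
print(w)                             # [0.6 0.5 0.6]
```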

  5. LTP and Hebb’s Rule • Hebb’s Rule: neurons that fire together wire together • Long Term Potentiation (LTP) is the biological basis of Hebb’s Rule • Calcium channels are the key mechanism [Figure: synapses being strengthened and weakened]

  6. Chemical realization of Hebb’s rule • It turns out that there are elegant chemical processes that realize Hebbian learning at two distinct time scales • Early Long Term Potentiation (LTP) • Late LTP • These provide the temporal and structural bridge from short term electrical activity, through intermediate memory, to long term structural changes.

  7. Calcium Channels Facilitate Learning • In addition to the synaptic channels responsible for neural signaling, there are also Calcium-based channels that facilitate learning. • As Hebb suggested, when a receiving neuron fires, chemical changes take place at each synapse that was active shortly before the event.

  8. Long Term Potentiation (LTP) • These changes make each of the winning synapses more potent for an intermediate period, lasting from hours to days (LTP). • In addition, repetition of a pattern of successful firing triggers additional chemical changes that lead, in time, to an increase in the number of receptor channels associated with successful synapses - the requisite structural change for long term memory. • There are also related processes for weakening synapses and also for strengthening pairs of synapses that are active at about the same time.

  9. The Hebb rule is observed as long-term potentiation (LTP) in the hippocampus [Figure: Schaffer collateral pathway onto pyramidal cells; 1 sec stimuli at 100 Hz]

  10. During normal low-frequency transmission, glutamate interacts with NMDA, non-NMDA (AMPA), and metabotropic receptors. With high-frequency stimulation, calcium enters the cell through the NMDA receptor channels.

  11. [Figure: enhanced transmitter release and the AMPA receptor response]

  12. Early and late LTP • (Kandel, ER, JH Schwartz and TM Jessell (2000) Principles of Neural Science. New York: McGraw-Hill.) • Experimental setup for demonstrating LTP in the hippocampus. The Schaffer collateral pathway is stimulated to cause a response in pyramidal cells of CA1. • Comparison of EPSP size in early and late LTP with the early phase evoked by a single train and the late phase by 4 trains of pulses.

  13. Computational Models Based on Hebb’s Rule The activity-dependent tuning of the developing nervous system, as well as post-natal learning and development, are well modeled by Hebb’s rule. Explicit memory in mammals appears to involve LTP in the hippocampus. Many computational modeling systems incorporate versions of Hebb’s rule. • Winner-Take-All: • Units compete to learn, or update their weights • The processing element with the largest output is declared the winner • Lateral inhibition suppresses its competitors • Recruitment Learning • Learning Triangle Nodes • LTP in Episodic Memory Formation

  14. WTA: Stimulus ‘at’ is presented [Figure: two category nodes (1, 2) connected to letter units a, t, o]

  15. Competition starts at the category level

  16. Competition resolves

  17. Hebbian learning takes place • Category node 2 now represents ‘at’

  18.–21. Presenting ‘to’ leads to activation of category node 1 [Figure: successive animation frames of the same network]

  22. Category 1 is established through Hebbian learning as well • Category node 1 now represents ‘to’
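The walkthrough above can be reproduced with a few lines of competitive learning. The sketch below is a reconstruction, not code from the course: the unit-length input coding, the learning rate, and the weight normalization are standard competitive-learning choices assumed here, and which node captures which word depends on tie-breaking (indices are zero-based).

```python
import numpy as np

letters = ['a', 't', 'o']

def encode(word):
    """Unit-length letter-unit activation vector for a stimulus."""
    x = np.array([1.0 if c in word else 0.0 for c in letters])
    return x / np.linalg.norm(x)

# Two category nodes, initially uncommitted (identical weak weights).
W = np.full((2, 3), 1.0 / np.sqrt(3))

def present(word, eta=1.0):
    x = encode(word)
    winner = int(np.argmax(W @ x))        # competition: largest net input wins
    W[winner] = W[winner] + eta * x       # Hebbian strengthening of the winner only
    W[winner] /= np.linalg.norm(W[winner])
    return winner

print(present('at'))  # 0 -> the first category node comes to represent 'at'
print(present('to'))  # 1 -> the other node wins the competition and learns 'to'
print(present('at'))  # 0 -> the learned categories are stable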

  23. Hebb’s rule is not sufficient • What happens if the neural circuit fires perfectly, but the result is very bad for the animal, like eating something sickening? • A pure invocation of Hebb’s rule would strengthen all participating connections, which can’t be good. • On the other hand, it isn’t right to weaken all the active connections involved; much of the activity was just recognizing the situation – we would like to change only those connections that led to the wrong decision. • No one knows how to specify a learning rule that will change exactly the offending connections when an error occurs. • Computer systems, and presumably nature as well, rely upon statistical learning rules that tend to make the right changes over time. More in later lectures.

  24. Hebb’s rule is insufficient • Should you “punish” all the connections? [Figure: network linking tastebud, tastes rotten, eats food, gets sick, drinks water]

  25. Models of Learning • Hebbian ~ coincidence • Recruitment ~ one trial • Supervised ~ correction (backprop) • Reinforcement ~ delayed reward • Unsupervised ~ similarity

  26. Recruiting connections • Given that LTP involves synaptic strength changes and Hebb’s rule involves coincident-activation-based strengthening of connections • How can connections between two nodes be recruited using Hebb’s rule?

  27. The Idea of Recruitment Learning • Suppose we want to link up node X to node Y • The idea is to pick two nodes in the middle to link them up • Can we be sure that we can find a path to get from X to Y? • The point is, with a fan-out of 1000, if we allow 2 intermediate layers, we can almost always find a path [Figure: X linked to Y through K intermediate layers of N units; B outgoing links per unit; F = B/N]

  28.–29. [Figures: candidate paths from X to Y through random intermediate nodes]

  30. Finding a Connection
  P = (1 − F)^(B^K), the probability of NO link between X and Y
  N = number of units in a “layer”
  B = number of randomly outgoing links per unit
  F = B/N, the branching factor
  K = number of intermediate layers (2 in the example)
  Example sizes: N = 10^6, 10^7, 10^8
  # Paths = (1 − P^(K−1)) · N · F = (1 − P^(K−1)) · B
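Plugging the slide’s numbers into the formula shows why a path almost always exists. A small check, assuming the fan-out B = 1000 and K = 2 from the example:

```python
import math

def p_no_link(N, B, K):
    """P = (1 - F)^(B**K): probability that NO path links X to Y,
    with F = B/N and K intermediate layers (formula from slide 30)."""
    F = B / N
    # Work in log space: raising (1 - F) to the B**K power underflows otherwise.
    return math.exp((B ** K) * math.log1p(-F))

for N in (10**6, 10**7, 10**8):
    print(N, p_no_link(N, B=1000, K=2))
```

With B = 1000 and two intermediate layers, the no-link probability is effectively zero for N = 10^6 and still only on the order of 10^-5 at N = 10^8, which is the point of the slide.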

  31. Finding a Connection in Random Networks For networks with N nodes and branching factor B, there is a high probability of finding good links (Valiant 1995).

  32. Recruiting a Connection in Random Networks • Informal Algorithm • Activate the two nodes to be linked • Have nodes with double activation strengthen their active synapses (Hebb) • There is evidence for a “now print” signal based on LTP (episodic memory)
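The informal algorithm can be sketched on a random graph. Everything below (the network size, fan-out, and adjacency representation) is an illustrative assumption; the point is just that doubly activated intermediate nodes are easy to find and then strengthen Hebbian-style.

```python
import numpy as np

rng = np.random.default_rng(1)
N, B = 1000, 100                     # small network for illustration
adj = rng.random((N, N)) < B / N     # random connectivity, fan-out ~B per node

def recruit(x, y):
    """Find intermediate nodes activated by both X and Y; these doubly
    activated nodes strengthen their active synapses (Hebb) and are recruited."""
    from_x = np.flatnonzero(adj[x])          # nodes X activates
    onto_y = np.flatnonzero(adj[:, y])       # nodes with a link onto Y
    return np.intersect1d(from_x, onto_y)    # doubly activated candidates

path_nodes = recruit(x=0, y=1)
print(len(path_nodes), "candidate nodes recruited")  # expected around B*B/N = 10
```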

  33. Triangle nodes and feature structures [Figure: two triangle nodes, each binding units A, B, C]

  34. Representing concepts using triangle nodes

  35. Recruiting triangle nodes • Let’s say we are trying to remember a green circle • Currently there are weak connections between the concepts (dotted lines) [Figure: has-color and has-shape triangle nodes weakly linked to blue, green, round, oval]

  36. Strengthen these connections • and you end up with this picture [Figure: Green circle bound via has-color to green and via has-shape to round]

  37.–38. [Figures: the recruited triangle node binding Has-color = Green and Has-shape = Round]
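One way to make the triangle-node idea concrete: a node that binds three items so that activating any two retrieves the third. This is a toy sketch, not the connectionist implementation from the course; only the names (has-color, green, round) come from the slides.

```python
class TriangleNode:
    """Binds three items; given any two of them, the third is retrieved."""
    def __init__(self, a, b, c):
        self.ends = (a, b, c)

    def complete(self, *known):
        (missing,) = set(self.ends) - set(known)
        return missing

# Bindings recruited for the 'green circle' concept of slides 35-36:
bindings = [
    TriangleNode('green-circle', 'has-color', 'green'),
    TriangleNode('green-circle', 'has-shape', 'round'),
]

color = bindings[0].complete('green-circle', 'has-color')  # -> 'green'
shape = bindings[1].complete('green-circle', 'has-shape')  # -> 'round'
print(color, shape)
```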

  39. Back Propagation • Jerome Feldman • CS182/CogSci110/Ling109 • Spring 2007

  40. Types of Activation functions
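The figure for this slide is not reproduced in the transcript; the sketch below shows the standard activation functions such plots compare. Which ones the original figure actually showed is an assumption.

```python
import numpy as np

def step(x):     # hard threshold, as in the classic perceptron
    return np.where(x >= 0, 1.0, 0.0)

def sigmoid(x):  # smooth, differentiable squashing function used by backprop
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):     # symmetric squashing to the interval (-1, 1)
    return np.tanh(x)

xs = np.array([-2.0, 0.0, 2.0])
print(step(xs), sigmoid(xs), tanh(xs))
```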

  41. Linearly Separable Patterns • An architecture for a perceptron that can solve this type of decision-boundary problem: an “on” response in the output node represents one class, and an “off” response represents the other.
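For a linearly separable problem, the classic perceptron error-correction rule finds such a boundary. A minimal sketch, using Boolean AND as the separable example (the task and the plain unit-step updates are illustrative choices):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)   # AND: linearly separable
w = np.zeros(2); b = 0.0

for _ in range(10):                       # a few passes suffice here
    for x, t in zip(X, T):
        y = 1.0 if x @ w + b >= 0 else 0.0
        w += (t - y) * x                  # perceptron error-correction rule
        b += (t - y)

print([int(x @ w + b >= 0) for x in X])   # [0, 0, 0, 1]
```

The same loop never converges on XOR, which is what motivates the multi-layer networks on the following slides.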

  42. Multi-layer Feed-forward Network

  43. Boolean XOR [Figure: two-layer threshold network computing XOR: a hidden OR unit (weights 1, 1; bias −0.5) and a hidden AND unit (weights 1, 1; bias −1.5) feed an output unit o with weights 1 and −1 and bias −0.5; inputs x1, x2]
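The weights on this slide can be checked directly. The reading of the garbled figure as OR/AND hidden units with the biases shown is my reconstruction; the sketch below verifies that it does compute XOR (“OR but not AND”):

```python
import numpy as np

def step(x):
    return (x >= 0).astype(float)

def xor_net(x1, x2):
    """Two-layer threshold network from slide 43."""
    x = np.array([x1, x2], dtype=float)
    h_or  = step(x @ np.array([1.0, 1.0]) - 0.5)   # OR unit (bias -0.5)
    h_and = step(x @ np.array([1.0, 1.0]) - 1.5)   # AND unit (bias -1.5)
    return step(1.0 * h_or - 1.0 * h_and - 0.5)    # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor_net(a, b)))   # prints the XOR truth table
```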

  44. Pattern Separation and NN architecture

  45. Supervised Learning - Backprop • How do we train the weights of the network? • Basic concepts • Use a continuous, differentiable activation function (sigmoid) • Use the idea of gradient descent on the error surface • Extend to multiple layers

  46. Backpropagation Algorithm [Figure: activations flow forward through the network; errors propagate backward]

  47. Backprop • To learn on data which is not linearly separable: • Build multiple layer networks (hidden layer) • Use a sigmoid squashing function instead of a step function.
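A compact sketch of the procedure in slides 45–47: sigmoid units, squared error, and plain gradient descent, trained here on XOR. The hidden-layer size, learning rate, epoch count, and seed are arbitrary choices, and with an unlucky initialization a net this small can stall in a local minimum.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)     # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)     # hidden -> output
eta = 0.5

for _ in range(10000):
    H = sigmoid(X @ W1 + b1)            # forward pass: activations flow forward
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)          # backward pass: output error delta
    dH = (dY @ W2.T) * H * (1 - H)      # hidden deltas via the chain rule
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(0)     # gradient-descent steps
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(0)

print(np.round(Y.ravel(), 2))           # should approach [0, 1, 1, 0]
```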

  48. Tasks • Unconstrained pattern classification • Credit assessment • Digit classification • Function approximation • Learning control • Stock prediction
