
Synaptic Dynamics: Unsupervised Learning



Presentation Transcript


  1. Synaptic Dynamics: Unsupervised Learning, Part Ⅰ (Xiao Bing)

  2. [Diagram: Input → processing unit → Output]

  3. Outline • Learning • Supervised Learning and Unsupervised Learning • Supervised Learning and Unsupervised Learning in neural networks • Four Unsupervised Learning Laws


  5. Learning • Encoding: A system learns a pattern if the system encodes the pattern in its structure. • Change: A system learns, adapts, or “self-organizes” when sample data changes system parameters. • Quantization: A system learns only a small proportion of all patterns in the sampled pattern environment, so quantization is necessary.


  7. Encoding • A system has learned a stimulus-response pair (x, y) if it responds with y whenever x stimulates it. • If (x, y) is a sample from a function f: X → Y, the system has learned f if it responds with f(x) for every stimulus x in X.

  8. Encoding • A system has partially learned, or approximated, the function f if it responds with an output close to f(x) when stimulated with an input close to x.


  10. Change • We have learned calculus if our calculus-exam behavior has changed from failing to passing. • A system learns when pattern stimulation changes a memory medium and leaves it changed for some comparatively long stretch of time.

  11. Change Please note: • We identify learning with change in synapses, not in neurons.


  13. Quantization • Pattern space → (sampling) → sampled pattern space → (quantizing) → quantized pattern space. • Uniform sampling probability provides an information-theoretic criterion for an optimal quantization.

  14. Quantization • 1. Learning replaces old stored patterns with new patterns and forms “internal representations,” or prototypes, of sampled patterns. • 2. Learned prototypes define quantized patterns.

  15. Quantization • In neural network models, prototype patterns are represented as vectors of real numbers. • Learning then amounts to “adaptive vector quantization” (AVQ).

  16. Quantization Process of learning • Quantization partitions the pattern space into regions of quantization, or decision classes. • Learned prototype vectors define synaptic points in the pattern space. • The system learns if and only if some synaptic point moves in the pattern space (see also Figure 4.1, page 113).
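The AVQ learning process on slides 15–16 can be sketched in a few lines: prototype (synaptic) vectors quantize the pattern space, and a prototype moves only when a sampled pattern pulls it. This is a minimal illustration rather than the deck's exact algorithm; the Euclidean distance, winner-take-all rule, and learning rate are assumptions.

```python
import numpy as np

def avq_step(x, prototypes, lr=0.05):
    """One adaptive-vector-quantization step: move the prototype
    (synaptic point) nearest to the sampled pattern x toward x."""
    # winner = prototype closest to x in Euclidean distance
    j = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    prototypes[j] += lr * (x - prototypes[j])  # only the winner moves
    return j

rng = np.random.default_rng(0)
prototypes = rng.standard_normal((4, 2))   # 4 decision classes in R^2
for _ in range(100):
    avq_step(rng.standard_normal(2), prototypes)
```

Each prototype ends up near the centroid of the patterns it wins; the prototypes' decision classes are the regions of quantization from slide 16.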


  18. Supervised Learning and Unsupervised Learning • Criterion: whether the learning algorithm uses pattern-class information.


  20. Supervised Learning and Unsupervised Learning in neural networks • Besides the differences presented above, there are further differences between supervised and unsupervised learning in neural networks.

  21. Unsupervised Learning in neural networks • Local information is the information physically available to the synapse. • Unsupervised learning laws are differential equations that describe how synapses evolve using only local information.

  22. Unsupervised Learning in neural networks • Local information includes: synaptic properties or neuronal signal properties; information about structural and chemical alterations in neurons and synapses; … • A synapse has access to this information only briefly.

  23. Unsupervised Learning in neural networks Functions of local information • Allowing asynchronous synapses to learn in real time. • Shrinking the function space of feasible unsupervised learning laws.


  25. Four Unsupervised Learning Laws • Signal Hebbian • Competitive • Differential Hebbian • Differential competitive

  26. Four Unsupervised Learning Laws [Diagram: presynaptic neuron i in the input neuron field connects through its axon, across the synapse, to the dendrite of postsynaptic neuron j in the output neuron field]

  27. Signal Hebbian • Correlates local neuronal signals. • If neuron i and neuron j are activated synchronously, the synapse is strengthened; otherwise the synapse is weakened.
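A discrete-time sketch of the signal Hebbian idea on slide 27. The law assumed here, ṁ_ij = −m_ij + S_i(x_i)·S_j(y_j), is the standard form with passive decay (the deck's own equation images did not survive extraction): synchronous activation strengthens the synapse, and the decay term weakens it otherwise.

```python
import numpy as np

def signal_hebb_step(M, S_x, S_y, dt=0.1):
    """Discrete-time signal Hebbian update: dm_ij/dt = -m_ij + S_i * S_j.
    The -M term is passive decay (forgetting); the outer product
    correlates pre- and postsynaptic signals."""
    return M + dt * (-M + np.outer(S_x, S_y))

M = np.zeros((3, 2))
S_x = np.array([1.0, 0.0, 1.0])   # bounded presynaptic signals
S_y = np.array([1.0, 0.0])        # bounded postsynaptic signals
for _ in range(200):
    M = signal_hebb_step(M, S_x, S_y)
# M converges toward the outer product of S_x and S_y
```

Only synapses whose pre- and postsynaptic signals are simultaneously high end up strong; all others decay toward zero.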

  28. Competitive • Modulates the signal-synaptic difference with the zero-one competitive signal (the signal of postsynaptic neuron j). • Synapses learn only if their postsynaptic neurons win. • Postsynaptic neurons code for presynaptic signal patterns.
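The competitive law on slide 28 can be sketched assuming the form ṁ_ij = S_j(y_j)·[S_i(x_i) − m_ij], where S_j is the zero-one win/lose signal; the exact equation is an assumption since the deck's formula images were lost.

```python
import numpy as np

def competitive_step(M, S_x, winners, dt=0.1):
    """Competitive update: dm_ij/dt = S_j * (S_i - m_ij).
    winners holds the zero-one competitive signals S_j: columns of M
    whose neuron wins move toward the presynaptic signal vector S_x;
    losing columns do not learn at all."""
    return M + dt * winners[np.newaxis, :] * (S_x[:, np.newaxis] - M)

M = np.zeros((3, 2))
S_x = np.array([0.2, 0.8, 0.5])
winners = np.array([1.0, 0.0])    # neuron 0 wins, neuron 1 loses
for _ in range(300):
    M = competitive_step(M, S_x, winners)
```

The winning neuron's synaptic vector converges to the presynaptic signal pattern, which is how postsynaptic neurons come to "code for" presynaptic patterns.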

  29. Differential Hebbian • Correlates signal velocities as well as neuronal signals. • The signal velocity is obtained by differentiating the neuronal signal.
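The differential Hebbian law correlates signal velocities. Assuming the form ṁ_ij = −m_ij + Ṡ_i·Ṡ_j (the deck's equation is not recoverable), a one-step sketch with velocities approximated by finite differences:

```python
import numpy as np

def diff_hebb_step(M, dS_x, dS_y, dt=0.1):
    """Differential Hebbian update: dm_ij/dt = -m_ij + dS_i * dS_j.
    Synapses strengthen when pre- and postsynaptic signals change in
    the same direction, and weaken when they change in opposite ones."""
    return M + dt * (-M + np.outer(dS_x, dS_y))

# signal velocities approximated by finite differences of the signals
S_x_prev, S_x = np.array([0.1, 0.9]), np.array([0.3, 0.4])
S_y_prev, S_y = np.array([0.2]), np.array([0.6])
dS_x = (S_x - S_x_prev) / 0.1     # [2.0, -5.0]
dS_y = (S_y - S_y_prev) / 0.1     # [4.0]
M = diff_hebb_step(np.zeros((2, 1)), dS_x, dS_y)
```

Note the sign structure: neuron 0's signal rises with the postsynaptic signal, so its synapse strengthens; neuron 1's signal falls while the postsynaptic signal rises, so its synapse is driven negative.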

  30. Differential competitive • Combines competitive and differential Hebbian learning. • Synapses learn only if the postsynaptic signal changes.
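Combining the two previous sketches gives the differential competitive law, assumed here to take the form ṁ_ij = Ṡ_j·(S_i − m_ij): the competitive difference term is gated by the postsynaptic signal *velocity* rather than by the signal itself.

```python
import numpy as np

def diff_competitive_step(M, S_x, dS_y, dt=0.1):
    """Differential competitive update: dm_ij/dt = dS_j * (S_i - m_ij).
    Synapses learn only while the postsynaptic signal changes
    (dS_j != 0); a winner stops learning once its signal plateaus."""
    return M + dt * dS_y[np.newaxis, :] * (S_x[:, np.newaxis] - M)

M = np.full((2, 2), 0.5)
S_x = np.array([1.0, 0.0])
dS_y = np.array([2.0, 0.0])   # neuron 0's signal is rising; neuron 1's is flat
M = diff_competitive_step(M, S_x, dS_y)
```

After one step only neuron 0's synaptic vector has moved toward S_x; neuron 1's column is untouched because its signal did not change.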

  31. See also • Simple competitive learning applet of neuronal networks http://www.psychology.mcmaster.ca/4i03/demos/competitive1-demo.html

  32. See also • Kohonen SOM applet http://www.psychology.mcmaster.ca/4i03/demos/competitive-demo.html

  33. We welcome Wang Xiumei and Wang Ying to introduce the four unsupervised learning laws in detail.
