
IE 585


Presentation Transcript


  1. IE 585 Competitive Network – I Hamming Net & Self-Organizing Map

  2. Competitive Nets Unsupervised • MAXNET • Hamming Net • Mexican Hat Net • Self-Organizing Map (SOM) • Adaptive Resonance Theory (ART) Supervised • Learning Vector Quantization (LVQ) • Counterpropagation

  3. Clustering Net • Number of input neurons equals the dimension of the input vectors • Each output neuron represents a cluster, so the number of output neurons limits the number of clusters that can be formed • The weight vector of an output neuron serves as a representative for the input patterns the net has placed in that cluster • Only the weight vector of the winning neuron is adjusted

  4. Winner-Take-All • The squared Euclidean distance is used to determine the weight vector closest to a pattern vector • Only the neuron whose weight vector has the smallest Euclidean distance from the input vector is allowed to update its weights
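A minimal sketch (not part of the original slides) of the winner-take-all selection by squared Euclidean distance; NumPy and the function name find_winner are assumptions used only for illustration.

```python
import numpy as np

def find_winner(x, W):
    """Return the index of the output neuron whose weight vector is
    closest (smallest squared Euclidean distance) to the input x.
    W holds one weight vector per output neuron, one per row."""
    d = np.sum((W - x) ** 2, axis=1)   # squared distance to each weight vector
    return int(np.argmin(d))

# Hypothetical example: 3 output neurons (clusters), 2-dimensional inputs
W = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.9]])
x = np.array([0.9, 1.1])
print(find_winner(x, W))  # -> 1: only this neuron's weights would be updated
```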

  5. MAXNET • Developed by Lippmann, 1987 • Can be used as a subnet to pick the node whose input is the largest • Completely interconnected (including self-connections) • Symmetric weights • No training; the weights are fixed

  6. Architecture of MAXNET (diagram: fully interconnected nodes, each with a self-connection of weight 1 and mutual inhibitory connections to every other node)

  7. Procedure of MAXNET • Initialize activations and weights • Update the activation of each node • If more than one node has a nonzero activation, continue iterating; otherwise, stop
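A rough sketch of this procedure, assuming the usual MAXNET setup (self-weight 1, mutual inhibition -eps with 0 < eps < 1/m, ramp transfer function); the function name, the choice eps = 1/(2m), and the NumPy usage are illustrative assumptions.

```python
import numpy as np

def maxnet(a, eps=None, max_iter=100):
    """Iterate MAXNET's mutual inhibition until at most one node keeps a
    positive activation; return that node's index. a holds the initial
    activations (e.g. the external inputs to be compared)."""
    a = np.asarray(a, dtype=float)
    m = len(a)
    if eps is None:
        eps = 1.0 / (2 * m)                     # inhibition strength, 0 < eps < 1/m
    for _ in range(max_iter):
        if np.count_nonzero(a > 0) <= 1:        # stopping condition from the slide
            break
        total = a.sum()
        # self-connection weight 1, weight -eps to every other node,
        # followed by the ramp transfer function f(x) = max(0, x)
        a = np.maximum(0.0, a - eps * (total - a))
    return int(np.argmax(a))

print(maxnet([0.6, 0.9, 0.7, 0.2]))   # -> 1, the node with the largest input
```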

  8. Hamming Net • Developed by Lippmann, 1987 • A maximum-likelihood classifier used to determine which of several exemplar vectors is most similar to an input vector • The exemplar vectors determine the weights of the net • The measure of similarity between the input vector and a stored exemplar vector is (n – HD between the vectors), where n is the number of components and HD is the Hamming distance

  9. Weights and Transfer Function of Hamming Net

  10. Architecture of Hamming Net (diagram: inputs x1–x4 fully connected, with biases B, to output units y1 and y2, whose activations feed a MAXNET)

  11. Procedure of the Hamming Net • Initialize the weights to store the m exemplar vectors • For each input vector x, compute the similarity to each exemplar and use it to initialize the activations of the MAXNET • The MAXNET then iterates to find the best-matching exemplar
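A minimal sketch of the similarity computation, assuming bipolar (+1/-1) vectors, exemplar weights e/2, and bias n/2 so that the score equals n - HD; the function name, the example exemplars, and the NumPy usage are assumptions for illustration.

```python
import numpy as np

def hamming_scores(x, exemplars):
    """Score each bipolar (+1/-1) exemplar against the input x.
    The score n/2 + (x . e)/2 equals n - HD(x, e), i.e. the number of
    components on which x agrees with the exemplar."""
    E = np.asarray(exemplars, dtype=float)
    n = E.shape[1]
    return n / 2 + E @ x / 2            # weights e/2, bias n/2

exemplars = [[1, -1, -1, -1],           # two stored exemplar vectors
             [-1, -1, -1, 1]]
x = np.array([1, 1, -1, -1])
scores = hamming_scores(x, exemplars)
print(scores)                           # similarity (agreements) per exemplar
print(int(np.argmax(scores)))           # the winner MAXNET would converge to
```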

  12. Hamming Net Example

  13. Mexican Hat Net • Developed by Kohonen, 1989 • Positive weights to "cooperative" nearby neighboring neurons • Negative weights to "competitive" more distant neighboring neurons • No connections to faraway neurons
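A rough sketch of one activation update in a 1-D Mexican Hat net under these connection rules; the weight values c1 = 0.6 and c2 = -0.4, the radii r1 = 1 and r2 = 2, the clipping bound, and the function name are illustrative assumptions, not values from the slides.

```python
import numpy as np

def mexican_hat_step(x, c1=0.6, c2=-0.4, r1=1, r2=2, x_max=2.0):
    """One update of a 1-D Mexican Hat net: each unit receives a positive
    weight c1 from neighbours within radius r1 ("cooperation"), a negative
    weight c2 from neighbours at radius r1 < |k| <= r2 ("competition"),
    and no connection from units farther away."""
    m = len(x)
    new = np.zeros(m)
    for i in range(m):
        s = 0.0
        for k in range(-r2, r2 + 1):
            j = i + k
            if 0 <= j < m:
                s += (c1 if abs(k) <= r1 else c2) * x[j]
        new[i] = min(max(s, 0.0), x_max)   # ramp activation clipped to [0, x_max]
    return new

signal = np.array([0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0])
print(mexican_hat_step(signal))            # contrast around the peak is enhanced
```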

  14. Teuvo Kohonen • http://www.cis.hut.fi/teuvo/ (his own home page) • published his work starting in 1984 • LVQ – learning vector quantization • SOM – self-organizing map • Professor at Helsinki University of Technology, Finland

  15. SOM • Also called Topology-Preserving Maps or Self-Organizing Feature Maps (SOFM) • "Winner Take All" learning (also called competitive learning) • the winner has the minimum Euclidean distance to the input • learning takes place only for the winner • the final weights lie at the centroids of the clusters • Continuous inputs; continuous or 0/1 (winner-take-all) outputs • No bias, fully connected • used for data mining and exploration • a supervised version exists

  16. Architecture of SOM Net (diagram: an input layer of n units, the a's, fully connected through weights W to the output units, the y's)

  17. Kohonen Learning Rule Derivation

  18. Kohonen Learning

  19. Procedure of SOM • Initialize weights uniformly and normalize to unit length • Normalize inputs to unit length • Present an input vector x • Calculate the Euclidean distance between x and all Kohonen neurons • Select the winning output neuron j (the one with the smallest distance) • Update the winning neuron • Re-normalize the weights of neuron j (sometimes skipped) • Present the next training vector

  20. Method • Normalize input vectors, a, by: a ← a / ||a|| • Normalize weight vectors, w, by: w ← w / ||w|| • Calculate the distance from a to each w by: d = Σ_i (a_i - w_i)²

  21. Min d wins (this is the winning neuron) • Update the w of the min-d neuron by: w_new = w_old + α(a - w_old) • Return to 2 and repeat for all input vectors a • Reduce α if applicable • Repeat until the weights converge (stop changing)
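A minimal sketch of this training loop (normalize, find the winner, move it toward the input, re-normalize, decay the learning rate); the function name, the 0.95 decay factor, the epoch count, and the random initialization are assumptions, while α = 0.25 simply echoes the example on the next slide.

```python
import numpy as np

def train_som(patterns, n_clusters, alpha=0.25, epochs=50, seed=0):
    """Plain winner-take-all Kohonen learning following the listed steps:
    normalize inputs and weights to unit length, move only the winning
    weight vector toward the input, re-normalize it, and reduce alpha."""
    rng = np.random.default_rng(seed)
    A = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
    W = rng.uniform(size=(n_clusters, A.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for a in A:
            j = np.argmin(np.sum((W - a) ** 2, axis=1))   # winning neuron
            W[j] += alpha * (a - W[j])                    # Kohonen update
            W[j] /= np.linalg.norm(W[j])                  # re-normalize (sometimes skipped)
        alpha *= 0.95                                     # reduce alpha
    return W

patterns = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
print(train_som(patterns, n_clusters=2))   # one weight vector near each cluster centroid
```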

  22. SOM Example – 4 patterns, α = 0.25

  23. Movement of 4 weight clusters

  24. Adding a "conscience" • prevents neurons from winning too many training vectors by using a bias (b) factor • the winner has the minimum (d_j - b_j), where b_j = 10(1/n - f_j) (n = # of output neurons), f_j_new = f_j_old + 0.0001(y_j - f_j_old), and f_initial = 1/n • for neurons that win, b becomes negative; for neurons that don't win, b becomes positive
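A small sketch of this conscience mechanism using the bias and frequency updates from the slide; the function name, the example distances, and the NumPy usage are illustrative assumptions.

```python
import numpy as np

def conscience_winner(d, f):
    """Pick the winner with a conscience: neurons that have won more than
    their fair share (f_j > 1/n) get a negative bias b_j and become harder
    to win with; under-used neurons get a positive bias."""
    n = len(d)
    b = 10.0 * (1.0 / n - f)            # bias factor per neuron
    j = int(np.argmin(d - b))           # winner minimizes d_j - b_j
    y = np.zeros(n)
    y[j] = 1.0                          # 1 for the winner, 0 otherwise
    f_new = f + 0.0001 * (y - f)        # update each neuron's win frequency
    return j, f_new

d = np.array([0.30, 0.28, 0.55])        # distances of the current input to each neuron
f = np.full(3, 1.0 / 3)                 # f_initial = 1/n
print(conscience_winner(d, f))
```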

  25. Supervised Version • Same, except if the winning neuron is "correct", use the same weight update: w_new = w_old + α(a - w_old) • if the winning neuron is "incorrect", use: w_new = w_old - α(a - w_old)
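A minimal sketch of this LVQ-style supervised update; the function name, the α = 0.1 value, and the example vectors are assumptions for illustration.

```python
import numpy as np

def supervised_update(w, a, winner_is_correct, alpha=0.1):
    """LVQ-style supervised update: pull the winning weight vector toward
    the input when its class label matches the target, push it away otherwise."""
    if winner_is_correct:
        return w + alpha * (a - w)
    return w - alpha * (a - w)

w = np.array([0.5, 0.5])
a = np.array([1.0, 0.0])
print(supervised_update(w, a, winner_is_correct=True))    # moved toward a
print(supervised_update(w, a, winner_is_correct=False))   # moved away from a
```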
