
Unsupervised Learning: K-means Clustering, the EM Algorithm, Competitive Learning & SOM Networks



Presentation Transcript


1. IES 511 Machine Learning, Dr. Türker İnce (Lecture notes by Prof. T. M. Mitchell, Machine Learning course at CMU & Prof. E. Alpaydın, Introduction to Machine Learning, MIT Press, 2004.) Unsupervised Learning: K-means Clustering, the EM Algorithm, Competitive Learning & SOM Networks

2. EM for a Mixture of Two Gaussians
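The transcript preserves only this slide's title; the EM update equations shown on the slide did not survive. Below is a minimal NumPy sketch of EM for a one-dimensional two-component Gaussian mixture, written only to illustrate the idea; the function name, initialization, and iteration count are assumptions, not taken from the slide.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100, seed=0):
    """EM for a 1-D mixture of two Gaussians (illustrative sketch).

    Returns mixing weights pi, means mu, and variances var.
    """
    rng = np.random.default_rng(seed)
    # Illustrative initialization.
    pi = np.array([0.5, 0.5])                    # mixing proportions
    mu = rng.choice(x, size=2, replace=False)    # component means
    var = np.array([x.var(), x.var()])           # component variances

    for _ in range(n_iter):
        # E-step: responsibility r[n, k] = P(component k | x_n).
        dens = np.stack([
            pi[k] / np.sqrt(2 * np.pi * var[k])
            * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
            for k in range(2)
        ], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the responsibilities.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Example: data drawn from two Gaussians.
x = np.concatenate([np.random.normal(-2, 1.0, 300), np.random.normal(3, 0.5, 200)])
print(em_two_gaussians(x))
```

The E-step computes how responsible each component is for each sample; the M-step re-estimates the mixing proportions, means, and variances as responsibility-weighted averages.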

3. Simple Competitive Learning
• Unsupervised learning.
• Goal: learn to form classes/clusters of exemplars/sample patterns according to the similarities among those exemplars.
• Patterns in a cluster should have similar features.
• There is no prior knowledge of which features are important for classification, or of how many classes there are.
• Architecture:
• Output nodes Y_1, …, Y_m, representing the m classes.
• The output nodes are competitors (a Winner-Take-All algorithm).

4. Training
• Train the network so that the weight vector wj associated with the jth output node becomes the representative vector of a class of similar input patterns.
• Initially, all weights are randomly assigned.
• Two-phase unsupervised learning:
• Competing phase:
• Apply an input vector il randomly chosen from the sample set.
• Compute the output of every output node for this input.
• Determine the winner among all output nodes (the winner is not given in the training samples, so this is unsupervised).
• Rewarding phase:
• The winner is rewarded by updating its weights to be closer to il; the weights of all other output nodes are not updated (a form of winner-take-all).
• Repeat the two phases many times, gradually reducing the learning rate, until all weights have stabilized.
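A minimal sketch of this two-phase procedure, assuming unit-length inputs and a dot-product competition; the function name, learning-rate schedule, and epoch count are illustrative assumptions.

```python
import numpy as np

def competitive_learning(samples, m, eta=0.5, epochs=50, seed=0):
    """Simple competitive (winner-take-all) learning sketch.

    samples: array of shape (N, d), assumed already normalized to unit length.
    m: number of output nodes / clusters.
    """
    rng = np.random.default_rng(seed)
    # Initialize m weight vectors randomly and normalize them to unit length.
    w = rng.normal(size=(m, samples.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)

    for _ in range(epochs):
        rng.shuffle(samples)
        for il in samples:
            # Competing phase: the winner is the node whose weight vector
            # has the largest dot product with the input.
            j = np.argmax(w @ il)
            # Rewarding phase: only the winner moves toward il.
            w[j] += eta * (il - w[j])
            w[j] /= np.linalg.norm(w[j])       # keep unit length
        eta *= 0.95                            # gradually reduce the learning rate
    return w

# Example: 200 random 2-D inputs, normalized, clustered into 3 classes.
x = np.random.normal(size=(200, 2))
x /= np.linalg.norm(x, axis=1, keepdims=True)
print(competitive_learning(x, m=3))
```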

5. Weight update
• Method 1: wj ← wj + η(il − wj)
• Method 2: wj ← wj + η·il
• In each method, wj is moved closer to il.
• Normalize the weight vector to unit length after it is updated.
• Sample input vectors are also normalized.
• (Figure: geometric view of the two updates, showing the difference vector il − wj and the scaled steps η(il − wj) and η·il.)
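To make the difference between the two rules concrete, here is a small worked sketch; the helper function names are illustrative.

```python
import numpy as np

def update_method1(wj, il, eta):
    # Move wj directly toward il by a fraction eta of the difference.
    wj = wj + eta * (il - wj)
    return wj / np.linalg.norm(wj)

def update_method2(wj, il, eta):
    # Add a scaled copy of il; after renormalization wj rotates toward il.
    wj = wj + eta * il
    return wj / np.linalg.norm(wj)

wj = np.array([1.0, 0.0])
il = np.array([0.0, 1.0])
print(update_method1(wj, il, 0.5))   # both results point between wj and il
print(update_method2(wj, il, 0.5))
```

With unit-length inputs and renormalization after each step, both rules rotate wj toward il; Method 1 also moves wj toward il without normalization, since it interpolates directly between the two vectors.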

6. wj moves toward the center of a cluster of sample vectors after repeated weight updates
• Node j wins for three training samples: i1, i2, and i3.
• Starting from the initial weight vector wj(0), successive training on i1, i2, and i3 moves the weight vector to wj(1), wj(2), and wj(3).
• (Figure: the trajectory wj(0) → wj(1) → wj(2) → wj(3) moving toward the samples i1, i2, i3.)

7. SOM Examples
• Input vectors are uniformly distributed in the region and are drawn from it at random.
• Weight vectors are initially drawn from the same region at random (not necessarily uniformly).
• At the end of training, the weight vectors become ordered according to the given topology (neighborhood).
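A minimal SOM sketch on a 1-D lattice, matching the setting of these examples (2-D inputs drawn uniformly from a region); the parameter names and decay schedules are illustrative assumptions.

```python
import numpy as np

def train_som_1d(samples, n_nodes=20, epochs=200, eta0=0.5, sigma0=3.0, seed=0):
    """Self-organizing map on a 1-D lattice of n_nodes output nodes.

    samples: array of shape (N, d).
    """
    rng = np.random.default_rng(seed)
    # Weight vectors start as random points drawn from the sample region.
    w = samples[rng.integers(0, len(samples), n_nodes)].copy()
    lattice = np.arange(n_nodes)                  # node positions on the 1-D lattice

    for t in range(epochs):
        eta = eta0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)      # shrinking neighborhood radius
        for x in samples[rng.permutation(len(samples))]:
            # Best-matching unit: node whose weights are closest to x.
            j = np.argmin(np.linalg.norm(w - x, axis=1))
            # Gaussian neighborhood on the lattice: the winner and its lattice
            # neighbors are all pulled toward x.
            h = np.exp(-((lattice - j) ** 2) / (2 * sigma ** 2))
            w += eta * h[:, None] * (x - w)
    return w

# Example: 2-D inputs drawn uniformly from the unit square.
pts = np.random.rand(500, 2)
print(train_som_1d(pts, n_nodes=10, epochs=50))
```

The neighborhood term h is what orders the weight vectors according to the lattice topology, producing the 1-D and 2-D lattice pictures on the next slides.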

  8. 1-D Lattice

  9. 2-D Lattice

10. WEBSOM
• See http://www.cis.hut.fi/research/som-research.
• WEBSOM: http://websom.hut.fi/websom/ (self-organizing maps of document collections).
• Goal: automatically order and organize arbitrary free-form textual document collections to enable easier browsing and exploration.
• Reference paper for the next slides:
• S. Kaski et al., Statistical aspects of the WEBSOM system in organizing document collections, Computing Science and Statistics, 29:281-290, 1998.

11. WEBSOM architecture
• All words of a document are mapped onto the word-category map, and a histogram of "hits" on it is formed.
• The word-category map is a self-organizing semantic map (15x21 neurons): interrelated words that have similar contexts appear close to each other on the map.
• The document map is a self-organizing map trained on these word histograms.
• The largest experiments have used:
• Word-category map: 315 neurons with 270 inputs each.
• Document map: 104,040 neurons with 315 inputs each.
• Training was done with 1,124,134 documents.
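A sketch of how one document's "hit" histogram over the word-category map could be formed; every name here (inputs, function) is hypothetical, and the real WEBSOM pipeline differs in detail.

```python
import numpy as np

def document_histogram(doc_words, word_vectors, category_map_weights):
    """Form a WEBSOM-style 'hit' histogram for one document (illustrative).

    doc_words: list of word tokens in the document.
    word_vectors: dict mapping each word to its context vector (assumed given).
    category_map_weights: array (n_units, d) of word-category-map weight vectors.
    """
    hist = np.zeros(len(category_map_weights))
    for word in doc_words:
        if word not in word_vectors:
            continue                      # skip out-of-vocabulary words
        x = word_vectors[word]
        # Best-matching unit on the word-category map for this word.
        bmu = np.argmin(np.linalg.norm(category_map_weights - x, axis=1))
        hist[bmu] += 1                    # one 'hit' on that unit
    # Normalize so documents of different lengths are comparable.
    return hist / max(hist.sum(), 1)
```

The resulting histogram is the feature vector on which the document map is trained, so nearby documents on that map share similar word-category usage.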

  12. Word histogram

  13. Word categories

  14. A map of documents
