
Model-Based Clustering by Probabilistic Self-Organizing Maps



  1. Model-Based Clustering by Probabilistic Self-Organizing Maps. Presenter: Chien-Hsing Chen. Authors: Shih-Sian Cheng, Hsin-Chia Fu, Hsin-Min Wang. IEEE TNN, 2009.

  2. Outline • Motivation • Objective • Method • Experiments • Conclusion • Comment

  3. Motivation • develop a mixture clustering model • EM, CEM, and DAEM are combined with PbSOM • background knowledge: competition and cooperation in SOM; EM (E-step, M-step) vs. K-means; likelihood; the multivariate Gaussian distribution; when is K-means equivalent to SOM?

  4. Objective • introduce three EM-type approaches (EM, CEM, DAEM) and the PbSOM model • combine the three approaches with PbSOM

  5. EM • toy data points: 2, 6, 8, 48, 5, 9, 1 (assume t=15; need update?) • expectation: each xi should be close to some component k • e.g., p(2; θk=1) is large, while p(48; θk=1) is small.
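
A minimal sketch of one EM iteration on the slide's toy points, assuming a two-component 1-D Gaussian mixture; the initial means, variances, and weights below are illustrative assumptions, not values from the paper:

```python
# E-step + M-step sketch for a 1-D Gaussian mixture on the slide's toy data.
import numpy as np

x = np.array([2.0, 6.0, 8.0, 48.0, 5.0, 9.0, 1.0])
mu = np.array([1.0, 40.0])        # assumed initial means
var = np.array([4.0, 4.0])        # assumed initial variances
pi = np.array([0.5, 0.5])         # assumed mixing weights

def gauss(x, mu, var):
    """1-D Gaussian density."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# E-step: responsibilities f(k | x_i; theta).
dens = np.stack([pi[k] * gauss(x, mu[k], var[k]) for k in range(2)])
resp = dens / dens.sum(axis=0)

# As on the slide: p(x=2; theta_{k=1}) is large, p(x=48; theta_{k=1}) is small.
print(resp[0])                    # posterior of component 1 for each point

# M-step: re-estimate parameters from the soft assignments.
Nk = resp.sum(axis=1)
mu = (resp @ x) / Nk
var = (resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / Nk
pi = Nk / len(x)
```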

  6. CEM • toy data points: 48, 3, 6, 8, 2, 5, 9, 1 (assume t=15; need update?) • each xi is hard-assigned to one component • expectation: each component k yields a good-quality partition of the data.
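
A matching sketch of CEM's classification step, again with assumed toy data and initialization: unlike EM's soft responsibilities, each xi is hard-assigned to the component maximizing its posterior, and the M-step uses only the points assigned to each cluster.

```python
# C-step (hard assignment) + M-step sketch for the same 1-D mixture setup.
import numpy as np

x = np.array([48.0, 3.0, 6.0, 8.0, 2.0, 5.0, 9.0, 1.0])
mu, var, pi = np.array([1.0, 40.0]), np.array([4.0, 4.0]), np.array([0.5, 0.5])

dens = np.stack([pi[k] * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / np.sqrt(2 * np.pi * var[k]) for k in range(2)])
labels = dens.argmax(axis=0)            # C-step: hard assignment

# M-step on the resulting hard partition (per-cluster ML estimates).
for k in range(2):
    xk = x[labels == k]
    if len(xk):
        mu[k], var[k], pi[k] = xk.mean(), xk.var() + 1e-6, len(xk) / len(x)
```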

  7. DAEM • idea: do not trust f(k|xi; θ) too much when t=1, and trust it more as t grows (e.g., t=10) • a temperature parameter β < 1 tempers the posterior to reduce initialization bias and avoid poor local optima; β gradually increases to 1.
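
A sketch of the DAEM tempering idea, with an assumed annealing schedule: the posterior f(k|xi; θ) is raised to the power β, so a small β gives near-uniform responsibilities and β = 1 recovers the ordinary EM posterior.

```python
# Tempered responsibilities with an annealed beta, computed in log space.
import numpy as np

def tempered_resp(log_dens, beta):
    """Normalize dens**beta per data point, stably in log space."""
    z = beta * log_dens
    z -= z.max(axis=0)                  # numerical stability
    w = np.exp(z)
    return w / w.sum(axis=0)

# log_dens[k, i] = log(pi_k * f(x_i; theta_k)); the values are assumed.
log_dens = np.log(np.array([[0.8, 0.1], [0.2, 0.9]]))

for beta in (0.2, 0.5, 1.0):            # annealing schedule (assumed)
    print(beta, tempered_resp(log_dens, beta)[0])
# At beta=0.2 the responsibilities are near-uniform; at beta=1 they
# equal the ordinary EM posterior.
```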

  8. EM-based approaches • EM • CEM • DAEM

  9. Overall PbSOM

  10. Principal concept of PbSOM • [figure: distances ||xi − nk|| between a data point and the map neurons] • when selecting the winning neuron, PbSOM considers the neighborhood information; in contrast, SOM does not.

  11. PbSOM (Probabilistic SOM) • SOM winner selection: xi is assigned to unit k if k = argmin_k ||xi − nk|| • PbSOM instead maximizes an energy function that incorporates the neighborhood.
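
A sketch of the neighborhood-aware selection idea, under assumed settings (a 3×3 map, σ = 0.6, unit-variance Gaussian unit models): the winner maximizes the neighborhood-weighted log-likelihood Σl h(k,l) log f(xi; θl) rather than just the nearest unit.

```python
# Neighborhood-weighted winner selection on a small assumed SOM grid.
import numpy as np

coords = np.array([[i, j] for i in range(3) for j in range(3)])  # 3x3 map
sigma = 0.6

def h(k, l):
    """Gaussian neighborhood weight between map units k and l."""
    d2 = ((coords[k] - coords[l]) ** 2).sum()
    return np.exp(-d2 / (2 * sigma ** 2))

H = np.array([[h(k, l) for l in range(9)] for k in range(9)])

def winner(xi, means):
    # log-density of xi under each unit's (unit-variance) Gaussian model
    logf = -0.5 * ((xi - means) ** 2).sum(axis=1)
    return np.argmax(H @ logf)          # neighborhood-weighted score

means = np.random.default_rng(0).normal(size=(9, 2))
print(winner(np.array([0.5, -0.2]), means))
```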

  12. Multivariate Gaussian distribution • the data assigned to unit l (e.g., x1, x5, x7, x8) are modeled as i.i.d. samples from N(μl, Σl).
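
For reference, a small sketch of evaluating the N(μ, Σ) log-density; the example μ and Σ values are assumptions:

```python
# Multivariate Gaussian log-density via slogdet and a linear solve.
import numpy as np

def mvn_logpdf(x, mu, Sigma):
    """Log-density of N(mu, Sigma) at x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    maha = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
print(np.exp(mvn_logpdf(np.array([0.5, -1.0]), mu, Sigma)))
```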

  13. PbSOM (Probabilistic SOM) • [figure: PbSOM model illustration]

  14. PbSOM (Probabilistic SOM) • [figure: PbSOM update illustration]

  15. Overall PbSOM

  16. EM-based approaches combined with PbSOM • each of EM, CEM, and DAEM is extended with the neighborhood function h.

  17. SOCEM (PbSOM+CEM) • conventional SOM update depends on ||xi − nl|| and ||nk − nl|| • batch update: each nk is a weighted average of the xi, with weights derived from ||nk − nl|| • similar to a batch K-means algorithm that also considers h.
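
A sketch of this batch-update view, with stand-in data, assignments, and a Gaussian h on an assumed 3×3 map: each unit mean becomes a neighborhood-weighted average of the data.

```python
# SOCEM-style batch M-step after a hard (C-step) assignment.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                     # assumed data
coords = np.array([[i, j] for i in range(3) for j in range(3)])  # 3x3 map
sigma = 0.6
D2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
H = np.exp(-D2 / (2 * sigma ** 2))                # h_kl, symmetric

labels = rng.integers(0, 9, size=100)             # stand-in C-step output

Z = np.eye(9)[labels]                             # one-hot assignments (N x K)
W = Z @ H                                         # w[i, k] = h(label_i, k)
means = (W.T @ X) / W.sum(axis=0)[:, None]        # batch update of each n_k
print(means.shape)                                # (9, 2)
```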

  18. SOEM (PbSOM+EM) • the E-step keeps soft responsibilities • similar to a batch K-means algorithm that also considers h, but with soft assignments.
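
The soft counterpart, again with stand-in inputs: the same neighborhood-weighted update as in the SOCEM sketch, but driven by soft responsibilities instead of a one-hot C-step.

```python
# SOEM-style batch update with soft responsibilities.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))                     # assumed data
H = np.eye(9) * 0.5 + 0.5 / 9                     # stand-in neighborhood matrix
resp = rng.dirichlet(np.ones(9), size=100).T      # stand-in soft E-step, (9, 100)

W = H @ resp                                      # smoothed responsibilities
means = (W @ X) / W.sum(axis=1)[:, None]          # soft batch update of n_k
print(means.shape)                                # (9, 2)
```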

  19. SODAEM (PbSOM+DAEM) • the DAEM annealing is combined with the neighborhood function h.

  20. Overall PbSOM

  21. Experiment: SOCEM • σ in hkl is gradually reduced from 0.6 to 0.15 • [figure: maps at σ = 0.6, 0.45, 0.3, 0.15]
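
A tiny sketch of such a schedule, assuming a linear decay (the paper's actual schedule may differ):

```python
# Shrink the neighborhood width sigma from 0.6 to 0.15 over T steps.
def sigma_schedule(t, T, sigma_start=0.6, sigma_end=0.15):
    """Linearly interpolate sigma from sigma_start to sigma_end."""
    return sigma_start + (sigma_end - sigma_start) * t / (T - 1)

for t in range(4):
    print(round(sigma_schedule(t, 4), 2))   # 0.6, 0.45, 0.3, 0.15
```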

  22. Experiment: SOEM • compared with the previous page, this result is more global.

  23. Experiment: SODAEM • with different β values, SODAEM is almost equivalent to SOEM and SOCEM, respectively • it cannot obtain an ordered map during the learning process if σ is too small.

  24. Experiment: SOCEM

  25. Experiment: SOEM

  26. Experiment: SODAEM

  27. Experiment: stability without PbSOM

  28. Experiment (1/2): distinguishing Kohonen SOM from SOCEM • [figure panels: Kohonen SOM, SOCEM]

  29. Experiment

  30. Experiment

  31. Experiment

  32. Experiment

  33. Experiment

  34. Conclusion PbSOM

  35. Comment • Advantage: a mixture-model approach that sounds solid is presented • Drawback: limited novelty; is it better than the conventional SOM? • Application: SOM
