
Agglomerative clustering (AC)



Presentation Transcript


  1. Agglomerative clustering (AC)
     Clustering algorithms: Part 2c
     Pasi Fränti, 25.3.2014
     Speech & Image Processing Unit, School of Computing,
     University of Eastern Finland, Joensuu, FINLAND

  2. Agglomerative clustering: categorization by cost function
     • Single link: minimize the distance of the two nearest vectors
     • Complete link: minimize the distance of the two furthest vectors
     • Ward's method: minimize the mean square error
       (in vector quantization, known as the Pairwise Nearest Neighbor (PNN) method)
     We focus on Ward's method; the linkage criteria are sketched in code below.
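As an illustration of the two linkage criteria (my own minimal sketch, not from the slides; the function names are mine):

    import numpy as np

    def single_link(A, B):
        """Distance of the two nearest vectors across clusters A and B."""
        return min(np.linalg.norm(a - b) for a in A for b in B)

    def complete_link(A, B):
        """Distance of the two furthest vectors across clusters A and B."""
        return max(np.linalg.norm(a - b) for a in A for b in B)

    A = np.array([[0.0, 0.0], [1.0, 0.0]])
    B = np.array([[3.0, 0.0], [5.0, 0.0]])
    print(single_link(A, B))    # 2.0, nearest pair (1,0)-(3,0)
    print(complete_link(A, B))  # 5.0, furthest pair (0,0)-(5,0)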

  3. Pseudo code

  4. Pseudo code

     PNN(X, M) → C, P
       FOR i ← 1 TO N DO                        O(N)
         p[i] ← i; c[i] ← x[i];
       REPEAT
         (a, b) ← FindSmallestMergeCost();      O(N²)
         MergeClusters(a, b);
         m ← m − 1;
       UNTIL m = M;                             N times

     T(N) = O(N³)
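A direct, unoptimized translation of this pseudocode into Python might look as follows (a minimal sketch assuming the squared-error merge cost of Ward's method from slide 5; variable names are mine):

    import numpy as np

    def pnn(X, M):
        """Trivial O(N^3) PNN: repeatedly merge the cluster pair with the
        smallest Ward merge cost until M clusters remain."""
        clusters = [[i] for i in range(len(X))]       # lists of vector indices
        centroids = [X[i].astype(float) for i in range(len(X))]

        def merge_cost(a, b):
            na, nb = len(clusters[a]), len(clusters[b])
            d = centroids[a] - centroids[b]
            return na * nb / (na + nb) * d.dot(d)     # Ward / PNN merge cost

        while len(clusters) > M:
            # O(N^2) exhaustive search for the cheapest pair
            a, b = min(((i, j) for i in range(len(clusters))
                                for j in range(i + 1, len(clusters))),
                       key=lambda p: merge_cost(*p))
            na, nb = len(clusters[a]), len(clusters[b])
            centroids[a] = (na * centroids[a] + nb * centroids[b]) / (na + nb)
            clusters[a] += clusters[b]
            del clusters[b]; del centroids[b]
        return clusters, centroids

    X = np.random.rand(100, 2)
    parts, cents = pnn(X, 5)
    print([len(p) for p in parts])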

  5. Ward's method [Ward 1963: Journal of the American Statistical Association]

     Merge cost (the increase in total squared error when clusters S_a and S_b
     are merged):

         d(S_a, S_b) = (n_a · n_b) / (n_a + n_b) · ||c_a − c_b||²

     where n_a, n_b are the cluster sizes and c_a, c_b the centroids.

     Local optimization strategy: merge the pair with the smallest cost.
     Nearest neighbor search:
     • Find the cluster pair to be merged
     • Update of NN pointers
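To see that this formula really equals the increase in squared error, here is a quick numeric check (my own illustration, not from the slides):

    import numpy as np

    def sse(C):  # sum of squared distances to the cluster centroid
        return ((C - C.mean(axis=0)) ** 2).sum()

    A = np.array([[0.0, 0.0], [2.0, 0.0]])
    B = np.array([[5.0, 1.0]])

    increase = sse(np.vstack([A, B])) - sse(A) - sse(B)
    na, nb = len(A), len(B)
    d = A.mean(axis=0) - B.mean(axis=0)
    formula = na * nb / (na + nb) * d.dot(d)
    print(increase, formula)  # both 11.333..., the Ward merge cost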

  6. Example of distance calculations

  7. Example of the overall process
     (figure: the cluster count decreases one merge at a time,
      M = 5000 → 4999 → 4998 → … → 50 → … → 16 → 15)

  8. Detailed example of the process

  9. Example - 25 clusters: MSE ≈ 1.01×10⁹

  10. Example - 24 clusters: MSE ≈ 1.03×10⁹

  11. Example - 23 clusters: MSE ≈ 1.06×10⁹

  12. Example - 22 clusters: MSE ≈ 1.09×10⁹

  13. Example - 21 clusters: MSE ≈ 1.12×10⁹

  14. Example - 20 clusters: MSE ≈ 1.16×10⁹

  15. Example - 19 clusters: MSE ≈ 1.19×10⁹

  16. Example - 18 clusters: MSE ≈ 1.23×10⁹

  17. Example - 17 clusters: MSE ≈ 1.26×10⁹

  18. Example - 16 clusters: MSE ≈ 1.30×10⁹

  19. Example - 15 clusters: MSE ≈ 1.34×10⁹

  20. Storing the distance matrix
     • Maintain the distance matrix and update rows only for the changed cluster.
     • The number of distance calculations per step reduces from O(N²) to O(N).
     • The search for the minimum pair still takes O(N²) time → still O(N³) in total.
     • It also requires O(N²) memory.
     A sketch of the row update follows.
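A minimal sketch of the row update (my own illustration; merge_cost is assumed to be the Ward cost from slide 5):

    # After merging cluster b into cluster a, only row/column a of the
    # distance matrix D becomes stale; all other entries remain valid.
    def update_row(D, a, active, merge_cost):
        for j in active:              # O(N) recalculations instead of O(N^2)
            if j != a:
                D[a][j] = D[j][a] = merge_cost(a, j)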

  21. Heap structure for fast search [Kurita 1991: Pattern Recognition]
     • The search for the minimum reduces from O(N²) to O(log N).
     • In total: O(N² log N).
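In Python this idea can be approximated with the standard heapq module (a sketch; because heapq cannot update keys in place, stale entries are skipped lazily, which differs in detail from Kurita's structure):

    import heapq

    heap = []                           # entries: (cost, a, b)

    def push(cost, a, b):
        heapq.heappush(heap, (cost, a, b))

    def pop_valid(alive):
        """Pop pairs until one whose clusters both still exist is found."""
        while heap:
            cost, a, b = heapq.heappop(heap)
            if a in alive and b in alive:
                return cost, a, b
        return None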

  22. Store nearest neighbor (NN) pointers [Fränti et al., 2000: IEEE Trans. Image Processing]
     Time complexity reduces from O(N³) towards Ω(N²).

  23. Pseudo code

     PNN(X, M) → C, P
       FOR i ← 1 TO N DO                        O(N)
         p[i] ← i; c[i] ← x[i];
       FOR i ← 1 TO N DO                        O(N²)
         NN[i] ← FindNearestCluster(i);
       REPEAT
         a ← SmallestMergeCost(NN);             O(N)
         b ← NN[a];
         MergeClusters(C, P, NN, a, b);
         UpdatePointers(C, NN);                 O(N)
       UNTIL m = M;

     http://cs.uef.fi/pages/franti/research/pnn.txt
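A Python sketch following this pseudocode (my own illustration, with the Ward merge cost from slide 5; the real implementation behind the URL above is more careful):

    import numpy as np

    def pnn_nn_pointers(X, M):
        """PNN with nearest-neighbor pointers: only clusters whose NN
        pointer became stale after a merge are searched again."""
        size = {i: 1 for i in range(len(X))}
        cent = {i: X[i].astype(float) for i in range(len(X))}

        def cost(i, j):
            d = cent[i] - cent[j]
            return size[i] * size[j] / (size[i] + size[j]) * d.dot(d)

        def nearest(i):
            j = min((j for j in cent if j != i), key=lambda j: cost(i, j))
            return j, cost(i, j)

        nn = {i: nearest(i) for i in cent}          # O(N^2) initialization
        while len(cent) > M:
            a = min(nn, key=lambda i: nn[i][1])     # O(N) search
            b = nn[a][0]
            # merge b into a
            cent[a] = (size[a]*cent[a] + size[b]*cent[b]) / (size[a]+size[b])
            size[a] += size[b]
            del cent[b], size[b], nn[b]
            # update only the stale pointers: a itself and clusters
            # that pointed to a or b (valid by the monotony property)
            for i in list(nn):
                if i == a or nn[i][0] in (a, b):
                    nn[i] = nearest(i)
        return cent

    X = np.random.rand(200, 2)
    print(len(pnn_nn_pointers(X, 10)))  # 10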

  24. Example with NN pointers [Virmajoki 2004: Pairwise Nearest Neighbor Method Revisited]

  25. Example: Step 1

  26. Example: Step 2

  27. Example: Step 3

  28. Example: Step 4

  29. Example: Final

  30. Time complexities of the variants

     Variant                           Time complexity
     Trivial PNN                       O(N³)
     Distance matrix                   O(N³), but only O(N) distance calculations per step
     Heap structure [Kurita 1991]      O(N² log N)
     NN pointers [Fränti et al. 2000]  Ω(N²)

  31. Number of neighbors (τ)

  32. Processing time comparison (with NN pointers)

  33. Algorithm: Lazy-PNN
     T. Kaukoranta, P. Fränti and O. Nevalainen, "Vector quantization by lazy
     pairwise nearest neighbor method", Optical Engineering, 38 (11), 1862-1868,
     November 1999.

  34. Monotony property of the merge cost [Kaukoranta et al., Optical Engineering, 1999]
     Merge cost values are monotonically increasing: if
     d(S_a, S_b) ≤ d(S_a, S_c) and d(S_a, S_b) ≤ d(S_b, S_c),
     then d(S_a, S_c) ≤ d(S_{a+b}, S_c).

  35. Lazy variant of the PNN
     • Store the merge costs in a heap.
     • Recalculate a merge cost only when it appears at the top of the heap
       (the monotony property guarantees this is safe).
     • Reduces processing time by about 35%.
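A sketch of the lazy evaluation idea (my own illustration; Kaukoranta et al. describe the exact bookkeeping): each heap entry remembers the cluster versions it was computed for, and is recomputed only if it surfaces stale.

    import heapq

    def pop_lazy(heap, version, cost):
        """Return the pair with the smallest valid merge cost; a stale
        entry is recomputed only when it reaches the top of the heap."""
        while heap:
            c, va, vb, a, b = heapq.heappop(heap)
            if a not in version or b not in version:
                continue                          # a cluster was merged away
            if (va, vb) == (version[a], version[b]):
                return c, a, b                    # cost still up to date
            heapq.heappush(heap, (cost(a, b), version[a], version[b], a, b))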

  36. Combining PNN and K-means
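The slide itself is only a picture; one natural reading (my own hedged sketch) is to let PNN produce the initial codebook and refine it locally with k-means. The pnn() function is assumed from the slide 4 sketch above:

    import numpy as np

    def kmeans_refine(X, cents, iters=20):
        """Refine PNN centroids with standard k-means iterations."""
        C = np.array(cents)
        for _ in range(iters):
            # assignment step: nearest centroid for every vector
            labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
            # update step: recompute centroids of non-empty clusters
            for k in range(len(C)):
                if (labels == k).any():
                    C[k] = X[labels == k].mean(axis=0)
        return C

    X = np.random.rand(200, 2)
    parts, cents = pnn(X, 5)      # PNN gives a good starting codebook
    C = kmeans_refine(X, cents)   # k-means fine-tunes it locally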

  37. Algorithm: Iterative shrinking
     P. Fränti and O. Virmajoki, "Iterative shrinking method for clustering
     problems", Pattern Recognition, 39 (5), 761-765, May 2006.

  38. Agglomerative clustering based on merging

  39. Agglomeration based on cluster removal [Fränti and Virmajoki, Pattern Recognition, 2006]

  40. Merge versus removal

  41. Pseudo code of iterative shrinking (IS)

  42. Cluster removal in practice
     Find the secondary cluster for every vector: the nearest remaining cluster
     it would join if its own cluster were removed.
     Calculate the removal cost for every vector: the increase in distortion
     when the vector is reassigned to its secondary cluster.
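A sketch of these two steps (my own illustration; the published method also accounts for centroid updates, which this ignores):

    import numpy as np

    def removal_costs(X, labels, C):
        """For every vector: its secondary cluster (nearest centroid other
        than its own) and the distortion increase of moving it there."""
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # all sq. distances
        own = d2[np.arange(len(X)), labels]
        d2[np.arange(len(X)), labels] = np.inf                # mask own cluster
        secondary = d2.argmin(1)
        cost = d2[np.arange(len(X)), secondary] - own
        return secondary, cost

The cost of removing a whole cluster is then (approximately) the sum of the removal costs of its vectors.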

  43. Partition updates

  44. Complexity analysis
     Number of vectors per cluster: with m clusters remaining, N/m on average.
     If we iterate until M = 1: the reassignments sum to Σ_{m=1..N} N/m = O(N log N).
     Adding the processing time per vector gives the total time complexity.

  45. Algorithm: PNN with kNN-graph
     P. Fränti, O. Virmajoki and V. Hautamäki, "Fast agglomerative clustering
     using a k-nearest neighbor graph", IEEE Trans. on Pattern Analysis and
     Machine Intelligence, 28 (11), 1875-1881, November 2006.

  46. Agglomerative clustering with kNN graph

  47. Example of 2NN graph

  48. Example of 4NN graph
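A minimal sketch of building such a graph (my own illustration): candidate merges are then searched only among graph neighbors, which is what makes the graph-based PNN fast.

    import numpy as np

    def knn_graph(C, k):
        """k-nearest-neighbor graph over centroids: each node keeps
        an adjacency list of its k nearest other nodes."""
        P = np.array(C)
        d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)          # a node is not its own neighbor
        return {i: list(np.argsort(d2[i])[:k]) for i in range(len(P))}

    g = knn_graph(np.random.rand(10, 2), 4)   # the 4NN graph of slide 48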

  49. Graph using doubly linked lists

  50. Merging a and b
