
Clustering Algorithms


Presentation Transcript


  1. Clustering Algorithms
  Information Retrieval: Data Structures and Algorithms, W.B. Frakes and R. Baeza-Yates (Eds.), Englewood Cliffs, NJ: Prentice Hall, 1992 (Chapter 16).

  2. Application of Clustering
  • Term clustering (the column viewpoint of the term-document matrix)
    • thesaurus construction
  • Document clustering (the row viewpoint)
    • searching
    • browsing

  3. Automatic Document Classification
  • Searching vs. Browsing
  • Disadvantages of using inverted index files
    • information pertaining to a document is scattered among many different inverted-term lists
    • information relating to different documents with similar term assignments is not in close proximity in the file system
  • Approaches
    • inverted-index files (for searching) + clustered document collection (for browsing)
    • clustered file organization (for searching and browsing)

  4. Typical Clustered File Organization
  [Figure: a cluster hierarchy running from the highest-level centroid through supercentroids and centroids down to the documents, with a typical search path traced from the top centroid to the documents.]

  5. Cluster Generation vs. Cluster Search
  • Cluster generation
    • The cluster structure is generated only once.
    • Cluster maintenance can be carried out at relatively infrequent intervals.
    • The cluster generation process may therefore be slower and more expensive.
  • Cluster search
    • Cluster search operations may have to be performed continually.
    • Cluster search operations must be carried out efficiently.

  6. Hierarchical Cluster Generation
  • Two strategies
    • pairwise item similarities
    • heuristic methods
  • Models
    • Divisive clustering (top down)
      • The complete collection is assumed to represent one complete cluster.
      • The collection is then subsequently broken down into smaller pieces.
    • Hierarchical agglomerative clustering (bottom up)
      • Individual item similarities are used as a starting point.
      • A gluing operation collects similar items, or groups, into larger groups.

  7. Hierarchical Agglomerative Clustering
  • Basic procedure (a runnable sketch follows below)
    1. Place each of N documents into a class of its own.
    2. Compute all pairwise document-document similarity coefficients (N(N-1)/2 coefficients).
    3. Form a new cluster by combining the most similar pair of current clusters i and j; update the similarity matrix by deleting the rows and columns corresponding to i and j; calculate the entries in the row corresponding to the new cluster i+j.
    4. Repeat step 3 if the number of clusters left is greater than 1.
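A minimal Python sketch of this procedure, assuming a symmetric similarity function over item labels; the names here are illustrative, not from the chapter. The combine argument anticipates the linkage choices on the next slides (max gives single link, min gives complete link).

    from itertools import combinations

    def hac(items, sim, combine=max):
        """Hierarchical agglomerative clustering (a sketch).
        items   : list of hashable item labels
        sim     : symmetric function sim(a, b) -> similarity
        combine : max -> single link, min -> complete link
        Returns the merge history as (level, cluster, cluster) triples."""
        # Step 1: place each item into a class of its own.
        clusters = {i: (x,) for i, x in enumerate(items)}
        # Step 2: all N(N-1)/2 pairwise similarity coefficients.
        s = {(i, j): sim(items[i], items[j])
             for i, j in combinations(range(len(items)), 2)}
        merges, nxt = [], len(items)
        # Step 4: repeat step 3 while more than one cluster remains.
        while len(clusters) > 1:
            # Step 3: combine the most similar pair of current clusters i, j.
            (i, j), level = max(s.items(), key=lambda kv: kv[1])
            merges.append((level, clusters[i], clusters[j]))
            merged = clusters.pop(i) + clusters.pop(j)
            del s[(i, j)]
            # Delete rows/columns i and j; compute the row for cluster i+j.
            for k in list(clusters):
                a = s.pop((min(i, k), max(i, k)))
                b = s.pop((min(j, k), max(j, k)))
                s[(k, nxt)] = combine(a, b)
            clusters[nxt] = merged
            nxt += 1
        return merges

With the A-F similarities from the worked example below and combine=max, this reproduces the single-link merge levels (0.9, 0.8, 0.8, 0.6, 0.5); ties at the same level may be taken in either order.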

  8. How to Combine Clusters?
  • Intercluster similarity: single-link, complete-link, or group-average link
  • Single-link clustering
    • Each document must have a similarity exceeding a stated threshold value with at least one other document in the same class.
    • The similarity between a pair of clusters is taken to be the similarity between the most similar pair of items.
    • Each cluster member will be more similar to at least one member in that same cluster than to any member of another cluster.

  9. How to Combine Clusters? (Continued)
  • Complete-link clustering
    • Each document has a similarity to all other documents in the same class that exceeds the threshold value.
    • The similarity between the least similar pair of items from the two clusters is used as the cluster similarity.
    • Each cluster member is more similar to the most dissimilar member of that cluster than to the most dissimilar member of any other cluster.

  10. How to Combine Clusters? (Continued)
  • Group-average link clustering
    • a compromise between the extremes of single-link and complete-link systems
    • each cluster member has a greater average similarity to the remaining members of that cluster than it does to all members of any other cluster
  (All three criteria are sketched as code below.)
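To make the three criteria concrete, here is a small sketch expressing each as a function of the cross-cluster pairwise similarities; this is the role the combine argument plays in the HAC sketch above. Function names are illustrative.

    def cross_sims(c1, c2, sim):
        # all pairwise similarities between members of the two clusters
        return [sim(a, b) for a in c1 for b in c2]

    def single_link_sim(c1, c2, sim):    # most similar cross pair
        return max(cross_sims(c1, c2, sim))

    def complete_link_sim(c1, c2, sim):  # least similar cross pair
        return min(cross_sims(c1, c2, sim))

    def group_average_sim(c1, c2, sim):  # average over all cross pairs
        xs = cross_sims(c1, c2, sim)
        return sum(xs) / len(xs)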

  11. Example for Agglomerative Clustering
  • Six items A-F give 6(6-1)/2 = 15 pairwise similarities, which are processed in decreasing order.

  12. Single Link Clustering
  Similarity matrix:

        A    B    C    D    E    F
   A    .   .3   .5   .6   .8   .9
   B   .3    .   .4   .5   .7   .8
   C   .5   .4    .   .3   .5   .2
   D   .6   .5   .3    .   .4   .1
   E   .8   .7   .5   .4    .   .3
   F   .9   .8   .2   .1   .3    .

  Step 1. AF 0.9: merge A and F; sim(AF,X) = max(sim(A,X), sim(F,X)).

        AF    B    C    D    E
   AF    .   .8   .5   .6   .8
   B    .8    .   .4   .5   .7
   C    .5   .4    .   .3   .5
   D    .6   .5   .3    .   .4
   E    .8   .7   .5   .4    .

  Step 2. AE 0.8: merge AF and E; sim(AEF,X) = max(sim(AF,X), sim(E,X)).
  (Dendrogram so far: A and F joined at 0.9, E joined at 0.8.)

  13. Single Link Clustering (Cont.)
  Similarity matrix:

        AEF    B    C    D
   AEF    .   .8   .5   .6
   B     .8    .   .4   .5
   C     .5   .4    .   .3
   D     .6   .5   .3    .

  Step 3. BF 0.8: merge AEF and B; sim(ABEF,X) = max(sim(AEF,X), sim(B,X)). Note that E and B join the dendrogram on the same level (0.8).

        ABEF    C    D
   ABEF     .  .5   .6
   C       .5   .   .3
   D       .6  .3    .

  Step 4. BE 0.7: B and E are already in the same cluster, so no merge occurs. The next merge will use sim(ABDEF,X) = max(sim(ABEF,X), sim(D,X)).

  14. Single Link Clustering (Cont.)
  Step 5. AD 0.6: merge ABEF and D, forming ABDEF. (Dendrogram: D joins at 0.6.)

        ABDEF    C
   ABDEF     .  .5
   C        .5   .

  Step 6. AC 0.5: merge ABDEF and C; all six items are now in one cluster. (Dendrogram: C joins at 0.5.)

  15. Single-Link Clusters
  • Similarity level 0.7 (i.e., similarity threshold): clusters ABEF, C, D
  • Similarity level 0.5: a single cluster ABCDEF
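As a cross-check, the same partitions fall out of SciPy's hierarchical clustering if similarity is converted to distance as 1 - sim (a convention assumed here; the chapter works directly with similarities):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    # The slide-12 similarity matrix for items A-F, with a unit diagonal.
    S = np.array([[1., .3, .5, .6, .8, .9],
                  [.3, 1., .4, .5, .7, .8],
                  [.5, .4, 1., .3, .5, .2],
                  [.6, .5, .3, 1., .4, .1],
                  [.8, .7, .5, .4, 1., .3],
                  [.9, .8, .2, .1, .3, 1.]])
    Z = linkage(squareform(1 - S), method='single')
    # Similarity threshold 0.7 = distance 0.3: clusters ABEF, C, D.
    print(fcluster(Z, t=0.3, criterion='distance'))
    # Similarity threshold 0.5 = distance 0.5: one cluster ABCDEF.
    print(fcluster(Z, t=0.5, criterion='distance'))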

  16. Complete-Link Cluster Generation
  Similarity matrix (the same as for single link):

        A    B    C    D    E    F
   A    .   .3   .5   .6   .8   .9
   B   .3    .   .4   .5   .7   .8
   C   .5   .4    .   .3   .5   .2
   D   .6   .5   .3    .   .4   .1
   E   .8   .7   .5   .4    .   .3
   F   .9   .8   .2   .1   .3    .

  For complete link, the cluster similarity is that of the least similar pair, sim(AF,X) = min(sim(A,X), sim(F,X)), so two clusters can be combined only after every pair between them has appeared ("been covered") in the sorted pair list. Each step records the pair taken up, the check operations, and the pairs covered so far.

  Step 1. AF 0.9: new cluster AF.
  Step 2. AE 0.8: check EF - not yet covered; pairs covered: (A,E)(A,F).
  Step 3. BF 0.8: check AB - not yet covered; pairs covered: (A,E)(A,F)(B,F).

  17. Complete-Link Cluster Generation (Cont.)
  Similarity matrix (complete-link values, using min):

        AF    B    C    D    E
   AF    .   .3   .2   .1   .3
   B    .3    .   .4   .5   .7
   C    .2   .4    .   .3   .5
   D    .1   .5   .3    .   .4
   E    .3   .7   .5   .4    .

  Step 4. BE 0.7: new cluster BE; pairs covered: (A,E)(A,F)(B,E)(B,F).
  Step 5. AD 0.6: check DF - not yet covered; pairs covered: (A,D)(A,E)(A,F)(B,E)(B,F).
  Step 6. AC 0.5: check CF - not yet covered; pairs covered: (A,C)(A,D)(A,E)(A,F)(B,E)(B,F).
  Step 7. BD 0.5: check DE - not yet covered; pairs covered: (A,C)(A,D)(A,E)(A,F)(B,D)(B,E)(B,F).

  18. Complete-Link Cluster Generation (Cont.)
  Similarity matrix:

        AF   BE    C    D
   AF    .   .3   .2   .1
   BE   .3    .   .4   .4
   C    .2   .4    .   .3
   D    .1   .4   .3    .

  Step 8. CE 0.5: check BC - not yet covered; pairs covered: (A,C)(A,D)(A,E)(A,F)(B,D)(B,E)(B,F)(C,E).
  Step 9. BC 0.4: check CE - already covered (0.5), so C joins BE, forming cluster BCE at level 0.4.
  Step 10. DE 0.4: check BD - covered (0.5) - but CD is not; pairs covered: (A,C)(A,D)(A,E)(A,F)(B,C)(B,D)(B,E)(B,F)(C,E)(D,E).
  Step 11. AB 0.3: check AC (0.5), AE (0.8), BF (0.8) - all covered - but CF and EF are not; pairs covered: (A,B)(A,C)(A,D)(A,E)(A,F)(B,C)(B,D)(B,E)(B,F)(C,E)(D,E).

  19. Complete-Link Cluster Generation (Cont.)
  Similarity matrix:

        AF  BCE    D
   AF    .   .2   .1
   BCE  .2    .   .3
   D    .1   .3    .

  Step 12. CD 0.3: check BD (0.5) and DE (0.4) - both covered, so D joins BCE, forming cluster BCDE at level 0.3.
  Step 13. EF 0.3: check BF (0.8) - covered - but CF and DF are not; pairs covered: (A,B)(A,C)(A,D)(A,E)(A,F)(B,C)(B,D)(B,E)(B,F)(C,D)(C,E)(D,E)(E,F).
  Step 14. CF 0.2: check BF (0.8) and EF (0.3) - covered - but DF is not; pairs covered: (A,B)(A,C)(A,D)(A,E)(A,F)(B,C)(B,D)(B,E)(B,F)(C,D)(C,E)(C,F)(D,E)(E,F).

  20. Complete-Link Cluster Generation (Cont.)
  Similarity matrix:

          AF  BCDE
   AF      .    .1
   BCDE   .1     .

  Step 15. DF 0.1: the last pair - clusters AF and BCDE are combined at level 0.1.
  (Final dendrogram: A and F joined at 0.9; B and E at 0.7; C joins BE at 0.4; D joins BCE at 0.3.)

  21. Complete-Link Clusters
  • Similarity level 0.7: clusters AF (formed at 0.9) and BE (formed at 0.7); C and D remain unclustered.
  • Similarity level 0.4: clusters AF and BCE; D remains unclustered.
  • Similarity level 0.3: clusters AF and BCDE.
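The complete-link result can be cross-checked the same way as the single-link one, using method='complete' on 1 - sim distances. One caveat: this matrix contains tied similarities (e.g., BD = CE = 0.5), so SciPy's intermediate merge order may differ from the hand trace; the level-0.3 partition, however, comes out the same under either tie-break.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    S = np.array([[1., .3, .5, .6, .8, .9],
                  [.3, 1., .4, .5, .7, .8],
                  [.5, .4, 1., .3, .5, .2],
                  [.6, .5, .3, 1., .4, .1],
                  [.8, .7, .5, .4, 1., .3],
                  [.9, .8, .2, .1, .3, 1.]])
    Z = linkage(squareform(1 - S), method='complete')
    # Similarity level 0.3 = distance 0.7: clusters AF and BCDE.
    print(fcluster(Z, t=0.7, criterion='distance'))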

  22. Group Average Link Clustering
  • Group-average link clustering
    • uses the average value of the pairwise links within a cluster to determine similarity
    • all objects contribute to intercluster similarity
    • results in a structure intermediate between the loosely bound single-link clusters and the tightly bound complete-link clusters

  23. Comparison
  • The behavior of single-link clustering
    • The single-link process tends to produce a small number of large clusters that are characterized by a chaining effect.
    • Each element is usually attached to only one other member of the same cluster at each similarity level.
    • It is sufficient to remember the list of previously clustered single items.

  24. Comparison
  • The behavior of complete-link clustering
    • The complete-link process produces a much larger number of small, tightly linked groupings.
    • Each item in a complete-link cluster is guaranteed to resemble all other items in that cluster at the stated similarity level.
    • It is necessary to remember the list of all item pairs previously considered in the clustering process.
  • Comparison
    • The complete-link clustering system may be better adapted to retrieval than the single-link clusters.
    • Complete-link cluster generation is more expensive to perform than a comparable single-link process.

  25. How to Generate Similarity
  • Di = (w1,i, w2,i, ..., wt,i): document vector for Di
  • Lj = (lj,1, lj,2, ..., lj,nj): inverted list for term Tj
    • lj,i denotes the document identifier of the ith document listed under term Tj
    • nj denotes the number of postings for term Tj

  for j = 1 to t (for each of t possible terms)
      for i = 1 to nj (for all nj entries on the jth list)
          compute sim(Dlj,i, Dlj,k) for i+1 <= k <= nj
      end for
  end for
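A Python rendering of this loop might look as follows, assuming sim is any vector similarity (e.g., cosine) and that the inverted file maps each term to the documents posted under it; all names are illustrative.

    from itertools import combinations

    def pairwise_similarities(inverted, vectors, sim):
        """inverted : {term: [document ids posted under that term]}
        vectors  : {document id: weighted term vector}
        Only documents that share at least one term are ever compared."""
        out = {}
        for term, postings in inverted.items():       # each of t terms
            for d1, d2 in combinations(postings, 2):  # pairs on the jth list
                key = (min(d1, d2), max(d1, d2))
                out[key] = sim(vectors[d1], vectors[d2])
        return out

Note that a pair sharing several terms is recomputed once per shared term, which motivates the next slide.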

  26. Similarity without Recomputation

  set Sj,i = 0 for 1 <= i, j <= N
  for j = 1 to N (for each document in the collection)
      for each term k in document Dj
          take up inverted list Lk
          for i = 1 to nk (for each document identifier on list Lk)
              if j < lk,i or Sj,i = 1
                  then take up the next document
                  else compute sim(Dj, Dlk,i) and set Sj,i = 1
          end for
      end for
  end for
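A sketch of the same idea in Python, with the Sj,i flags kept as a per-document set (names are illustrative):

    def pairwise_similarities_once(docs, inverted, sim):
        """docs     : {document id: weighted term vector}
        inverted : {term: [document ids posted under that term]}
        Each similarity is computed exactly once."""
        out = {}
        for j in docs:                         # each document in collection
            seen = set()                       # the Sj,i flags for this j
            for term in docs[j]:               # each term k in document Dj
                for i in inverted[term]:       # each identifier on list Lk
                    if i < j and i not in seen:  # only ids below j, unseen
                        out[(i, j)] = sim(docs[i], docs[j])
                        seen.add(i)
        return out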

  27. Heuristic Clustering Methods
  • Hierarchical clustering strategies
    • use all pairwise similarities between items
    • the cluster-generation process is relatively expensive
    • produce a unique set of well-formed clusters for each set of data, regardless of the order in which the similarity pairs are introduced into the clustering process
  • Heuristic clustering methods
    • produce rough cluster arrangements at relatively little expense
    • example: single-pass clustering

  28. Single-Pass Clustering Heuristic Methods
  • Item 1 is first taken and placed into a cluster of its own.
  • Each subsequent item is then compared against all existing clusters: compute the similarities between all existing centroids and the new incoming item.
  • The item is placed in a previously existing cluster whenever it is sufficiently similar to that cluster.
  • When an item is added to an existing cluster, the corresponding centroid must then be appropriately updated.
  • If a new item is not sufficiently similar to any existing cluster, the new item forms a cluster of its own.
  (A sketch follows below.)
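A minimal single-pass sketch, assuming vectors are dicts of term weights and that an item joins the best-matching cluster above a fixed threshold (one of several possible placement rules); the names and the running-average centroid update are illustrative choices.

    def single_pass(docs, sim, threshold):
        """docs: list of term-weight dicts. Returns (centroid, members) pairs."""
        clusters = []
        for idx, d in enumerate(docs):
            # compare the incoming item against all existing centroids
            best, best_sim = None, threshold
            for centroid, members in clusters:
                s = sim(centroid, d)
                if s >= best_sim:
                    best, best_sim = (centroid, members), s
            if best is None:
                # not sufficiently similar to any cluster: start a new one
                clusters.append((dict(d), [idx]))
            else:
                centroid, members = best
                members.append(idx)
                n = len(members)
                # update the centroid as the running average of its members
                for t in set(centroid) | set(d):
                    centroid[t] = (centroid.get(t, 0.0) * (n - 1) + d.get(t, 0.0)) / n
        return clusters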

  29. Single-Pass Clustering Heuristic Methods (Continued)
  • Characteristics
    • Produces uneven cluster structures.
      • Solutions: cluster splitting (to control cluster sizes); variable similarity thresholds (to control the number of clusters and the overlap among clusters).
    • Produces cluster arrangements that vary according to the order of individual items.

  30. Cluster Splitting
  [Figure: the addition of one more item to cluster A forces cluster A to be split into two pieces A' and A'', which in turn may force the supercluster S to be split into two pieces S' and S''.]

  31. Cluster Searching
  • Cluster centroid: the average vector of all the documents in a given cluster
  • Strategies
    • top down: the query is first compared with the highest-level centroids
    • bottom up: only the lowest-level centroids are stored; the higher-level cluster structure is disregarded

  32. Top-down entire-clustering search
  1. Initialize by adding the top item to the active node list.
  2. Take the centroid with the highest query similarity from the active node list:
     if the number of singleton items in the subtree headed by that centroid is no larger than the number of items still wanted,
        then retrieve these singleton items and eliminate the centroid from the active node list;
        else eliminate the centroid from the active node list and add its sons to the active node list.
  3. If the number of items retrieved >= the number wanted, then stop; else repeat step 2.

  33. Example trace of a top-down search (each active-node entry is (node, query-centroid similarity); the trace is consistent with four items wanted):

   Active node list                        Singleton items in subtree    Retrieved items
   (1,0.2)                                 14 (too big)
   (2,0.5), (4,0.7), (3,0)                 6 (too big)
   (2,0.5), (8,0.8), (9,0.3), (3,0)        2                             I, J
   (2,0.5), (9,0.3), (3,0)                 4 (too big)
   (5,0.6), (6,0.5), (9,0.3), (3,0)        2                             A, B
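A sketch of this search, assuming a simple node type where internal nodes carry a centroid, children, and a precomputed singleton count, and leaves carry documents; the "still wanted" reading of step 2 matches the trace above. All field names are illustrative.

    import heapq

    def top_down_search(root, query, sim, wanted):
        """Nodes: .centroid, .children (empty for leaves), .n_docs
        (number of singleton items in the subtree). Returns leaf nodes."""
        # active node list as a max-heap on query-centroid similarity
        active = [(-sim(query, root.centroid), id(root), root)]
        retrieved = []
        while active and len(retrieved) < wanted:
            _, _, node = heapq.heappop(active)   # highest-similarity centroid
            if node.n_docs <= wanted - len(retrieved):
                retrieved.extend(leaves(node))   # small enough: take subtree
            else:
                for son in node.children:        # too big: expand into sons
                    heapq.heappush(active, (-sim(query, son.centroid), id(son), son))
        return retrieved

    def leaves(node):
        # singleton items (leaf nodes) in the subtree headed by node
        return [node] if not node.children else [x for c in node.children for x in leaves(c)]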

  34. Bottom-up Individual-Cluster Search
  Take a specified number of low-level centroids;
  if there are enough singleton items in those clusters to equal the number of items wanted,
     then retrieve the number of items wanted in ranked order;
     else add additional low-level centroids to the list and repeat the test.

  35. Example of a bottom-up search
  • Active centroid list: (8, 0.8), (4, 0.7), (5, 0.6)
  • Ranked documents from those clusters: (I, 0.9), (L, 0.8), (A, 0.8), (K, 0.6), (B, 0.5), (J, 0.4), (N, 0.4), (M, 0.2)
  • Retrieved items: I, L, A
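A corresponding sketch of the bottom-up search, assuming only the lowest-level centroids are stored, each with its member documents; the data layout is illustrative.

    def bottom_up_search(clusters, query, sim, wanted):
        """clusters: list of (centroid, [(doc id, doc vector), ...])."""
        # rank the low-level centroids by query-centroid similarity
        by_centroid = sorted(clusters, key=lambda c: sim(query, c[0]), reverse=True)
        pool, n = [], 0
        # add centroids until enough singleton items are available
        while len(pool) < wanted and n < len(by_centroid):
            pool.extend(by_centroid[n][1])
            n += 1
        # retrieve the wanted number of items in ranked order
        ranked = sorted(pool, key=lambda d: sim(query, d[1]), reverse=True)
        return [doc_id for doc_id, _ in ranked[:wanted]]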
