
Clustering


Presentation Transcript


  1. Clustering • Basic concepts with simple examples • Categories of clustering methods • Challenges (CSE572: Data Mining, by H. Liu)

  2. What is clustering? • The process of grouping a set of physical or abstract objects into classes of similar objects. • It is also called unsupervised learning. • It is a common and important task that finds many applications • Examples of clusters? • Examples where we need clustering?

  3. Differences from Classification • How different? • Which one is more difficult as a learning problem? • Do we perform clustering in daily activities? • How do we cluster? • How to measure the results of clustering? • With/without class labels • Between classification and clustering • Semi-supervised clustering

  4. Major clustering methods • Partitioning methods • k-Means (and EM), k-Medoids • Hierarchical methods • agglomerative, divisive, BIRCH • Similarity and dissimilarity of points in the same cluster and from different clusters • Distance measures between clusters • minimum, maximum • Means of clusters • Average between clusters

  5. How to evaluate • Without labeled data, how can one know whether a clustering result is good? • Basic or intuitive idea of clustering for clustered data points • Within a cluster – high similarity (cohesion) • Between clusters – low similarity (separation) • The relationship between the two? • Evaluation methods • Labeled data – another assumption: instances in the same cluster are of the same class • Unlabeled data – one such measure is sketched below
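The within-cluster and between-cluster criteria above can be measured without labels. One widely used option (not named on the slide, so treat it as an illustrative choice) is the silhouette coefficient, which compares each point's average distance to its own cluster against its distance to the nearest other cluster. A minimal scikit-learn sketch, with sample data invented for illustration:

```python
# Silhouette score: one label-free way to quantify within-cluster
# cohesion vs. between-cluster separation. Data is invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 0.6],
              [5.0, 8.0], [8.0, 8.0], [9.0, 11.0]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Ranges over [-1, 1]; higher means tighter, better-separated clusters.
print(silhouette_score(X, labels))
```

A higher score simultaneously rewards small within-cluster distances and large between-cluster distances, which is exactly the relationship the slide asks about.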

  6. Clustering -- Example 1 • For simplicity, 1-dimension objects and k=2. • Objects: 1, 2, 5, 6, 7 • K-means: • Randomly select 5 and 6 as centroids; • => Two clusters {1,2,5} and {6,7}; meanC1=8/3, meanC2=6.5 • => {1,2}, {5,6,7}; meanC1=1.5, meanC2=6 • => no change. • Aggregate dissimilarity = 0.5^2 + 0.5^2 + 1^2 + 1^2 = 2.5 (a runnable sketch of these iterations follows)
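These iterations can be replayed in a few lines. A minimal NumPy sketch of k-means on the slide's five objects, starting from the same initial centroids 5 and 6 (assumes plain Euclidean distance in one dimension):

```python
import numpy as np

X = np.array([1.0, 2.0, 5.0, 6.0, 7.0])
centroids = np.array([5.0, 6.0])          # initial pick from the slide

while True:
    # Assign each object to its nearest centroid.
    labels = np.argmin(np.abs(X[:, None] - centroids[None, :]), axis=1)
    # Recompute each centroid as the mean of its assigned objects.
    new_centroids = np.array([X[labels == j].mean() for j in range(2)])
    if np.allclose(new_centroids, centroids):
        break                              # no change => converged
    centroids = new_centroids

sse = np.sum((X - centroids[labels]) ** 2)
print(centroids, sse)   # [1.5, 6.0] and 2.5, matching the slide
```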

  7. Issues with k-means • A heuristic method • Sensitive to outliers • How to prove it? • Determining k • Trial and error (one tactic is sketched below) • X-means, PCA-based • Crisp (hard) clustering • Soft alternatives: EM, fuzzy c-means • Not to be confused with k-NN
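For the trial-and-error route to choosing k, one common tactic (an assumption here, not something the slide prescribes) is to track the total within-cluster SSE as k grows and look for an elbow. A short scikit-learn sketch on the example data from the previous slide:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([1.0, 2.0, 5.0, 6.0, 7.0]).reshape(-1, 1)
for k in range(1, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # inertia_ is the aggregate dissimilarity (within-cluster SSE).
    print(k, round(km.inertia_, 2))   # the sharp drop ends at k = 2
```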

  8. Clustering -- Example 2 • For simplicity, we still use 1-dimension objects. • Objects: 1, 2, 5, 6, 7 • agglomerative clustering – a very frequently used algorithm • How to cluster: • find the two closest clusters (represented by their centroids) and merge; • => {1,2}, so we now have {1.5, 5, 6, 7}; • => {1,2}, {5,6}, so {1.5, 5.5, 7}; • => {1,2}, {{5,6},7}. (a runnable sketch follows)
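SciPy can replay these merges. A minimal sketch using centroid distance between clusters, as in the worked trace above; note the first merges all tie at distance 1, so SciPy may break the ties in a different order, but the final two clusters are the same:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([1.0, 2.0, 5.0, 6.0, 7.0]).reshape(-1, 1)
# Each linkage row records: cluster i, cluster j, merge distance, new size.
Z = linkage(X, method='centroid')
print(Z)
# Cutting the dendrogram into two clusters recovers {1,2} and {5,6,7}.
print(fcluster(Z, t=2, criterion='maxclust'))
```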

  9. Issues with dendrograms • How to find proper clusters • An alternative: divisive algorithms • Top down • Compared with bottom-up, which is more efficient? • What's the time complexity? • How to efficiently divide the data • A heuristic – Minimum Spanning Tree http://en.wikipedia.org/wiki/Minimum_spanning_tree • Time complexity – the fastest known MST algorithms run in roughly O(e) time, where e is the number of edges

  10. Distance measures • Single link • Measured by the shortest edge between the two clusters • Complete link • Measured by the longest edge • Average link • Measured by the average edge length • An example is shown next.

  11. An example to show different links • Single link • Merge the nearest clusters, measured by the shortest edge between the two • (((A B) (C D)) E) • Complete link • Merge the nearest clusters, measured by the longest edge between the two • (((A B) E) (C D)) • Average link • Merge the nearest clusters, measured by the average edge length between the two • (((A B) (C D)) E) • [Figure: five points B, A, E, C, D, with A,B and C,D forming close pairs and E between them] (a runnable comparison follows)
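The three groupings can be reproduced with SciPy. The coordinates below are invented stand-ins (the slide's figure gives no numbers), chosen so that A,B and C,D form tight pairs with E off on its own; with them, single and average link merge (A B) with (C D) first, while complete link attaches E to (A B) first:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Invented coordinates for A, B, C, D, E (in that row order).
X = np.array([[0.0, 0.0],   # A
              [1.0, 0.0],   # B
              [2.5, 0.0],   # C
              [3.5, 0.0],   # D
              [0.5, 2.6]])  # E

for method in ('single', 'complete', 'average'):
    # Rows: cluster i, cluster j, merge distance, size of merged cluster.
    print(method)
    print(linkage(X, method=method))
```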

  12. Other Methods • Density-based methods • DBSCAN: a cluster is a maximal set of density-connected points • Core points are defined using an epsilon-neighborhood and minPts • Directly density-reachable pairs (e.g., P and Q, Q and M), density-reachable pairs (e.g., P and M, assuming P and N are as well), and density-connected points (any density-reachable points: P, Q, M, N) together form clusters • Grid-based methods • STING: the lowest level is the original data • statistical parameters of higher-level cells are computed from the parameters of the lower-level cells (count, mean, standard deviation, min, max, distribution) • Model-based methods • Conceptual clustering: COBWEB • Category utility • Intraclass similarity • Interclass dissimilarity

  13. Density-based • DBSCAN – Density-Based Spatial Clustering of Applications with Noise • It grows regions with sufficiently high density into clusters and can discover clusters of arbitrary shape in spatial databases with noise. • Many existing clustering algorithms find only spherical clusters • DBSCAN defines a cluster as a maximal set of density-connected points. • Density is defined by an area and the # of points in it

  14. [Figure: points P, Q, M, O, R, S illustrating the definitions below] • Defining density and connection • ε-neighborhood of an object x (core object) (M, P, O) • MinPts: minimum number of objects within the ε-neighborhood (say, 3) • directly density-reachable (Q from M, M from P) • Only core objects are mutually density-reachable • density-reachable (Q from P, but P not from Q) [asymmetric] • density-connected (O, R, S) [symmetric] for border points • What is the relationship between DR and DC? (a small sketch of these definitions follows)
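A tiny sketch of these definitions in code; eps and min_pts are the two density parameters, and the helper names are made up for illustration (this is not yet a full DBSCAN):

```python
import numpy as np

def eps_neighborhood(X, i, eps):
    """Indices of all points within distance eps of point i (incl. i)."""
    return np.where(np.linalg.norm(X - X[i], axis=1) <= eps)[0]

def is_core(X, i, eps, min_pts):
    """Core object: its eps-neighborhood contains at least min_pts points."""
    return len(eps_neighborhood(X, i, eps)) >= min_pts

def directly_density_reachable(X, j, i, eps, min_pts):
    """j is directly density-reachable from i iff i is a core object and
    j lies in i's eps-neighborhood. Asymmetric when j is a border point."""
    return is_core(X, i, eps, min_pts) and j in eps_neighborhood(X, i, eps)

# Invented data: three tightly spaced points and one distant point.
X = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.0], [3.0, 3.0]])
print(is_core(X, 1, eps=0.5, min_pts=3))                        # True
print(directly_density_reachable(X, 0, 1, eps=0.5, min_pts=3))  # True
print(directly_density_reachable(X, 3, 1, eps=0.5, min_pts=3))  # False
```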

  15. Clustering with DBSCAN • Search for clusters by checking the ε-neighborhood of each instance x • If the ε-neighborhood of x contains at least MinPts points, create a new cluster with x as a core object • Iteratively collect directly density-reachable objects from these core objects and merge density-reachable clusters • Terminate when no new point can be added to any cluster • DBSCAN is sensitive to the thresholds of density, but it is fast • Time complexity O(N log N) if a spatial index is used, O(N^2) otherwise (a runnable sketch follows)
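A minimal scikit-learn run; eps and min_samples are exactly the density thresholds the slide says DBSCAN is sensitive to. The two-blob sample data and parameter values are invented for illustration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one stray point (the noise).
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2],
              [9.0, 0.0]])
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print(labels)   # two clusters (0 and 1); label -1 marks the noise point
```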

  16. Graph-based clustering • Sparsification techniques keep the connections to the most similar (nearest) neighbors of a point while breaking the connections to less similar points. • The nearest neighbors of a point tend to belong to the same class as the point itself. • This reduces the impact of noise and outliers and sharpens the distinction between clusters.
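A minimal sketch of sparsification via a k-nearest-neighbor graph: keep each point's edges to its k most similar neighbors and drop the rest. It uses scikit-learn's kneighbors_graph; the sample data and choice of k are invented:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Two well-separated groups of three points each (invented).
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])

# Sparse adjacency: row i keeps only edges to the 2 nearest neighbors of i.
A = kneighbors_graph(X, n_neighbors=2, mode='connectivity')
print(A.toarray())   # all surviving edges stay within a natural cluster
```

Every cross-group edge is dropped here, which is the noise-dampening, cluster-sharpening effect the slide describes.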

  17. Neural networks • Self-organizing feature maps (SOMs) • Subspace clustering • CLIQUE: if a unit is dense in a k-dimensional space, then its projections into the (k-1)-dimensional subspaces are also dense • More will be discussed later • Semi-supervised clustering http://www.cs.utexas.edu/~ml/publication/unsupervised.html http://www.cs.utexas.edu/users/ml/risc/

  18. Challenges • Scalability • Dealing with different types of attributes • Clusters with arbitrary shapes • Automatically determining input parameters • Dealing with noise (outliers) • Insensitivity to the order in which instances are presented • High dimensionality • Interpretability and usability
