
Clustering

Learn about clustering in data analysis, including measuring quality, similarity metrics, and different types of data. Explore partitioning, hierarchical, density-based, and model-based methods.



Presentation Transcript


  1. Clustering CENG 514

  2. Clustering • Overview • Measuring Quality in Clustering • Similarity Metrics / Types of Data in Clustering • Basic Categorization of Clustering Methods • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Model-Based Methods • Outlier Analysis

  3. What is Clustering? • Cluster: a collection of data objects • Similar to one another within the same cluster • Dissimilar to the objects in other clusters • Clustering/Cluster analysis • Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters • Unsupervised learning: no predefined classes

  4. Important Issues • Scalability • Ability to deal with different types of attributes • Ability to handle dynamic data • Discovery of clusters with arbitrary shape • Minimal requirements for domain knowledge to determine input parameters • Ability to deal with noise and outliers • Insensitivity to the order of input records • High dimensionality • Incorporation of user-specified constraints • Interpretability and usability

  5. Quality of Clustering • A good clustering method will produce high quality clusters with • high intra-class similarity • low inter-class similarity • The quality of a clustering result depends mainly on both the similarity measure used by the method and its implementation

  6. Measure the Quality of Clustering • Dissimilarity/Similarity metric: similarity is expressed in terms of a distance function, typically a metric d(i, j) • The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, etc. variables • Weights should be associated with different variables based on applications and data semantics • It is hard to define “similar enough” or “good enough” • the answer is typically highly subjective

  7. Data Structures • Data matrix (objects vs. attributes: n objects described by p attributes) • Dissimilarity matrix (objects vs. objects: an n x n matrix storing the pairwise distances d(i, j))
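
To make the two structures concrete, here is a minimal sketch (assuming NumPy and a small made-up data set, neither of which appears on the slides) that builds a data matrix of n = 4 objects with p = 2 attributes and derives the corresponding n x n dissimilarity matrix with Euclidean distances:

```python
import numpy as np

# Hypothetical data matrix: n = 4 objects (rows) described by p = 2 attributes (columns)
X = np.array([[1.0, 2.0],
              [3.0, 5.0],
              [2.0, 2.0],
              [8.0, 9.0]])

# Dissimilarity matrix: n x n, symmetric with a zero diagonal,
# holding the pairwise Euclidean distances d(i, j)
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
print(D.round(2))
```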

  8. Types of data in cluster analysis • Interval-scaled variables • Binary variables • Nominal, ordinal, and ratio variables • Variables of mixed types

  9. Interval-valued variables • Normalize data (e.g. z-score normalization) • Calculate the mean absolute deviation: sf = (1/n)(|x1f - mf| + |x2f - mf| + … + |xnf - mf|), where mf = (1/n)(x1f + x2f + … + xnf) • Calculate the standardized measurement (z-score): zif = (xif - mf) / sf • Using the mean absolute deviation is more robust than using the standard deviation
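
A small sketch of this normalization (NumPy assumed, not mentioned on the slide), standardizing one interval-scaled variable with the mean absolute deviation rather than the standard deviation:

```python
import numpy as np

def standardize(x):
    """z-scores computed with the mean absolute deviation s_f."""
    m = x.mean()               # m_f: mean of the variable
    s = np.abs(x - m).mean()   # s_f: mean absolute deviation
    return (x - m) / s         # z_if = (x_if - m_f) / s_f

# hypothetical interval-scaled measurements
print(standardize(np.array([2.0, 4.0, 4.0, 10.0])))
```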

  10. Similarity and Dissimilarity Between Objects • Distances are normally used to measure the similarity or dissimilarity between two data objects • Some popular ones include the Minkowski distance: d(i, j) = (|xi1 - xj1|^q + |xi2 - xj2|^q + … + |xip - xjp|^q)^(1/q), where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and q is a positive integer • If q = 1, d is the Manhattan distance: d(i, j) = |xi1 - xj1| + |xi2 - xj2| + … + |xip - xjp|

  11. Similarity and Dissimilarity Between Objects (Cont.) • If q = 2, d is the Euclidean distance: d(i, j) = sqrt(|xi1 - xj1|^2 + |xi2 - xj2|^2 + … + |xip - xjp|^2) • Properties • d(i, j) >= 0 • d(i, i) = 0 • d(i, j) = d(j, i) • d(i, j) <= d(i, k) + d(k, j) • Also, one can use a weighted distance

  12. Similarity and Dissimilarity Between Objects (Cont.) • Example: X1 = (1,2) X2 = (3,5) Euclidean distance(X1, X2) = sqrt(4+9) = 3.61 Manhattan distance(X1, X2) = 2+3 = 5
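
The same example can be checked with a short sketch (NumPy assumed) of the general Minkowski distance; q = 2 gives the Euclidean and q = 1 the Manhattan distance:

```python
import numpy as np

def minkowski(a, b, q):
    """Minkowski distance of order q between two p-dimensional objects."""
    return float((np.abs(a - b) ** q).sum() ** (1.0 / q))

x1 = np.array([1, 2])
x2 = np.array([3, 5])
print(minkowski(x1, x2, 2))   # Euclidean: sqrt(4 + 9) = 3.61
print(minkowski(x1, x2, 1))   # Manhattan: 2 + 3 = 5
```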

  13. Binary Variables • A contingency table for binary data (object i vs. object j): q = # of variables equal to 1 for both objects, r = # equal to 1 for i and 0 for j, s = # equal to 0 for i and 1 for j, t = # equal to 0 for both • Distance measure for symmetric binary variables: d(i, j) = (r + s) / (q + r + s + t) • Distance measure for asymmetric binary variables: d(i, j) = (r + s) / (q + r + s) • Jaccard coefficient (similarity measure for asymmetric binary variables): sim(i, j) = q / (q + r + s)

  14. Dissimilarity between Binary Variables • Example • gender is a symmetric attribute • the remaining attributes are asymmetric binary • let the values Y and P be set to 1, and the value N be set to 0
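
Since the slide's own table is not reproduced in the transcript, here is a sketch on two made-up asymmetric binary records (1 standing for Y/P, 0 for N) that computes the asymmetric dissimilarity and the Jaccard coefficient from the counts q, r, s defined above:

```python
import numpy as np

# Hypothetical asymmetric binary records (not the slide's own table)
i = np.array([1, 0, 1, 0, 0, 0])
j = np.array([1, 0, 1, 0, 1, 0])

q = int(((i == 1) & (j == 1)).sum())   # 1 for both objects
r = int(((i == 1) & (j == 0)).sum())   # 1 for i, 0 for j
s = int(((i == 0) & (j == 1)).sum())   # 0 for i, 1 for j

d_asym = (r + s) / (q + r + s)   # asymmetric binary dissimilarity
jaccard = q / (q + r + s)        # Jaccard similarity = 1 - d_asym
print(d_asym, jaccard)           # 1/3 and 2/3 for these records
```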

  15. Nominal (Categorical) Variables • A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green • Simple matching: d(i, j) = (p - m) / p • m: # of matches, p: total # of variables

  16. Nominal (Categorical) Variables • Example:

  17. Nominal (Categorical) Variables • Example:
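
As the worked example itself is not reproduced in the transcript, the following sketch applies simple matching to two made-up objects described by p = 4 nominal variables:

```python
import numpy as np

# Hypothetical nominal records (illustrative only)
obj_i = np.array(["red", "single",  "A", "yes"])
obj_j = np.array(["red", "married", "B", "yes"])

p = len(obj_i)
m = int((obj_i == obj_j).sum())   # m: number of matching variables
d = (p - m) / p                   # simple matching dissimilarity
print(d)                          # 2 matches out of 4 -> d = 0.5
```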

  18. Ordinal Variables • An ordinal variable can be discrete or continuous • Order is important, e.g., rank • Can be treated like interval-scaled • replace xif by its rank rif in {1, …, Mf} • map the range of each variable onto [0, 1] by replacing the rank of the i-th object in the f-th variable by zif = (rif - 1) / (Mf - 1) • compute the dissimilarity using methods for interval-scaled variables

  19. Ordinal Variables • Example: assume that there is an ordering fair < good < excellent

  20. Ordinal Variables • Example: assume that there is an ordering fair < good < excellent
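
For the ordering fair < good < excellent (Mf = 3 states), a minimal sketch of the rank-and-rescale step looks like this:

```python
# Ordinal variable with ordering fair < good < excellent (M_f = 3 states)
order = {"fair": 1, "good": 2, "excellent": 3}
M_f = len(order)

def to_interval(value):
    """Replace a value by its rank r_if and map it onto [0, 1]."""
    r = order[value]
    return (r - 1) / (M_f - 1)    # z_if = (r_if - 1) / (M_f - 1)

print([to_interval(v) for v in ["fair", "good", "excellent"]])  # [0.0, 0.5, 1.0]
```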

  21. Variables of Mixed Types • A database may contain all of these types of variables • symmetric binary, asymmetric binary, nominal, ordinal, interval • One may use a weighted formula to combine their effects: d(i, j) = (Σf δij(f) dij(f)) / (Σf δij(f)), where the indicator δij(f) is 0 if xif or xjf is missing (or if xif = xjf = 0 and f is asymmetric binary) and 1 otherwise • f is binary or nominal: dij(f) = 0 if xif = xjf, or dij(f) = 1 otherwise • f is interval-based: use the normalized distance • f is ordinal • compute the rank rif and zif = (rif - 1) / (Mf - 1) • and treat zif as interval-scaled

  22. Vector Objects • Vector objects: keywords in documents, gene features in micro-arrays, etc. • Broad applications: information retrieval, biologic taxonomy, etc. • Cosine measure: given two vectors of attributes, A and B, the cosine similarity is expressed with a dot product and magnitudes: s(A, B) = (A . B) / (||A|| ||B||) • A variant: the Tanimoto coefficient, which yields the Jaccard coefficient in the case of binary attributes

  23. Vector Objects Example: Given two vectors: x = (1, 1, 0, 0) y = (0, 1, 1, 0) s(x, y) = (0 + 1 + 0 + 0) / (sqrt(2) * sqrt(2)) = 0.5
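
A sketch that reproduces this calculation (NumPy assumed):

```python
import numpy as np

def cosine_similarity(a, b):
    """cos(A, B) = (A . B) / (||A|| * ||B||)"""
    return float(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.array([1, 1, 0, 0])
y = np.array([0, 1, 1, 0])
print(cosine_similarity(x, y))   # 1 / (sqrt(2) * sqrt(2)) = 0.5
```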

  24. Typical Alternatives to Calculate the Distance between Clusters • Single link: smallest distance between an element in one cluster and an element in the other, i.e., dis(Ki, Kj) = min d(tip, tjq) over all tip in Ki and tjq in Kj • Complete link: largest distance between an element in one cluster and an element in the other, i.e., dis(Ki, Kj) = max d(tip, tjq) • Average: average distance between an element in one cluster and an element in the other, i.e., dis(Ki, Kj) = avg d(tip, tjq) • Centroid: distance between the centroids of two clusters, i.e., dis(Ki, Kj) = d(Ci, Cj) • Medoid: distance between the medoids of two clusters, i.e., dis(Ki, Kj) = d(Mi, Mj) • Medoid: one chosen, centrally located object in the cluster
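
A short sketch (NumPy assumed, with two made-up clusters) that evaluates the single-link, complete-link, average, and centroid distances listed above:

```python
import numpy as np

def pairwise(Ki, Kj):
    """All pairwise Euclidean distances d(t_ip, t_jq) between two clusters."""
    return np.sqrt(((Ki[:, None, :] - Kj[None, :, :]) ** 2).sum(axis=-1))

# Two small hypothetical clusters
Ki = np.array([[1.0, 1.0], [2.0, 1.0]])
Kj = np.array([[6.0, 5.0], [7.0, 7.0]])

D = pairwise(Ki, Kj)
print(D.min())                                   # single link
print(D.max())                                   # complete link
print(D.mean())                                  # average
print(np.linalg.norm(Ki.mean(0) - Kj.mean(0)))   # centroid distance
```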

  25. Centroid, Radius and Diameter of a Cluster (for numerical data sets) • Centroid: the “middle” of a cluster, Cm = (Σi=1..N ti) / N • Radius: square root of the average squared distance from the points of the cluster to its centroid, Rm = sqrt(Σi=1..N (ti - Cm)^2 / N) • Diameter: square root of the average squared distance between all pairs of points in the cluster, Dm = sqrt(Σi Σj (ti - tj)^2 / (N (N - 1)))

  26. Major Clustering Approaches • Partitioning approach: • Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared distances • Typical methods: k-means, k-medoids, CLARANS • Hierarchical approach: • Create a hierarchical decomposition of the set of data (or objects) using some criterion • Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON • Density-based approach: • Based on connectivity and density functions • Typical methods: DBSCAN, OPTICS, DenClue

  27. Partitioning Algorithms: Basic Concept • Partitioning method: construct a partition of a database D of n objects into a set of k clusters such that the sum of squared distances E = Σm=1..k Σt in Km (t - Cm)^2 is minimized, where Cm is the representative (mean or medoid) of cluster Km • Given k, find a partition of k clusters that optimizes the chosen partitioning criterion • Global optimum: exhaustively enumerate all partitions • Heuristic methods: k-means and k-medoids algorithms • k-means (MacQueen'67): each cluster is represented by the center of the cluster • k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw'87): each cluster is represented by one of the objects in the cluster

  28. The K-Means Clustering Method • Given k, the k-means algorithm is implemented in four steps: • Partition the objects into k nonempty subsets • Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster) • Assign each object to the cluster with the nearest seed point • Go back to Step 2; stop when there are no more new assignments
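
A compact sketch of these four steps (NumPy assumed; initial centers chosen at random rather than by an explicit initial partition):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal k-means following the steps above (a sketch, not a library API)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # initial seed points
    for _ in range(max_iter):
        # assign each object to the cluster with the nearest seed point
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        # recompute the centroid (mean point) of each cluster
        new_centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):   # stop when nothing changes
            break
        centers = new_centers
    return labels, centers

X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 0.5], [8.0, 8.0], [9.0, 9.5]])
print(kmeans(X, k=2))
```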

  29. The K-Means Clustering Method • Example (K = 2): arbitrarily choose K objects as the initial cluster centers, assign each object to the most similar center, update the cluster means, then reassign; the update and reassign steps repeat until the assignments no longer change. (The original slide illustrates these steps on 10 x 10 scatter plots.)

  30. Comments on the K-Means Method • Strength: relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n. • Comparing: PAM: O(k(n-k)^2), CLARA: O(ks^2 + k(n-k)) • Comment: often terminates at a local optimum. • Weaknesses • Applicable only when the mean is defined (a problem for categorical data) • Need to specify k, the number of clusters, in advance • Unable to handle noisy data and outliers • Not suitable for discovering clusters with non-convex shapes

  31. Variations of the K-Means Method • Variants of k-means differ in • Selection of the initial k means • Dissimilarity calculations • Strategies to calculate cluster means • Handling categorical data: k-modes (Huang'98) • Replacing means of clusters with modes • Using new dissimilarity measures to deal with categorical objects • Using a frequency-based method to update modes of clusters • A mixture of categorical and numerical data: k-prototype method

  32. Outlier Problem • The k-means algorithm is sensitive to outliers, since an object with an extremely large value may substantially distort the distribution of the data. • K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in the cluster. (The original slide illustrates the effect on two 10 x 10 scatter plots.)

  33. The K-Medoids Clustering Method • Find representative objects, called medoids, in clusters • PAM (Partitioning Around Medoids, 1987) • starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering • PAM works effectively for small data sets, but does not scale well for large data sets • CLARA (Kaufmann & Rousseeuw, 1990) • CLARANS (Ng & Han, 1994): randomized sampling

  34. A Typical K-Medoids Algorithm (PAM) • K = 2: arbitrarily choose k objects as initial medoids and assign each remaining object to the nearest medoid (total cost = 20 in the slide's illustration) • Randomly select a non-medoid object, Orandom • Compute the total cost of swapping a medoid with Orandom (total cost = 26 in the illustration) • Swap O and Orandom if the quality is improved • Do loop until no change. (The original slide shows these steps on 10 x 10 scatter plots.)

  35. What Is the Problem with PAM? • PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean • PAM works efficiently for small data sets but does not scale well for large data sets: O(k(n-k)^2) for each iteration, where n is # of data points and k is # of clusters • Sampling-based method: CLARA (Clustering LARge Applications)

  36. Hierarchical Clustering • Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition • Agglomerative (AGNES): bottom-up; starting from the single objects a, b, c, d, e (step 0), clusters are merged step by step, e.g. ab, de, cde, and finally abcde (step 4) • Divisive (DIANA): top-down; produces the same hierarchy in the reverse order (step 4 back to step 0)

  37. AGNES (Agglomerative Nesting) • Introduced in Kaufmann and Rousseeuw (1990) • Uses the single-link method and the dissimilarity matrix • Merge nodes that have the least dissimilarity • Eventually all nodes belong to the same cluster

  38. Dendrogram: Shows How the Clusters are Merged • Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram • A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster

  39. DIANA (Divisive Analysis) • Introduced in Kaufmann and Rousseeuw (1990) • Inverse order of AGNES • Eventually each node forms a cluster on its own

  40. Well-known Hierarchical Clustering Methods • Major weaknesses of agglomerative clustering methods • do not scale well: time complexity of at least O(n^2), where n is the total number of objects • can never undo what was done previously • Integration of hierarchical with distance-based clustering • BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters • ROCK (1999): clustering categorical data by neighbor and link analysis • CHAMELEON (1999): hierarchical clustering using dynamic modeling

  41. BIRCH (1996) • BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies (Zhang, Ramakrishnan & Livny, SIGMOD'96) • Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering • Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure) • Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree • Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans • Weakness: handles only numeric data, and is sensitive to the order of the data records

  42. Clustering Feature Vector in BIRCH • Clustering Feature: CF = (N, LS, SS) • N: number of data points • LS: linear sum of the N points, Σi=1..N Xi • SS: square sum of the N points, Σi=1..N Xi^2 • Example: for the points (3,4), (2,6), (4,5), (4,7), (3,8), CF = (5, (16,30), (54,190))
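
The CF vector in the slide's example can be reproduced directly from its definition (a sketch, NumPy assumed):

```python
import numpy as np

# Points from the slide's example
points = np.array([[3, 4], [2, 6], [4, 5], [4, 7], [3, 8]])

N = len(points)                  # number of data points
LS = points.sum(axis=0)          # linear sum of the points
SS = (points ** 2).sum(axis=0)   # square sum of the points

print(N, LS, SS)                 # 5, [16 30], [54 190] -> CF = (5, (16,30), (54,190))
```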

  43. CF-Tree in BIRCH • Clustering feature: • summary of the statistics for a given subcluster: the 0-th, 1st and 2nd moments of the subcluster from the statistical point of view • registers crucial measurements for computing clusters and utilizes storage efficiently • A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering • A nonleaf node in the tree has descendants or “children” • The nonleaf nodes store the sums of the CFs of their children • A CF tree has two parameters • Branching factor: specifies the maximum number of children • Threshold: maximum diameter of the sub-clusters stored at the leaf nodes

  44. The CF Tree Structure • Root node: CF entries CF1 … CF6, each with a pointer to a child node (branching factor B = 7) • Non-leaf nodes: CF entries CF1 … CF5 with child pointers • Leaf nodes: at most L = 6 CF entries each, chained together with prev/next pointers

  45. Density-Based Clustering Methods • Clustering based on density (a local cluster criterion), such as density-connected points • Major features: • Discover clusters of arbitrary shape • Handle noise • One scan • Need density parameters as termination condition • Well-known examples: • DBSCAN: Ester, et al. (KDD'96) • OPTICS: Ankerst, et al. (SIGMOD'99) • DENCLUE: Hinneburg & D. Keim (KDD'98) • CLIQUE: Agrawal, et al. (SIGMOD'98) (more grid-based)

  46. Density-Based Clustering: Basic Concepts • Two parameters: • Eps: maximum radius of the neighbourhood • MinPts: minimum number of points in an Eps-neighbourhood of that point • NEps(p) = {q belongs to D | dist(p, q) <= Eps} • Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if • p belongs to NEps(q) • core point condition: |NEps(q)| >= MinPts • (illustrated on the slide with points p and q, Eps = 1 cm, MinPts = 5)

  47. Density-Reachable and Density-Connected • Density-reachable: a point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn with p1 = q and pn = p such that pi+1 is directly density-reachable from pi • Density-connected: a point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts

  48. DBSCAN: Density-Based Spatial Clustering of Applications with Noise • Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points • Discovers clusters of arbitrary shape in spatial databases with noise • (illustrated on the slide with core, border, and outlier points for Eps = 1 cm, MinPts = 5)

  49. DBSCAN: The Algorithm • Arbitrarily select a point p • Retrieve all points density-reachable from p w.r.t. Eps and MinPts • If p is a core point, a cluster is formed • If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the database • Continue the process until all of the points have been processed
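
A minimal sketch of DBSCAN along these lines (NumPy assumed; a brute-force neighbourhood search rather than a spatial index, with -1 marking noise/outliers):

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch: labels >= 0 are cluster ids, -1 marks noise."""
    n = len(X)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]   # N_Eps(p)
    cluster = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        if len(neighbors[p]) < min_pts:   # not a core point: border or noise
            continue
        labels[p] = cluster               # core point: start a new cluster
        queue = list(neighbors[p])
        while queue:                      # collect everything density-reachable from p
            q = queue.pop()
            if labels[q] == -1:
                labels[q] = cluster
            if not visited[q]:
                visited[q] = True
                if len(neighbors[q]) >= min_pts:
                    queue.extend(neighbors[q])   # q is also a core point: keep expanding
        cluster += 1
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
print(dbscan(X, eps=1.0, min_pts=5))
```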

  50. DBSCAN: Sensitive to Parameters
