
Design and Evaluation of a Parallel Execution Framework for the CLEVER Clustering Algorithm


Presentation Transcript


  1. Design and Evaluation of a Parallel Execution Framework for the CLEVER Clustering Algorithm. Chung Sheng CHEN, Nauful SHAIKH, Panitee CHAROENRATTANARUK, Christoph F. EICK, Nouhad RIZK and Edgar GABRIEL, Department of Computer Science, University of Houston. Talk Organization: 1. Randomized Hill Climbing; 2. CLEVER, a Prototype-based Clustering Algorithm which Supports Plug-in Fitness Functions; 3. OpenMP and CUDA Versions of CLEVER; 4. Experimental Results; 5. Summary

  2. 1. Randomized Hill Climbing. Randomized hill climbing: sample p points randomly in the neighborhood of the currently best solution and determine the best of the p sampled points. If it is better than the current solution, make it the new current solution and continue the search; otherwise, terminate and return the current solution. Advantages: easy to apply, does not need many resources, usually fast. Problems: How do I define the neighborhood? Which parameter p should I choose? Eick et al., ParCo11, Ghent

  3. Example: Randomized Hill Climbing. Maximize f(x,y,z) = |x - y - 0.2| * |x*z - 0.8| * |0.3 - z*z*y| with x, y, z in [0,1]. Neighborhood design: create 50 solutions s such that s = (min(1, max(0, x+r1)), min(1, max(0, y+r2)), min(1, max(0, z+r3))), with r1, r2, r3 being random numbers in [-0.05, +0.05]. Eick et al., ParCo11, Ghent
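This procedure is small enough to show end to end. Below is a minimal, self-contained C++ sketch: the objective f, the 50-neighbor sample, and the [-0.05, +0.05] perturbation range come from the slide, while the seed, variable names, and random starting point are illustrative assumptions.

#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <random>

using Point = std::array<double, 3>;

// Objective from the slide: f(x,y,z) = |x-y-0.2| * |x*z-0.8| * |0.3-z*z*y|
double f(const Point& s) {
    double x = s[0], y = s[1], z = s[2];
    return std::fabs(x - y - 0.2) * std::fabs(x * z - 0.8) *
           std::fabs(0.3 - z * z * y);
}

int main() {
    std::mt19937 gen(42);                                   // fixed seed, illustrative
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::uniform_real_distribution<double> step(-0.05, 0.05);

    Point cur = {unit(gen), unit(gen), unit(gen)};          // random start in [0,1]^3
    double curF = f(cur);

    for (;;) {
        Point best{};
        double bestF = -1.0;                                // f is a product of absolute values, so >= 0
        for (int i = 0; i < 50; ++i) {                      // sample p = 50 neighbors
            Point s;
            for (int d = 0; d < 3; ++d)                     // perturb and clamp to [0,1]
                s[d] = std::min(1.0, std::max(0.0, cur[d] + step(gen)));
            double v = f(s);
            if (v > bestF) { best = s; bestF = v; }
        }
        if (bestF <= curF) break;                           // no improvement: terminate
        cur = best;                                         // adopt best sampled neighbor
        curF = bestF;
    }
    std::printf("f(%.3f, %.3f, %.3f) = %.6f\n", cur[0], cur[1], cur[2], curF);
    return 0;
}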

  4. 2. CLEVER: Clustering with Plug-in Fitness Functions. • In the last five years, the UH-DMML research group at the University of Houston developed families of clustering algorithms that find contiguous spatial clusters by maximizing a plug-in fitness function. • This work is motivated by a mismatch between the evaluation measures of traditional clustering algorithms (such as cluster compactness) and what domain experts are actually looking for. • Plug-in fitness functions allow domain experts to instruct clustering algorithms with respect to the desirable properties of "good" clusters the algorithm should seek. Eick et al., ParCo11, Ghent

  5. Region Discovery Framework. Eick et al., ParCo11, Ghent

  6. Region Discovery Framework (continued). The algorithms we currently investigate solve the following problem. Given: a dataset O with a schema R; a distance function d defined on instances of R; and a fitness function q(X) that evaluates clusterings X = {c1, …, ck} as follows: q(X) = Σ_{c∈X} reward(c) = Σ_{c∈X} i(c) · size(c)^β with β ≥ 1. Objective: find c1, …, ck ⊆ O such that: • ci ∩ cj = ∅ if i ≠ j • X = {c1, …, ck} maximizes q(X) • All clusters ci ∈ X are contiguous (each pair of objects belonging to ci has to be Delaunay-connected with respect to ci and to d) • c1 ∪ … ∪ ck ⊆ O • c1, …, ck are usually ranked based on the reward each cluster receives, and low-reward clusters are frequently not reported. Eick et al., ParCo11, Ghent
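A minimal C++ sketch of this fitness evaluation follows. The interestingness measure i(c) is the plug-in component, so it is passed in as a callable; the struct layout and names are illustrative assumptions, not the authors' code.

#include <cmath>
#include <functional>
#include <vector>

struct Cluster { std::vector<int> members; };   // indices into the dataset O

// q(X) = sum over clusters c of i(c) * size(c)^beta, with beta >= 1
double fitness(const std::vector<Cluster>& X,
               const std::function<double(const Cluster&)>& interestingness,
               double beta) {
    double q = 0.0;
    for (const Cluster& c : X)
        q += interestingness(c) *
             std::pow(static_cast<double>(c.members.size()), beta);
    return q;
}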

  7. Example 1: Finding Regional Co-location Patterns in Spatial Data. Figure 1: Co-location regions involving deep and shallow ice on Mars. Figure 2: Chemical co-location patterns in the Texas water supply. Objective: find co-location regions using various clustering algorithms and novel fitness functions. Applications: 1. Finding regions on planet Mars where shallow and deep ice are co-located, using point and raster datasets; in Figure 1, regions in red have very high co-location and regions in blue have anti-co-location. 2. Finding co-location patterns involving chemical concentrations with values on the wings of their statistical distributions in Texas' ground water supply; Figure 2 indicates the discovered regions and their associated chemical patterns.

  8. Example 2: Regional Regression. • Geo-regression approaches: multiple regression functions are used that vary depending on location. • Regional regression: discover regions with strong relationships between the dependent and independent variables; construct a regional regression function for each region; when predicting the dependent variable of an object, use the regression function associated with the object's location. Eick et al., ParCo11, Ghent

  9. Representative-based Clustering. [Figure: a dataset plotted over Attribute1 and Attribute2, with four numbered representatives and their induced clusters] Objective: find a set of objects OR such that the clustering X obtained by using the objects in OR as representatives minimizes q(X). Characteristic: clusters are formed by assigning each object to the closest representative. Popular algorithms: K-means, K-medoids/PAM, CLEVER. Eick et al., ParCo11, Ghent
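The characteristic assignment step can be sketched as follows; the nested-vector data layout and the function names are illustrative assumptions, and the distance function d is left abstract, as in the framework definition.

#include <limits>
#include <vector>

// Returns, for each object, the index of its nearest representative
// under the supplied distance function d.
std::vector<int> assignToRepresentatives(
        const std::vector<std::vector<double>>& objects,
        const std::vector<std::vector<double>>& reps,
        double (*d)(const std::vector<double>&, const std::vector<double>&)) {
    std::vector<int> label(objects.size(), 0);
    for (std::size_t i = 0; i < objects.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        for (std::size_t r = 0; r < reps.size(); ++r) {
            double dist = d(objects[i], reps[r]);
            if (dist < best) { best = dist; label[i] = static_cast<int>(r); }
        }
    }
    return label;
}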

  10. The CLEVER Algorithm. • A prototype-based clustering algorithm which supports plug-in fitness functions • Uses a randomized hill climbing procedure to find a "good" set of prototype data objects that represent clusters; "good" means maximizing the plug-in fitness function • Searches for the "correct" number of clusters • CLEVER is powerful but usually slow. [Diagram: the CLEVER hill climbing procedure, built from a neighboring-solutions generator, cluster-member assignment, and the plug-in fitness function] Eick et al., ParCo11, Ghent

  11. Pseudo Code of CLEVER. Inputs: dataset O, k', neighborhood-size, p, q, object-distance-function d or distance matrix D, i-max. Outputs: clustering X, fitness q(X), rewards for clusters in X. Algorithm: 1. Create a current solution by randomly selecting k' representatives from O. 2. If i-max iterations have been performed, terminate with the current solution. 3. Create p neighbors of the current solution randomly using the given neighborhood definition. 4. If the best neighbor improves the fitness q, it becomes the current solution; go back to step 2. 5. If the fitness does not improve, re-sample the solution neighborhood by generating more neighbors: first 2*p additional solutions, and then (q-2)*p more. If re-sampling does not lead to a better solution, terminate and return the current solution (however, clusters that receive a reward of 0 are considered outliers and are therefore not returned); otherwise, go back to step 2, replacing the current solution with the best solution found by re-sampling.
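The control flow of steps 2 through 5 can be rendered as a compact C++ skeleton. This is a hedged sketch, not the authors' implementation: the solution type, the neighbor sampler (assumed to return the best of a batch of random neighbors), and the fitness function are left to the caller, and the outlier handling of step 5 is omitted.

#include <utility>

// Skeleton of steps 2-5 of the CLEVER pseudocode above. SampleBest is
// any callable (solution, count) -> best of `count` random neighbors;
// Fitness is any callable (solution) -> double. All names illustrative.
template <typename Solution, typename SampleBest, typename Fitness>
Solution clever(Solution current, SampleBest bestOfNeighbors, Fitness fitness,
                int p, int q, int iMax) {
    double curF = fitness(current);
    for (int it = 0; it < iMax; ++it) {                       // step 2: iteration budget
        Solution cand = bestOfNeighbors(current, p);          // step 3: p random neighbors
        double candF = fitness(cand);
        if (candF <= curF) {                                  // step 5: re-sample on failure,
            cand = bestOfNeighbors(current, 2 * p);           // first 2*p solutions ...
            candF = fitness(cand);
            if (candF <= curF) {
                cand = bestOfNeighbors(current, (q - 2) * p); // ... then (q-2)*p more
                candF = fitness(cand);
            }
        }
        if (candF > curF) {                                   // step 4: adopt improvement
            current = std::move(cand);
            curF = candF;
        } else {
            break;                                            // no improvement: terminate
        }
    }
    return current;
}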

  12. 3. PAR-CLEVER: A Faster Clustering Algorithm. Candidate parallelization technologies: OpenMP; CUDA (GPU computing); MPI; Map/Reduce. Eick et al., ParCo11, Ghent

  13. Benchmark Data Sets Used. • 10Ovals: size 3,359; fitness function: purity. • Earthquake: size 330,561; fitness function: find clusters with high variance with respect to earthquake depth. • Yahoo Ads Clicks: full size 3,009,071,396, subset 2,910,613; fitness function: minimum intra-cluster distance. Eick et al., ParCo11, Ghent

  14. Parallelization Targets. • Assign cluster members: O(n*k); data parallel and highly independent, so the first priority for parallelization (a minimal OpenMP sketch follows below). • Fitness value calculation: ~O(n). • Neighboring solutions generation: ~O(p). Here n is the number of objects in the dataset, k is the number of clusters in the current solution, and p is the sampling rate (how many neighbors of the current solution are sampled). Eick et al., ParCo11, Ghent
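A minimal OpenMP sketch of the first target, assuming a flat row-major layout with dim attributes per object and squared Euclidean distance (compile with -fopenmp); the names and layout are illustrative assumptions.

#include <cfloat>

// O(n*k) cluster-member assignment: each object's nearest-representative
// search is independent, so the outer loop parallelizes directly.
void assignMembers(const double* objects, int n,   // n objects, row-major
                   const double* reps, int k,      // k representatives, same layout
                   int dim, int* label) {
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i) {
        double best = DBL_MAX;
        int bestR = 0;
        for (int r = 0; r < k; ++r) {
            double dist = 0.0;                     // squared Euclidean distance
            for (int d = 0; d < dim; ++d) {
                double diff = objects[i * dim + d] - reps[r * dim + d];
                dist += diff * diff;
            }
            if (dist < best) { best = dist; bestR = r; }
        }
        label[i] = bestR;
    }
}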

  15. Hardware Specification. • crill-001 to crill-016 (OpenMP): 4 x AMD Opteron 6174 processors, 48 CPU cores at 2200 MHz, 64 GB memory. • crill-101 and crill-102 (GPU computing, NVIDIA CUDA): 2 x AMD Opteron 6174 processors, 24 CPU cores at 2200 MHz, 32 GB memory; 4 x Tesla M2050 GPUs, each with 3 GB memory and 448 CUDA cores. Eick et al., ParCo11, Ghent

  16. 4. Experimental Results: 10Ovals (measured in seconds). Eick et al., ParCo11, Ghent

  17. Experimental Results (continued): 10Ovals. Eick et al., ParCo11, Ghent

  18. Experimental Results: Earthquake (measured in hours). Eick et al., ParCo11, Ghent

  19. Experimental Results (continued): Earthquake. Eick et al., ParCo11, Ghent

  20. Experimental Results: Yahoo (measured in hours). Eick et al., ParCo11, Ghent

  21. Experimental Results (continued): Yahoo. Eick et al., ParCo11, Ghent

  22. CUDA Results: 10Ovals. • The CUDA version evaluates 5,100 solutions in 1.327 seconds and 15,200 solutions in 3.95 seconds. • Speedup = Time(CPU) / Time(GPU): 63x speedup compared to the sequential version; 1.62x speedup compared to the 48-thread OpenMP version.

  23. CUDA Results: Earthquake (preliminary!). • The CUDA version evaluates 28,000 solutions in 143.61 seconds and 21,950 solutions in 109.07 seconds. • Speedup = Time(CPU) / Time(GPU): 6,119x speedup compared to the sequential version; 202x speedup compared to the 48-thread OpenMP version. Eick et al., ParCo11, Ghent

  24. CUDA Implementation: Caching Representatives in Shared Memory. • The representatives are read frequently in the computation that assigns objects to clusters, so the results presented earlier cached the representatives in shared memory for faster access. • Comparing CLEVER with and without caching the representatives on the earthquake data set (the cached representatives occupy 2 MB), the result shows that caching yields very little runtime improvement (0.09%). A sketch of the caching scheme follows below. Eick et al., ParCo11, Ghent
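A hedged CUDA sketch of this caching scheme: each thread block first stages the representatives into shared memory, then each thread assigns one object to its nearest representative. The kernel name, data layout, and launch configuration are illustrative assumptions, not the authors' code, and they assume the representatives fit in the per-block shared memory.

#include <cfloat>

__global__ void assignMembersCached(const float* objects, int n,
                                    const float* reps, int k,
                                    int dim, int* label) {
    extern __shared__ float sReps[];                 // k * dim floats, sized at launch
    for (int t = threadIdx.x; t < k * dim; t += blockDim.x)
        sReps[t] = reps[t];                          // cooperative copy into shared memory
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per object
    if (i >= n) return;

    float best = FLT_MAX;
    int bestR = 0;
    for (int r = 0; r < k; ++r) {
        float dist = 0.0f;                           // squared Euclidean distance
        for (int d = 0; d < dim; ++d) {
            float diff = objects[i * dim + d] - sReps[r * dim + d];
            dist += diff * diff;
        }
        if (dist < best) { best = dist; bestR = r; }
    }
    label[i] = bestR;
}
// launch: assignMembersCached<<<numBlocks, threadsPerBlock, k * dim * sizeof(float)>>>(...);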

  25. The Difference Between the OpenMP and CUDA Implementations: Why? • The OpenMP version uses an object-oriented (OOP) design inherited from the original implementation, whereas the redesigned CUDA version follows a more procedural programming style. • The CUDA hardware's higher memory bandwidth contributes somewhat to the speedup. • Caching contributes little to the speedup (as analyzed on the previous slide). Eick et al., ParCo11, Ghent

  26. 5. Summary. • The CUDA and OpenMP results indicate good scalability of the parallel algorithm on multi-core processors: computations that used to take days can now be performed in minutes or hours. • OpenMP: easy to implement; good speedup; limited by the number of cores and the amount of RAM. • CUDA (GPU): extra attention is needed for CUDA programming; lower-level programming (registers, cache memory, a GPU memory hierarchy that differs from the CPU's); only some data structures are supported; synchronization across thread blocks is not possible; very large speedups, some of which are still under investigation. Eick et al., ParCo11, Ghent

  27. Future Work. • More work on the CUDA version. • Conduct more experiments that explain what works well, what does not, and why. • Analyze in more depth the impact of being able to search many more solutions on solution quality. • Implement a version of CLEVER which conducts multiple randomized hill climbing searches in parallel and employs dynamic load balancing, so that more resources are allocated to the "more promising" searches. • Reuse the code to speed up other data mining algorithms that use randomized hill climbing. Eick et al., ParCo11, Ghent
