Scalable Clustering on the Data Grid

Presentation Transcript


  1. Scalable Clustering on the Data Grid
  Patrick Wendel (pjw4@doc.ic.ac.uk), Moustafa Ghanem, Yike Guo
  Discovery Net, Department of Computing, Imperial College, London
  All Hands Meeting, Nottingham

  2. Outline
  • Discovery Net
  • Data Clustering
  • Mining Distributed Data
  • Description of the Strategy
  • Deployment
  • Evaluation
  • Conclusions and Future Work

  3. Discovery Net
  • Multidisciplinary project funded by the EPSRC under the UK e-Science programme (started October 2002, ended March 2005)
  • Developed an infrastructure of knowledge discovery services for integrating and analysing data collected from high-throughput devices and sensors
  • Applications:
    • Life sciences: high-throughput genomics and proteomics
    • Real-time environmental monitoring: high-throughput dispersed air-sensing technology
    • Geo-hazard modelling: earthquake modelling through satellite imagery
  • The project covered many areas, including infrastructure, applications and algorithms (e.g. text mining)
  • Produced the Discovery Net platform, which integrates, composes, coordinates and deploys knowledge discovery services using workflow technology

  4. e-Science and Discovery Net
  [Diagram: Discovery Net draws on operational data, literature, instrument data, databases and images, using distributed computing resources to turn scientific information into scientific discovery.]
  • e-Science: large-scale science that will increasingly be carried out through distributed global collaborations enabled by the Internet

  5. Data Clustering
  • We concentrate on a particular class of data mining algorithms: clustering
  • A class of exploratory data mining techniques used to find groups of points that are similar or close to each other
  • A popular analysis technique, useful for exploring, understanding and modelling large data sets
  • Two main types of clustering (both are illustrated in the sketch below):
    • Hierarchical: reorganises the data set into a hierarchy of clusters based on their similarity
    • Partition/model-based: tries to partition the data set into a number of clusters, or to fit a statistical model (e.g. a mixture of Gaussians) to the data set
  • Successfully applied to sociological data, image processing and genomic data
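To make the two families concrete, here is a minimal sketch using present-day Python libraries (SciPy's agglomerative linkage for the hierarchical case, scikit-learn's GaussianMixture for the model-based case). The libraries and the toy data are illustrative additions, not part of the original talk:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage   # hierarchical clustering
from sklearn.mixture import GaussianMixture              # model-based clustering

rng = np.random.default_rng(0)
# Two loose groups of 2-D points, centred near (0, 0) and (3, 3)
X = np.vstack([rng.normal(c, 0.4, size=(100, 2)) for c in (0.0, 3.0)])

# Hierarchical: build a cluster tree, then cut it into two groups
tree = linkage(X, method="ward")
hier_labels = fcluster(tree, t=2, criterion="maxclust")

# Model-based: fit a 2-component Gaussian mixture with EM
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
model_labels = gm.predict(X)
```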

  6. Mining Data on the Grid
  • Changing environment for data analysis:
    • From analysing data files held locally (or close to the algorithm), to using remote data sources and remote services through portals, and now towards distributed data executions
  • Distributed data sources:
    • Data mining processes can now require data spread across multiple organisations
  • Service-oriented approach:
    • High-level functionality is now available through well-defined services, instead of low-level (e.g. terminal) access to resources

  7. Goal
  • Design a service-oriented distributed data clustering strategy:
    • that can be deployed in a Grid environment (i.e. a standards-based, service-oriented, secure distributed environment)
    • that allows end-users/data analysts to deploy it easily against their own data sets

  8. Requirements (1/2)
  • Performance:
    • Analysing the data through data grids and analysis services must be more efficient than gathering all the data on the analyst's desktop!
  • Accuracy:
    • The strategy should at least provide a model more representative of the overall data set than the individual partial models
  • Security:
    • The deployed strategy should ensure consistent handling of authentication and authorisation throughout
  • Privacy:
    • Access to the data sources remains restricted

  9. Requirements (2/2)
  • Heterogeneity of the resources and/or connectivity:
    • The resources involved in the distributed analysis are unlikely to be similar, or to communicate over networks of similar bandwidth
  • Loose coupling between the resources participating in the distributed analysis:
    • The analyst has less control over what each data grid or analysis service provides; the framework should therefore be as unaffected as possible by minor differences between the functionality offered by each site
  • Service-oriented approach:
    • The analysis process should be deployed by coordinating high-level services, rather than by a dedicated distributed algorithm (e.g. an MPI implementation)

  10. Current Strategy
  • We restrict the current framework to the case where instances are distributed but carry the same attributes in each fragment (i.e. horizontal fragmentation)
  • Based on the EM clustering algorithm (a mixture-of-Gaussians model-fitting algorithm), sketched below
  • Hierarchical clustering is inherently complex to distribute
  • The statistical approach of EM provides a sound basis for defining a model combination strategy
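For reference, a minimal sketch of the EM algorithm the strategy builds on, restricted to spherical covariances to keep it short; the simplification and variable names are ours, not the paper's:

```python
import numpy as np

def em_gmm(X, k, n_iter=100, seed=0):
    """Minimal EM for a spherical-covariance Gaussian mixture.
    Illustrative only; the full algorithm uses general covariances."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, size=k, replace=False)]   # initialise means from data
    var = np.full(k, X.var())                      # per-cluster spherical variance
    pi = np.full(k, 1.0 / k)                       # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(cluster j | point i)
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(axis=-1)             # (n, k)
        log_p = np.log(pi) - 0.5 * (sq / var + d * np.log(2 * np.pi * var))
        log_p -= log_p.max(axis=1, keepdims=True)                       # stabilise
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(axis=-1)
        var = (r * sq).sum(axis=0) / (nk * d)
    return pi, mu, var
```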

  11. Approach
  • Generate a clustering model at each data source location (compute near the data)
  • Transfer the partial models in a standard format (PMML) to a combiner site
  • Normalise the relative weights of each model (see the sketch below)
  • Run an EM-based method on the partial models to generate a global model
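A sketch of the weight-normalisation step, assuming each partial model is held as a plain dict (a hypothetical in-memory stand-in for the PMML documents actually exchanged; covariances are omitted here for brevity, although the real strategy transfers them too):

```python
def normalise_partial_models(models):
    """Rescale each site's mixture weights by its share of the total data.

    `models` is a list of per-site dicts:
      {"n": points_at_site, "weights": [...], "means": [...]}
    """
    total = sum(m["n"] for m in models)
    pool = []
    for m in models:
        share = m["n"] / total            # this site's fraction of all instances
        pool += [{"weight": w * share, "mean": mu}
                 for w, mu in zip(m["weights"], m["means"])]
    return pool                           # one flat pool of weighted components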

  12. Combining Cluster Models
  • Derived from the EM clustering algorithm itself
  • Adapted to take as input the models generated at each site
  • Each partial model is treated as a (very) compressed representation of its fragment (similar to the two-step approach of some scalable clustering algorithms)
  • A more detailed algorithm and formulae appear in the proceedings; a simplified sketch follows
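The exact combination formulae are in the proceedings; the sketch below illustrates the idea with a common simplification, reducing each partial component to its mean and normalised weight and running a weighted EM over that pool (the real combiner also uses the partial covariances):

```python
import numpy as np

def combine_models(components, k, n_iter=50, seed=0):
    """Weighted EM over partial-model components rather than raw records.
    Simplified illustration, not the paper's exact formulation."""
    rng = np.random.default_rng(seed)
    P = np.array([c["mean"] for c in components])     # (m, d) component means
    w = np.array([c["weight"] for c in components])   # (m,) normalised weights
    m, d = P.shape
    mu = P[rng.choice(m, size=k, replace=False)]
    var = np.full(k, P.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step, as in em_gmm, but each "point" is a partial component
        sq = ((P[:, None, :] - mu[None]) ** 2).sum(axis=-1)
        log_p = np.log(pi) - 0.5 * (sq / var + d * np.log(2 * np.pi * var))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibilities are scaled by each component's mass
        rw = r * w[:, None]
        nk = rw.sum(axis=0)
        pi = nk / w.sum()
        mu = (rw.T @ P) / nk[:, None]
        sq = ((P[:, None, :] - mu[None]) ** 2).sum(axis=-1)
        var = (rw * sq).sum(axis=0) / (nk * d)
    return pi, mu, var
```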

  13. Deployment: Discovery Net
  • The Discovery Net platform is used to build and deploy this framework
  • The implementation is based on an open architecture, reusing common protocols and infrastructure elements (such as the Globus Toolkit)
  • It also defines its own workflow protocol, the Discovery Process Markup Language (DPML), which allows data analysis workflows to be defined and executed on distributed resources
  • The platform comprises a server that stores and schedules the workflows and manages the data, and a thick client that supports workflow construction
  • This gives end users the ability to define application-specific workflows performing tasks such as distributed data mining
  • The model combiner is implemented as a workflow activity in Discovery Net

  14. Deployment
  [Diagram: each data source (Source A, Source B, Source C) runs a partial clustering on its own Discovery Net server; the partial models travel as PMML documents to the combiner site, which produces the global model.]
  A toy orchestration of this topology is sketched below.
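Putting the pieces together, a toy end-to-end run reusing em_gmm, normalise_partial_models and combine_models from the sketches above; the thread pool and in-memory fragments stand in for the remote Discovery Net services and PMML exchange:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def cluster_at_site(fragment, k):
    """Stand-in for a call to a remote clustering service: fit a local model
    and return it in the dict form normalise_partial_models expects."""
    pi, mu, _ = em_gmm(fragment, k)
    return {"n": len(fragment), "weights": pi, "means": mu}

def run_distributed_clustering(fragments, k):
    # Partial clusterings run in parallel, "near the data"
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda f: cluster_at_site(f, k), fragments))
    # Only the small partial models travel to the combiner site
    return combine_models(normalise_partial_models(partials), k)

# Toy run: three "sites", each holding a 2-D fragment of 500 points
rng = np.random.default_rng(1)
fragments = [rng.normal(c, 0.3, size=(500, 2)) for c in (0.0, 2.0, 4.0)]
pi, mu, var = run_distributed_clustering(fragments, k=3)
```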

  15. Deployment: Workflow
  The Discovery Net client enables the composition and execution of the distributed process as a visually constructed workflow. The execution engine coordinates the distributed execution.

  16. Accuracy Evaluation: Data Distribution
  • We compare the accuracy of the combined model with the average accuracy of the partial models against the entire data set (i.e. have we gained accuracy by considering the fragments together?)
  • Accuracy strongly depends on how the data is distributed among sites. In the evaluation we introduce a randomness ratio that determines how similar the data distributions are among fragments:
    • 0 means each site holds data drawn from different distributions
    • 1 means the data in all fragments are drawn from the same distribution
  • Accuracy is measured by the log-likelihood of the test data set (see the formula below):
    • The likelihood of a data set represents how likely that data is to follow the distribution function defined by the model
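The formula itself did not survive transcription; for a K-component Gaussian mixture with weights π_k it takes the standard form:

```latex
\log \mathcal{L}(X \mid \Theta)
  = \sum_{i=1}^{N} \log \sum_{k=1}^{K} \pi_k \,
    \mathcal{N}\!\left(x_i \mid \mu_k, \Sigma_k\right)
```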

  17. Accuracy Evaluation: Data Distribution
  As expected, the randomness ratio has a large effect on the accuracy gained. At low ratios, each fragment becomes less and less representative of the complete data set, so the combined model outperforms the partial ones.

  18. Accuracy Evaluation: Number of Fragments
  (r = 0.2, 10,000 points, 5 clusters.) The accuracy degrades as the number of fragments increases, but so does the average accuracy of the models generated from individual fragments.

  19. Accuracy Evaluation: Increasing Data Size
  (r = 0.2, d = 5, 5 fragments.) The combined model's accuracy advantage over the partial models is consistent as the data size grows.

  20. Performance Evaluation
  • Performance evaluation is only partially relevant, as combined models are not fed back and partial models are generated near the data
  • The heterogeneity of real deployments is difficult to take into account
  • [Chart: execution time in seconds for an increasing number of fragments]

  21. Performance Evaluation
  • [Chart: execution time with lower dimensionality and larger data sets]

  22. Conclusions
  • Encouraging results in terms of accuracy versus performance, given the constraints
  • But is the trade-off between accuracy and flexibility (generally present in distributed data mining) acceptable?
  • The strategy should be part of a wider exploratory process, probably as a first step towards understanding the data set
  • As part of the Discovery Net platform, the distributed analysis process can be designed simply from the Discovery Net client software

  23. Future Work
  • A first step towards more generic distributed data mining strategies (classification algorithms, association rules)
  • Needs evaluation against real data sets!
  • Possible improvements include:
    • Refinement through feedback
    • Using a more complex intermediate summary structure for the partial models (e.g. tree structures containing summary information)
    • Estimating the number of clusters, using the Bayesian Information Criterion (see the sketch below)
  • Plenty of other clustering algorithms to try
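As an illustration of BIC-based estimation of the cluster count, a short sketch using scikit-learn (an illustrative addition, not the project's implementation):

```python
from sklearn.mixture import GaussianMixture

def pick_k_by_bic(X, k_max=10):
    """Fit mixtures with 1..k_max components and keep the one minimising
    BIC = p * ln(N) - 2 * ln(L_hat), where p is the number of free
    parameters and L_hat the maximised likelihood (lower BIC is better)."""
    fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(1, k_max + 1)]
    return min(fits, key=lambda g: g.bic(X)).n_components
```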

  24. Questions?