
Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models



  1. NIPS’2009. Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models. Jing Gao¹, Feng Liang², Wei Fan³, Yizhou Sun¹, Jiawei Han¹. ¹CS, UIUC; ²STAT, UIUC; ³IBM T.J. Watson

  2. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  3. Ensemble [diagram: Data → model 1, model 2, ……, model k → Ensemble model] Combine multiple models into one! Applications: classification, clustering, collaborative filtering, anomaly detection……

  4. Stories of Success • Million-dollar prize: improve the baseline movie recommendation approach of Netflix by 10% in accuracy; the top submissions all combine several teams and algorithms as an ensemble • Data mining competitions: for classification problems, winning teams employ an ensemble of classifiers

  5. Why Ensemble Works? (1) • Intuition • combining diverse, independent opinions in human decision-making as a protective mechanism (e.g. a stock portfolio) • Uncorrelated error reduction • Suppose we have 5 completely independent classifiers for majority voting, each 70% accurate • The majority vote is then correct with probability 10(0.7^3)(0.3^2) + 5(0.7^4)(0.3) + (0.7^5) ≈ 83.7% • With 101 such classifiers, majority-vote accuracy reaches 99.9% (verified in the sketch below) from T. Holloway, Introduction to Ensemble Learning, 2007.
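These numbers follow from summing binomial probabilities; a minimal Python check (not from the talk) reproduces both figures:

```python
from math import comb

def majority_vote_accuracy(n, p):
    """P(majority of n independent classifiers is correct), n odd,
    each classifier correct independently with probability p."""
    need = n // 2 + 1  # smallest winning number of correct votes
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))

print(majority_vote_accuracy(5, 0.7))    # ~0.837
print(majority_vote_accuracy(101, 0.7))  # ~0.9999
```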

  6. Why Ensemble Works? (2) [figure: models 1–6 each capture a different part of some unknown distribution] Ensemble gives the global picture! from W. Fan, Random Decision Tree.

  7. Why Ensemble Works? (3) • Overcome limitations of a single hypothesis • The target function may not be implementable with individual classifiers, but may be approximated by model averaging [figure: a single decision tree's boundary vs. the model-averaged boundary] from I. Davidson et al., When Efficient Model Averaging Out-Performs Boosting and Bagging, ECML 06.

  8. Research Focus • Base models • Improve diversity! • Combination scheme • Consensus (unsupervised) • Learn to combine (supervised) • Tasks • Classification (supervised ensemble) • Clustering (unsupervised ensemble)

  9. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  10. Bagging • Bootstrap • Sampling with replacement • Each sample contains around 63.2% of the original records • Ensemble • Train a classifier on each bootstrap sample • Use majority voting to determine the class label of the ensemble classifier • Discussions • Diversity comes from the bootstrap samples • Base classifiers that are sensitive to training-data perturbations, such as decision trees, work better (see the sketch below)
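As a concrete illustration, here is a minimal bagging sketch assuming NumPy and scikit-learn are available; it is illustrative, not the talk's code:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, n_trees=25, seed=0):
    """Train one tree per bootstrap sample, then majority-vote."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)       # sampling with replacement:
        tree = DecisionTreeClassifier()        # each sample covers ~63.2%
        tree.fit(X_train[idx], y_train[idx])   # of the original records
        votes.append(tree.predict(X_test))
    votes = np.asarray(votes)                  # shape (n_trees, n_test)
    # majority vote per test record (assumes non-negative integer labels)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```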

  11. Boosting • Principles • Boost a set of weak learners into a strong learner • Make records that are currently misclassified more important • AdaBoost (sketched below) • Initially, set uniform weights on all the records • At each round • Create a bootstrap sample based on the weights • Train a classifier on the sample and apply it to the original training set • Records that are wrongly classified have their weights increased • Records that are classified correctly have their weights decreased • If the error rate is higher than 50%, start over • The final prediction is a weighted vote of all the classifiers, with each weight reflecting that classifier's training accuracy
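A sketch of the resampling variant described above. It assumes labels in {-1, +1} and the standard AdaBoost.M1 weight rule, which the slide does not spell out:

```python
import numpy as np

def adaboost(X, y, fit_base, T=10, seed=0):
    """y in {-1, +1}; fit_base(X, y) returns a model with .predict()."""
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)                   # uniform initial weights
    models, alphas = [], []
    for _ in range(T):
        idx = rng.choice(n, size=n, p=w)      # bootstrap sample by weight
        model = fit_base(X[idx], y[idx])
        pred = model.predict(X)               # apply to original training set
        err = w[pred != y].sum()
        if err >= 0.5:                        # worse than chance: start over
            w = np.full(n, 1.0 / n)
            continue
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)        # raise weights of mistakes,
        w /= w.sum()                          # lower weights of hits
        models.append(model)
        alphas.append(alpha)
    return models, alphas   # final vote: sign(sum of alpha * model(x))
```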

  12. [figure: classifications (colors) and weights (size) after 1, 3, and 20 iterations of AdaBoost] from J. Elder, From Trees to Forests and Rule Sets: A Unified Overview of Ensemble Methods, 2007.

  13. Random Forest • Algorithm • For each tree • Build its training set by sampling N times with replacement from the original training set • At each node, randomly choose m < M features and calculate the best split • Trees are fully grown and not pruned • Use majority voting among all the trees • Discussions • Bagging + random features: improve diversity (see the sketch below)
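In scikit-learn terms (an assumed library, not the authors' code), the ingredients above map directly onto RandomForestClassifier:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(
    n_estimators=100,     # one bootstrap sample and one tree each
    max_features="sqrt",  # m < M random features tried at every split
    random_state=0,
).fit(X, y)               # trees are fully grown, not pruned, by default
print(rf.score(X, y))     # predictions come from majority voting
```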

  14. Random Decision Tree [figure: a tree over features B1: {0,1}, B2: {0,1}, B3: continuous; at each node the split feature is chosen randomly, with a random threshold (e.g. 0.3, 0.6) whenever the continuous feature B3 is picked] from W. Fan, Random Decision Tree.

  15. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  16. Clustering Ensemble • Goal • Combine “weak” clusterings into a better one from A. Topchy et al., Clustering Ensembles: Models of Consensus and Weak Partitions. PAMI, 2005

  17. Methods • Base Models • Bootstrap samples, different subsets of features • Different clustering algorithms • Random number of clusters • Combination • Find the correspondence between the labels in the partitions and fuse the clusters with the same labels • Treat each output as a categorical variable and cluster in the new feature space

  18. Meta Clustering (1) • Cluster-based • Regard each cluster from a base model as a record • Similarity is defined as the percentage of shared common examples • Conduct meta-clustering and assign each record to the associated meta-cluster • Instance-based • Compute the similarity between two records as the percentage of models that put them into the same cluster (see the sketch below) [figure: clusters c1–c10 linked to objects v1–v6] from A. Gionis et al., Clustering Aggregation. TKDD, 2007
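The instance-based similarity has a compact matrix form; here is a hedged NumPy sketch (not from the paper) of the resulting co-association matrix:

```python
import numpy as np

def co_association(labelings):
    """Fraction of base clusterings that put each pair of records
    into the same cluster. labelings: (k, n) integer array."""
    labelings = np.asarray(labelings)
    k = len(labelings)
    sim = np.zeros((labelings.shape[1],) * 2)
    for labels in labelings:
        sim += labels[:, None] == labels[None, :]
    return sim / k

# two clusterings of four records; entry (i, j) is the shared fraction
print(co_association([[0, 0, 1, 1], [0, 1, 1, 1]]))
```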

  19. Meta Clustering (2) • Probability-based • Assume the output comes from a mixture of models • Use the EM algorithm to learn the model • Spectral clustering • Formulate the problem as a bipartite graph • Use spectral clustering to partition the graph (a sketch follows) [figure: bipartite graph between clusters c1–c10 and objects v1–v6] from A. Gionis et al., Clustering Aggregation. TKDD, 2007
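One common route, in the spirit of HBGF (the scikit-learn call is an assumption, not the paper's implementation), is to one-hot encode every base cluster and spectrally partition the records by the induced affinity:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def bipartite_consensus(labelings, n_clusters):
    """labelings: (k, n) integer array of base clusterings."""
    blocks = [labels[:, None] == np.unique(labels)[None, :]
              for labels in np.asarray(labelings)]
    B = np.hstack(blocks).astype(float)   # records x all base clusters
    affinity = B @ B.T                    # co-cluster counts between records
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(affinity)

print(bipartite_consensus([[0, 0, 1, 1], [0, 1, 1, 1]], 2))
```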

  20. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  21. Multiple Source Classification [figure: three application scenarios] • Movie recommendation (Like? Dislike?): movie genres, cast, director, plots……; users' viewing history, movie ratings…… • Research area prediction: publication and co-authorship network, published papers…… • Image categorization: images, descriptions, notes, comments, albums, tags……

  22. Model Combination helps! Supervised or unsupervised? • Supervised: some areas share similar keywords • Unsupervised: people may publish in relevant but different areas; there may be cross-discipline co-operations

  23. Problem

  24. Motivations • Consensus maximization • Combine the outputs of multiple supervised and unsupervised models on a set of objects • The predicted labels should agree with the base models as much as possible • Motivations • Unsupervised models provide useful constraints for classification tasks • Model diversity improves prediction accuracy and robustness • Combination at the output level is needed when raw data cannot be shared (privacy) or model formats are incompatible

  25. Related Work (1) • Single models • Supervised: SVM, Logistic regression, …… • Unsupervised: K-means, spectral clustering, …… • Semi-supervised learning, transductive learning • Supervised ensemble • Require raw data and labels: bagging, boosting, Bayesian model averaging • Require labels: mixture of experts, stacked generalization • Majority voting works at output level and does not require labels

  26. Related Work (2) • Unsupervised ensemble • Find a consensus clustering from multiple partitionings without accessing the features • Multi-view learning • A joint model is learnt from both labeled and unlabeled data from multiple sources • It can be regarded as a semi-supervised ensemble requiring access to the raw data

  27. Related Work (3)

  28. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  29. A Toy Example [figure: seven objects x1–x7, assigned to classes/clusters 1–3 by four different base models]

  30. Groups-Objects [figure: the same toy example with each base model's output cluster or class treated as a group g1–g12 over the objects x1–x7]

  31. Bipartite Graph [figure: bipartite graph between object nodes and group nodes from models M1–M4; the edges form the adjacency matrix; object i and group j each carry a conditional probability vector (e.g. [1 0 0], [0 1 0], [0 0 1]), and groups start from an initial probability]

  32. Objective [figure: the same bipartite graph] Minimize disagreement (written out below): • An object and a group should have similar conditional probability vectors if they are connected • Group probabilities should not deviate much from their initial probabilities
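Written out, the objective the slide describes takes the following form. This is a hedged reconstruction; the symbols u_i (object probability vector), q_j (group probability vector), a_ij (adjacency), y_j (initial group probability) are named here for exposition and should be checked against the paper:

```latex
\min_{U,Q}\; \sum_{i=1}^{n}\sum_{j=1}^{m} a_{ij}\,\lVert u_i - q_j \rVert^2
\;+\; \alpha \sum_{j=1}^{m} b_j\,\lVert q_j - y_j \rVert^2
```

The first term penalizes disagreement between connected objects and groups; the second keeps group probabilities near their initial values, with b_j = 1 only for groups produced by classifiers (clusters carry no initial labels) and α trading the two terms off.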

  33. Methodology [figure: the same bipartite graph] Iterate until convergence: • Update the probability of each group • Update the probability of each object (a code sketch follows)
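Under the objective above, each block-coordinate update has a closed form; the sketch below alternates them (assumed variable names, not the authors' code):

```python
import numpy as np

def bgcm(A, Y, b, alpha=2.0, iters=100):
    """A: (n, m) 0/1 object-group adjacency; Y: (m, c) initial group
    probabilities; b: (m,) 1 for groups from classifiers, else 0."""
    Q = Y.copy()
    for _ in range(iters):
        # each object averages the probabilities of its groups
        U = (A @ Q) / A.sum(axis=1, keepdims=True)
        # each group averages its objects, pulled toward its initial label
        Q = (A.T @ U + alpha * b[:, None] * Y) \
            / (A.sum(axis=0)[:, None] + alpha * b[:, None])
    return U, Q   # row-wise argmax of U gives the consensus labels
```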

  34. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  35. Constrained Embedding [figure: groups and objects embedded jointly; groups coming from classification models impose the constraints]

  36. Ranking on Consensus Structure [figure: the bipartite graph viewed as a ranking problem; the adjacency matrix defines the links, the initial group labels act as the query, with personalized damping factors]

  37. Incorporating Labeled Information [figure: the same bipartite graph, with the objective and the group/object probability updates extended to incorporate labeled objects]

  38. Outline • An overview of ensemble methods • Introduction • Supervised ensemble techniques • Unsupervised ensemble techniques • Consensus among supervised and unsupervised models • Problem and motivation • Methodology • Interpretations • Experiments

  39. Experiments-Data Sets • 20 Newsgroup • newsgroup message categorization • only text information available • Cora • research paper area categorization • paper abstracts and citation information available • DBLP • researchers' area prediction • publication and co-authorship network, and publication content • conferences' areas are known

  40. Experiments-Baseline Methods (1) • Single models • 20 Newsgroup: • logistic regression, SVM, K-means, min-cut • Cora • abstracts, citations (with or without a labeled set) • DBLP • publication titles, links (with or without labels from conferences) • Proposed method • BGCM • BGCM-L: semi-supervised version combining four models • 2-L: two models • 3-L: three models

  41. Experiments-Baseline Methods (2) • Ensemble approaches • Clustering ensemble on all of the four models: MCLA, HBGF

  42. Accuracy (1)

  43. Accuracy (2)

  44. Conclusions • Ensemble • Combining independent, diversified models improves accuracy • Information explosion, various learning packages available • Consensus Maximization • Combine the complementary predictive powers of multiple supervised and unsupervised models • Propagate labeled information between group and object nodes iteratively over a bipartite graph • Two interpretations: constrained embedding and ranking on consensus structure • Applications • Multiple source learning, Ranking, Truth Finding……

  45. Thanks! • Any questions? http://www.ews.uiuc.edu/~jinggao3/nips09bgcm.htm jinggao3@illinois.edu
