
Ensembles


Presentation Transcript


  1. Ensembles

  2. Ensemble Methods
  • Construct a set of classifiers from the training data
  • Predict the class label of previously unseen records by aggregating the predictions made by multiple classifiers
  • Why might this be useful?
  • In an Olympic figure skating competition, why are there multiple judges? Why is this a good idea, and do the same reasons carry over to data mining with ensembles?

  3. General Idea

  4. Why does it work?
  • Suppose there are 25 base classifiers
  • Each classifier has error rate ε = 0.35
  • Assume the classifiers are independent
  • The majority-vote ensemble makes a wrong prediction only if more than half (13 or more) of the base classifiers err:
    P(wrong) = Σ (i = 13 to 25) C(25, i) · ε^i · (1 − ε)^(25−i) ≈ 0.06
  • Practice has shown that results are good even when independence does not hold
  • But you do want some diversity among the classifiers
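The calculation on this slide can be checked with a short sketch; `ensemble_error` is a helper name introduced here for illustration:

```python
from math import comb

def ensemble_error(n, eps):
    """Probability that a majority vote of n independent base
    classifiers, each with error rate eps, is wrong: the ensemble
    errs exactly when more than half of the classifiers err."""
    k = n // 2 + 1  # smallest number of errors that forms a majority
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(k, n + 1))

err = ensemble_error(25, 0.35)  # about 0.06, far below 0.35
```

Note that the formula only helps when ε < 0.5; if each base classifier is worse than random, majority voting makes the ensemble worse, not better.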

  5. Methods for Generating Multiple Classifiers
  • Manipulate the training data
  • Sample the data differently each time
  • Examples: Bagging and Boosting
  • Manipulate the input features
  • Sample the features differently each time
  • Makes especially good sense if there is redundancy among the features
  • Example: Random Forest
  • Manipulate the class labels
  • Convert a multi-class problem into an ensemble of binary classifiers
  • Manipulate the learning algorithm
  • Vary some parameter of the learning algorithm
  • For example: the amount of pruning, the ANN network topology, etc.
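The "manipulate the input features" idea from the list above can be sketched as drawing a random feature subset per base classifier (the random-subspace idea behind Random Forests); `feature_subsets` is a hypothetical helper, not from the slides:

```python
import random

def feature_subsets(n_features, n_classifiers, subset_size, seed=0):
    """For each base classifier, draw a random subset of feature
    indices; each classifier is then trained on only its subset."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_classifiers)]

# 5 base classifiers, each seeing 3 of 10 available features
subsets = feature_subsets(n_features=10, n_classifiers=5, subset_size=3)
```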

  6. Background
  • Classifier performance can be impacted by:
  • Bias: assumptions made to help with generalization
  • "Simpler is better" is a bias. But we don't know what the best bias is, so varying it may help
  • Variance: a learning method will give different results based on small changes (e.g., in the training data)
  • When I run experiments and use random sampling with repeated runs, I get different results each time
  • Noise: measurements may have errors, or the class may be inherently probabilistic

  7. How Ensembles Help
  • Ensemble methods can assist with both bias and variance
  • Averaging the results over multiple runs will reduce the variance
  • I observe this when I use 10 runs with random sampling and see that my learning curves are much smoother
  • Ensemble methods are especially helpful for unstable classifier algorithms
  • Decision trees are unstable, since small changes in the training data can greatly impact the structure of the learned decision tree
  • If you combine different classifier methods into an ensemble, then you are using methods with different biases
  • You are more likely to use a classifier with a bias that is a good match for the problem
  • You may even be able to identify the best methods and weight them more

  8. Examples of Ensemble Methods
  • How to generate an ensemble of classifiers?
  • Bagging
  • Boosting
  • These methods have been shown to be quite effective
  • You can also combine different classifiers built separately
  • By simple voting
  • By voting that factors in the reliability of each classifier
  • Enterprise Miner allows you to do this by combining diverse classifiers. I believe WEKA permits this also.

  9. Bagging
  • Sampling with replacement
  • Build a classifier on each bootstrap sample
  • Each record has probability 1 − (1 − 1/n)^n of being selected at least once (about 63% for large n)
  • Some records will be picked more than once
  • Combine the resulting classifiers, such as by majority voting
  • Greatly reduces the variance when compared to a single base classifier
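The sampling and voting steps above can be sketched in a few lines; this is a minimal illustration, not a full bagging implementation (no classifier is actually trained), and `bootstrap_sample` / `majority_vote` are names introduced here:

```python
import random
from collections import Counter

def bootstrap_sample(records, rng):
    """Draw len(records) items uniformly with replacement."""
    n = len(records)
    return [records[rng.randrange(n)] for _ in range(n)]

def majority_vote(predictions):
    """Combine base-classifier predictions by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
n = 1000
data = list(range(n))
sample = bootstrap_sample(data, rng)
# fraction of distinct records in the sample; ≈ 1 − (1 − 1/n)^n ≈ 0.632
frac_unique = len(set(sample)) / n
```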

  10. Boosting
  • An iterative procedure that adaptively changes the distribution of the training data by focusing more on previously misclassified records
  • Initially, all N records are assigned equal weights
  • Unlike bagging, the weights may change at the end of each boosting round

  11. Boosting
  • Records that are wrongly classified will have their weights increased
  • Records that are classified correctly will have their weights decreased
  • Example 4 is hard to classify
  • Its weight is increased, therefore it is more likely to be chosen again in subsequent rounds
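The reweighting step described above can be sketched with an AdaBoost-style update, assuming a weighted round error strictly between 0 and 0.5; `update_weights` is a name introduced here for illustration:

```python
import math

def update_weights(weights, correct, error):
    """AdaBoost-style reweighting: misclassified records are scaled
    up, correctly classified ones scaled down, then all weights are
    renormalized to sum to 1.  `correct` is a boolean per record;
    `error` is the current round's weighted error (0 < error < 0.5)."""
    alpha = 0.5 * math.log((1 - error) / error)
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]

# 4 records start with equal weights; record 4 is misclassified,
# so after the update it carries much more of the total weight
w = update_weights([0.25] * 4, [True, True, True, False], error=0.25)
```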

  12. Combining Multiple Models
  • You can also combine different types of classifiers
  • For example, a neural network and a decision tree model
  • The simplest approach is to vote, or to combine the probability estimates associated with the output of each classification decision
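Combining probability estimates, as described on this last slide, can be sketched by simple averaging; the models' outputs below are invented for illustration, and `average_probabilities` is a hypothetical helper:

```python
def average_probabilities(prob_dicts):
    """Combine per-class probability estimates from several models
    by simple averaging; predict the class with the highest average."""
    n = len(prob_dicts)
    classes = prob_dicts[0].keys()
    avg = {c: sum(p[c] for p in prob_dicts) / n for c in classes}
    return max(avg, key=avg.get), avg

# hypothetical outputs of a neural network and a decision tree
nn_probs = {"yes": 0.8, "no": 0.2}
tree_probs = {"yes": 0.4, "no": 0.6}
label, avg = average_probabilities([nn_probs, tree_probs])
```

A weighted average, with weights reflecting each classifier's estimated reliability, is the natural refinement mentioned on slide 8.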
