
Naïve Bayes Classifier


Presentation Transcript


  1. Naïve Bayes Classifier. Adapted from slides by Ke Chen (University of Manchester) and YangQiu Song (MSRA)

  2. Generative vs. Discriminative Classifiers • Training classifiers involves estimating f: X → Y, or P(Y|X) • Discriminative classifiers • Assume some functional form for P(Y|X) • Estimate parameters of P(Y|X) directly from training data • Generative classifiers (called ‘informative’ by Rubinstein & Hastie) • Assume some functional form for P(X|Y), P(Y) • Estimate parameters of P(X|Y), P(Y) directly from training data • Use Bayes rule to calculate P(Y|X = xi)

  3. Bayes Formula

  4. Generative Model • (figure: a generative model in which the class generates the observed features Color, Size, Texture, Weight, …)

  5. Discriminative Model • (figure: a discriminative model, e.g. Logistic Regression, mapping the features Color, Size, Texture, Weight, … directly to the class label)

  6. Comparison • Generative models • Assume some functional form for P(X|Y), P(Y) • Estimate parameters of P(X|Y), P(Y) directly from training data • Use Bayes rule to calculate P(Y|X= x) • Discriminative models • Directly assume some functional form for P(Y|X) • Estimate parameters of P(Y|X) directly from training data

  7. Probability Basics • Prior, conditional and joint probability for random variables • Prior probability: • Conditional probability: • Joint probability: • Relationship: • Independence: • Bayesian Rule
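The formulas behind these bullets, which were images on the original slide, reconstructed in standard notation:

$$
\begin{aligned}
&\text{Prior: } P(X_1),\ P(X_2) \qquad \text{Conditional: } P(X_1 \mid X_2),\ P(X_2 \mid X_1) \qquad \text{Joint: } P(X_1, X_2)\\
&\text{Relationship: } P(X_1, X_2) = P(X_2 \mid X_1)\,P(X_1) = P(X_1 \mid X_2)\,P(X_2)\\
&\text{Independence: } P(X_2 \mid X_1) = P(X_2),\quad P(X_1 \mid X_2) = P(X_1),\quad P(X_1, X_2) = P(X_1)\,P(X_2)\\
&\text{Bayes rule: } P(C \mid X) = \frac{P(X \mid C)\,P(C)}{P(X)}
\end{aligned}
$$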

  8. Probability Basics • Quiz: We have two six-sided dice. When they are rolled, consider the following events: (A) die 1 lands on side “3”, (B) die 2 lands on side “1”, and (C) the two dice sum to eight. Answer the following questions (a worked sketch is given below):
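The original answer slide is not included, so here is a worked sketch for the three events as defined, assuming fair, independent dice:

$$
\begin{aligned}
P(A) &= \tfrac{1}{6}, \qquad P(B) = \tfrac{1}{6}, \qquad
P(C) = P(\{(2,6),(3,5),(4,4),(5,3),(6,2)\}) = \tfrac{5}{36},\\
P(A, B) &= P(A)\,P(B) = \tfrac{1}{36} \quad (\text{A and B are independent}),\\
P(C \mid A) &= P(\text{die 2 lands on } 5) = \tfrac{1}{6} \neq P(C) \quad (\text{C is not independent of A}),\\
P(A \mid C) &= \frac{P(A, C)}{P(C)} = \frac{1/36}{5/36} = \tfrac{1}{5}.
\end{aligned}
$$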

  9. Probabilistic Classification • Establishing a probabilistic model for classification • Discriminative model (figure: a Discriminative Probabilistic Classifier takes an input x and outputs the posterior probability of each class)

  10. Probabilistic Classification • Establishing a probabilistic model for classification (cont.) • Generative model (figure: one Generative Probabilistic Model per class, for Class 1, Class 2, …, Class L)

  11. Probabilistic Classification • MAP classification rule • MAP: Maximum A Posteriori • Assign x to c* if its posterior probability is the largest (see the formulas below) • Generative classification with the MAP rule • Apply Bayes rule to convert P(x|c) and P(c) into posterior probabilities • Then apply the MAP rule
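The missing formulas, reconstructed: with classes $c_1, \dots, c_L$, the MAP rule assigns $x$ to $c^*$ if

$$
P(C = c^* \mid X = x) > P(C = c \mid X = x), \qquad c \neq c^*,
$$

and for generative classification the posteriors are obtained via Bayes rule,

$$
P(C = c_i \mid X = x) = \frac{P(X = x \mid C = c_i)\,P(C = c_i)}{P(X = x)} \;\propto\; P(X = x \mid C = c_i)\,P(C = c_i).
$$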

  12. Naïve Bayes • Bayes classification • Difficulty: learning the joint probability P(x1, …, xn|c) • Naïve Bayes classification • Assumption that all input attributes are conditionally independent given the class! • MAP classification rule for x = (x1, …, xn), written out below
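Reconstructed formulas for this slide: the conditional-independence factorization and the resulting MAP rule. For $x = (x_1, \dots, x_n)$,

$$
P(x_1, \dots, x_n \mid c) = \prod_{j=1}^{n} P(x_j \mid c),
$$

$$
\text{assign } c^* \text{ if } \big[P(x_1 \mid c^*)\cdots P(x_n \mid c^*)\big]\,P(c^*) \;>\; \big[P(x_1 \mid c)\cdots P(x_n \mid c)\big]\,P(c), \qquad c \neq c^*.
$$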

  13. Naïve Bayes • Naïve Bayes Algorithm (for discrete input attributes) • Learning Phase: Given a training set S, estimate P(C = ci) for each class and P(Xj = ajk|C = ci) for every attribute value • Output: conditional probability tables, one per attribute and class • Test Phase: Given an unknown instance X' = (a1', …, an'), • Look up the tables and assign to X' the label c* whose (naïve) posterior score is the largest (a code sketch follows below)
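A minimal Python sketch of the discrete algorithm described above; the function and variable names are illustrative, not from the original slides:

```python
from collections import Counter, defaultdict

def train_nb(instances, labels):
    """Learning phase: estimate P(c) and P(x_j = a | c) from a training set."""
    n = len(labels)
    priors = {c: cnt / n for c, cnt in Counter(labels).items()}
    class_counts = Counter(labels)
    # counts[(j, a, c)] = number of class-c examples whose attribute j has value a
    counts = defaultdict(int)
    for x, c in zip(instances, labels):
        for j, a in enumerate(x):
            counts[(j, a, c)] += 1
    cond = {k: v / class_counts[k[2]] for k, v in counts.items()}
    return priors, cond

def classify_nb(x_new, priors, cond):
    """Test phase: pick the class with the largest (unnormalized) posterior."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for j, a in enumerate(x_new):
            score *= cond.get((j, a, c), 0.0)  # zero-count problem; see slide 18
        scores[c] = score
    return max(scores, key=scores.get)
```

With the counts from the Play-Tennis tables, classify_nb reproduces the look-up-and-multiply computation shown on the next slides.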

  14. Example • Example: Play Tennis

  15. Example • Learning Phase • Priors: P(Play=Yes) = 9/14, P(Play=No) = 5/14 • (tables: conditional probabilities P(Outlook|Play), P(Temperature|Play), P(Humidity|Play), P(Wind|Play) estimated from the training counts)

  16. Example • Test Phase • Given a new instance x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong) • Look up tables: P(Outlook=Sunny|Play=Yes) = 2/9, P(Temperature=Cool|Play=Yes) = 3/9, P(Humidity=High|Play=Yes) = 3/9, P(Wind=Strong|Play=Yes) = 3/9, P(Play=Yes) = 9/14; P(Outlook=Sunny|Play=No) = 3/5, P(Temperature=Cool|Play=No) = 1/5, P(Humidity=High|Play=No) = 4/5, P(Wind=Strong|Play=No) = 3/5, P(Play=No) = 5/14 • MAP rule: P(Yes|x') ∝ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)] P(Play=Yes) ≈ 0.0053 and P(No|x') ∝ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)] P(Play=No) ≈ 0.0206 • Since P(Yes|x') < P(No|x'), we label x' as “No” (the arithmetic is worked out below)
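Writing out the two products (the scores are unnormalized posteriors, i.e. joint probabilities, which is all the MAP rule needs):

$$
\begin{aligned}
P(\text{Yes} \mid x') &\propto \tfrac{2}{9}\cdot\tfrac{3}{9}\cdot\tfrac{3}{9}\cdot\tfrac{3}{9}\cdot\tfrac{9}{14} \approx 0.0053,\\
P(\text{No} \mid x') &\propto \tfrac{3}{5}\cdot\tfrac{1}{5}\cdot\tfrac{4}{5}\cdot\tfrac{3}{5}\cdot\tfrac{5}{14} \approx 0.0206.
\end{aligned}
$$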


  18. Relevant Issues • Violation of Independence Assumption • For many real-world tasks, P(X1, …, Xn|C) ≠ P(X1|C) ⋯ P(Xn|C) • Nevertheless, naïve Bayes works surprisingly well anyway! • Zero conditional probability Problem • If no training example contains the attribute value ajk for class ci, the estimate P(Xj = ajk|C = ci) is 0 • In this circumstance, the whole product becomes 0 during test, regardless of the other probabilities • For a remedy, conditional probabilities are estimated with a smoothed estimate, given below
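The formula on the original slide is missing; the standard remedy in this setting is the m-estimate (Laplace smoothing is the special case with a uniform prior), reconstructed here as an assumption:

$$
\hat{P}(X_j = a_{jk} \mid C = c_i) \;=\; \frac{n_c + m\,p}{n + m},
$$

where $n$ is the number of training examples with $C = c_i$, $n_c$ the number of those with $X_j = a_{jk}$, $p$ a prior estimate (e.g. uniform, $p = 1/t$ for $t$ possible values), and $m$ the equivalent sample size, i.e. the weight given to the prior.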

  19. Relevant Issues • Continuous-valued Input Attributes • An attribute may take on an unbounded range of values • Conditional probability is modeled with the normal distribution • Learning Phase: estimate the mean and standard deviation of each attribute for each class • Output: normal distributions (one per attribute per class) and the class priors P(C = ci) • Test Phase: • Calculate conditional probabilities with all the normal distributions (formula below) • Apply the MAP rule to make a decision
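The normal-distribution model behind these bullets, reconstructed: for each attribute $X_j$ and class $c_i$, estimate a mean $\mu_{ji}$ and standard deviation $\sigma_{ji}$ from the class-$c_i$ training examples and set

$$
\hat{P}(X_j = x_j \mid C = c_i) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma_{ji}} \exp\!\left(-\frac{(x_j - \mu_{ji})^2}{2\sigma_{ji}^2}\right),
$$

so the learning phase outputs $n \times L$ normal distributions plus the priors $P(C = c_i)$.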

  20. Conclusions • Naïve Bayes is based on the independence assumption • Training is very easy and fast; it only requires considering each attribute in each class separately • Testing is straightforward; just look up tables or calculate conditional probabilities with normal distributions • A popular generative model • Performance is competitive with most state-of-the-art classifiers even when the independence assumption is violated • Many successful applications, e.g., spam mail filtering • A good candidate as a base learner in ensemble learning • Apart from classification, naïve Bayes can do more…

  21. Extra Slides

  22. Naïve Bayes (1) • Revisit the posterior P(y|x1, …, xn) • Which is equal to P(x1, …, xn|y) P(y) / P(x1, …, xn) by Bayes rule • Naïve Bayes assumes conditional independence of the features given the class • Then the inference of the posterior is as written out below
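Reconstructed equations for this slide:

$$
P(y \mid x_1, \dots, x_n) = \frac{P(x_1, \dots, x_n \mid y)\,P(y)}{P(x_1, \dots, x_n)},
\qquad
P(x_1, \dots, x_n \mid y) = \prod_{i} P(x_i \mid y) \ \ \text{(conditional independence)},
$$

$$
P(y \mid x_1, \dots, x_n) \;\propto\; P(y) \prod_{i} P(x_i \mid y).
$$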

  23. Naïve Bayes (2) • Training: Observation is multinomial; Supervised, with label information • Maximum Likelihood Estimation (MLE) • Maximum a Posteriori (MAP): put a Dirichlet prior on the parameters • Classification (estimators sketched below)
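A reconstruction of the estimators this slide refers to, for a discrete (multinomial) attribute $X_j$ with value $v$ and class $c$; the exact notation on the original slide may differ:

$$
\hat{P}_{\text{MLE}}(X_j = v \mid Y = c) = \frac{\#\{X_j = v,\, Y = c\}}{\#\{Y = c\}},
\qquad
\hat{P}_{\text{MAP}}(X_j = v \mid Y = c) = \frac{\#\{X_j = v,\, Y = c\} + \alpha_v - 1}{\#\{Y = c\} + \sum_{v'}(\alpha_{v'} - 1)},
$$

where the Dirichlet($\alpha$) prior acts as pseudo-counts; classification then uses $\hat{y} = \arg\max_c \hat{P}(c)\prod_j \hat{P}(x_j \mid c)$.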

  24. Naïve Bayes (3) • What if we have continuous Xi? • Generative training: fit a class-conditional distribution (e.g. a Gaussian) to each Xi • Prediction: apply Bayes rule with the fitted distributions (see the sketch below)
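A minimal Gaussian naïve Bayes sketch for continuous features, following the same generative-training / Bayes-rule-prediction scheme; the names are illustrative:

```python
import numpy as np

def train_gaussian_nb(X, y):
    """Generative training: per-class prior plus per-class, per-feature mean/std."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior P(y = c)
                     Xc.mean(axis=0),         # per-feature means
                     Xc.std(axis=0) + 1e-9)   # per-feature stds (small floor)
    return params

def predict_gaussian_nb(x, params):
    """Prediction: argmax_c  log P(c) + sum_j log N(x_j; mu_jc, sigma_jc)."""
    def log_score(prior, mu, sigma):
        return np.log(prior) + np.sum(
            -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))
    return max(params, key=lambda c: log_score(*params[c]))
```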

  25. Naïve Bayes (4) • Problems • Features may overlap • Features may not be independent • e.g., the size and weight of a tiger • Uses a joint distribution estimate (P(X|Y), P(Y)) to solve a conditional problem (P(Y|X = x)) • Can we discriminatively train? • Logistic regression • Regularization • Gradient ascent
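Since the slide closes with logistic regression, regularization, and gradient ascent, here is the standard gradient-ascent update for L2-regularized logistic regression (a reconstruction, not text from the slide): with $\sigma(z) = 1/(1 + e^{-z})$ and learning rate $\eta$,

$$
w \;\leftarrow\; w + \eta \left( \sum_{i} \big(y_i - \sigma(w^{\top} x_i)\big)\, x_i \;-\; \lambda w \right),
$$

which maximizes the conditional log-likelihood $\sum_i \log P(y_i \mid x_i; w)$ minus an L2 penalty, i.e. it trains P(Y|X) directly instead of estimating P(X|Y) and P(Y).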
