
Bayesian Learning





  1. Bayesian Learning

  2. Bayesianism vs. Frequentism • Classical probability: frequentism • The probability of a particular event is defined relative to its frequency in a sample space of events. • E.g., the probability of “the coin will come up heads on the next trial” is defined relative to the frequency of heads in a sample space of coin tosses. • Bayesian probability: • Combine a measure of “prior” belief you have in a proposition with your subsequent observations of events. • Example: a Bayesian can assign a probability to the statement “There was life on Mars a billion years ago”, but a frequentist cannot.

  3. Bayesian Learning

  4. Bayesian Learning • Prior probability of h: • P(h): probability that hypothesis h is true given our prior knowledge. • If there is no prior knowledge, all h ∈ H are equally probable. • Posterior probability of h: • P(h|D): probability that hypothesis h is true, given the data D. • Likelihood of D: • P(D|h): probability that we will see data D, given that hypothesis h is true. The question is: given the data D and our prior beliefs, what is the probability that h is the correct hypothesis, i.e., P(h|D)? Bayes' rule says: P(h|D) = P(D|h)P(h) / P(D)

  5. The Monty Hall Problem You are a contestant on a game show. There are 3 doors, A, B, and C. There is a new car behind one of them and goats behind the other two. Monty Hall, the host, asks you to pick a door, any door. You pick door A. Monty tells you he will open a door, different from A, that has a goat behind it. He opens door B: behind it there is a goat. Monty now gives you a choice: stick with your original choice A or switch to C. Should you switch? http://math.ucsd.edu/~crypto/Monty/monty.html

  6. Bayesian probability formulation Hypothesis space H: h1= Car is behind door A h2= Car is behind door B h3= Car is behind door C Data D = Monty opened B What is P(h1 | D)? What is P(h2 | D)? What is P(h3 | D)?
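A short sketch (not part of the original slides) that works these posteriors out with Bayes' rule; the likelihoods encode the usual assumption that Monty never opens your door or the car door, and picks at random when two goat doors are available.

```python
# Sketch: Bayes' rule for the Monty Hall hypotheses.
# Likelihoods assume Monty never opens your door (A) or the car door,
# and chooses uniformly when two goat doors are available.

priors = {"h1": 1/3, "h2": 1/3, "h3": 1/3}   # car behind A, B, C
likelihood_D = {"h1": 1/2,                   # car behind A: Monty opens B or C at random
                "h2": 0.0,                   # car behind B: Monty would never open B
                "h3": 1.0}                   # car behind C: Monty must open B

evidence = sum(likelihood_D[h] * priors[h] for h in priors)            # P(D)
posterior = {h: likelihood_D[h] * priors[h] / evidence for h in priors}
print(posterior)   # {'h1': 0.333..., 'h2': 0.0, 'h3': 0.666...} -> switching to C doubles your chance
```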

  7. A Medical Example T. takes a test for leukemia. The test has two outcomes: positive and negative. It is known that if the patient has leukemia, the test is positive 98% of the time. If the patient does not have leukemia, the test is positive 3% of the time. It is also known that 0.008 of the population has leukemia. T.’s test is positive. Which is more likely: T. has leukemia or T. does not have leukemia?

  8. Hypothesis space: h1 = T. has leukemia, h2 = T. does not have leukemia • Prior: 0.008 of the population has leukemia. Thus P(h1) = 0.008, P(h2) = 0.992 • Likelihood: P(+ | h1) = 0.98, P(− | h1) = 0.02, P(+ | h2) = 0.03, P(− | h2) = 0.97 • Posterior knowledge: the blood test is + for this patient. • Maximum A Posteriori (MAP) hypothesis: hMAP = argmax_{h in H} P(h | D) = argmax_{h in H} P(D | h) P(h)

  9. hMAP = argmax_{h in H} P(D | h) P(h), because P(D) is a constant independent of h. Note: if every h ∈ H is equally probable, then hMAP = argmax_{h in H} P(D | h). This is called the “maximum likelihood hypothesis”, hML.

  10. Back to our example • We had: P(h1) = 0.008, P(h2) = 0.992, P(+ | h1) = 0.98, P(− | h1) = 0.02, P(+ | h2) = 0.03, P(− | h2) = 0.97 • Thus: P(+ | h1) P(h1) = 0.98 × 0.008 ≈ 0.0078 and P(+ | h2) P(h2) = 0.03 × 0.992 ≈ 0.0298, so hMAP = h2: T. most likely does not have leukemia.

  11. What is P(leukemia | +)? P(h1 | +) = 0.0078 / (0.0078 + 0.0298) ≈ 0.21 and P(h2 | +) = 0.0298 / (0.0078 + 0.0298) ≈ 0.79. These are called the “posterior” probabilities.
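A minimal sketch (plain Python, numbers taken from the slides) that reproduces the MAP comparison and the normalized posteriors:

```python
# Sketch: MAP hypothesis and posteriors for the leukemia example (numbers from the slides).
prior = {"leukemia": 0.008, "no_leukemia": 0.992}
likelihood_pos = {"leukemia": 0.98, "no_leukemia": 0.03}   # P(+ | h)

unnormalized = {h: likelihood_pos[h] * prior[h] for h in prior}
print(unnormalized)            # {'leukemia': 0.00784, 'no_leukemia': 0.02976}
h_map = max(unnormalized, key=unnormalized.get)
print(h_map)                   # 'no_leukemia' is the MAP hypothesis

p_pos = sum(unnormalized.values())                    # P(+)
posterior = {h: unnormalized[h] / p_pos for h in prior}
print(posterior)               # P(leukemia | +) ≈ 0.21, P(no_leukemia | +) ≈ 0.79
```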

  12. In-class exercise Suppose you receive an e-mail message with the subject “Hi”. You have been keeping statistics on your e-mail, and have found that while only 10% of the total e-mail messages you receive are spam, 50% of the spam messages have the subject “Hi” and 2% of the non-spam messages have the subject “Hi”. What is the probability that the message is spam? P(h|D) = P(D|h)P(h) / P(D)

  13. Independence and Conditional Independence • Two random variables, X and Y, are independent if P(X, Y) = P(X) P(Y). • Two random variables, X and Y, are conditionally independent given C if P(X, Y | C) = P(X | C) P(Y | C). • Examples?
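As a quick illustration (toy numbers, not from the slides), the sketch below checks the first definition numerically on a hand-built joint distribution:

```python
# Sketch with made-up numbers: checking P(X, Y) == P(X) * P(Y) on a toy joint distribution.
from itertools import product

# Joint distribution of two binary variables, chosen so that X and Y are independent.
joint = {(0, 0): 0.12, (0, 1): 0.28, (1, 0): 0.18, (1, 1): 0.42}

p_x = {x: sum(joint[x, y] for y in (0, 1)) for x in (0, 1)}   # marginal P(X)
p_y = {y: sum(joint[x, y] for x in (0, 1)) for y in (0, 1)}   # marginal P(Y)

independent = all(abs(joint[x, y] - p_x[x] * p_y[y]) < 1e-9 for x, y in product((0, 1), repeat=2))
print(independent)   # True: P(X=x, Y=y) factors as P(X=x) * P(Y=y) for every (x, y)
```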

  14. Naive Bayes Classifier Let x = <a1, a2, ..., an>. Let the classes be {c1, c2, ..., cv}. We want to find the MAP class, cMAP: cMAP = argmax_{cj} P(cj | a1, a2, ..., an)

  15. By Bayes' Theorem: cMAP = argmax_{cj} P(a1, a2, ..., an | cj) P(cj) / P(a1, a2, ..., an) = argmax_{cj} P(a1, a2, ..., an | cj) P(cj). P(cj) can be estimated from the training data. How? However, we can't use the training data to estimate P(a1, a2, ..., an | cj). Why not?

  16. Naive Bayes classifier: Assume P(a1, a2, ..., an | cj) = Π_i P(ai | cj). I.e., assume all the attributes are independent of one another given the class. (Is this a good assumption?) Given this assumption, here's how to classify an instance x = <a1, a2, ..., an>: cNB = argmax_{cj} P(cj) Π_i P(ai | cj). We can estimate the values of these various probabilities over the training set.
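A minimal sketch of such a classifier for discrete attributes (the names train_nb and classify_nb are just illustrative, and the counts are unsmoothed for now); it is reused below on the PlayTennis table:

```python
# Sketch: a minimal naive Bayes classifier for discrete attributes (unsmoothed counts).
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (attribute_tuple, class_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    value_counts = defaultdict(Counter)          # (attribute_index, class) -> Counter of values
    for attrs, label in examples:
        for i, value in enumerate(attrs):
            value_counts[i, label][value] += 1
    priors = {c: n / len(examples) for c, n in class_counts.items()}   # P(c)
    return priors, value_counts, class_counts

def classify_nb(model, attrs):
    priors, value_counts, class_counts = model
    scores = {}
    for c in priors:
        score = priors[c]
        for i, value in enumerate(attrs):
            score *= value_counts[i, c][value] / class_counts[c]       # P(a_i | c)
        scores[c] = score
    return max(scores, key=scores.get), scores
```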

  17. Example • Use training data from decision tree example to classify the new instance: <Outlook=sunny, Temperature=cool, Humidity=high, Wind=strong>

  18. Training data (PlayTennis):
      Day  Outlook   Temp  Humidity  Wind    PlayTennis
      D1   Sunny     Hot   High      Weak    No
      D2   Sunny     Hot   High      Strong  No
      D3   Overcast  Hot   High      Weak    Yes
      D4   Rain      Mild  High      Weak    Yes
      D5   Rain      Cool  Normal    Weak    Yes
      D6   Rain      Cool  Normal    Strong  No
      D7   Overcast  Cool  Normal    Strong  Yes
      D8   Sunny     Mild  High      Weak    No
      D9   Sunny     Cool  Normal    Weak    Yes
      D10  Rain      Mild  Normal    Weak    Yes
      D11  Sunny     Mild  Normal    Strong  Yes
      D12  Overcast  Mild  High      Strong  Yes
      D13  Overcast  Hot   Normal    Weak    Yes
      D14  Rain      Mild  High      Strong  No
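Assuming the train_nb / classify_nb sketch above, the table and the query instance from slide 17 might be encoded like this:

```python
# Sketch: applying the naive Bayes sketch above to the PlayTennis table.
tennis = [
    (("Sunny", "Hot", "High", "Weak"), "No"),
    (("Sunny", "Hot", "High", "Strong"), "No"),
    (("Overcast", "Hot", "High", "Weak"), "Yes"),
    (("Rain", "Mild", "High", "Weak"), "Yes"),
    (("Rain", "Cool", "Normal", "Weak"), "Yes"),
    (("Rain", "Cool", "Normal", "Strong"), "No"),
    (("Overcast", "Cool", "Normal", "Strong"), "Yes"),
    (("Sunny", "Mild", "High", "Weak"), "No"),
    (("Sunny", "Cool", "Normal", "Weak"), "Yes"),
    (("Rain", "Mild", "Normal", "Weak"), "Yes"),
    (("Sunny", "Mild", "Normal", "Strong"), "Yes"),
    (("Overcast", "Mild", "High", "Strong"), "Yes"),
    (("Overcast", "Hot", "Normal", "Weak"), "Yes"),
    (("Rain", "Mild", "High", "Strong"), "No"),
]

model = train_nb(tennis)
label, scores = classify_nb(model, ("Sunny", "Cool", "High", "Strong"))
print(label, scores)   # "No": the "No" score comes out larger than the "Yes" score here
```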

  19. In-class example
      Training set:
      x1  x2  x3  class
      0   1   0   +
      1   0   1   +
      0   0   1   −
      1   1   0   −
      1   0   0   −
      What class would be assigned by a NB classifier to 1 1 1?

  20. Estimating probabilities • Recap: in the previous example, we had a training set and a new example, <Outlook=sunny, Temperature=cool, Humidity=high, Wind=strong> • We asked: what classification is given by a naive Bayes classifier? • Let n(c) be the number of training instances with class c, and n(ai, c) be the number of training instances with attribute value ai and class c. Then P(ai | c) ≈ n(ai, c) / n(c), and P(c) ≈ n(c) / n, where n is the total number of training instances.

  21. In practice, use the training data to compute a probabilistic model, e.g., P(Yes) = 9/14, P(No) = 5/14, P(Outlook = Sunny | Yes) = 2/9, P(Outlook = Sunny | No) = 3/5, and so on for each attribute value and class.

  22. Problem with this method: if the count n(ai, c) is very small, it gives a poor estimate. • E.g., P(Outlook = Overcast | No) = 0.

  23. Now suppose we want to classify a new instance: <Outlook=overcast, Temperature=cool, Humidity=high, Wind=strong>. Then P(Overcast | No) = 0, so the product P(No) Π_i P(ai | No) = 0 regardless of the other attribute values. This incorrectly gives us zero probability due to the small sample.

  24. One solution: Laplace smoothing (also called “add-one” smoothing). For each class cj and each value ai of attribute a, add one “virtual” instance. That is, recalculate: P(ai | cj) = (n(ai, cj) + 1) / (n(cj) + k), where k is the number of possible values of attribute a.
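In code, the smoothed estimate is just the following (a sketch using the slide's notation; the Outlook numbers in the example come from the PlayTennis table):

```python
# Sketch: Laplace ("add-one") smoothed estimate of P(a_i | c_j), in the slide's notation.
def smoothed_estimate(n_ai_c, n_c, k):
    """n_ai_c = count of value a_i in class c, n_c = count of class c, k = number of values of attribute a."""
    return (n_ai_c + 1) / (n_c + k)

# Example from the slides: P(Outlook = Overcast | No), with 0 of the 5 "No" instances overcast
# and k = 3 possible Outlook values (Sunny, Overcast, Rain).
print(smoothed_estimate(0, 5, 3))   # 0.125 instead of 0
```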

  25. Training data with one “virtual” instance added for each Outlook value and class:
      Day  Outlook   Temp  Humidity  Wind    PlayTennis
      D1   Sunny     Hot   High      Weak    No
      D2   Sunny     Hot   High      Strong  No
      D3   Overcast  Hot   High      Weak    Yes
      D4   Rain      Mild  High      Weak    Yes
      D5   Rain      Cool  Normal    Weak    Yes
      D6   Rain      Cool  Normal    Strong  No
      D7   Overcast  Cool  Normal    Strong  Yes
      D8   Sunny     Mild  High      Weak    No
      D9   Sunny     Cool  Normal    Weak    Yes
      D10  Rain      Mild  Normal    Weak    Yes
      D11  Sunny     Mild  Normal    Strong  Yes
      D12  Overcast  Mild  High      Strong  Yes
      D13  Overcast  Hot   Normal    Weak    Yes
      D14  Rain      Mild  High      Strong  No
      −    Sunny     −     −         −       Yes
      −    Sunny     −     −         −       No
      −    Overcast  −     −         −       Yes
      −    Overcast  −     −         −       No
      −    Rain      −     −         −       Yes
      −    Rain      −     −         −       No

  26. Training data with one “virtual” instance added for each Humidity value and class:
      Day  Outlook   Temp  Humidity  Wind    PlayTennis
      D1   Sunny     Hot   High      Weak    No
      D2   Sunny     Hot   High      Strong  No
      D3   Overcast  Hot   High      Weak    Yes
      D4   Rain      Mild  High      Weak    Yes
      D5   Rain      Cool  Normal    Weak    Yes
      D6   Rain      Cool  Normal    Strong  No
      D7   Overcast  Cool  Normal    Strong  Yes
      D8   Sunny     Mild  High      Weak    No
      D9   Sunny     Cool  Normal    Weak    Yes
      D10  Rain      Mild  Normal    Weak    Yes
      D11  Sunny     Mild  Normal    Strong  Yes
      D12  Overcast  Mild  High      Strong  Yes
      D13  Overcast  Hot   Normal    Weak    Yes
      D14  Rain      Mild  High      Strong  No
      −    −         −     High      −       Yes
      −    −         −     High      −       No
      −    −         −     Normal    −       Yes
      −    −         −     Normal    −       No

  27. In-class exercise
      Training set:
      x1  x2  x3  class
      0   1   0   +
      1   0   1   +
      1   1   1   −
      1   1   0   −
      1   0   0   −
      Calculate smoothed probabilities for attribute x1:
      P(x1 = 0 | +), P(x1 = 0 | −), P(x1 = 1 | +), P(x1 = 1 | −)

  28. Naive Bayes on continuous-valued attributes • How to deal with continuous-valued attributes? Two possible solutions: • Discretize the attribute's values. • Assume a particular probability distribution of the attribute's values given each class, and estimate its parameters from the training data.

  29. Simplest discretization method For each attribute a, create k equal-size bins in the interval from amin to amax. Humidity: 25, 38, 50, 80, 93, 98, 98, 99

  30. Simplest discretization method For each attribute a, create k equal-size bins in the interval from amin to amax, and choose thresholds between the bins. Humidity: 25, 38, 50, 80, 93, 98, 98, 99. With thresholds at 40, 80, and 120, estimate: P(Humidity < 40 | yes), P(40 ≤ Humidity < 80 | yes), P(80 ≤ Humidity < 120 | yes), P(Humidity < 40 | no), P(40 ≤ Humidity < 80 | no), P(80 ≤ Humidity < 120 | no)
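A small sketch of this kind of binning (generic equal-width bins computed from amin and amax, so the bin edges below differ from the slide's hand-picked thresholds):

```python
# Sketch: equal-width discretization of one attribute into k bins (pure Python).
def equal_width_bins(values, k):
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # Thresholds between bins; the last bin is closed on the right.
    edges = [lo + i * width for i in range(1, k)]
    def bin_index(v):
        for i, edge in enumerate(edges):
            if v < edge:
                return i
        return k - 1
    return [bin_index(v) for v in values], edges

humidity = [25, 38, 50, 80, 93, 98, 98, 99]
bins, edges = equal_width_bins(humidity, 3)
print(edges)   # [49.66..., 74.33...] -> bins roughly {25, 38}, {50}, {80, 93, 98, 98, 99}
print(bins)    # bin index for each value, in order
```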

  31. Simplest discretization method For each attribute a, create k equal-size bins in the interval from amin to amax. Questions: What should k be? What if some bins have very few instances? Problem: there is a tradeoff between discretization bias and variance. The more bins, the lower the bias, but the higher the variance, due to the small sample size per bin.

  32. Alternative simple (but effective) discretization method (Yang & Webb, 2001) Let n = the number of training examples. For each attribute Ai, create approximately √n bins. Sort the values of Ai in ascending order, and put approximately √n of them in each bin. No add-one smoothing of the probabilities is needed. This gives a good balance between discretization bias and variance.

  33. Example: Humidity, sorted: 25, 38, 50, 80, 93, 98, 98, 99. Here n = 8 and √n ≈ 3, so create 3 bins of about 3 values each: {25, 38, 50}, {80, 93, 98}, {98, 99}.
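A sketch of this proportional binning idea, assuming roughly √n sorted values per bin as above:

```python
# Sketch: proportional discretization in the spirit of Yang & Webb (2001):
# sort the values and put roughly sqrt(n) of them in each of roughly sqrt(n) bins.
import math

def proportional_bins(values):
    ordered = sorted(values)
    n = len(ordered)
    per_bin = max(1, round(math.sqrt(n)))
    return [ordered[i:i + per_bin] for i in range(0, n, per_bin)]

humidity = [25, 38, 50, 80, 93, 98, 98, 99]
print(proportional_bins(humidity))
# n = 8, sqrt(8) ≈ 3 -> [[25, 38, 50], [80, 93, 98], [98, 99]]
```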

  36. Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier (P. Domingos and M. Pazzani) The naive Bayes classifier is called “naive” because it assumes the attributes are independent of one another.

  37. This paper asks: why does the naive (“simple”) Bayes classifier, SBC, do so well in domains with clearly dependent attributes?

  38. Experiments • Compare five classification methods on 30 data sets from the UCI ML repository: • SBC = Simple Bayesian Classifier • Default = “choose the class with the most representatives in the data” • C4.5 = Quinlan's decision tree induction system • PEBLS = an instance-based learning system • CN2 = a rule-induction system

  39. For SBC, numeric values were discretized into ten equal-length intervals.

  40. Results table (numeric values were shown in the slide image): row 1 gives the number of domains in which SBC was more accurate versus less accurate than each corresponding classifier; row 2 gives the same counts, keeping only differences significant at the 95% confidence level; row 3 gives the average rank over all domains (1 is best in each domain).

  41. Measuring Attribute Dependence They used a simple, pairwise mutual-information measure. For attributes Am and An, dependence is defined as D(Am, An | C) = H(Am | C) + H(An | C) − H(AmAn | C), where AmAn is a “derived attribute” whose values consist of the possible combinations of values of Am and An, and H(Aj | C) is the conditional entropy of Aj given the class. Note: if Am and An are independent given C, then D(Am, An | C) = 0.
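The measure is easy to estimate from counts; here is a sketch on toy data (the data is invented for illustration, not from the paper):

```python
# Sketch: D(Am, An | C) = H(Am | C) + H(An | C) - H(AmAn | C),
# estimated from a list of (a_m, a_n, class) triples. Toy data only.
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(X | C) from a list of (x, c) pairs: -sum P(x, c) * log2 P(x | c)."""
    n = len(pairs)
    class_counts = Counter(c for _, c in pairs)
    joint_counts = Counter(pairs)
    h = 0.0
    for (x, c), n_xc in joint_counts.items():
        p_xc = n_xc / n
        p_x_given_c = n_xc / class_counts[c]
        h -= p_xc * math.log2(p_x_given_c)
    return h

def dependence(triples):
    h_m = conditional_entropy([(am, c) for am, an, c in triples])
    h_n = conditional_entropy([(an, c) for am, an, c in triples])
    h_mn = conditional_entropy([((am, an), c) for am, an, c in triples])   # derived attribute AmAn
    return h_m + h_n - h_mn

# Completely dependent attributes (a_n copies a_m) give D > 0; conditionally independent ones give D ≈ 0.
data = [(0, 0, "+"), (1, 1, "+"), (0, 0, "-"), (1, 1, "-")]
print(dependence(data))   # 1.0 here, since the second attribute copies the first
```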

  42. Results: (1) SBC is more successful than the more complex methods, even when there is substantial dependence among attributes. (2) There is no correlation between the degree of attribute dependence and SBC's rank. But why?

  43. An Example • Let the classes be {+, −} and the attributes be A, B, and C. • Let P(+) = P(−) = 1/2. • Suppose A and C are completely independent, and A and B are completely dependent (i.e., B = A). • Optimal classification procedure: choose the class with the larger posterior P(class | A, B, C); since B = A adds no information, this depends only on A and C.

  44. This leads to the following condition: if P(A | +) P(C | +) > P(A | −) P(C | −), then class = +, else class = −. The SBC, by contrast, counts the duplicated attribute twice, giving the condition P(A | +)² P(C | +) > P(A | −)² P(C | −). • In the paper, the authors use Bayes' Theorem to rewrite these conditions, and plot the “decision boundaries” for the optimal classifier and for the SBC.
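A tiny sketch (toy likelihoods, equal priors, chosen only for illustration) of how the two rules can disagree when attribute A is duplicated:

```python
# Sketch: optimal vs. naive Bayes decisions in the B = A example (toy likelihoods, equal priors).
def optimal_class(p_a, p_c):
    """p_a, p_c: dicts class -> P(attribute value | class). B = A carries no extra information."""
    return "+" if p_a["+"] * p_c["+"] > p_a["-"] * p_c["-"] else "-"

def naive_class(p_a, p_c):
    """The simple Bayesian classifier multiplies in P(B | class) = P(A | class) a second time."""
    return "+" if p_a["+"] ** 2 * p_c["+"] > p_a["-"] ** 2 * p_c["-"] else "-"

# A region where double-counting A flips the decision:
p_a = {"+": 0.6, "-": 0.4}
p_c = {"+": 0.3, "-": 0.5}
print(optimal_class(p_a, p_c), naive_class(p_a, p_c))   # prints "- +": the two rules disagree here
```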
