
Presentation Transcript


  1. 580.691 Learning Theory Reza Shadmehr Bayesian learning 1: Bayes rule, priors and maximum a posteriori

  2. Frequentist vs. Bayesian Statistics. Frequentist Thinking: there is a true parameter $w$, and we form an estimate $\hat{w}$ of it from the data; there are many different ways in which we can come up with estimates (e.g. the maximum likelihood estimate), and we can evaluate them. Bayesian Thinking: does not have the concept of a true parameter. Rather, at every given time we have knowledge about $w$ (the prior), gain new data, and then update our knowledge using Bayes rule (the posterior): $p(w \mid D) = \frac{p(D \mid w)\,p(w)}{p(D)}$, i.e. posterior distr. $\propto$ conditional distr. $\times$ prior distr. Given Bayes rule, there is only ONE correct way of learning.

  3. Binomial distribution and discrete random variables. Suppose a random variable can take only one of two values (e.g., 0 and 1, success and failure, etc.). Such trials are termed Bernoulli trials. Probability distribution of a single trial: $p(x \mid q) = q^x (1-q)^{1-x}$ for $x \in \{0, 1\}$. Probability of a specific sequence of successes and failures, with $h$ successes in $n$ trials: $q^h (1-q)^{n-h}$.
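
As a concrete illustration of these formulas (not part of the original slides), here is a minimal Python sketch that evaluates the Bernoulli probability of a single trial and of a specific sequence, assuming $q$ is known:

```python
# Minimal sketch (not from the slides): Bernoulli trials with success probability q.
q = 0.6  # assumed probability of success (heads), for illustration only

def bernoulli_pmf(x, q):
    """Probability of a single Bernoulli outcome x in {0, 1}: q^x * (1-q)^(1-x)."""
    return q**x * (1 - q)**(1 - x)

def sequence_prob(seq, q):
    """Probability of a specific 0/1 sequence: q^h * (1-q)^(n-h)."""
    h = sum(seq)
    n = len(seq)
    return q**h * (1 - q)**(n - h)

print(bernoulli_pmf(1, q))             # 0.6
print(sequence_prob([1, 0, 1, 1], q))  # 0.6^3 * 0.4^1 = 0.0864
```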

  4. Poor performance of ML estimators with small data samples • Suppose we have a coin and wish to estimate the outcome (head or tail) from observing a series of coin tosses; let $q$ = probability of tossing a head. • After observing $n$ coin tosses, we note that $h$ of them came up heads. The probability of observing a particular sequence $D$ of heads and tails is $p(D \mid q) = q^h (1-q)^{n-h}$. • To estimate whether the next toss will be head or tail, we form an ML estimator: $\hat{q}_{ML} = \arg\max_q p(D \mid q) = h/n$. • After one toss, if it comes up tails, our ML estimate predicts zero probability of seeing heads. If the first $n$ tosses are all tails, the ML estimate continues to predict zero probability of seeing heads.
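
To make this pathology concrete, a small sketch (my own illustration, with made-up tosses) of the ML estimate $\hat{q}_{ML} = h/n$ after a short run of tails:

```python
# Sketch: ML estimate of the head probability, q_ML = h / n.
def q_ml(tosses):
    """tosses: list of 0 (tail) / 1 (head). Returns h/n, the ML estimate of q."""
    return sum(tosses) / len(tosses)

print(q_ml([0]))            # 0.0  -> after one tail, ML says heads are impossible
print(q_ml([0, 0, 0, 0]))   # 0.0  -> still zero after n tails
print(q_ml([0, 0, 1, 0]))   # 0.25
```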

  5. Including prior knowledge into the estimation process • Even though the ML estimator might say $\hat{q}_{ML} = 0$, we “know” that the coin can come up both heads and tails, i.e. $0 < q < 1$. • The starting point for our consideration is that $q$ is not just a number; we will give $q$ a full probability distribution function. • Suppose we know that the coin is either fair ($q = 0.5$) with probability $p$, or biased in favor of tails ($q = 0.4$) with probability $1-p$. • We want to combine this prior knowledge with new data $D$ (i.e. the number of heads in $n$ throws) to arrive at a posterior distribution for $q$. We apply Bayes rule: $p(q \mid D) = \frac{p(D \mid q)\,p(q)}{p(D)}$ (posterior distr. = conditional distr. $\times$ prior distr. / marginal). The numerator is just the joint distribution of $q$ and $D$, evaluated at a particular $D$. The denominator is the marginal distribution of the data $D$; it is just a number that makes the numerator integrate to one.

  6. Bayesian estimation for a potentially biased coin • Suppose that we believe that the coin is either fair, $p(q = 0.5) = p$, or biased toward tails, $p(q = 0.4) = 1 - p$, where $q$ = probability of tossing a head. After observing $n$ coin tosses, $h$ of which are heads, we can accurately calculate the probability that we have a fair coin given the data $D$: $p(q = 0.5 \mid D) = \frac{0.5^h\,0.5^{n-h}\,p}{0.5^h\,0.5^{n-h}\,p + 0.4^h\,0.6^{n-h}\,(1-p)}$. In contrast to the ML estimate, which only gave us one number $\hat{q}_{ML}$, we have here a full probability distribution, i.e. we also know how certain we are that we have a fair or unfair coin. In some situations we would like a single number that represents our best guess of $q$. One possibility for this best guess is the maximum a posteriori (MAP) estimate.
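
A sketch of this calculation in Python, assuming an illustrative prior of 0.75 on the fair coin and made-up tosses (none of these numbers come from the slides):

```python
# Sketch: posterior probability that the coin is fair (q = 0.5) rather than biased (q = 0.4),
# using Bayes rule with a two-point prior.
def posterior_fair(h, n, p_fair=0.75):
    """P(q = 0.5 | h heads in n tosses), with prior P(q=0.5)=p_fair, P(q=0.4)=1-p_fair."""
    lik_fair = 0.5**h * 0.5**(n - h)
    lik_bias = 0.4**h * 0.6**(n - h)
    num = lik_fair * p_fair
    return num / (num + lik_bias * (1 - p_fair))

print(posterior_fair(h=3, n=10))   # ~0.62: fewer heads than expected, belief in fairness drops
print(posterior_fair(h=12, n=20))  # ~0.91: about half heads, belief in fairness rises
```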

  7. MAP estimator: maximum a posteriori estimate. We define the MAP estimate as the maximum (i.e. mode) of the posterior distribution: $\hat{q}_{MAP} = \arg\max_q p(q \mid D) = \arg\max_q p(D \mid q)\,p(q)$. The latter version makes the comparison to the maximum likelihood estimate $\hat{q}_{ML} = \arg\max_q p(D \mid q)$ easy: we see that ML and MAP are identical if $p(q)$ is a constant that does not depend on $q$, i.e. if our prior is a uniform distribution over the domain of $q$. We call such a prior, for obvious reasons, a flat or uninformative prior.
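
A small numerical sketch (my own, with assumed data) showing that maximizing $p(D \mid q)\,p(q)$ over a grid reduces to the ML answer when the prior is flat:

```python
# Sketch: MAP vs. ML by brute-force search over a grid of q values.
def argmax_q(score, grid):
    return max(grid, key=score)

grid = [i / 1000 for i in range(1, 1000)]        # candidate values of q in (0, 1)
h, n = 2, 10                                     # assumed data: 2 heads in 10 tosses

likelihood = lambda q: q**h * (1 - q)**(n - h)   # p(D | q)
flat_prior = lambda q: 1.0                       # uniform prior over (0, 1)
tilted_prior = lambda q: q * (1 - q)             # prior that favors q near 0.5

print(argmax_q(likelihood, grid))                                  # ~0.2  (ML)
print(argmax_q(lambda q: likelihood(q) * flat_prior(q), grid))     # ~0.2  (MAP = ML, flat prior)
print(argmax_q(lambda q: likelihood(q) * tilted_prior(q), grid))   # ~0.25 (prior pulls toward 0.5)
```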

  8. Formulating a continuous prior for the coin toss problem • In the last example the probability of tossing a head, represented by $q$, could only be either 0.5 or 0.4. How should we choose a prior distribution if $q$ can be anywhere between 0 and 1? • Suppose we observe $n$ tosses. The probability that exactly $h$ of those tosses are heads is given by the binomial distribution: $p(h \mid q, n) = \binom{n}{h} q^h (1-q)^{n-h}$, where $q$ = probability of tossing a head. [Figure: binomial distribution of the number of heads $h$.]
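
For reference, a short sketch evaluating the binomial probability itself (the particular values are my own illustration):

```python
# Sketch: binomial probability of exactly h heads in n tosses, p(h | q, n) = C(n, h) q^h (1-q)^(n-h).
from math import comb

def binom_pmf(h, n, q):
    return comb(n, h) * q**h * (1 - q)**(n - h)

print(binom_pmf(10, 20, 0.5))                          # ~0.176, the most probable count for a fair coin
print(sum(binom_pmf(h, 20, 0.5) for h in range(21)))   # ~1.0, the pmf sums to one
```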

  9. Formulating a continuous prior for the coin toss problem • $q$ represents the probability of a head. We want a continuous distribution that is defined between 0 and 1 and is 0 at 0 and 1. The beta distribution has this form: $p(q) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, q^{a-1} (1-q)^{b-1}$, where $\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}$ is the normalizing constant. [Figure: beta densities over $q \in [0, 1]$ for different settings of $a$ and $b$.]
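
A minimal sketch of the beta density with the normalizing constant written out via the gamma function (the parameter values are assumptions, chosen for illustration):

```python
# Sketch: beta density p(q) = Gamma(a+b) / (Gamma(a) Gamma(b)) * q^(a-1) * (1-q)^(b-1).
from math import gamma

def beta_pdf(q, a, b):
    norm = gamma(a + b) / (gamma(a) * gamma(b))  # normalizing constant
    return norm * q**(a - 1) * (1 - q)**(b - 1)

print(beta_pdf(0.5, 2, 2))    # 1.5: peaked at q = 0.5, zero at the endpoints
print(beta_pdf(0.0, 2, 2))    # 0.0
print(beta_pdf(0.25, 2, 5))   # a prior tilted toward small q
```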

  10. Formulating a continuous prior for the coin toss problem • In general, let’s assume our prior knowledge comes in the form of a beta distribution: $p(q) \propto q^{a-1}(1-q)^{b-1}$. When we apply Bayes rule to integrate this old knowledge (a beta distribution with parameters $a$ and $b$) with some new knowledge $h$ and $n$ (coming from a binomial distribution), we find that the posterior also has the form of a beta distribution, with parameters $a+h$ and $b+n-h$: $p(q \mid D) \propto q^{h}(1-q)^{n-h} \cdot q^{a-1}(1-q)^{b-1} = q^{a+h-1}(1-q)^{b+n-h-1}$. The beta and binomial distributions are therefore called conjugate distributions.
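
A sketch of the conjugate update, checked numerically against the unnormalized posterior on a grid (the prior parameters and the data below are assumptions for illustration):

```python
# Sketch: beta prior + binomial data -> beta posterior with parameters (a + h, b + n - h).
from math import gamma

def beta_pdf(q, a, b):
    return gamma(a + b) / (gamma(a) * gamma(b)) * q**(a - 1) * (1 - q)**(b - 1)

a, b = 2.0, 2.0       # assumed prior: Beta(2, 2), a mild belief that the coin is fair
h, n = 7, 10          # assumed data: 7 heads in 10 tosses

# Conjugate result: the posterior is Beta(a + h, b + n - h).
post = lambda q: beta_pdf(q, a + h, b + n - h)

# Numerical check: normalize likelihood * prior on a grid and compare at one point.
grid = [i / 1000 for i in range(1, 1000)]
unnorm = [q**h * (1 - q)**(n - h) * beta_pdf(q, a, b) for q in grid]
z = sum(unnorm) * 0.001               # crude integral of the numerator
print(post(0.6), unnorm[599] / z)     # the two numbers should agree closely
```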

  11. MAP estimator for the coin toss problem. Let us look at the MAP estimator if we start with a prior with $a = 1$, $n = 2$, i.e. prior knowledge equivalent to having already seen one head in two tosses, expressing a slight belief that the coin is fair. Our posterior after observing $h$ heads in $m$ new tosses is then $p(q \mid D) \propto q^{1+h}(1-q)^{1+m-h}$. Let’s calculate the MAP estimate so that we can compare it to the ML estimate: $\hat{q}_{MAP} = \frac{1+h}{2+m}$. Note that after one toss, if we get a tail, our estimated probability of tossing a head is 0.33, not zero as in the ML case. [Figure: density over $q \in [0, 1]$.]
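
Under the reading above (a prior equivalent to one pseudo-head in two pseudo-tosses, i.e. a Beta(2, 2)-shaped density; this interpretation of the slide's parameters is mine), a quick sketch of the MAP estimate after a single tail:

```python
# Sketch: MAP estimate with a prior worth one pseudo-head in two pseudo-tosses,
# compared with the ML estimate, after observing a single tail.
def q_map(h, m, prior_heads=1, prior_tosses=2):
    """Mode of the posterior: (prior_heads + h) / (prior_tosses + m)."""
    return (prior_heads + h) / (prior_tosses + m)

def q_ml(h, m):
    return h / m

print(q_ml(h=0, m=1))    # 0.0   -> ML: heads are impossible after one tail
print(q_map(h=0, m=1))   # 0.333 -> MAP: a tail lowers, but does not zero out, P(head)
print(q_map(h=4, m=10))  # 0.417 -> with more data the prior matters less
```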

  12. Classification with a continuous conditional distribution. Assume you only know the height of a person, but not their gender. Can height tell you something about gender? Let $y$ = height and $x$ = gender (0 = male, 1 = female). What we have: the class-conditional densities $p(y \mid x)$. What we want: the probability $p(x \mid y)$. Height is normally distributed in the population of men and in the population of women, with different means and similar variances. Letting $x$ be an indicator variable for being female, the conditional distribution of $y$ (the height) becomes: $p(y \mid x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y - \mu_x)^2}{2\sigma^2}\right)$, i.e. $p(y \mid x=1) = \mathcal{N}(\mu_1, \sigma^2)$ and $p(y \mid x=0) = \mathcal{N}(\mu_0, \sigma^2)$.
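
A sketch of the class-conditional densities with illustrative parameters (the means and the shared standard deviation below are assumptions, not values from the lecture):

```python
# Sketch: Gaussian class-conditional densities of height y given gender x.
from math import exp, pi, sqrt

MU = {0: 178.0, 1: 165.0}   # assumed mean heights (cm) for males (0) and females (1)
SIGMA = 7.0                 # assumed shared standard deviation (cm)

def p_y_given_x(y, x):
    """Normal density N(mu_x, sigma^2) evaluated at height y."""
    return exp(-(y - MU[x])**2 / (2 * SIGMA**2)) / (sqrt(2 * pi) * SIGMA)

print(p_y_given_x(170.0, 0))  # density of a 170 cm height under the male model
print(p_y_given_x(170.0, 1))  # density of the same height under the female model
```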

  13. Classification with a continuous conditional distribution. Let us further assume that we start with a prior distribution such that $x$ is 1 with probability $p$. Applying Bayes rule, $p(x=1 \mid y) = \frac{p(y \mid x=1)\,p}{p(y \mid x=1)\,p + p(y \mid x=0)\,(1-p)} = \frac{1}{1 + e^{-(w_1 y + w_0)}}$, with $w_1 = \frac{\mu_1 - \mu_0}{\sigma^2}$ and $w_0 = \frac{\mu_0^2 - \mu_1^2}{2\sigma^2} + \log\frac{p}{1-p}$: the posterior is a logistic function of a linear function of the data and parameters (remember this result for the section on classification!). A maximum-likelihood argument would just have decided under which model the data were more likely. The posterior distribution gives us the full probability that the subject is male or female, and we can also include prior knowledge in our scheme.
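
To see that the posterior really is logistic in $y$, here is a sketch (using the same assumed parameters as the previous block, plus an assumed prior $p$) comparing the direct Bayes-rule computation with the closed-form sigmoid:

```python
# Sketch: posterior P(x = 1 | y) computed two ways -- direct Bayes rule, and the
# logistic (sigmoid) of a linear function of y. The two should match.
from math import exp, log, pi, sqrt

mu0, mu1, sigma, p = 178.0, 165.0, 7.0, 0.5   # assumed means, shared sd, prior P(female)

def normal_pdf(y, mu, sd):
    return exp(-(y - mu)**2 / (2 * sd**2)) / (sqrt(2 * pi) * sd)

def posterior_bayes(y):
    num = normal_pdf(y, mu1, sigma) * p
    return num / (num + normal_pdf(y, mu0, sigma) * (1 - p))

def posterior_logistic(y):
    w1 = (mu1 - mu0) / sigma**2
    w0 = (mu0**2 - mu1**2) / (2 * sigma**2) + log(p / (1 - p))
    return 1 / (1 + exp(-(w1 * y + w0)))

for y in (155.0, 170.0, 185.0):
    print(y, posterior_bayes(y), posterior_logistic(y))  # identical up to rounding
```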

  14. Classification with a continuous conditional distribution. Computing the probability that the subject is female, given that we observed height $y$: combining our prior probability $p(x=1) = p$ with the class-conditional densities yields the posterior probability $p(x=1 \mid y)$. [Figure: posterior probability of being female as a function of height, roughly 120 to 220 cm.]

  15. Summary • Bayesian estimation involves the application of Bayes rule to combine a prior density and a conditional density to arrive at a posterior density. • Maximum a posteriori (MAP) estimation: if we need a “best guess” from our posterior distribution, the maximum (mode) of the posterior distribution is often used. • The MAP and ML estimates are identical when our prior is uniformly distributed on $q$, i.e. flat or uninformative. • For a two-way classification problem with data that is Gaussian given the category membership, the posterior is a logistic function, linear in the data.
