
The Uniform Prior and the Laplace Correction






Presentation Transcript


  1. The Uniform Prior and the Laplace Correction (supplemental material, not on exam)

  2. Bayesian Inference We start with • P(θ) - the prior distribution over the values of θ • P(x1, …, xn | θ) - the likelihood of the examples given a known value θ. Given examples x1, …, xn, we can compute the posterior distribution over θ: P(θ | x1, …, xn) = P(x1, …, xn | θ) P(θ) / P(x1, …, xn), where the marginal likelihood is P(x1, …, xn) = ∫ P(x1, …, xn | θ) P(θ) dθ
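This update can be illustrated numerically. A minimal Python sketch, assuming a coin-flip model and a discrete grid over θ (the grid size, the flip sequence, and all names here are my own illustrative choices, not from the slides):

```python
# Bayesian update on a discrete grid of theta values (illustrative sketch).
thetas = [(i + 0.5) / 100 for i in range(100)]   # grid over (0, 1)
prior = [1.0 / 100] * 100                        # uniform prior P(theta)

def likelihood(theta, flips):
    """P(x1, ..., xn | theta) for a sequence of coin flips ('H' or 'T')."""
    p = 1.0
    for x in flips:
        p *= theta if x == 'H' else (1.0 - theta)
    return p

flips = ['H', 'H', 'T', 'H']
unnorm = [likelihood(t, flips) * p for t, p in zip(thetas, prior)]
marginal = sum(unnorm)                 # P(x1, ..., xn), the normalizer
posterior = [u / marginal for u in unnorm]   # P(theta | x1, ..., xn)
```

Dividing by the marginal likelihood is exactly what makes the posterior sum (here) or integrate (in the continuous case) to 1.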

  3. Binomial Distribution: Laplace Est. • In this case the unknown parameter is θ = P(H) • Simplest prior: P(θ) = 1 for 0 < θ < 1 • Likelihood: P(x1, …, xn | θ) = θ^h (1 − θ)^(n−h), where h is the number of heads in the sequence • Marginal likelihood: P(x1, …, xn) = ∫₀¹ θ^h (1 − θ)^(n−h) dθ
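A quick numeric sanity check of this marginal likelihood can be done with a midpoint Riemann sum and compared against the closed form h!(n−h)!/(n+1)! that the derivation below arrives at. A sketch (the function name and the choice n = 10, h = 7 are mine):

```python
import math

def marginal_likelihood(n, h, steps=100000):
    """Midpoint-sum approximation of the integral of
    theta^h * (1 - theta)^(n - h) over (0, 1)."""
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) / steps
        total += theta**h * (1.0 - theta)**(n - h)
    return total / steps

n, h = 10, 7
approx = marginal_likelihood(n, h)
exact = math.factorial(h) * math.factorial(n - h) / math.factorial(n + 1)
```

The two values agree to well within the quadrature error of the midpoint sum.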

  4. Marginal Likelihood Using integration by parts we have: ∫₀¹ θ^h (1 − θ)^(n−h) dθ = ((n − h)/(h + 1)) ∫₀¹ θ^(h+1) (1 − θ)^(n−h−1) dθ. Multiplying both sides by C(n, h), and noting that C(n, h) (n − h)/(h + 1) = C(n, h+1), we have C(n, h) ∫₀¹ θ^h (1 − θ)^(n−h) dθ = C(n, h+1) ∫₀¹ θ^(h+1) (1 − θ)^(n−h−1) dθ

  5. Marginal Likelihood - Cont • The recursion terminates when h = n, where C(n, n) ∫₀¹ θ^n dθ = 1/(n + 1). Thus ∫₀¹ θ^h (1 − θ)^(n−h) dθ = 1/((n + 1) C(n, h)) = h! (n − h)! / (n + 1)!. We conclude that the posterior is P(θ | x1, …, xn) = (n + 1) C(n, h) θ^h (1 − θ)^(n−h)
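The termination argument can be checked numerically: writing I(h) for ∫₀¹ θ^h (1 − θ)^(n−h) dθ, the recursion says C(n, h)·I(h) is the same for every h, and at h = n it equals 1/(n + 1). A small Python sketch (the value n = 8 and the step count are arbitrary choices of mine):

```python
import math

def integral(n, h, steps=100000):
    """Midpoint-sum approximation of I(h) = integral over (0, 1) of
    theta^h * (1 - theta)^(n - h)."""
    s = sum(((i + 0.5) / steps)**h * (1 - (i + 0.5) / steps)**(n - h)
            for i in range(steps))
    return s / steps

n = 8
# C(n, h) * I(h) should equal 1 / (n + 1) for every h = 0, ..., n.
vals = [math.comb(n, h) * integral(n, h) for h in range(n + 1)]
```

All n + 1 products land on 1/(n + 1), confirming that I(h) = 1/((n + 1) C(n, h)).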

  6. Bayesian Prediction • How do we predict using the posterior? • We can think of this as computing the probability of the next element in the sequence: P(Xn+1 = H | x1, …, xn) • Assumption: if we know θ, the probability of Xn+1 is independent of x1, …, xn, i.e. P(Xn+1 = H | θ, x1, …, xn) = P(Xn+1 = H | θ) = θ, so P(Xn+1 = H | x1, …, xn) = ∫₀¹ θ P(θ | x1, …, xn) dθ

  7. Bayesian Prediction • Thus, we conclude that P(Xn+1 = H | x1, …, xn) = ∫₀¹ θ (n + 1) C(n, h) θ^h (1 − θ)^(n−h) dθ = (h + 1)/(n + 2). This is the Laplace correction: estimate P(H) by (h + 1)/(n + 2) rather than h/n
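The Laplace-corrected prediction (h + 1)/(n + 2) can be cross-checked against the posterior mean computed by numeric integration of θ times the posterior density. A hedged sketch (the function name and the choice n = 12, h = 9 are mine):

```python
import math

def predict_numeric(n, h, steps=100000):
    """Midpoint-sum approximation of the posterior mean
    of theta under (n + 1) * C(n, h) * theta^h * (1 - theta)^(n - h)."""
    norm = (n + 1) * math.comb(n, h)       # posterior normalizing constant
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) / steps
        total += theta * norm * theta**h * (1 - theta)**(n - h)
    return total / steps

n, h = 12, 9
laplace = (h + 1) / (n + 2)                # the Laplace correction
numeric = predict_numeric(n, h)
```

With 9 heads in 12 flips, both give 10/14 ≈ 0.714 rather than the raw frequency 9/12 = 0.75; note the correction never predicts probability 0 or 1, even when h = 0 or h = n.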
