
Model Choice and Bayes Factors:


Presentation Transcript


  1. MCMC Stopping and Variance Estimation: The idea is to first run multiple chains from different initial conditions and use them to determine a burn-in period, chosen so that the variance within a single chain (i.e. a single trajectory of the Markov chain) is similar to the variance across chains. If we then want to do estimation based on the posterior, we need variance estimates, for example to test whether two MCMC estimates are statistically the same. One option is to partition the chain (after burn-in) into "batches" and use the batch means to estimate the variance. Alternatively, you can estimate the autocorrelation lag and either "thin" the chain (discard all but every k-th draw) or inflate the naive variance estimate to account for the autocorrelation.
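
A minimal Python sketch of both variance estimators (the function names, the number of batches, and the lag cutoff are illustrative choices, not from the slides):

```python
import numpy as np

def batch_means_variance(chain, n_batches=20):
    """Estimate Var of the MCMC mean via non-overlapping batch means.

    `chain` is a 1-D array of post-burn-in draws; n_batches = 20 is an
    illustrative choice, not a universal recommendation.
    """
    n = len(chain) // n_batches * n_batches   # trim so batches divide evenly
    batch_size = n // n_batches
    means = chain[:n].reshape(n_batches, batch_size).mean(axis=1)
    # Variance of the overall mean, estimated from the spread of batch means
    return means.var(ddof=1) / n_batches

def iact_variance(chain, max_lag=100):
    """Inflate the naive variance of the mean by the integrated
    autocorrelation time (crude cutoff at the first negative lag)."""
    x = chain - chain.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 1.0
    for r in acf[1:max_lag]:
        if r < 0:
            break
        tau += 2.0 * r
    return chain.var(ddof=1) * tau / n
```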

  2. Bayesian Estimation: Once we have the posterior (or a sample from it via MCMC), point estimates are simply measures of central tendency (mean, median, mode) calculated from the posterior. Interval estimates are built from posterior quantiles in the usual way: choose L and U so that

     P(theta < L | y) = alpha/2  and  P(theta > U | y) = alpha/2.

Then the interval [L, U] gives a 100(1 - alpha)% Bayesian confidence interval, or credible set (equal-tail case).
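
With a posterior sample from MCMC, the equal-tail credible set is just a pair of empirical quantiles. A minimal Python sketch (the synthetic draws stand in for real MCMC output):

```python
import numpy as np

def equal_tail_credible_interval(posterior_draws, alpha=0.05):
    """Equal-tail 100*(1 - alpha)% Bayesian credible interval.

    L and U are the alpha/2 and 1 - alpha/2 posterior quantiles,
    so P(theta < L | y) = P(theta > U | y) = alpha/2.
    """
    L, U = np.quantile(posterior_draws, [alpha / 2, 1 - alpha / 2])
    return L, U

# Illustrative "posterior sample": Normal(2, 0.5) draws
draws = np.random.default_rng(0).normal(loc=2.0, scale=0.5, size=10_000)
print(equal_tail_credible_interval(draws))   # roughly (1.02, 2.98)
```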

  3. Model Choice and Bayes Factors: The idea is to determine a procedure to choose between two models M1 and M2, corresponding to two choices of parameter vectors theta_1 and theta_2, and to obtain a posterior estimate of how much confidence you might place in one model over the other. The Bayes factor is a formal measure of how much the data y support one model over the other. We wish to find the posterior probabilities of the models P(Mi | y), but we summarize this by comparing the posterior odds to the prior odds of the models:

     B12 = [ P(M1 | y) / P(M2 | y) ] / [ P(M1) / P(M2) ],

which simplifies to the likelihood ratio when each model fully specifies its parameters (simple hypotheses with no prior spread).
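
A toy numeric illustration in Python (the prior probabilities and marginal likelihoods are made-up values, assumed computed elsewhere):

```python
# Two models with equal prior probability and given marginal likelihoods
prior = {"M1": 0.5, "M2": 0.5}
marginal_lik = {"M1": 0.012, "M2": 0.003}   # p(y | Mi), illustrative values

# Posterior model probabilities via Bayes' theorem
evidence = sum(prior[m] * marginal_lik[m] for m in prior)
posterior = {m: prior[m] * marginal_lik[m] / evidence for m in prior}

posterior_odds = posterior["M1"] / posterior["M2"]
prior_odds = prior["M1"] / prior["M2"]
bayes_factor = posterior_odds / prior_odds   # equals the marginal-likelihood ratio: 4.0
print(bayes_factor)
```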

  4. The marginal density of the data under model Mi is

     p(y | Mi) = ∫ p(y | theta_i, Mi) p(theta_i | Mi) d theta_i,

and after applying Bayes' theorem the Bayes factor is

     B12 = p(y | M1) / p(y | M2).

But if the models differ in how many parameters they have (for example, if one is nested inside the other but has extra parameters), then we need to penalize the extra complexity before saying one has more support, which is what the Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and Deviance Information Criterion (DIC) do.
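
As a sketch, the BIC difference gives a rough large-sample approximation to twice the log Bayes factor; the log-likelihoods, parameter counts, and sample size below are placeholder values:

```python
import numpy as np

def bic(log_lik_max, k, n):
    """BIC = -2 * max log-likelihood + k * log(n); lower is better."""
    return -2.0 * log_lik_max + k * np.log(n)

# Illustrative placeholders: M2 fits slightly better but uses 2 extra parameters
bic1 = bic(log_lik_max=-520.3, k=3, n=200)   # model M1, 3 parameters
bic2 = bic(log_lik_max=-518.9, k=5, n=200)   # model M2, 5 parameters

# Large-sample approximation: 2 * log(B12) ~ BIC2 - BIC1; positive favors M1
approx_2logBF_12 = bic2 - bic1
print(bic1, bic2, approx_2logBF_12)
```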

  5. Bayesian Hierarchical Models: The idea of hierarchical modeling is simply to break a complex system into pieces, construct a model for each piece, and then link the pieces as necessary (e.g. model a plant by building separate models for root, stem, and leaves, linked via carbon/water transport). Bayesian hierarchical models are similar, except that the links between components are handled by Bayesian methods (e.g. updating one component with data propagates, through iterative updating, across the other components). In one common formulation, the joint distribution of the full model factors into submodels for the data, the process, and the parameters:

     p(data, process, parameters) = p(data | process, parameters) p(process | parameters) p(parameters),

and Bayes' theorem is used to update the corresponding conditional distributions.
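
A minimal sketch of this three-level factorization for a normal-normal model (the specific likelihood and the priors on mu and tau are illustrative assumptions, not from the slides):

```python
import numpy as np
from scipy import stats

def joint_log_density(y, theta, mu, tau):
    """Joint log density of the hierarchical factorization
        p(y, theta, mu, tau) = p(y | theta) p(theta | mu, tau) p(mu, tau)
    for a normal-normal model; the priors below are illustrative choices.
    """
    log_data = stats.norm.logpdf(y, loc=theta, scale=1.0).sum()      # data | process
    log_process = stats.norm.logpdf(theta, loc=mu, scale=tau).sum()  # process | parameters
    log_params = (stats.norm.logpdf(mu, loc=0.0, scale=10.0)         # vague priors on
                  + stats.halfnorm.logpdf(tau, scale=5.0))           # the hyperparameters
    return log_data + log_process + log_params
```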
