
Variational Bayes 101


Presentation Transcript


  1. Variational Bayes 101 (Adv. Signal Proc. 2006)

  2. The Bayes scene • Exact averaging in discrete/small models (Bayes networks) • Approximate averaging: Monte Carlo methods, ensemble/mean field, variational Bayes methods (resources: variational-bayes.org, MLpedia, Wikipedia) • ISP Bayes activities: ICA (mean field), Kalman filtering, dynamical systems; NeuroImaging (optimal signal detection); approximate inference; machine learning methods

  3. Bayes’ methodology • The minimal error rate is obtained when the detector is based on the posterior probability (Bayes decision theory) • The likelihood may contain unknown parameters
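
  The decision rule the slide refers to was an equation image that is not in the transcript; in standard form, for hypothesis g and observation x, the Bayes-optimal detector is

      \hat{g}(x) \;=\; \arg\max_{g}\; p(g \mid x) \;=\; \arg\max_{g}\; p(x \mid g)\, p(g)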

  4. Bayes’ methodology (cont.) • The conventional approach is to plug in the most probable parameters • However, the averaged model is generalization optimal (Hansen, 1999), i.e.:
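
  The equation behind the "i.e." did not survive transcription; in standard notation the contrast is between the plug-in model and the Bayesian average (posterior predictive):

      p_{\text{plug-in}}(x \mid D) \;=\; p(x \mid \hat{\theta}), \qquad
      p_{\text{Bayes}}(x \mid D) \;=\; \int p(x \mid \theta)\, p(\theta \mid D)\, d\theta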

  5. The hidden agenda of learning • Typically learning proceeds by generalization from a limited set of samples… but • We would like to identify the model that generated the data • … so choose the least complex model compatible with the data (Occam’s razor; the slide’s cartoon caption reads “That I figured out in 1386”)

  6. Generalization! • Generalizability is defined as the expected performance on a random new sample; the mean performance of a model on a ”fresh” data set is an unbiased estimate of generalization • Typical loss functions: ⟨−log p(x)⟩, ⟨# prediction errors⟩, ⟨[g(x) − ĝ(x)]²⟩, ⟨log p(x,g)/(p(x)p(g))⟩, etc. • Results can be presented as ”bias-variance trade-off curves” or ”learning curves”
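
  A minimal numerical sketch of the fresh-sample estimate under the ⟨−log p(x)⟩ loss (the distribution, model, and sample sizes are illustrative, not from the slides):

      import numpy as np

      rng = np.random.default_rng(0)

      # "Teacher" distribution: N(1, 1); the model is fit on a small training set
      train = rng.normal(loc=1.0, scale=1.0, size=20)
      mu_hat = train.mean()
      var_hat = train.var(ddof=1)

      # The mean loss <-log p(x)> on a fresh sample is an unbiased estimate
      # of the model's generalization error
      test = rng.normal(loc=1.0, scale=1.0, size=10_000)
      nll = 0.5 * np.log(2 * np.pi * var_hat) + (test - mu_hat) ** 2 / (2 * var_hat)
      print("estimated generalization error:", nll.mean())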

  7. Generalization optimal predictive distribution • ”The game of guessing a pdf” • Assume a random teacher θ drawn from P(θ) and a random data set D drawn from P(x|θ) • The prediction/generalization error averages the negative log predictive distribution of model A over the test sample distribution (the slide’s equation labels name these two factors)

  8. Generalization optimal predictive distribution • We define the ”generalization functional” (Hansen, NIPS 1999) • It is minimized by the ”Bayesian averaging” predictive distribution
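
  The functional and its minimizer were equation images; a reconstruction consistent with Hansen (NIPS 1999) and the setup on the previous slide:

      \Gamma[q] \;=\; -\int \! d\theta\, P(\theta) \int \! dD\, P(D \mid \theta) \int \! dx\, P(x \mid \theta)\, \log q(x \mid D)

  which is minimized over q by the Bayesian averaging predictive distribution

      q^{*}(x \mid D) \;=\; \int p(x \mid \theta)\, p(\theta \mid D)\, d\theta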

  9. Bias-variance trade-off and averaging • Averaging is good, but can we average ”too much”? • Define the family of tempered posterior distributions • Case: univariate normal distribution with unknown mean parameter • High temperature: widened posterior average • Low temperature: narrow average
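
  The tempered family itself is not in the transcript; the usual convention, with temperature T, is

      p_T(\theta \mid D) \;\propto\; \big[\, p(D \mid \theta)\, p(\theta) \,\big]^{1/T}

  so that T = 1 recovers the Bayesian posterior, T > 1 widens it, and T < 1 narrows it.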

  10. Bayes’ model selection, example • Let three models A, B, C be given: • A) x is normal N(0,1) • B) x is normal N(0,σ²), σ² is uniform U(0,∞) • C) x is normal N(μ,σ²), μ, σ² are uniform U(0,∞)

  11. Model A • The likelihood of N samples is given by:
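
  The formula was an image; for model A it is the fixed standard normal product:

      p(D \mid A) \;=\; \prod_{n=1}^{N} \mathcal{N}(x_n \mid 0, 1)
      \;=\; (2\pi)^{-N/2} \exp\!\Big( -\tfrac{1}{2} \sum_{n=1}^{N} x_n^2 \Big)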

  12. Model B • The likelihood of N samples is given by:
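
  Again reconstructing the lost formula: with the (improper) uniform prior on σ², the evidence of model B marginalizes the Gaussian likelihood over the variance:

      p(D \mid B) \;=\; \int_{0}^{\infty} \prod_{n=1}^{N} \mathcal{N}(x_n \mid 0, \sigma^2)\; p(\sigma^2)\, d\sigma^2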

  13. Model C • The likelihood of N samples is given by:
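
  Correspondingly for model C, marginalizing over both mean and variance:

      p(D \mid C) \;=\; \int_{0}^{\infty}\!\!\int \prod_{n=1}^{N} \mathcal{N}(x_n \mid \mu, \sigma^2)\; p(\mu)\, p(\sigma^2)\, d\mu\, d\sigma^2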

  14. Bayesian model selection • C (green) is the correct model; what if only A (red) and B (blue) are known? [figure not in transcript]
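
  A minimal numerical sketch in the spirit of the lost figure (illustrative values; the U(0,∞) prior of model B is truncated at a hypothetical s2_max to make the evidence integral computable on a grid):

      import numpy as np
      from scipy.special import logsumexp
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      x = rng.normal(loc=0.5, scale=2.0, size=50)  # data generated from "model C"

      # Model A: x ~ N(0, 1); no free parameters, so the evidence is the likelihood
      log_ev_A = norm.logpdf(x, loc=0.0, scale=1.0).sum()

      # Model B: x ~ N(0, s2), s2 ~ U(0, s2_max); grid approximation of the
      # evidence integral over the variance
      s2_max = 100.0
      s2 = np.linspace(1e-3, s2_max, 10_000)
      log_lik = -0.5 * x.size * np.log(2 * np.pi * s2) - (x ** 2).sum() / (2 * s2)
      log_ev_B = logsumexp(log_lik) + np.log(s2[1] - s2[0]) - np.log(s2_max)

      print("log evidence A:", log_ev_A)
      print("log evidence B:", log_ev_B)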

  15. Bayesian model selection • A (red) is the correct model [figure not in transcript]

  16. Bayesian inference • Bayesian averaging • Caveats: Bayes can rarely be implemented exactly; it is not optimal if the model family is incorrect (”Bayes cannot detect bias”); however, it is still asymptotically optimal if the observation model is correct and the prior is ”weak” (Hansen, 1999)

  17. Hierarchical Bayes models • Multi-level models in Bayesian averaging • References: C.P. Robert: The Bayesian Choice: A Decision-Theoretic Motivation. Springer Texts in Statistics, Springer, New York (1994); G. Golub, M. Heath, G. Wahba: Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21, 215–223 (1979); K. Friston: A theory of cortical responses. Phil. Trans. R. Soc. B 360, 815–836 (2005)

  18. Hierarchical Bayes models • Learning hyperparameters by adjusting prior expectations: ”empirical Bayes” (MacKay, 1992); cf. Boltzmann learning (Hinton et al., 1983) • The slide’s diagram links prior, posterior, and ”evidence”, with the target at maximal evidence • Hansen et al. (EUSIPCO, 2006)
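
  In standard empirical-Bayes notation, the ”evidence” for hyperparameters α and the maximal-evidence target are

      p(D \mid \alpha) \;=\; \int p(D \mid \theta)\, p(\theta \mid \alpha)\, d\theta, \qquad
      \hat{\alpha} \;=\; \arg\max_{\alpha}\; p(D \mid \alpha)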

  19. Hyperparameter dynamics • Gaussian prior with adaptive hyperparameter • The quantity θ²A is a signal-to-noise measure; θ_ML is the maximum likelihood estimate • Discontinuity: the parameter is pruned at low signal-to-noise • Hansen & Rasmussen, Neural Comp (1994); Tipping, ”Relevance vector machine” (1999)
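
  A reconstruction of the univariate pruning result (assuming, as in the ARD/RVM setting, a prior θ ~ N(0, α⁻¹) and a Gaussian likelihood centered at θ_ML with variance σ²): maximizing the evidence over α gives

      \hat{\alpha}^{-1} \;=\; \theta_{ML}^{2} - \sigma^{2} \quad \text{if } \theta_{ML}^{2} > \sigma^{2},
      \qquad \hat{\alpha} \to \infty \ \text{(parameter pruned)} \quad \text{otherwise}

  so the parameter is switched off discontinuously once the signal-to-noise ratio θ_ML²/σ² drops below 1.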

  20. Hyperparameter dynamics • Dynamically updated hyperparameters imply pruning • Pruning decisions are based on SNR • A mechanism for cognitive selection, attention?

  21. Hansen & Rasmussen, Neural Comp (1994)

  22.–31. [Figure-only slides; no text survived transcription]

  32. Approximations needed for posteriors • Approximations using asymptotic expansions (Laplace etc.) -JL • Approximation of posteriors using tractable (factorized) pdfs by KL fitting… • Approximation of products using EP -AH, Wednesday • Approximation by MCMC -OWI, Thursday

  33. Illustration of approximation by a Gaussian pdf (P. Højen-Sørensen: PhD thesis, 2001)
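
  A minimal sketch of the Laplace idea such a figure illustrates (the target density here is an assumed example, not the thesis figure): fit a Gaussian at the posterior mode with variance from the local curvature.

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Example target: unnormalized log posterior of a Gamma(shape=3, rate=2)
      a, b = 3.0, 2.0
      def log_p(t):
          return (a - 1.0) * np.log(t) - b * t

      # Locate the mode numerically (analytically it is (a - 1) / b = 1.0)
      mode = minimize_scalar(lambda t: -log_p(t), bounds=(1e-6, 10.0),
                             method="bounded").x

      # Second derivative at the mode (finite differences) -> Gaussian variance
      h = 1e-4
      curv = (log_p(mode + h) - 2.0 * log_p(mode) + log_p(mode - h)) / h ** 2
      var = -1.0 / curv

      print(f"Laplace approximation: N(mean={mode:.3f}, var={var:.3f})")
      # analytic Laplace values for this target: mean 1.0, var 0.5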

  34. [Figure-only slide; no text survived transcription]

  35. Variational Bayes • Notation: X are the observables, Z (and the parameters θ) are hidden variables • We analyse the log likelihood of a mixture model
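
  The bound the slide develops is not in the transcript; in the notation above, the standard VB lower bound follows from Jensen’s inequality:

      \log p(X) \;=\; \log \int \! d\theta \sum_{Z} p(X, Z, \theta)
      \;\ge\; \int \! d\theta \sum_{Z} q(Z, \theta)\, \log \frac{p(X, Z, \theta)}{q(Z, \theta)} \;=\; \mathcal{F}[q]

  with equality when q(Z, θ) equals the exact posterior.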

  36. Variational Bayes

  37. Variational Bayes (cont.)
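
  The update equations these two slides derived did not survive transcription; in standard form, restricting to a factorized q(Z, θ) = q_Z(Z) q_θ(θ) and maximizing F coordinate-wise gives

      \log q_Z(Z) \;=\; \big\langle \log p(X, Z, \theta) \big\rangle_{q_\theta} + \text{const}, \qquad
      \log q_\theta(\theta) \;=\; \big\langle \log p(X, Z, \theta) \big\rangle_{q_Z} + \text{const}

  iterated to convergence; each sweep increases the lower bound F.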

  38. Conjugate exponential families
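
  The defining forms were equation images; this is the standard conjugate-exponential setup (cf. Beal, 2003, whose code the exercises use):

      p(x, z \mid \theta) \;=\; f(x, z)\, g(\theta)\, \exp\!\big( \phi(\theta)^{\top} u(x, z) \big), \qquad
      p(\theta \mid \eta, \nu) \;=\; h(\eta, \nu)\, g(\theta)^{\eta}\, \exp\!\big( \phi(\theta)^{\top} \nu \big)

  with natural parameters φ(θ) and sufficient statistics u(x, z); the VB updates then stay inside the prior family.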

  39. Mini exercise • What are the natural parameters for a Gaussian? • What are the natural parameters for a MoG?
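
  As a check for the first question (standard result, not from the slides): writing the univariate Gaussian in exponential-family form gives

      \mathcal{N}(x \mid \mu, \sigma^2) \;\propto\; \exp\!\Big( \frac{\mu}{\sigma^2}\, x \;-\; \frac{1}{2\sigma^2}\, x^2 \Big),
      \qquad \phi = \Big( \frac{\mu}{\sigma^2},\, -\frac{1}{2\sigma^2} \Big), \quad u(x) = (x,\, x^2)

  A mixture of Gaussians is not itself an exponential family in x, but the complete-data model p(x, z | θ) with the mixture indicator z is, which is what VB exploits.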

  40. [Figure-only slide; no text survived transcription]

  41. Observation model and “Bayes factor”
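
  A standard GLM observation model consistent with the following slides (the slide’s own equations are not in the transcript):

      y \;=\; X w + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 I)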

  42. “Normal inverse gamma” prior – the conjugate prior for the GLM observation model

  43. “Normal inverse gamma” prior – the conjugate prior for the GLM observation model (cont.)
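
  The normal inverse gamma prior in conventional notation (a reconstruction; the parameter names m₀, V₀, a₀, b₀ are standard, not from the slides):

      p(w, \sigma^2) \;=\; \mathcal{N}\!\big( w \mid m_0,\ \sigma^2 V_0 \big)\;
      \mathrm{IG}\!\big( \sigma^2 \mid a_0, b_0 \big)

  which is conjugate to the GLM: the posterior is again NIG with updated (m_N, V_N, a_N, b_N).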

  44. The Bayes factor is the ratio between the normalization constants of the NIGs:
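
  In the NIG parameterization above, the marginal likelihood (evidence) that enters the Bayes factor is, by the standard result for Bayesian linear regression,

      p(y \mid X) \;=\; (2\pi)^{-N/2}\, \sqrt{\frac{|V_N|}{|V_0|}}\;
      \frac{b_0^{\,a_0}}{b_N^{\,a_N}}\; \frac{\Gamma(a_N)}{\Gamma(a_0)}

  and the Bayes factor between two observation models is the ratio of their evidences.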

  45.–47. [Figure-only slides; no text survived transcription]

  48. Exercises • Matthew Beal’s Mixture of Factor Analyzers code (available at variational-bayes.org) • Code a VB version of the BGML for signal detection (code available for the exact posterior)
