
Infinite Dynamic Bayesian Networks

Presented by Patrick Dallaire – DAMAS Workshop, November 2nd, 2012. Infinite Dynamic Bayesian Networks (Doshi et al., 2011). INTRODUCTION. PROBLEM DESCRIPTION. Consider precipitation measured by 500 different weather stations in the USA. Observations were discretized into 7 values.



Presentation Transcript


  1. Presented by Patrick Dallaire – DAMAS Workshop, November 2nd, 2012. Infinite Dynamic Bayesian Networks (Doshi et al., 2011)

  2. INTRODUCTION

  3. PROBLEM DESCRIPTION • Consider precipitation measured by 500 different weather stations in the USA • Observations were discretized into 7 values • The dataset consists of a time series including 3,287 daily measures • How can we learn the underlying weather model that produced the sequence of precipitation?
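The discretization step mentioned above can be sketched with a hypothetical binning; the actual thresholds and station data are not given in the talk, so the bin edges and precipitation amounts below are illustrative only:

```python
import numpy as np

# Hypothetical daily precipitation amounts (mm) for one station.
rain_mm = np.array([0.0, 0.2, 1.5, 12.0, 3.3, 0.0, 25.0])

# Assumed bin edges mapping continuous amounts to 7 discrete levels (0-6);
# the thresholds actually used for the dataset are not stated in the talk.
edges = np.array([0.1, 1.0, 2.5, 5.0, 10.0, 20.0])

levels = np.digitize(rain_mm, edges)  # discrete values in {0, ..., 6}
print(levels)  # → [0 1 2 5 3 0 6]
```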

  4. HIDDEN MARKOV MODEL • Observations o_t are produced based on the hidden state s_t • The hidden state evolves according to some dynamics • The Markov assumption says that s_t summarizes the state history and is thus enough to generate s_{t+1} and o_t • The learning task is to infer the transition model T and the observation model O from data
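As a minimal sketch of the generative process this slide describes, the snippet below forward-samples a toy HMM with 2 hidden regimes and 7 observation values; the matrices are invented for illustration, not the learned weather model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HMM: 2 hidden weather regimes, 7 discrete precipitation levels.
# T[i, j] = P(s_{t+1}=j | s_t=i); O[i, y] = P(o_t=y | s_t=i).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.70, 0.15, 0.08, 0.04, 0.02, 0.007, 0.003],   # "dry" regime
              [0.05, 0.10, 0.15, 0.25, 0.25, 0.15, 0.05]])    # "wet" regime

def sample_hmm(length, T, O, rng):
    """Forward-sample a hidden-state / observation sequence from the HMM."""
    states, obs = [], []
    s = rng.integers(T.shape[0])             # uniform initial state
    for _ in range(length):
        obs.append(rng.choice(O.shape[1], p=O[s]))   # emit o_t given s_t
        states.append(s)
        s = rng.choice(T.shape[0], p=T[s])           # Markov transition
    return states, obs

states, obs = sample_hmm(10, T, O, rng)
```

Learning reverses this process: given only `obs`, infer T and O (e.g. with Baum-Welch or the Bayesian machinery of the following slides).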

  5. INFINITE DYNAMIC BAYESIAN NETWORKS

  6. TRANSITION MODEL • A regular DBN is a directed graphical model • The state at time t is represented through a set of factors X_t = (X_t^1, …, X_t^K) • The next state is sampled according to: X_{t+1}^k ~ P(X_{t+1}^k | Pa_k(X_t)), where Pa_k(X_t) represents the values of the parents of factor k

  9. OBSERVATION MODEL • The state of a DBN is generally hidden • State values must be inferred from a set of observable nodes • The observations are sampled from: Y_t^n ~ P(Y_t^n | Pa_n(X_t)), where Pa_n(X_t) denotes the values of the parents of observable node n
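The factored transition and observation sampling of the last two slides can be sketched as follows; the 3-factor structure, parent sets, and conditional probabilities are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy factored DBN: 3 binary hidden factors. parents[k] lists which
# factors at time t feed factor k at time t+1 (an assumed structure).
parents = {0: [0], 1: [0, 1], 2: [1, 2]}

# cpt[k] maps the tuple of parent values to P(X^k_{t+1} = 1 | parents).
cpt = {
    0: {(0,): 0.1, (1,): 0.9},
    1: {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.5, (1, 1): 0.9},
    2: {(0, 0): 0.3, (0, 1): 0.7, (1, 0): 0.4, (1, 1): 0.8},
}

def step(x, rng):
    """Sample X_{t+1} factor by factor: X^k_{t+1} ~ P(. | Pa_k(X_t))."""
    return tuple(int(rng.random() < cpt[k][tuple(x[p] for p in parents[k])])
                 for k in sorted(parents))

x = (0, 1, 0)
trajectory = [x]
for _ in range(5):
    x = step(x, rng)
    trajectory.append(x)
```

Observations would be sampled the same way, with each observable node conditioning on its own (learned) subset of the hidden factors.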

  12. LEARNING THE STRUCTURE • The number of hidden factors is unknown • The state transition structure is unknown • The state observation structure is unknown

  13. PRIOR OVER DBN STRUCTURES • A Bayesian nonparametric prior is specified over structures with Indian buffet processes (IBP) • We specify a prior over observation connection structures: Z^O ~ IBP(α_O) • We specify a prior over transition connection structures: Z^T ~ IBP(α_T)

  14. IBP ON OBSERVATION STRUCTURE • The n-th observable node selects existing parent factor k with probability m_k / n, where m_k is the number of nodes already connected to factor k • It then samples Poisson(α_O / n) new parent factors
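A minimal sketch of this generative process, using the standard Indian buffet process mechanics (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_ibp(num_nodes, alpha, rng):
    """Sample a binary parent-structure matrix Z from an IBP(alpha).

    Row n is the parent set of node n: node n reuses existing factor k
    with probability m_k / (n + 1), where m_k counts how many earlier
    nodes use factor k, then creates Poisson(alpha / (n + 1)) new factors.
    """
    counts = []       # m_k for each factor created so far
    rows = []         # parent sets, as lists of factor indices
    for n in range(num_nodes):
        row = [k for k, m in enumerate(counts) if rng.random() < m / (n + 1)]
        for k in row:
            counts[k] += 1
        for _ in range(rng.poisson(alpha / (n + 1))):
            row.append(len(counts))   # brand-new parent factor
            counts.append(1)
        rows.append(row)
    Z = np.zeros((num_nodes, len(counts)), dtype=int)
    for n, row in enumerate(rows):
        Z[n, row] = 1
    return Z

Z = sample_ibp(8, alpha=2.0, rng=rng)
```

Because popular factors are reused with probability proportional to m_k, the IBP induces "rich get richer" sharing of parent factors while leaving the total number of factors unbounded.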

  23. IBP ON TRANSITION STRUCTURE • The same IBP construction is applied to the hidden factors themselves: the k-th factor selects existing parent factor j with probability m_j / k, where m_j is the number of factors already connected to factor j • It then samples Poisson(α_T / k) new parent factors

  33. GRAPHICAL MODEL OF THE PRIOR

  34. LEARNING MODEL DISTRIBUTIONS • The observation distribution is unknown • The transition distribution is unknown

  35. PRIOR OVER DBN DISTRIBUTIONS • A Bayesian prior is specified over observation distributions: θ^O ~ H, where H is a prior base distribution • A Bayesian nonparametric prior is specified over transition distributions: θ^T_k ~ DP(λ, θ̄), where DP is a Dirichlet process and θ̄ ~ Stick(γ) is a stick-breaking distribution

  37. PRIOR ON OBSERVATION MODEL • For each observable variable Y^n, we can draw an observation distribution from: θ^O_n ~ H • Assuming Y^n is discrete, θ^O_n could be a discrete (multinomial) distribution • The prior H could then be a Dirichlet • The posterior is obtained by counting how many times specific observations occurred
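The counting-based posterior update is plain Dirichlet-multinomial conjugacy; a sketch with an assumed symmetric prior and made-up observations:

```python
import numpy as np

# Dirichlet prior over a discrete observation distribution with 7 values
# (symmetric concentration 1.0 — an illustrative choice).
prior = np.ones(7)

# Hypothetical observed precipitation levels for one observable node.
observations = [0, 0, 1, 3, 0, 2, 1, 0]

# Conjugacy: the posterior is Dirichlet(prior + counts).
counts = np.bincount(observations, minlength=7)
posterior = prior + counts          # → [5, 3, 2, 2, 1, 1, 1]

# Posterior mean estimate of P(y) for each of the 7 values.
p_hat = posterior / posterior.sum()
```

In the full model the counts are over hidden parent configurations, so inference must average over them (e.g. by Gibbs sampling), but each conditional update is exactly this count-and-add step.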

  41. EXAMPLE [figure: counting red and blue observations to update a Dirichlet posterior]

  45. PRIOR ON TRANSITION MODEL • First, we sample the expected factor transition distribution for infinitely many factors: θ̄ ~ Stick(γ) • For each active hidden factor, we sample an individual transition distribution: θ^T_k ~ DP(λ, θ̄), where λ controls the (inverse) variance around θ̄ • The posterior is again obtained by counting
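A rough sketch of this two-level prior, using a truncated stick-breaking draw for the expected distribution and a finite Dirichlet as a stand-in for the DP around it (the truncation level and hyperparameter values are arbitrary choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)

def stick_breaking(gamma, truncation, rng):
    """Truncated stick-breaking draw of the expected transition weights."""
    betas = rng.beta(1.0, gamma, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    w = betas * remaining
    return w / w.sum()            # renormalize the truncated weights

gamma, lam, K = 1.0, 10.0, 20
theta_bar = stick_breaking(gamma, K, rng)

# Each active factor gets its own transition distribution concentrated
# around theta_bar; larger lam means lower variance (a finite Dirichlet
# stand-in for DP(lam, theta_bar) under the truncation).
theta_k = rng.dirichlet(lam * theta_bar, size=3)
```

Tying every θ^T_k to the shared θ̄ lets factors borrow statistical strength from each other while still allowing per-factor dynamics.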

  48. EXAMPLE

