
Presentation Transcript


  1. QUIZ!! • T/F: Rejection Sampling without weighting is not consistent. FALSE • T/F: Rejection Sampling (often) converges faster than Forward Sampling. FALSE • T/F: Likelihood weighting (often) converges faster than Rejection Sampling. TRUE • T/F: The Markov Blanket of X contains other children of parents of X. FALSE • T/F: The Markov Blanket of X contains other parents of children of X. TRUE • T/F: Gibbs sampling requires you to weight samples by their likelihood. FALSE • T/F: In Gibbs sampling, it is a good idea to reject the first M<N samples (burn-in). TRUE • Decision Networks: • T/F: Utility nodes never have parents. FALSE • T/F: Value of Perfect Information (VPI) is always non-negative. TRUE

  2. CSE 511a: Artificial Intelligence, Spring 2013. Lecture 19: Hidden Markov Models. 04/10/2013. Robert Pless, via Kilian Q. Weinberger; slides adapted from Dan Klein, UC Berkeley

  3. Recap: Decision Diagrams • [Diagram: chance node Weather with child Forecast, decision node Umbrella, and utility node U with parents Umbrella and Weather]

  4. Example: MEU Decisions • [Diagram: the same network, with expected utilities computed for Umbrella = leave and Umbrella = take given Forecast = bad; optimal decision = take]

  5. Value of Information • Assume we have evidence E=e. Value if we act now: MEU(e) = max_a Σ_s P(s | e) U(s, a) • Assume we see that E' = e'. Value if we act then: MEU(e, e') = max_a Σ_s P(s | e, e') U(s, a) • BUT E' is a random variable whose value is unknown, so we don't know what e' will be. • Expected value if E' is revealed and then we act: MEU(e, E') = Σ_{e'} P(e' | e) MEU(e, e') • Value of information: how much MEU goes up by revealing E' first: VPI(E' | e) = MEU(e, E') − MEU(e). VPI = "value of perfect information"
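
To make the definitions concrete, here is a minimal sketch of the VPI computation for the umbrella decision. All numbers below (the weather prior, the forecast accuracy, and the utilities) are assumed for illustration; they are not taken from the slides.

```python
# A minimal VPI sketch for the umbrella decision (assumed numbers).
P_w = {"sun": 0.7, "rain": 0.3}                      # prior P(W)
P_bad = {"sun": 0.2, "rain": 0.9}                    # P(F=bad | W)
U = {("take", "sun"): 20, ("take", "rain"): 70,
     ("leave", "sun"): 100, ("leave", "rain"): 0}    # U(action, weather)

def meu(belief):
    """Max over actions of expected utility under a belief over weather."""
    return max(sum(belief[w] * U[(a, w)] for w in belief)
               for a in ("take", "leave"))

# MEU(e): act now, under the prior.
meu_now = meu(P_w)

# MEU(e, E'): average the MEU over forecast outcomes, by Bayes' rule.
meu_forecast = 0.0
for f in ("bad", "good"):
    like = {w: P_bad[w] if f == "bad" else 1 - P_bad[w] for w in P_w}
    p_f = sum(like[w] * P_w[w] for w in P_w)               # P(F=f)
    posterior = {w: like[w] * P_w[w] / p_f for w in P_w}   # P(W | F=f)
    meu_forecast += p_f * meu(posterior)

print("VPI(Forecast) =", meu_forecast - meu_now)  # non-negative, as claimed
```

With these assumed numbers, seeing the forecast before deciding raises the MEU (VPI ≈ 7.7 > 0), consistent with the nonnegativity property on the next slide.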

  6. VPI Example: Weather • [Diagram: the Weather/Forecast/Umbrella network, showing the MEU with no evidence, the MEU if the forecast is bad, the MEU if the forecast is good, and the forecast distribution P(F)]

  7. VPI Properties • Nonnegative: VPI(E' | e) ≥ 0 • Nonadditive (consider, e.g., obtaining E_j twice): VPI(E_j, E_k | e) ≠ VPI(E_j | e) + VPI(E_k | e) • Order-independent: VPI(E_j, E_k | e) = VPI(E_j | e) + VPI(E_k | e, E_j) = VPI(E_k | e) + VPI(E_j | e, E_k)

  8. Now for something completely different

  9. “Our youth now love luxury. They have bad manners, contempt for authority; they show disrespect for their elders and love chatter in place of exercise; they no longer rise when elders enter the room; they contradict their parents, chatter before company; gobble up their food and tyrannize their teachers.”

  10. “Our youth now love luxury. They have bad manners, contempt for authority; they show disrespect for their elders and love chatter in place of exercise; they no longer rise when elders enter the room; they contradict their parents, chatter before company; gobble up their food and tyrannize their teachers.” – Socrates 469–399 BC

  11. Adding time!

  12. Reasoning over Time • Often, we want to reason about a sequence of observations • Speech recognition • Robot localization • User attention • Medical monitoring • Need to introduce time into our models • Basic approach: hidden Markov models (HMMs) • More general: dynamic Bayes’ nets

  13. Markov Model

  14. Markov Models • A Markov model is a chain-structured BN: X1 → X2 → X3 → X4 → … • Each node is identically distributed (stationarity) • Value of X at a given time is called the state • As a BN, each node carries the CPT P(X_t | X_{t−1}) • Parameters: called transition probabilities or dynamics; they specify how the state evolves over time (also, initial probabilities P(X1))

  15. Conditional Independence • Basic conditional independence: past and future are independent given the present • Each time step only depends on the previous: P(X_{t+1} | X_1, …, X_t) = P(X_{t+1} | X_t) • This is called the (first-order) Markov property • Note that the chain is just a (growing) BN: we can always use generic BN reasoning on it if we truncate the chain at a fixed length

  16. Example: Markov Chain • Weather: states X = {rain, sun} • Transitions (this is a CPT, not a BN!): P(sun | sun) = 0.9, P(rain | sun) = 0.1, P(sun | rain) = 0.1, P(rain | rain) = 0.9 • Initial distribution: 1.0 sun • What's the probability distribution after one step?

  17. Mini-Forward Algorithm • Question: what's P(X) on some day t? • P(x_1) is known; then P(x_t) = Σ_{x_{t−1}} P(x_t | x_{t−1}) P(x_{t−1}) • An instance of variable elimination! • [Trellis diagram: sun/rain states unrolled over time; forward simulation]
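
A minimal sketch of this recurrence in Python, using the two-state weather chain from the previous slide (transition values as reconstructed there):

```python
# Mini-forward algorithm on the weather chain.
# T[x_prev][x] = P(X_t = x | X_{t-1} = x_prev).
T = {"sun": {"sun": 0.9, "rain": 0.1},
     "rain": {"sun": 0.1, "rain": 0.9}}

def forward(p, steps):
    """Push a distribution p over states through `steps` transitions."""
    for _ in range(steps):
        p = {x: sum(T[xp][x] * p[xp] for xp in p) for x in p}
    return p

print(forward({"sun": 1.0, "rain": 0.0}, 1))    # {'sun': 0.9, 'rain': 0.1}
print(forward({"sun": 1.0, "rain": 0.0}, 100))  # close to uniform
```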

  18. Example • From an initial observation of sun, and from an initial observation of rain • [Bar charts of P(X1), P(X2), P(X3), …, P(X∞) for each starting observation; both sequences converge to the same distribution]

  19. Stationary Distributions • If we simulate the chain long enough: • What happens? • Uncertainty accumulates • Eventually, we have no idea what the state is! • Stationary distributions: • For most chains, the distribution we end up in is independent of the initial distribution • Called the stationary distribution of the chain • Usually, can only predict a short time out
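
A short sketch of finding the stationary distribution numerically: iterate the chain until the belief stops changing (power iteration), using the same assumed weather transitions as above.

```python
# Stationary distribution of the weather chain by power iteration.
T = {"sun": {"sun": 0.9, "rain": 0.1},
     "rain": {"sun": 0.1, "rain": 0.9}}

def stationary(p, tol=1e-12):
    """Apply the transition update until the distribution is a fixed point."""
    while True:
        q = {x: sum(T[xp][x] * p[xp] for xp in p) for x in p}
        if max(abs(q[x] - p[x]) for x in p) < tol:
            return q
        p = q

print(stationary({"sun": 1.0, "rain": 0.0}))  # ~{'sun': 0.5, 'rain': 0.5}
```

For this symmetric chain the fixed point is uniform, and the same answer comes out if you start from rain, matching the slide's claim that (for most chains) the result is independent of the initial distribution.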

  20. Hidden Markov Model

  21. Hidden Markov Models • Markov chains not so useful for most agents: eventually you don't know anything anymore • Need observations to update your beliefs • Hidden Markov models (HMMs): underlying Markov chain over states S; you observe outputs (effects) at each time step • As a Bayes' net: X1 → X2 → X3 → X4 → X5, with an emission E_t hanging off each X_t

  22. Example • An HMM is defined by: • Initial distribution: P(X_1) • Transitions: P(X_t | X_{t−1}) • Emissions: P(E_t | X_t)
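
As a concrete sketch, the three ingredients can be written down directly. The rain/sun and umbrella numbers below are assumed for illustration (classic umbrella-world values), not read off this slide.

```python
# An HMM specification: initial distribution, transitions, emissions.
# Numbers assumed for illustration (umbrella world).
hmm = {
    "init":  {"rain": 0.5, "sun": 0.5},                       # P(X1)
    "trans": {"rain": {"rain": 0.7, "sun": 0.3},              # P(X_t | X_{t-1})
              "sun":  {"rain": 0.3, "sun": 0.7}},
    "emit":  {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},  # P(E_t | X_t)
              "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}},
}
```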

  23. Ghostbusters HMM • P(X1) = uniform over the grid (1/9 in each cell) • P(X | X') = usually move clockwise, but sometimes move in a random direction or stay in place • P(R_ij | X) = same sensor model as before: red means close, green means far away • [Grids: the uniform P(X1), and P(X | X' = <1,2>) concentrated on clockwise moves, with entries such as 1/2, 1/6, and 0; BN: X1 → … → X5 with a reading R_ij at each step]

  24. Conditional Independence • HMMs have two important independence properties: • Markov hidden process: the future depends on the past via the present • Current observation independent of all else given current state • Quiz: does this mean that observations are independent given no evidence? • [No, they are correlated by the hidden state]

  25. Real HMM Examples • Speech recognition HMMs: • Observations are acoustic signals (continuous valued) • States are specific positions in specific words (so, tens of thousands) • Machine translation HMMs: • Observations are words (tens of thousands) • States are translation options • Robot tracking: • Observations are range readings (continuous) • States are positions on a map (continuous)

  26. Filtering / Monitoring • Filtering, or monitoring, is the task of tracking the distribution B(X) (the belief state) over time • We start with B(X) in an initial setting, usually uniform • As time passes, or we get observations, we update B(X) • The Kalman filter was invented around 1960 and first implemented as a method of trajectory estimation for the Apollo program

  27. Example: Robot Localization (example from Michael Pfeiffer) • t=0 • Sensor model: never more than 1 mistake • Motion model: with small probability, the action may not be executed • [Grid map of the belief; color scale from probability 0 to 1]

  28. Example: Robot Localization • t=1 • [Updated belief grid; color scale from probability 0 to 1]

  29. Example: Robot Localization • t=2 • [Updated belief grid; color scale from probability 0 to 1]

  30. Example: Robot Localization • t=3 • [Updated belief grid; color scale from probability 0 to 1]

  31. Example: Robot Localization • t=4 • [Updated belief grid; color scale from probability 0 to 1]

  32. Example: Robot Localization • t=5 • [Updated belief grid; color scale from probability 0 to 1]

  33. Inference Recap: Simple Cases • [Two base-case networks: a single observation, X1 → E1, and a single time step, X1 → X2]

  34. Passage of Time • Assume we have current belief B(X_t) = P(X_t | evidence to date e_{1:t}) • Then, after one time step passes: P(X_{t+1} | e_{1:t}) = Σ_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t}) • Or, compactly: B'(X_{t+1}) = Σ_{x_t} P(X_{t+1} | x_t) B(x_t) • Basic idea: beliefs get "pushed" through the transitions • With the "B" notation, we have to be careful about what time step t the belief is about, and what evidence it includes
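
A minimal sketch of this time-elapse update in Python, with the assumed umbrella-world transition model from the earlier HMM sketch (redefined here so the block runs on its own):

```python
# Time-elapse update: B'(X_{t+1}) = sum_{x_t} P(X_{t+1} | x_t) B(x_t).
# Transition values assumed (umbrella world).
trans = {"rain": {"rain": 0.7, "sun": 0.3},
         "sun":  {"rain": 0.3, "sun": 0.7}}

def elapse_time(belief):
    """Push the belief through one step of the transition model."""
    return {x2: sum(trans[x1][x2] * belief[x1] for x1 in belief)
            for x2 in belief}

print(elapse_time({"rain": 1.0, "sun": 0.0}))  # {'rain': 0.7, 'sun': 0.3}
```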

  35. Example: Passage of Time • As time passes, uncertainty "accumulates" • Transition model: ghosts usually go clockwise • [Belief grids at T = 1, T = 2, T = 5, growing progressively more diffuse]

  36. Observation • Assume we have current belief B'(X_{t+1}) = P(X_{t+1} | previous evidence e_{1:t}) • Then, when evidence e_{t+1} arrives: P(X_{t+1} | e_{1:t+1}) ∝ P(e_{t+1} | X_{t+1}) P(X_{t+1} | e_{1:t}) • Or: B(X_{t+1}) ∝ P(e_{t+1} | X_{t+1}) B'(X_{t+1}) • Basic idea: beliefs are reweighted by the likelihood of the evidence • Unlike the passage of time, we have to renormalize
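
And a matching sketch of the observation update, with assumed umbrella-world emission probabilities:

```python
# Observation update: B(X) is proportional to P(e | X) * B'(X),
# followed by renormalization. Emission values assumed (umbrella world).
emit = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
        "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def observe(belief, e):
    """Reweight the belief by the evidence likelihood and renormalize."""
    unnorm = {x: emit[x][e] * belief[x] for x in belief}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

print(observe({"rain": 0.5, "sun": 0.5}, "umbrella"))
# {'rain': 0.818..., 'sun': 0.181...}
```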

  37. Example: Observation • As we get observations, beliefs get reweighted and uncertainty "decreases" • [Belief grids before and after an observation: the posterior is more concentrated]

  38. Example HMM

  39. The Forward Algorithm • We are given evidence at each time and want to know B_t(X) = P(X_t | e_{1:t}) • We can derive the following update: P(x_t | e_{1:t}) ∝ P(x_t, e_{1:t}) = P(e_t | x_t) Σ_{x_{t−1}} P(x_t | x_{t−1}) P(x_{t−1}, e_{1:t−1}) • We can normalize as we go if we want to have P(x | e) at each time step, or just once at the end…
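
Composing the two sketches above gives the full forward update. Here it is self-contained, again with the assumed umbrella-world numbers, and treating the prior as the belief at t = 0 before any observation:

```python
# Forward algorithm sketch: alternate time-elapse and observation updates.
# Model values assumed (umbrella world), as in the sketches above.
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
        "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def forward_step(belief, e):
    # Time update: B'(X_{t+1}) = sum_{x_t} P(X_{t+1} | x_t) B(x_t)
    predicted = {x2: sum(trans[x1][x2] * belief[x1] for x1 in belief)
                 for x2 in belief}
    # Observation update: reweight by P(e | X_{t+1}), then renormalize.
    unnorm = {x: emit[x][e] * predicted[x] for x in predicted}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

belief = {"rain": 0.5, "sun": 0.5}  # prior belief at t = 0
for e in ["umbrella", "umbrella", "no_umbrella"]:
    belief = forward_step(belief, e)
print(belief)  # P(X_3 | e_{1:3})
```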

  40. Online Belief Updates • Every time step, we start with current P(X | evidence) • We update for time: B'(X_{t+1}) = Σ_{x_t} P(X_{t+1} | x_t) B(x_t) • We update for evidence: B(X_{t+1}) ∝ P(e_{t+1} | X_{t+1}) B'(X_{t+1}) • The forward algorithm does both at once (and doesn't normalize) • Problem: space is O(|X|) and time is O(|X|²) per time step

  41. Next Lecture: • Sampling! (Particle Filtering)
