
HMMs and Particle Filters


Presentation Transcript


  1. HMMs and Particle Filters

  2. Observations and Latent States Markov models don’t get used much in AI. The reason is that Markov models assume that you know exactly what state you are in at each time step. This is rarely true for AI agents. Instead, we will say that the agent has a set of possible latent states – states that are not directly observed or known to the agent. In addition, the agent has sensors that allow it to sense some aspects of the environment, to take measurements or observations.

  3. Hidden Markov Models Suppose you are the parent of a college student, and would like to know how studious your child is. You can’t observe them at all times, but you can periodically call and see if your child answers. [Figure: an HMM unrolled over three time steps, with hidden states H1, H2, H3, … taking the values Sleep or Study, observations O1, O2, O3, … (“answer call or not?”), and arcs labeled with transition probabilities of 0.6, 0.5, and 0.4.]

  4. Hidden Markov Models Here’s the same model, with probabilities in tables. [Figure: the same HMM, with conditional probability tables for the initial state, the transitions P(Ht+1 | Ht), and the observations P(Ot | Ht); the table entries did not survive the transcript.]
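The table entries themselves were lost in the transcript. As a concrete stand-in, here is one set of tables in Python that is consistent with the rest of the deck: the 0.5 prior is stated on slide 7, the emission values are an assumption chosen so that Bayes’ rule reproduces the 0.111 posterior computed there, and the transitions are read off the 0.6/0.4/0.5 labels in the slide-3 diagram (their exact placement is a guess).

```python
# Hypothetical probability tables for the Sleep/Study HMM. Only the 0.5 prior
# and the 0.111 posterior are given in the deck; everything else here is an
# assumption chosen to be consistent with those numbers.
prior = {"Sleep": 0.5, "Study": 0.5}                    # P(H1)
transition = {"Sleep": {"Sleep": 0.6, "Study": 0.4},    # P(H_t+1 | H_t)
              "Study": {"Sleep": 0.5, "Study": 0.5}}
emission = {"Sleep": {"Answer": 0.1, "NoAnswer": 0.9},  # P(O_t | H_t)
            "Study": {"Answer": 0.8, "NoAnswer": 0.2}}
```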

  5. Hidden Markov Models HMMs (and MMs) are a special type of Bayes Net. Everything you have learned about BNs applies here. [Figure: the HMM drawn as a Bayes net, H1 → H2 → H3 → …, with an edge from each Ht to its observation Ot.]

  6. Quick Review of BNs for HMMs [Figure: two network fragments used for the review, H1 → O1 and H1 → H2.]
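The review itself did not survive the transcript. Judging from the two fragments and how they are used on the next slides, the identities presumably being reviewed are Bayes’ rule on an emission edge and marginalization across a transition edge:

$$P(H_1 \mid O_1) = \frac{P(O_1 \mid H_1)\,P(H_1)}{\sum_{h_1} P(O_1 \mid h_1)\,P(h_1)}, \qquad P(H_2) = \sum_{h_1} P(H_2 \mid h_1)\,P(h_1)$$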

  7. Hidden Markov Models Suppose a parent calls and gets an answer at time step 1. What is P(H1=Sleep | O1=Ans)? Notice: before the observation, P(Sleep) was 0.5. By making a call and getting an answer, the parent’s belief in Sleep drops to P(Sleep) = 0.111.
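The arithmetic behind the 0.111 is not preserved in the transcript. With the 0.5 prior from the slide and the assumed emission probabilities P(Ans | Sleep) = 0.1 and P(Ans | Study) = 0.8 (any pair in a 1:8 ratio gives the same posterior), Bayes’ rule reproduces it:

$$P(H_1=\text{Sleep} \mid O_1=\text{Ans}) = \frac{P(\text{Ans} \mid \text{Sleep})\,P(\text{Sleep})}{P(\text{Ans} \mid \text{Sleep})\,P(\text{Sleep}) + P(\text{Ans} \mid \text{Study})\,P(\text{Study})} = \frac{0.1 \times 0.5}{0.1 \times 0.5 + 0.8 \times 0.5} = \frac{0.05}{0.45} \approx 0.111$$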

  8. Hidden Markov Models Suppose a parent calls and gets an answer at time step 2. What is P(H2=Sleep | O2=Ans)?
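The slide leaves the computation as an exercise. Structurally (the numbers depend on the tables, which did not survive the transcript), it is a prediction step through one transition, followed by Bayes’ rule on the observation:

$$P(H_2=\text{Sleep}) = \sum_{h_1} P(H_2=\text{Sleep} \mid h_1)\,P(h_1), \qquad P(H_2=\text{Sleep} \mid O_2=\text{Ans}) = \frac{P(\text{Ans} \mid \text{Sleep})\,P(H_2=\text{Sleep})}{\sum_{h_2} P(\text{Ans} \mid h_2)\,P(H_2=h_2)}$$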

  9. Quiz: Hidden Markov Models Suppose a parent calls twice, once at time step 1 and once at time step 2. The first time, the child does not answer, and the second time the child does. Now what is P(H2=Sleep)?

  10. Answer: Hidden Markov Models Suppose a parent calls twice, once at time step 1 and once at time step 2. The first time, the child does not answer, and the second time the child does. Now what is P(H2=Sleep)? By enumeration over the hidden states,

$$P(H_2=\text{Sleep} \mid O_1=\neg\text{Ans},\, O_2=\text{Ans}) = \frac{\sum_{h_1} P(h_1)\,P(\neg\text{Ans} \mid h_1)\,P(H_2=\text{Sleep} \mid h_1)\,P(\text{Ans} \mid \text{Sleep})}{\sum_{h_1}\sum_{h_2} P(h_1)\,P(\neg\text{Ans} \mid h_1)\,P(h_2 \mid h_1)\,P(\text{Ans} \mid h_2)}$$

The numerator and denominator are each a sum with one term per assignment of the hidden states. It’s a pain to calculate by enumeration.
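For anyone who wants to check the arithmetic, here is the same enumeration as runnable Python, using the assumed tables from the earlier sketch, so the printed number illustrates the method rather than the deck’s actual answer:

```python
from itertools import product

# Brute-force enumeration of P(H2=Sleep | O1=NoAnswer, O2=Answer).
# Each assignment of (H1, H2) contributes one term to the sums below.
prior = {"Sleep": 0.5, "Study": 0.5}
transition = {"Sleep": {"Sleep": 0.6, "Study": 0.4},
              "Study": {"Sleep": 0.5, "Study": 0.5}}
emission = {"Sleep": {"Answer": 0.1, "NoAnswer": 0.9},
            "Study": {"Answer": 0.8, "NoAnswer": 0.2}}
states = ["Sleep", "Study"]

def term(h1, h2):
    # P(h1) * P(O1=NoAnswer | h1) * P(h2 | h1) * P(O2=Answer | h2)
    return (prior[h1] * emission[h1]["NoAnswer"]
            * transition[h1][h2] * emission[h2]["Answer"])

numerator = sum(term(h1, "Sleep") for h1 in states)
denominator = sum(term(h1, h2) for h1, h2 in product(states, states))
print(numerator / denominator)  # posterior P(H2=Sleep | both observations)
```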

  11. Quiz: Complexity of Enumeration for HMMs Suppose we have an HMM with T time steps. To compute any query like P(Hi|O1, …, OT), we need to compute P(O1, …, OT). How many terms are in this sum, if there are 2 possible values for each Hi?

  12. Answer: Complexity of Enumeration for HMMs Suppose we have an HMM with T time steps. To compute any query like P(Hi|O1, …, OT), we need to compute P(O1, …, OT). How many terms are in this sum, if there are 2 possible values for each Hi? There are 2^T terms in this sum. Regular enumeration is an O(2^T) algorithm. This makes it intractable, for example, for sentences with 20 words, or DNA sequences with hundreds or millions of base pairs.
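Concretely, the sum being counted is the enumeration of P(O1, …, OT) over all hidden-state assignments:

$$P(O_1,\dots,O_T) = \sum_{h_1} \cdots \sum_{h_T} P(h_1)\,P(O_1 \mid h_1) \prod_{t=2}^{T} P(h_t \mid h_{t-1})\,P(O_t \mid h_t)$$

One term per assignment of (h1, …, hT) gives 2^T terms; for a 20-word sentence that is already 2^20 ≈ one million terms.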

  13. Specialized Inference Algorithm: Dynamic Programming There is a fairly simple way to compute this sum exactly in O(T), or linear time, using dynamic programming. Essentially, this works by computing partial sums, storing them, and re-using them during calculations of the sums for longer sequences. This is called the forward algorithm. We won’t cover this here, but you can see the book or online tutorials if you are interested.
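As a concrete sketch of the idea (not the book’s presentation), here is the forward algorithm for the two-state model, reusing the assumed tables from the earlier sketches:

```python
# A minimal sketch of the forward algorithm. alpha[h] holds the partial sum
# P(O1..Ot, Ht=h); each step reuses the previous step's values, which is
# what makes the whole pass O(T) instead of O(2^T).
def forward(observations, states, prior, transition, emission):
    alpha = {h: prior[h] * emission[h][observations[0]] for h in states}
    for obs in observations[1:]:
        alpha = {h: emission[h][obs] * sum(alpha[g] * transition[g][h] for g in states)
                 for h in states}
    return sum(alpha.values())  # P(O1, ..., OT)

# e.g. forward(["NoAnswer", "Answer"], ["Sleep", "Study"], prior, transition, emission)
```

Each step folds the previous step’s partial sums into the next, so the work per time step is constant for a fixed number of states and the whole pass is linear in T.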

  14. Demo of HMM Robot Localization YouTube demo from Udacity.com’s AI course: https://www.youtube.com/watch?v=Nc9-iLy_rgY&feature=player_embedded 1-dimensional robot demo: https://www.youtube.com/watch?v=8mi8z-EnYq8&feature=player_embedded

  15. Particle Filter Demos Real robot localization with particle filter: https://www.youtube.com/watch?v=H0G1yslM5rc&feature=player_embedded 1-dimensional case: https://www.youtube.com/watch?v=qQQYkvS5CzU&feature=player_embedded

  16. Particle Filter Algorithm
  Inputs:
  • Set of particles S, each with location si.loc and weight si.w
  • Control vector u (where the robot should move next)
  • Measurement vector z (sensor readings)
  Outputs:
  • New particles S’, for the next iteration

  17. Particle Filter Algorithm
  Init: S’ = {}
  For j = 1, …, N (N is the number of particles in S):
    Pick a particle sj from S randomly (with replacement), in proportion to the weights s.w
    Create a new particle s’
    Sample s’.loc from the motion model, given sj.loc and the control u
    Set s’.w to the likelihood of the measurement z given s’.loc
    Add s’ to S’
  End For
  For each s’ in S’:
    Set s’.w = s’.w / (sum of all weights in S’)  (normalize)
  End For
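The sampling and weight-update expressions on this slide did not survive the transcript; the reconstruction above and the sketch below follow the standard particle filter structure (resample, move, reweight, normalize). The 1-D Gaussian motion and sensor models here are illustrative assumptions, not something the slides specify.

```python
import math
import random

# A sketch of one particle filter iteration for a 1-D robot.
class Particle:
    def __init__(self, loc, w):
        self.loc, self.w = loc, w

def particle_filter_step(S, u, z, motion_noise=0.1, sensor_noise=0.5):
    N = len(S)
    # Resample: pick N particles with replacement, in proportion to weights.
    picks = random.choices(S, weights=[s.w for s in S], k=N)
    S_new = []
    for s in picks:
        # Sample the new location from the motion model P(loc' | loc, u).
        loc = s.loc + u + random.gauss(0.0, motion_noise)
        # Reweight by the measurement model P(z | loc') (assumed Gaussian).
        w = math.exp(-((z - loc) ** 2) / (2 * sensor_noise ** 2))
        S_new.append(Particle(loc, w))
    # Normalize the weights so they sum to 1.
    total = sum(p.w for p in S_new)
    for p in S_new:
        p.w = p.w / total if total > 0 else 1.0 / N
    return S_new
```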
