
Probabilistic reasoning over time


Presentation Transcript


  1. Probabilistic reasoning over time • This sentence is likely to be untrue in the future!

  2. The basic problem • What do we know about the state of the world now, given a history of the world before? • The only evidence we have is probabilistic. • “Past performance may not be a guide to future performance.”

  3. Simplifying assumptions and notations • States are our “events”. • (Partial) states can be measured at reasonable time intervals. • Xt: unobservable state variables at time t. • Et (“evidence”): observable state variables at time t. • Vm:n: the variables Vm, Vm+1, …, Vn

  4. Stationary, Markovian (transition model) • Stationary: the laws of probability don’t change over time. • Markovian: the current unobservable state depends on a finite number of past states. • First-order: the current state depends only on the previous state, i.e.: • P(Xt|X0:t-1) = P(Xt|Xt-1) • Second-order: it depends on the two previous states, and so on.
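
  To make the first-order assumption concrete, here is a minimal sketch (in Python) of a transition model for a hypothetical two-state Rain/Sun world; all probabilities are invented for illustration.

    # First-order Markov transition model for a toy Rain/Sun world.
    # The numbers are made up, purely for illustration.
    T = {
        "Rain": {"Rain": 0.7, "Sun": 0.3},   # P(X_t | X_{t-1} = Rain)
        "Sun":  {"Rain": 0.3, "Sun": 0.7},   # P(X_t | X_{t-1} = Sun)
    }

    def transition_prob(prev_state, state):
        """P(X_t = state | X_{t-1} = prev_state): only the previous state matters."""
        return T[prev_state][state]

    print(transition_prob("Rain", "Sun"))  # 0.3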

  5. Observable variables (the sensor model) • Observable variables depend only on the current state (essentially by definition); these are the “sensors”. • The current state causes the sensor values. • P(Et|X0:t,E0:t-1) = P(Et|Xt)

  6. Start it up (the prior probability model) • What is P(X0)? • At time t, the joint is completely determined: • P(X0,X1,…,Xt,E1,…,Et) = P(X0) ∏i=1..t P(Xi|Xi-1) P(Ei|Xi)
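
  With the prior, the transition model, and the sensor model in hand, the joint probability of any finite history is just the product above. A runnable sketch, reusing the hypothetical Rain/Sun world and adding a made-up Umbrella sensor:

    # P(X0, X1..Xt, E1..Et) = P(X0) * prod_{i=1..t} P(Xi|Xi-1) * P(Ei|Xi).
    # Prior, transition, and sensor numbers are illustrative only.
    PRIOR  = {"Rain": 0.5, "Sun": 0.5}                        # P(X0)
    T      = {"Rain": {"Rain": 0.7, "Sun": 0.3},
              "Sun":  {"Rain": 0.3, "Sun": 0.7}}              # P(Xi | Xi-1)
    SENSOR = {"Rain": {"Umbrella": 0.9, "NoUmbrella": 0.1},
              "Sun":  {"Umbrella": 0.2, "NoUmbrella": 0.8}}   # P(Ei | Xi)

    def joint(states, evidence):
        """states = [x0, x1, ..., xt]; evidence = [e1, ..., et]."""
        p = PRIOR[states[0]]
        for i, e in enumerate(evidence, start=1):
            p *= T[states[i - 1]][states[i]] * SENSOR[states[i]][e]
        return p

    print(joint(["Sun", "Rain", "Rain"], ["Umbrella", "Umbrella"]))
    # 0.5 * (0.3 * 0.9) * (0.7 * 0.9) = 0.08505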

  7. Better predictions? • More state variables (temperature, humidity, pressure, season…) • Higher-order Markov processes (take more of the past into account). • Tradeoffs?

  8. What’s it good for? • Filtering/monitoring: belief about the current state • Prediction: beliefs about the next state • Smoothing: hindsight about previous states • Explanation: the most likely causes of the observed evidence

  9. Example

  10. Hidden Markov Models (HMMs) • Further simplification: only one (discrete) state variable. • We can now use matrices: • Ti,j = P(Xt=j|Xt-1=i)
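
  With a single discrete state variable, one step of monitoring (updating the belief about the current state as evidence arrives) reduces to two matrix operations and a normalization. A sketch with NumPy, assuming the same toy Rain/Sun/Umbrella numbers as above; states are indexed 0 = Rain, 1 = Sun.

    import numpy as np

    T = np.array([[0.7, 0.3],        # T[i, j] = P(X_t = j | X_{t-1} = i)
                  [0.3, 0.7]])
    # Diagonal matrix of P(e | X_t = i) for the observed evidence e = Umbrella.
    O_umbrella = np.diag([0.9, 0.2])

    def forward_step(belief, O):
        """Turn P(X_{t-1} | e_{1:t-1}) into P(X_t | e_{1:t})."""
        unnormalized = O @ T.T @ belief   # predict with T, then weight by the sensor
        return unnormalized / unnormalized.sum()

    belief = np.array([0.5, 0.5])         # prior P(X0)
    belief = forward_step(belief, O_umbrella)
    print(belief)                         # ~[0.818, 0.182]: rain is now more likely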

  11. Speech Recognition • P(words|signal) ∝ P(signal|words)P(words) (Bayes’ rule; the normalizer P(signal) is the same for every word hypothesis) • P(words): the “language model” • “Every time I fire a linguist, the recognition rate goes up.”
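
  The “language model” factor P(words) is usually approximated from word co-occurrence statistics. A sketch assuming a hypothetical bigram model, P(w1…wn) ≈ ∏i P(wi|wi-1), with invented probabilities:

    # Hypothetical bigram language model; all numbers are invented.
    BIGRAM = {
        ("<s>", "i"): 0.1, ("i", "have"): 0.05, ("have", "a"): 0.2,
        ("a", "gun"): 0.001, ("a", "gub"): 1e-8,
    }

    def language_model(words):
        """P(w1..wn) ~ prod_i P(w_i | w_{i-1}), starting from the marker <s>."""
        p = 1.0
        for prev, w in zip(["<s>"] + words, words):
            p *= BIGRAM.get((prev, w), 1e-10)  # tiny floor for unseen bigrams
        return p

    print(language_model("i have a gun".split()))  # ~1e-06
    print(language_model("i have a gub".split()))  # ~1e-11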

  12. Model 1: Speech • Sample the speech signal • Decide the most likely sequence of speech symbols

  13. Phonetic alphabet • Phonemes: minimal units of sound that make a meaning difference (beat vs. bit; fit vs. bit). • Phones: the sounds as actually articulated; the p in paid and the p in tap are different phones of the same phoneme. • English has about 40 phones. • Co-articulation effects are modeled as new symbols, e.g. sweet = w(s,iy).

  14. Models 2, 3: Words, Sentences • Given the phones, what is the most likely word, and the most likely word within the sentence? • “Give me all your money. I have a gub.” • “Gub” is unlikely to be a word, • and even if it were, it would be less likely than “gun” (see the sketch below).
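
  Combining the two factors makes the point concrete: even if “gub” matched the phones slightly better acoustically, the language-model prior dominates. A toy decision rule with invented scores:

    # Pick the word maximizing P(signal | word) * P(word); numbers are made up.
    acoustic = {"gun": 0.30, "gub": 0.35}   # P(phones | word)
    prior    = {"gun": 1e-4, "gub": 1e-9}   # P(word): "gub" is barely a word

    best = max(acoustic, key=lambda w: acoustic[w] * prior[w])
    print(best)  # gun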

  15. Lots of tricky bits
