
Temporal Processes


Presentation Transcript


  1. Temporal Processes Eran Segal, Weizmann Institute

  2. Representing Time • Add the time dimension to variables: X → X(t) • Assumptions • Time can be discretized into interesting points t1, t2, ..., tn • Markov assumption holds • Distribution is stationary (time invariant or homogeneous)
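As a minimal sketch of these assumptions, the snippet below samples a two-state weather chain; the states and transition probabilities are made up for illustration. Stationarity shows up as a single transition matrix reused at every step, and the Markov assumption as the next state depending only on the current one.

```python
import numpy as np

# Hypothetical 2-state weather chain (0 = sunny, 1 = rainy).
# Stationarity: the same P(X(t+1) | X(t)) is reused at every step.
# Markov assumption: X(t+1) depends only on X(t), not earlier states.
T = np.array([[0.9, 0.1],   # P(next | current = sunny)
              [0.3, 0.7]])  # P(next | current = rainy)

rng = np.random.default_rng(0)
x = 0  # initial state X(0)
trajectory = [x]
for t in range(10):
    x = rng.choice(2, p=T[x])  # one shared CPD for every t
    trajectory.append(int(x))
print(trajectory)
```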

  3. Dynamic Bayesian Networks • Pair G0 and G such that • G0 is a Bayesian network over X(0) • G is a conditional Bayesian network for P(X(t+1) | X(t)) [Figure: prior network G0, transition network G, and the unrolled network over Weather, Velocity, Location, Failure, and Obs for time slices 0, 1, 2]
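The G0/G pairing can be made concrete with a small sampling sketch. The snippet below is a hypothetical 2-TBN over just two binary variables from the slide's example (Weather and Velocity); all probabilities and the sample_slice0/sample_next helpers are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# G0: prior Bayesian network over X(0).
def sample_slice0():
    w = rng.random() < 0.5                  # P(Weather(0)) — made up
    v = rng.random() < (0.7 if w else 0.3)  # P(Velocity(0) | Weather(0))
    return int(w), int(v)

# G: conditional Bayesian network for P(X(t+1) | X(t)).
def sample_next(w, v):
    w2 = rng.random() < (0.8 if w else 0.2)           # weather persists
    v2 = rng.random() < (0.9 if (w2 and v) else 0.4)  # P(V' | W', V)
    return int(w2), int(v2)

# Unrolling: sample slice 0 from G0, then apply the same G at each step.
state = sample_slice0()
print(0, state)
for t in range(1, 4):
    state = sample_next(*state)
    print(t, state)
```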

  4. Hidden Markov Model • Special case of Dynamic Bayesian network • Single (hidden) state variable • Single (observed) observation variable • Transition probability P(S’|S) assumed to be sparse • Usually encoded by a state transition graph [Figure: G0 and G with state S and observation O, and the unrolled network S0, S1, S2, S3 with observations O0–O3]
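A hypothetical instance of this structure: the snippet below samples a trajectory from an HMM with one hidden state variable and one observation variable per time slice, using a sparse transition matrix. The matrices A and B are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse P(S' | S): most entries are zero, as the slide assumes.
A = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.8, 0.2, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.5, 0.0, 0.0, 0.5]])
B = np.array([[0.9, 0.1],   # P(O | S): emission distribution per state
              [0.2, 0.8],
              [0.6, 0.4],
              [0.1, 0.9]])

s = 0  # hidden state S(0)
for t in range(5):
    o = rng.choice(2, p=B[s])  # emit observation from the current state
    print(f"t={t} state={s} obs={int(o)}")
    s = rng.choice(4, p=A[s])  # single state variable per slice
```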

  5. Hidden Markov Model • Special case of Dynamic Bayesian network • Single (hidden) state variable • Single (observed) observation variable • Transition probability P(S’|S) assumed to be sparse • Usually encoded by a state transition graph [Figure: state transition representation of P(S’|S) as a graph over states S1, S2, S3, S4]
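One way a state transition graph can exploit that sparsity, as a sketch: store for each state only the successors with nonzero probability. The dictionary below mirrors the S1–S4 states of the figure; the probabilities are invented.

```python
# Hypothetical sparse encoding of P(S' | S) as a state transition graph:
# each state lists only its reachable successors, so zero-probability
# transitions are never stored.
transitions = {
    "S1": {"S1": 0.9, "S2": 0.1},
    "S2": {"S2": 0.8, "S3": 0.2},
    "S3": {"S3": 0.7, "S4": 0.3},
    "S4": {"S1": 0.5, "S4": 0.5},
}

# Successors of S2 in the graph; any state absent here has probability 0.
print(transitions["S2"])  # {'S2': 0.8, 'S3': 0.2}
```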

  6. Inference • Forward-backward algorithm • Forward step: choose the last node as root and use BP • Backward step: send outward messages from the last node as root • Efficient for HMMs, inefficient for DBNs: in the unrolled network there is an active path between any pair of variables by time 2 [Figure: unrolled network over Weather, Velocity, Location, Failure, and Obs for time slices 0, 1, 2]
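A minimal sketch of the forward-backward algorithm for an HMM (the efficient case), assuming row-stochastic matrices and ignoring the numerical scaling needed for long sequences; all parameters in the toy example are made up.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Smoothed state marginals P(S(t) | O(0..T-1)) for an HMM.

    A: (K, K) transition matrix, B: (K, M) emission matrix,
    pi: (K,) initial distribution, obs: list of observation indices.
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))  # forward messages
    beta = np.ones((T, K))    # backward messages
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                 # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):        # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                  # combine both message passes
    return gamma / gamma.sum(axis=1, keepdims=True)

# Toy 2-state example with made-up parameters.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(forward_backward(A, B, pi, [0, 1, 1]))
```

The forward pass accumulates evidence up to each time step and the backward pass accumulates evidence after it; their product gives the smoothed per-slice marginals in one pass each, which is why this is efficient for a chain-structured HMM but does not carry over to a general unrolled DBN.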
