
Hidden Markov Models Problem 2


Presentation Transcript


  1. Hidden Markov Models Problem 2 By Amanda Videtich CS 5368: Intelligent Systems (Fall 2010)

  2. Hidden Markov Models (HMM) • Stochastic signal model. • States are hidden. • The model parameters are known; what is unknown is the sequence of states the model passes through. • Can still be modeled as a Bayesian network.

  3. Elements of HMM • N, the number of states. • States are denoted as $S = \{S_1, S_2, \ldots, S_N\}$, and the state at time t as $q_t$. • M, the number of distinct observation symbols per state. • Observation symbols are denoted as $V = \{v_1, v_2, \ldots, v_M\}$.

  4. Elements of HMM (cont.) • State transition probability distribution $A = \{a_{ij}\}$, • where A is a matrix and i represents the row and j represents the column. • $a_{ij} = P[q_{t+1} = S_j \mid q_t = S_i]$, $1 \le i, j \le N$. • Observation symbol probability distribution in state j, $B = \{b_j(k)\}$, • where $b_j(k) = P[v_k \text{ at } t \mid q_t = S_j]$, $1 \le j \le N$, $1 \le k \le M$.

  5. Elements of HMM (cont.) • Initial state distribution $\pi = \{\pi_i\}$, • where $\pi_i = P[q_1 = S_i]$, $1 \le i \le N$. • Given appropriate values for all of these, the HMM can be used as a generator to give an observation sequence $O = O_1 O_2 \cdots O_T$, • where each observation $O_t$ is one of the symbols from V and T is the number of observations in the sequence.
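
To make the parameter set concrete, here is a minimal sketch in Python (assuming NumPy; the numbers are arbitrary illustrations, not values from the slides) of a two-state, two-symbol model bundled into $\lambda = (A, B, \pi)$:

```python
import numpy as np

# A minimal container for the HMM parameter set lambda = (A, B, pi).
# All numeric values below are illustrative assumptions.

# N = 2 states, M = 2 observation symbols.
A = np.array([[0.7, 0.3],    # a_ij = P(q_{t+1} = S_j | q_t = S_i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # b_j(k) = P(O_t = v_k | q_t = S_j)
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])    # pi_i = P(q_1 = S_i)

model = (A, B, pi)           # the complete parameter set lambda

# Sanity checks: each probability distribution must sum to 1.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(pi.sum(), 1.0)
```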

  6. Problem 2 • Given the observation sequence $O = O_1 O_2 \cdots O_T$ and the model $\lambda = (A, B, \pi)$, how do we choose a corresponding state sequence $Q = q_1 q_2 \cdots q_T$ which is optimal in some meaningful sense? • $\lambda$ is the complete parameter set of the model.

  7. Solving for the individually most likely state • First we define the variable $\gamma_t(i) = P(q_t = S_i \mid O, \lambda)$, the probability of being in state $S_i$ at time t given the observation sequence and the model. • This can be expressed simply in terms of the forward-backward variables: $\gamma_t(i) = \dfrac{\alpha_t(i)\,\beta_t(i)}{P(O \mid \lambda)} = \dfrac{\alpha_t(i)\,\beta_t(i)}{\sum_{i=1}^{N} \alpha_t(i)\,\beta_t(i)}$ • We can solve for the individually most likely state at time t as $q_t = \operatorname*{argmax}_{1 \le i \le N}\,[\gamma_t(i)]$, $1 \le t \le T$.
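
As a concrete sketch of this computation, assuming the standard unscaled forward and backward recursions from the Rabiner tutorial cited on slide 20 (function and variable names here are illustrative):

```python
import numpy as np

def individually_most_likely_states(A, B, pi, obs):
    """Pick the individually most likely state at each time t via gamma.

    Unscaled forward-backward sketch, so suitable only for short
    sequences. `obs` is a sequence of observation-symbol indices.
    """
    N, T = A.shape[0], len(obs)

    # Forward variable: alpha[t, i] = P(O_1..O_t, q_t = S_i | lambda)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    # Backward variable: beta[t, i] = P(O_{t+1}..O_T | q_t = S_i, lambda)
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # gamma[t, i] = alpha[t, i] * beta[t, i] / P(O | lambda)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)

    return gamma.argmax(axis=1)   # q_t = argmax_i gamma_t(i)
```

Note that the individually most likely states need not form a coherent single path, since each argmax ignores the transition structure between consecutive choices; that gap is what the Viterbi algorithm on the next slides addresses.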

  8. Solving for sequences of states • The procedure for finding the single best state sequence is called the Viterbi algorithm. • $\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P[q_1 q_2 \cdots q_t = S_i,\, O_1 O_2 \cdots O_t \mid \lambda]$ is the highest probability along a single path, at time t, which ends in state $S_i$. • By induction we have $\delta_{t+1}(j) = \left[\max_i \delta_t(i)\, a_{ij}\right] b_j(O_{t+1})$.

  9. Viterbi Algorithm • Initialization: $\delta_1(i) = \pi_i\, b_i(O_1)$, $1 \le i \le N$, and $\psi_1(i) = 0$. $\psi_t(j)$ is an array that keeps track of the argument that maximized $\delta_t(j)$. • Recursion: $\delta_t(j) = \max_{1 \le i \le N}\,[\delta_{t-1}(i)\, a_{ij}]\, b_j(O_t)$, $\psi_t(j) = \operatorname*{argmax}_{1 \le i \le N}\,[\delta_{t-1}(i)\, a_{ij}]$, for $2 \le t \le T$, $1 \le j \le N$.

  10. Viterbi Algorithm (cont.) • Termination: $P^* = \max_{1 \le i \le N}\,[\delta_T(i)]$, $q_T^* = \operatorname*{argmax}_{1 \le i \le N}\,[\delta_T(i)]$. • Path (state sequence) backtracking: $q_t^* = \psi_{t+1}(q_{t+1}^*)$, $t = T-1, T-2, \ldots, 1$. • It should be noted that the Viterbi algorithm is similar in implementation to the forward calculation, except for the backtracking step. The major difference is the maximization over previous states, which is used in place of the summing procedure.
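
Putting slides 9 and 10 together, here is a sketch of the full algorithm in Python (assuming NumPy; `obs` holds observation-symbol indices, and the names are illustrative):

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Single best state sequence, following the steps on slides 9-10.

    Returns (best path as state indices, P* = probability of that path).
    """
    N, T = A.shape[0], len(obs)
    delta = np.zeros((T, N))           # delta[t, j]: best path prob ending in S_j
    psi = np.zeros((T, N), dtype=int)  # psi[t, j]: argmax backpointer

    # Initialization: delta_1(i) = pi_i * b_i(O_1), psi_1(i) = 0
    delta[0] = pi * B[:, obs[0]]

    # Recursion: delta_t(j) = max_i [delta_{t-1}(i) a_ij] * b_j(O_t)
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A   # scores[i, j] = delta_{t-1}(i) a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]

    # Termination and path backtracking
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()            # q_T* = argmax_i delta_T(i)
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]    # q_t* = psi_{t+1}(q_{t+1}*)

    return path, delta[-1].max()             # (state sequence, P*)
```

Replacing the max and argmax with a single sum over previous states turns the recursion back into the forward calculation, which is exactly the similarity the slide points out.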

  11. Example • Our example asks whether a professor of a course is sick or healthy, based on whether he/she is present for class. • The states, S, in this example are Healthy and Sick. • The possible observations, V, are Present and Absent.

  12. Example (cont.) • Given the state transition distribution A: [2 × 2 table, rows = yesterday's state (Healthy, Sick), columns = today's state (Healthy, Sick); numeric entries not preserved] • And the observation symbol probability distribution B: [rows = hidden state (Healthy, Sick), columns = observation (Present, Absent); numeric entries not preserved]

  13. Example (cont.) • Our initial state, $q_1$, is Healthy. • Therefore, given an observation sequence in which the teacher is present the second day and absent the third, what are the likely states (whether the teacher was healthy or sick) that caused the observations? • So our observation sequence is $O = O_1 O_2 O_3$, where $O_1 =$ Present, $O_2 =$ Present, $O_3 =$ Absent.

  14. Example (cont.) • We know that at t = 1, where t indexes the day, the professor was present and healthy. So first we initialize: $\delta_1(i) = \pi_i\, b_i(O_1)$, where i (today's state) is equal to Healthy.

  15. Example (cont.) • Now we enter the recursive part: • First we select the max value of $\delta_2(j) = \max_i[\delta_1(i)\, a_{ij}]\, b_j(O_2)$ over the possible states j (today): either j = Healthy today or j = Sick today. The max is the first one (Healthy today).

  16. Example (cont.) • We have selected j to be the Healthy row, so we can now finish plugging into the equations: $\delta_2(\text{Healthy}) = \max_i[\delta_1(i)\, a_{i,\text{Healthy}}]\, b_{\text{Healthy}}(O_2)$ and $\psi_2(\text{Healthy}) = \operatorname*{argmax}_i[\delta_1(i)\, a_{i,\text{Healthy}}]$.

  17. Example (cont.) • So for the next step (t = 3) we evaluate $\delta_3(\text{Healthy})$. • However, if we instead select Sick and evaluate $\delta_3(\text{Sick})$: this probability is higher.

  18. Example (cont.) • Then we terminate: $P^* = \max_i[\delta_3(i)]$ and $q_3^* = \operatorname*{argmax}_i[\delta_3(i)]$. • Backtracking through $\psi$ returns the sequence of states that likely produced the output. • If we had done more steps, the backtracking would have returned a longer list of states.
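
Since the numeric tables from slides 12 and 14 through 17 were not preserved, here is an illustrative run of this example using the `viterbi()` sketch after slide 10 above. Only the initial state is taken from the slides (slide 13 fixes $q_1$ = Healthy); every other probability is an assumed value, chosen so the run matches the slides' narrative: Healthy wins while the professor is present, Sick wins once he/she is absent.

```python
# Illustrative run of the Healthy/Sick example. The probabilities below
# are assumptions standing in for the slides' unpreserved tables.
import numpy as np

states = ["Healthy", "Sick"]
symbols = {"Present": 0, "Absent": 1}

A = np.array([[0.8, 0.2],    # rows: yesterday Healthy/Sick
              [0.3, 0.7]])   # cols: today Healthy/Sick
B = np.array([[0.9, 0.1],    # rows: Healthy/Sick
              [0.3, 0.7]])   # cols: Present/Absent
pi = np.array([1.0, 0.0])    # slide 13: the initial state is Healthy

obs = [symbols[o] for o in ("Present", "Present", "Absent")]
path, p_star = viterbi(A, B, pi, obs)   # viterbi() from the earlier sketch
print([states[q] for q in path], p_star)
# With these assumed numbers: ['Healthy', 'Healthy', 'Sick'] 0.09072
```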

  19. Any questions?

  20. References • Lawrence R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, vol. 77, no. 2, 1989. http://www.cs.ttu.edu/~smohan/Teaching/ClassPapers/General/rabiner_HMM_IEEE89.pdf
