
Hidden Markov Models

Learn about hidden Markov models: their definition, the three basic problems posed for them, and the solutions to these problems. Explore implementation issues and extensions of HMMs.


Presentation Transcript


  1. Hidden Markov Models

  2. Definition
  A doubly embedded stochastic process with an underlying stochastic process that is not observable (it is hidden), but can only be observed through another set of stochastic processes that produce the sequence of observations

  3. Interpretation
  A state machine where at each state one randomly chooses both the symbol to produce and the next state to move to

  4. Elements
  • $N$ - the number of states; the set of states is denoted by $S = \{S_1, S_2, \dots, S_N\}$, and the state at time $t$ by $q_t$
  • $M$ - the number of distinct observation symbols, i.e. the discrete alphabet size; the set of symbols is denoted by $V = \{v_1, v_2, \dots, v_M\}$
  • The state transition probability distribution $A = \{a_{ij}\}$, where $a_{ij} = P(q_{t+1} = S_j \mid q_t = S_i),\ 1 \le i, j \le N$

  5. Elements - continue
  • The observation symbol probability distribution in state $j$, $B = \{b_j(k)\}$, where $b_j(k) = P(O_t = v_k \mid q_t = S_j),\ 1 \le j \le N,\ 1 \le k \le M$
  • The initial state distribution $\pi = \{\pi_i\}$, where $\pi_i = P(q_1 = S_i),\ 1 \le i \le N$
  • The final state distribution $\eta = \{\eta_i\}$, where $\eta_i = P(q_T = S_i),\ 1 \le i \le N$
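
To make these elements concrete, here is a minimal sketch of how the parameters can be represented, assuming NumPy and an invented two-state, three-symbol model (all numbers are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical model with N = 2 states and M = 3 symbols (numbers invented).
pi = np.array([0.6, 0.4])        # pi[i]   = P(q_1 = S_i)
A  = np.array([[0.7, 0.3],       # A[i, j] = a_ij = P(q_{t+1} = S_j | q_t = S_i)
               [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1],  # B[j, k] = b_j(k) = P(O_t = v_k | q_t = S_j)
               [0.1, 0.3, 0.6]])

# Each distribution is stochastic: it sums to 1 over its last axis.
assert np.isclose(pi.sum(), 1.0)
assert np.allclose(A.sum(axis=1), 1.0) and np.allclose(B.sum(axis=1), 1.0)
```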

  6. In this case a discrete, first-order HMM was defined. For compactness, the complete parameter set of the model is denoted by $\lambda = (A, B, \pi)$

  7. Observation Sequence Generation
  • Choose an initial state $q_1 = S_i$ according to the initial state distribution $\pi$
  • Set $t = 1$
  • Choose $O_t = v_k$ according to the symbol probability distribution in state $S_i$, i.e. $b_i(k)$
  • Transit to a new state $q_{t+1} = S_j$ according to the state transition probability distribution for state $S_i$, i.e. $a_{ij}$
  • Set $t = t + 1$ and repeat from the third step until $t = T$
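
A minimal sketch of this generation procedure, reusing the `pi`, `A`, `B` array layout from the snippet above (the function and its name are illustrative):

```python
import numpy as np

def generate(pi, A, B, T, seed=None):
    """Sample a state sequence and an observation sequence of length T."""
    rng = np.random.default_rng(seed)
    N, M = B.shape
    q = rng.choice(N, p=pi)                # initial state drawn from pi
    states, obs = [], []
    for _ in range(T):
        states.append(q)
        obs.append(rng.choice(M, p=B[q]))  # emit O_t according to b_q(.)
        q = rng.choice(N, p=A[q])          # transit according to a_q(.)
    return states, obs
```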

  8. The 3 basic problems for HMMs
  • Problem 1 (evaluation): Given an observation sequence $O = O_1 O_2 \cdots O_T$ and a model $\lambda = (A, B, \pi)$, what is $P(O \mid \lambda)$, the probability of the observation sequence given the model?

  9. The 3 basic problems for HMMs
  • Problem 2 (recognition): Given an observation sequence $O = O_1 O_2 \cdots O_T$ and a model $\lambda$, what is the corresponding state sequence $Q = q_1 q_2 \cdots q_T$ which best “explains” the observation sequence?

  10. The 3 basic problems for HMMs
  • Problem 3 (training): How do we adjust the model parameters $\lambda = (A, B, \pi)$ to maximize $P(O \mid \lambda)$?

  11. Solution to Problem 1 Analysis
  The probability of the observation sequence $O$ for a state sequence $Q$ is $P(O \mid Q, \lambda) = \prod_{t=1}^{T} P(O_t \mid q_t, \lambda)$; assuming statistical independence of observations, we get $P(O \mid Q, \lambda) = b_{q_1}(O_1) \, b_{q_2}(O_2) \cdots b_{q_T}(O_T)$

  12. Solution to Problem 1 Analysis - continue
  The probability of such a state sequence $Q$ is $P(Q \mid \lambda) = \pi_{q_1} a_{q_1 q_2} a_{q_2 q_3} \cdots a_{q_{T-1} q_T}$, and the joint probability of $O$ and $Q$ is $P(O, Q \mid \lambda) = P(O \mid Q, \lambda) \, P(Q \mid \lambda)$; therefore the probability of $O$ is a sum over all possible state sequences $Q$: $P(O \mid \lambda) = \sum_{\text{all } Q} \pi_{q_1} b_{q_1}(O_1) \, a_{q_1 q_2} b_{q_2}(O_2) \cdots a_{q_{T-1} q_T} b_{q_T}(O_T)$
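
Taken literally, this sum can be computed by enumerating all $N^T$ state sequences. A sketch of that brute-force evaluation (illustrative only; it is feasible just for tiny $T$, which is exactly why the forward procedure below matters):

```python
from itertools import product

def prob_obs_brute_force(pi, A, B, obs):
    """P(O | lambda) as the sum of P(O, Q | lambda) over all state sequences Q."""
    N, T = len(pi), len(obs)
    total = 0.0
    for Q in product(range(N), repeat=T):
        p = pi[Q[0]] * B[Q[0], obs[0]]              # pi_{q1} b_{q1}(O_1)
        for t in range(1, T):
            p *= A[Q[t-1], Q[t]] * B[Q[t], obs[t]]  # a_{q_{t-1} q_t} b_{q_t}(O_t)
        total += p
    return total
```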

  13. Solution to Problem 1 Definitions
  The forward variable $\alpha_t(i)$, defined as $\alpha_t(i) = P(O_1 O_2 \cdots O_t,\ q_t = S_i \mid \lambda)$, is the probability of the partial observation sequence (until time $t$) and state $S_i$ at time $t$, given the model $\lambda$

  14. Solution to Problem 1 The Forward-Backward Procedure
  The forward variable $\alpha_t(i)$ is solved inductively as follows:
  • Initialization: $\alpha_1(i) = \pi_i \, b_i(O_1),\ 1 \le i \le N$
  • Induction: $\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) \, a_{ij} \right] b_j(O_{t+1}),\ 1 \le t \le T-1,\ 1 \le j \le N$
  • Termination: $P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$
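
A sketch of the forward procedure with the same array conventions (no scaling is applied, so very long sequences will underflow in floating point):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward procedure: returns the T x N table alpha and P(O | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                          # initialization
    for t in range(T - 1):                                # induction
        alpha[t + 1] = (alpha[t] @ A) * B[:, obs[t + 1]]
    return alpha, alpha[-1].sum()                         # termination
```

Each induction step costs $O(N^2)$ operations, so the whole computation is $O(N^2 T)$ rather than the $O(T N^T)$ of the brute-force sum.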

  15. Solution to Problem 1 Illustration of the Forward Procedure

  16. Solution to Problem 1 Lattice Illustration
  (Figure: computation lattice with states $1, 2, \dots, N$ on the vertical axis and observation times $t = 1, 2, 3, \dots, T$ on the horizontal axis)

  17. Solution to Problem 2 Definitions
  The quantity $\delta_t(i)$, defined as $\delta_t(i) = \max_{q_1, q_2, \dots, q_{t-1}} P(q_1 q_2 \cdots q_{t-1},\ q_t = S_i,\ O_1 O_2 \cdots O_t \mid \lambda)$, is the best score (highest probability) along a single path, at time $t$, which accounts for the first $t$ observations and ends in state $S_i$

  18. Solution to Problem 2 The Viterbi Algorithm
  The best state sequence is found as follows:
  • Initialization: $\delta_1(i) = \pi_i \, b_i(O_1),\ \psi_1(i) = 0,\ 1 \le i \le N$
  • Recursion: $\delta_t(j) = \max_{1 \le i \le N} \left[ \delta_{t-1}(i) \, a_{ij} \right] b_j(O_t),\ \psi_t(j) = \arg\max_{1 \le i \le N} \left[ \delta_{t-1}(i) \, a_{ij} \right],\ 2 \le t \le T,\ 1 \le j \le N$

  19. Solution to Problem 2 The Viterbi Algorithm - continue
  • Termination: $P^* = \max_{1 \le i \le N} \delta_T(i),\ q_T^* = \arg\max_{1 \le i \le N} \delta_T(i)$
  • Path (state sequence) backtracking: $q_t^* = \psi_{t+1}(q_{t+1}^*),\ t = T-1, T-2, \dots, 1$
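
A sketch of the Viterbi algorithm under the same conventions (probabilities are multiplied directly; a production version would work with logarithms to avoid underflow):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Best state sequence q* and its score P* for the observations obs."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]                  # initialization
    for t in range(1, T):                         # recursion
        scores = delta[t - 1][:, None] * A        # scores[i, j] = delta_{t-1}(i) a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    q_star = np.zeros(T, dtype=int)               # termination
    q_star[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                # backtracking
        q_star[t] = psi[t + 1, q_star[t + 1]]
    return q_star, delta[-1].max()
```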

  20. Solution to Problem 2 Lattice Illustration
  (Figure: computation lattice with states $1, 2, \dots, N$ on the vertical axis and observation times $t = 1, 2, 3, \dots, T$ on the horizontal axis)

  21. Solution to Problem 3 Definitions
  The backward variable $\beta_t(i)$, defined as $\beta_t(i) = P(O_{t+1} O_{t+2} \cdots O_T \mid q_t = S_i, \lambda)$, is the probability of the partial observation sequence (from time $t+1$ to the end), given state $S_i$ at time $t$ and the model $\lambda$

  22. Solution to Problem 3 Definitions - continue
  Let $\gamma_t(i)$ be defined as $\gamma_t(i) = P(q_t = S_i \mid O, \lambda)$, i.e. the probability of being in state $S_i$ at time $t$, given the observation sequence $O$ and the model $\lambda$

  23. Solution to Problem 3 Definitions - continue
  Let $\xi_t(i, j)$ be defined as $\xi_t(i, j) = P(q_t = S_i,\ q_{t+1} = S_j \mid O, \lambda)$, i.e. the probability of being in state $S_i$ at time $t$ and in state $S_j$ at time $t+1$, given the observation sequence $O$ and the model $\lambda$

  24. Solution to Problem 3 Variable Illustration

  25. Solution to Problem 3 Analysis
  In terms of the forward-backward variables: $\gamma_t(i) = \dfrac{\alpha_t(i) \, \beta_t(i)}{P(O \mid \lambda)} = \dfrac{\alpha_t(i) \, \beta_t(i)}{\sum_{i=1}^{N} \alpha_t(i) \, \beta_t(i)}$

  26. Solution to Problem 3 Analysis - continue
  In terms of the forward-backward variables: $\xi_t(i, j) = \dfrac{\alpha_t(i) \, a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j)}{P(O \mid \lambda)}$

  27. Solution to Problem 3 Interpretations
  Summing $\gamma_t(i)$ over $t$ gives the expected number of times state $S_i$ is visited (or, summing only up to $t = T-1$, the expected number of transitions made from $S_i$); summing $\xi_t(i, j)$ over $t$ gives the expected number of transitions from $S_i$ to $S_j$

  28. Solution to Problem 3 The Forward-Backward Procedure
  The backward variable $\beta_t(i)$ is solved inductively as follows:
  • Initialization: $\beta_T(i) = 1,\ 1 \le i \le N$
  • Induction: $\beta_t(i) = \sum_{j=1}^{N} a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j),\ t = T-1, T-2, \dots, 1,\ 1 \le i \le N$
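
A sketch of the backward induction, mirroring the `forward` function above:

```python
import numpy as np

def backward(pi, A, B, obs):
    """Backward procedure: returns the T x N table beta."""
    T, N = len(obs), len(pi)
    beta = np.ones((T, N))                # initialization: beta_T(i) = 1
    for t in range(T - 2, -1, -1):        # induction, t = T-1, ..., 1
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta
```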

  29. Solution to Problem 3 Illustration of the Backward Procedure

  30. Solution to Problem 3 The Baum-Welch Algorithm
  A set of reasonable reestimation formulas for $\pi$, $A$ and $B$ is:
  • $\bar{\pi}_i = \gamma_1(i)$ (expected frequency of state $S_i$ at time $t = 1$)
  • $\bar{a}_{ij} = \dfrac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$ (expected number of transitions from $S_i$ to $S_j$, divided by the expected number of transitions from $S_i$)
  • $\bar{b}_j(k) = \dfrac{\sum_{t=1,\ O_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$ (expected number of times in $S_j$ observing $v_k$, divided by the expected number of times in $S_j$)
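
A sketch of one reestimation iteration, combining the `forward` and `backward` functions above and the $\gamma$ and $\xi$ formulas from slides 25-26 (single observation sequence, no scaling; names are illustrative):

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One Baum-Welch reestimation step: returns (new_pi, new_A, new_B)."""
    obs = np.asarray(obs)
    alpha, p_obs = forward(pi, A, B, obs)    # from the earlier sketches
    beta = backward(pi, A, B, obs)
    gamma = alpha * beta / p_obs             # gamma[t, i] = gamma_t(i)
    # xi[t, i, j] = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / P(O | lambda)
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / p_obs
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.stack([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])],
                     axis=1) / gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```

Iterating this step either leaves the likelihood unchanged or increases it, which is exactly the property summarized on the next slide.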

  31. Solution to Problem 3 Summary
  Let $\lambda = (A, B, \pi)$ be the initial model and let $\bar{\lambda} = (\bar{A}, \bar{B}, \bar{\pi})$ be the reestimated model; then it has been proven that either:
  • The initial model $\lambda$ defines a critical point of the likelihood function, in which case $\bar{\lambda} = \lambda$, or
  • Model $\bar{\lambda}$ is more likely than model $\lambda$, i.e. $P(O \mid \bar{\lambda}) > P(O \mid \lambda)$

  32. Extensions
  • Continuous observation densities
  • Autoregressive observation sequences
  • Explicit state duration densities

  33. Implementation issues
  • Multiple Observation Sequences
  • Insufficient training data
