
Hidden Markov Models I


Presentation Transcript


  1. Hidden Markov Models I Biology 162 Computational Genetics Todd Vision 14 Sep 2004

  2. Hidden Markov Models I • Markov chains • Hidden Markov models • Transition and emission probabilities • Decoding algorithms • Viterbi • Forward • Forward and backward • Parameter estimation • Baum-Welch algorithm

  3. Markov Chain • A particular class of Markov process • Finite set of states • Probability of being in state i at time t+1 depends only on the state at time t (the Markov property) • Can be described by • Transition probability matrix • Initial probability distribution a0

  4. Markov Chain

  5. Markov chain [state diagram: three states 1, 2, 3 connected by transition probabilities a11, a12, a13, a21, a22, a23, a31, a32, a33]

  6. Transition probability matrix • Square matrix with dimensions equal to the number of states • Describes the probability of going from state i to state j in the next step • Sum of each row must equal 1
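
As a minimal illustration, a hypothetical 3-state transition matrix in Python/numpy; the state labels and values are made up for the example, and each row is checked to sum to 1:

    import numpy as np

    # Hypothetical 3-state chain: A[i, j] = P(state j at time t+1 | state i at time t)
    A = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.25, 0.25, 0.50]])

    # Each row is a probability distribution over the next state, so rows sum to 1
    assert np.allclose(A.sum(axis=1), 1.0)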

  7. Multistep transitions • The probability of a 2-step transition from state i to state j is the sum, over all intermediate states k, of the products of the two 1-step transition probabilities, Σk aik akj, i.e. the (i, j) entry of the matrix product A·A • And so on for n steps: the n-step transition probabilities are the entries of the matrix power A^n (see the sketch below)
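
A short sketch of this, reusing a small made-up 3-state matrix: the 2-step probabilities are the entries of A·A, and the n-step probabilities are the entries of the n-th matrix power:

    import numpy as np

    A = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.25, 0.25, 0.50]])

    # 2-step transition i -> j: sum over intermediate states k of A[i, k] * A[k, j],
    # which is exactly the matrix product A @ A
    A2 = A @ A

    # n-step transitions are the entries of the n-th matrix power
    A5 = np.linalg.matrix_power(A, 5)

    print(A2[0, 2])   # P(state 3 at t+2 | state 1 at t)
    print(A5[0, 2])   # P(state 3 at t+5 | state 1 at t)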

  8. Stationary distribution • A vector of state frequencies π satisfying πA = π, to which the chain converges provided it • Is irreducible: each state can eventually be reached from every other • Is aperiodic: the chain does not cycle among states with a fixed period
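
One simple way to see the stationary distribution numerically (a sketch with the same made-up matrix): repeatedly apply the chain to an arbitrary starting distribution and check that the result no longer changes:

    import numpy as np

    A = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.25, 0.25, 0.50]])

    pi = np.array([1.0, 0.0, 0.0])    # arbitrary starting distribution
    for _ in range(1000):
        pi = pi @ A                   # advance the chain one step

    # At convergence the frequencies satisfy pi @ A == pi (the stationary distribution)
    print(pi)
    print(np.allclose(pi @ A, pi))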

  9. Reducibility

  10. Periodicity

  11. Applications • Substitution models • PAM • DNA and codon substitution models • Phylogenetics and molecular evolution • Hidden Markov models

  12. Hidden Markov models: applications • Alignment and homology search • Gene finding • Physical mapping • Genetic linkage mapping • Protein secondary structure prediction

  13. Hidden Markov models • Observed sequence of symbols • Hidden sequence of underlying states • Transition probabilities still govern transitions among states • Emission probabilities govern the likelihood of observing a symbol in a particular state
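
For concreteness, a minimal sketch of how the two parameter sets can be written down in Python for the fair/loaded coin example introduced on the next slides (transition values taken from the worked example on slide 22):

    # States: F = fair coin, L = loaded coin
    states = ["F", "L"]

    # Transition probabilities a[k][l] = P(next state is l | current state is k)
    # (values as used later in the worked example on slide 22)
    a = {"F": {"F": 0.7, "L": 0.3},
         "L": {"F": 0.3, "L": 0.7}}

    # Emission probabilities e[k][b] = P(symbol b is observed | hidden state is k)
    e = {"F": {"H": 0.5, "T": 0.5},
         "L": {"H": 0.9, "T": 0.1}}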

  14. Hidden Markov models

  15. A coin flip HMM • Two coins • Fair: 50% Heads, 50% Tails • Loaded: 90% Heads, 10% Tails What is the probability for each of these sequences assuming one coin or the other? A: HHTHTHTTHT B: HHHHHTHHHH
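
A short sketch of the calculation: if a single coin is assumed, the flips are independent and the probability of a sequence is just the product of the per-flip probabilities. Sequence A (5 heads, 5 tails) is more probable under the fair coin; sequence B (9 heads, 1 tail) is more probable under the loaded coin:

    sequences = {"A": "HHTHTHTTHT", "B": "HHHHHTHHHH"}
    coins = {"fair": {"H": 0.5, "T": 0.5}, "loaded": {"H": 0.9, "T": 0.1}}

    for name, seq in sequences.items():
        for coin, emit in coins.items():
            p = 1.0
            for flip in seq:
                p *= emit[flip]        # independent flips: multiply the per-flip probabilities
            print(name, coin, p)

    # A: fair 0.5**10 ≈ 9.8e-4 vs loaded 0.9**5 * 0.1**5 ≈ 5.9e-6
    # B: fair 0.5**10 ≈ 9.8e-4 vs loaded 0.9**9 * 0.1  ≈ 3.9e-2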

  16. A coin flip HMM • Now imagine the coin is switched with some probability
      Symbol: HTTHHTHHHTHHHHHTHHTHTTHTTHTTH
      State:  FFFFFFFLLLLLLLLFFFFFFFFFFFFFL
      Symbol: HHHHTHHHTHTTHTTHHTTHHTHHTHHHHHHHTTHTT
      State:  LLLLLLLLFFFFFFFFFFFFFFLLLLLLLLLLFFFFF

  17. The formal model [diagram: states F and L with transition probabilities aFF, aFL, aLF, aLL; emission probabilities: F emits H 0.5, T 0.5; L emits H 0.9, T 0.1], where aFF, aLL > aFL, aLF

  18. Probability of a state path
      Symbol: T H H H
      State:  F F L L
      Symbol: T H H H
      State:  L L F F
      Generally, for a symbol sequence x1..xL and state path π1..πL, the joint probability is P(x, π) = a0π1 · ∏i eπi(xi) · aπiπi+1
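
A minimal sketch of this product for the two paths above, plugging in the transition and emission values used in the worked example on slide 22 (aFF = aLL = 0.7, aFL = aLF = 0.3, initial probabilities 0.5 each):

    a0 = {"F": 0.5, "L": 0.5}                          # initial state probabilities
    a = {"F": {"F": 0.7, "L": 0.3}, "L": {"F": 0.3, "L": 0.7}}
    e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

    def path_probability(symbols, path):
        # P(x, pi) = a0[pi_1] * product over i of e[pi_i](x_i) * a[pi_i][pi_i+1]
        p = a0[path[0]]
        for i, (x, k) in enumerate(zip(symbols, path)):
            p *= e[k][x]                               # emit symbol x from state k
            if i + 1 < len(path):
                p *= a[k][path[i + 1]]                 # move to the next state
        return p

    print(path_probability("THHH", "FFLL"))            # ≈ 0.0149
    print(path_probability("THHH", "LLFF"))            # ≈ 0.0017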

  19. HMMs as sequence generators • An HMM can generate an infinite number of sequences • There is a probability associated with each one • This is unlike regular expressions • For a given sequence • We might want to ask how often that sequence would be generated by a given HMM • The problem is that many different state paths can generate the same sequence • Forward algorithm • Gives us the summed probability over all state paths

  20. Decoding • How do we infer the “best” state path? • We can observe the sequence of symbols • Assume we also know • Transition probabilities • Emission probabilities • Initial state probabilities • Two ways to answer that question • Viterbi algorithm - finds the single most likely state path • Forward-backward algorithm - finds the probability of each state at each position • These may give different answers

  21. Viterbi algorithm

  22. Viterbi with coin example • Let aFF = aLL = 0.7, aFL = aLF = 0.3, a0 = (0.5, 0.5)

               T        H        H        H
      B   1    0        0        0        0
      F   0    0.25*    0.03125  0.0182   0.0115
      L   0    0.05     0.0675*  0.0425*  0.0268*

      • p* = F L L L (asterisks mark the most probable state at each position) • Better to use log probabilities!
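
A minimal Python sketch of the Viterbi recursion and traceback for this example; it keeps raw probabilities for readability (log probabilities are the safer choice for long sequences, as noted above) and recovers the path p* = F L L L:

    states = ["F", "L"]
    a0 = {"F": 0.5, "L": 0.5}
    a = {"F": {"F": 0.7, "L": 0.3}, "L": {"F": 0.3, "L": 0.7}}
    e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

    def viterbi(symbols):
        # v[i][k]: probability of the best path that emits symbols[:i+1] and ends in state k
        v = [{k: a0[k] * e[k][symbols[0]] for k in states}]
        back = []                                      # back-pointers for the traceback
        for x in symbols[1:]:
            row, ptr = {}, {}
            for l in states:
                best = max(states, key=lambda k: v[-1][k] * a[k][l])
                row[l] = v[-1][best] * a[best][l] * e[l][x]
                ptr[l] = best
            v.append(row)
            back.append(ptr)
        # Trace back from the most probable final state
        state = max(states, key=lambda k: v[-1][k])
        path = [state]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return "".join(reversed(path)), v[-1][state]

    print(viterbi("THHH"))                             # -> ('FLLL', ~0.0268)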

  23. Forward algorithm • Gives us the sum of the probabilities of all paths through the model • Recursion similar to Viterbi but with a twist • Rather than taking the maximum over states k at the previous position, we sum over all possible states k (see the sketch below)
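
A minimal sketch of the forward recursion for the same coin model, fl(i) = el(xi) · Σk fk(i-1) akl; the value returned at the end is the total probability of the observed sequence, summed over all state paths:

    states = ["F", "L"]
    a0 = {"F": 0.5, "L": 0.5}
    a = {"F": {"F": 0.7, "L": 0.3}, "L": {"F": 0.3, "L": 0.7}}
    e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

    def forward(symbols):
        # f[k]: probability of the symbols seen so far, ending in state k (summed over paths)
        f = {k: a0[k] * e[k][symbols[0]] for k in states}
        for x in symbols[1:]:
            f = {l: e[l][x] * sum(f[k] * a[k][l] for k in states) for l in states}
        return sum(f.values())                         # P(x), summed over all state paths

    print(forward("THHH"))                             # ≈ 0.098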

  24. Forward with coin example • Let aFF = aLL = 0.7, aFL = aLF = 0.3, a0 = (0.5, 0.5) • eL(H) = 0.9

               T       H       H    H
      B   1    0       0       0    0
      F   0    0.25    0.101   ?    ?
      L   0    0.05    0.353   ?    ?

  25. Forward-Backward algorithm

  26. Posterior decoding • We can use the forward-backward algorithm to define a state sequence (the most probable state at each individual position), much as Viterbi defines a single best path • Or we can use it to look at 'composite states' • Example: a gene prediction HMM • The model contains states for UTRs, exons, introns, etc. versus noncoding sequence • A composite state for a gene would consist of all of the above except the noncoding sequence • We can calculate the probability of finding a gene, independent of the specific match states, by summing posterior probabilities over the composite state (see the sketch below)
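
A minimal sketch of posterior decoding for the coin model: the forward and backward values are combined into P(state k at position i | sequence), and the probability of a composite state is simply the sum of the posteriors over its member states (here, trivially, the probability that each flip came from the loaded coin):

    states = ["F", "L"]
    a0 = {"F": 0.5, "L": 0.5}
    a = {"F": {"F": 0.7, "L": 0.3}, "L": {"F": 0.3, "L": 0.7}}
    e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

    def posteriors(x):
        n = len(x)
        # Forward: f[i][k] = P(x_1..x_i, state_i = k)
        f = [{k: a0[k] * e[k][x[0]] for k in states}]
        for s in x[1:]:
            f.append({l: e[l][s] * sum(f[-1][k] * a[k][l] for k in states) for l in states})
        # Backward: b[i][k] = P(x_{i+1}..x_n | state_i = k)
        b = [dict.fromkeys(states, 1.0) for _ in range(n)]
        for i in range(n - 2, -1, -1):
            b[i] = {k: sum(a[k][l] * e[l][x[i + 1]] * b[i + 1][l] for l in states)
                    for k in states}
        px = sum(f[-1][k] for k in states)             # total probability of the sequence
        # Posterior P(state_i = k | x) = f_k(i) * b_k(i) / P(x)
        return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(n)]

    # Per-position probability that the flip came from the loaded coin;
    # a composite state would sum the posteriors of several states at each position.
    print([round(p["L"], 3) for p in posteriors("THHH")])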

  27. Parameter estimation • Design of model (specific to application) • What states are there? • How are they connected? • Assigning values to • Transition probabilities • Emission probabilities

  28. Model training • Assume the states and connectivity are given • We use a training set from which the model learns the parameters θ • An example of machine learning • The likelihood is the probability of the training data given the model parameters • Calculate the likelihood assuming the j = 1..n sequences in the training set are independent: P(x1, ..., xn | θ) = ∏j P(xj | θ)

  29. When state sequence is known • Maximum likelihood estimators: akl = Akl / Σl′ Akl′ and ek(b) = Ek(b) / Σb′ Ek(b′), where Akl and Ek(b) are the observed counts of each transition and emission in the training data • Adjusted with pseudocounts to avoid zero probabilities from a limited training set (see the sketch below)
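
A minimal sketch of these estimators, using the labelled symbol/state pair from slide 16 as a toy training set and a pseudocount of 1:

    states, alphabet = "FL", "HT"

    # Labelled training example from slide 16 (observed flips and known hidden states)
    x  = "HTTHHTHHHTHHHHHTHHTHTTHTTHTTH"
    pi = "FFFFFFFLLLLLLLLFFFFFFFFFFFFFL"

    pseudocount = 1.0                                  # avoids zero probabilities
    A = {k: {l: pseudocount for l in states} for k in states}    # transition counts A_kl
    E = {k: {b: pseudocount for b in alphabet} for k in states}  # emission counts E_k(b)

    for i, (b, k) in enumerate(zip(x, pi)):
        E[k][b] += 1
        if i + 1 < len(pi):
            A[k][pi[i + 1]] += 1

    # Maximum likelihood (plus pseudocount) estimates: normalize each row of counts
    a = {k: {l: A[k][l] / sum(A[k].values()) for l in states} for k in states}
    e = {k: {b: E[k][b] / sum(E[k].values()) for b in alphabet} for k in states}
    print(a)
    print(e)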

  30. When state sequence is unknown • Baum-Welch algorithm • An example of the general class of EM (Expectation-Maximization) algorithms • Initialize with a guess at akl and ek(b) • Iterate until convergence • Calculate likely paths with the current parameters • Recalculate the parameters from those likely paths • The expected counts Akl and Ek(b) are calculated by posterior decoding (i.e. the forward-backward algorithm) at each iteration, as in the sketch below • Can get stuck on local optima
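
A compact sketch of one way the Baum-Welch loop can be written for the two-state coin model, assuming a single training sequence (the symbol string from slide 16) and a fixed initial distribution; each iteration computes the expected counts Akl and Ek(b) from the forward and backward values and then re-normalizes them into new parameters:

    states, alphabet = "FL", "HT"
    a0 = {"F": 0.5, "L": 0.5}                          # kept fixed in this sketch

    # Unlabelled training sequence and initial parameter guesses
    x = "HTTHHTHHHTHHHHHTHHTHTTHTTHTTH"
    a = {"F": {"F": 0.6, "L": 0.4}, "L": {"F": 0.4, "L": 0.6}}
    e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.7, "T": 0.3}}

    def forward(x, a, e):
        f = [{k: a0[k] * e[k][x[0]] for k in states}]
        for s in x[1:]:
            f.append({l: e[l][s] * sum(f[-1][k] * a[k][l] for k in states) for l in states})
        return f

    def backward(x, a, e):
        b = [dict.fromkeys(states, 1.0) for _ in x]
        for i in range(len(x) - 2, -1, -1):
            b[i] = {k: sum(a[k][l] * e[l][x[i + 1]] * b[i + 1][l] for l in states)
                    for k in states}
        return b

    for _ in range(100):                               # iterate until (approximate) convergence
        f, b = forward(x, a, e), backward(x, a, e)
        px = sum(f[-1][k] for k in states)             # likelihood under the current parameters
        # E step: expected transition counts A_kl and emission counts E_k(b)
        A = {k: {l: 0.0 for l in states} for k in states}
        E = {k: {s: 0.0 for s in alphabet} for k in states}
        for i in range(len(x)):
            for k in states:
                E[k][x[i]] += f[i][k] * b[i][k] / px
                if i + 1 < len(x):
                    for l in states:
                        A[k][l] += f[i][k] * a[k][l] * e[l][x[i + 1]] * b[i + 1][l] / px
        # M step: re-estimate the parameters from the expected counts
        a = {k: {l: A[k][l] / sum(A[k].values()) for l in states} for k in states}
        e = {k: {s: E[k][s] / sum(E[k].values()) for s in alphabet} for k in states}

    print(a)
    print(e)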

  31. Preview: Profile HMMs

  32. Reading assignment • Continue studying: • Durbin et al. (1998) pgs. 46-79 in Biological Sequence Analysis
