
Parameter Estimation For HMM Lecture #7




Presentation Transcript


1. Parameter Estimation For HMM, Lecture #7. Background Readings: Chapter 3.3 in the textbook, Biological Sequence Analysis, Durbin et al., 2001.

2. Reminder: Hidden Markov Model
[Diagram: hidden states S1, S2, …, SL-1, SL form a Markov chain; each state Si emits an observed symbol xi.]
Markov chain transition probabilities: p(Si+1 = t | Si = s) = ast.
Emission probabilities: p(Xi = b | Si = s) = es(b).
For each integer L > 0, this defines a probability space in which the simple events are state/output sequences of length L.
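
To make the definition concrete, here is a minimal Python sketch (not part of the lecture) of computing p(x, s | θ) for a toy HMM. The matrices a and e and the explicit initial distribution pi are illustrative assumptions; the slides leave the initial state distribution implicit.

```python
import numpy as np

# Toy parameters (illustrative only): 2 states, 2 symbols.
a = np.array([[0.9, 0.1],      # a[k, l] = p(S_{i+1} = l | S_i = k)
              [0.2, 0.8]])
e = np.array([[0.5, 0.5],      # e[k, b] = p(X_i = b | S_i = k)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])      # assumed distribution of the first state

def joint_prob(states, symbols):
    """p(x, s | theta): probability that the HMM follows the given state
    path and emits the given symbols."""
    p = pi[states[0]] * e[states[0], symbols[0]]
    for prev, cur, b in zip(states, states[1:], symbols[1:]):
        p *= a[prev, cur] * e[cur, b]
    return p

print(joint_prob(states=[0, 0, 1], symbols=[1, 0, 1]))
```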

3. Hidden Markov Model for CpG Islands
The states: Domain(Si) = {+, -} × {A, C, T, G} (8 values), e.g. A-, T+, G+, …
In this representation P(xi | si) = 0 or 1, depending on whether xi is consistent with si. E.g. xi = G is consistent with si = (+,G) and with si = (-,G), but not with any other state si.

4. Reminder: Most Probable State Path
Given an output sequence x = (x1,…,xL), a most probable path s* = (s*1,…,s*L) is one which maximizes p(s|x).

5. Predicting CpG islands via the most probable path:
• Output symbols: A, C, G, T (4 letters).
• Markov chain states: 4 "-" states and 4 "+" states, two for each letter (8 states total).
• The most probable path (found by Viterbi's algorithm) predicts CpG islands.
Experiments (Durbin et al., p. 60-61) show that the predicted islands are shorter than the assumed ones. In addition, quite a few "false negatives" are found.
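
A minimal log-space Viterbi sketch of the decoding step just described. This is a generic implementation under the same assumptions as the earlier sketch (an explicit initial distribution pi), not the lecture's own code; the 8-state CpG model would supply its own a and e.

```python
import numpy as np

def viterbi(x, a, e, pi):
    """Return a most probable state path s* = argmax_s p(s | x), computed in log space."""
    K, L = a.shape[0], len(x)
    v = np.zeros((L, K))                # v[i, k] = best log-probability of a path ending in state k at position i
    ptr = np.zeros((L, K), dtype=int)   # back-pointers for the traceback
    v[0] = np.log(pi) + np.log(e[:, x[0]])
    for i in range(1, L):
        for l in range(K):
            scores = v[i - 1] + np.log(a[:, l])
            ptr[i, l] = int(np.argmax(scores))
            v[i, l] = scores[ptr[i, l]] + np.log(e[l, x[i]])
    path = [int(np.argmax(v[-1]))]      # best final state, then trace back
    for i in range(L - 1, 0, -1):
        path.append(int(ptr[i, path[-1]]))
    return path[::-1]
```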

6. Reminder: Most Probable State
Given an output sequence x = (x1,…,xL), si is a most probable state (at location i) if: si = argmaxk p(Si = k | x).

7. Finding the probability that a letter is in a CpG island, via the algorithm for the most probable state:
• The probability that the occurrence of G in the i-th location is in a CpG island (a "+" state) is:
∑s+ p(Si = s+ | x) = ∑s+ F(Si = s+) B(Si = s+) / p(x)
• where the summation is formally over the 4 "+" states, but actually only state G+ needs to be considered (why?).
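
The forward and backward quantities F and B used above can be sketched as follows (a generic, unscaled implementation with an assumed initial distribution pi; for long sequences one would rescale or work in log space to avoid underflow). The posterior P(Si = k | x) is F·B normalized by p(x); for the CpG question one would sum the "+" rows at position i.

```python
import numpy as np

def forward(x, a, e, pi):
    """F[i, k] = p(x_1..x_i, S_i = k | theta)."""
    L, K = len(x), a.shape[0]
    F = np.zeros((L, K))
    F[0] = pi * e[:, x[0]]
    for i in range(1, L):
        F[i] = (F[i - 1] @ a) * e[:, x[i]]
    return F

def backward(x, a, e):
    """B[i, k] = p(x_{i+1}..x_L | S_i = k, theta)."""
    L, K = len(x), a.shape[0]
    B = np.zeros((L, K))
    B[-1] = 1.0
    for i in range(L - 2, -1, -1):
        B[i] = a @ (e[:, x[i + 1]] * B[i + 1])
    return B

def posterior(x, a, e, pi):
    """P(S_i = k | x, theta) = F[i, k] * B[i, k] / p(x | theta)."""
    F, B = forward(x, a, e, pi), backward(x, a, e)
    return F * B / F[-1].sum()        # p(x | theta) = sum_k F[L, k]
```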

8. Parameter Estimation for HMM
An HMM model is defined by the parameters akl and ek(b), for all states k, l and all symbols b. Let θ denote the collection of these parameters.

9. Parameter Estimation for HMM
To determine the values of (the parameters in) θ, we use a training set {x1,...,xn}, where each xj is a sequence which is assumed to fit the model. Given the parameters θ, each sequence xj has an assigned probability p(xj|θ).

10. Maximum Likelihood Parameter Estimation for HMM
The elements of the training set {x1,...,xn} are assumed to be independent, so p(x1,..., xn|θ) = ∏j p(xj|θ). ML parameter estimation looks for the θ which maximizes the above. The exact method for finding or approximating this θ depends on the nature of the training set used.

11. Data for HMM
Possible properties of (the sequences in) the training set:
• For each xj, whether the information on the states sji is available (the symbols xji are usually known).
• The size (number of sequences) of the training set.

12. Case 1: ML when Sequences are fully known
We know the complete structure of each sequence in the training set {x1,...,xn}. We wish to estimate akl and ek(b) for all pairs of states k, l and symbols b. By the ML method, we look for parameters θ* which maximize the probability of the sample set: p(x1,...,xn|θ*) = maxθ p(x1,...,xn|θ).

13. Case 1: Sequences are fully known
Let mkl = |{i : si-1 = k, si = l}| and mk(b) = |{i : si = k, xi = b}| (counted in xj). Then for each xj we have:
p(xj|θ) = ∏k,l akl^mkl · ∏k,b ek(b)^mk(b).

14. Case 1 (cont.)
By the independence of the xj's, p(x1,...,xn|θ) = ∏j p(xj|θ). Thus, if Akl = #(transitions from k to l) in the training set, and Ek(b) = #(emissions of symbol b from state k) in the training set, we have:
p(x1,...,xn|θ) = ∏k,l akl^Akl · ∏k,b ek(b)^Ek(b).
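
When the state paths are known, the counts Akl and Ek(b) can be tallied directly. A small sketch, assuming states and symbols are given as integer-coded sequences (a hypothetical helper, not from the text):

```python
import numpy as np

def count_transitions_emissions(state_paths, symbol_seqs, n_states, n_symbols):
    """Tally A[k, l] = #(k -> l transitions) and E[k, b] = #(b emitted from k)
    over a fully labelled training set."""
    A = np.zeros((n_states, n_states))
    E = np.zeros((n_states, n_symbols))
    for states, symbols in zip(state_paths, symbol_seqs):
        for prev, cur in zip(states, states[1:]):
            A[prev, cur] += 1
        for k, b in zip(states, symbols):
            E[k, b] += 1
    return A, E
```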

15. Case 1 (cont.)
So we need to find akl's and ek(b)'s which maximize:
F = ∏k,l akl^Akl · ∏k,b ek(b)^Ek(b)
subject to: ∑l akl = 1 and ∑b ek(b) = 1 for every state k (and akl, ek(b) ≥ 0).

16. Case 1 (cont.)
Rewriting, we need to maximize:
F = ∏k [∏l akl^Akl] · ∏k [∏b ek(b)^Ek(b)],
i.e. a product of independent factors, one per state k.

17. Case 1 (cont.)
If we maximize each factor ∏l akl^Akl and each factor ∏b ek(b)^Ek(b) separately, then we will also maximize F. Each of the above is a simpler ML problem, similar to ML parameter estimation for a die, treated next.

18. ML parameter estimation for a die
Let X be a random variable with 6 values x1,…,x6 denoting the six outcomes of a die. Here the parameters are θ = {θ1, θ2, θ3, θ4, θ5, θ6}, with ∑θi = 1.
Assume that the data is one sequence: Data = (x6, x1, x1, x3, x2, x2, x3, x4, x5, x2, x6).
So we have to maximize: P(Data|θ) = θ1^2 · θ2^3 · θ3^2 · θ4 · θ5 · θ6^2
subject to: θ1 + θ2 + θ3 + θ4 + θ5 + θ6 = 1 [and θi ≥ 0].

19. Maximum Likelihood Estimate
By the ML approach, we look for parameters that maximize the probability of the data (i.e., the likelihood function). We will find the parameters by considering the corresponding log-likelihood function:
log P(Data|θ) = ∑i Ni log θi, where Ni is the number of occurrences of outcome i and θ6 = 1 - ∑j=1..5 θj.
A necessary condition for a (local) maximum is:
∂ log P(Data|θ) / ∂θj = Nj/θj - N6/θ6 = 0, for j = 1,…,5.

20. Finding the Maximum
Rearranging terms: Nj/θj = N6/θ6, i.e. θj = θ6 Nj/N6.
Divide the j-th equation by the i-th equation: θj/θi = Nj/Ni.
Sum from j = 1 to 6: 1 = ∑j θj = (θi/Ni) ∑j Nj = θi N/Ni, where N = ∑j Nj.
So there is only one local, and hence global, maximum. Hence the MLE is given by: θi = Ni/N.
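
As a check, the die MLE can be computed directly from the data on slide 18; a small sketch:

```python
import numpy as np

data = [6, 1, 1, 3, 2, 2, 3, 4, 5, 2, 6]      # the Data sequence from slide 18
N = np.bincount(data, minlength=7)[1:]        # N[i-1] = number of times outcome i was observed
theta_mle = N / N.sum()                       # relative frequencies N_i / N
print(theta_mle)                              # e.g. theta_2 = 3/11
```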

21. Generalization to a distribution with any number k of outcomes
Let X be a random variable with k values x1,…,xk denoting the k outcomes of Independently and Identically Distributed experiments, with parameters θ = {θ1, θ2, ..., θk} (θi is the probability of xi). Again, the data is one sequence of length n: Data = (xi1, xi2, ..., xin). Then we have to maximize:
P(Data|θ) = ∏i θi^Ni, where Ni is the number of occurrences of xi in the data,
subject to: θ1 + θ2 + .... + θk = 1.

22. Generalization for k outcomes (cont.)
By a treatment identical to the die case, the maximum is obtained when, for all i, θi is proportional to Ni. Hence the MLE is given by the relative frequencies: θi = Ni/n.

23. Fractional Exponents
Some models allow the Ni's to be fractions (e.g., if we are uncertain of a die outcome, we may consider it "6" with 20% confidence and "5" with 80%). We still can have ∑i Ni = n, and the same analysis yields: θi = Ni/n.

24. Apply the ML method to HMM
Let Akl = #(transitions from k to l) in the training set, and Ek(b) = #(emissions of symbol b from state k) in the training set. We need to:
find akl's and ek(b)'s which maximize ∏k,l akl^Akl · ∏k,b ek(b)^Ek(b), subject to ∑l akl = 1 and ∑b ek(b) = 1 for every state k.

25. Apply to HMM (cont.)
We apply the previous technique to get, for each k, the parameters {akl | l = 1,..,m} and {ek(b) | b ∈ Σ}:
akl = Akl / ∑l' Akl'   and   ek(b) = Ek(b) / ∑b' Ek(b'),
which gives the optimal ML parameters.
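
In code, this step is just a row-wise normalization of the count matrices; a sketch, assuming A and E are the matrices produced by the counting sketch above:

```python
# a_kl = A_kl / sum_l' A_kl'   and   e_k(b) = E_k(b) / sum_b' E_k(b')
a_ml = A / A.sum(axis=1, keepdims=True)
e_ml = E / E.sum(axis=1, keepdims=True)
```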

26. Summary of Case 1: Sequences are fully known
We know the complete structure of each sequence in the training set {x1,...,xn}. We wish to estimate akl and ek(b) for all pairs of states k, l and symbols b. Summary: when everything is known, we can find the (unique set of) parameters θ* which maximizes p(x1,...,xn|θ*) = maxθ p(x1,...,xn|θ).

27. Adding pseudo counts in HMM
If the sample set is too small, we may get a biased result. In this case we modify the actual counts by our prior knowledge/belief: rkl is our prior belief on transitions from k to l, and rk(b) is our prior belief on emissions of b from state k. We then use Akl + rkl and Ek(b) + rk(b) in place of Akl and Ek(b) in the formulas above.
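
Pseudocounts are simply added to the observed counts before normalizing; a sketch with an assumed uniform prior count of 1 (any non-negative rkl and rk(b) reflecting prior belief would do):

```python
r_trans = np.ones_like(A)     # r_kl: prior (pseudo) transition counts; uniform here for illustration
r_emit = np.ones_like(E)      # r_k(b): prior (pseudo) emission counts

a_hat = (A + r_trans) / (A + r_trans).sum(axis=1, keepdims=True)
e_hat = (E + r_emit) / (E + r_emit).sum(axis=1, keepdims=True)
```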

28. Case 2: State paths are unknown
For a given θ we have: p(x1,..., xn|θ) = p(x1|θ) ⋯ p(xn|θ) (since the xj are independent). For each sequence x, p(x|θ) = ∑s p(x,s|θ), the sum taken over all state paths s which emit x.

29. Case 2: State paths are unknown
Thus, for the n sequences (x1,..., xn) we have: p(x1,..., xn|θ) = ∑(s1,..., sn) p(x1,..., xn, s1,..., sn|θ), where the summation is taken over all tuples of n state paths (s1,..., sn) which generate (x1,..., xn). For simplicity, we will assume that n = 1.

30. Case 2: State paths are unknown
So we need to maximize p(x|θ) = ∑s p(x,s|θ), where the summation is over all the state paths s which produce the output sequence x. Finding the θ* which maximizes ∑s p(x,s|θ) is hard. [Unlike finding the θ* which maximizes p(x,s|θ) for a single known pair (x,s).]

31. ML Parameter Estimation for HMM
The general process for finding θ in this case is:
• Start with an initial value of θ.
• Find θ' so that p(x|θ') > p(x|θ).
• Set θ = θ'.
• Repeat until some convergence criterion is met.
A general algorithm of this type is the Expectation Maximization (EM) algorithm, which we will meet later. For the specific case of HMM, it is the Baum-Welch training.

32. Baum-Welch training
We start with some values of akl and ek(b), which define prior values of θ. Then we use an iterative algorithm which attempts to replace θ by a θ* s.t. p(x|θ*) > p(x|θ). This is done by "imitating" the algorithm for Case 1, where all states are known:

33. Baum-Welch training
In Case 1 we computed the optimal values of akl and ek(b) (for the optimal θ) by simply counting the number Akl of transitions from state k to state l, and the number Ek(b) of emissions of symbol b from state k, in the training set. This was possible since we knew all the states.

34. Baum-Welch training
When the states are unknown, the counting process is replaced by an averaging process: for each edge si-1 → si we compute the expected number of "k to l" transitions, for all possible pairs (k,l), over this edge. Then, for each k and l, we take Akl to be the sum over all edges.

35. Baum-Welch training
Similarly, for each emission edge si → xi with xi = b, and each state k, we compute the expected number of times that si = k, which is the expected number of "k emits b" events on this edge. Then we take Ek(b) to be the sum over all such edges. These expected values are computed using the current parameters θ:

36. Akl and Ek(b) when states are unknown
Akl and Ek(b) are computed according to the current distribution θ, that is:
Akl = ∑s Askl p(s|x,θ), where Askl is the number of k to l transitions in the state path s.
Ek(b) = ∑s Esk(b) p(s|x,θ), where Esk(b) is the number of times k emits b in the state path s with output x.

37. Baum-Welch, step 1a: Count the expected number of state transitions.
For each i, k, l, compute the state transition probabilities under the current θ: P(si-1 = k, si = l | x, θ). For this, we use the forward and backward algorithms.

38. Baum-Welch: Step 1a (cont.)
Claim: By the probability distribution of the HMM (where akl and el(xi) are the parameters defined by θ, and Fk(i-1), Bl(i) are the values computed by the forward and backward algorithms):
P(si-1 = k, si = l | x, θ) = Fk(i-1) akl el(xi) Bl(i) / P(x|θ).

39. Step 1a: Computing P(si-1 = k, si = l | x, θ)
P(x1,…,xL, si-1 = k, si = l | θ) = P(x1,…,xi-1, si-1 = k | θ) · akl el(xi) · P(xi+1,…,xL | si = l, θ)
= Fk(i-1) akl el(xi) Bl(i),
where the first factor is obtained via the forward algorithm and the last via the backward algorithm. Dividing by p(x|θ) gives:
p(si-1 = k, si = l | x, θ) = Fk(i-1) akl el(xi) Bl(i) / p(x|θ).

40. Step 1a (end)
For each pair (k,l), compute the expected number of state transitions from k to l, as the sum of the expected number of k to l transitions over all L edges:
Akl = ∑i P(si-1 = k, si = l | x, θ) = (1/p(x|θ)) ∑i Fk(i-1) akl el(xi) Bl(i).
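
A sketch of this computation for a single sequence, assuming the forward and backward functions from the earlier sketch are in scope; it accumulates Fk(i-1)·akl·el(xi)·Bl(i) over all positions and divides once by p(x|θ):

```python
def expected_transitions(x, a, e, pi):
    """A[k, l] = (1 / p(x|theta)) * sum_i F[i-1, k] * a[k, l] * e[l, x_i] * B[i, l]."""
    F, B = forward(x, a, e, pi), backward(x, a, e)
    px = F[-1].sum()
    K = a.shape[0]
    A = np.zeros((K, K))
    for i in range(1, len(x)):
        # outer(F[i-1], e[:, x[i]] * B[i])[k, l] = F[i-1, k] * e[l, x[i]] * B[i, l]
        A += np.outer(F[i - 1], e[:, x[i]] * B[i]) * a
    return A / px
```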

41. Step 1a for many sequences
Exercise: Prove that when we have n independent input sequences (x1,..., xn), then Akl is given by:
Akl = ∑j (1/p(xj|θ)) ∑i Fjk(i-1) akl el(xji) Bjl(i),
where Fj and Bj are the forward and backward values computed for sequence xj.

42. Baum-Welch, Step 1b: Count the expected number of symbol emissions.
For each state k and each symbol b, and for each i where xi = b, compute the expected number of times that si = k, i.e. P(si = k | x, θ).

43. Baum-Welch: Step 1b
For each state k and each symbol b, compute the expected number of emissions of b from k as the sum of the expected number of times that si = k, over all i for which xi = b:
Ek(b) = ∑{i: xi=b} P(si = k | x, θ) = (1/p(x|θ)) ∑{i: xi=b} Fk(i) Bk(i).
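
The emission counterpart, under the same assumptions (forward/backward from the earlier sketch, a single training sequence):

```python
def expected_emissions(x, a, e, pi):
    """E[k, b] = (1 / p(x|theta)) * sum over {i : x_i = b} of F[i, k] * B[i, k]."""
    F, B = forward(x, a, e, pi), backward(x, a, e)
    px = F[-1].sum()
    E = np.zeros_like(e)
    for i, b in enumerate(x):
        E[:, b] += F[i] * B[i]
    return E / px
```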

44. Step 1b for many sequences
Exercise: when we have n sequences (x1,..., xn), the expected number of emissions of b from k is given by:
Ek(b) = ∑j (1/p(xj|θ)) ∑{i: xji = b} Fjk(i) Bjk(i).

45. Summary of Steps 1a and 1b: the E part of the Baum-Welch training
These steps compute the expected numbers Akl of k-to-l transitions, for all pairs of states k and l, and the expected numbers Ek(b) of emissions of symbol b from state k, for all states k and symbols b. The next step is the M step, which is identical to the computation of the optimal ML parameters when all states are known.

46. Baum-Welch: step 2
Use the Akl's and Ek(b)'s to compute the new values of akl and ek(b), as in Case 1. These values define θ*. The correctness of the EM algorithm implies that: p(x1,..., xn|θ*) ≥ p(x1,..., xn|θ), i.e., θ* increases (or at least does not decrease) the probability of the data. This procedure is iterated until some convergence criterion is met.
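
Putting the E-step and M-step together, a minimal (unscaled) Baum-Welch loop for one training sequence might look as follows. It reuses the sketches above, does not re-estimate the assumed initial distribution pi, and adds no pseudocounts, so it only illustrates the iteration rather than a production implementation.

```python
def baum_welch(x, a, e, pi, n_iter=100, tol=1e-6):
    """Iterate: E-step (expected counts A, E) then M-step (normalize),
    until the log-likelihood log p(x | theta) stops improving."""
    a, e = a.copy(), e.copy()
    prev_ll = -np.inf
    for _ in range(n_iter):
        A = expected_transitions(x, a, e, pi)   # E-step: expected transition counts
        Em = expected_emissions(x, a, e, pi)    # E-step: expected emission counts
        a = A / A.sum(axis=1, keepdims=True)    # M-step: a_kl = A_kl / sum_l' A_kl'
        e = Em / Em.sum(axis=1, keepdims=True)  # M-step: e_k(b) = E_k(b) / sum_b' E_k(b')
        ll = np.log(forward(x, a, e, pi)[-1].sum())
        if ll - prev_ll < tol:                  # EM guarantees ll never decreases
            break
        prev_ll = ll
    return a, e
```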
