
Maximum Entropy Model

Maximum Entropy Model, Bayesian Networks, HMM, Markov Random Fields, (Hidden/Segmental) Conditional Random Fields


Presentation Transcript


  1. Maximum Entropy Model, Bayesian Networks, HMM, Markov Random Fields, (Hidden/Segmental) Conditional Random Fields

  2. Maximum Entropy Model
  \[ \log p(y \mid x) = \boldsymbol{\lambda}^T \mathbf{f}(x, y) + \mathrm{const} = \sum_k \lambda_k f_k(x, y) + \mathrm{const}, \qquad P(y \mid x) \propto \exp\Big\{ \sum_k \lambda_k f_k(x, y) \Big\} \]
  \[ \boldsymbol{\lambda} = [\lambda_1, \dots, \lambda_K]^T, \qquad \mathbf{f}(x, y) = [f_1(x, y), \dots, f_K(x, y)]^T \]
  • x – observations • y – class identity • fk – feature functions • λk – trainable parameters
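
  A minimal NumPy sketch of this posterior computation; the feature function, class set, and weight values below are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def maxent_posterior(x, classes, feature_fn, lam):
    """P(y|x) = exp{lambda.f(x,y)} / sum_y' exp{lambda.f(x,y')}."""
    scores = np.array([lam @ feature_fn(x, y) for y in classes])
    scores -= scores.max()                     # subtract max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def f(x, y):
    # hypothetical features [1, x, x^2] gated by the class identity: f_k(x, y) = delta(y=c) * phi_k(x)
    out = np.zeros(6)
    out[3 * y:3 * y + 3] = [1.0, x, x * x]
    return out

lam = np.array([0.2, 1.0, -0.1, -0.2, -1.0, -0.1])   # made-up trainable parameters
print(maxent_posterior(0.7, [0, 1], f, lam))
```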

  3. Maximum Entropy Model
  \[ P(y \mid x) = \frac{p(x, y)}{\sum_{y'} p(x, y')}, \qquad \hat{\boldsymbol{\lambda}} = \arg\max_{\boldsymbol{\lambda}} \prod_r P_{\boldsymbol{\lambda}}(y_r \mid x_r), \qquad P_{\boldsymbol{\lambda}}(y \mid x) = \frac{\exp\{ \boldsymbol{\lambda}^T \mathbf{f}(x, y) \}}{\sum_{y'} \exp\{ \boldsymbol{\lambda}^T \mathbf{f}(x, y') \}} \]
  • We train parameters λk to maximize the conditional likelihood (equivalently: minimize cross-entropy; maximize the MMI objective function) of the training data
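
  The gradient of this conditional log-likelihood has the usual "observed minus model-expected feature counts" form; a hedged sketch (function and variable names are my own, not from the slides):

```python
import numpy as np

def conditional_ll_grad(data, classes, feature_fn, lam):
    """Gradient of sum_r log P(y_r|x_r): observed minus model-expected features."""
    grad = np.zeros_like(lam)
    for x, y in data:                                   # data = [(x_r, y_r), ...]
        feats = np.array([feature_fn(x, c) for c in classes])
        scores = feats @ lam
        post = np.exp(scores - scores.max())
        post /= post.sum()                              # P(y|x_r) under the current lambda
        grad += feature_fn(x, y) - post @ feats         # observed minus expected feature vector
    return grad

# One plain gradient-ascent step (the step size is an arbitrary choice):
# lam = lam + 0.1 * conditional_ll_grad(train_data, classes, f, lam)
```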

  4. Multiclass Logistic Regression
  \[ P(y \mid x) = \frac{\exp\{ \mathbf{w}_y^T \mathbf{x} \}}{\sum_{y'} \exp\{ \mathbf{w}_{y'}^T \mathbf{x} \}} = \frac{\exp\{ \boldsymbol{\lambda}^T \mathbf{f}(x, y) \}}{\sum_{y'} \exp\{ \boldsymbol{\lambda}^T \mathbf{f}(x, y') \}} \]
  \[ \boldsymbol{\lambda} = [ w_{11}, w_{12}, \dots, w_{1N}, \dots, w_{K1}, w_{K2}, \dots, w_{KN} ]^T, \qquad \mathbf{f}(x, y) = [ \delta(y{=}1) x_1, \delta(y{=}1) x_2, \dots, \delta(y{=}1) x_N, \dots, \delta(y{=}K) x_1, \delta(y{=}K) x_2, \dots, \delta(y{=}K) x_N ]^T \]
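
  A quick numerical check of the equivalence stated on this slide, with randomly generated weights W and input x (the sizes K and N are arbitrary):

```python
import numpy as np

K, N = 3, 4                                   # illustrative numbers of classes and input dimensions
rng = np.random.default_rng(0)
W = rng.normal(size=(K, N))                   # rows are the per-class weight vectors w_y
x = rng.normal(size=N)

lam = W.reshape(-1)                           # lambda = [w_11 ... w_1N, ..., w_K1 ... w_KN]

def f(x, y):
    out = np.zeros(K * N)
    out[y * N:(y + 1) * N] = x                # the block selected by delta(y = c) holds x
    return out

softmax = np.exp(W @ x)
softmax /= softmax.sum()
maxent = np.exp([lam @ f(x, y) for y in range(K)])
maxent /= maxent.sum()
print(np.allclose(softmax, maxent))           # True: the two parameterizations agree
```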

  5. MaxEnt example
  • The MaxEnt model can be initialized to simulate a recognizer where classes are modeled by Gaussians
  • Example for two classes and 1-dimensional data:
  \[ \log p(y, x) = \log P(y) + \log \mathcal{N}(x; \mu_y, \sigma_y^2) = \Big[ \log P(y) - 0.5 \log(2\pi\sigma_y^2) - 0.5\,\mu_y^2/\sigma_y^2 \Big] + \frac{\mu_y}{\sigma_y^2}\, x - \frac{0.5}{\sigma_y^2}\, x^2 \]
  which matches a MaxEnt model with
  \[ \mathbf{f}(x, y) = \big[ \delta(y{=}1),\ \delta(y{=}1)x,\ \delta(y{=}1)x^2,\ \delta(y{=}2),\ \delta(y{=}2)x,\ \delta(y{=}2)x^2 \big]^T \]
  \[ \boldsymbol{\lambda} = \big[ \log P(1) - 0.5\log(2\pi\sigma_1^2) - 0.5\,\mu_1^2/\sigma_1^2,\ \ \mu_1/\sigma_1^2,\ \ -0.5/\sigma_1^2,\ \ \log P(2) - 0.5\log(2\pi\sigma_2^2) - 0.5\,\mu_2^2/\sigma_2^2,\ \ \mu_2/\sigma_2^2,\ \ -0.5/\sigma_2^2 \big]^T \]
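
  A sketch of this initialization for two hypothetical 1-D Gaussian classes (the means, variances, and priors are made up); the softmax over λᵀf(x, y) reproduces the Gaussian classifier's posterior:

```python
import numpy as np

mu = np.array([-1.0, 2.0])                    # made-up class means
var = np.array([1.5, 0.5])                    # made-up class variances
prior = np.array([0.4, 0.6])                  # made-up class priors P(y)

# lambda block per class: [log P(y) - 0.5*log(2*pi*var) - 0.5*mu^2/var,  mu/var,  -0.5/var]
lam = np.concatenate([
    [np.log(prior[c]) - 0.5 * np.log(2 * np.pi * var[c]) - 0.5 * mu[c] ** 2 / var[c],
     mu[c] / var[c],
     -0.5 / var[c]]
    for c in range(2)])

def f(x, y):
    out = np.zeros(6)
    out[3 * y:3 * y + 3] = [1.0, x, x * x]    # [delta(y=c), delta(y=c)x, delta(y=c)x^2]
    return out

x = 0.3
gauss_joint = prior * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
maxent = np.exp([lam @ f(x, y) for y in range(2)])
print(np.allclose(gauss_joint / gauss_joint.sum(), maxent / maxent.sum()))   # True
```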

  6. Bayesian Networks • A graph corresponds to a particular factorization of the joint probability distribution over a set of random variables • Nodes are random variables, but the graph does not say what the distributions of the variables are • The graph represents a set of distributions that conform to the factorization.

  7. Bayesian Networks for GMM
  \[ p(x, s) = p(x \mid s) P(s), \qquad p(x) = \sum_s p(x \mid s) P(s) \]
  [Graph: s → x]
  • s is a discrete latent random variable identifying the Gaussian component generating observation x
  • To compute the likelihood of observed data, we need to marginalize over the latent variable s
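
  A minimal sketch of this marginalization for a toy two-component GMM (all parameter values are made up):

```python
import numpy as np

weights = np.array([0.3, 0.7])                # P(s), made-up component priors
means = np.array([0.0, 4.0])
variances = np.array([1.0, 2.0])

def gmm_likelihood(x):
    comp = np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    return float(np.sum(weights * comp))      # p(x) = sum_s P(s) p(x|s)

print(gmm_likelihood(1.2))
```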

  8. Bayesian Networks for GMM
  • Multiple observations:
  \[ p(x_1, \dots, x_N, s_1, \dots, s_N) = \prod_{n=1}^N p(x_n \mid s_n) P(s_n) \]
  \[ p(x_1, \dots, x_N) = \sum_S \prod_{n=1}^N p(x_n \mid s_n) P(s_n) = \prod_{n=1}^N \sum_{s_n} p(x_n \mid s_n) P(s_n) \]
  [Graph: s1, s2, …, sN-1, sN, each generating the corresponding x1, x2, …, xN-1, xN]

  9. Bayesian Networks for HMM
  \[ p(x_1, \dots, x_N, s_1, \dots, s_N) = \Big[ P(s_1) \prod_{n=2}^N P(s_n \mid s_{n-1}) \Big] \prod_{n=1}^N p(x_n \mid s_n) \]
  [Graph: s1 → s2 → … → sN-1 → sN, each sn generating xn]
  • The si nodes are not "HMM states"; they are random variables (one for each frame) whose values say what state we are in for a particular frame i
  • To evaluate the likelihood of data p(x1, …, xN), we marginalize over all state sequences (all possible values of s1, …, sN), e.g. using Dynamic Programming
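
  A sketch of the Dynamic Programming (forward) recursion that performs this marginalization, for a toy discrete-observation HMM (all tables are made up):

```python
import numpy as np

pi = np.array([0.6, 0.4])                       # P(s_1)
A = np.array([[0.7, 0.3],                       # A[i, j] = P(s_n = j | s_{n-1} = i)
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],                  # B[s, o] = p(x_n = o | s_n = s)
              [0.1, 0.3, 0.6]])

def hmm_likelihood(obs):
    # forward recursion: alpha_n(s) = p(x_n|s) * sum_{s'} alpha_{n-1}(s') P(s|s')
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())                   # p(x_1, ..., x_N)

print(hmm_likelihood([0, 1, 2, 2]))
```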

  10. Conditional independence • Bayesian Networks allow us to see conditional independence properties. But the opposite is true for:

  11. Markov Random Fields • Undirected graphical model • Directly describes conditional independence properties • In the example: P(x1, x4 | x2, x3) = P(x1 | x2, x3) P(x4 | x2, x3) • x1 and x4 are independent given x2 and x3, as there is no path from x1 to x4 that does not lead through either x2 or x3. • Subsets of nodes in which all nodes are connected with each other are called cliques • The outline in blue is a maximal clique. • When factorizing a distribution described by an MRF, variables not connected by a link must not appear in the same factor ⇒ let's make the factors correspond to (maximal) cliques.

  12. MRF - factorization • The joint probability distribution over all random variables x can be expressed as a normalized product of potential functions ψC(xC), which are positive-valued functions of subsets of variables xC corresponding to maximal cliques C: p(x) = (1/Z) ∏C ψC(xC) • It is useful to express the potential functions in terms of energy functions: ψC(xC) = exp{−E(xC)}, so the product of potentials becomes a sum of E(xC) in the exponent instead of a product

  13. MRF - factorization
  \[ P(x_1, x_2, x_3, x_4) = \frac{1}{Z} \exp\{ -E(x_1, x_2, x_3) - E(x_2, x_3, x_4) \} \]
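
  A small sketch of this factorization for binary variables, with arbitrary energy tables and the normalizer Z computed by brute-force enumeration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
E1 = rng.normal(size=(2, 2, 2))     # energy of clique (x1, x2, x3), arbitrary values
E2 = rng.normal(size=(2, 2, 2))     # energy of clique (x2, x3, x4), arbitrary values

def unnorm(x1, x2, x3, x4):
    return np.exp(-E1[x1, x2, x3] - E2[x2, x3, x4])

Z = sum(unnorm(*x) for x in itertools.product([0, 1], repeat=4))

def joint(x1, x2, x3, x4):
    return unnorm(x1, x2, x3, x4) / Z   # P(x1, x2, x3, x4)

print(joint(0, 1, 1, 0))
```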

  14. Checking conditional independence
  \[ P(x_1, x_4 \mid x_2, x_3) = \frac{P(x_1, x_2, x_3, x_4)}{P(x_2, x_3)} = \frac{\frac{1}{Z}\, \psi_1(x_1, x_2, x_3)\, \psi_2(x_2, x_3, x_4)}{\frac{1}{Z} \sum_{x_1, x_4} \psi_1(x_1, x_2, x_3)\, \psi_2(x_2, x_3, x_4)} \]
  \[ = \frac{\psi_1(x_1, x_2, x_3)}{\sum_{x_1} \psi_1(x_1, x_2, x_3)} \cdot \frac{\psi_2(x_2, x_3, x_4)}{\sum_{x_4} \psi_2(x_2, x_3, x_4)} = P(x_1 \mid x_2, x_3)\, P(x_4 \mid x_2, x_3) \]
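
  A numerical check of this derivation, again with arbitrary potential tables over binary variables: the conditional distribution of (x1, x4) given (x2, x3) factorizes into the product of its marginals.

```python
import numpy as np

rng = np.random.default_rng(2)
psi1 = np.exp(rng.normal(size=(2, 2, 2)))        # psi1(x1, x2, x3), arbitrary positive values
psi2 = np.exp(rng.normal(size=(2, 2, 2)))        # psi2(x2, x3, x4), arbitrary positive values

P = np.array([[[[psi1[a, b, c] * psi2[b, c, d] for d in (0, 1)]
                for c in (0, 1)]
               for b in (0, 1)]
              for a in (0, 1)])
P /= P.sum()                                     # joint P(x1, x2, x3, x4)

x2, x3 = 1, 0                                    # condition on arbitrary values of x2, x3
cond = P[:, x2, x3, :] / P[:, x2, x3, :].sum()   # P(x1, x4 | x2, x3)
p1 = cond.sum(axis=1)                            # P(x1 | x2, x3)
p4 = cond.sum(axis=0)                            # P(x4 | x2, x3)
print(np.allclose(cond, np.outer(p1, p4)))       # True: conditional independence holds
```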

  15. Markov Random Fields for HMM
  \[ P(x_1, \dots, x_N, s_1, \dots, s_N) = \frac{1}{Z} \Big[ \prod_{n=2}^N \tilde\psi(s_{n-1}, s_n) \Big] \prod_{n=1}^N \psi(x_n, s_n) \]
  [Graph: chain s1 – s2 – … – sn-1 – sn, each sn linked to xn]
  For Z = 1 and
  \[ \tilde\psi(s_1, s_2) = p(s_1)\, p(s_2 \mid s_1), \qquad \tilde\psi(s_{n-1}, s_n) = p(s_n \mid s_{n-1}), \qquad \psi(x_n, s_n) = p(x_n \mid s_n) \]
  we obtain the HMM model:
  \[ P(x_1, \dots, x_N, s_1, \dots, s_N) = \Big[ P(s_1) \prod_{n=2}^N p(s_n \mid s_{n-1}) \Big] \prod_{n=1}^N p(x_n \mid s_n) \]

  16. Markov Random Fields for HMM
  \[ P(S \mid X) = \frac{p(X, S)}{p(X)} = \frac{p(X, S)}{\sum_{S'} p(X, S')} \]
  • HMM is only one of the possible distributions represented by the graph
  • In the case of HMM, the individual factors are already well-normalized distributions ⇒ Z = 1
  • With general "unnormalized" potential functions, it would be difficult to compute Z, as we would have to integrate over all real-valued variables xn.
  • However, it is not difficult to evaluate the conditional probability above: the normalization terms Z in the numerator and denominator cancel, and although the sum in the denominator is over all possible state sequences, the terms in the sum are just products of factors, like for an HMM ⇒ we can use the same Dynamic Programming trick.
  • We can also find the most likely sequence S using the familiar Viterbi algorithm.
  • To train such a model (the parameters of the potential functions), we can directly maximize the conditional likelihood P(S|X) ⇒ discriminative training (like MMI or logistic regression)
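
  A sketch of the Viterbi recursion mentioned above, in the log domain, for toy HMM-like factors (all tables are made up):

```python
import numpy as np

log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.2, 0.8]])        # log transition factors
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])  # log emission factors, discrete observations

def viterbi(obs):
    delta = log_pi + log_B[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + log_A         # scores[i, j]: best path ending in i, moving to j
        back.append(scores.argmax(axis=0))      # best previous state for each current state
        delta = scores.max(axis=0) + log_B[:, o]
    path = [int(delta.argmax())]                # trace back the most likely state sequence
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2, 2]))
```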

  17. Conditional Random Fields
  • Let's consider a special form of the potential functions:
  \[ \tilde\psi(s_{n-1}, s_n) = \exp\Big\{ \sum_k \tilde\lambda_k f_k(s_{n-1}, s_n) \Big\}, \qquad \psi(x_n, s_n) = \exp\Big\{ \sum_l \lambda_l f_l(x_n, s_n) \Big\} \]
  • fk, fl – predefined feature functions
  • λ̃k, λl – trainable parameters
  • We can then rewrite
  \[ P(S \mid X) \propto \Big[ \prod_{n=2}^N \tilde\psi(s_{n-1}, s_n) \Big] \prod_{n=1}^N \psi(x_n, s_n) \]
  as
  \[ P(S \mid X) = \frac{\exp\Big\{ \sum_n \big[ \sum_k \tilde\lambda_k f_k(s_{n-1}, s_n) + \sum_l \lambda_l f_l(x_n, s_n) \big] \Big\}}{\sum_{S'} \exp\Big\{ \sum_n \big[ \sum_k \tilde\lambda_k f_k(s'_{n-1}, s'_n) + \sum_l \lambda_l f_l(x_n, s'_n) \big] \Big\}} \]

  18. Conditional Random Fields
  \[ P(S \mid X) = \frac{\exp\{ \boldsymbol{\lambda}^T \mathbf{f}(S, X) \}}{\sum_{S'} \exp\{ \boldsymbol{\lambda}^T \mathbf{f}(S', X) \}} \]
  \[ \boldsymbol{\lambda} = [ \tilde\lambda_1, \dots, \tilde\lambda_K, \lambda_1, \dots, \lambda_L ]^T, \qquad \mathbf{f}(S, X) = \Big[ \sum_{n=2}^N f_1(s_{n-1}, s_n), \dots, \sum_{n=2}^N f_K(s_{n-1}, s_n), \sum_{n=1}^N f_1(x_n, s_n), \dots, \sum_{n=1}^N f_L(x_n, s_n) \Big]^T \]
  • The model can be re-written into a form that is very similar to Maximum Entropy models (logistic regression)
  • However, S and X are sequences here (not only a class identity and an input vector)
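
  A sketch of a linear-chain CRF score λᵀf(S, X) with indicator transition features and linear emission features; this particular feature choice and the brute-force normalization are illustrative assumptions (in practice the Dynamic Programming trick from the previous slides is used):

```python
import itertools
import numpy as np

num_states, obs_dim = 2, 3
rng = np.random.default_rng(3)
W_trans = rng.normal(size=(num_states, num_states))   # weights of indicator features f_k(s_{n-1}, s_n)
W_emit = rng.normal(size=(num_states, obs_dim))       # weights of linear emission features f_l(x_n, s_n)

def score(S, X):
    """lambda^T f(S, X): sum of local transition and emission feature scores over the sequence."""
    s = sum(W_trans[S[n - 1], S[n]] for n in range(1, len(S)))
    s += sum(W_emit[S[n]] @ X[n] for n in range(len(S)))
    return s

def posterior(S, X):
    # brute-force normalization over all state sequences S' (fine only for tiny examples)
    logZ = np.logaddexp.reduce([score(Sp, X)
                                for Sp in itertools.product(range(num_states), repeat=len(S))])
    return float(np.exp(score(S, X) - logZ))

X = rng.normal(size=(4, obs_dim))
print(posterior((0, 1, 1, 0), X))
```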

  19. Hidden Conditional Random Fields • As with HMMs, we can use CRFs to model state sequences, but the state sequence is not really what we are interested in; we are interested in sequences of words. • Maybe we can live with decoding the most likely sequence of states (as we anyway do with HMMs), but for training we usually only know the sequence of words (or phonemes), not states. • HCRFs therefore marginalize over all state sequences corresponding to a sequence of words.

  20. Hidden Conditional Random Fields • Still, we can initialize HCRF to simulate HMMs with states modeled by Gaussians or GMMs

  21. Hidden Conditional Random Fields • Still, we can use Dynamic Programming to efficiently evaluate the normalizing constant and to decode • similarly to HMMs

  22. Segmental CRF for LVCSR • Let's have some unit "detectors" • phone bigram recognizer • multi-phone unit recognizer • If we knew the word boundaries, we would not care about any sequences; we would just train a Maximum Entropy model whose feature functions return quantities derived from the units detected in the word span

  23. SCRF for LVCSR features • Ngram Existence Features • Ngram Expectation Features

  24. SCRF for LVCSR features • Levenshtein Features – compare the units detected in the segment/word span with the desired pronunciation
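
  A sketch of the edit-distance computation such a feature could be based on; the unit labels in the example are hypothetical:

```python
def levenshtein(detected, reference):
    """Minimum number of insertions, deletions, and substitutions between two unit sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(reference) + 1)]
         for i in range(len(detected) + 1)]
    for i in range(1, len(detected) + 1):
        for j in range(1, len(reference) + 1):
            cost = 0 if detected[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[-1][-1]

# e.g. units detected in the word span vs. a hypothetical dictionary pronunciation
print(levenshtein(["k", "ae", "d"], ["k", "ae", "t"]))   # -> 1
```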

  25. Segmental CRF for LVCSR • State sequence = word sequence • CRF observations are segments of frames • However, all possible segmentations of frames into observations must be taken into account

  26. Segmental CRF for LVCSR • For convenience, we make the observation dependent also on the previous state/word ⇒ this simplifies the equation below • We marginalize over all possible segmentations

  27. State transition features • LM features • Baseline features

  28. Results
