
A Bit of Progress in Language Modeling Extended Version


Presentation Transcript


  1. A Bit of Progress in Language Modeling, Extended Version. Presented by Louis-Tsai, Speech Lab, CSIE, NTNU, louis@csie.ntnu.edu.tw

  2. Introduction: Overview • LM is the art of determining the probability of a sequence of words • Uses include speech recognition, optical character recognition, handwriting recognition, machine translation, and spelling correction • Improvements covered: • Higher-order n-grams • Skipping models • Clustering • Caching • Sentence-mixture models

  3. Introduction: Technique introductions • The goal of a LM is to determine the probability of a word sequence w1…wn, P(w1…wn) • Trigram assumption: approximate each word's probability using only the two preceding words (see the formula below)
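The trigram formula itself did not survive the transcript; the standard form, obtained from the chain rule and then truncating each history to the two preceding words, is:

```latex
P(w_1 \ldots w_n) = \prod_{i=1}^{n} P(w_i \mid w_1 \ldots w_{i-1})
                  \approx \prod_{i=1}^{n} P(w_i \mid w_{i-2} w_{i-1})
```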

  4. Introduction: Technique introductions • C(wi-2wi-1wi) represents the number of occurrences of wi-2wi-1wi in the training corpus, and similarly for C(wi-2wi-1), so the maximum-likelihood estimate is P(wi|wi-2wi-1) = C(wi-2wi-1wi) / C(wi-2wi-1) • There are many three-word sequences that never occur; consider the sequence “party on Tuesday”: what is P(Tuesday | party on)?
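As a concrete illustration only (not part of the slides), a minimal maximum-likelihood trigram estimator built from such counts might look like this sketch; the tiny corpus and the function names are hypothetical:

```python
from collections import Counter

def train_trigram_counts(tokens):
    """Count trigrams and the adjacent word pairs that serve as their contexts."""
    tri_counts = Counter(zip(tokens, tokens[1:], tokens[2:]))
    ctx_counts = Counter(zip(tokens, tokens[1:]))
    return tri_counts, ctx_counts

def p_mle(w2, w1, w, tri_counts, ctx_counts):
    """Unsmoothed estimate P(w | w2 w1) = C(w2 w1 w) / C(w2 w1)."""
    ctx = ctx_counts[(w2, w1)]
    return tri_counts[(w2, w1, w)] / ctx if ctx else 0.0

# Example: P(Tuesday | party on) is 0 if "party on Tuesday" never occurs,
# which is exactly the zero-probability problem smoothing addresses.
tokens = "party on Stan Chen 's birthday party on Friday".split()
tri, ctx = train_trigram_counts(tokens)
print(p_mle("party", "on", "Tuesday", tri, ctx))  # -> 0.0
```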

  5. Introduction: Smoothing • The training corpus might not contain any instances of the phrase, so C(party on Tuesday) would be 0, while there might still be 20 instances of the phrase “party on”, giving P(Tuesday | party on) = 0 • Smoothing techniques take some probability away from some observed occurrences and redistribute it • Imagine we have “party on Stan Chen’s birthday” in the training data, and that it occurs only one time

  6. Introduction: Smoothing • By taking some probability away from some words, such as “Stan”, and redistributing it to other words, such as “Tuesday”, zero probabilities can be avoided • Katz smoothing, Jelinek-Mercer smoothing (deleted interpolation), Kneser-Ney smoothing

  7. Introduction: Higher-order n-grams • The most obvious extension to trigram models is to simply move to higher-order n-grams, such as four-grams and five-grams • There is a significant interaction between smoothing and n-gram order: higher-order n-grams work better with Kneser-Ney smoothing than with some other methods, especially Katz smoothing

  8. Introduction: Skipping • We condition on a different context than the previous two words • Instead of computing P(wi|wi-2wi-1), we compute, for example, P(wi|wi-3wi-2)

  9. Introduction: Clustering • Clustering (classing) models attempt to make use of the similarities between words • If we have seen occurrences of phrases like “party on Monday” and “party on Wednesday”, then we might imagine that the word “Tuesday” is also likely to follow the phrase “party on”

  10. Introduction: Caching • Caching models make use of the observation that if you use a word, you are likely to use it again

  11. Introduction: Sentence Mixture • Sentence-mixture models make use of the observation that there are many different sentence types, and that making models for each type of sentence may be better than using one global model

  12. Introduction: Evaluation • A LM that assigned equal probability to 100 words would have perplexity 100

  13. Introduction: Evaluation • In general, the perplexity of a LM is equal to the geometric average of the inverse probability of the words measured on test data (see the formula below):
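The perplexity formula is an image in the original slide; the standard definition it refers to is:

```latex
\mathrm{Perplexity} = \left( \prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1 \ldots w_{i-1})} \right)^{1/N}
```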

  14. Introduction: Evaluation • The “true” model for any data source will have the lowest possible perplexity • The lower the perplexity of our model, the closer it is, in some sense, to the true model • Entropy is simply log2 of perplexity • Entropy is also the average number of bits per word that would be necessary to encode the test data using an optimal coder

  15. Introduction: Evaluation • Reducing entropy from 5 to 4 bits cuts perplexity from 32 to 16, a 50% reduction • Reducing entropy from 5 to 4.5 bits cuts perplexity from 32 to about 22.6, a 29.3% reduction
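A quick check of the arithmetic behind these figures (the 22.6 value is reconstructed, since it is cut off in the transcript):

```latex
2^{5} = 32, \qquad 2^{4} = 16, \qquad 2^{4.5} \approx 22.6,
\qquad 1 - 2^{-1} = 50\%, \qquad 1 - 2^{-0.5} \approx 29.3\%
```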

  16. Introduction: Evaluation • Experiment corpus: 1996 NAB • Experiments performed at 4 different training-data sizes: 100K words, 1M words, 10M words, 284M words • Heldout and test data taken from the 1994 WSJ • Heldout data: 20K words • Test data: 20K words • Vocabulary: 58,546 words

  17. Smoothing: Simple interpolation • Interpolate the trigram, bigram, and unigram estimates with weights λ and μ, where 0 ≤ λ, μ ≤ 1 (see the sketch below) • In practice, the uniform distribution is also interpolated; this ensures that no word is assigned probability 0
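The interpolation formula is an image on the slide; a standard form consistent with the weights λ and μ mentioned above is:

```latex
P_{\mathrm{interp}}(w_i \mid w_{i-2} w_{i-1}) =
    \lambda\, P_{\mathrm{ML}}(w_i \mid w_{i-2} w_{i-1})
  + \mu\, P_{\mathrm{ML}}(w_i \mid w_{i-1})
  + (1 - \lambda - \mu)\, P_{\mathrm{ML}}(w_i)
```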

  18. Smoothing: Katz smoothing • Katz smoothing is based on the Good-Turing formula • Let nr represent the number of n-grams that occur r times • Each count r is discounted to r* (see below):
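The Good-Turing discounted count, shown as an image on the slide, is:

```latex
r^{*} = (r + 1)\,\frac{n_{r+1}}{n_r}
```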

  19. Smoothing: Katz smoothing • Summing the amounts discounted from all seen n-grams leaves over a total count mass of n1 • Let N represent the total size of the training set; this left-over probability will be equal to n1/N
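A short derivation of that left-over mass (reconstructed; the slide shows it as an image): the discounts telescope,

```latex
\sum_{r \ge 1} n_r \left( r - r^{*} \right)
  = \sum_{r \ge 1} n_r r \;-\; \sum_{r \ge 1} (r+1)\, n_{r+1}
  = N - (N - n_1) = n_1
```

so the probability mass reserved for unseen n-grams is n1/N.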

  20. Smoothing: Katz smoothing • Consider a bigram model of a phrase such as Pkatz(Francisco | on). Since the phrase “San Francisco” is fairly common, the unigram probability of “Francisco” will also be fairly high • This means that using Katz smoothing, the backed-off probability Pkatz(Francisco | on) will also be fairly high. But the word “Francisco” occurs in exceedingly few contexts, and its probability of occurring in a new one is very low

  21. Smoothing: Kneser-Ney smoothing • KN smoothing uses a modified backoff distribution based on the number of contexts each word occurs in, rather than the number of occurrences of the word. Thus, the probability PKN(Francisco | on) would be fairly low, while for a word like “Tuesday” that occurs in many contexts, PKN(Tuesday | on) would be relatively high, even if the phrase “on Tuesday” did not occur in the training data

  22. Smoothing: Kneser-Ney smoothing • Backoff Kneser-Ney smoothing (see below), where |{v | C(vwi) > 0}| is the number of distinct words v that wi can occur after, D is the discount, and α is a normalization constant such that the probabilities sum to 1
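Restored from the standard definition, the backoff Kneser-Ney formula at the bigram level is:

```latex
P_{\mathrm{BKN}}(w_i \mid w_{i-1}) =
\begin{cases}
\dfrac{C(w_{i-1} w_i) - D}{C(w_{i-1})} & \text{if } C(w_{i-1} w_i) > 0 \\[2ex]
\alpha(w_{i-1})\, \dfrac{\left|\{ v \mid C(v w_i) > 0 \}\right|}
                        {\sum_{w} \left|\{ v \mid C(v w) > 0 \}\right|} & \text{otherwise}
\end{cases}
```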

  23. Smoothing: Kneser-Ney smoothing • Worked example (layout lost in the transcript): vocabulary V = {a, b, c, d}, training data b b c a a c d a a b b b b c c a a b b c c c c d d a d c

  24. Smoothing: Kneser-Ney smoothing • Interpolated models always combine both the higher-order and the lower-order distribution • Interpolated Kneser-Ney smoothing (see below), where λ(wi-1) is a normalization constant such that the probabilities sum to 1
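The interpolated Kneser-Ney formula the slide displays as an image has the standard form:

```latex
P_{\mathrm{IKN}}(w_i \mid w_{i-1}) =
  \frac{\max\!\left( C(w_{i-1} w_i) - D,\; 0 \right)}{C(w_{i-1})}
  + \lambda(w_{i-1})\,
    \frac{\left|\{ v \mid C(v w_i) > 0 \}\right|}
         {\sum_{w} \left|\{ v \mid C(v w) > 0 \}\right|}
```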

  25. Smoothing: Kneser-Ney smoothing • Use multiple discounts: one for one-counts, another for two-counts, and another for three or more counts (a separate discount for every count would have too many parameters) • This is Modified Kneser-Ney smoothing
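The three discounts are usually set from count-of-count statistics; this is the Chen and Goodman estimate, which the slide does not show explicitly:

```latex
Y = \frac{n_1}{n_1 + 2 n_2}, \qquad
D_1 = 1 - 2Y\,\frac{n_2}{n_1}, \qquad
D_2 = 2 - 3Y\,\frac{n_3}{n_2}, \qquad
D_{3+} = 3 - 4Y\,\frac{n_4}{n_3}
```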

  26. Smoothing: Jelinek-Mercer smoothing • Combines different n-gram orders by linearly interpolating all three models whenever computing the trigram probability
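Jelinek-Mercer (deleted interpolation) smoothing is usually written recursively, with each order's maximum-likelihood estimate interpolated with the interpolated model of the next lower order; this general form is restored from the standard definition:

```latex
P_{\mathrm{JM}}(w_i \mid w_{i-n+1} \ldots w_{i-1}) =
  \lambda\, P_{\mathrm{ML}}(w_i \mid w_{i-n+1} \ldots w_{i-1})
  + (1 - \lambda)\, P_{\mathrm{JM}}(w_i \mid w_{i-n+2} \ldots w_{i-1})
```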

  27. Smoothing: Absolute discounting • Absolute discounting subtracts a fixed discount D ≤ 1 from each nonzero count
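In the bigram case the absolute-discounting estimate, restored from the standard definition, is:

```latex
P_{\mathrm{abs}}(w_i \mid w_{i-1}) =
  \frac{\max\!\left( C(w_{i-1} w_i) - D,\; 0 \right)}{C(w_{i-1})}
  + \lambda(w_{i-1})\, P_{\mathrm{abs}}(w_i)
```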

  28. Witten-Bell Discounting • Key concept (things seen once): use the count of things you’ve seen once to help estimate the count of things you’ve never seen • So we estimate the total probability mass of all the zero N-grams as T/(N+T), the number of observed types divided by the number of tokens plus observed types, where N is the number of tokens and T is the number of observed types

  29. Witten-Bell Discounting • T/(N+T) gives the total “probability of unseen N-grams”; we need to divide this up among all the zero N-grams • We could just choose to divide it equally, giving each of them probability T/(Z(N+T)), where Z is the total number of N-grams with count zero

  30. Witten-Bell Discounting • Alternatively, we can represent the smoothed counts directly, as shown below:
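The smoothed-count formula on this slide, restored from the usual Witten-Bell presentation, is:

```latex
c_i^{*} =
\begin{cases}
\dfrac{T}{Z} \cdot \dfrac{N}{N + T} & \text{if } c_i = 0 \\[2ex]
c_i \cdot \dfrac{N}{N + T} & \text{if } c_i > 0
\end{cases}
```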

  31. Witten-Bell Discounting

  32. Witten-Bell Discounting • For bigrams, T is the number of bigram types and N is the number of bigram tokens
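As an illustration only (not from the slides), here is a minimal conditional Witten-Bell bigram estimator in the spirit of the formulas above; the function name witten_bell_bigram and the tiny corpus are hypothetical, and T, N, and Z are taken per history word, as is usual for the conditional case:

```python
from collections import Counter, defaultdict

def witten_bell_bigram(tokens, vocab):
    """Return a function P(w | prev) smoothed with conditional Witten-Bell discounting.

    For each history word: N = number of bigram tokens starting with it,
    T = number of distinct word types seen after it, and Z = number of
    vocabulary words never seen after it.
    """
    bigrams = Counter(zip(tokens, tokens[1:]))
    followers = defaultdict(set)
    for (prev, w) in bigrams:
        followers[prev].add(w)

    def prob(w, prev):
        N = sum(c for (p, _), c in bigrams.items() if p == prev)
        T = len(followers[prev])
        Z = len(vocab) - T
        if bigrams[(prev, w)] > 0:
            return bigrams[(prev, w)] / (N + T)      # seen bigram: discounted relative count
        return T / (Z * (N + T)) if Z else 0.0       # unseen bigram: equal share of reserved mass

    return prob

# Tiny usage example with a hypothetical corpus.
corpus = "party on Friday party on Monday".split()
p = witten_bell_bigram(corpus, vocab=set(corpus) | {"Tuesday"})
print(p("Friday", "on"), p("Tuesday", "on"))  # 0.25 and ~0.167
```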

  33. [Figure; the only recoverable label is “20 words per sentence”]

  34. Higher-order n-grams • Trigram P(wi|wi-2wi-1) → five-gram P(wi|wi-4wi-3wi-2wi-1) • In many cases, no sequence of the form wi-4wi-3wi-2wi-1 will have been seen in the training data, requiring backoff to or interpolation with four-grams, trigrams, bigrams, or even unigrams • But in those cases where such a long sequence has been seen, it may be a good predictor of wi
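In general an n-gram model conditions on the previous n-1 words; for the five-gram case discussed here this is:

```latex
P(w_i \mid w_1 \ldots w_{i-1}) \approx P(w_i \mid w_{i-4}\, w_{i-3}\, w_{i-2}\, w_{i-1})
```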

  35. [Figure: entropy improvement by n-gram order; for the 284,000,000-word training set, gains of roughly 0.06, 0.02, and 0.01 bits are marked at successive orders]

  36. Higher-order n-grams • As we can see, the behavior for Katz smoothing is very different from the behavior for KN smoothing; the main cause of this difference is backoff smoothing techniques, such as Katz smoothing, or even the backoff version of KN smoothing • Backoff smoothing techniques work poorly on low counts, especially one-counts, and as the n-gram order increases, the number of one-counts increases

  37. Higher-order n-grams • Katz smoothing has its best performance around the trigram level, and actually gets worse as this level is exceeded • KN smoothing is essentially monotonic even through 20-grams • The plateau point for KN smoothing depends on the amount of training data available: at the trigram level for the small set (100,000 words), and at 5-grams to 7-grams for the full set (284 million words) • (The 6-gram is .02 bits better than the 5-gram, and the 7-gram is .01 bits better than the 6-gram)

  38. Skipping • When considering a 5-gram context, there are many subsets of the 5-gram we could consider, such as P(wi|wi-4wi-3wi-1) or P(wi|wi-4wi-2wi-1) • If we have never seen “Show John a good time” but we have seen “Show Stan a good time”, a normal 5-gram predicting P(time | show John a good) would back off to P(time | John a good) and from there to P(time | a good), which would have a relatively low probability • A skipping model of the form P(wi|wi-4wi-2wi-1) would assign high probability to P(time | show ____ a good)

  39. Skipping • These skipping 5-grams are then interpolated with a normal 5-gram, forming models such as λ P(wi|wi-4wi-3wi-2wi-1) + μ P(wi|wi-4wi-3wi-1) + (1-λ-μ) P(wi|wi-4wi-2wi-1), where 0 ≤ λ ≤ 1, 0 ≤ μ ≤ 1, and 0 ≤ (1-λ-μ) ≤ 1 • Another (and more traditional) use for skipping is as a sort of poor man’s higher-order n-gram. One can, for instance, create a model of the form λ P(wi|wi-2wi-1) + μ P(wi|wi-3wi-1) + (1-λ-μ) P(wi|wi-3wi-2); no component probability depends on more than two previous words, but the overall probability is 4-gram-like, since it depends on wi-3, wi-2, and wi-1

  40. Skipping • For the 5-gram skipping experiments, all contexts depended on at most the previous four words, wi-4, wi-3, wi-2, and wi-1, but used the four words in a variety of ways • For readability and conciseness, we define v = wi-4, w = wi-3, x = wi-2, y = wi-1

  41. Skipping • The first model interpolated dependencies on vw_y and v_xy; it does not work well on the smallest training data size, but is competitive for larger ones • In the second model, we add vwx_ to the first model, gaining roughly .02 to .04 bits over it • Next, we add back in the dependencies on the missing words: xvwy, wvxy, and yvwx; that is, all models depend on the same variables, but with the interpolation order modified • e.g., by xvwy, we refer to a model of the form P(z|vwxy) interpolated with P(z|vw_y) interpolated with P(z|w_y) interpolated with P(z|y) interpolated with P(z)

  42. Skipping • Interpolating together vwyx, vxyw, and wxyv (based on vwxy): this model puts each of the four preceding words in the last position for one component • This model does not work as well as the previous two, leading us to conclude that the y word is by far the most important

  43. Skipping • Interpolating together vwyx, vywx, and yvwx, which put the y word in each possible position in the backoff model: this was overall the worst model, reconfirming the intuition that the y word is critical • Finally, we interpolate together vwyx, vxyw, wxyv, vywx, yvwx, xvwy, and wvxy; the result is a marginal gain, less than 0.01 bits, over the best previous model

  44. Skipping • Pairs using the 1-back word (y): xy, wy, vy, uy, and ty • At the 4-gram level: xy, wy, and wx • The improvement over 4-gram pairs was still marginal

  45. Clustering • Consider a probability such as P(Tuesday | party on) • Perhaps the training data contains no instances of the phrase “party on Tuesday”, although other phrases such as “party on Wednesday” and “party on Friday” do appear • We can put words into classes, for example the word “Tuesday” into the class WEEKDAY • P(Tuesday | party on WEEKDAY)

  46. Clustering • When each word belongs to only one class, which is called hard clustering, this decomposition is a strict equality, a fact that can be trivially proven • Let Wi represent the cluster of word wi (1)

  47. Clustering • Since each word belongs to a single cluster, P(Wi|wi) = 1 (2) • Substituting (2) into (1) gives (3), the predictive clustering decomposition
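Equations (1)-(3) are images in the transcript; the derivation they refer to, in its standard form for hard clustering (the exact layout on the slides may differ slightly), is:

```latex
% (1) chain rule applied to the pair (W_i, w_i):
P(W_i w_i \mid w_{i-2} w_{i-1}) = P(W_i \mid w_{i-2} w_{i-1})\, P(w_i \mid w_{i-2} w_{i-1} W_i)

% (2) hard clustering: the cluster is determined by the word, so
P(W_i \mid w_i) = 1
\quad\Rightarrow\quad
P(W_i w_i \mid w_{i-2} w_{i-1}) = P(w_i \mid w_{i-2} w_{i-1})

% (3) combining (1) and (2) gives the predictive clustering form:
P(w_i \mid w_{i-2} w_{i-1}) = P(W_i \mid w_{i-2} w_{i-1})\, P(w_i \mid w_{i-2} w_{i-1} W_i)
```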
