
Statistical NLP Spring 2011


Presentation Transcript


  1. Statistical NLP Spring 2011 – Lecture 7: Phrase-Based MT – Dan Klein – UC Berkeley

  2. Machine Translation: Examples

  3. Levels of Transfer

  4. Word-Level MT: Examples
  • la politique de la haine . (Foreign Original)
    politics of hate . (Reference Translation)
    the policy of the hatred . (IBM4+N-grams+Stack)
  • nous avons signé le protocole . (Foreign Original)
    we did sign the memorandum of agreement . (Reference Translation)
    we have signed the protocol . (IBM4+N-grams+Stack)
  • où était le plan solide ? (Foreign Original)
    but where was the solid plan ? (Reference Translation)
    where was the economic base ? (IBM4+N-grams+Stack)

  5. Phrasal / Syntactic MT: Examples

  6. MT: Evaluation
  • Human evaluations: subjective measures, fluency/adequacy
  • Automatic measures: n-gram match to references
    • NIST measure: n-gram recall (worked poorly)
    • BLEU: n-gram precision (no one really likes it, but everyone uses it)
  • BLEU:
    • P1 = unigram precision
    • P2, P3, P4 = bi-, tri-, and 4-gram precision
    • Weighted geometric mean of P1–P4
    • Brevity penalty (why?)
    • Somewhat hard to game… (see the sketch below)
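  As a concrete illustration of the BLEU recipe above, here is a minimal corpus-level sketch in Python: clipped n-gram precisions P1–P4, their geometric mean, and the brevity penalty. It assumes tokenized input and a single reference per candidate; the function names are illustrative, not taken from any particular toolkit.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(candidates, references, max_n=4):
        # candidates, references: parallel lists of token lists (one reference each)
        match = [0] * max_n   # clipped n-gram matches
        total = [0] * max_n   # candidate n-gram counts
        cand_len = ref_len = 0
        for cand, ref in zip(candidates, references):
            cand_len += len(cand)
            ref_len += len(ref)
            for n in range(1, max_n + 1):
                c_counts, r_counts = ngrams(cand, n), ngrams(ref, n)
                # clip each candidate n-gram count by its count in the reference
                match[n - 1] += sum(min(c, r_counts[g]) for g, c in c_counts.items())
                total[n - 1] += sum(c_counts.values())
        if cand_len == 0:
            return 0.0
        precisions = [m / t if t else 0.0 for m, t in zip(match, total)]
        if min(precisions) == 0.0:
            return 0.0
        # uniform weights: geometric mean of P1..P4
        log_avg = sum(math.log(p) for p in precisions) / max_n
        # brevity penalty: punish candidates shorter than the references (the "why?")
        bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / cand_len)
        return bp * math.exp(log_avg)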

  7. Automatic Metrics Work (?)

  8. Corpus-Based MT
  Modeling correspondences between languages
  Sentence-aligned parallel corpus:
    Yo lo haré mañana ↔ I will do it tomorrow
    Hasta pronto ↔ See you soon
  → Model of translation → Machine translation system:
    Novel sentence: Yo lo haré pronto
    Candidate outputs: I will do it soon / I will do it around / See you tomorrow / See you around

  9. Phrase-Based Systems
  Sentence-aligned corpus → word alignments → phrase table (translation model)
  Example phrase-table entries:
    cat ||| chat ||| 0.9
    the cat ||| le chat ||| 0.8
    dog ||| chien ||| 0.8
    house ||| maison ||| 0.6
    my house ||| ma maison ||| 0.9
    language ||| langue ||| 0.9
    …
  Many slides and examples from Philipp Koehn or John DeNero
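  The "|||"-separated lines above are the usual plain-text phrase-table format. Below is a small sketch of loading such a file into a lookup from source phrase to scored translation options; the field order (source ||| target ||| score) and the file-based interface are assumptions of this sketch.

    from collections import defaultdict

    def load_phrase_table(path):
        # maps a source phrase to a list of (target phrase, score) options
        table = defaultdict(list)
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                fields = [field.strip() for field in line.split("|||")]
                if len(fields) < 3:
                    continue  # skip malformed lines
                source, target, score = fields[0], fields[1], float(fields[2])
                table[source].append((target, score))
        return table

    # e.g. with the entries above: table["the cat"] -> [("le chat", 0.8)]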

  10. Phrase-Based Decoding
  这 7人 中包括 来自 法国 和 俄罗斯 的 宇航 员 .
  (gloss: these 7 people include astronauts coming from France and Russia .)
  Decoder design is important: [Koehn et al., 03]

  11. The Pharaoh “Model” [Koehn et al., 2003]
  Factors: segmentation, translation, distortion (see the sketch below)
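  To make the three factors concrete, here is a schematic scorer for one segmented translation: a phrase translation score, a distortion penalty of the α^|jump| form, and a language-model score over the assembled English. The 0-indexed, end-exclusive spans, the α parameter, and the callback names are assumptions of this sketch, not the exact Pharaoh feature set.

    import math

    def pharaoh_score(phrase_pairs, phrase_logprob, lm_logprob, alpha=0.9):
        # phrase_pairs: list of (f_start, f_end, f_phrase, e_phrase) in English order,
        # with foreign spans 0-indexed and end-exclusive
        score = 0.0
        e_tokens = []
        prev_end = 0
        for f_start, f_end, f_phrase, e_phrase in phrase_pairs:
            score += phrase_logprob(f_phrase, e_phrase)         # translation model
            score += abs(f_start - prev_end) * math.log(alpha)  # distortion penalty
            prev_end = f_end
            e_tokens.extend(e_phrase.split())
        return score + lm_logprob(e_tokens)                     # language model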

  12. The Pharaoh “Model” Where do we get these counts?

  13. Phrase Weights
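  One standard answer to the previous slide's "where do we get these counts?" is relative frequency over phrase pairs extracted from word-aligned sentences: phi(f | e) = count(f, e) / count(e). A minimal sketch, assuming the phrase-pair extraction step has already produced (f_phrase, e_phrase) pairs:

    from collections import Counter, defaultdict

    def phrase_weights(extracted_pairs):
        # extracted_pairs: (f_phrase, e_phrase) tuples from word-aligned sentence pairs
        pairs = list(extracted_pairs)
        pair_counts = Counter(pairs)
        e_counts = Counter(e for _, e in pairs)
        # relative frequency: phi(f | e) = count(f, e) / count(e)
        phi = defaultdict(dict)
        for (f, e), count in pair_counts.items():
            phi[e][f] = count / e_counts[e]
        return phi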

  14. Phrase-Based Decoding

  15. Monotonic Word Translation
  • Cost is LM * TM
  • It’s an HMM?
    • P(e | e-1, e-2)
    • P(f | e)
  • State includes
    • Exposed English
    • Position in foreign
  • Example expansions (from the figure): state […, a slap, 5] with score 0.00001;
    TM: a ← to 0.8, a ← by 0.1; LM: a slap to 0.02, a slap by 0.01;
    giving […, slap to, 6] 0.00000016 and […, slap by, 6] 0.00000001
  • Dynamic program loop?
      for (fPosition in 1…|f|)
        for (eContext in allEContexts)
          for (eOption in translations[fPosition])
            score = scores[fPosition-1][eContext] * LM(eContext+eOption) * TM(eOption, fWord[fPosition])
            scores[fPosition][eContext[2]+eOption] = max score
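  A runnable rendering of the loop above, assuming a trigram LM and a word-to-word TM are supplied as functions; backpointers are omitted, so it returns only the best score. The names are illustrative.

    def monotone_decode(f_words, translations, lm, tm):
        # translations[f]: candidate English words for foreign word f
        # lm(w1, w2, w3) = P(w3 | w1 w2); tm(e, f) = P(f | e)
        scores = {("<s>", "<s>"): 1.0}      # exposed English bigram -> best score
        for f in f_words:
            new_scores = {}
            for (w1, w2), prev in scores.items():
                for e in translations[f]:
                    s = prev * lm(w1, w2, e) * tm(e, f)
                    key = (w2, e)           # only the last two English words matter
                    if s > new_scores.get(key, 0.0):
                        new_scores[key] = s
            scores = new_scores
        return max(scores.values()) if scores else 0.0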

  16. Beam Decoding
  • For real MT models, this kind of dynamic program is a disaster (why?)
  • Standard solution is beam search: for each position, keep track of only the best k hypotheses
  • Still pretty slow… why?
  • Useful trick: cube pruning (Chiang 2005)
      for (fPosition in 1…|f|)
        for (eContext in bestEContexts[fPosition])
          for (eOption in translations[fPosition])
            score = scores[fPosition-1][eContext] * LM(eContext+eOption) * TM(eOption, fWord[fPosition])
            bestEContexts.maybeAdd(eContext[2]+eOption, score)
  Example from David Chiang
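  The same loop with a beam: only the k best English contexts survive at each foreign position (cube pruning is not shown). Again a sketch with illustrative names, following the same LM/TM interface as above.

    import heapq

    def beam_decode(f_words, translations, lm, tm, beam_size=10):
        beam = [(1.0, ("<s>", "<s>"))]      # (score, exposed English bigram)
        for f in f_words:
            expanded = {}
            for prev, (w1, w2) in beam:
                for e in translations[f]:
                    s = prev * lm(w1, w2, e) * tm(e, f)
                    key = (w2, e)
                    if s > expanded.get(key, 0.0):
                        expanded[key] = s
            # keep only the best k contexts for the next position
            beam = heapq.nlargest(beam_size, ((s, ctx) for ctx, s in expanded.items()))
        return max(beam)[0] if beam else 0.0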

  17. Phrase Translation
  • If monotonic, almost an HMM; technically a semi-HMM
  • If distortion… now what?
      for (fPosition in 1…|f|)
        for (lastPosition < fPosition)
          for (eContext in eContexts)
            for (eOption in translations[fPosition])
              … combine hypothesis for (lastPosition ending in eContext) with eOption

  18. Non-Monotonic Phrasal MT

  19. Pruning: Beams + Forward Costs
  • Problem: easy partial analyses are cheaper
  • Solution 1: use beams per foreign subset
  • Solution 2: estimate forward costs (A*-like)

  20. The Pharaoh Decoder

  21. Hypothesis Lattices

  22. Word Alignment

  23. Word Alignment
  E: What is the anticipated cost of collecting fees under the new proposal ?
  F: En vertu de les nouvelles propositions , quel est le coût prévu de perception de les droits ?
  (Untokenized: “What is the anticipated cost of collecting fees under the new proposal?” / “En vertu des nouvelles propositions, quel est le coût prévu de perception des droits?”)

  24. Unsupervised Word Alignment
  • Input: a bitext (pairs of translated sentences)
  • Output: alignments (pairs of translated words)
  • When words have unique sources, can represent as a (forward) alignment function a from French to English positions
  Example pair: nous acceptons votre opinion . ↔ we accept your view .
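  The forward alignment function a can be stored as a plain list of English positions, one per French word. Below, the slide's example pair is encoded with the diagonal, word-for-word alignment it depicts; the exact links are an assumption read off the figure.

    f_words = ["nous", "acceptons", "votre", "opinion", "."]
    e_words = ["we", "accept", "your", "view", "."]

    # a[j] = index of the English word responsible for French word j
    a = [0, 1, 2, 3, 4]

    aligned_pairs = [(f_words[j], e_words[a[j]]) for j in range(len(f_words))]
    # [('nous', 'we'), ('acceptons', 'accept'), ('votre', 'your'),
    #  ('opinion', 'view'), ('.', '.')]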

  25. 1-to-Many Alignments

  26. Many-to-Many Alignments

  27. IBM Model 1 (Brown 93)
  • Alignments: a hidden vector called an alignment specifies which English source is responsible for each French target word

  28. IBM Models 1/2
  Model parameters:
    Transitions: P(A2 = 3)
    Emissions: P(F1 = Gracias | EA1 = Thank)
  F: Gracias , lo haré de muy buen grado .
  E: Thank you , I shall do so gladly .
  A: 1 3 7 6 8 8 8 8 9 (one English position per Spanish word)

  29. Evaluating TMs
  • How do we measure quality of a word-to-word model?
  • Method 1: use in an end-to-end translation system
    • Hard to measure translation quality
      • Option: human judges
      • Option: reference translations (NIST, BLEU)
      • Option: combinations (HTER)
    • Actually, no one uses word-to-word models alone as TMs
  • Method 2: measure quality of the alignments produced
    • Easy to measure
    • Hard to know what the gold alignments should be
    • Often does not correlate well with translation quality (like perplexity in LMs)

  30. Alignment Error Rate
  • Sure alignments S, possible alignments P (with S ⊆ P), predicted alignments A
  • AER(A; S, P) = 1 − (|A ∩ S| + |A ∩ P|) / (|A| + |S|)
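  The definition above translates directly into a few lines; alignments are represented as sets of (English position, French position) pairs, and S is assumed to be a subset of P.

    def aer(predicted, sure, possible):
        # predicted (A), sure (S), possible (P): sets of (i, j) index pairs, with S ⊆ P
        A, S, P = set(predicted), set(sure), set(possible)
        return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))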

  31. Problems with Model 1
  • There’s a reason they designed Models 2–5!
  • Problems: alignments jump around, align everything to rare words
  • Experimental setup:
    • Training data: 1.1M sentences of French-English text, Canadian Hansards
    • Evaluation metric: Alignment Error Rate (AER)
    • Evaluation data: 447 hand-aligned sentences

  32. Intersected Model 1
  • Post-intersection: standard practice to train models in each direction, then intersect their predictions [Och and Ney, 03]
  • Second model is basically a filter on the first
    • Precision jumps, recall drops
    • End up not guessing hard alignments
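  A sketch of the post-intersection step: run one model in each direction and keep only the links both agree on, which is why precision jumps while recall drops. The link orientation conventions here are an assumption.

    def intersect_alignments(f2e_links, e2f_links):
        # f2e_links: (f_pos, e_pos) pairs from the French-to-English model
        # e2f_links: (e_pos, f_pos) pairs from the English-to-French model
        return {(f, e) for (f, e) in f2e_links if (e, f) in e2f_links}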

  33. Joint Training?
  • Overall:
    • Similar high precision to post-intersection
    • But recall is much higher
    • More confident about positing non-null alignments

  34. Monotonic Translation
  Japan shaken by two new quakes
  Le Japon secoué par deux nouveaux séismes

  35. Local Order Change
  Japan is at the junction of four tectonic plates
  Le Japon est au confluent de quatre plaques tectoniques

  36. IBM Model 2
  • Alignments tend to the diagonal (broadly at least)
  • Other schemes for biasing alignments towards the diagonal:
    • Relative vs. absolute alignment
    • Asymmetric distances
    • Learning a full multinomial over distances

  37. EM for Models 1/2
  • Model parameters:
    • Translation probabilities (Models 1 and 2)
    • Distortion parameters (Model 2 only)
  • Start with uniform parameters
  • For each sentence:
    • For each French position j:
      • Calculate posterior over English positions (or just use best single alignment)
      • Increment count of word fj with word ei by these amounts
    • Also re-estimate distortion probabilities for Model 2
  • Iterate until convergence (see the Model 1 sketch below)
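  A compact sketch of this loop for Model 1 only (no distortion parameters); using the best single alignment would replace the soft posterior with an argmax. A NULL token can be appended to each English sentence by the caller if wanted.

    from collections import defaultdict

    def model1_em(bitext, iterations=10):
        # bitext: list of (f_words, e_words) sentence pairs
        f_vocab = {f for f_words, _ in bitext for f in f_words}
        uniform = 1.0 / len(f_vocab)
        # start with uniform translation probabilities t[e][f] = P(f | e)
        t = defaultdict(lambda: defaultdict(lambda: uniform))
        for _ in range(iterations):
            counts = defaultdict(lambda: defaultdict(float))
            totals = defaultdict(float)
            for f_words, e_words in bitext:
                for f in f_words:
                    # posterior over English positions for this French word
                    z = sum(t[e][f] for e in e_words)
                    for e in e_words:
                        p = t[e][f] / z
                        counts[e][f] += p
                        totals[e] += p
            # M-step: renormalize the expected counts
            t = defaultdict(lambda: defaultdict(float))
            for e in counts:
                for f in counts[e]:
                    t[e][f] = counts[e][f] / totals[e]
        return t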

  38. Example

  39. Phrase Movement
  On Tuesday Nov. 4, earthquakes rocked Japan once again
  Des tremblements de terre ont à nouveau touché le Japon jeudi 4 novembre.

  40. The HMM Model
  Model parameters:
    Transitions: P(A2 = 3 | A1 = 1)
    Emissions: P(F1 = Gracias | EA1 = Thank)
  F: Gracias , lo haré de muy buen grado .
  E: Thank you , I shall do so gladly .
  A: 1 3 7 6 8 8 8 8 9 (one English position per Spanish word)

  41. The HMM Model
  • Model 2 preferred global monotonicity
  • We want local monotonicity:
    • Most jumps are small (figure: distribution over jump sizes −2, −1, 0, 1, 2, 3)
  • HMM model (Vogel 96)
  • Re-estimate using the forward-backward algorithm
  • Handling nulls requires some care
  • What are we still missing?
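  The HMM's transitions are usually parameterized by the relative jump a_j − a_{j−1} rather than by absolute positions. A small sketch of turning (expected) jump counts from forward-backward into a transition distribution; the clipping window is an assumption of this sketch.

    from collections import Counter

    def jump_distribution(jump_counts, max_jump=5):
        # jump_counts: {relative jump a_j - a_{j-1}: expected count}
        clipped = Counter()
        for jump, count in jump_counts.items():
            clipped[max(-max_jump, min(max_jump, jump))] += count
        total = sum(clipped.values())
        if total == 0:
            return {}
        return {jump: count / total for jump, count in clipped.items()}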

  42. HMM Examples

  43. AER for HMMs

  44. IBM Models 3/4/5 [from Al-Onaizan and Knight, 1998]
  Mary did not slap the green witch
    n(3|slap):  Mary not slap slap slap the green witch
    P(NULL):    Mary not slap slap slap NULL the green witch
    t(la|the):  Mary no daba una bofetada a la verde bruja
    d(j|i):     Mary no daba una bofetada a la bruja verde

  45. Examples: Translation and Fertility

  46. Example: Idioms
  he is nodding ↔ il hoche la tête

  47. Example: Morphology

  48. Some Results • [Och and Ney 03]

  49. Decoding
  • In these word-to-word models:
    • Finding best alignments is easy (see the sketch below)
    • Finding translations is hard (why?)
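  Why alignment is easy in these models: under Model 1 the Viterbi alignment decomposes, so each French word independently picks its best English source. A sketch using a nested t[e][f] table like the one produced by the EM sketch above:

    def best_alignment_model1(f_words, e_words, t):
        # returns a[j] = argmax_i t[e_i][f_j], computed independently per French word
        return [max(range(len(e_words)), key=lambda i: t.get(e_words[i], {}).get(f, 0.0))
                for f in f_words]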
