
Machine Learning approaches for dealing with Limited Bilingual Data in SMT
Gholamreza Haffari, Simon Fraser University, PhD Seminar, August 2009


Presentation Transcript


  1. Machine Learning approaches for dealing with Limited Bilingual Data in SMT. Gholamreza Haffari, Simon Fraser University, PhD Seminar, August 2009

  2. Learning Problems (I)
  • Supervised Learning: given a sample of object-label pairs (x_i, y_i), find the predictive relationship between objects and labels
  • Unsupervised Learning: given a sample consisting of only objects, look for interesting structures in the data and group similar objects

  3. Learning Problems (II)
  • Now consider training data consisting of:
  • Labeled data: object-label pairs (x_i, y_i)
  • Unlabeled data: objects x_j
  • This leads to the following learning scenarios:
  • Semi-Supervised Learning: find the best mapping from objects to labels, benefiting from the unlabeled data
  • Transductive Learning: find the labels of the unlabeled data
  • Active Learning: find the mapping while actively querying an oracle for the labels of unlabeled data

  4. This Thesis
  • I consider semi-supervised / transductive / active learning scenarios for statistical machine translation
  • Facts:
  • Untranslated sentences (unlabeled data) are much cheaper to collect than translated sentences (labeled data)
  • A large number of labeled examples (sentence pairs) is necessary to train a high-quality SMT model

  5. Motivations
  • Low-density language pairs
  • The number of people speaking the language is small
  • Limited online resources are available
  • Adapting to a new style/domain/topic
  • Training on sports, testing on politics
  • Overcoming training and test mismatch
  • Training on text, testing on speech

  6. Statistical Machine Translation
  • Translate from a source language F to a target language E by computer, using a statistical model MFE
  • MFE is a standard log-linear model over feature functions h_i(F, E) with weights w_i:
  MFE(E | F) ∝ exp( Σ_i w_i · h_i(F, E) )
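As a concrete illustration of the log-linear model above, here is a minimal Python sketch of how a candidate translation E for a source sentence F could be scored from feature function values and weights; the feature names and numbers are illustrative, not taken from the thesis or from Portage.

    import math

    def loglinear_score(weights, feature_values):
        # score(F, E) = sum_i w_i * h_i(F, E); exp(score) is proportional to MFE(E | F)
        return sum(weights[name] * value for name, value in feature_values.items())

    # Illustrative feature values h_i(F, E) for one candidate translation.
    features = {"lm": -12.4, "phrase_table": -8.1, "distortion": -2.0, "word_penalty": -6.0}
    weights = {"lm": 0.5, "phrase_table": 0.3, "distortion": 0.1, "word_penalty": 0.1}

    score = loglinear_score(weights, features)
    print(score, math.exp(score))  # the decoder searches for the E that maximizes this score

The decoder's job is then a search over candidate translations for the highest-scoring E.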

  7. Phrase-based SMT Model
  • MFE is composed of two main components:
  • The language model score f_lm: takes care of the fluency of the generated translation in the target language
  • The phrase table score f_pt: takes care of keeping the content of the source sentence in the generated translation
  • A huge bitext is needed to learn a high-quality phrase dictionary

  8. Self-Training
  • Labeled data {(x_i, y_i)} and unlabeled data {x_j}
  • Loop: train a model on the labeled data, label the unlabeled data, select the newly labeled examples to keep, and add them to the labeled data
  • How to do the selection and re-training?
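To make the Train/Select loop on this slide concrete, here is a minimal, generic self-training sketch in Python. The helpers train and predict_with_confidence and the fixed confidence threshold are placeholders of my own, not the thesis's exact procedure.

    def self_train(labeled, unlabeled, train, predict_with_confidence,
                   threshold=0.9, max_iters=10):
        # Generic self-training: train on labeled data, label the unlabeled pool,
        # move confident predictions into the labeled set, and repeat.
        labeled = list(labeled)
        pool = list(unlabeled)
        model = train(labeled)
        for _ in range(max_iters):
            confident, unsure = [], []
            for x in pool:
                y, conf = predict_with_confidence(model, x)
                (confident if conf >= threshold else unsure).append((x, y))
            if not confident:
                break  # nothing confident enough to add; stop
            labeled.extend(confident)
            pool = [x for x, _ in unsure]
            model = train(labeled)  # re-train on the enlarged labeled set
        return model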

  9. Outline
  • An analysis of Self-training for Decision Lists
  • Semi-supervised / transductive Learning for SMT
  • Active Learning for SMT
  • Single Language-Pair
  • Multiple Language-Pair
  • Conclusions & Future Work

  10. Outline
  • An analysis of Self-training for Decision Lists
  • Semi-supervised / transductive Learning for SMT
  • Active Learning for SMT
  • Single Language-Pair
  • Multiple Language-Pair
  • Conclusions & Future Work

  11. Decision List (DL)
  • A Decision List is an ordered set of rules.
  • Given an instance x, the first applicable rule determines the class label.
  • Instead of ordering the rules, we can give each rule a weight.
  • Among all rules applicable to an instance x, apply the rule with the highest weight.
  • The parameters are the weights, which specify the ordering of the rules.
  • Rules: if x has feature f → class k, with weight θ_{f,k}
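A minimal sketch of how such a weighted decision list could classify an instance; the rule triples below are illustrative (in the spirit of the WSD rules on the next slide), not parameters from the thesis.

    def dl_predict(rules, instance_features, default=None):
        # rules: list of (feature, label, weight); apply the highest-weight applicable rule.
        applicable = [(weight, label) for feature, label, weight in rules
                      if feature in instance_features]
        if not applicable:
            return default
        return max(applicable)[1]

    # Illustrative rules: feature -> class, with a confidence weight.
    rules = [("company", "+1", 0.96), ("life", "-1", 0.97), ("animal", "-1", 0.90)]
    print(dl_predict(rules, {"company", "said", "operating"}))  # -> "+1"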

  12. DL for Word Sense Disambiguation (Yarowsky 1995)
  • WSD: specify the most appropriate sense (meaning) of a word in a given sentence.
  • Consider these two sentences:
  • … company said the plant is still operating. → factory sense (+), features (company, operating)
  • … and divide life into plant and animal kingdom. → living organism sense (-), features (life, animal)
  • Example rules: if company → +1, confidence weight .96; if life → -1, confidence weight .97; …

  13. Bipartite Graph Representation (Corduneanu 2006; Haffari & Sarkar 2007)
  • The graph has feature nodes F (e.g. company, operating, life, animal, …) on one side and instance nodes X on the other (e.g. "+1: company said the plant is still operating", "-1: divide life into plant and animal kingdom", plus the unlabeled instances); an edge links a feature to each instance containing it.
  • We propose to view self-training as propagating the labels of the initially labeled nodes to the rest of the graph nodes.

  14. Self-Training on the Graph (Haffari & Sarkar 2007)
  • Diagram: each feature node f and instance node x of the bipartite graph (F, X) carries a labeling distribution q_f or q_x over the classes (+/-), e.g. (.6, .4) or (.7, .3).

  15. Goals of the Analysis
  • To find reasonable objective functions for the self-training algorithms on the bipartite graph.
  • The objective functions may shed light on the empirical success of different DL-based self-training algorithms.
  • They can tell us what kinds of properties in the data are well exploited and captured by the algorithms.
  • They are also useful in proving the convergence of the algorithms.

  16. Useful Operations
  • Average: takes the average distribution of the neighbors, e.g. neighbors (.2, .8) and (.4, .6) give (.3, .7).
  • Majority: takes the majority label of the neighbors, e.g. neighbors (.2, .8) and (.4, .6) give (0, 1).
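The two operations can be written down directly; this small Python sketch reproduces the numeric examples from the slide.

    def average(neighbor_dists):
        # Component-wise mean of the neighbors' label distributions.
        n = len(neighbor_dists)
        k = len(neighbor_dists[0])
        return tuple(sum(d[i] for d in neighbor_dists) / n for i in range(k))

    def majority(neighbor_dists):
        # Point distribution on the label favored by the most neighbors.
        k = len(neighbor_dists[0])
        votes = [max(range(k), key=lambda i: d[i]) for d in neighbor_dists]
        winner = max(set(votes), key=votes.count)
        return tuple(1.0 if i == winner else 0.0 for i in range(k))

    print(average([(.2, .8), (.4, .6)]))   # -> (0.3, 0.7), as on the slide
    print(majority([(.2, .8), (.4, .6)]))  # -> (0.0, 1.0), as on the slide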

  17. Analyzing Self-Training
  • Theorem. The objective functions shown on the slide (defined over the labeling distributions of the feature nodes F and instance nodes X) are optimized by the corresponding label propagation algorithms on the bipartite graph.
  • Related to graph-based semi-supervised learning (Zhu et al 2003).
  • The algorithms converge in polynomial time, O(|F|^2 |X|^2).

  18. Another Useful Operation
  • Product: takes the label with the highest mass in the (component-wise) product distribution of the neighbors, e.g. neighbors (.4, .6) and (.8, .2) have product (.32, .12), so the result is (1, 0).
  • This way of combining distributions is motivated by the Product-of-Experts framework (Hinton 1999).
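A matching sketch for the product operation, again reproducing the slide's example:

    def product_label(neighbor_dists):
        # Component-wise product of the neighbors' distributions (product-of-experts
        # style), then a hard label on the largest component.
        k = len(neighbor_dists[0])
        prod = [1.0] * k
        for d in neighbor_dists:
            prod = [prod[i] * d[i] for i in range(k)]
        winner = max(range(k), key=lambda i: prod[i])
        return tuple(1.0 if i == winner else 0.0 for i in range(k))

    print(product_label([(.4, .6), (.8, .2)]))  # product (.32, .12) -> (1.0, 0.0), as on the slide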

  19. Average-Product
  • Theorem. This algorithm optimizes the objective function shown on the slide, in which the instances get hard labels and the features get soft labels.

  20. What about Log-Likelihood?
  • Initially, the labeling distribution is uniform for unlabeled vertices and a delta-like distribution for labeled vertices.
  • By learning the parameters, we would like to reduce the uncertainty in the labeling distribution while respecting the labeled data: the negative log-likelihood of the old and newly labeled data.

  21. Connection between the two Analyses
  • Lemma. By minimizing the K1 (t log t) objective of Avg-Prod, we are minimizing an upper bound on the negative log-likelihood.
  • Lemma. If m is the number of features connected to an instance, then the bound shown on the slide holds.

  22. Outline
  • An analysis of Self-training for Decision Lists
  • Semi-supervised / transductive Learning for SMT
  • Active Learning for SMT
  • Single Language-Pair
  • Multiple Language-Pair
  • Conclusions & Future Work

  23. Self-Training for SMT
  • Train the log-linear model MFE on the bilingual text (sentence pairs F, E)
  • Decode the monolingual text F to produce translated text (F, E)
  • Select high-quality sentence pairs from the translations
  • Re-train the SMT model on the bilingual text together with the selected pairs

  24. Self-Training for SMT (the diagram from slide 23, repeated)

  25. Selecting Sentence Pairs
  • First, give scores:
  • Use the normalized decoder score
  • Confidence estimation method (Ueffing & Ney 2007)
  • Then select based on the scores:
  • Importance sampling
  • Those whose score is above a threshold
  • Keep all sentence pairs
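A minimal sketch of two of the selection schemes above, assuming each candidate is a (source, translation, normalized score) triple; the threshold value and the with-replacement sampling are simplifications of my own, not the thesis's exact setup.

    import random

    def select_by_threshold(scored_pairs, threshold=0.6):
        # Keep sentence pairs whose normalized score is above a threshold.
        return [(f, e) for f, e, score in scored_pairs if score > threshold]

    def select_by_importance_sampling(scored_pairs, k):
        # Sample k sentence pairs with probability proportional to their score
        # (with replacement, for simplicity).
        weights = [score for _, _, score in scored_pairs]
        chosen = random.choices(scored_pairs, weights=weights, k=k)
        return [(f, e) for f, e, _ in chosen]

    scored_pairs = [("f1", "e1", 0.9), ("f2", "e2", 0.4), ("f3", "e3", 0.7)]
    print(select_by_threshold(scored_pairs))
    print(select_by_importance_sampling(scored_pairs, 2))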

  26. Self-Training for SMT (the diagram from slide 23, repeated)

  27. Re-Training the SMT Model (I)
  • Simply add the newly selected sentence pairs to the initial bitext and fully re-train the phrase table
  • Or use a mixture model of phrase pair probabilities: combine the initial phrase table with the phrase pairs from the newly selected sentence pairs, as λ · (initial phrase table) + (1-λ) · (new phrase table)
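A minimal sketch of the mixture idea, assuming each phrase table is a dictionary of conditional phrase probabilities and that λ is a tunable interpolation weight (the 0.8 here is only illustrative):

    def mix_phrase_tables(initial_pt, new_pt, lam=0.8):
        # p(e|f) = lam * p_initial(e|f) + (1 - lam) * p_new(e|f)
        mixed = {}
        for key in set(initial_pt) | set(new_pt):
            mixed[key] = lam * initial_pt.get(key, 0.0) + (1 - lam) * new_pt.get(key, 0.0)
        return mixed

    # Phrase tables as {(source phrase, target phrase): probability} dictionaries.
    initial_pt = {("la maison", "the house"): 0.6, ("maison", "house"): 0.7}
    new_pt = {("la maison", "the house"): 0.8, ("maison bleue", "blue house"): 0.5}
    print(mix_phrase_tables(initial_pt, new_pt, lam=0.8))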

  28. Re-Training the SMT Model (II)
  • Use the new sentence pairs to train an additional phrase table and use it as a new feature function in the SMT log-linear model:
  • Phrase table 1: trained on sentences for which we have the true translations
  • Phrase table 2: trained on sentences with their generated translations

  29. Experimental Setup
  • We use Portage from NRC as the underlying SMT system (Ueffing et al 2007)
  • It is an implementation of phrase-based SMT
  • We provide the following features, among others:
  • Language model
  • Several (smoothed) phrase tables
  • Distortion penalty based on the skipped words

  30. French to English (Transductive)
  • Select a fixed number of newly translated sentences with importance sampling based on normalized decoder scores, then fully re-train the phrase table.
  • The improvement in BLEU score is almost equivalent to adding 50K training examples.

  31. Chinese to English (Transductive), using an additional phrase table
  • WER (word error rate): lower is better
  • PER (position-independent WER): lower is better
  • BLEU: higher is better
  • In the results table: bold marks the best result, italic marks a significantly better result

  32. Chinese to English (Inductive), using importance sampling and an additional phrase table
  • WER (word error rate): lower is better
  • PER (position-independent WER): lower is better
  • BLEU: higher is better
  • In the results table: bold marks the best result, italic marks a significantly better result

  33. Chinese to English (Inductive), using importance sampling and an additional phrase table
  • WER (word error rate): lower is better
  • PER (position-independent WER): lower is better
  • BLEU: higher is better
  • In the results table: bold marks the best result, italic marks a significantly better result

  34. Why does it work?
  • It reinforces the parts of the phrase translation model that are relevant for the test corpus, yielding a more focused probability distribution
  • It composes new phrases (an example was shown on the slide)

  35. Outline
  • An analysis of Self-training for Decision Lists
  • Semi-supervised / transductive Learning for SMT
  • Active Learning for SMT
  • Single Language-Pair
  • Multiple Language-Pair
  • Conclusions & Future Work

  36. Active Learning for SMT
  • Train the log-linear model MFE on the bilingual text (sentence pairs F, E)
  • Decode the monolingual text F to produce translated text
  • Select informative source sentences F and have them translated by a human
  • Re-train the SMT models on the bilingual text together with the new human translations

  37. Active Learning for SMT (the diagram from slide 36, repeated)

  38. Sentence Selection Strategies
  • Baselines:
  • Randomly choose sentences from the pool of monolingual sentences
  • Choose longer sentences from the monolingual corpus
  • Other methods:
  • Similarity to the bilingual training data
  • Decoder's confidence in the translations (Kato & Barnard 2007)
  • Entropy of the translations
  • Reverse model
  • Utility of the translation units

  39. Similarity & Confidence
  • Sentences similar to the bilingual text are easy for the model to translate
  • So select the sentences that are dissimilar to the bilingual text
  • Sentences for which the model is not confident about its translations are selected first
  • Hopefully, highly confident translations are good ones
  • Use the normalized decoder score to measure confidence

  40. Entropy of the Translations
  • The higher the entropy of the translation distribution, the higher the chance of selecting that sentence
  • High entropy means the SMT model is not confident about the translation
  • The entropy is approximated using the n-best list of translations
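One way to approximate this entropy from the n-best list is to renormalize the n-best model scores with a softmax; this is a sketch under that assumption, not necessarily the exact estimator used in the thesis.

    import math

    def nbest_entropy(nbest_scores):
        # Softmax over the n-best (log-linear) scores gives an approximate
        # posterior over translations; return its entropy.
        m = max(nbest_scores)
        exps = [math.exp(s - m) for s in nbest_scores]
        z = sum(exps)
        probs = [e / z for e in exps]
        return -sum(p * math.log(p) for p in probs if p > 0)

    print(nbest_entropy([10.0, 2.0, 1.0]))  # peaked list -> low entropy (confident)
    print(nbest_entropy([3.0, 2.9, 2.8]))   # flat list -> high entropy (select first)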

  41. MEF Rev: MFE Reverse Model Comparing • the original sentence, and • the final sentence Tells us something about the value of the sentence I will let you know about the issue later Je vais vous faire plus tard sur la question I will later on the question

  42. m b Monolingual Text I will let you know about the issue later Bilingual Text 5 6 6 6 1 1 2 8 3 3 7 Utility of the Translation Units Phrases are the basic units of translations in phrase-based SMT The more frequent a phrase is in the bilingual text, the less important it is The more frequent a phrase is in the monolingual text, the more important it is

  43. Sentence Selection: Probability Ratio Score
  • For a monolingual sentence S, consider the bag of its phrases
  • The score of S depends on the probability ratio of its phrases: the probability of each phrase under the monolingual text versus under the bilingual text
  • The phrase probability ratio captures our intuition about the utility of the translation units
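A hedged sketch of such a score, assuming (as a simplification) that the sentence score is the geometric mean of its phrases' monolingual-to-bilingual probability ratios; the exact formula from the thesis is not reproduced here, and the probability estimates below are illustrative.

    import math

    def prob_ratio_score(phrases, mono_prob, bi_prob, floor=1e-6):
        # Phrases frequent in the monolingual text but rare in the bilingual text
        # push the score up, i.e. the sentence contains useful untranslated units.
        ratios = [mono_prob.get(p, floor) / bi_prob.get(p, floor) for p in phrases]
        return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

    mono_prob = {"the issue": 0.02, "let you know": 0.01}
    bi_prob = {"the issue": 0.02, "let you know": 0.001}
    print(prob_ratio_score(["the issue", "let you know"], mono_prob, bi_prob))  # ~3.16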

  44. Sentence Segmentation
  • How do we prepare the bag of phrases for a sentence S?
  • For the bilingual text, we have the segmentation from the training phase of the SMT model
  • For the monolingual text, we run the SMT model to produce the top-n translations and segmentations
  • Instead of phrases, we can also use n-grams

  45. Active Learning for SMT (the diagram from slide 36, repeated)

  46. Re-Training the SMT Model
  • We use two phrase tables in each SMT model MFiE:
  • One trained on sentences for which we have the true translations
  • One trained on sentences with their generated translations (self-training)

  47. Experimental Setup
  • Dataset size: we select 200 sentences from the monolingual sentence set in each of 25 iterations
  • We use Portage from NRC as the underlying SMT system (Ueffing et al 2007)

  48. The Simulated AL Setting
  • Results plot comparing the 'Utility of phrases', 'Random', and 'Decoder's Confidence' selection strategies (higher is better)

  49. The Simulated AL Setting
  • Results plot (higher is better)

  50. Domain Adaptation
  • Now suppose both the test text and the monolingual text are out-of-domain with respect to the bilingual text
  • 'Decoder's Confidence' does a good job
  • 'Utility 1-gram' outperforms the other methods since it quickly expands the lexicon in an effective manner
  • Plots compare 'Utility 1-gram', 'Decoder's Conf', and 'Random'
