
Machine Translation: Approaches, Challenges and Future






Presentation Transcript


  1. Machine Translation: Approaches, Challenges and Future Alon Lavie Language Technologies Institute School of Computer Science Carnegie Mellon University ITEC Dinner May 21, 2009

  2. Machine Translation: History • MT started in the 1940s, one of the first conceived applications of computers • Promising “toy” demonstrations in the 1950s, which failed miserably to scale up to “real” systems • ALPAC Report: MT recognized as an extremely difficult, “AI-complete” problem in the early 1960s • MT revival started in earnest in the 1980s (US, Japan) • Field dominated by rule-based approaches, requiring 100s of K-years of manual development • Economic incentive for developing MT systems for a small number of language pairs (mostly European languages) • Major paradigm shift in MT over the past decade: • From manually developed rule-based systems • To data-driven, statistical search-based approaches

  3. Machine Translation: Where are we today? • Age of Internet and Globalization – great demand for translation services and MT: • Multiple official languages of UN, EU, Canada, etc. • Documentation dissemination for large manufacturers (Microsoft, IBM, Caterpillar, US Steel, ALCOA) • Language and translation services business sector estimated at $15 Billion worldwide in 2008 and growing at a healthy pace • Economic incentive is still primarily within a small number of language pairs • Some fairly decent commercial products in the market for these language pairs • Primarily a product of rule-based systems after many years of development • New generation of data-driven “statistical” MT: Google, Language Weaver • Web-based (mostly free) MT services: Google, Babelfish, others… • Pervasive MT between many language pairs still non-existent, but Google is trying to change that!

  4. Representative Example: Google Translate • http://translate.google.com

  5. Google Translate

  6. Google Translate

  7. Example of High-Quality Rule-based MT PAHO’s Spanam system: • Mediante petición recibida por la Comisión Interamericana de Derechos Humanos (en adelante …) el 6 de octubre de 1997, el señor Lino César Oviedo (en adelante …) denunció que la República del Paraguay (en adelante …) violó en su perjuicio los derechos a las garantías judiciales … en su contra. • Through petition received by the Inter-American Commission on Human Rights (hereinafter …) on 6 October 1997, Mr. Linen César Oviedo (hereinafter “the petitioner”) denounced that the Republic of Paraguay (hereinafter …) violated to his detriment the rights to the judicial guarantees, to the political participation, to equal protection and to the honor and dignity consecrated in articles 8, 23, 24 and 11, respectively, of the American Convention on Human Rights (hereinafter …), as a consequence of judgments initiated against it.

  8. Machine Translation: Some Basic Terminology • Source Language (SL): the language of the original text that we wish to translate • Target Language (TL): the language into which we wish to translate • Translation Segment: language “unit” which is translated independently; usually sentences, sometimes smaller phrases or terms

  9. How Does MT Work? • Naïve MT: Translation Memory • Store in a database human translations of sentences (or shorter phrases) that have already been translated before • When translating a new document: • For each source sentence, search the DB to see if it has been translated before • If found, retrieve its translation! • “Fuzzy matches”, multiple translations • Main Advantage: translation output is always human-quality! • Main Disadvantage: many/most sentences haven’t been translated before and cannot be retrieved… • Translation Memories are heavily used by the commercial Language Service Provider industry – companies such as Echo International
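
To make the translation-memory idea concrete, here is a minimal Python sketch of the lookup step, including a simple fuzzy match; the stored sentence pairs, the similarity threshold, and the use of difflib are illustrative assumptions, not part of any commercial TM product.

```python
# Minimal translation-memory sketch: exact lookup first, then a fuzzy
# match against previously translated sentences (illustrative only).
import difflib

tm = {
    "please close the door": "veuillez fermer la porte",
    "the meeting starts at noon": "la réunion commence à midi",
}

def tm_lookup(source, threshold=0.8):
    if source in tm:
        return tm[source], 1.0                 # exact match: reuse the human translation as-is
    # fuzzy match: closest previously translated sentence, if similar enough
    close = difflib.get_close_matches(source, tm.keys(), n=1, cutoff=threshold)
    if close:
        score = difflib.SequenceMatcher(None, source, close[0]).ratio()
        return tm[close[0]], score             # human post-editing still needed
    return None, 0.0                           # never translated before: no help from the TM

print(tm_lookup("the meeting starts at noon"))    # exact hit
print(tm_lookup("please close the window"))       # fuzzy hit or miss, depending on the threshold
```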

  10. How Does MT Work? • All modern MT approaches are based on building translations for complete segments by putting together smaller pieces of translation • Core Questions: • What are these smaller pieces of translation? Where do they come from? • How does MT put these pieces together? • How does the MT system pick the correct (or best) translation among many options?

  11. Core Challenges of MT • Ambiguity and Language Divergences: • Human languages are highly ambiguous, and ambiguous in different ways in different languages • Ambiguity at all “levels”: lexical, syntactic, semantic, language-specific constructions and idioms • Amount of required knowledge: • Translation equivalencies for vast vocabularies (several 100k words and phrases) • Syntactic knowledge (how to map syntax of one language to another), plus more complex language divergences (semantic differences, constructions and idioms, etc.) • How do you acquire and construct a knowledge base that big that is (even mostly) correct and consistent?

  12. How to Tackle the Core Challenges • Manual Labor: 1000s of person-years of human experts developing large word and phrase translation lexicons and translation rules. Example: Systran’s RBMT systems. • Lots of Parallel Data: data-driven approaches for finding word and phrase correspondences automatically from large amounts of sentence-aligned parallel texts. Example: Statistical MT systems. • Learning Approaches: learn translation rules automatically from small amounts of human translated and word-aligned data. Example: CMU’s Statistical XFER approach. • Simplify the Problem: build systems that are limited-domain or constrained in other ways. Example: CATLYST system built for Caterpillar.

  13. Major Sources of Translation Problems • Lexical Differences: • Multiple possible translations for SL word, or difficulties expressing SL word meaning in a single TL word • Structural Differences: • Syntax of the SL is different from the syntax of the TL: word order, sentence and constituent structure • Differences in Mappings of Syntax to Semantics: • Meaning in TL is conveyed using a different syntactic structure than in the SL • Idioms and Constructions

  14. Lexical Differences • SL word has several different meanings that translate differently into TL • Ex: financial bank vs. river bank • Lexical Gaps: SL word reflects a unique meaning that cannot be expressed by a single word in TL • Ex: English snub doesn’t have a corresponding verb in French or German • TL has finer distinctions than SL → SL word should be translated differently in different contexts • Ex: English wall can be German Wand (internal) or Mauer (external)

  15. Structural Differences • Syntax of the SL is different from the syntax of the TL: • Word order within constituents: • English NPs: art adj n – the big boy • Hebrew NPs: art n art adj – ha yeled ha gadol • Constituent structure: • English is SVO: Subj Verb Obj – I saw the man • Modern Arabic is VSO: Verb Subj Obj • Different verb syntax: • Verb complexes in English vs. in German – I can eat the apple / Ich kann den Apfel essen • Case marking and free constituent order • German and other languages that mark case: Den Apfel esse ich – the(acc) apple eat I(nom)

  16. Syntax-to-Semantics Differences • Structure-change example: I like swimming → “Ich schwimme gern” (lit: I swim gladly) • Verb-argument example: Jones likes the film. → “Le film plaît à Jones.” (lit: “the film pleases to Jones”) • Passive Constructions • Example: French reflexive passives: Ces livres se lisent facilement (lit: *“These books read themselves easily”) → These books are easily read

  17. Idioms and Constructions • Main Distinction: meaning of the whole is not directly compositional from the meaning of its sub-parts → no compositional translation • Examples: • George is a bull in a china shop • He kicked the bucket • Can you please open the window?

  18. Formulaic Utterances • Good night. • tisbaH cala xEr (lit: “waking up on good”) • Romanization of Arabic from CallHome Egypt

  19. State-of-the-Art in MT • What users want: • General purpose (any text) • High quality (human level) • Fully automatic (no user intervention) • We can meet any 2 of these 3 goals today, but not all three at once: • FA + HQ: Knowledge-Based MT (KBMT) • FA + GP: Data-driven (Statistical) MT • GP + HQ: Include humans in the MT-loop – post-editing!

  20. Types of MT Applications: • Assimilation: multiple source languages, uncontrolled style/topic. Requires general purpose MT: good fit for “Google Translate” • Dissemination: one source language, into multiple target languages; often controlled style, single topic/domain (at least per user): this is the common commercial translation scenario: good fit for KBMT and customized Statistical MT • Communication: lower quality may be okay, but system robustness and real-time performance are required.

  21. Approaches to MT: The Vauquois MT Triangle • Translation can operate at increasing depths of analysis: Direct (word-to-word), Transfer (mapping syntactic structures, e.g. [s [vp accusative_pronoun “chiamare” proper_name]] ↔ [s [np [possessive_pronoun “name”]] [vp “be” proper_name]]), and Interlingua (a language-neutral meaning representation, e.g. Give-information+personal-data (name=alon_lavie)) • Analysis climbs the triangle on the source side; Generation descends it on the target side • Example: Mi chiamo Alon Lavie → My name is Alon Lavie

  22. Interlingua versus Transfer • With an interlingua, need only N parsers/generators instead of N² transfer systems • [diagram: languages L1–L6 connected pairwise for transfer vs. each connected only to the interlingua]
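
A quick back-of-the-envelope check of that claim, as a Python sketch (the language counts are arbitrary; a transfer setup needs one system per ordered language pair, which grows roughly as N², while an interlingua setup needs one analyzer and one generator per language):

```python
# Module counts for N languages: pairwise transfer vs. a single interlingua.
for n in (3, 6, 10):
    transfer_systems = n * (n - 1)       # one directed system per ordered pair (~N^2)
    interlingua_modules = 2 * n          # N analyzers (parsers) + N generators
    print(f"N={n:2d}: transfer systems={transfer_systems:3d}, interlingua modules={interlingua_modules:2d}")
```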

  23. Rule-based vs. Data-driven Approaches to MT • What are the pieces of translation? Where do they come from? • Rule-based: large-scale “clean” word translation lexicons, manually constructed over time by human experts • Data-driven: broad-coverage word and multi-word translation lexicons, learned automatically from available sentence-parallel corpora • How does MT put these pieces together? • Rule-based: large collections of rules, manually developed over time by human experts, that map structures from the source to the target language • Data-driven: a computer algorithm that explores millions of possible ways of putting the small pieces together, looking for the translation that statistically looks best
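
As a toy illustration of where the data-driven “pieces of translation” come from, the sketch below counts word co-occurrences in three invented sentence pairs and scores them with a simple Dice-style association; real statistical MT systems learn their lexicons with far more sophisticated alignment models (e.g. EM-trained word alignment), so treat this only as an intuition pump.

```python
# Toy word-translation learning from sentence-aligned parallel data.
# The sentence pairs are invented; real systems use EM-based alignment models.
from collections import Counter, defaultdict

parallel = [
    ("the house is small", "das haus ist klein"),
    ("the house is big",   "das haus ist gross"),
    ("the book is small",  "das buch ist klein"),
]

cooc = defaultdict(Counter)          # cooc[e][f]: sentence pairs containing both e and f
e_count, f_count = Counter(), Counter()
for en, de in parallel:
    e_words, f_words = set(en.split()), set(de.split())
    e_count.update(e_words)
    f_count.update(f_words)
    for e in e_words:
        for f in f_words:
            cooc[e][f] += 1

def dice(e, f):
    # simple association score between a source word and a target word
    return 2.0 * cooc[e][f] / (e_count[e] + f_count[f])

for e in ("house", "small", "book"):
    best = max(cooc[e], key=lambda f: dice(e, f))
    print(f"{e} -> {best} ({dice(e, best):.2f})")
```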

  24. Rule-based vs. Data-driven Approaches to MT • How does the MT system pick the correct (or best) translation among many options? • Rule-based: Human experts encode preferences among the rules designed to prefer creation of better translations • Data-driven: a variety of fitness and preference scores, many of which can be learned from available training data, are used to model a total score for each of the millions of possible translation candidates; algorithm then selects and outputs the best scoring translation
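
The “total score” idea can be sketched as a weighted combination of feature scores, in the spirit of the log-linear models used in statistical MT; the candidate strings, feature values, and weights below are made up for illustration, and real systems tune the weights automatically on held-out data.

```python
import math

# Hypothetical feature scores for three candidate translations of one
# source sentence; the weights would normally be tuned on held-out data.
candidates = {
    "the house is small": {"translation_model": 0.4, "language_model": 0.5},
    "small is the house": {"translation_model": 0.4, "language_model": 0.1},
    "the home is little": {"translation_model": 0.2, "language_model": 0.4},
}
weights = {"translation_model": 1.0, "language_model": 0.7}

def score(features):
    # weighted sum of log feature scores (a "log-linear" model)
    return sum(weights[name] * math.log(value) for name, value in features.items())

best = max(candidates, key=lambda c: score(candidates[c]))
print(best, round(score(candidates[best]), 3))
```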

  25. Rule-based vs. Data-driven Approaches to MT • Why have the data-driven approaches become so popular? • We can now do this! • Increasing amounts of sentence-parallel data are constantly being created on the web • Advances in machine learning algorithms • Computational power of today’s computers can train systems on these massive amounts of data and can perform these massive search-based translation computations when translating new texts • Building and maintaining rule-based systems is too difficult, expensive and time-consuming • In many scenarios, it actually works better!

  26. Rule-based vs. Data-driven MT • Rule-based: We thank all participants of the whole world for their comical and creative drawings; to choose the victors was not easy task! Click here to see work of winning European of these two months, and use it to look at what the winning of USA sent us. • Data-driven: We thank all the participants from around the world for their designs cocasses and creative; selecting winners was not easy! Click here to see the artwork of winners European of these two months, and disclosure to look at what the winners of the US have been sending.

  27. Data-driven MT: Some Major Challenges • Current approaches are too naïve and “direct”: • Good at learning word-to-word and phrase-to-phrase correspondences from data • Not good enough at learning how to combine these pieces and reorder them during translation • Learning general rules requires much more complicated algorithms and computer processing of the data • The space of translations that is “searched” often doesn’t contain a perfect translation • The fitness scores that are used aren’t good enough to always assign better scores to the better translations → we don’t always find the best translation even when it’s there! • Solutions: • Google solution: more and more data! • Research solution: “smarter” algorithms and learning methods

  28. Multi-Engine MT • Apply several MT engines to each input in parallel • Create a combined translation from the individual translations • Goal is to combine strengths, and avoid weaknesses. • Along all dimensions: domain limits, quality, development time/cost, run-time speed, etc. • Various approaches to the problem

  29. Synthetic Combination MEMT Two-Stage Approach: • Align: Identify common words and phrases across the translations provided by the engines • Decode: search the space of synthetic combinations of words/phrases and select the highest scoring combined translation Example: • announced afghan authorities on saturday reconstituted four intergovernmental committees • The Afghan authorities on Saturday the formation of the four committees of government • MEMT: the afghan authorities announced on Saturday the formation of four intergovernmental committees
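
A minimal sketch of the “Align” stage, using the two engine outputs from the example above and Python’s difflib to find the word sequences they share; a real MEMT decoder would then search over ways of stitching shared and non-shared fragments together.

```python
# Find word sequences common to two engine outputs (the "align" stage).
import difflib

hyp1 = "announced afghan authorities on saturday reconstituted four intergovernmental committees".split()
hyp2 = "the afghan authorities on saturday the formation of the four committees of government".split()

matcher = difflib.SequenceMatcher(None, hyp1, hyp2)
for block in matcher.get_matching_blocks():
    if block.size:                                   # skip the zero-length sentinel block
        print("shared:", " ".join(hyp1[block.a:block.a + block.size]))
```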

  30. Speech-to-Speech MT • Speech just makes MT (much) more difficult: • Spoken language is messier • False starts, filled pauses, repetitions, out-of-vocabulary words • Lack of punctuation and explicit sentence boundaries • Current speech technology is far from perfect • Need for speech recognition (SR) and synthesis in foreign languages • Robustness: MT quality degradation should be proportional to SR quality • Tight Integration: rather than separate sequential tasks, can SR + MT be integrated in ways that improve end-to-end performance?

  31. MT Evaluation • How do you evaluate the quality of the output of MT systems? • Human notions of translation quality: • Adequacy: to what extent does the translation have the same meaning as the original sentence? • Fluency: to what extent is the translation fluent and grammatical in the target language? • Rankings: given two (or more) translations of the same input sentence, which one is better? (Or rank them by quality) • Task-based measures: is the translation sufficient for accomplishing a specific task or goal (e.g. understanding the gist of the document, flagging important documents that should be translated by a human)?

  32. Automated MT Evaluation • Automatic evaluation metrics are extremely important: • Human evaluations are too expensive and time consuming to be done very frequently • Need to test changes and assess whether the system is getting better quickly and on an on-going basis • Data-driven MT systems have lots of tunable parameters – need to optimize them for best performance • Goal: fully automatic, fast metric that can assess quality and that correlates well with human notions of quality • Major Approach: • Obtain human “reference” translations for test data sets • Estimate how “close” the MT translations are to the human translations (on a scale of [0,1]) • Major Challenge: translation variability – there are many correct translations. How to measure similarity?
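
To illustrate the “how close is the MT output to the reference” idea, here is a minimal n-gram precision sketch in the spirit of metrics like BLEU (but deliberately simplified: a single reference, unigrams and bigrams only, no brevity penalty); the candidate and reference sentences are invented.

```python
from collections import Counter

# Minimal n-gram precision sketch against one human reference translation.
def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n):
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / max(1, sum(cand.values()))

reference = "the afghan authorities announced the formation of four committees"
candidate = "afghan authorities announced on saturday the formation of four committees"
for n in (1, 2):
    print(f"{n}-gram precision:", round(ngram_precision(candidate, reference, n), 2))
```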

  33. MT for Commercial Language Service Providers • Most dissemination-type high-quality translation is performed by commercial Language Service Provider (LSP) companies, such as Echo International • Translation process used by most LSPs: • Heavily dependent on expensive human translators to ensure high quality • Current automation mostly in the form of Translation Memories: • Previously translated segments don’t have to be translated again by human translators • The retrieved translations are of guaranteed high quality – limited post-editing is required • No extensive use of modern state-of-the-art MT!

  34. MT for Commercial Language Service Providers • Why is MT currently not used by LSPs? • The quality of translations produced by MT varies widely: some sentences can be perfect translations, others can be very bad • Post-editing bad MT output using human translators is frustrating and unappealing, and is often not cost-effective • No existing good technical solutions that integrate MT seamlessly within the existing translation workflow processes used by LSPs • LSPs are wary of taking a “leap of faith” on unproven technology that may not save money

  35. Safaba Translation Solutions LLC • New CMU spin-off technology start-up company • Mission: Develop innovative solutions using automated Machine Translation for commercial Language Service Providers (LSPs) • Concept: • Identify and enhance high-quality automatically-produced translations and efficiently integrate them into the human translation loop, dramatically reducing cost and turn-around times of translation projects for commercial LSPs • Founders: • Alon Lavie – Associate Research Professor, LTI, CMU • Robert Olszewski – CMU CS Ph.D. (2001) • Partnering with Echo International for feasibility analysis and as a potential primary beta-testing client

  36. Some Take-home Messages • Machine Translation is already quite good for many purposes and needs, and is getting better all the time • Modern state-of-the-art MT is data-driven – computers learning from data. This paradigm shift is not reversible • For casual (assimilation-type) needs, free web-based translation services such as Google are already very useful • For business (dissemination-type) needs, LSPs are usually required, but MT will increasingly integrate into the way LSPs do translation

  37. Questions…

  38. Lexical Differences • SL word has several different meanings that translate differently into TL • Ex: financial bank vs. river bank • Lexical Gaps: SL word reflects a unique meaning that cannot be expressed by a single word in TL • Ex: English snub doesn’t have a corresponding verb in French or German • TL has finer distinctions than SL → SL word should be translated differently in different contexts • Ex: English wall can be German Wand (internal) or Mauer (external)

  39. Google at Work…



  42. Lexical Differences • Lexical gaps: • Examples: these have no direct equivalent in English: gratiner (v., French, “to cook with a cheese coating”), ōtosanrin (n., Japanese, “three-wheeled truck or van”)

  43. Lexical Differences [From Hutchins & Somers]

  44. MT Handling of Lexical Differences • Direct MT and Syntactic Transfer: • Lexical Transfer stage uses bilingual lexicon • SL word can have multiple translation entries, possibly augmented with disambiguation features or probabilities • Lexical Transfer can involve use of limited context (on SL side, TL side, or both) • Lexical Gaps can partly be addressed via phrasal lexicons • Semantic Transfer: • Ambiguity of SL word must be resolved during analysis → correct symbolic representation at semantic level • TL Generation must select appropriate word or structure for correctly conveying the concept in TL
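
A toy sketch of lexical transfer with multiple entries per SL word and limited context, using the wall → Wand/Mauer ambiguity from the earlier slide; the cue words and the overlap-counting rule are invented for illustration and are not how any production system disambiguates.

```python
# Toy lexical-transfer sketch: a bilingual lexicon in which an English
# word can have several German entries, each with simple context cues.
lexicon = {
    "wall": [
        {"target": "Wand",  "cues": {"room", "paint", "inside", "picture"}},
        {"target": "Mauer", "cues": {"city", "garden", "stone", "outside"}},
    ],
    "bank": [
        {"target": "Bank", "cues": {"money", "account", "loan"}},
        {"target": "Ufer", "cues": {"river", "water", "boat"}},
    ],
}

def lexical_transfer(word, context_words):
    entries = lexicon.get(word)
    if not entries:
        return word                       # out-of-lexicon: pass the word through
    # pick the entry whose cue words overlap most with the sentence context
    return max(entries, key=lambda e: len(e["cues"] & set(context_words)))["target"]

sentence = "we hung a picture on the wall of the room".split()
print(lexical_transfer("wall", sentence))   # -> Wand
```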

  45. Structural Differences • Syntax of the SL is different from the syntax of the TL: • Word order within constituents: • English NPs: art adj n – the big boy • Hebrew NPs: art n art adj – ha yeled ha gadol • Constituent structure: • English is SVO: Subj Verb Obj – I saw the man • Modern Arabic is VSO: Verb Subj Obj • Different verb syntax: • Verb complexes in English vs. in German – I can eat the apple / Ich kann den Apfel essen • Case marking and free constituent order • German and other languages that mark case: Den Apfel esse ich – the(acc) apple eat I(nom)




  49. MT Handling of Structural Differences • Direct MT Approaches: • No explicit treatment: Phrasal Lexicons and sentence level matches or templates • Syntactic Transfer: • Structural Transfer Grammars • Trigger rule by matching against syntactic structure on SL side • Rule specifies how to reorder and re-structure the syntactic constituents to reflect syntax of TL side • Semantic Transfer: • SL Semantic Representation abstracts away from SL syntax to functional roles → done during analysis • TL Generation maps semantic structures to correct TL syntax
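
As a concrete (and deliberately tiny) illustration of a structural transfer rule, the sketch below reorders the English NP pattern art adj n into the Hebrew pattern art n art adj from the earlier example; the rule format and the mini-lexicon are invented, not the notation of any actual transfer grammar.

```python
# Toy structural-transfer sketch for one rule from the talk's example:
# English NP (art adj n) -> Hebrew NP (art n art adj),
# e.g. "the big boy" -> "ha yeled ha gadol".
def transfer_np(art, adj, noun, lex):
    # reorder constituents and repeat the article before the adjective
    return [lex[art], lex[noun], lex[art], lex[adj]]

lex = {"the": "ha", "big": "gadol", "boy": "yeled"}
print(" ".join(transfer_np("the", "big", "boy", lex)))   # ha yeled ha gadol
```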

  50. Syntax-to-Semantics Differences • Meaning in TL is conveyed using a different syntactic structure than in the SL • Changes in verb and its arguments • Passive constructions • Motion verbs and state verbs • Case creation and case absorption • Main Distinction from Structural Differences: • Structural differences are mostly independent of lexical choices and their semantic meaning → addressed by transfer rules that are syntactic in nature • Syntax-to-semantic mapping differences are meaning-specific: require the presence of specific words (and meanings) in the SL
