  1. Linguistics 187/287, Week 6: Generation, Term-Rewrite System, Machine Translation. Martin Forst, Ron Kaplan, and Tracy King

  2. Generation • Parsing: string to analysis • Generation: analysis to string • What type of input? • How to generate

  3. Why generate? • Machine translation: Lang1 string -> Lang1 f-structure -> Lang2 f-structure -> Lang2 string • Sentence condensation: long string -> f-structure -> smaller f-structure -> new string • Question answering • Production of NL reports: state of a machine or process; explanation of a logical deduction • Grammar debugging

  4. F-structures as input • Use f-structures as input to the generator • May parse sentences that shouldn’t be generated • May want to constrain number of generated options • Input f-structure may be underspecified

  5. XLE generator • Use the same grammar for parsing and generation • Advantages: maintainability; write rules and lexicons once • But: special generation tokenizer; different OT ranking

  6. Generation tokenizer/morphology • White space • Parsing: multiple white spaces become a single TB: John appears. -> John TB appears TB . TB • Generation: a single TB becomes a single space (or nothing): John TB appears TB . TB -> John appears. (not *John appears .) • Suppress variant forms • Parse both favor and favour • Generate only one

  7. Morphconfig for parsing & generation
     STANDARD ENGLISH MORPHOLOGY (1.0)
     TOKENIZE:
     P!eng.tok.parse.fst G!eng.tok.gen.fst
     ANALYZE:
     eng.infl-morph.fst G!amerbritfilter.fst G!amergen.fst
     ----

  8. Reversing the parsing grammar • The parsing grammar can be used directly as a generator • Adapt the grammar with a special OT ranking GENOPTIMALITYORDER • Why do this? • parse ungrammatical input • have too many options

  9. Ungrammatical input • Linguistically ungrammatical • They walks. • They ate banana. • Stylistically ungrammatical • No ending punctuation: They appear • Superfluous commas: John, and Mary appear. • Shallow markup: [NP John and Mary] appear.

  10. Too many options • All the generated options can be linguistically valid, but too many for applications • Occurs when more than one string has the same, legitimate f-structure • PP placement: • In the morning I left. I left in the morning.

  11. Using the Gen OT ranking • Generally much simpler than in the parsing direction • Usually only use standard marks and NOGOOD (no * marks, no STOPPOINT) • Can have a few marks that are shared by several constructions: one or two for dispreferred, one or two for preferred

  12. Example: Prefer initial PP
     S --> (PP: @ADJUNCT @(OT-MARK GenGood))
           NP: @SUBJ;
           VP.
     VP --> V (NP: @OBJ) (PP: @ADJUNCT).
     GENOPTIMALITYORDER NOGOOD +GenGood.
     parse: they appear in the morning.
     generate without OT: In the morning they appear. / They appear in the morning.
     generate with OT: In the morning they appear.

  13. Debugging the generator • When generating from an f-structure produced by the same grammar, XLE should always generate • Unless: • OT marks block the only possible string • something is wrong with the tokenizer/morphology (regenerate-morphemes: if this gets a string, the tokenizer/morphology is not the problem) • Hard to debug: XLE has robustness features to help

  14. Underspecified Input • F-structures provided by applications are not perfect • may be missing features • may have extra features • may simply not match the grammar coverage • Missing and extra features are often systematic • specify in XLE which features can be added and deleted • Not matching the grammar is a more serious problem

  15. Adding features • English to French translation: • English nouns have no gender • French nouns need gender • Solution: have XLE add gender; the French morphology will control the value • Specify additions in xlerc: • set-gen-adds add "GEND" • can add multiple features: set-gen-adds add "GEND CASE PCASE" • XLE will optionally insert the feature • Note: unconstrained additions make generation undecidable

  16. Example: The cat sleeps. -> Le chat dort.
     Input f-structure (no GEND):
     [ PRED 'dormir<SUBJ>'
       SUBJ [ PRED 'chat'
              NUM sg
              SPEC def ]
       TENSE present ]
     After XLE adds GEND:
     [ PRED 'dormir<SUBJ>'
       SUBJ [ PRED 'chat'
              NUM sg
              GEND masc
              SPEC def ]
       TENSE present ]

  17. Deleting features • French to English translation • delete the GEND feature • Specify deletions in xlerc • set-gen-adds remove "GEND" • can remove multiple features: set-gen-adds remove "GEND CASE PCASE" • XLE obligatorily removes the features: no GEND feature will remain in the f-structure • if a feature takes an f-structure value, that f-structure is also removed

  18. Changing values • If the values of a feature do not match between the input f-structure and the grammar: • delete the feature and then add it • Example: case assignment in translation • set-gen-adds remove "CASE" followed by set-gen-adds add "CASE" • allows dative case in the input to become accusative (e.g., an exceptional case marking verb in the input language but regular case in the output language)
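     As a hypothetical illustration of the remove-then-add pattern above (the nouns and case values are invented, not from the slides), an input object fragment such as
       OBJ [ PRED 'Kind'  CASE dat ]
     first has its CASE feature removed, and the generation grammar is then free to re-insert whatever case it assigns, e.g.
       OBJ [ PRED 'child'  CASE acc ]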

  19. Generation for Debugging • Checking for grammar and lexicon errors • create-generator english.lfg • reports ill-formed rules, templates, feature declarations, lexical entries • Checking for ill-formed sentences that can be parsed • parse a sentence • see if all the results are legitimate strings • regenerate "they appear."

  20. Rewriting/Transfer System

  21. Why a Rewrite System • Grammars produce c-/f-structure output • Applications may need to manipulate this • Remove features • Rearrange features • Continue linguistic analysis (semantics, knowledge representation – next week) • XLE has a general purpose rewrite system (aka "transfer" or "xfr" system)

  22. Sample Uses of Rewrite System • Sentence condensation • Machine translation • Mapping to logic for knowledge representation and reasoning • Tutoring systems

  23. What does the system do? • Input: set of "facts" • Apply a set of ordered rules to the facts • this gradually changes the set of input facts • Output: new set of facts • Rewrite system uses the same ambiguity management as XLE • can efficiently rewrite packed structures, maintaining the packing

  24. Example F-structure Facts
     PERS(var(1),3)
     PRED(var(1),girl)
     CASE(var(1),nom)
     NTYPE(var(1),common)
     NUM(var(1),pl)
     SUBJ(var(0),var(1))
     PRED(var(0),laugh)
     TNS-ASP(var(0),var(2))
     TENSE(var(2),pres)
     arg(var(0),1,var(1))
     lex_id(var(0),1)
     lex_id(var(1),0)
     • F-structures get var(#)
     • Special arg facts
     • lex_id for each PRED
     • Facts have two arguments (except arg)
     • Rewrite system allows for any number of arguments

  25. Rule format
     • Obligatory rule: LHS ==> RHS.
     • Optional rule: LHS ?=> RHS.
     • Unresourced fact: |- clause.
     • LHS:
       clause : match and delete
       +clause : match and keep
       -LHS : negation (don't have fact)
       LHS, LHS : conjunction
       ( LHS | LHS ) : disjunction
       { ProcedureCall } : procedural attachment
     • RHS:
       clause : replacement facts
       0 : empty set of replacement facts
       stop : abandon the analysis
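     A minimal sketch combining these pieces (the rule itself is hypothetical; the feature names follow the surrounding slides): keep the NTYPE fact, delete any CASE fact, but only for f-structures that have no PCASE fact:

     "hypothetical: drop CASE from a noun that has no PCASE"
     +NTYPE(%F,%%), CASE(%F,%%), -PCASE(%F,%%) ==> 0.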

  26. Example rules
     Input facts (as on slide 24):
     PERS(var(1),3) PRED(var(1),girl) CASE(var(1),nom) NTYPE(var(1),common) NUM(var(1),pl)
     SUBJ(var(0),var(1)) PRED(var(0),laugh) TNS-ASP(var(0),var(2)) TENSE(var(2),pres)
     arg(var(0),1,var(1)) lex_id(var(0),1) lex_id(var(1),0)
     Rule file:
     "PRS (1.0)"
     grammar = toy_rules.
     "obligatorily add a determiner if there is a noun with no spec"
     +NTYPE(%F,%%), -SPEC(%F,%%) ==> SPEC(%F,def).
     "optionally make plural nouns singular; this will split the choice space"
     NUM(%F, pl) ?=> NUM(%F, sg).

  27. Example Obligatory Rule
     Rule:
     "obligatorily add a determiner if there is a noun with no spec"
     +NTYPE(%F,%%), -SPEC(%F,%%) ==> SPEC(%F,def).
     Input facts:
     PERS(var(1),3) PRED(var(1),girl) CASE(var(1),nom) NTYPE(var(1),common) NUM(var(1),pl)
     SUBJ(var(0),var(1)) PRED(var(0),laugh) TNS-ASP(var(0),var(2)) TENSE(var(2),pres)
     arg(var(0),1,var(1)) lex_id(var(0),1) lex_id(var(1),0)
     Output facts: all the input facts plus: SPEC(var(1),def)

  28. Example Optional Rule
     Rule:
     "optionally make plural nouns singular; this will split the choice space"
     NUM(%F, pl) ?=> NUM(%F, sg).
     Input facts:
     PERS(var(1),3) PRED(var(1),girl) CASE(var(1),nom) NTYPE(var(1),common) NUM(var(1),pl) SPEC(var(1),def)
     SUBJ(var(0),var(1)) PRED(var(0),laugh) TNS-ASP(var(0),var(2)) TENSE(var(2),pres)
     arg(var(0),1,var(1)) lex_id(var(0),1) lex_id(var(1),0)
     Output facts: all the input facts plus a choice split:
     A1: NUM(var(1),pl)
     A2: NUM(var(1),sg)

  29. Output of example rules • Output is a packed f-structure • Generation gives two sets of strings • The girls {laugh.|laugh!|laugh} • The girl {laughs.|laughs!|laughs}

  30. Manipulating sets
     • Sets are represented with an in_set feature
     • He laughs in the park with the telescope:
     ADJUNCT(var(0),var(2))
     in_set(var(4),var(2))
     in_set(var(5),var(2))
     PRED(var(4),in)
     PRED(var(5),with)
     • Might want to optionally remove adjuncts • but not negation

  31. Example Adjunct Deletion Rules
     "optionally remove member of adjunct set"
     +ADJUNCT(%%, %AdjSet), in_set(%Adj, %AdjSet), -PRED(%Adj, not) ?=> 0.
     "obligatorily remove adjunct with nothing in it"
     ADJUNCT(%%, %Adj), -in_set(%%,%Adj) ==> 0.
     Resulting strings:
     He laughs with the telescope in the park.
     He laughs in the park with the telescope.
     He laughs with the telescope.
     He laughs in the park.
     He laughs.

  32. Manipulating PREDs • Changing the value of a PRED is easy • PRED(%F,girl) ==> PRED(%F,boy). • Changing the argument structure is trickier • Make any changes to the grammatical functions • Make the arg facts correlate with these
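     A hypothetical sketch in the same rule notation (the verb pair and the argument numbering are illustrative, not from the slides): translating please into like swaps the grammatical functions, and the arg facts are updated in step so they keep pointing at the right f-structures:

     "hypothetical: X pleases Y ==> Y likes X"
     PRED(%Verb, please),
     SUBJ(%Verb, %S), arg(%Verb, 1, %S),
     OBJ(%Verb, %O), arg(%Verb, 2, %O)
     ==> PRED(%Verb, like),
         SUBJ(%Verb, %O), arg(%Verb, 1, %O),
         OBJ(%Verb, %S), arg(%Verb, 2, %S).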

  33. Example Passive Rule
     "make actives passive:
      make the subject NULL; make the object the subject; put in features"
     SUBJ( %Verb, %Subj), arg( %Verb, %Num, %Subj),
     OBJ( %Verb, %Obj), CASE( %Obj, acc)
     ==> SUBJ( %Verb, %Obj), arg( %Verb, %Num, NULL),
         CASE( %Obj, nom), PASSIVE( %Verb, +), VFORM( %Verb, pass).
     the girls saw the monkeys ==> The monkeys were seen.
     in the park the girls saw the monkeys ==> In the park the monkeys were seen.

  34. Templates and Macros
     • Rules can be encoded as templates
     n2n(%Eng,%Frn) ::
       PRED(%F,%Eng), +NTYPE(%F,%%) ==> PRED(%F,%Frn).
     @n2n(man, homme).
     @n2n(woman, femme).
     • Macros encode groups of clauses/facts
     sg_noun(%F) := +NTYPE(%F,%%), +NUM(%F,sg).
     @sg_noun(%F), -SPEC(%F,%%) ==> SPEC(%F,def).

  35. Unresourced Facts
     • Facts can be stipulated in the rules and referred to
     • Often used as a lexicon of information not encoded in the f-structure
     • For example, a list of days and months for manipulation of dates
     |- day(Monday). |- day(Tuesday). etc.
     |- month(January). |- month(February). etc.
     +PRED(%F,%Pred), ( day(%Pred) | month(%Pred) ) ==> …
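     A complete rule of this shape might look like the following hypothetical sketch (the DATE feature added on the right-hand side is invented for illustration):

     "hypothetical: add a DATE feature to nouns whose PRED is a day or month name"
     +PRED(%F,%Pred), ( day(%Pred) | month(%Pred) ) ==> DATE(%F,+).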

  36. Rule Ordering • Rewrite rules are ordered (unlike LFG syntax rules but like finite-state rules) • Output of rule1 is input to rule2 • Output of rule2 is input to rule3 • This allows for feeding and bleeding • Feeding: insert facts used by later rules • Bleeding: remove facts needed by later rules • Can make debugging challenging

  37. Example of Rule Feeding
     • Early Rule: Insert SPEC on nouns
     +NTYPE(%F,%%), -SPEC(%F,%%) ==> SPEC(%F, def).
     • Later Rule: Allow plural nouns to become singular only if they have a specifier (to avoid bad count nouns)
     NUM(%F,pl), +SPEC(%F,%%) ==> NUM(%F,sg).

  38. Example of Rule Bleeding
     • Early Rule: Turn actives into passives (simplified)
     SUBJ(%F,%S), OBJ(%F,%O) ==> SUBJ(%F,%O), PASSIVE(%F,+).
     • Later Rule: Impersonalize actives
     SUBJ(%F,%%), -PASSIVE(%F,+) ==> SUBJ(%F,%S), PRED(%S,they), PERS(%S,3), NUM(%S,pl).
     • will apply to intransitives and verbs with (X)COMPs but not transitives

  39. Debugging
     • XLE command line: tdbg
     • steps through rules stating how they apply
     ============================================
     Rule 1: +(NTYPE(%F,A)), -(SPEC(%F,B)) ==> SPEC(%F,def)
     File /tilde/thking/courses/ling187/hws/thk.pl, lines 4-10
     Rule 1 matches: [+(2)] NTYPE(var(1),common)
     1 --> SPEC(var(1),def)
     ============================================
     Rule 2: NUM(%F,pl) ?=> NUM(%F,sg)
     File /tilde/thking/courses/ling187/hws/thk.pl, lines 11-17
     Rule 2 matches: [3] NUM(var(1),pl)
     1 --> NUM(var(1),sg)
     ============================================
     Rule 5: SUBJ(%Verb,%Subj), arg(%Verb,%Num,%Subj), OBJ(%Verb,%Obj), CASE(%Obj,acc)
       ==> SUBJ(%Verb,%Obj), arg(%Verb,%Num,NULL), CASE(%Obj,nom), PASSIVE(%Verb,+), VFORM(%Verb,pass)
     File /tilde/thking/courses/ling187/hws/thk.pl, lines 28-37
     Rule does not apply
     (input sentence: girls laughed)

  40. Running the Rewrite System
     • create-transfer : adds menu items
     • load-transfer-rules FILE : loads rules from a file
     • the f-str window's commands menu then has:
     • transfer : prints the output of the rules in the XLE window
     • translate : runs the output through the generator
     • Need to do (where the path is $XLEPATH/lib):
     setenv LD_LIBRARY_PATH /afs/ir.stanford.edu/data/linguistics/XLE/SunOS/lib
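     Putting the pieces together, a minimal xlerc session might look like the sketch below (the grammar and rule file names are hypothetical; create-transfer and load-transfer-rules are the commands listed above):

     # load a grammar and generator, add the transfer machinery, load rules, and parse
     create-parser english.lfg
     create-generator english.lfg
     create-transfer
     load-transfer-rules toy_rules.pl
     parse "the girls laugh."
     # then choose transfer or translate from the f-structure window's commands menu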

  41. Rewrite Summary • The XLE rewrite system lets you manipulate the output of parsing • Creates versions of output suitable for applications • Can involve significant reprocessing • Rules are ordered • Ambiguity management is as with parsing

  42. Grammatical Machine Translation Stefan Riezler & John Maxwell

  43. Translation System
     [Architecture diagram: source (German LFG) string -> XLE parsing -> source f-structures -> transfer (translation rules + lots of statistics) -> target f-structures -> XLE generation (English LFG) -> target string]

  44. Transfer-Rule Induction from aligned bilingual corpora • Use standard techniques to find many-to-many candidate word-alignments in source-target sentence-pairs • Parse source and target sentences using LFG grammars for German and English • Select most similar f-structures in source and target • Define many-to-many correspondences between substructures of f-structures based on many-to-many word alignment • Extract primitive transfer rules directly from aligned f-structure units • Create powerset of possible combinations of basic rules and filter according to contiguity and type matching constraints

  45. Induction
     Example sentences:
     Dafür bin ich zutiefst dankbar. (German)
     I have a deep appreciation for that. (English)
     Many-to-many word alignment:
     Dafür{6 7} bin{2} ich{1} zutiefst{3 4 5} dankbar{5}
     F-structure alignment: [diagram not reproduced in transcript]

  46. Extracting Primitive Transfer Rules
     • Rule (1) maps lexical predicates
     • Rule (2) maps lexical predicates and interprets the subj-to-subj link as an indication to map the subj of the source f-structure with this predicate into the subject of the target, and the xcomp of the source into the object of the target
     • %X1, %X2, %X3, … are variables for f-structures
     (1) PRED(%X1, ich) ==> PRED(%X1, I)
     (2) PRED(%X1, sein), SUBJ(%X1,%X2), XCOMP(%X1,%X3)
         ==> PRED(%X1, have), SUBJ(%X1,%X2), OBJ(%X1,%X3)

  47. Extracting Complex Transfer Rules
     • Complex rules are created by taking all combinations of primitive rules and filtering
     (4) zutiefst dankbar sein ==> have a deep appreciation
     (5) zutiefst dankbar dafür sein ==> have a deep appreciation for that
     (6) ich bin zutiefst dankbar dafür ==> I have a deep appreciation for that

  48. Transfer Contiguity constraint • Transfer contiguity constraint: • Source and target f-structures each have to be connected • F-structures in the transfer source can only be aligned with f-structures in the transfer target, and vice versa • Analogous to constraint on contiguous and alignment-consistent phrases in phrase-based SMT • Prevents extraction of rule that would translate dankbar directly into appreciation since appreciation is aligned also to zutiefst • Transfer contiguity allows learning idioms like es gibt - there is from configurations that are local in f-structure but non-local in string, e.g., es scheint […] zu geben - there seems […] to be

  49. Linguistic Filters on Transfer Rules • Morphological stemming of PRED values • (Optional) filtering of f-structure snippets based on consistency of linguistic categories • Extraction of the snippet that translates zutiefst dankbar into a deep appreciation maps incompatible categories (adjectival and nominal); it is valid in a string-based world • The translation of sein to have might be discarded because of the adjectival vs. nominal types of their arguments • The larger rule mapping zutiefst dankbar sein to have a deep appreciation is fine since the verbal types match

  50. Transfer • Parallel application of transfer rules in non-deterministic fashion • Unlike XLE ordered-rule rewrite system • Each fact must be transferred by exactly one rule • Default rule transfers any fact as itself • Transfer works on chart using parser’s unification mechanism for consistency checking • Selection of most probable transfer output is done by beam-decoding on transfer chart
