
Computational Lexicology, Morphology and Syntax



  1. Computational Lexicology, Morphology and Syntax Diana Trandabăț 2013-2014

  2. Word Classes • Words that somehow ‘behave’ alike: • Appear in similar contexts • Perform similar functions in sentences • Undergo similar transformations • ~9 traditional word classes, or parts of speech: • Noun, verb, adjective, preposition, adverb, article, interjection, pronoun, conjunction

  3. Some Examples • N noun chair, bandwidth, pacing • V verb study, debate, munch • ADJ adjective purple, tall, ridiculous • ADV adverb unfortunately, slowly • P preposition of, by, to • PRO pronoun I, me, mine • DET determiner the, a, that, those

  4. Defining POS Tagging • The process of assigning a part-of-speech or lexical class marker to each word in a corpus: • WORDS → TAGS: the/DET koala/N put/V the/DET keys/N on/P the/DET table/N
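A quick way to see this process in action is NLTK's off-the-shelf tagger. A minimal sketch, assuming the nltk package is installed (resource names vary slightly across NLTK versions); the tags it produces belong to the Penn Treebank set introduced on slide 13:

```python
import nltk

# One-time resource downloads; names as in NLTK 3.8 (newer releases use
# "punkt_tab" / "averaged_perceptron_tagger_eng")
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

words = nltk.word_tokenize("the koala put the keys on the table")
print(nltk.pos_tag(words))
# e.g. [('the', 'DT'), ('koala', 'NN'), ('put', 'VBD'), ('the', 'DT'), ...]
```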

  5. Applications for POS Tagging • Speech synthesis pronunciation: • lead (N) vs. lead (V) • INsult vs. inSULT • OBject vs. obJECT • OVERflow vs. overFLOW • DIScount vs. disCOUNT • CONtent vs. conTENT • Word Sense Disambiguation: e.g. Time flies like an arrow • Is flies an N or a V? • Word prediction in speech recognition • Possessive pronouns (my, your, her) are likely to be followed by nouns • Personal pronouns (I, you, he) are likely to be followed by verbs • Machine Translation

  6. Closed vs. Open Class Words • Closed class categories are composed of a small, fixed set of grammatical function words for a given language. • Prepositions: of, in, by, … • Auxiliaries: may, can, will, had, been, … • Pronouns: I, you, she, mine, his, them, … • Usually function words (short common words which play a role in grammar) • Open class categories have a large number of words, and new ones are easily invented. • English has 4 categories: Nouns, Verbs, Adjectives, Adverbs • Nouns: Googler, textlish • Verbs: Google • Adjectives: geeky

  7. Ambiguity in POS Tagging • “Like” can be a verb or a preposition • I like/VBP candy. • Time flies like/IN an arrow. • “Around” can be a preposition, particle, or adverb • I bought it at the shop around/IN the corner. • I never got around/RP to getting a car. • A new Prius costs around/RB $25K.

  8. Open Class Words • Nouns • Proper nouns • Columbia University, New York City, Arthi Ramachandran, Metropolitan Transit Center • English (and Romanian, and many other languages) capitalizes these • Many have abbreviations • Common nouns • All the rest

  9. Count nouns vs. mass nouns • Count: Have plurals, countable: goat/goats, one goat, two goats • Mass: Not countable (fish, salt, communism) (?two fishes) • Adjectives: identify properties or qualities of nouns • Color, size, age, … • Adjective ordering restrictions in English: • old blue book, not blue old book • In Korean, adjectives are realized as verbs • Adverbs: also modify things (verbs, adjectives, adverbs) • The very happy man walked home extremely slowly yesterday.

  10. Directional/locative adverbs (here, home, downhill) • Degree adverbs (extremely, very, somewhat) • Manner adverbs (slowly, slinkily, delicately) • Temporal adverbs (Monday, tomorrow) • Verbs: • In English, take morphological affixes (eat/eats/eaten) • Represent actions (walk, ate), processes (provide, see), and states (be, seem) • Many subclasses, e.g. • eats/V → eat/VB, eat/VBP, eats/VBZ, ate/VBD, eaten/VBN, eating/VBG, ... • Reflect morphological form & syntactic function

  11. How Do We Assign Words to Open or Closed? • Nouns denote people, places and things and can be preceded by articles? But… My typing is very bad. *The Mary loves John. • Verbs are used to refer to actions, processes, states • But some are closed class and some are open: I will have emailed everyone by noon. • Adverbs modify actions • Is Monday a temporal adverbial or a noun?

  12. Closed Class Words • Closed class words (Prep, Det, Pron, Conj, Aux, Part, Num) are generally easy to process, since we can enumerate them… but • Is it a Particle or a Preposition? • George eats up his dinner/George eats his dinner up. • George eats up the street/*George eats the street up. • Articles come in 2 flavors: definite (the) and indefinite (a, an) • What is this in ‘this guy…’?

  13. Choosing a POS Tagset • To do POS tagging, first need to choose a set of tags • Could pick very coarse (small) tagsets • N, V, Adj, Adv. • More commonly used: Brown Corpus (Francis & Kucera ‘82), 1M words, 87 tags – more informative but more difficult to tag • Most commonly used: Penn Treebank: hand-annotated corpus of Wall Street Journal, 1M words, 45 tags

  14. English Parts of Speech • Noun (person, place or thing) • Singular (NN): dog, fork • Plural (NNS): dogs, forks • Proper (NNP, NNPS): John, Springfields • Personal pronoun (PRP): I, you, he, she, it • Wh-pronoun (WP): who, what • Verb (actions and processes) • Base, infinitive (VB): eat • Past tense (VBD): ate • Gerund (VBG): eating • Past participle (VBN): eaten • Non-3rd person singular present tense (VBP): eat • 3rd person singular present tense (VBZ): eats • Modal (MD): should, can • To (TO): to (to eat)

  15. English Parts of Speech (cont.) • Adjective (modify nouns) • Basic (JJ): red, tall • Comparative (JJR): redder, taller • Superlative (JJS): reddest, tallest • Adverb (modify verbs) • Basic (RB): quickly • Comparative (RBR): quicker • Superlative (RBS): quickest • Preposition (IN): on, in, by, to, with • Determiner: • Basic (DT): a, an, the • WH-determiner (WDT): which, that • Coordinating Conjunction (CC): and, but, or • Particle (RP): off (took off), up (put up)

  16. Penn Treebank Tagset

  17. Part of Speech (POS) Tagging • Lowest level of syntactic analysis: John/NNP saw/VBD the/DT saw/NN and/CC decided/VBD to/TO take/VB it/PRP to/IN the/DT table/NN

  18. POS Tagging Process • Usually assume a separate initial tokenization process that separates and/or disambiguates punctuation, including detecting sentence boundaries. • Degree of ambiguity in English (based on the Brown corpus): • 11.5% of word types are ambiguous. • 40% of word tokens are ambiguous. • Average POS tagging disagreement amongst expert human judges for the Penn Treebank was 3.5% • Based on correcting the output of an initial automated tagger, which was deemed to be more accurate than tagging from scratch. • Baseline: Picking the most frequent tag for each specific word type gives about 90% accuracy • 93.7% with a model for unknown words, for the Penn Treebank tagset.
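A sketch of how the ~90% most-frequent-tag baseline can be reproduced, assuming NLTK's copy of the Brown corpus; the exact figure depends on the train/test split and on how unknown words are handled:

```python
from collections import Counter, defaultdict

import nltk
nltk.download("brown", quiet=True)
from nltk.corpus import brown

sents = list(brown.tagged_sents())
train, test = sents[:50000], sents[50000:]

# Most frequent tag for each word type, estimated on the training portion
counts = defaultdict(Counter)
for sent in train:
    for word, tag in sent:
        counts[word][tag] += 1
most_frequent = {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Unknown words fall back to NN, a common default
correct = total = 0
for sent in test:
    for word, tag in sent:
        total += 1
        correct += most_frequent.get(word, "NN") == tag
print(f"most-frequent-tag baseline: {correct / total:.3f}")
```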

  19. POS Tagging Approaches • Rule-Based: Human crafted rules based on lexical and other linguistic knowledge. • Learning-Based: Trained on human annotated corpora like the Penn Treebank. • Statistical models: Hidden Markov Model (HMM), Maximum Entropy Markov Model (MEMM), Conditional Random Field (CRF) • Rule learning: Transformation Based Learning (TBL) • Generally, learning-based approaches have been found to be more effective overall, taking into account the total amount of human expertise and effort involved.

  20. Simple taggers • Default tagger has one tag per word, and assigns it on the basis of dictionary lookup • Tags may indicate ambiguity but not resolve it, e.g. NVB for noun-or-verb • Words may be assigned different tags with associated probabilities • Tagger will assign most probable tag unless • there is some way to identify when a less probable tag is in fact correct • Tag sequences may be defined by regular expressions, and assigned probabilities (including 0 for illegal sequences – negative rules)
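These simple strategies map directly onto tagger classes in NLTK's nltk.tag module; a minimal sketch of dictionary lookup with a default-tag fallback (the class names are NLTK's, the rest is illustrative):

```python
import nltk
from nltk.corpus import brown

train = brown.tagged_sents(categories="news")

# DefaultTagger: one tag for every word.  UnigramTagger: dictionary lookup,
# assigning each word its most probable tag, deferring to the backoff
# tagger for words it has never seen.
default = nltk.DefaultTagger("NN")
lookup = nltk.UnigramTagger(train, backoff=default)

print(lookup.tag("the koala put the keys on the table".split()))
```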

  21. Rule-based taggers • Earliest type of tagging: two stages • Stage 1: look up word in lexicon to give list of potential POSs • Stage 2: Apply rules which certify or disallow tag sequences • Rules originally handwritten; more recently Machine Learning methods can be used

  22. Rule-Based Tagging • Typically… start with a dictionary of words and possible tags • Assign all possible tags to words using the dictionary • Write rules by hand to selectively remove tags • Stop when each word has exactly one (presumably correct) tag

  23. Start with a POS Dictionary • she: PRP • promised: VBN, VBD • to: TO • back: VB, JJ, RB, NN • the: DT • bill: NN, VB • Etc… for the ~100,000 words of English

  24. Assign All Possible POS to Each Word: She/PRP promised/{VBN, VBD} to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}

  25. Apply Rules Eliminating Some POS • E.g., Eliminate VBN if VBD is an option when VBN|VBD follows “<start> PRP”: She/PRP promised/{VBN, VBD} to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}

  26. Apply Rules Eliminating Some POS • E.g., Eliminate VBN if VBD is an option when VBN|VBD follows “<start> PRP”: She/PRP promised/VBD to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}
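A toy rendering of slides 23-26, with the dictionary and the single elimination rule hard-coded; purely illustrative, not a real rule set:

```python
# POS dictionary from slide 23
LEXICON = {
    "she": ["PRP"], "promised": ["VBN", "VBD"], "to": ["TO"],
    "back": ["VB", "JJ", "RB", "NN"], "the": ["DT"], "bill": ["NN", "VB"],
}

def tag(words):
    # Step 1: assign all possible tags from the dictionary
    candidates = [list(LEXICON[w.lower()]) for w in words]
    # Step 2: eliminate VBN if VBD is an option and VBN|VBD follows "<start> PRP"
    for i, tags in enumerate(candidates):
        if i == 1 and candidates[0] == ["PRP"] and {"VBN", "VBD"} <= set(tags):
            tags.remove("VBN")
    return list(zip(words, candidates))

print(tag("She promised to back the bill".split()))
# promised -> ['VBD']; back and bill remain ambiguous for later rules
```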

  27. EngCG (ENGTWOL) Tagger • Richer dictionary includes morphological and syntactic features (e.g. subcategorization frames) as well as possible POS • Uses two-level morphological analysis on input and returns all possible POS • Applies negative constraints (> 3,744) to rule out incorrect POS

  28. Sample ENGTWOL Dictionary

  29. ENGTWOL Tagging: Stage 1 • First Stage: Run words through FST morphological analyzer to get POS info from morphology • E.g.: Pavlov had shown that salivation …
  Pavlov     PAVLOV N NOM SG PROPER
  had        HAVE V PAST VFIN SVO
             HAVE PCP2 SVO
  shown      SHOW PCP2 SVOO SVO SV
  that       ADV
             PRON DEM SG
             DET CENTRAL DEM SG
             CS
  salivation N NOM SG

  30. ENGTWOL Tagging: Stage 2 • Second Stage: Apply NEGATIVE constraints • E.g., the Adverbial that rule: eliminate all readings of that except the one in It isn’t that odd.
  Given input: that
  If (+1 A/ADV/QUANT)   ; if the next word is an adjective/adverb/quantifier
     (+2 SENT-LIM)      ; followed by end of sentence
     (NOT -1 SVOC/A)    ; and the previous word is not a verb like consider,
                        ; which allows adjective complements (e.g. I consider that odd)
  Then eliminate non-ADV tags
  Else eliminate the ADV tag
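The same constraint can be written out procedurally. A hypothetical, much simplified rendering (real EngCG constraints operate over full morphological readings, not bare tag sets):

```python
ADJ_ADV_QUANT = {"A", "ADV", "QUANT"}   # simplified feature check

def adverbial_that(readings, i):
    """readings: list of (word, set-of-candidate-tags) pairs."""
    word, tags = readings[i]
    if word.lower() != "that" or "ADV" not in tags:
        return
    nxt = readings[i + 1][1] if i + 1 < len(readings) else set()
    at_sentence_end = i + 2 >= len(readings)                 # (+2 SENT-LIM)
    prev_svoc = i > 0 and "SVOC/A" in readings[i - 1][1]     # verbs like 'consider'
    if nxt & ADJ_ADV_QUANT and at_sentence_end and not prev_svoc:
        tags.intersection_update({"ADV"})   # eliminate non-ADV tags
    else:
        tags.discard("ADV")                 # eliminate ADV

readings = [("is", {"V"}), ("n't", {"NEG"}),
            ("that", {"ADV", "DET", "PRON", "CS"}), ("odd", {"A"})]
adverbial_that(readings, 2)
print(readings[2])   # ('that', {'ADV'})
```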

  31. How do they work? • Tagger must be “trained” • Many different techniques, but typically … • Small “training corpus” hand-tagged • Tagging rules learned automatically • Rules define most likely sequence of tags • Rules based on • Internal evidence (morphology) • External evidence (context) • Probabilities

  32. What probabilities do we have to learn? • (a) Individual word probabilities: • Probability that a given tag t is appropriate for a given word w • Easy (in principle): learn from the training corpus. E.g., run occurs 4800 times in the training corpus, 3600 times as a verb and 1200 times as a noun, so P(verb|run) = 0.75 • Problem of “sparse data”: • Add a small amount to each count, so we get no zeros

  33. (b) Tag sequence probability: • Probability that a given tag sequence t1,t2,…,tn is appropriate for a given word sequence w1,w2,…,wn • P(t1,t2,…,tn | w1,w2,…,wn) = ??? • Too hard to calculate for the entire sequence: • P(t1,t2,t3,t4,…) = P(t1) P(t2|t1) P(t3|t1,t2) P(t4|t1,t2,t3) … • A subsequence is more tractable • A sequence of 2 or 3 should be enough: • Bigram model: P(ti|t1,…,ti-1) ≈ P(ti|ti-1) • Trigram model: P(ti|t1,…,ti-1) ≈ P(ti|ti-2,ti-1) • N-gram model: P(t1,…,tn) ≈ ∏i P(ti|ti-N+1,…,ti-1)
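A sketch of estimating both kinds of probability from a tagged corpus, here with add-one (Laplace) smoothing to handle the sparse-data problem from the previous slide; the smoothing constant and the use of the Brown corpus are assumptions:

```python
from collections import Counter

from nltk.corpus import brown

word_tag, tag_bigrams, tag_counts = Counter(), Counter(), Counter()
for sent in brown.tagged_sents():
    prev = "<s>"
    for word, tag in sent:
        word_tag[(word.lower(), tag)] += 1
        tag_bigrams[(prev, tag)] += 1
        tag_counts[tag] += 1
        prev = tag

V = len(tag_counts)   # size of the tagset

def p_tag_given_word(tag, word, alpha=1.0):
    # Smoothed P(t | w): "add a small amount so we get no zeros"
    total = sum(word_tag[(word, t)] for t in tag_counts)
    return (word_tag[(word, tag)] + alpha) / (total + alpha * V)

def p_transition(t2, t1, alpha=1.0):
    # Smoothed bigram P(t2 | t1)
    return (tag_bigrams[(t1, t2)] + alpha) / (tag_counts[t1] + alpha * V)

print(p_tag_given_word("VB", "run"), p_transition("NN", "AT"))
```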

  34. More complex taggers • Bigram taggers assign tags on the basis of sequences of two words (usually assigning a tag to word_n on the basis of word_(n-1)) • An nth-order tagger assigns tags on the basis of sequences of n words • As the value of n increases, so does the complexity of the statistical calculation involved in comparing probability combinations

  35. Stochastic taggers • Nowadays, pretty much all taggers are statistics-based and have been since the 1980s (or even earlier … some primitive algorithms were already published in the ’60s and ’70s) • The most common approach is based on Hidden Markov Models (also found in speech processing, etc.)

  36. (Hidden) Markov Models • Probability calculations imply Markov models: we assume that P(t|w) is dependent only on the (or, a sequence of) previous word(s) • (Informally) Markov models are the class of probabilistic models that assume we can predict the future without taking too much account of the past • Markov chains can be modelled by finite state automata: the next state in a Markov chain is always dependent on some finite history of previous states • Model is “hidden” if it is actually a succession of Markov models, whose intermediate states are of no interest
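NLTK packages exactly this kind of model. A sketch using its supervised HMM trainer (class path as in current NLTK releases); note that plain maximum-likelihood estimates give unseen words zero probability, so a real setup would add smoothing:

```python
from nltk.corpus import brown
from nltk.tag.hmm import HiddenMarkovModelTrainer

sents = brown.tagged_sents(categories="news")
train, test = sents[:4000], sents[4000:4200]

# Transition P(t_i | t_i-1) and emission P(w_i | t_i) probabilities are
# estimated directly from the tagged training sentences.
hmm_tagger = HiddenMarkovModelTrainer().train_supervised(train)

print(hmm_tagger.tag("She promised to back the bill".split()))
print(hmm_tagger.accuracy(test))   # .evaluate(test) in older NLTK versions
```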

  37. Supervised vs unsupervised training • Learning tagging rules from a marked-up corpus (supervised learning) gives very good results (98% accuracy) • Though assigning the most probable tag, and “proper noun” to unknowns, will give 90% • But it depends on having a corpus already marked up to a high quality • If this is not available, we have to try something else: • The “forward-backward” algorithm • A kind of “bootstrapping” approach

  38. Forward-backward (Baum-Welch) algorithm • Start with initial probabilities • If nothing known, assume all Ps equal • Adjust the individual probabilities so as to increase the overall probability. • Re-estimate the probabilities on the basis of the last iteration • Continue until convergence • i.e. there is no improvement, or improvement is below a threshold • All this can be done automatically
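NLTK exposes this algorithm as well. A heavily hedged sketch (train_unsupervised is a real method of HiddenMarkovModelTrainer, but the tiny state set and data preparation here are assumptions for illustration):

```python
from nltk.corpus import brown
from nltk.tag.hmm import HiddenMarkovModelTrainer

# Baum-Welch needs the state and symbol inventories fixed in advance
states = ["NN", "VB", "AT", "IN", "JJ"]          # illustrative, tiny tagset
raw_sents = brown.sents()[:200]
symbols = sorted({w.lower() for s in raw_sents for w in s})

# Unlabeled data: (word, None) pairs; the tags are what training estimates
unlabeled = [[(w.lower(), None) for w in s] for s in raw_sents]

trainer = HiddenMarkovModelTrainer(states=states, symbols=symbols)
tagger = trainer.train_unsupervised(unlabeled, max_iterations=5)
```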

  39. Transformation-based tagging • Eric Brill (1993) • Start from an initial tagging, and apply a series of transformations • Transformations are learned as well, from the training data • Captures the tagging data with far fewer parameters than stochastic models • The transformations learned (often) have linguistic “reality”

  40. Transformation-based tagging • Three stages: • Lexical look-up • Lexical rule application for unknown words • Contextual rule application to correct mis-tags

  41. Transformation-based learning • Change tag a to b when: • Internal evidence (morphology) • Contextual evidence • One or more of the preceding/following words has a specific tag • One or more of the preceding/following words is a specific word • One or more of the preceding/following words has a certain form • Order of rules is important • Rules can change a correct tag into an incorrect tag, so another rule might correct that “mistake”

  42. Transformation-based tagging: examples • if a word is currently tagged NN, and has a suffix of length 1 which consists of the letter 's', change its tag to NNS • if a word has a suffix of length 2 consisting of the letter sequence 'ly', change its tag to RB (regardless of the initial tag) • change VBN to VBD if previous word is tagged as NN • Change VBD to VBN if previous word is ‘by’

  43. Transformation-based tagging: example
  Example after lexical lookup:
  Booth/NP killed/VBN Abraham/NP Lincoln/NP
  Abraham/NP Lincoln/NP was/BEDZ shot/VBD by/BY Booth/NP
  He/PPS witnessed/VBD Lincoln/NP killed/VBN by/BY Booth/NP
  Example after application of contextual rule ‘vbn vbd PREVTAG np’:
  Booth/NP killed/VBD Abraham/NP Lincoln/NP
  Abraham/NP Lincoln/NP was/BEDZ shot/VBD by/BY Booth/NP
  He/PPS witnessed/VBD Lincoln/NP killed/VBD by/BY Booth/NP
  Example after application of contextual rule ‘vbd vbn NEXTWORD by’:
  Booth/NP killed/VBD Abraham/NP Lincoln/NP
  Abraham/NP Lincoln/NP was/BEDZ shot/VBN by/BY Booth/NP
  He/PPS witnessed/VBD Lincoln/NP killed/VBN by/BY Booth/NP
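A minimal sketch of applying such contextual rules, with the two rules above encoded as (from_tag, to_tag, condition) triples; illustrative only:

```python
def prevtag(tag):
    return lambda sent, i: i > 0 and sent[i - 1][1] == tag

def nextword(word):
    return lambda sent, i: i + 1 < len(sent) and sent[i + 1][0].lower() == word

RULES = [
    ("VBN", "VBD", prevtag("NP")),     # 'vbn vbd PREVTAG np'
    ("VBD", "VBN", nextword("by")),    # 'vbd vbn NEXTWORD by'
]

def apply_rules(sent):
    for from_tag, to_tag, cond in RULES:   # rule order matters
        sent = [(w, to_tag if t == from_tag and cond(sent, i) else t)
                for i, (w, t) in enumerate(sent)]
    return sent

lookup = [("He", "PPS"), ("witnessed", "VBD"), ("Lincoln", "NP"),
          ("killed", "VBN"), ("by", "BY"), ("Booth", "NP")]
print(apply_rules(lookup))
# killed: VBN -> VBD (follows NP), then VBD -> VBN (next word is 'by')
```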

  44. Templates for TBL

  45. Sample TBL Rule Application • Label every word with its most likely tag. E.g., race occurrences in the Brown corpus: P(NN|race) = .98, P(VB|race) = .02 • is/VBZ expected/VBN to/TO race/NN tomorrow/NN • Then TBL applies the following rule: “Change NN to VB when the previous tag is TO” … • is/VBZ expected/VBN to/TO race/NN tomorrow/NN becomes • is/VBZ expected/VBN to/TO race/VB tomorrow/NN

  46. TBL Tagging Algorithm • Step 1: Label every word with its most likely tag (from dictionary) • Step 2: Check every possible transformation & select the one which most improves tag accuracy (against the gold standard) • Step 3: Re-tag the corpus applying this rule, and add the rule to the end of the rule set • Repeat 2-3 until some stopping criterion is reached, e.g., X% correct with respect to the training corpus • RESULT: Ordered set of transformation rules to use on new data tagged only with most likely POS tags
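NLTK ships an implementation of this loop. A sketch using its Brill trainer over a unigram initial tagger (brill24 is one of NLTK's predefined template sets; treat the exact paths and parameters as assumptions):

```python
import nltk
from nltk.corpus import brown
from nltk.tag.brill import brill24
from nltk.tag.brill_trainer import BrillTaggerTrainer

sents = brown.tagged_sents(categories="news")
train, test = sents[:4000], sents[4000:4500]

# Step 1: initial most-likely-tag labelling (unigram lookup, NN fallback)
initial = nltk.UnigramTagger(train, backoff=nltk.DefaultTagger("NN"))

# Steps 2-3: greedily learn an ordered list of transformation rules
trainer = BrillTaggerTrainer(initial, brill24(), trace=0)
tbl_tagger = trainer.train(train, max_rules=20)

print(tbl_tagger.rules()[:3])    # compact, human-readable rules
print(tbl_tagger.accuracy(test))
```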

  47. TBL Issues • Problem: Could keep applying (new) transformations ad infinitum • Problem: Rules are learned in ordered sequence • Problem: Rules may interact • But: Rules are compact and can be inspected by humans

  48. Evaluating Tagging Approaches • For any NLP problem, we need to know how to evaluate our solutions • Possible gold standards (the ceiling): • Annotated naturally occurring corpus • Human task performance (96-97%) • How well do humans agree? • Kappa statistic: average pairwise agreement corrected for chance agreement • Can be hard to obtain for some tasks: sometimes humans don’t agree
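For two judges, chance-corrected agreement can be computed directly. A small sketch of Cohen's kappa, one common instantiation of the kappa statistic mentioned above:

```python
from collections import Counter

def cohens_kappa(tags_a, tags_b):
    """(P_o - P_e) / (1 - P_e): observed agreement corrected by the
    agreement expected if each judge tagged at random from their own
    overall tag distribution."""
    n = len(tags_a)
    p_o = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    dist_a, dist_b = Counter(tags_a), Counter(tags_b)
    p_e = sum((dist_a[t] / n) * (dist_b[t] / n) for t in dist_a)
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa(["NN", "VB", "NN", "DT"], ["NN", "NN", "NN", "DT"]))  # ~0.56
```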

  49. Baseline: how well does a simple method do? • For tagging, assign the most common tag for each word (90%) • How much improvement do we get over the baseline?

  50. Methodology: Error Analysis • Confusion matrix: • E.g. which tags did we most often confuse with which other tags? • How much of the overall error does each confusion account for?
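A sketch of producing such a confusion matrix with NLTK's ConfusionMatrix helper, flattening gold and predicted tag sequences (the unigram tagger is just a stand-in for whichever tagger is being analyzed):

```python
import nltk
from nltk.corpus import brown

sents = brown.tagged_sents(categories="news")
tagger = nltk.UnigramTagger(sents[:4000], backoff=nltk.DefaultTagger("NN"))

gold, predicted = [], []
for sent in sents[4000:4200]:
    words = [w for w, _ in sent]
    gold.extend(t for _, t in sent)
    predicted.extend(t for _, t in tagger.tag(words))

# Rows = gold tags, columns = predicted tags; the largest off-diagonal
# cells show which confusions account for most of the overall error.
cm = nltk.ConfusionMatrix(gold, predicted)
print(cm.pretty_format(sort_by_count=True, truncate=10))
```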
