Resources for multilingual processing


  1. Resources for multilingual processing Georgiana Puşcaşu University of Wolverhampton, UK

  2. Outline • Motivation and goals • NLP Methods, Resources and Applications • Text Segmentation • Part of Speech Tagging • Stemming • Lemmatization • Syntactic Parsing • Named Entity Recognition • Term Extraction and Terminology Data Management Tools • Text Summarization • Language Identification • Statistical Language Modeling Toolkits • Corpora • Conclusions

  3. Motivation and goals Motivation • Most NLP research and resources deal with English • The Web is multilingual, and ideally the current NLP state of the art should be attained for all languages Goals • To present already available textual methods that can support multilingual NLP • To offer an inventory of existing tools and resources that can be exploited in order to avoid reinventing the wheel

  4. Text Segmentation • Electronic text is essentially just a sequence of characters • Before any real processing, text needs to be segmented • Text segmentation involves • Low-level text segmentation (performed at the initial stages of text processing) • Tokenization • Sentence splitting • High-level text segmentation • Intra-sentential: segmentation of linguistic groups such as Named Entities, Noun Phrases, splitting sentences into clauses • Inter-Sentential: grouping sentences and paragraphs into discourse topics

  5. Tokenization • Tokenization is the process of segmenting text into linguistic units such as words, punctuation, numbers, alphanumerics, etc. • It is normally the first step in the majority of text processing applications • Tokenization in languages that are: • segmented: is considered a relatively easy and uninteresting part of text processing (words delimited by blank spaces and punctuation) • non-segmented: is more challenging (no explicit boundaries between words)

  6. Tokenization in segmented languages • Segmented languages: all modern languages that use a Latin-, Cyrillic- or Greek-based writing system • Traditionally, tokenization rules are written using regular expressions • Problems: • Abbreviations: solved by lists of abbreviations (pre-compiled or automatically extracted from a corpus) and guessing rules • Hyphenated words: “One word or two?” • Numerical and special expressions (email addresses, URLs, telephone numbers, etc.) are handled by specialized tokenizers (preprocessors) • Apostrophe: (they’re => they + ’re; don’t => do + n’t) solved by language-specific rules
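The regex-based rules described above can be sketched in a few lines; the abbreviation list, clitic inventory, and token classes below are toy choices for illustration, not those of any tokenizer listed later:

```python
import re

# Toy rule-based tokenizer for segmented languages. Alternatives are
# tried in order: abbreviations, the stem before an n't clitic, clitics,
# numbers, (hyphenated) words, and any leftover non-space character.
TOKEN_RE = re.compile(r"""
    (?:Mr|Mrs|Dr|etc)\.        # tiny abbreviation list (illustrative)
  | \w+?(?=n't\b)              # verb stem before the n't clitic (do|n't)
  | n't | 're | 've | 'll      # English clitics split off as tokens
  | \d+(?:\.\d+)?              # numbers with an optional decimal part
  | \w+(?:-\w+)*               # words, possibly hyphenated
  | \S                         # any remaining symbol (punctuation etc.)
""", re.VERBOSE)

def tokenize(text):
    return TOKEN_RE.findall(text)

tokenize("Dr. Smith doesn't own 3.5 yachts.")
# -> ['Dr.', 'Smith', 'does', "n't", 'own', '3.5', 'yachts', '.']
```

The order of the alternatives matters: the abbreviation rule must fire before the generic word rule, and the clitic rules before the period is treated as plain punctuation.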

  7. Tokenization in non-segmented languages • Non-segmented languages: Oriental languages • Problems: • tokens are written directly adjacent to each other • almost all characters can be one-character words by themselves but can also form multi-character words • Solutions: • Pre-existing lexico-grammatical knowledge • Machine learning employed to extract segmentation regularities from pre-segmented data • Statistical methods: character n-grams
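One common lexicon-driven baseline for non-segmented text is greedy longest matching (“maximum matching”); a minimal sketch with an invented toy lexicon:

```python
def max_match(text, lexicon):
    """Greedy longest-match segmentation: at each position take the
    longest lexicon entry starting there, falling back to a single
    character when nothing matches."""
    tokens, i = [], 0
    longest = max(map(len, lexicon))
    while i < len(text):
        for j in range(min(len(text), i + longest), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

# Toy Chinese example: greedy matching picks "Beijing University" + "student"
max_match("北京大学生", {"北京", "大学", "北京大学", "学生", "生"})
# -> ['北京大学', '生']
```

The example also shows the weakness of the greedy strategy: a different, equally plausible segmentation (北京 + 大学生) is never considered, which is why statistical and machine-learning methods are preferred.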

  8. Tokenizers (1) ALEMBIC Author(s): M. Vilain, J. Aberdeen, D. Day, J. Burger, The MITRE Corporation Purpose: Alembic is a multi-lingual text processing system. Among other tools, it incorporates tokenizers for: English, Spanish, Japanese, Chinese, French, Thai. Access: Free by contacting day@mitre.org ELLOGON Author(s): George Petasis, Vangelis Karkaletsis, National Center for Scientific Research, Greece Purpose: Ellogon is a multi-lingual, cross-platform, general-purpose language engineering environment. One of the provided components that can be adapted to various languages can perform tokenization. Supported languages: Unicode. Access: Free at http://www.ellogon.org/ GATE (General Architecture for Text Engineering) Author(s): NLP Group, University of Sheffield, UK Access: Free but requires registration at http://gate.ac.uk/ HEART Of GOLD Author(s): Ulrich Schäfer, DFKI Language Technology Lab, Germany Purpose: Supported languages: Unicode, Spanish, Polish, Norwegian, Japanese, Italian, Greek, German, French, English, Chinese. Access: Free at http://heartofgold.dfki.de/

  9. Tokenizers (2) LT TTT Author(s): Language Technology Group, University of Edinburgh, UK Purpose: LT TTT is a text tokenization system and toolset which enables users to produce a swift and individually-tailored tokenisation of text. Access: Free at http://www.ltg.ed.ac.uk/software/ttt/ MXTERMINATOR Author(s): Adwait Ratnaparkhi Platforms: Platform independent Access: Free at http://www.cis.upenn.edu/~adwait/statnlp.html QTOKEN Author(s): Oliver Mason, Birmingham University, UK Platforms: Platform independent Access: Free at http://www.english.bham.ac.uk/staff/omason/software/qtoken.html SProUT Author(s): Feiyu Xu, Tim vor der Brück, LT-Lab, DFKI GmbH, Germany Purpose: SProUT provides tokenization for Unicode, Spanish, Japanese, German, French, English, Chinese. Access: Not free. More information at http://sprout.dfki.de/

  10. Tokenizers (3) THE QUIPU GROK LIBRARY Author(s): Gann Bierner and Jason Baldridge, University of Edinburgh, UK Access: Free at https://sourceforge.net/project/showfiles.php?group_id=4083 TWOL Author(s): Lingsoft Purpose: Supported languages: Swedish, Norwegian, German, Finnish, English, Danish. Access: Not free. More information at http://www.lingsoft.fi/

  11. Sentence splitting • Sentence splitting is the task of segmenting text into sentences • In the majority of cases it is a simple task: . ? ! usually signal a sentence boundary • However, in cases when a period denotes a decimal point or is a part of an abbreviation, it does not always signal a sentence break. • The simplest algorithm is known as ‘period-space-capital letter’ (not very good performance). Can be improved with lists of abbreviations, a lexicon of frequent sentence initial words and/or machine learning techniques
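The improved ‘period-space-capital letter’ algorithm can be sketched as follows; the abbreviation list here is a toy stand-in for the pre-compiled lists mentioned above:

```python
import re

# Hypothetical abbreviation list; real systems use much larger,
# pre-compiled or corpus-derived lists.
ABBREVIATIONS = {"Mr.", "Mrs.", "Dr.", "etc.", "e.g."}

def split_sentences(text):
    """Split after . ? ! followed by whitespace and a capital letter,
    unless the token ending the candidate sentence is a known
    abbreviation."""
    sentences, start = [], 0
    for m in re.finditer(r"[.?!](?=\s+[A-Z])", text):
        last_token = text[start:m.end()].split()[-1]
        if last_token not in ABBREVIATIONS:
            sentences.append(text[start:m.end()].strip())
            start = m.end()
    rest = text[start:].strip()
    if rest:
        sentences.append(rest)
    return sentences

split_sentences("Dr. Smith arrived. He sat down.")
# -> ['Dr. Smith arrived.', 'He sat down.']
```

Without the abbreviation check, the period in “Dr.” followed by “Smith” would wrongly trigger a sentence break.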

  12. Part of Speech (POS) Tagging • POS Tagging is the process of assigning a part-of-speech or lexical class marker to each word in a corpus (Jurafsky and Martin) • Example: assigning tags such as N, V, P, DET to the words of “The couple spent the honeymoon on a yacht”

  13. POS Tagger Prerequisites • Lexicon of words • For each word in the lexicon information about all its possible tags according to a chosen tagset • Different methods for choosing the correct tag for a word: • Rule-based methods • Statistical methods • Transformation Based Learning (TBL) methods

  14. POS Tagger Prerequisites: Lexicon of words • Classes of words • Closed classes: a fixed set • Prepositions: in, by, at, of, … • Pronouns: I, you, he, her, them, … • Particles: on, off, … • Determiners: the, a, an, … • Conjunctions: or, and, but, … • Auxiliary verbs: can, may, should, … • Numerals: one, two, three, … • Open classes: new ones can be created all the time, therefore it is not possible that all words from these classes appear in the lexicon • Nouns • Verbs • Adjectives • Adverbs

  15. POS Tagger Prerequisites: Tagsets • To do POS tagging, one needs to choose a standard set of tags to work with • A tagset is normally sophisticated and linguistically well grounded • Could pick a very coarse tagset: N, V, Adj, Adv. • A more commonly used set is finer grained, the “UPenn TreeBank tagset” (48 tags) • Even more fine-grained tagsets exist

  16. POS Tagger Prerequisites: Tagset example – UPenn tagset

  17. POS Tagging Rule based methods • Start with a dictionary • Assign all possible tags to words from the dictionary • Write rules by hand to selectively remove tags • Leaving the correct tag for each word

  18. POS Tagging Statistical methods (1) The Most Frequent Tag Algorithm • Training • Take a tagged corpus • Create a dictionary containing every word in the corpus together with all its possible tags • Count the number of times each tag occurs for a word and compute the probability P(tag|word); then save all probabilities • Tagging • Given a new sentence, for each word, pick the most frequent tag for that word from the corpus
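The training and tagging steps above amount to counting and a dictionary lookup; a minimal sketch (the default tag for unknown words is an assumption, nouns being the usual guess):

```python
from collections import Counter, defaultdict

def train_most_frequent(tagged_corpus):
    """tagged_corpus: iterable of (word, tag) pairs from a tagged corpus.
    Returns a dictionary mapping each word to its most frequent tag."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag_sentence(words, model, default="NN"):
    # Unknown words fall back to a default tag.
    return [(w, model.get(w, default)) for w in words]
```

Despite its simplicity, always choosing the most frequent tag is a surprisingly strong baseline for English.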

  19. POS Tagging Statistical methods (2) Bigram HMM Tagger • Training • Create a dictionary containing every word in the corpus together with all its possible tags • Compute the probability of each tag generating a certain word, compute the probability each tag is preceded by a specific tag (Bigram HMM Tagger => probability is dependent only on the previous tag) • Tagging • Given a new sentence, for each word, pick the most likely tag for that word using the parameters obtained after training • HMM Taggers choose the tag sequence that maximizes this formula: P(word|tag) * P(tag|previous tag)

  20. Bigram HMM Tagging: Example People/NNS are/VBZ expected/VBN to/TO queue/VB at/IN the/DT registry/NNS The/DT police/NN is/VBZ to/TO blame/VB for/IN the/DT queue/NN • to/TO queue/??? the/DT queue/??? • t_k = argmax_k P(t_k|t_k-1) * P(w_i|t_k) • i = position of the word in the sequence, k = index among the possible tags for the word “queue” • How do we compute P(t_k|t_k-1)? • count(t_k-1 t_k)/count(t_k-1) • How do we compute P(w_i|t_k)? • count(w_i t_k)/count(t_k) • max[P(VB|TO)*P(queue|VB) , P(NN|TO)*P(queue|NN)] • Corpus estimates: • P(NN|TO) = 0.021 * P(queue|NN) = 0.00041 => 0.000007 • P(VB|TO) = 0.34 * P(queue|VB) = 0.00003 => 0.00001
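The decision for “queue” can be reproduced directly from the corpus estimates quoted on this slide:

```python
# Corpus estimates quoted on the slide
p_tag_given_prev = {("TO", "VB"): 0.34, ("TO", "NN"): 0.021}
p_word_given_tag = {("queue", "VB"): 0.00003, ("queue", "NN"): 0.00041}

def best_tag(word, prev_tag, candidates):
    """Pick the tag maximizing P(tag|previous tag) * P(word|tag)."""
    return max(candidates,
               key=lambda t: p_tag_given_prev[(prev_tag, t)]
                             * p_word_given_tag[(word, t)])

best_tag("queue", "TO", ["VB", "NN"])
# -> 'VB'  (0.34 * 0.00003 > 0.021 * 0.00041)
```

A full HMM tagger maximizes this product over the whole tag sequence with the Viterbi algorithm rather than word by word.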

  21. POS Tagging Transformation Based Tagging (1) • Combination of rule-based and stochastic tagging methodologies • Like rule-based approaches, because rule templates are used to learn transformations • Like stochastic approaches, because machine learning is used, with a tagged corpus as input • Input: • tagged corpus • lexicon (with all possible tags for each word)

  22. POS Tagging Transformation Based Tagging (2) • Basic Idea: • Set the most probable tag for each word as a start value • Change tags according to rules of type “if word-1 is a determiner and word is a verb then change the tag to noun” in a specific order • Training is done on tagged corpus: 1. Write a set of rule templates 2. Among the set of rules, find one with highest score 3. Continue from 2 until lowest score threshold is passed 4. Keep the ordered set of rules • Rules make errors that are corrected by later rules

  23. Transformation Based Tagging Example • Tagger labels every word with its most-likely tag • For example: race has the following probabilities in the Brown corpus: • P(NN|race) = 0.98 • P(VB|race) = 0.02 • Transformation rules make changes to tags • “Change NN to VB when previous tag is TO”: … is/VBZ expected/VBN to/TO race/NN tomorrow/NN becomes … is/VBZ expected/VBN to/TO race/VB tomorrow/NN
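A single transformation of the type shown (“change NN to VB when the previous tag is TO”) can be applied like this; the rule template is just the one from the example, not Brill's full inventory:

```python
def apply_transformation(tags, from_tag, to_tag, when_prev):
    """One Brill-style transformation: change from_tag to to_tag
    whenever the previous tag equals when_prev."""
    out = list(tags)
    for i in range(1, len(out)):
        if out[i] == from_tag and out[i - 1] == when_prev:
            out[i] = to_tag
    return out

apply_transformation(["VBZ", "VBN", "TO", "NN", "NN"], "NN", "VB", "TO")
# -> ['VBZ', 'VBN', 'TO', 'VB', 'NN']
```

Training selects and orders many such rules by the number of errors each one fixes on the tagged corpus.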

  24. POS Taggers (1) ACOPOST Author(s): Jochen Hagenstroem, Kilian Foth, Ingo Schröder, Parantu Shah Purpose: ACOPOST is a collection of POS taggers. It implements and extends well-known machine learning techniques and provides a uniform environment for testing. Platforms: All POSIX (Linux/BSD/UNIX-like OSes) Access: Free at http://sourceforge.net/projects/acopost/ BRILL’S TAGGER Author(s): Eric Brill Purpose: Transformation Based Learning POS Tagger Access: Free at http://www.cs.jhu.edu/~brill fnTBL Author(s): Radu Florian and Grace Ngai, Johns Hopkins University, USA Purpose: fnTBL is a customizable, portable and free source machine-learning toolkit primarily oriented towards Natural Language-related tasks (POS tagging, base NP chunking, text chunking, end-of-sentence detection). It is currently trained for English and Swedish. Platforms: Linux, Solaris, Windows Access: Free at http://nlp.cs.jhu.edu/~rflorian/fntbl/

  25. POS Taggers (2) LINGSOFT Author(s): LINGSOFT, Finland Purpose: Among the services offered by Lingsoft one can find POS taggers for Danish, English, German, Norwegian, Swedish. Access: Not free. Demos at http://www.lingsoft.fi/demos.html LT POS (LT TTT) Author(s): Language Technology Group, University of Edinburgh, UK Purpose: The LT POS part of speech tagger uses a Hidden Markov Model disambiguation strategy. It is currently trained only for English. Access: Free but requires registration at http://www.ltg.ed.ac.uk/software/pos/index.html MACHINESE PHRASE TAGGER Author(s): Connexor Purpose: Machinese Phrase Tagger is a set of program components that perform basic linguistic analysis tasks at very high speed and provide relevant information about words and concepts to volume-intensive applications. Available for: English, French, Spanish, German, Dutch, Italian, Finnish. Access: Not free. Free access to online demo at http://www.connexor.com/demo/tagger/

  26. POS Taggers (3) MXPOST Author(s): Adwait Ratnaparkhi Purpose: MXPOST is a maximum entropy POS tagger. The downloadable version includes a Wall St. Journal tagging model for English, but can also be trained for different languages. Platforms: Platform independent Access: Free at http://www.cis.upenn.edu/~adwait/statnlp.html MEMORY BASED TAGGER Author(s): ILK - Tilburg University, CNTS - University of Antwerp Purpose: Memory-based tagging is based on the idea that words occurring in similar contexts will have the same POS tag. The idea is implemented using the memory-based learning software package TiMBL. Access: Usable by email or on the Web at http://ilk.uvt.nl/software.html#mbt µ-TBL Author(s): Torbjörn Lager Purpose: The µ-TBL system is a powerful environment in which to experiment with transformation-based learning. Platforms: Windows Access: Free at http://www.ling.gu.se/~lager/mutbl.html

  27. POS Taggers (4) QTAG Author(s): Oliver Mason, Birmingham University, UK Purpose: QTag is a probabilistic part-of-speech tagger. Resource files for English and German can be downloaded together with the tool. Platforms: Platform independent Access: Free at http://www.english.bham.ac.uk/staff/omason/software/qtag.html STANFORD POS TAGGER Author(s): Kristina Toutanova, Stanford University, USA Purpose: The Stanford POS tagger is a log-linear tagger written in Java. The downloadable package includes components for command-line invocation and a Java API both for training and for running a trained tagger. Platforms: Platform independent Access: Free at http://nlp.stanford.edu/software/tagger.shtml SVM TOOL Author(s): TALP Research Center, Technical University of Catalonia, Spain Purpose: The SVMTool is a simple and effective part-of-speech tagger based on Support Vector Machines. The SVMLight implementation of Vapnik's Support Vector Machine by Thorsten Joachims has been used to train the models for Catalan, English and Spanish. Access: Free. SVMTool at http://www.lsi.upc.es/~nlp/SVMTool/ and SVMLight at http://svmlight.joachims.org/

  28. POS Taggers (5) TnT Author(s): Thorsten Brants, Saarland University, Germany Purpose: TnT, the short form of Trigrams'n'Tags, is a very efficient statistical part-of-speech tagger that is trainable on different languages and virtually any tagset. The tagger is an implementation of the Viterbi algorithm for second order Markov models. TnT comes with two language models, one for German, and one for English. Platforms: Platform independent. Access: Free but requires registration at http://www.coli.uni-saarland.de/~thorsten/tnt/ TREETAGGER Author(s): Helmut Schmid, Institute for Computational Linguistics, University of Stuttgart, Germany Purpose: The TreeTagger has been successfully used to tag German, English, French, Italian, Spanish, Greek and old French texts and is easily adaptable to other languages if a lexicon and a manually tagged training corpus are available. Access: Free at http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/DecisionTreeTagger.html

  29. POS Taggers (6) Xerox XRCE MLTT Part Of Speech Taggers Author(s): Xerox Research Centre Europe Purpose: Xerox has developed morphological analysers and part-of-speech disambiguators for various languages including Dutch, English, French, German, Italian, Portuguese, Spanish. More recent developments include Czech, Hungarian, Polish and Russian. Access: Not free. Demos at http://www.xrce.xerox.com/competencies/content-analysis/fsnlp/tagger.en.html YAMCHA Author(s): Taku Kudo Purpose: YamCha is a generic, customizable, and open source text chunker oriented towards many NLP tasks, such as POS tagging, Named Entity Recognition, base NP chunking, and text chunking. YamCha uses Support Vector Machines (SVMs), first introduced by Vapnik in 1995, and is the system that performed best in the CoNLL-2000 shared task on chunking and base NP chunking. Platforms: Linux, Windows Access: Free at http://www2.chasen.org/~taku/software/yamcha/

  30. Stemming • Stemmers are used in IR to reduce as many related words and word forms as possible to a common canonical form – not necessarily the base form – which can then be used in the retrieval process. • Frequently, the performance of an IR system will be improved if term groups such as: CONNECT, CONNECTED, CONNECTING, CONNECTION, CONNECTIONS are conflated into a single term (by removal of the various suffixes -ED, -ING, -ION, -IONS to leave the single term CONNECT). The suffix stripping process will reduce the total number of terms in the IR system, and hence reduce the size and complexity of the data in the system, which is always advantageous.

  31. The Porter Stemmer • A conflation stemmer developed by Martin Porter at the University of Cambridge in 1980 • Idea: the English suffixes (approximately 1200) are mostly made up of a combination of smaller and simpler suffixes • Can be adapted to other languages (needs a list of suffixes and context sensitive rules)
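A drastically simplified suffix-stripping pass in the spirit of the Porter stemmer; the real algorithm applies several ordered rule phases with measure-based conditions, so this toy version only conflates families like the CONNECT example from the previous slide:

```python
# Ordered suffix rules; longer suffixes are tried first so that
# "-ions" is stripped before the bare plural "-s".
SUFFIX_RULES = ["ions", "ion", "ing", "ed", "s"]

def stem(word):
    for suffix in SUFFIX_RULES:
        # Only strip when a reasonably long stem remains (a crude
        # stand-in for Porter's measure-based conditions).
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: len(word) - len(suffix)]
    return word

{stem(w) for w in ["connect", "connected", "connecting",
                   "connection", "connections"]}
# -> {'connect'}
```

Note that the output need not be a real word: stemmers aim at a common canonical form for retrieval, not at the linguistic base form.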

  32. Stemmers (1) ELLOGON Author(s): George Petasis, Vangelis Karkaletsis, National Center for Scientific Research, Greece Access: Free at http://www.ellogon.org/ FSA Author(s): Jan Daciuk, Rijksuniversiteit Groningen and Technical University of Gdansk, Poland Purpose: Supported languages: German, English, French, Polish. Access: Free at http://juggernaut.eti.pg.gda.pl/~jandac/fsa.html HEART Of GOLD Author(s): Ulrich Schäfer, DFKI Language Technology Lab, Germany Purpose: Supported languages: Unicode, Spanish, Polish, Norwegian, Japanese, Italian, Greek, German, French, English, Chinese. Access: Free at http://heartofgold.dfki.de/

  33. Stemmers (2) LANGSUITE Author(s): PetaMem Purpose: Supported languages: Unicode, Spanish, Polish, Italian, Hungarian, German, French, English, Dutch, Danish, Czech. Access: Not free. More information at http://www.petamem.com/ SNOWBALL Purpose: Presentation of stemming algorithms, and Snowball stemmers, for English, Russian, Romance languages (French, Spanish, Portuguese and Italian), German, Dutch, Swedish, Norwegian, Danish and Finnish. Access: Free at http://www.snowball.tartarus.org/ SProUT Author(s): Feiyu Xu, Tim vor der Brück, LT-Lab, DFKI GmbH, Germany Purpose: Available for: Unicode, Spanish, Japanese, German, French, English, Chinese Access: Not free. More information at http://sprout.dfki.de/ TWOL Author(s): Lingsoft Purpose: Supported languages: Swedish, Norwegian, German, Finnish, English, Danish Access: Not free. More information at http://www.lingsoft.fi/

  34. Lemmatization • The process of grouping the inflected forms of a word together under a base form, or of recovering the base form from an inflected form, e.g. grouping the inflected forms COME, COMES, COMING, CAME under the base form COME • Dictionary based • Input: token + POS • Output: lemma • Note: needs POS information • Example: • left+v -> leave, left+a -> left • It is the same as looking for a transformation to apply on a word to get its normalized form (word endings: what word suffix should be removed and/or added to get the normalized form) => lemmatization can be modeled as a machine learning problem
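The dictionary-based scheme, keyed on (token, POS) exactly as in the left+v / left+a example, with an invented toy dictionary:

```python
# Toy lemma dictionary; a real lemmatizer uses a full morphological lexicon.
LEMMA_DICT = {
    ("left", "v"): "leave",
    ("left", "a"): "left",
    ("came", "v"): "come",
    ("coming", "v"): "come",
}

def lemmatize(token, pos):
    """Dictionary lookup on (token, POS); unknown pairs fall back
    to the token itself."""
    return LEMMA_DICT.get((token, pos), token)

lemmatize("left", "v")  # -> 'leave'
lemmatize("left", "a")  # -> 'left'
```

The two “left” entries show why the POS tag is required: without it the lookup key would be ambiguous.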

  35. Lemmatizers (1) CONNEXOR LANGUAGE ANALYSIS TOOLS Author(s): Connexor, Finland Purpose: Supported languages: English, French, Spanish, German, Dutch, Italian, Finnish. Access: Not free. Demos at http://www.conexor.fi/ ELLOGON Author(s): George Petasis, Vangelis Karkaletsis, National Center for Scientific Research, Greece Access: Free at http://www.ellogon.org/ FSA Author(s): Jan Daciuk, Rijksuniversiteit Groningen and Technical University of Gdansk, Poland Purpose: Supported languages: German, English, French, Polish. Access: Free at http://juggernaut.eti.pg.gda.pl/~jandac/fsa.html MBLEM Author(s): ILK Research Group, Tilburg University Purpose: MBLEM is a lemmatizer for English, German, and Dutch. Access: Demo at http://ilk.uvt.nl/mblem/

  36. Lemmatizers (2) SWESUM Author(s): Hercules Dalianis, Martin Hassel, KTH, Euroling AB Purpose: Supported languages: Swedish, Spanish, German, French, English Access: Free at http://www.euroling.se/produkter/swesum.html TREETAGGER Author(s): Helmut Schmid, Institute for Computational Linguistics, University of Stuttgart, Germany Purpose: The TreeTagger has been successfully used for German, English, French, Italian, Spanish, Greek and old French texts and is easily adaptable to other languages if a lexicon is available. Access: Free at http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/DecisionTreeTagger.html TWOL Author(s): Lingsoft Purpose: Supported languages: Swedish, Norwegian, German, Finnish, English, Danish Access: Not free. More information at http://www.lingsoft.fi/

  37. Syntactic Parsing • Syntax refers to the way words are arranged together and the relationship between them • Parsing is the process of using a grammar to assign a syntactic analysis to a string of words • Approaches: • Shallow Parsing • Dependency Parsing • Context-Free Parsing

  38. Shallow Parsing • Partition the input into a sequence of non-overlapping units, or chunks, each a sequence of words labelled with a syntactic category and possibly a marking to indicate which word is the head of the chunk • How? • Set of regular expressions over POS labels • Training the chunker on manually marked up text
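Chunking with “a set of regular expressions over POS labels” can be sketched by matching a pattern over the tag sequence; the DET? ADJ* NOUN+ pattern and the tagset below are toy assumptions:

```python
import re

def np_chunks(tags):
    """Base-NP chunking as a regular expression over a POS-tag
    sequence, using the toy pattern DET? ADJ* NOUN+.
    Returns (start, end) spans into the tag list."""
    s = " ".join(tags) + " "
    spans = []
    for m in re.finditer(r"(?:DET )?(?:ADJ )*(?:NOUN )+", s):
        # Map character offsets back to token indices by counting spaces.
        start = s[: m.start()].count(" ")
        end = start + m.group(0).count(" ")
        spans.append((start, end))
    return spans

np_chunks(["DET", "ADJ", "NOUN", "VERB", "DET", "NOUN"])
# -> [(0, 3), (4, 6)]
```

Trained chunkers (fnTBL, YamCha below) learn such patterns from marked-up text instead of hand-writing them.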

  39. Dependency Parsing • Based on dependency grammars, where a syntactic analysis takes the form of a set of head-modifier dependency links between words, each link labelled with the grammatical function of the modifying word with respect to the head • Parser first labels each word with all possible function types and then applies handwritten rules to introduce links between specific types and remove other function-type readings

  40. Context-Free (CF) Parsing • CF parsing algorithms form the basis for almost all approaches to parsing that build hierarchical phrase structure • CFG Example: • S -> NP VP • NP -> Det NOMINAL • NOMINAL -> Noun • VP -> Verb • Det -> a • Noun -> flight • Verb -> left • A derivation is a sequence of rules applied to a string that accounts for that string (derivation tree) • Parsing is the process of taking a string and a grammar and returning one (or more) parse tree(s) for that string • Treebanks = parsed corpora in the form of trees

  41. Probabilistic CFGs • Assigning probabilities to parse trees • Attach probabilities to grammar rules • The expansions for a given non-terminal sum to 1 • A derivation (tree) consists of the set of grammar rules that are in the tree • The probability of a tree is just the product of the probabilities of the rules in the derivation. • Needed: grammar, dictionary with POS, parser • Task is to find the max probability tree for an input
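Computing the probability of a tree as the product of its rule probabilities can be sketched over the toy grammar from the previous slide; the probabilities themselves are invented for illustration (in a full grammar the expansions of each non-terminal would sum to 1):

```python
# Invented probabilities for the toy grammar S -> NP VP, etc.
RULE_PROBS = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "NOMINAL")): 0.4,
    ("NOMINAL", ("Noun",)): 0.7,
    ("VP", ("Verb",)): 0.3,
    ("Det", ("a",)): 0.5,
    ("Noun", ("flight",)): 0.02,
    ("Verb", ("left",)): 0.05,
}

def tree_prob(tree):
    """tree is (label, children); leaves are plain strings.
    P(tree) = product of the probabilities of the rules used."""
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = RULE_PROBS[(label, rhs)]
    for child in children:
        if not isinstance(child, str):
            p *= tree_prob(child)
    return p

t = ("S", [("NP", [("Det", ["a"]),
                   ("NOMINAL", [("Noun", ["flight"])])]),
           ("VP", [("Verb", ["left"])])])
tree_prob(t)  # 1.0 * 0.4 * 0.7 * 0.3 * 0.5 * 0.02 * 0.05
```

A probabilistic parser searches for the tree maximizing this product for a given input string.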

  42. Noun Phrase (NP) Chunkers fnTBL Author(s): Radu Florian and Grace Ngai, Johns Hopkins University, USA Purpose: fnTBL is a customizable, portable and free source machine-learning toolkit primarily oriented towards Natural Language-related tasks (POS tagging, base NP chunking, text chunking, end-of-sentence detection, word sense disambiguation). It is currently trained for English and Swedish. Platforms: Linux, Solaris, Windows Access: Free at http://nlp.cs.jhu.edu/~rflorian/fntbl/ YAMCHA Author(s): Taku Kudo Purpose: YamCha is a generic, customizable, and open source text chunker oriented towards many NLP tasks, such as POS tagging, Named Entity Recognition, base NP chunking, and text chunking. YamCha uses Support Vector Machines (SVMs), first introduced by Vapnik in 1995, and is the system that performed best in the CoNLL-2000 shared task on chunking and base NP chunking. Platforms: Linux, Windows Access: Free at http://www2.chasen.org/~taku/software/yamcha/

  43. Syntactic parsers MACHINESE PHRASE TAGGER Author(s): Connexor Purpose: Machinese Phrase Tagger is a set of program components that perform basic linguistic analysis tasks at very high speed and provide relevant information about words and concepts to volume-intensive applications. Available for: English, French, Spanish, German, Dutch, Italian, Finnish. Access: Not free. Free access to online demo at http://www.connexor.com/demo/tagger/

  44. Named Entity Recognition • Identification of proper names in texts, and their classification into a set of predefined categories of interest: • entities: organizations, persons, locations • temporal expressions: time, date • quantities: monetary values, percentages, numbers • Two kinds of approaches: • Knowledge Engineering: rule based; developed by experienced language engineers; makes use of human intuition; requires only a small amount of training data; very time consuming; some changes may be hard to accommodate • Learning Systems: use statistics or other machine learning; developers do not need LE expertise; require large amounts of annotated training data; some changes may require re-annotation of the entire training corpus

  45. Named Entity Recognition: Knowledge engineering approach • identification of named entities in two steps: • recognition patterns expressed as WFSA (Weighted Finite-State Automata) are used to identify phrases containing potential candidates for named entities (longest match strategy) • additional constraints (depending on the type of candidate) are used for validating the candidates • usage of an on-line base lexicon of geographical names and first names

  46. Named Entity Recognition: Problems • Variation of NEs, e.g. John Smith, Mr. Smith, John • Since named entities may appear without designators (companies, persons), a dynamic lexicon for storing such named entities is used Example: “Mars Ltd is a wholly-owned subsidiary of Food Manufacturing Ltd, a non-trading company registered in England. Mars is controlled by members of the Mars family.” • Resolution of type ambiguity using the dynamic lexicon: if an expression can be a person name or a company name (Martin Marietta Corp.), then use the type of the last entry inserted into the dynamic lexicon for making the decision. • Issues of style, structure, domain, genre etc. • Punctuation, spelling, spacing, formatting

  47. Named Entity Recognizers (1) ELLOGON Author(s): George Petasis, Vangelis Karkaletsis, National Center for Scientific Research, Greece Purpose: Available for Unicode. Access: Free at http://www.ellogon.org/ HEART Of GOLD Author(s): Ulrich Schäfer, DFKI Language Technology Lab, Germany Purpose: Supported languages: Unicode, Spanish, Polish, Norwegian, Japanese, Italian, Greek, German, French, English, Chinese. Access: Free at http://heartofgold.dfki.de/ INSIGHT DISCOVERER EXTRACTOR Author(s): TEMIS Purpose: Supported languages: Spanish, Russian, Portuguese, Polish, Italian, Hungarian, Greek, German, French, English, Dutch, Czech. Access: Not free. More information at http://www.temis-group.com/

  48. Named Entity Recognizers (2) LINGPIPE Author(s): Bob Carpenter, Breck Baldwin, Alias-I Purpose: Supported languages: Unicode, Spanish, German, French, English, Dutch. Access: Free at http://www.alias-i.com/lingpipe/ YAMCHA Author(s): Taku Kudo Purpose: YamCha is a generic, customizable, and open source text chunker oriented towards many NLP tasks, such as POS tagging, Named Entity Recognition, base NP chunking, and text chunking. YamCha uses Support Vector Machines (SVMs), first introduced by Vapnik in 1995, and is the system that performed best in the CoNLL-2000 shared task on chunking and base NP chunking. Platforms: Linux, Windows Access: Free at http://www2.chasen.org/~taku/software/yamcha/

  49. Automatic term extraction • Terms = linguistic labels of concepts • Concepts = units of thought (vague definition): if a term represents a unit of thought, its appearance in textual data has to be statistically significant, otherwise, the “unit” nature of the concept the term represents is in question. • Label: different labels can be used for the same concept, and the same label can be used for different concepts.

  50. Automatic term extraction • Lexico-syntactic approaches use lexical and syntactic patterns: • domain-specific prefixes and suffixes (e.g. formaldehyde) • part-of-speech sequences (AN; NN; AAN; ANN; NAN; NNN; NPN) (How about ((A|N)+|((A|N)*(N|P)?)(A|N)*)N) • cue word or phrase (immediate left/right contexts) • Statistical approaches use different statistical measures: • frequency, relative frequency, tf.idf etc. (for the whole unit) • mutual information; t-score; z-score; etc. (collocation measurement) • C-value: combines both internal and external statistical measures.
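Of the measures listed, tf.idf is the simplest to sketch (documents as token lists; a real term extractor would score multi-word candidate phrases rather than single tokens):

```python
import math

def tfidf(term, doc, corpus):
    """Score = (frequency of term in doc / doc length) * log(N / df),
    where df is the number of corpus documents containing the term."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df) if df else 0.0

docs = [["formaldehyde", "level"], ["the", "level"], ["the", "cat"]]
# "formaldehyde" occurs in only one document, so it scores higher
# than the more widespread "level" in the first document.
```

The idf factor is what rewards terms concentrated in few documents, the statistical signature of the domain-specific concepts term extraction is after.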
