LINDEN: Linking Named Entities with Knowledge Base via Semantic Knowledge
Wei Shen†, Jianyong Wang†, Ping Luo‡, Min Wang‡
†Tsinghua University, Beijing, China; ‡HP Labs China, Beijing, China
WWW 2012
Presented by Tom Chao Zhou, July 17, 2012
Outline Motivation Problem Definition Previous Methods LINDEN Framework Experiments Conclusion
Motivation
• Many large-scale knowledge bases have emerged
  • DBpedia, YAGO, Freebase, etc.
(Screenshot: www.freebase.com)
Motivation
• Many large-scale knowledge bases have emerged
  • DBpedia, YAGO, Freebase, etc.
• As the world evolves
  • New facts come into existence
  • Digitally expressed on the Web
• Maintaining and growing the existing knowledge bases
  • Integrating the extracted facts with the knowledge base
• Challenges
  • Name variations: "National Basketball Association" vs. "NBA"; "New York City" vs. "Big Apple"
  • Entity ambiguity: "Michael Jordan" (the NBA player vs. the Berkeley professor)
Outline Motivation Problem Definition Previous Methods LINDEN Framework Experiments Conclusion
Problem Definition
• Entity linking task
  • Input: a textual named entity mention m, already recognized in the unstructured text
  • Output: the corresponding real-world entity e in the knowledge base
  • If no matching entity e for mention m exists in the knowledge base, return NIL for m
Entity Linking Task
• Example: "German Chancellor Angela Merkel and her husband Joachim Sauer went to Ulm, Germany."
• Each mention links to its entity in the knowledge base; a mention whose entity is absent from the knowledge base maps to NIL.
Figure 1: An example of YAGO. Source: From Information to Knowledge: Harvesting Entities and Relationships from Web Sources. PODS '10.
Outline Motivation Problem Definition Previous Methods LINDEN Framework Experiments Conclusion
Previous Methods
• Essential step of entity linking
  • Define a similarity measure between the text around the entity mention and the document associated with the entity
• Bag-of-words model
  • Represent the context as a term vector
  • Measure the co-occurrence statistics of terms
  • Cannot capture semantic knowledge
• Example
  • Text: "Michael Jordan wins NBA championship."
  • Without the semantic knowledge that "NBA" is about basketball, the bag-of-words model cannot work well!
Outline Motivation Problem Definition Previous Methods LINDEN Framework Experiments Conclusion
LINDEN Framework
• Candidate Entity Generation
  • For each named entity mention m
  • Retrieve the set of candidate entities Em
• Named Entity Disambiguation
  • For each candidate entity e ∈ Em
  • Define a scoring measure
  • Rank the candidates in Em
• Unlinkable Mention Prediction
  • For the entity etop with the highest score in Em
  • Validate whether etop is the target entity for mention m
Candidate Entity Generation
• Intuitively, the candidates in Em should have the surface form of m as a name.
• We build a dictionary that contains a vast amount of information about the surface forms of entities
  • Name variations, abbreviations, confusable names, spelling variations, nicknames, etc.
• Leverage four structures of Wikipedia
  • Entity pages
  • Redirect pages
  • Disambiguation pages
  • Hyperlinks in Wikipedia articles
Candidate Entity Generation (Cont’) Table 1: An example of the dictionary • For each mention m • Search it in the field of surface forms • If a hit is found, we add all target entities of that surface form m to the set of candidate entities Em
Named Entity Disambiguation
• Goal: rank the candidate entities according to their scores
• Define four features
  • Feature 1: Link probability, based on the count information in the dictionary
  • Semantic network based features:
  • Feature 2: Semantic associativity, based on the Wikipedia hyperlink structure
  • Feature 3: Semantic similarity, derived from the taxonomy of YAGO
  • Feature 4: Global coherence, the global document-level topical coherence among entities
Link Probability
(Table 1: An example of the dictionary, with LP values such as 0.81 and 0.05)
• Feature 1: link probability LP(e|m) for candidate entity e
  • LP(e|m) = count_m(e) / Σ_{e' ∈ Em} count_m(e')
  • where count_m(e) is the number of links which point to entity e and have the surface form m
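As a runnable sketch with made-up counts (chosen so the values echo the 0.81 and 0.05 shown in Table 1), link probability is just a normalized count:

```python
def link_probability(counts, entity):
    """LP(e|m): fraction of the links with surface form m that point to e.
    `counts` maps each candidate entity to count_m(e)."""
    return counts[entity] / sum(counts.values())

# Hypothetical counts for the surface form "Michael Jordan":
counts = {
    "Michael J. Jordan": 810,            # the NBA player
    "Michael I. Jordan": 50,             # the Berkeley professor
    "Michael Jordan (footballer)": 140,
}
print(round(link_probability(counts, "Michael J. Jordan"), 2))  # 0.81
print(round(link_probability(counts, "Michael I. Jordan"), 2))  # 0.05
```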
Semantic Network Construction
Figure 2: An example of the constructed semantic network
• Recognize all the Wikipedia concepts Γd in the document d
  • with the open-source toolkit Wikipedia-Miner (http://wikipedia-miner.sourceforge.net/index.htm)
• Example
  • "The Chicago Bulls' player Michael Jordan won his first NBA championship in 1991."
  • Set of entity mentions: {Michael Jordan, NBA}
  • Candidate entities:
    • Michael Jordan: {Michael J. Jordan, Michael I. Jordan}
    • NBA: {National Basketball Association, Nepal Basketball Association}
  • Γd: {NBA All-Star Game, David Joel Stern, Charlotte Bobcats, Chicago Bulls}
• Edges are derived from the hyperlink structure of Wikipedia articles and the taxonomy of concepts in YAGO
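The network for the running example can be sketched as a small edge list; this is a simplification of Figure 2 (the real network also carries the Wikipedia hyperlink and YAGO taxonomy information on its edges):

```python
# Mention nodes connect to their candidate entities, and every candidate
# connects to the context concepts recognized in the document (Γd).
mentions = {
    "Michael Jordan": ["Michael J. Jordan", "Michael I. Jordan"],
    "NBA": ["National Basketball Association", "Nepal Basketball Association"],
}
context_concepts = ["NBA All-Star Game", "David Joel Stern",
                    "Charlotte Bobcats", "Chicago Bulls"]

edges = []
for mention, candidates in mentions.items():
    for cand in candidates:
        edges.append((mention, cand))        # mention -> candidate entity
        for concept in context_concepts:
            edges.append((cand, concept))    # candidate -> context concept

print(len(edges))  # 4 mention-candidate edges + 4*4 candidate-concept edges
```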
Semantic Associativity
Figure 2: An example of the constructed semantic network
• Feature 2: semantic associativity SA(e) for each candidate entity e, based on the hyperlink structure of Wikipedia
Semantic Associativity (Cont’) [1] D. Milne and I. H. Witten. An effective, low-cost measure of semantic relatedness obtained from Wikipedia links. In Proceedings of WIKIAI, 2008. • Given two Wikipedia concepts e1and e2 • Wikipedia Link-based Measure (WLM) [1] • Semantic associativity betweenthem • where E1 and E2 are the sets of Wikipedia concepts that hyperlink to e1and e2respectively, and W is the set of all concepts in Wikipedia
Semantic Similarity (k = 2)
• Feature 3: semantic similarity SS(e) for each candidate entity e
  • the average semantic similarity between e and the concepts in Θk
  • where Θk is the set of k context concepts in Γd which have the highest semantic similarity with entity e
Figure 2: An example of the constructed semantic network
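Reading the definition as an average over the top-k context concepts, a sketch looks like this; the pairwise similarity function and its values are stand-ins, not the taxonomy-based measure itself:

```python
def semantic_similarity(candidate, context_concepts, sim, k=2):
    """SS(e): average similarity between candidate e and its k most
    similar context concepts Θk ⊆ Γd (k = 2, as in Figure 2).
    `sim(e, c)` is any pairwise concept similarity, passed in as a function."""
    scores = sorted((sim(candidate, c) for c in context_concepts), reverse=True)
    top_k = scores[:k]
    return sum(top_k) / len(top_k)

# Invented pairwise similarities for the running example:
toy_sim = lambda e, c: {"Chicago Bulls": 0.9, "NBA All-Star Game": 0.7,
                        "Charlotte Bobcats": 0.6, "David Joel Stern": 0.2}[c]
ss = semantic_similarity("Michael J. Jordan",
                         ["Chicago Bulls", "NBA All-Star Game",
                          "Charlotte Bobcats", "David Joel Stern"], toy_sim)
print(round(ss, 2))  # average of the two highest values, 0.9 and 0.7 -> 0.8
```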
Semantic Similarity (Cont’) • Given two Wikipedia concepts e1 and e2 • Assume the sets of their super classes are Φe1 and Φe2 • For each class C1 in the set Φe1 • Assign a target class ε(C1) in another set Φe2 as • Where sim(C1, C2) is the semantic similarity between two classes C1 and C2 • To compute sim(C1, C2) • Adopt the information-theoretic approach introduced in [2] • Where C0 is the lowest common ancestor node for class nodes C1 and C2 in the hierarchy, P(C) is the probability that a randomly selected object belongs to the subtree with the root of C in the taxonomy. [2] D. Lin. An information-theoretic definition of similarity. In Proceedings of ICML, pages 296–304, 1998.
Semantic Similarity (Cont’) Calculate the semantic similarity from one set of classes Φe1to another set of classes Φe2 Define the semantic similarity between Wikipedia concepts e1 and e2
Global Coherence
• Feature 4: global coherence GC(e) for each candidate entity e
  • Measured as the average semantic associativity of candidate entity e to the mapping entities of the other mentions:
  • GC(e) = (1 / (|M| - 1)) · Σ_{m' ∈ M, m' ≠ m} SA(e, e_m')
  • where e_m' is the mapping entity of mention m'
• Since the true mapping entities are unknown, substitute the most likely assigned entity for the mapping entity in Formula 9
  • The most likely assigned entity e'_m' for mention m' is defined as the candidate entity with the maximum link probability in Em'
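A minimal sketch of the substitution: each other mention contributes its highest-link-probability candidate, and GC averages the associativity of e to those entities. The relatedness values here are invented placeholders for the WLM score:

```python
def global_coherence(candidate, other_assignments, relatedness):
    """GC(e): average associativity of candidate e to the most likely
    assigned entities of the OTHER mentions in the document."""
    if not other_assignments:
        return 0.0
    return (sum(relatedness(candidate, e) for e in other_assignments)
            / len(other_assignments))

# Toy relatedness table for the running example (invented numbers):
rel_table = {
    ("Michael J. Jordan", "National Basketball Association"): 0.8,
    ("Michael I. Jordan", "National Basketball Association"): 0.1,
}
rel = lambda a, b: rel_table[(a, b)]

# "NBA" most likely maps to National Basketball Association (highest LP),
# so the basketball player is far more coherent with it than the professor:
print(global_coherence("Michael J. Jordan",
                       ["National Basketball Association"], rel))
print(global_coherence("Michael I. Jordan",
                       ["National Basketball Association"], rel))
```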
Global Coherence (Cont’) Figure 2: An example of the constructed semantic network
Candidates Ranking
• Generate a feature vector Fm(e) for each e ∈ Em
• Calculate Score_m(e) for each candidate e
  • Score_m(e) = w · Fm(e)
  • where w is the weight vector which gives a different weight to each feature element in Fm(e)
• Rank the candidates and pick the top candidate as the predicted mapping entity for mention m
• To learn w, we use a max-margin technique based on the training data set
  • Require Score_m(e*) of the true entity e* to be larger than any other Score_m(e) by a margin
  • Minimize the objective over w and slack variables ξm ≥ 0, subject to the margin constraints
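Once w is learned, scoring and ranking are a dot product followed by an argmax. A sketch with invented weights and feature values (Fm(e) = [LP, SA, SS, GC]); these are not the trained weights from the paper:

```python
# Hypothetical learned weight vector over [LP, SA, SS, GC]:
w = [0.4, 0.2, 0.2, 0.2]

def score(features):
    """Score_m(e) = w . F_m(e)."""
    return sum(wi * fi for wi, fi in zip(w, features))

# Toy feature vectors for the two candidates of "Michael Jordan":
candidates = {
    "Michael J. Jordan": [0.81, 0.9, 0.8, 0.8],
    "Michael I. Jordan": [0.05, 0.1, 0.2, 0.1],
}
top = max(candidates, key=lambda e: score(candidates[e]))
print(top)  # the candidate with the highest Score_m(e)
```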
Unlinkable Mention Prediction
• Predict mention m as an unlinkable mention (NIL)
  • if the size of Em generated in the Candidate Entity Generation module is equal to zero, or
  • if Score_m(etop) is smaller than the learned threshold τ
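The two NIL conditions above can be sketched as a single check; the threshold value 0.5 is a placeholder, not the τ learned in the paper:

```python
def link_or_nil(scored_candidates, tau=0.5):
    """Return the top-scoring entity, or "NIL" if no candidate exists or
    the best score falls below the learned threshold τ (placeholder here)."""
    if not scored_candidates:           # empty E_m from candidate generation
        return "NIL"
    entity, s = max(scored_candidates.items(), key=lambda kv: kv[1])
    return entity if s >= tau else "NIL"

print(link_or_nil({}))                                     # no candidates
print(link_or_nil({"Michael J. Jordan": 0.82}))            # confidently linked
print(link_or_nil({"Nepal Basketball Association": 0.1}))  # below threshold
```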
Outline Motivation Problem Definition Previous Methods LINDEN Framework Experiments Conclusion
Experiment Setup
• Data sets
  • CZ data set: newswire data used by Cucerzan [3]
  • TAC-KBP2009 data set: used in the Knowledge Base Population (KBP) track at the Text Analysis Conference (TAC) 2009
• Parameter learning
  • 10-fold cross validation
[3] S. Cucerzan. Large-Scale Named Entity Disambiguation Based on Wikipedia Data. In Proceedings of EMNLP-CoNLL, pages 708–716, 2007.
Outline Motivation Problem Definition Previous Methods LINDEN Framework Experiments Conclusion
Conclusion
• LINDEN
  • A novel framework to link named entities in text with YAGO
  • Leverages the rich semantic knowledge derived from Wikipedia and the taxonomy of YAGO
  • Significantly outperforms the state-of-the-art methods in terms of accuracy