This presentation discusses the integration of heterogeneous databases without common domains by leveraging queries based on textual similarity, and relates that work to embodied cognition and knowledge in machine learning and language technologies.
Integration of Heterogeneous Databases without Common Domains Using Queries Based on Textual Similarity: Embodied Cognition and Knowledge
William W. Cohen
Machine Learning Dept. and Language Technologies Inst., School of Computer Science, Carnegie Mellon University
What was that paper, and who is this guy talking?
• Machine Learning
• Human languages: NLP, IR
• Representation languages: DBs, KR
• WHIRL: Word-Based Heterogeneous Information Representation Language, at the intersection of the three
History
• 1982/1984: Ehud Shapiro’s thesis
• MIS: learning logic programs as debugging an empty Prolog program
• The thesis contained 17 figures and a 25-page appendix that together formed a full implementation of MIS in Prolog
• Incredibly elegant work
• “Computer science has a great advantage over other experimental sciences: the world we investigate is, to a large extent, our own creation, and we are the ones to determine if it is simple or messy.”
History
• Grad school in AI at Rutgers
• MTS (Member of Technical Staff) at AT&T Bell Labs, in a group doing KR, DB, learning, information retrieval, …
• My work: learning logical (description-logic-like, Prolog-like, rule-based) representations that model large noisy real-world datasets.
History
• AT&T Bell Labs becomes AT&T Labs Research
• The web takes off
• as predicted by Vinge and Gibson
• IR folks start looking at retrieval and question-answering with the Web
• Alon Halevy starts the Information Manifold project to integrate data on the web
• VLDB 2006 10-year Best Paper Award for the 1996 paper on IM
• I started thinking about the same problem in a different way….
History: WHIRL motivation 1
• As the world of computer science gets richer and more complex, computer science can no longer limit itself to studying “our own creation”.
• Tension exists between
• Elegant theories of representation
• The not-so-elegant real world that is being represented
History: WHIRL motivation 1 • The beauty of the real world is its complexity….
History: integration by mediation • Mediator translates between the knowledge in multiple separate KBs • Each KB is a separate “symbol system” • No formal connection between them except via the mediator
WHIRL Motivation 2: Web KBs are embodied
• WHIRL idea: exploit linguistic properties of the HTML “veneer” of web-accessible DBs
• Key tool: TFIDF similarity
Query Q: SELECT R.a,S.a,S.b,T.b FROM R,S,T WHERE R.a=S.a and S.b=T.b
• Link items as needed by Q
• Strongest links: those agreeable to most users
• Weaker links: those agreeable to some users
• … even weaker links
WHIRL approach:
Query Q: SELECT R.a,S.a,S.b,T.b FROM R,S,T WHERE R.a~S.a and S.b~T.b (~ means TFIDF-similar)
• Link items as needed by Q
• Incrementally produce a ranked list of possible links, with “best matches” first
• The user (or downstream process) decides how much of the list to generate and examine
WHIRL queries
• Assume two relations:
• review(movieTitle, reviewText): archive of reviews
• listing(theatre, movieTitle, showTimes, …): now showing
WHIRL queries
• “Find reviews of sci-fi comedies” [movie domain]:
SELECT * FROM review as r WHERE r.text~’sci fi comedy’
(like standard ranked retrieval for “sci-fi comedy”)
• “Where is [that sci-fi comedy] playing?”:
SELECT * FROM review as r, listing as s WHERE r.title~s.title and r.text~’sci fi comedy’
(best answers: titles are similar to each other – e.g., “Hitchhiker’s Guide to the Galaxy” and “The Hitchhiker’s Guide to the Galaxy, 2005” – and the review text is similar to “sci-fi comedy”)
WHIRL queries
• Similarity is based on TFIDF: rare words are most important
• (e.g., years are common in the review archive, so they have low weight)
• Search for high-ranking answers uses inverted indices…
• It is easy to find the (few) items that match on “important” terms
• Search for strong matches can prune “unimportant” terms
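To make the ranked similarity join concrete, here is a minimal Python sketch of a TFIDF soft join over an inverted index. It is an illustration under simplifying assumptions (whitespace tokens, log TF, smoothed IDF, scoring every pair that shares a term), not the actual WHIRL implementation, which uses an A*-style search to produce only the top-ranked answers.

import math
from collections import Counter, defaultdict

def unit_tfidf(doc, df, n):
    # unit-length TFIDF vector for one token list (log TF, smoothed IDF)
    tf = Counter(doc)
    v = {t: (1 + math.log(c)) * math.log(1 + n / df[t]) for t, c in tf.items()}
    norm = math.sqrt(sum(w * w for w in v.values())) or 1.0
    return {t: w / norm for t, w in v.items()}

def soft_join(left, right):
    # rank (i, j) pairs by TFIDF cosine similarity, best matches first
    corpus = left + right
    n, df = len(corpus), Counter(t for d in corpus for t in set(d))
    index = defaultdict(list)              # inverted index: term -> [(j, weight)]
    for j, d in enumerate(right):
        for t, w in unit_tfidf(d, df, n).items():
            index[t].append((j, w))
    scores = defaultdict(float)            # only pairs sharing a term are scored
    for i, d in enumerate(left):
        for t, w in unit_tfidf(d, df, n).items():
            for j, w2 in index[t]:
                scores[(i, j)] += w * w2
    return sorted(scores.items(), key=lambda kv: -kv[1])

reviews  = ["hitchhiker s guide to the galaxy".split()]
listings = ["the hitchhiker s guide to the galaxy 2005".split(),
            "war of the worlds 2005".split()]
print(soft_join(reviews, listings))        # the Hitchhiker's listing ranks first

Because the rare terms carry nearly all the weight, the inverted index touches few candidate pairs, which is what makes best-first generation of answers over large relations feasible.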
After WHIRL
• Efficient text joins
• On-the-fly, best-effort, imprecise integration
• Interactions between information extraction quality and results of queries on extracted data
• Keyword search on databases
• Use of statistics on text corpora to build intelligent “embodied” systems
• Turney: solving SAT analogies with PMI over word pairs
• Mitchell & Just: predicting fMRI brain images resulting from reading a common noun (“hammer”) from co-occurrence information between nouns and verbs
Recent work: non-textual similarity
[Figure: a graph linking the strings “Christos Faloutsos, CMU”, “William W. Cohen, CMU”, “Dr. W. W. Cohen”, “George H. W. Bush”, and “George W. Bush” through shared tokens such as “cohen”, “cmu”, “william”, “w”, and “dr”]
Recent Work
• Personalized PageRank aka Random Walk with Restart (RWR):
• a similarity measure for nodes in a graph, analogous to TFIDF for text in a WHIRL database
• a natural extension of PageRank
• amenable to learning parameters of the walk (gradient search, with various optimization metrics): Toutanova, Manning & Ng, ICML 2004; Nie et al., WWW 2005; Xi et al., SIGIR 2005
• various speedup techniques exist
• Queries: given a type t* and a node x, find nodes y such that T(y)=t* and y~x
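For reference, here is a minimal power-iteration sketch of Random Walk with Restart in Python. The toy graph, restart probability, and iteration count are illustrative assumptions; the cited papers additionally learn parameters of the walk, which this sketch omits.

def rwr(graph, start, alpha=0.15, iters=50):
    # graph: node -> list of neighbors; returns node -> similarity to `start`,
    # i.e., the mass of a walk that restarts at `start` with probability alpha.
    # (Dangling nodes simply lose mass in this simplified sketch.)
    p = {start: 1.0}
    for _ in range(iters):
        nxt = {start: alpha}                   # restart mass
        for node, mass in p.items():
            for nb in graph.get(node, []):     # spread the rest along out-edges
                nxt[nb] = nxt.get(nb, 0.0) + (1 - alpha) * mass / len(graph[node])
        p = nxt
    return p

# “given type t* and node x, find y with T(y)=t* and y~x”:
# run rwr from x and keep the top-scoring nodes whose type is t*
g = {"x": ["a", "b"], "a": ["x", "y"], "b": ["x"], "y": ["a"]}
print(sorted(rwr(g, "x").items(), key=lambda kv: -kv[1]))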
Learning to Search Email
Einat Minkov, CMU; Andrew Ng, Stanford [SIGIR 2006, CEAS 2006, WebKDD/SNA 2007] (CALO)
[Figure: an email graph with edges such as “term in subject” (graph, proposal, CMU) and “sent to” (einat@cs.cmu.edu), plus dates (6/17/07, 6/18/07) and the person “William”]
Tasks that are like similarity queries
• Person name disambiguation: [term “andy”, file msgId] → “person”
• Threading: what are the adjacent messages in this thread? (a proxy for finding “more messages like this one”): [file msgId] → “file”
• Alias finding: what are the email-addresses of Jason? [term “Jason”] → “email-address”
• Meeting attendees finder: which email-addresses (persons) should I notify about this meeting? [meeting mtgId] → “email-address”
Results on one task: person name disambiguation
[Figure: results on the “management game” corpus]
Results on several tasks (MAP)
[Figure: MAP results for name disambiguation, threading, and alias finding across several corpora and methods]
Set Expansion using the Web
Richard Wang, CMU
• Fetcher: download web pages from the Web
• Extractor: learn wrappers from web pages
• Ranker: rank entities extracted by wrappers
• Example: from the seeds Canon, Nikon, Olympus, the system expands to Pentax, Sony, Kodak, Minolta, Panasonic, Casio, Leica, Fuji, Samsung, …
The Extractor
• Learn wrappers from web documents and seeds on the fly
• Utilize semi-structured documents
• Wrappers are defined at the character level
• No tokenization required; thus language-independent
• However, very specific; thus page-dependent
• Wrappers derived from document d are applied to d only
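The sketch below illustrates this idea in Python under simplifying assumptions (a single occurrence per seed, a fixed context window); it is not SEAL’s actual extractor. It takes the longest left and right character contexts shared by the seed occurrences in one document, then extracts everything those contexts bracket in that same document.

import re

def common_suffix(strings):
    s = strings[0]
    while s and not all(x.endswith(s) for x in strings):
        s = s[1:]
    return s

def common_prefix(strings):
    s = strings[0]
    while s and not all(x.startswith(s) for x in strings):
        s = s[:-1]
    return s

def learn_and_apply(doc, seeds, ctx=40):
    lefts, rights = [], []
    for seed in seeds:
        i = doc.find(seed)
        if i < 0:
            return []                      # every seed must occur in this doc
        lefts.append(doc[max(0, i - ctx):i])
        rights.append(doc[i + len(seed):i + len(seed) + ctx])
    left = common_suffix(lefts)            # chars immediately left of each seed
    right = common_prefix(rights)          # chars immediately right of each seed
    if not left or not right:
        return []
    # the learned wrapper is applied only to the document it was derived from
    return re.findall(re.escape(left) + r"(.*?)(?=" + re.escape(right) + r")", doc)

doc = "<li>canon</li><li>nikon</li><li>olympus</li><li>pentax</li><li>sony</li>"
print(learn_and_apply(doc, ["canon", "nikon"]))
# -> ['canon', 'nikon', 'olympus', 'pentax']  ('sony' lacks the trailing context)

No tokenizer is involved, so the same code runs unchanged on Chinese or Japanese pages; the price is that the learned contexts are so specific that the wrapper is useless outside its own page.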
Ranking Extractions
• A graph consists of a fixed set of…
• Node types: {seeds, document, wrapper, mention}
• Labeled directed edges: {find, derive, extract}
• Each edge asserts that a binary relation r holds
• Each edge has an inverse relation r^-1 (so the graph is cyclic)
[Figure: the seeds “ford”, “nissan”, “toyota” find documents such as northpointcars.com and curryauto.com; each document derives wrappers (#1-#4); wrappers extract mentions such as “honda” 26.1%, “acura” 34.6%, “chevrolet” 22.5%, “volvo chicago” 8.4%, “bmw pittsburgh” 8.4%]
Minkov et al. Contextual Search and Name Disambiguation in Email using Graphs. SIGIR 2006
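A minimal sketch of one way to represent such a graph (node names and edges here just echo the figure and are illustrative): every edge is stored with its inverse so walks can move in both directions, and ranking the extracted mentions then reduces to a random walk with restart from the seed node, as in the RWR sketch earlier.

from collections import defaultdict

graph = defaultdict(list)                  # node -> [(relation, neighbor)]

def add_edge(rel, src, dst):
    graph[src].append((rel, dst))
    graph[dst].append((rel + "^-1", src))  # inverse relation makes the graph cyclic

# node names carry their type: seeds, document (doc), wrapper, mention
add_edge("find",    "seeds:ford nissan toyota", "doc:northpointcars.com")
add_edge("find",    "seeds:ford nissan toyota", "doc:curryauto.com")
add_edge("derive",  "doc:northpointcars.com",   "wrapper:#1")
add_edge("derive",  "doc:curryauto.com",        "wrapper:#2")
add_edge("extract", "wrapper:#1",               "mention:honda")
add_edge("extract", "wrapper:#1",               "mention:acura")
add_edge("extract", "wrapper:#2",               "mention:chevrolet")

# a walk from the seed node reaches candidate mentions in three hops
for rel, node in graph["seeds:ford nissan toyota"]:
    print(rel, node)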
Evaluation Method
• Mean Average Precision (MAP): commonly used for evaluating ranked lists in IR
• Contains both recall- and precision-oriented aspects
• Sensitive to the entire ranking
• MAP is the mean, over ranked lists, of the average precision:
AvgPrec(L) = (1 / #TrueEntities) × Σ_r Prec(r) · fresh(r)
where L is a ranked list of extracted mentions, r is a rank, Prec(r) is the precision at rank r, #TrueEntities is the total number of true entities in the dataset, and fresh(r) = 1 iff (a) the extracted mention at rank r matches a true mention and (b) no extracted mention at a rank less than r is of the same entity as the one at r
• Evaluation: average over 36 datasets in three languages (Chinese, Japanese, English), and over several 2- or 3-seed queries for each dataset
• MAP performance: high 80s - mid 90s; Google Sets: MAP in the 40s, English only
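For concreteness, here is a minimal Python sketch of this metric; the example entities are made up. Following rule (b), a rank counts only for the first extracted mention of each true entity.

def average_precision(ranked_entities, num_true_entities):
    # ranked_entities: true-entity id at each rank, or None for a false mention
    seen, hits, ap = set(), 0, 0.0
    for r, ent in enumerate(ranked_entities, start=1):
        if ent is not None and ent not in seen:   # rules (a) and (b) both hold
            seen.add(ent)
            hits += 1
            ap += hits / r                        # Prec(r) at each fresh hit
    return ap / num_true_entities

def mean_average_precision(runs):
    # runs: one (ranked_entities, num_true_entities) pair per ranked list
    return sum(average_precision(l, n) for l, n in runs) / len(runs)

# ranks 1, 2, 4 are fresh true entities; 4 true entities exist in the set:
print(average_precision(["canon", "nikon", "canon", "sony"], 4))
# = (1/1 + 2/2 + 3/4) / 4 = 0.6875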
Top three mentions are the seeds. Try it out at http://rcwang.com/seal
Relational Set Expansion
[Figure: seeds and their expansion]
Future?
• Machine Learning
• Human languages: NLP, IR
• Representation languages: DBs, KR
• ? (what sits at the intersection next?)