
Using Machine Learning to Discover and Understand Structured Data


Presentation Transcript


  1. Using Machine Learning to Discover and Understand Structured Data William W. Cohen Machine Learning Dept. and Language Technologies Inst. School of Computer Science Carnegie Mellon University

  2. Outline • Information integration: • Some history • The problem, the economics, and the economic problem • “Soft” information integration • Concrete uses of “soft” integration • Classification • Collaborative filtering • Set expansion

  3. When are two entities the same? [1925] One lab under many names: Bell Labs; Bell Telephone Labs; AT&T Bell Labs; A&T Labs; AT&T Labs—Research; AT&T Labs Research, Shannon Laboratory; Shannon Labs; Bell Labs Innovations; Lucent Technologies/Bell Labs Innovations. History of Innovation: "From 1925 to today, AT&T has attracted some of the world's greatest scientists, engineers and developers…" [www.research.att.com] Bell Labs Facts: "Bell Laboratories, the research and development arm of Lucent Technologies, has been operating continuously since 1925…" [bell-labs.com]

  4. In the once upon a time days of the First Age of Magic, the prudent sorcerer regarded his own true name as his most valued possession but also the greatest threat to his continued good health, for--the stories go--once an enemy, even a weak unskilled enemy, learned the sorcerer's true name, then routine and widely known spells could destroy or enslave even the most powerful. As times passed, and we graduated to the Age of Reason and thence to the first and second industrial revolutions, such notions were discredited. Now it seems that the Wheel has turned full circle (even if there never really was a First Age) and we are back to worrying about true names again: The first hint Mr. Slippery had that his own True Name might be known--and, for that matter, known to the Great Enemy--came with the appearance of two black Lincolns humming up the long dirt driveway ... Roger Pollack was in his garden weeding, had been there nearly the whole morning.... Four heavy-set men and a hard-looking female piled out, started purposefully across his well-tended cabbage patch.… This had been, of course, Roger Pollack's great fear. They had discovered Mr. Slippery's True Name and it was Roger Andrew Pollack TIN/SSAN 0959-34-2861.

  5. Deduction via co-operation • Economic issues: Who pays for integration? Who tracks errors & inconsistencies? Who fixes bugs? Who pushes for clarity in underlying concepts and object identifiers? • Standards approach → publishers are responsible → publishers pay • Mediator approach: a 3rd party does the work, agnostic as to cost [Diagram: a User queries an Integrated KB built from Site1/KB1, Site2/KB2, and Site3/KB3 via a Standard Terminology]

  6. Traditional approach: Linkage Queries. Uncertainty about what to link must be decided by the integration system, not the end user.

  7. WHIRL approach: Query Q: SELECT R.a,S.a,S.b,T.b FROM R,S,T WHERE R.a=S.a and S.b=T.b. Link items as needed by Q. Strongest links: those agreeable to most users. Weaker links: those agreeable to some users. Even weaker links…

  8. WHIRL approach: Query Q: SELECT R.a,S.a,S.b,T.b FROM R,S,T WHERE R.a~S.a and S.b~T.b (~ means TFIDF-similar). Link items as needed by Q: incrementally produce a ranked list of possible links, with "best matches" first. The user (or a downstream process) decides how much of the list to generate and examine.
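
A minimal sketch of such a soft join, assuming scikit-learn for the TFIDF vectors (WHIRL has its own TFIDF and search machinery; the column values below are invented): every R.a/S.a pair is scored by TFIDF cosine similarity and candidate links are emitted best-first, so the caller decides how far down the ranked list to go.

```python
# Sketch of a WHIRL-style soft join: rank candidate links between two
# string-valued columns by TFIDF cosine similarity, best matches first.
# Assumes scikit-learn is available; the column contents are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

R_a = ["AT&T Bell Labs", "Lucent Technologies/Bell Labs Innovations"]
S_a = ["Bell Telephone Labs", "AT&T Labs Research, Shannon Laboratory",
       "Bell Labs Innovations"]

# Fit one vocabulary over both columns so the term weights are comparable.
vec = TfidfVectorizer().fit(R_a + S_a)
sims = cosine_similarity(vec.transform(R_a), vec.transform(S_a))

# Emit candidate links in decreasing order of similarity; the user (or a
# downstream process) decides how many of these "soft" links to keep.
candidates = sorted(
    ((sims[i, j], r, s) for i, r in enumerate(R_a) for j, s in enumerate(S_a)),
    reverse=True)
for score, r, s in candidates:
    print(f"{score:.3f}  {r!r} ~ {s!r}")
```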

  9. WHIRL queries • Assume two relations: review(movieTitle, reviewText), an archive of reviews, and listing(theatre, movieTitle, showTimes, …), the movies now showing.

  10. WHIRL queries • "Find reviews of sci-fi comedies" [movie domain]: FROM review SELECT * WHERE r.text~'sci fi comedy' (like standard ranked retrieval for "sci-fi comedy") • "Where is that sci-fi comedy playing?": FROM review as r, listing as s SELECT * WHERE r.title~s.title and r.text~'sci fi comedy' (best answers: titles that are similar to each other, e.g. "Hitchhiker's Guide to the Galaxy" and "The Hitchhiker's Guide to the Galaxy, 2005", and review text similar to "sci-fi comedy")
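
A hedged sketch of how the second, conjunctive query could be scored (illustrative only, not WHIRL's actual evaluation strategy; the sample rows and the scikit-learn dependency are assumptions): each ~ condition contributes a TFIDF cosine score and the conjuncts are combined multiplicatively, so answers that softly satisfy both conditions rank highest.

```python
# Illustrative scoring of the conjunctive query
#   WHERE r.title~s.title and r.text~'sci fi comedy'
# Each ~ condition contributes a TFIDF cosine score; the conjuncts are
# combined multiplicatively and answers are returned best-first.
# (A sketch only; the sample rows are invented.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = [("Hitchhiker's Guide to the Galaxy",
            "a giddy sci-fi comedy adapted from Douglas Adams")]
listings = [("Squirrel Hill Theatre", "The Hitchhiker's Guide to the Galaxy, 2005"),
            ("Manor Theatre", "Cinderella Man")]

def sim(a, b, vec):
    return cosine_similarity(vec.transform([a]), vec.transform([b]))[0, 0]

texts = [t for t, _ in reviews] + [x for _, x in reviews] + \
        [m for _, m in listings] + ["sci fi comedy"]
vec = TfidfVectorizer().fit(texts)

answers = []
for r_title, r_text in reviews:
    for theatre, m_title in listings:
        score = sim(r_title, m_title, vec) * sim(r_text, "sci fi comedy", vec)
        answers.append((score, theatre, m_title))
for score, theatre, title in sorted(answers, reverse=True):
    print(f"{score:.3f}  {title} at {theatre}")
```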

  11. WHIRL queries • Similarity is based on TFIDF: rare words are most important (years are common in the review archive, so they get low weight). • Search for high-ranking answers uses inverted indices: it is easy to find the (few) items that match on "important" terms, and the search for strong matches can prune "unimportant" terms.
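
A small sketch of both points, with hand-rolled IDF weights so the weighting is visible (the toy corpus and the simplified scoring are assumptions): a common token such as a year gets low IDF, and an inverted index over only the query's highest-weight terms finds the few candidate matches without scanning everything else.

```python
# Sketch: TFIDF-style term weighting plus an inverted index.
# Rare terms get high IDF weight; retrieval only touches the postings of
# the query's most important terms. (Toy corpus; weighting simplified.)
import math
from collections import defaultdict

docs = {
    1: "sci fi comedy released 2005",
    2: "romantic comedy released 2005",
    3: "gritty crime drama released 2005",
    4: "sci fi thriller released 2004",
}

index = defaultdict(set)            # term -> set of doc ids (postings list)
for doc_id, text in docs.items():
    for term in set(text.split()):
        index[term].add(doc_id)

N = len(docs)
idf = {t: math.log(N / len(postings)) for t, postings in index.items()}
print(idf["2005"], idf["sci"])      # "2005" is common, so its weight is low

def retrieve(query, top_terms=2):
    # Keep only the query's highest-IDF ("important") terms, then union
    # their postings; everything else is pruned without being scored.
    terms = sorted(query.split(), key=lambda t: idf.get(t, 0.0), reverse=True)
    candidates = set()
    for t in terms[:top_terms]:
        candidates |= index.get(t, set())
    # Score the surviving candidates by the summed IDF of matching terms.
    return sorted(candidates,
                  key=lambda d: -sum(idf[t] for t in query.split()
                                     if d in index.get(t, set())))

print(retrieve("sci fi comedy 2005"))
```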

  12. Outline • Information integration: • Some history • The problem, the economics, and the economic problem • “Soft” information integration • Concrete uses of “soft” integration • Classification • Collaborative filtering • Set expansion

  13. Outline • Information integration: • Some history • The problem, the economics, and the economic problem • “Soft” information integration • Concrete uses of “soft” integration • Classification • Collaborative filtering • Set expansion

  14. Outline • Information integration: • Some history • The problem, the economics, and the economic problem • “Soft” information integration • Concrete uses of “soft” integration • Classification • Collaborative filtering • Set expansion: using generalized notion of similarity

  15. Recent work: non-textual similarity [Diagram: a graph linking the name strings "Christos Faloutsos, CMU", "William W. Cohen, CMU", "Dr. W. W. Cohen", "George H. W. Bush", and "George W. Bush" through shared token nodes such as cohen, cmu, william, w, and dr]

  16. Recent work • Personalized PageRank, aka Random Walk with Restart: • a similarity measure for nodes in a graph, analogous to TFIDF for text in a WHIRL database • a natural extension of PageRank • amenable to learning the parameters of the walk (gradient search, with various optimization metrics): Toutanova, Manning & Ng, ICML 2004; Nie et al., WWW 2005; Xi et al., SIGIR 2005 • various speedup techniques exist • queries: given a type t* and a node x, find y such that T(y)=t* and y~x
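
A compact sketch of random walk with restart by power iteration, run on a toy version of the name/token graph from the previous slide (the graph contents, restart probability, and iteration count are illustrative choices): the walker either restarts at the query node or steps to a uniformly chosen neighbor, and the resulting scores rank every node by similarity to the query node.

```python
# Random Walk with Restart (personalized PageRank) by power iteration.
# Toy undirected graph: name strings linked to the tokens they contain.
graph = {
    "William W. Cohen, CMU":   ["william", "w", "cohen", "cmu"],
    "Dr. W. W. Cohen":         ["dr", "w", "cohen"],
    "Christos Faloutsos, CMU": ["christos", "faloutsos", "cmu"],
    "George W. Bush":          ["george", "w", "bush"],
}
# Make the graph undirected: tokens also link back to the names.
neighbors = {}
for name, toks in graph.items():
    neighbors.setdefault(name, []).extend(toks)
    for t in toks:
        neighbors.setdefault(t, []).append(name)

def rwr(start, restart=0.3, iters=50):
    scores = {n: 0.0 for n in neighbors}
    scores[start] = 1.0
    for _ in range(iters):
        nxt = {n: 0.0 for n in neighbors}
        for node, mass in scores.items():
            nxt[start] += restart * mass              # restart at the query node
            share = (1 - restart) * mass / len(neighbors[node])
            for nb in neighbors[node]:                # spread the rest uniformly
                nxt[nb] += share
        scores = nxt
    return scores

# Query: given node x = "Dr. W. W. Cohen" and target type t* = name strings,
# rank the other name nodes by their walk score.
scores = rwr("Dr. W. W. Cohen")
names = [n for n in graph if n != "Dr. W. W. Cohen"]
for n in sorted(names, key=scores.get, reverse=True):
    print(f"{scores[n]:.4f}  {n}")
```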

  17. Learning to Search Email Einat Minkov, CMU; Andrew Ng, Stanford [SIGIR 2006, CEAS 2006, WebKDD/SNA 2007] [Diagram: an email graph from the CALO corpus, with nodes for messages, terms (e.g. "graph", "proposal", "CMU", "William"), dates (6/17/07, 6/18/07), and email addresses (einat@cs.cmu.edu), connected by typed edges such as Term In Subject and Sent To]

  18. Tasks that are like similarity queries • Person name disambiguation: query [term "andy", file msgId], target type "person" • Threading: what are the adjacent messages in this thread? (a proxy for finding "more messages like this one"); query [file msgId], target type "file" • Alias finding: what are the email-addresses of Jason? Query [term "Jason"], target type "email-address" • Meeting attendees finder: which email-addresses (persons) should I notify about this meeting? Query [meeting mtgId], target type "email-address"
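
One way to read these specifications, sketched below on an invented toy email graph: each node carries a type (file, term, email-address), a query is a set of source nodes plus a target type, and the answer is the highest-scoring nodes of that type under a graph walk. The few-step spreading of probability mass used here is only a stand-in for the full random-walk-with-restart scoring.

```python
# Sketch: email tasks as typed similarity queries over a toy email graph.
# Nodes are (type, id) pairs; a query = (source nodes, target type).
# The messages, addresses, and terms below are invented examples.
from collections import defaultdict

edges = [  # undirected edges drawn from message metadata
    (("file", "msg1"), ("email-address", "jason.ernst@cs.cmu.edu")),  # sent-from
    (("file", "msg1"), ("term", "jason")),                            # term in body
    (("file", "msg1"), ("term", "proposal")),
    (("file", "msg2"), ("email-address", "jernst@gmail.com")),        # sent-from
    (("file", "msg2"), ("term", "jason")),
    (("file", "msg2"), ("term", "meeting")),
    (("file", "msg2"), ("email-address", "wcohen@cs.cmu.edu")),       # sent-to
    (("file", "msg3"), ("email-address", "jernst@gmail.com")),        # sent-from
    (("file", "msg3"), ("term", "jason")),
]
adj = defaultdict(list)
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def walk_scores(sources, steps=3):
    # Spread probability mass from the source nodes for a few steps and
    # accumulate it; a stand-in for random-walk-with-restart similarity.
    current = {s: 1.0 / len(sources) for s in sources}
    total = defaultdict(float)
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, mass in current.items():
            for nb in adj[node]:
                nxt[nb] += mass / len(adj[node])
        current = nxt
        for n, m in current.items():
            total[n] += m
    return total

def answer(sources, target_type, k=3):
    scores = walk_scores(sources)
    hits = [(s, n) for n, s in scores.items() if n[0] == target_type]
    return sorted(hits, reverse=True)[:k]

# Alias finding: "what are the email-addresses of Jason?"
print(answer([("term", "jason")], "email-address"))
```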

  19. Learning to search better [Diagram: a task T (query class) with training queries a, b, …, q, each paired with its relevant answers; a graph walk produces a ranked list of nodes (node rank 1, 2, 3, … 50) for each query, and these ranked lists plus the relevant answers are the input to learning]

  20. Learning • Node re-ordering (train task): graph walk → feature generation → learn re-ranker → re-ranking function

  21. Learning approach: node re-ordering • Train task: graph walk → feature generation → learn re-ranker → re-ranking function • Test task: graph walk → feature generation → score by re-ranking function • Re-ranking learners: Boosting; Voted Perceptron; RankSVM; Perceptron Committees; … [Joachims, KDD 2002; Elsas et al., WSDM 2008; Collins & Koo, CL 2005; Collins, ACL 2002]
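
A hedged sketch of the node re-ordering idea (the features, candidates, and labels below are invented, and the simple pairwise perceptron stands in for the voted perceptron / RankSVM learners cited above): each candidate node from the walk gets a feature vector, a re-ranker is trained so that relevant nodes outscore irrelevant ones, and at test time the walk's output is re-ordered by the learned score.

```python
# Sketch: re-ranking graph-walk output with a pairwise perceptron.
# Features, candidates, and labels are invented for illustration; the
# cited work uses richer path-based features and stronger learners.
from collections import defaultdict

def dot(w, f):
    return sum(w[k] * v for k, v in f.items())

def train_reranker(queries, epochs=10, lr=1.0):
    # queries: list of (candidates, relevant), where candidates maps
    # node -> feature dict and relevant is the set of correct nodes.
    w = defaultdict(float)
    for _ in range(epochs):
        for candidates, relevant in queries:
            for good in relevant:
                for node, feats in candidates.items():
                    if node in relevant:
                        continue
                    # Perceptron-style update whenever a relevant node
                    # fails to outscore an irrelevant one.
                    if dot(w, candidates[good]) <= dot(w, feats):
                        for k, v in candidates[good].items():
                            w[k] += lr * v
                        for k, v in feats.items():
                            w[k] -= lr * v
    return w

def rerank(candidates, w):
    return sorted(candidates, key=lambda n: dot(w, candidates[n]), reverse=True)

# One toy training query: the walk ranked "addr_b" above the correct "addr_a".
train = [({
    "addr_a": {"walk_score": 0.10, "edge:sent-from": 1.0},
    "addr_b": {"walk_score": 0.20, "edge:sent-to": 1.0},
}, {"addr_a"})]
w = train_reranker(train)
print(rerank(train[0][0], w))   # re-ordered with "addr_a" first
```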

  22. Learning approaches • Edge weight tuning: graph walk → weight update → Theta*
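
A rough sketch of the edge-weight-tuning alternative: instead of re-ranking the walk's output, adjust the per-edge-type weights Theta that govern the walk itself. The error-driven update below is a simplification of the gradient procedures cited on slide 16; the graph, update rule, and step size are all illustrative assumptions.

```python
# Sketch: error-driven tuning of per-edge-type weights Theta for a graph
# walk. The graph, update rule, and step size are illustrative only.
import math
from collections import defaultdict

# Edges labeled with types; the weights Theta are per edge type.
edges = {  # node -> list of (edge_type, neighbor)
    "q": [("sent-to", "addr_good"), ("mentions", "addr_bad")],
}
theta = defaultdict(lambda: 1.0)

def step_probs(node):
    # Transition probabilities proportional to exp(theta[edge_type]).
    outgoing = edges.get(node, [])
    z = sum(math.exp(theta[t]) for t, _ in outgoing)
    return {nb: math.exp(theta[t]) / z for t, nb in outgoing}

def update(node, relevant, lr=0.5):
    # Nudge the weights: edge types reaching relevant neighbors go up,
    # the rest go down, scaled by the current transition probability.
    probs = step_probs(node)
    for t, nb in edges[node]:
        sign = 1.0 if nb in relevant else -1.0
        theta[t] += lr * sign * (1.0 - probs[nb])

for _ in range(5):
    update("q", {"addr_good"})

print(dict(theta))          # the "sent-to" weight rises, "mentions" falls
print(step_probs("q"))      # the walk now prefers the relevant neighbor
```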
