
Querying Text Databases for Efficient Information Extraction


Presentation Transcript


  1. Querying Text Databases for Efficient Information Extraction. Eugene Agichtein, Luis Gravano. Columbia University

  2. Extracting Structured Information "Buried" in Text Documents
  Example text snippets:
  • "Microsoft's central headquarters in Redmond is home to almost every product group and division."
  • "Brent Barlow, 27, a software analyst and beta-tester at Apple Computer's headquarters in Cupertino, was fired Monday for 'thinking a little too different.'"
  • "Apple's programmers 'think different' on a 'campus' in Cupertino, Cal."
  • "Nike employees 'just do it' at what the company refers to as its 'World Campus,' near Portland, Ore."
  Extracted tuples:
  Organization | Location
  Microsoft | Redmond
  Apple Computer | Cupertino
  Nike | Portland

  3. Information Extraction Applications
  • Over a corporation's customer report or email complaint database: enabling sophisticated querying and analysis
  • Over biomedical literature: identifying drug/condition interactions
  • Over newspaper archives: tracking disease outbreaks, terrorist attacks; intelligence
  Significant progress over the last decade [MUC]

  4. Information Extraction Example: Organizations' Headquarters
  Pipeline: Input Documents → Named-Entity Tagging → Pattern Matching → Output Tuples
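
  A minimal Python sketch of the pattern-matching step. The XML-style entity markup and the single pattern below are illustrative assumptions, not the actual representation used by Proteus, DIPRE, or Snowball:

    import re

    # Assume a named-entity tagger has already wrapped entities in
    # <ORGANIZATION>...</ORGANIZATION> and <LOCATION>...</LOCATION> markup.
    tagged = ("<ORGANIZATION>Microsoft</ORGANIZATION>'s central headquarters in "
              "<LOCATION>Redmond</LOCATION> is home to almost every product group.")

    # One extraction pattern for the Headquarters(Organization, Location) relation.
    pattern = re.compile(
        r"<ORGANIZATION>(?P<org>[^<]+)</ORGANIZATION>'s central headquarters in "
        r"<LOCATION>(?P<loc>[^<]+)</LOCATION>")

    for m in pattern.finditer(tagged):
        print((m.group("org"), m.group("loc")))  # ('Microsoft', 'Redmond')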

  5. Goal: Extract All Tuples of a Relation from a Document Database
  • One approach: feed every document to the information extraction system
  • Problem: efficiency!
  Diagram: all documents → Information Extraction System → Extracted Tuples

  6. Information Extraction is Expensive
  • Efficiency is a problem even after training the information extraction system
  • Example: NYU's Proteus extraction system takes around 9 seconds per document: over 15 days to process 135,000 news articles
  • "Filtering" before further processing a document might help
  • Can't afford to "scan the web" to process each page!
  • "Hidden-Web" databases don't allow crawling
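
  A back-of-the-envelope check of these numbers (the slide's "over 15 days" presumably includes I/O and other overhead beyond the 9 seconds/document of extraction time):

    SECONDS_PER_DOC = 9
    NUM_DOCS = 135_000
    total_seconds = SECONDS_PER_DOC * NUM_DOCS
    print(f"{total_seconds / 86_400:.1f} days")  # ~14.1 days of extraction alone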

  7. Information Extraction Without Processing All Documents
  • Observation: Often only a small fraction of the database is relevant for an extraction task
  • Our approach: Exploit the database search engine to retrieve and process only "promising" documents

  8. Architecture of our QXtract System
  Pipeline: User-Provided Seed Tuples → Query Generation → Queries → Promising Documents → Information Extraction → Extracted Relation
  Key problem: Learn queries to retrieve "promising" documents

  9. Generating Queries to Retrieve Promising Documents
  • Seed Sampling: get a document sample with "likely negative" and "likely positive" examples
  • Information Extraction: label the sample documents using the information extraction system as "oracle"
  • Classifier Training: train classifiers to "recognize" useful documents
  • Query Generation: generate queries from the classifier model/rules
  A sketch of this pipeline appears below.
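
  A self-contained sketch of the four steps, with the database search interface and the "black-box" extraction system passed in as callables. The simple word-scoring stand-in for the classifiers and all names are illustrative, not QXtract's actual code:

    from collections import Counter

    def generate_queries(seed_tuples, search, extract, k=5):
        # 1. Seed sampling: seed-tuple queries ~ likely positives,
        #    "random" queries (None placeholder here) ~ likely negatives.
        sample = [doc for t in seed_tuples for doc in search(" ".join(t))]
        sample += search(None)

        # 2. Oracle labeling: a document is useful iff extraction yields tuples.
        useful = [doc for doc in sample if extract(doc)]
        useless = [doc for doc in sample if not extract(doc)]

        # 3./4. In place of a trained classifier, score each word by how much
        # more often it appears in useful documents; top words become queries.
        pos = Counter(w for d in useful for w in d.lower().split())
        neg = Counter(w for d in useless for w in d.lower().split())
        return sorted(pos, key=lambda w: pos[w] - neg[w], reverse=True)[:k]

    # Toy usage with stub search/extract functions:
    docs = ["acme based in boston", "stocks rose sharply today"]
    print(generate_queries([("acme", "boston")],
                           search=lambda q: docs,
                           extract=lambda d: "based in" in d))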

  10. Getting a Training Document Sample
  • Issue the user-provided seed tuples as queries to retrieve "likely positive" examples
  • Issue "random" queries to retrieve "likely negative" examples

  11. Labeling the Training Document Sample
  Use the information extraction system as "oracle" to label examples as "true positive" and "true negative."

  12. Training Classifiers to Recognize "Useful" Documents
  • Document features: words
  • Classifiers trained on the labeled sample: Ripper (rule learner), SVM, and Okapi (IR)
  • Example Ripper rule: "based AND near => Useful"
  • Example feature words: based, near, spokesperson, earnings, event, sponsored, is, homerun, far

  13. Generating Queries from Classifiers
  • From Ripper rules: conjunctive queries such as "based AND near"
  • From SVM and Okapi (IR) term weights: single-word queries such as "spokesperson", "earnings", "based"
  • QCombined: the union of the queries generated from all classifiers
  A sketch of deriving such queries from a trained classifier follows.
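
  One way to read queries off a trained linear classifier: fit a linear SVM on bag-of-words features, take the highest-positive-weight terms as single-word queries, and conjoin the top terms to approximate a Ripper-style rule. scikit-learn and the toy labeled documents below are stand-ins for the paper's actual setup:

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC

    docs = ["company based near boston, a spokesperson said",     # useful
            "the startup, based near austin, reported earnings",  # useful
            "the team sponsored a homerun derby event",           # not useful
            "the game is far from over"]                          # not useful
    labels = [1, 1, 0, 0]  # 1 = the extraction system produced tuples

    vec = CountVectorizer()
    clf = LinearSVC().fit(vec.fit_transform(docs), labels)

    # Highest-positive-weight terms become single-word queries; a conjunction
    # of the top two mimics a Ripper-style rule such as "based AND near".
    terms = np.array(vec.get_feature_names_out())
    top = list(terms[np.argsort(clf.coef_[0])[::-1][:3]])
    print(top + [" AND ".join(top[:2])])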

  14. Architecture of our QXtract System
  Pipeline: User-Provided Seed Tuples → Query Generation → Queries → Promising Documents → Information Extraction → Extracted Relation

  15. Experimental Evaluation: Data
  • Training set: 1996 New York Times archive of 137,000 newspaper articles, used to tune QXtract parameters
  • Test set: 1995 New York Times archive of 135,000 newspaper articles

  16. Final Configuration of QXtract, from Training

  17. Experimental Evaluation: Information Extraction Systems and Associated Relations
  • DIPRE [Brin 1998]: Headquarters(Organization, Location)
  • Snowball [Agichtein and Gravano 2000]: Headquarters(Organization, Location)
  • Proteus [Grishman et al. 2002]: DiseaseOutbreaks(DiseaseName, Location, Country, Date, …)

  18. Experimental Evaluation: Seed Tuples for the Headquarters and DiseaseOutbreaks relations

  19. Experimental Evaluation: Metrics
  • Gold standard: relation R_all, obtained by running the information extraction system over every document in the database D_all
  • Recall: % of R_all captured in the approximation extracted from the retrieved documents
  • Precision: % of retrieved documents that are "useful" (i.e., produced tuples)
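
  The two metrics transcribed directly into code; R_all, the retrieved-document set, and the per-document tuples() function are assumed inputs:

    def recall(r_all, retrieved, tuples):
        extracted = {t for d in retrieved for t in tuples(d)}
        return len(extracted & r_all) / len(r_all)

    def precision(retrieved, tuples):
        useful = [d for d in retrieved if tuples(d)]
        return len(useful) / len(retrieved)

    # Toy example: one useful retrieved document out of two.
    r_all = {("Microsoft", "Redmond"), ("Apple Computer", "Cupertino"), ("Nike", "Portland")}
    by_doc = {"d1": {("Microsoft", "Redmond")}, "d2": set()}
    print(recall(r_all, by_doc, by_doc.get),   # 0.33...
          precision(by_doc, by_doc.get))       # 0.5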

  20. Experimental Evaluation: Relation Statistics

  21. Alternative Query Generation Strategies
  • QXtract, with the final configuration from training
  • Tuples: Keep deriving queries from extracted tuples (see the sketch below)
    • Problem: "disconnected" databases
  • Patterns: Derive queries from the extraction patterns of the information extraction system (e.g., "<ORGANIZATION>, based in <LOCATION>" => "based in")
    • Problems: pattern features are often not suitable for querying, or not visible from a "black-box" extraction system
  • Manual: Construct queries manually [MUC]; obtained for Proteus from its developers; not available for DIPRE and Snowball
  • Plus a simple additional baseline: retrieve a random document sample of appropriate size
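
  A sketch of the Tuples strategy as a work-queue loop (all function names are placeholders). On a "disconnected" database the queue empties while unseen tuples remain, which is exactly the failure mode noted above:

    def tuples_strategy(seeds, search, extract, max_docs=1000):
        queue, seen_docs, relation = list(seeds), set(), set(seeds)
        while queue and len(seen_docs) < max_docs:
            t = queue.pop()
            for doc in search(" ".join(t)):      # tuple attributes as query
                if doc in seen_docs:
                    continue
                seen_docs.add(doc)
                for new in extract(doc):         # each new tuple becomes a query
                    if new not in relation:
                        relation.add(new)
                        queue.append(new)
        return relation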

  22. Recall and Precision: Headquarters Relation; Snowball Extraction System
  [Plot: precision vs. recall for the alternative query-generation strategies]

  23. Recall and Precision: Headquarters Relation; DIPRE Extraction System
  [Plot: precision vs. recall for the alternative query-generation strategies]

  24. Extraction Efficiency and Recall: DiseaseOutbreaks Relation; Proteus Extraction System
  60% of the relation extracted from just 10% of the documents of the 135,000 newspaper article database

  25. Snowball/Headquarters Queries

  26. DIPRE/Headquarters Queries

  27. Proteus/DiseaseOutbreaks Queries

  28. Current Work: Characterizing Databases for an Extraction Task
  • Is the database sparse? No → Scan
  • Yes → Is the database connected? Yes → Tuples; No → QXtract
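
  The decision procedure of the tree above, written out; the is_sparse/is_connected predicates stand for the database properties being characterized:

    def choose_strategy(is_sparse: bool, is_connected: bool) -> str:
        if not is_sparse:
            return "Scan"     # most documents are useful: just process them all
        if is_connected:
            return "Tuples"   # extracted tuples lead to the remaining useful documents
        return "QXtract"      # sparse and disconnected: learn queries instead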

  29. Related Work
  • Information Extraction: focus on quality of extracted relations [MUC]; most relevant sub-task: text filtering
    • Filters derived from extraction patterns, or consisting of words (manually created or from supervised learning)
    • Grishman et al.'s manual pattern-based filters for disease outbreaks
    • Related to the Manual and Patterns strategies in our experiments
    • Focus not on querying using a simple search interface
  • Information Retrieval: focus on relevant documents for queries
    • In our scenario, relevance is determined by the "extraction task" and the associated information extraction system
  • Automatic Query Generation: several efforts for different tasks:
    • Minority-language corpora construction [Ghani et al. 2001]
    • Topic-specific document search (e.g., [Cohen & Singer 1996])

  30. Contributions: An Unsupervised Query-Based Technique for Efficient Information Extraction
  • Adapts to an "arbitrary" underlying information extraction system and document database
    • Can work over non-crawlable "Hidden-Web" databases
  • Minimal user input required: a handful of example tuples
  • Can trade off relation completeness and extraction efficiency
  • Particularly interesting in conjunction with unsupervised/bootstrapping-based information extraction systems (e.g., DIPRE, Snowball)

  31. Questions?

  32. Overflow Slides

  33. Related Work (II)
  • Focused Crawling (e.g., [Chakrabarti et al. 2002]): uses link and page classification to crawl pages on a topic
  • Hidden-Web Crawling [Raghavan & Garcia-Molina 2001]: retrieves pages from non-crawlable Hidden-Web databases
    • Needs a rich query interface with distinguishable attributes
    • Related to the Tuples strategy, but "tuples" are derived from pull-down menus, etc., on search interfaces as found
    • Our goal: retrieve as few documents as possible from one database to extract the relation
  • Question-Answering Systems

  34. Related Work (III)
  • [Mitchell, Riloff, et al. 1998] use "linguistic phrases" derived from information extraction patterns as features for text categorization
    • Related to the Patterns strategy; requires document parsing, so can't directly generate simple queries
  • [Gaizauskas & Robertson 1997] use 9 manually generated keywords to search for documents relevant to a MUC extraction task

  35. Recall and Precision: DiseaseOutbreaks Relation; Proteus Extraction System
  [Plot: precision vs. recall for the alternative query-generation strategies]

  36. Running Times

  37. Extracting Relations from Text: Snowball [ACM DL '00]
  • Exploit redundancy on the web to focus on "easy" instances
  • Require only minimal training (a handful of seed tuples)
  Bootstrapping loop: Initial Seed Tuples → Find Occurrences of Seed Tuples → Tag Entities → Generate Extraction Patterns → Generate New Seed Tuples / Augment Table → repeat
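
  A toy, self-contained rendition of this bootstrapping loop over an in-memory "corpus". Real Snowball uses tagged entities, vector-space patterns, and confidence scores; this sketch only shows the control flow:

    import re

    corpus = [
        "Microsoft, based in Redmond, shipped a new product.",
        "Nike, based in Portland, signed a new athlete.",
        "Boeing, whose headquarters in Seattle expanded, hired engineers.",
        "Microsoft, whose headquarters in Redmond expanded, hired too.",
    ]

    def snowball(seeds, corpus, iterations=2):
        table = set(seeds)
        for _ in range(iterations):
            # Generate extraction patterns from the contexts of known tuples.
            patterns = set()
            for org, loc in table:
                for s in corpus:
                    m = re.search(re.escape(org) + r"(.+?)" + re.escape(loc), s)
                    if m:
                        patterns.add(m.group(1))
            # Apply the patterns elsewhere to augment the table (new seed tuples).
            for p in patterns:
                for s in corpus:
                    m = re.search(r"(\w+)" + re.escape(p) + r"(\w+)", s)
                    if m:
                        table.add((m.group(1), m.group(2)))
        return table

    print(snowball({("Microsoft", "Redmond")}, corpus))
    # {('Microsoft', 'Redmond'), ('Nike', 'Portland'), ('Boeing', 'Seattle')}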
