
Information Extraction: Distilling Structured Data from Unstructured Text.






Presentation Transcript


  1. Information Extraction: Distilling Structured Data from Unstructured Text. Presenter: Shanshan Lu 03/04/2010

  2. Referenced papers • Andrew McCallum: Information Extraction: Distilling Structured Data from Unstructured Text. ACM Queue, Volume 3, Number 9, November 2005. • Craig A. Knoblock, Kristina Lerman, Steven Minton, Ion Muslea: Accurately and Reliably Extracting Data from the Web: A Machine Learning Approach. IEEE Data Eng. Bull. 23(4): 33-41 (2000)

  3. Example • Task: build a website that helps people find continuing-education opportunities at colleges, universities, and organizations across the country, supporting fielded searches over locations, dates, times, etc. • Problem: much of the data was not available in structured form. • The only universally available public interfaces were web pages designed for human browsing.

  4. Information Extraction: Distilling Structured Data from Unstructured Text

  5. Information extraction • Information extraction is the process of filling the fields and records of a database from unstructured or loosely formatted text.

  6. Information extraction • Information extraction involves five major subtasks: segmentation, classification, association, normalization, and deduplication.

  7. Techniques in information extraction • Some simple extraction tasks can be solved by writing regular expressions. • Because web pages change frequently, hand-written patterns alone are not sufficient for the information extraction task. • Over the past decade there has been a revolution in the use of statistical and machine-learning methods for information extraction.
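As a concrete illustration of the regular-expression approach the slide mentions, here is a minimal sketch; the sample text and both patterns are invented for this example:

```python
import re

# Hand-written patterns for a simple extraction task: pulling a course
# date and time out of loosely formatted text (illustrative only).
text = "Intro to Databases. Starts 03/04/2010 at 6:30 PM. Room 101."

date_re = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")       # e.g. 03/04/2010
time_re = re.compile(r"\b\d{1,2}:\d{2}\s?[AP]M\b")   # e.g. 6:30 PM

dates = date_re.findall(text)
times = time_re.findall(text)
```

Patterns like these are brittle: a site that switches to "4 March 2010" silently breaks them, which is exactly the fragility the slide points out.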

  8. A Machine Learning Approach • A wrapper is a piece of software that enables a semi-structured Web source to be queried as if it were a database.

  9. Contributions • Learning highly accurate extraction rules. • Verifying the wrapper to ensure that the correct data continues to be extracted. • Automatically adapting to changes in the sites from which the data is being extracted.

  10. Learning extraction rules • One of the critical problems in building a wrapper is defining a set of extraction rules that precisely define how to locate the information on the page. • For any given item to be extracted from a page, one needs an extraction rule to locate both the beginning and end of that item. • A key idea underlying our work is that the extraction rules are based on “landmarks” (i.e., groups of consecutive tokens) that enable a wrapper to locate the start and end of the item within the page.
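A minimal sketch of the landmark idea, with an invented sample page and a naive string search standing in for a learned rule:

```python
# A start rule skips to a landmark (a group of consecutive tokens) that
# precedes the item; an end rule finds the landmark that follows it.
# The page layout and landmarks here are invented for illustration.
page = "<b>Name:</b> Joe's Pizza <b>Phone:</b> (310) 555-1212 <br>"

def extract(page, start_landmark, end_landmark):
    # Locate the item between its preceding and following landmarks.
    start = page.index(start_landmark) + len(start_landmark)
    end = page.index(end_landmark, start)
    return page[start:end].strip()

name = extract(page, "<b>Name:</b>", "<b>Phone:</b>")
phone = extract(page, "<b>Phone:</b>", "<br>")
```

A real wrapper learns which landmarks to use; the point of the sketch is only that one rule pair (start, end) pins down each item.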

  11. Samples

  12. Rules • Start rules locate the beginning of an item; end rules are similar to start rules. • Disjunctive rules provide several alternative landmarks to handle variations in page format.

  13. STALKER to learn rules • STALKER: a hierarchical wrapper induction algorithm that learns extraction rules from examples labeled by the user. • STALKER requires no more than 10 examples, thanks to the fixed web-page format and the hierarchical structure. • STALKER exploits the hierarchical structure of the source to constrain the learning problem.

  14. STALKER to learn rules • For instance, instead of using one complex rule that extracts all restaurant names, addresses, and phone numbers from a page, they take a hierarchical approach: • first, apply a rule that extracts the whole list of restaurants; • then use another rule to break the list into tuples that correspond to individual restaurants; • finally, from each such tuple, extract the name, address, and phone number of the corresponding restaurant.
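The three steps above can be sketched as follows; the page markup and delimiters are invented for illustration (real STALKER learns the rules from labeled examples):

```python
# Hierarchical extraction: list -> tuples -> fields.
page = ("HEADER <list><r>Joe's Pizza;213 Main St;310-555-1212</r>"
        "<r>Cafe Luna;42 Oak Ave;213-555-9876</r></list> FOOTER")

# 1. Extract the whole list of restaurants.
lst = page.split("<list>")[1].split("</list>")[0]

# 2. Break the list into tuples, one per restaurant.
tuples = [t.split("</r>")[0] for t in lst.split("<r>") if t]

# 3. Extract name, address, and phone from each tuple.
fields = ("name", "address", "phone")
records = [dict(zip(fields, t.split(";"))) for t in tuples]
```

Each level only has to solve a small, local extraction problem, which is what makes the learning task tractable with few examples.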

  15. STALKER to learn rules • Algorithm to learn each rule • STALKER is a sequential covering algorithm that, given the training examples E, tries to learn a minimal number of perfect disjuncts that cover all examples in E. • A perfect disjunct is a rule that covers at least one training example and, on every example it matches, produces the correct result.
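In outline, the covering loop looks like this; the candidate rules and the match test are toy stand-ins for STALKER's landmark refinement, not the real operators:

```python
# Sequential covering: repeatedly pick a disjunct that correctly covers
# some remaining examples, remove what it covers, and continue until
# every training example is covered.
examples = [("a", 1), ("a", 1), ("b", 2), ("c", 2)]
# Toy "perfect disjuncts": each matches one token and is always correct on it.
candidates = ["a", "b", "c"]

def covers(rule, example):
    return example[0] == rule

def sequential_covering(examples, candidates):
    remaining = list(examples)
    learned = []
    while remaining:
        # Prefer the disjunct that covers the most remaining examples.
        best = max(candidates, key=lambda r: sum(covers(r, e) for e in remaining))
        learned.append(best)
        remaining = [e for e in remaining if not covers(best, e)]
    return learned

rules = sequential_covering(examples, candidates)
```

The greedy choice of the widest-covering disjunct is what keeps the learned rule set small.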

  16. STALKER to learn rules • Learning a start rule for address: • First, it selects an example, say E4, to guide the search. • Second, it generates a set of initial candidates, which are rules that consist of a single 1-token landmark; these landmarks are chosen so that they match the token that immediately precedes the beginning of the address in the guiding example.

  17. STALKER to learn rules • Learning a start rule for address: • Because R6 has a better generalization potential, STALKER selects R6 for further refinements. • While refining R6, STALKER creates, among others, the new candidates R7, R8, R9, and R10 shown below.

  18. STALKER to learn rules • Learning a start rule for address: • As R10 works correctly on all four examples, STALKER stops the learning process and returns R10. • Results of STALKER: • In an empirical evaluation on 28 sources, STALKER had to learn 206 extraction rules. • It learned 182 perfect rules (100% accurate) and another 18 rules that had an accuracy of at least 90%. In other words, only 3% of the learned rules were less than 90% accurate.

  19. Identifying highly informative examples • The most informative examples illustrate exceptional cases. • They developed an active-learning approach called co-testing that analyzes the set of unlabeled examples to automatically select examples for the user to label. • Backward rules locate an item by scanning from the end of the page rather than from the beginning.

  20. Identifying highly informative examples • Basic idea: • after the user labels one or two examples, the system learns both a forward and a backward rule. • Then it runs both rules on a given set of unlabeled pages. Whenever the rules disagree on an example, the system selects that example for the user to label next. • Co-testing makes it possible to generate accurate extraction rules with a very small number of labeled examples.
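A toy version of the disagreement test; the two "rules" below are simple string searches standing in for learned forward and backward rules, and the sample pages are invented:

```python
# Co-testing: a forward rule scans from the start of the page, a backward
# rule from the end. Pages where the two disagree are the informative
# ones to hand to the user for labeling.
def forward_rule(page):
    # Start of the phone number = just after the "Phone:" landmark.
    return page.index("Phone:") + len("Phone:")

def backward_rule(page):
    # Start of the phone number = last "(" on the page.
    return page.rindex("(")

unlabeled = [
    "Name: Joe's Pizza Phone:(310) 555-1212",    # both rules agree here
    "Name: Luna (downtown) Phone: 213-555-9876", # the rules disagree here
]

to_label = [p for p in unlabeled if forward_rule(p) != backward_rule(p)]
```

The second page is an exceptional case (a parenthesis that is not an area code), and the disagreement surfaces it automatically.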

  21. Identifying highly informative examples • Assume that the initial training set consists of E1 and E2, while E3 and E4 are not labeled. Based on these examples, we learn the rules:

  22. Identifying highly informative examples • We applied co-testing on the 24 tasks on which STALKER fails to learn perfect rules. • The results were excellent: the average accuracy over all tasks improved from 85.7% to 94.2%. • Furthermore, 10 of the learned rules were 100% accurate, while another 11 rules were at least 90% accurate.

  23. Verifying the extracted data • Since the information for even a single field can vary considerably, the system learns the statistical distribution of the patterns for each field. • Wrappers can be verified by comparing the patterns of newly returned data to the learned distribution. • When a significant difference is found, an operator can be notified, or the wrapper repair process can be launched automatically.
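One way to realize this check is sketched below, assuming a coarse character-class pattern per value; the pattern coding and the threshold are assumptions of this sketch, not the paper's exact method:

```python
from collections import Counter

def pattern(value):
    # Coarse pattern: D for digit, A for letter, . for anything else.
    return "".join("D" if c.isdigit() else "A" if c.isalpha() else "."
                   for c in value)

def learn_distribution(values):
    # Learn the relative frequency of each pattern for a field.
    counts = Counter(pattern(v) for v in values)
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

def looks_broken(learned, new_values, threshold=0.5):
    # Flag the wrapper when most newly extracted values have a pattern
    # never seen during training (a significant distribution shift).
    unseen = sum(1 for v in new_values if pattern(v) not in learned)
    return unseen / len(new_values) > threshold

learned = learn_distribution(["310-555-1212", "213-555-9876"])
ok = looks_broken(learned, ["408-555-0000"])            # familiar pattern
bad = looks_broken(learned, ["page not found", "error 404"])
```

A batch of phone-shaped values passes quietly, while a batch of error strings trips the check and would trigger repair.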

  24. Automatically repairing wrappers • Locate correct examples of the data field on new pages. • Re-label the new pages automatically. • Re-run the labeled and re-labeled examples through STALKER to produce correct rules for the changed site.

  25. How to locate the correct examples? • Each new page is scanned to identify all text segments that begin with one of the starting patterns and end with one of the ending patterns; we call these segments candidates. • The candidates are then clustered into subgroups that share common features (relative position on the page, adjacent landmarks, and visibility to the user). • Each group is given a score based on how similar it is to the training examples. • The highest-ranked group is expected to contain the correct examples of the data field.
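A minimal candidate scan over a redesigned page; the starting and ending patterns are invented for this sketch, and clustering and scoring are omitted:

```python
import re

# Scan a changed page for segments that begin with a learned starting
# pattern and end with a learned ending pattern; each hit is a candidate
# occurrence of the data field (here, a phone number).
start_patterns = [r"\(\d{3}\)"]   # e.g. an area code like "(310)"
end_patterns = [r"\d{4}"]         # e.g. the last four digits

page = "New layout! Call us today: (310) 555-1212, or stop by."

candidates = [
    m.group(0)
    for sp in start_patterns
    for ep in end_patterns
    for m in re.finditer(sp + r".*?" + ep, page)
]
```

In the full system these candidates would then be clustered and scored against the training examples to pick the correct group.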

  26. Automatically repairing wrappers

  27. Upcoming trends and capabilities • Combine IE and data mining to perform text mining as well as improve the performance of the underlying extraction system. • Rules mined from a database extracted from a corpus of texts are used to predict additional information to extract from future documents, thereby improving the recall of IE.

  28. Upcoming trends and capabilities • SQL --> Database

  29. Information extraction, the Web and the future • The second half of the Internet revolution: giving machines access to this immense knowledge base.

  30. Information extraction, the Web and the future • In web search there will be a transition from keyword search on documents to higher-level queries: • queries where the search hits will be objects, such as people or companies, instead of simply documents; • queries that are structured and return information that has been integrated and synthesized from multiple pages; • queries that are stated as natural language questions (“Who were the first three female U.S. Senators?”) and answered with succinct responses.

  31. Thank you! Any questions?
