Ontology-based information extraction: progresses and perspectives of the Ex tool

Presentation Transcript


  1. Ontology-based information extraction: progresses and perspectives of the Ex tool Martin Labský labsky@vse.cz KEG seminar, May 29, 2008

  2. Agenda • Motivation for Web Information Extraction (IE) • Difficulties in practical applications • Extraction Ontologies • Extraction process • Experimental results: contact information • Future work and Conclusion

  3. Motivation for Web IE (1/4): online products

  4. Motivation for Web IE (2/4): contact information

  5. Motivation for Web IE (3/4): seminars, events

  6. Motivation for Web IE (4/4)
  • Store the extracted results in a DB to enable structured search over documents
    • information retrieval
    • database-like querying
    • e.g. an online product search engine
    • e.g. building a contact DB
  • Support for web page quality assessment
    • involved in the EU project MedIEQ to support medical website accreditation agencies
  • Source documents
    • internet, intranet, e-mails
    • can be very diverse

  7. Agenda • Motivation for Web Information Extraction (IE) • Difficulties in practical IE applications • Extraction Ontologies • Extraction process • Experimental results: contact information • Future work and Conclusion

  8. Difficulties in practical applications (1/3)
  • Requirements
    • be able to extract some information quickly, not necessarily with the best accuracy
    • often needed for a proof-of-concept application; more work can then be done to boost accuracy
  • The extraction model changes
    • the meaning of to-be-extracted items may shift
    • new items are often added

  9. Difficulties in practical applications (2/3)
  • Training data
    • most state-of-the-art trainable IE systems require large amounts of training data, which are almost never available
    • when training data are collected, they are not easy to adapt to changed or additional criteria
    • active learning helps reduce the annotation effort, but users often need to spend time annotating trivial examples that could easily be covered by manual rules (our experience from experiments with extraction of bicycle descriptions using Hidden Markov Models)
  • Wrappers
    • wrapper-only systems cannot be relied on when extracting from multiple websites
    • non-wrapper systems often do not exploit regular formatting cues
  • Purely manual rules
    • extraction rules written purely by hand are not easily extensible when training data are collected in later phases

  10. Difficulties in practical applications (3/3)
  • It seems difficult to exploit at the same time:
    • extraction knowledge from domain experts
    • training data
    • formatting regularities, both within a document and within a group of documents from the same source
  • We attempt to address this with the approach of extraction ontologies

  11. Agenda • Motivation for Web Information Extraction (IE) • Difficulties in practical applications • Extraction Ontologies • Extraction process • Experimental results: contact information • Future work and Conclusion

  12. Extraction ontologies
  • An extraction ontology is a part of a domain ontology transformed to suit extraction needs
  • Contains classes composed of attributes
    • more like UML class diagrams, less like ontologies where e.g. relations are standalone
    • also contains axioms related to classes or attributes
  • Classes and attributes are augmented with extraction evidence
    • manually provided patterns for content and context
    • value or length ranges
    • links to external trainable classifiers
  • Example class: Person with attributes name {1}, degree {0-5}, email {0-2}, phone {0-3}, and a Responsible relation
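A minimal data-structure sketch of the Person example above, with attribute cardinalities; this is purely illustrative and not the Ex ontology format (all names are invented).

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    min_card: int = 0
    max_card: int = 1
    evidence: list = field(default_factory=list)   # patterns, constraints, classifier links

@dataclass
class ExtractionClass:
    name: str
    attributes: list

# Person class from the slide: name {1}, degree {0-5}, email {0-2}, phone {0-3}
person = ExtractionClass("Person", [
    Attribute("name", 1, 1),
    Attribute("degree", 0, 5),
    Attribute("email", 0, 2),
    Attribute("phone", 0, 3),
])
```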

  13. Extraction evidence provided by domain expert (1)
  • Patterns
    • for attributes and classes, for their content and context
    • may be defined at several levels: word and character level, formatting tag level, and the level of labels (e.g. sentence breaks, POS tags)
  • Attribute value constraints
    • word length constraints, numeric value ranges
    • units can be attached to numeric attributes
  • Axioms
    • may enforce relations among attributes
    • interpreted using the JavaScript scripting language
  • Simple co-reference resolution rules
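To make these evidence types concrete, here is an illustrative Python mock-up of a context pattern, a value constraint, and a class-level axiom for the contact domain. This is not the Ex ontology syntax (Ex interprets axioms via JavaScript); the examples and names are invented.

```python
import re

# Context pattern: evidence that a phone number is often preceded by these words.
phone_context = re.compile(r"\b(phone|tel\.?|fax)\b[:\s]*$", re.IGNORECASE)

# Attribute value constraint: a degree abbreviation is 1-2 short words.
def degree_value_ok(value: str) -> bool:
    words = value.split()
    return 1 <= len(words) <= 2 and all(len(w) <= 6 for w in words)

# Class-level axiom (invented example): within one Person instance,
# the e-mail local part should resemble the extracted surname.
def email_matches_name(person: dict) -> bool:
    email, name = person.get("email", ""), person.get("name", "")
    surname = name.split()[-1].lower() if name else ""
    return bool(surname) and surname in email.split("@")[0].lower()

print(email_matches_name({"name": "John Burns", "email": "jburns@web.ca"}))  # True
```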

  14. Extraction evidence provided by domain expert (2)
  • Axioms: class level, attribute level
  • Patterns: class content, attribute value, attribute context, class context
  • Value constraints: word length, numeric value

  15. Extraction evidence from classifiers
  • Links to trainable classifiers
    • may classify attributes only
    • binary or multi-class
  • Training (if not done externally) uses these features:
    • re-use all evidence provided by the expert
    • induce binary features based on word n-grams
  • Feature induction
    • candidate features are all word n-grams of given lengths occurring inside or near training attribute values
    • pruning parameters: point-wise mutual information thresholds, minimal absolute occurrence count, maximum number of features
  [Figure labels: classifier definition, classifier usage]
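A rough sketch of this kind of n-gram feature induction with PMI-based pruning. The function, thresholds, and data layout are illustrative assumptions, not the Ex API.

```python
from collections import Counter
from math import log

def induce_ngram_features(token_windows, labels, n=2,
                          min_count=3, pmi_threshold=1.0, max_features=200):
    """token_windows: token lists taken inside/near training attribute values;
    labels: parallel booleans (True = window belongs to the attribute)."""
    ngram_counts, joint_counts = Counter(), Counter()
    positives, total = sum(labels), len(labels)
    for tokens, is_attr in zip(token_windows, labels):
        ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        for g in ngrams:
            ngram_counts[g] += 1
            if is_attr:
                joint_counts[g] += 1
    features = []
    for g, c in ngram_counts.items():
        if c < min_count:
            continue  # minimal absolute occurrence count
        # point-wise mutual information between "n-gram present" and "attribute"
        pmi = log((joint_counts[g] / total) /
                  ((c / total) * (positives / total) + 1e-12) + 1e-12)
        if pmi >= pmi_threshold:
            features.append((g, pmi))
    features.sort(key=lambda x: -x[1])
    return features[:max_features]  # maximum number of features
```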

  16. Probabilistic model to combine evidence
  • Each piece of evidence E is equipped with 2 probability estimates with respect to the predicted attribute A:
    • evidence precision P(A|E) ... prediction confidence
    • evidence coverage P(E|A) ... necessity of the evidence (support)
  • Each attribute is assigned some low prior probability P(A)
  • Let {E_1, ..., E_n} be the set of evidence applicable to A
  • Assume conditional independence among the E_i given A (and given not-A)
  • Using the Bayes formula, P(A | E_1, ..., E_n) is computed as:
    P(A | E_1, ..., E_n) = N_A / (N_A + N_notA), where
    N_A = P(A)^(1-n) * Π_i P(A|E_i) and N_notA = (1 - P(A))^(1-n) * Π_i (1 - P(A|E_i))
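A minimal numeric sketch of this combination, using only the precision estimates P(A|E_i) and the prior (coverage is omitted here); the function name and example values are illustrative only.

```python
def combine_evidence(prior: float, precisions: list[float]) -> float:
    """Combine per-evidence precisions P(A|E_i) with the prior P(A),
    assuming the E_i are conditionally independent given A and given not-A."""
    n = len(precisions)
    num = prior ** (1 - n)          # P(A)^(1-n)
    den = (1.0 - prior) ** (1 - n)  # P(not A)^(1-n)
    for p in precisions:
        num *= p          # multiply in P(A|E_i)
        den *= (1.0 - p)  # multiply in P(not A|E_i)
    return num / (num + den)

# Example: a low prior boosted by two fairly precise pieces of evidence.
print(combine_evidence(0.01, [0.8, 0.7]))   # ~0.999
```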

  17. Agenda • Motivation for Web Information Extraction (IE) • Difficulties in practical applications • Extraction Ontologies • Extraction process • Experimental results: contact information • Future work and Conclusion

  18. The extraction process (1/5)
  • Tokenize, build the HTML formatting tree, apply a sentence splitter and POS tagger
  • Match patterns
  • Create Attribute Candidates (ACs)
  • For each created AC, let P_AC = P(A | evidence observed for the AC), computed with the probabilistic model above
    • prune ACs below a threshold
    • build the document AC lattice, score ACs by log(P_AC)
  [Figure: fragment of an AC lattice over the tokens "Washington , DC"]

  19. The extraction process (2/5)
  • Evaluate coreference resolution rules for each pair of ACs
    • e.g. “Dr. Burns” ↔ “John Burns”
    • possible coreferring groups are remembered
    • rules are given in the attribute’s value section of the ontology
  • Compute the best scoring path BP through the AC lattice
    • using dynamic programming
  • Run the wrapper induction algorithm using all AC ∈ BP
    • the wrapper induction algorithm is described on the next slides
    • if new local patterns are induced, apply them to rescore existing ACs and create new ACs, then update the AC lattice and recompute BP
  • Terminate here if no instances are to be generated
    • output all AC ∈ BP (n-best paths supported)
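A simplified sketch of the dynamic-programming best-path search, treating the lattice as a set of scored, non-overlapping token spans and allowing uncovered tokens. The per-token skip score and the flat span representation are illustrative simplifications of Ex's actual lattice.

```python
def best_path(num_tokens, candidates, skip_score=-1.0):
    """candidates: (start, end, log_prob) spans with end exclusive (log_prob = log P_AC).
    skip_score is an illustrative per-token score for leaving a token uncovered."""
    best = [(0.0, [])]                 # best[i] = (score, path) over the first i tokens
    by_end = {}
    for c in candidates:
        by_end.setdefault(c[1], []).append(c)
    for i in range(1, num_tokens + 1):
        # default transition: leave token i-1 uncovered (background)
        score, path = best[i - 1][0] + skip_score, best[i - 1][1]
        for start, end, logp in by_end.get(i, []):
            if best[start][0] + logp > score:
                score, path = best[start][0] + logp, best[start][1] + [(start, end, logp)]
        best.append((score, path))
    return best[num_tokens][1]

# Toy usage: two overlapping "name" candidates and one "email" candidate over 6 tokens.
print(best_path(6, [(0, 2, -0.2), (1, 3, -0.5), (4, 6, -0.1)]))
# -> [(0, 2, -0.2), (4, 6, -0.1)]
```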

  20. The extraction process (3/5)
  • Generate Instance Candidates (ICs) bottom-up
    • a triangular trellis is used to store partial ICs
    • when scoring new ICs, only consider axioms and patterns that can already be applied to the IC; validity is not required
    • pruning parameters: absolute and relative beam size at each trellis node, maximum number of ACs that can be skipped, minimum IC probability

  21. The extraction process (4/5)
  • IC generation, continued
  • When a new IC is created, its P(IC) is computed from 2 components:
    • an AC-based component aggregating the probabilities P_AC of the member ACs and penalizing skipped ACs, where |IC| is the member attribute count, AC_skip is a non-member AC that lies fully or partially inside the IC, and P_ACskip is the probability of that AC being a “false positive”
    • a class-based component P(C | E_C), where E_C is the set of evidence known for the class C, computed using the same probabilistic model as for ACs
  • The two scores are combined using the Prospector pseudo-Bayesian method (odds multiplication: with O(p) = p/(1-p), the combined odds are the product of the component odds divided by the prior odds)
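For reference, the Prospector-style odds combination can be sketched as follows. This is a generic reading of the method, not necessarily the exact formula used in Ex; the component values in the example are made up.

```python
def odds(p: float) -> float:
    return p / (1.0 - p)

def prospector_combine(prior: float, p1: float, p2: float) -> float:
    """Combine two probability estimates of the same hypothesis relative to a common prior."""
    o = odds(prior) * (odds(p1) / odds(prior)) * (odds(p2) / odds(prior))
    return o / (1.0 + o)

# e.g. AC-based score 0.7 and class-evidence score 0.6 against a 0.3 prior
print(prospector_combine(0.3, 0.7, 0.6))   # ~0.89
```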

  22. The extraction process (5/5)
  • Insert valid ICs into the AC lattice
    • valid ICs were assembled during the IC generation phase
    • the score of a valid IC reflects all extraction evidence of its class
    • all unpruned valid ICs are inserted into the AC lattice, scored by log P(IC)
  • The best path BP is calculated through the IC+AC lattice (n-best supported)
    • the search algorithm allows constraints to be defined over the extracted path(s), e.g. min/max count of extracted instances
    • output all ACs and ICs on BP

  23. Extraction evidence based on formatting
  • A simple wrapper induction algorithm
    • identify formatting regularities and turn them into “local” context patterns that boost the contained ACs
  • Assemble distinct formatting subtrees rooted at block elements that contain ACs from the best path BP currently determined by the system
  • For each subtree S and attribute Att, calculate the occurrence count C(S, Att) and the precision prec(Att|S)
  • If both C(S, Att) and prec(Att|S) reach defined thresholds, a new local context pattern is created, with its precision set accordingly and its recall close to 0 (in order not to harm potential singleton ACs)
  [Figure: a formatting subtree (TD containing B and A_href elements) learned from known names such as “John Doe jdoe@web.ca” and applied to unknown names such as “Argentina Agosto aa@web.br”]
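A rough sketch of the counting step behind this induction, assuming each formatting subtree is reduced to a tag-path signature; the function, data layout, and thresholds are illustrative assumptions.

```python
from collections import Counter, defaultdict

def induce_local_patterns(subtrees, min_count=3, min_precision=0.8):
    """subtrees: list of (signature, attribute_or_None) pairs, where `signature`
    identifies the formatting subtree (e.g. a tag path such as 'TD/B') and
    `attribute_or_None` is the best-path attribute found inside it, if any."""
    total = Counter()                  # C(S): occurrences of each signature
    with_attr = defaultdict(Counter)   # C(S, Att): signature occurrences containing Att
    for sig, att in subtrees:
        total[sig] += 1
        if att is not None:
            with_attr[sig][att] += 1
    patterns = []
    for sig, counts in with_attr.items():
        for att, c in counts.items():
            precision = c / total[sig]   # prec(Att | S)
            if c >= min_count and precision >= min_precision:
                patterns.append((sig, att, precision))
    return patterns

# e.g. most TD/B subtrees on the page contain a person name
print(induce_local_patterns([("TD/B", "name")] * 4 + [("TD/B", None)]))
```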

  24. Agenda • Motivation for Web Information Extraction (IE) • Difficulties in practical applications • Extraction Ontologies • Extraction process • Experimental results: contact information • Future work and Conclusion

  25. Experimental results: contact information
  • 109 English contact pages, 200 Spanish, 108 Czech
  • Named entity counts: 7000, 5000 and 11000, respectively; instances were not labeled
  • Only the domain expert’s evidence and formatting pattern induction were used
  • The domain expert saw 30 randomly chosen documents; the rest was test data
  • Instance extraction was performed but not evaluated

  26. Future work
  • Confirm that improved results can be achieved when combining expert knowledge and formatting pattern induction with classifiers
  • Attempt to improve a seed extraction ontology by bootstrapping using relevant pages retrieved from the Internet
  • Adapt the structure of the extraction ontology according to data
    • e.g. add new attributes to represent product features

  27. Conclusions
  • Presented an extraction ontology approach that aims to:
    • allow for fast prototyping of IE applications
    • accommodate extraction schema changes easily
    • utilize all available forms of extraction knowledge: the domain expert’s knowledge, training data, and formatting regularities found in web pages
  • Results
    • indicate that extraction ontologies can serve as a quick prototyping tool
    • it seems possible to improve the performance of the prototyped ontology when training data become available

  28. Acknowledgements
  • The research was partially supported by the EC under contract FP6-027026, Knowledge Space of Semantic Inference for Automatic Annotation and Retrieval of Multimedia Content (K-Space).
  • The medical website application is carried out in the context of the EC-funded (DG-SANCO) project MedIEQ.

  29. Backup slides • IET and co.

  30. Information extraction toolkit - architecture
  [Architecture diagram] Inputs: classified documents from WCC; the expert’s domain and extraction knowledge; annotated corpora. Components: Annotation tool (with labelling schemas and UI), Pre-processor, IE Engines (CRF extraction engine, Ex extraction ontology engine, Rule-based integrator (TBD)), Data Model Manager, Task Manager, Document IO, Evaluator; the diagram distinguishes user components from admin components. Outputs: annotated documents; extracted attributes and instances; AQUA.

  31. Information extraction toolkit – document flow
  [Document flow diagram] A classified document enters the Pre-processor, and extraction model(s) are selected based on the document class. The CRF NE engine extracts attributes, the Extraction ontology engine extracts attributes and instances, and the Rule-based integrator refines the extracted values, e.g. based on document classification. The output is the set of extracted attributes and instances.

  32. Czech contact data set: results

  33. Czech dataset: per-attribute F-measures
  • IET purpose:
    • to support the user by providing suggestions
    • not to work standalone without supervision

  34. Customization to new criteria
  • Precisely define the criterion or criteria group
    • define it and give positive and negative examples
  • If gazetteers are required: search for or construct appropriate gazetteers
  • If training is required:
    • annotate a training corpus of at least 100 documents with at least 300 occurrences of the criterion
    • train one of the trainable extractors: the CRF engine, or Ex with Weka integration
  • If some extraction evidence can be given by a human: write a new extraction ontology or extend an existing one
  • Evaluate performance

  35. Localization to a new language
  • Reuse the language-independent parts of the extraction ontology:
    • class structure (attributes in a class)
    • cardinalities, constraints, axioms
    • some criteria can be reused almost completely (phone, email)
  • If a criterion requires training: annotate a corpus and train a classifier, as when adding a new criterion
  • Provide language-specific extraction evidence that can be encoded by a human (if any) and add it to the extraction ontology

  36. Demo + tutorial
  • IET + Ex
    • free text criteria
    • (shows the internal IET user interface)
  • Tutorial: http://eso.vse.cz/~labsky/ex/ex_tutorial.pdf

  37. New features in the Ex IE engine
  • Significant speed-up
  • Memory footprint reduction
  • Multiple class extraction
  • Extended axiom support
  • Instance parsing and reference resolution improvements
  • Extraction ontology authoring made easier
