
SKOS-2-HIVE



  1. SKOS-2-HIVE UNT workshop

  2. Introductions Craig Willis (craig.willis@unc.edu)

  3. Afternoon Session Schedule • Overview • Using HIVE as a service • Installing and configuring HIVE • Using HIVE Core API • Understanding HIVE Internals • HIVE supporting technologies • Developing and customizing HIVE

  4. Block 1: Introduction

  5. Workshop Overview • Schedule • Interactive, less structured • Hands-on (work together) • Activities: • Installing and configuring HIVE • Programming examples (HIVE Core API, HIVE REST API)

  6. Background and Interests • What are you most interested in getting out of this part of the workshop? • What is your background? • Cataloging, indexing, and classification • Programming and databases • Systems administration • What is your level of familiarity with the following technologies? • Java, Tomcat, Lucene • REST • RDF, SPARQL, SKOS, Sesame

  7. HIVE Technical Overview • HIVE consists of many technologies combined to provide a framework for vocabulary services • System for management of multiple controlled vocabularies in SKOS/RDF format • Java-based web services can run in any Java application server • Demonstration website (http://hive.nescent.org/) • Google Code project (http://code.google.com/p/hive-mrc/)

  8. Architecture

  9. HIVE Architecture • SPARQL: RDF query language (W3C recommendation) • REST: Web-based API and software architecture • Triple store: Database for the storage and retrieval of RDF data; supports queries using SPARQL • Sesame: Open source triple store • Elmo: Sesame API for common ontologies (OWL, Dublin Core, SKOS) • Lucene: Java-based search engine • KEA++: Algorithm and Java API for automatic subject suggestions from controlled vocabularies

  10. HIVE Functions • Conversion of vocabularies to SKOS • Rich internet application (RIA) for browsing and searching multiple SKOS vocabularies • Java API and REST application interfaces for programmatic access to multiple SKOS vocabularies • Support for natural language and SPARQL queries • Automatic keyphrase indexing using multiple SKOS vocabularies. HIVE supports two indexers: • KEA++ indexer • Basic Lucene indexer

  11. Block 2: Using HIVE as a service

  12. Using HIVE as a Service • HIVE web application • http://hive.nescent.org/ • Developed by Jose Perez-Aguera, Lina Huang • Java servlet, Google Web Toolkit (GWT) • http://code.google.com/p/hive-mrc/wiki/AboutHiveWeb • HIVE REST service • http://hive.nescent.org/rs • Developed by Duane Costa, Long-Term Ecological Research Network • http://code.google.com/p/hive-mrc/wiki/AboutHiveRestService

  13. Activity: Calling HIVE-RS • Demonstrate calling the HIVE-RS web service (Java), as sketched below
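
A minimal sketch of such a call, assuming a locally deployed service and a hypothetical /schemes resource returning XML; check the AboutHiveRestService wiki page for the actual resource URLs:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HiveRsDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: adjust host and path to your hive-rs deployment
        URL url = new URL("http://localhost:8080/hive-rs/schemes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml");
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // print the raw response body
        }
        in.close();
        conn.disconnect();
    }
}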

  14. Block 3: Install and Configure HIVE

  15. Installing and Configuring HIVE • Requirements • Java 1.6 • Tomcat (HIVE is currently using 6.x) • Detailed installation instructions: • http://code.google.com/p/hive-mrc/wiki/InstallingHiveWeb • http://code.google.com/p/hive-mrc/wiki/InstallingHiveRestService

  16. Installing and Configuring HIVE-web • Detailed installation instructions (hive-web) • http://code.google.com/p/hive-mrc/wiki/InstallingHiveWeb • Quick start (hive-web) • Download and extract Tomcat 6.x • Download and extract latest hive-web war • Download and extract sample vocabulary • Configure hive.properties and agrovoc.properties • Start Tomcat • http://localhost:8080/

  17. Properties files • hive.properties • Specifies enabled vocabularies and selected indexing algorithm • http://code.google.com/p/hive-mrc/source/browse/trunk/hive-web/war/WEB-INF/conf/hive.properties • <vocabulary>.properties • Specifies location of vocabulary databases/indexes on the local filesystem • http://code.google.com/p/hive-mrc/source/browse/trunk/hive-web/war/WEB-INF/conf/lcsh.properties
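
As a rough illustration, hive.properties pairs the set of enabled vocabularies with an indexer choice. The key names below are hypothetical, so verify them against the sample file linked above:

# Hypothetical sketch; verify key names against the sample hive.properties
# Vocabularies to enable (each needs its own <vocabulary>.properties)
hive.schemes = agrovoc, lcsh
# Indexing algorithm: KEA++ or the basic Lucene indexer
hive.KEA = true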

  18. Installing and Configuring HIVE-web from source • Detailed installation instructions (hive-web) • http://code.google.com/p/hive-mrc/wiki/DevelopingHIVE • http://code.google.com/p/hive-mrc/wiki/InstallingHiveWeb • Requirements • Eclipse IDE for J2EE Developers • Subclipse plugin • Google Eclipse Plugin • Apache Ant • Google Web Toolkit 1.7.1 • Tomcat 6.x

  19. Installing and Configuring HIVE REST Service • Detailed installation instructions (hive-rs) • http://code.google.com/p/hive-mrc/wiki/InstallingHiveRestService • Quick start (hive-rs) • Download and extract latest webapp • Download and extract sample vocabulary • Configure hive.properties • Start Tomcat

  20. Importing SKOS Vocabularies • http://code.google.com/p/hive-mrc/wiki/ImportingVocabularies • Note memory requirements for each vocabulary • http://code.google.com/p/hive-mrc/wiki/HIVEMemoryUsage • java -Xmx1024m -Djava.ext.dirs=/path/to/hive/lib edu.unc.ils.mrc.hive.admin.AdminVocabularies [/path/to/hive/conf/] [vocabulary] [train]

  21. Block 4: Using the HIVE Core Library

  22. HIVE Core Interfaces

  23. HIVE Core Packages

  24. edu.unc.ils.hive.api • SKOSServer: • Provides access to one or more vocabularies • SKOSSearcher: • Supports searching across multiple vocabularies • SKOSTagger: • Supports tagging/keyphrase extraction across multiple vocabularies • SKOSScheme: • Represents an individual vocabulary (location of vocabulary on file system)

  25. SKOSServer • SKOSServer is the top-level class used to initialize the vocabulary server. • Reads the hive.properties file and initializes the SKOSScheme (vocabulary management), SKOSSearcher (concept searching), SKOSTagger (indexing) instances based on the vocabulary configurations. • edu.unc.ils.mrc.hive.api.SKOSServer • TreeMap<String, SKOSScheme> getSKOSSchemas(); • SKOSSearcher getSKOSSearcher(); • SKOSTagger getSKOSTagger(); • String getOrigin(QName uri);
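
A minimal initialization sketch. The SKOSServerImpl class and its one-argument constructor are assumptions based on the hive-core implementation packages, so verify them against the source:

import edu.unc.ils.mrc.hive.api.SKOSServer;
import edu.unc.ils.mrc.hive.api.impl.elmo.SKOSServerImpl;

public class ServerDemo {
    public static void main(String[] args) {
        // Assumed constructor argument: path to hive.properties
        SKOSServer server = new SKOSServerImpl("/path/to/hive/conf/hive.properties");
        // List the configured vocabularies
        System.out.println(server.getSKOSSchemas().keySet());
    }
}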

  26. SKOSSearcher • Supports searching across one or more configured vocabularies • Keyword queries using Lucene, SPARQL queries using OpenRDF/Sesame • edu.unc.ils.mrc.hive.api.SKOSSearcher • searchConceptByKeyword(keyword) • searchConceptByURI(uri, lp) • searchChildrenByURI(uri, lp) • SPARQLSelect()
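
A keyword-search sketch building on the initialization above. The single-string search signature and the getPrefLabel() accessor are assumptions to check against the source:

import java.util.List;
import edu.unc.ils.mrc.hive.api.*;
import edu.unc.ils.mrc.hive.api.impl.elmo.SKOSServerImpl;

public class SearchDemo {
    public static void main(String[] args) {
        SKOSServer server = new SKOSServerImpl("/path/to/hive/conf/hive.properties");
        SKOSSearcher searcher = server.getSKOSSearcher();
        // Assumed: keyword search takes the query string and returns matching concepts
        List<SKOSConcept> hits = searcher.searchConceptByKeyword("water");
        for (SKOSConcept concept : hits) {
            System.out.println(concept.getPrefLabel()); // assumed accessor
        }
    }
}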

  27. SKOSTagger • Keyphrase extraction using multiple vocabularies • Indexer selection depends on the setting in hive.properties: “dummy” or “KEA” • edu.unc.ils.mrc.hive.api.SKOSTagger • List<SKOSConcept> getTags(String text, List<String> vocabularies, SKOSSearcher searcher);
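
A tagging sketch using the getTags signature shown above; the vocabulary name and the getPrefLabel() accessor are illustrative assumptions:

import java.util.Arrays;
import java.util.List;
import edu.unc.ils.mrc.hive.api.*;
import edu.unc.ils.mrc.hive.api.impl.elmo.SKOSServerImpl;

public class TaggerDemo {
    public static void main(String[] args) {
        SKOSServer server = new SKOSServerImpl("/path/to/hive/conf/hive.properties");
        SKOSTagger tagger = server.getSKOSTagger(); // "dummy" or "KEA", per hive.properties
        String text = "Agricultural water management and irrigation efficiency.";
        List<SKOSConcept> tags = tagger.getTags(text, Arrays.asList("agrovoc"),
                server.getSKOSSearcher());
        for (SKOSConcept concept : tags) {
            System.out.println(concept.getPrefLabel()); // assumed accessor
        }
    }
}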

  28. SKOSScheme • Represents an individual vocabulary, based on settings in <vocabulary>.properties • Supports querying of statistics about each vocabulary (number of concepts, number of relationships, etc.)

  29. Activity • Demonstrate a simple Java class that allows the user to query for a given term • Demonstrate a simple Java class that can read a text file and call the tagger (see the SKOSSearcher and SKOSTagger sketches above)

  30. Block 5: Understanding HIVE Internals

  31. Architecture

  32. Data Directory Layout • /usr/local/hive/hive-data • vocabulary/ • vocabulary.rdf: SKOS RDF/XML • vocabularyAlphaIndex: Serialized map • vocabularyH2: H2 database (used by KEA) • vocabularyIndex: Lucene index • vocabularyKEA: KEA model and training data • vocabularyStore: Sesame/OpenRDF store • topConceptIndex: Serialized map of top concepts

  33. KeywordSearch

  34. Indexing

  35. HIVE Internals: Data Models • Lucene Index: Index of the SKOS vocabulary (view with Luke) • Sesame/OpenRDF Store: Native/Sail RDF repository for the vocabulary • KEA++ Model: Serialized KEAFilter object • H2 Database: Embedded DB containing the SKOS vocabulary in the format used by KEA (can be queried using the H2 command line) • Alpha Index: Serialized map of concepts • Top Concept Index: Serialized map of top concepts

  36. HIVE Internals: HIVE Web • GWT Entry Points: • HomePage • ConceptBrowser • Indexer • Servlets • VocabularyService: Singleton vocabulary server • FileUpload: Handles the file upload for indexing • ConceptBrowserServiceImpl • IndexerServiceImpl

  37. HIVE Internals: HIVE-RS • Java API for RESTful Web Services (JAX-RS) • Classes • ConceptsResource • SchemesResource
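
To give a feel for the JAX-RS style, here is an illustrative resource class; the real ConceptsResource and SchemesResource in the hive-mrc repository differ in their details:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

// Illustrative only: a resource in the general style of SchemesResource
@Path("/schemes")
public class SchemesResourceSketch {

    @GET
    @Produces("application/xml")
    public String getSchemes() {
        // A real implementation would query the vocabulary server;
        // this returns a canned response for illustration
        return "<schemes><scheme>agrovoc</scheme></schemes>";
    }
}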

  38. Block 6: HIVE Supporting Technologies

  39. HIVE supporting technologies • Lucene http://lucene.apache.org • Sesame http://www.openrdf.org/ • KEA http://www.nzdl.org/Kea/ • H2 http://www.h2database.com/ • GWT http://code.google.com/webtoolkit/

  40. Activity • Explore Lucene index with Luke • http://luke.googlecode.com/ • Explore Sesame store with SPARQL • http://www.xml.com/pub/a/2005/11/16/introducing-sparql-querying-semantic-web-tutorial.html • http://www.cambridgesemantics.com/2008/09/sparql-by-example/
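
For example, a basic SPARQL query that lists concepts and their preferred labels, using the standard SKOS namespace, can be run against a vocabulary's Sesame store:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label
WHERE {
  ?concept skos:prefLabel ?label .
}
LIMIT 20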

  41. Block 7: Customizing HIVE

  42. Obtaining Vocabularies • Several vocabularies can be freely downloaded; some require licensing • HIVE Core includes converters for each of the supported vocabularies • List of HIVE vocabularies: http://code.google.com/p/hive-mrc/wiki/VocabularyConversion

  43. Converting Vocabularies to SKOS • Additional information • http://code.google.com/p/hive-mrc/wiki/VocabularyConversion • Each vocabulary has different requirements

  44. Converting Vocabularies to SKOS • A Method to Convert Thesauri to SKOS (van Assem et al.) • Prolog implementation • IPSV, GTAA, MeSH • http://thesauri.cs.vu.nl/eswc06/ • Converting MeSH to SKOS for HIVE • Java SAX-based parser • http://code.google.com/p/hive-mrc/wiki/MeshToSKOS

  45. LTER Sample Service http://scoria.lternet.edu:8080/lter-hive-prototypes

  46. Block 8: KEA++

  47. About KEA++ • http://www.nzdl.org/Kea/ • Algorithm and open-source Java library for extracting keyphrases from documents using SKOS vocabularies • Developed by Alyona Medelyan (KEA++), based on earlier work by Ian Witten (KEA), of the Digital Libraries and Machine Learning Lab at the University of Waikato, New Zealand • Problem: How can we automatically identify the topic of documents?

  48. Automatic Indexing • Medelyan, O. and Witten, I.H. (2008). “Domain independent automatic keyphrase indexing with small training sets.” Journal of the American Society for Information Science and Technology, 59(7): 1026-1040. • Free keyphrase indexing (KEA): significant terms in a document are determined based on intrinsic properties (e.g., frequency and length) • Keyphrase indexing (KEA++): terms from a controlled vocabulary are assigned based on intrinsic properties • Controlled indexing/term assignment: documents are classified based on content that corresponds to a controlled vocabulary, e.g., Pouliquen, Steinberger, and Ignat (2003)

  49. KEA++ at a Glance • KEA++ uses a machine learning approach to keyphrase extraction • Two stages: • Candidate identification: Find terms that relate to the document’s content • Keyphrase selection: Uses a model to identify the most significant terms.

  50. KEA++: Candidate identification • Parse tokens based on whitespace and punctuation • Create word n-grams up to the longest term in the controlled vocabulary • Stem to grammatical root (Porter) • Stem terms in the vocabulary (Porter) • Replace non-descriptors with descriptors using vocabulary relationships • Match stemmed n-grams to the vocabulary (see the sketch below)
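
A toy Java sketch of this candidate-identification flow, not the actual KEA++ code: a stand-in stem() replaces the Porter stemmer, and the controlled vocabulary is a two-entry map:

import java.util.HashMap;
import java.util.Map;

public class CandidateSketch {

    // Placeholder for Porter stemming
    static String stem(String token) {
        return token.toLowerCase();
    }

    public static void main(String[] args) {
        // Tiny stand-in vocabulary: stemmed term -> descriptor
        Map<String, String> vocab = new HashMap<>();
        vocab.put("water management", "Water management");
        vocab.put("irrigation", "Irrigation");

        String text = "Improved water management reduces irrigation losses.";
        // Parse tokens on whitespace and punctuation
        String[] tokens = text.split("[\\s\\p{Punct}]+");

        int maxLen = 2; // longest vocabulary term, in words
        for (int i = 0; i < tokens.length; i++) {
            StringBuilder gram = new StringBuilder();
            for (int n = 0; n < maxLen && i + n < tokens.length; n++) {
                if (n > 0) gram.append(' ');
                gram.append(stem(tokens[i + n]));
                // Match the stemmed n-gram against the vocabulary
                String candidate = gram.toString();
                if (vocab.containsKey(candidate)) {
                    System.out.println("Candidate: " + vocab.get(candidate));
                }
            }
        }
    }
}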
