
Benchmarking Textual Annotation Tools for the Semantic Web

In this paper, Diana Maynard from the University of Sheffield discusses the essential criteria for benchmarking textual annotation tools in the context of the Semantic Web. Key criteria include performance, scalability, usability, flexibility, and interoperability, enabling users to make informed decisions when selecting the best tools for their specific needs. The paper also explores the trade-offs between scalability and response time, and between performance and coverage, alongside the requirements of different user groups. Notable tools reviewed include Magpie, KIM, OntoMat, GATE, and MnM.
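To make the performance criterion concrete, benchmarking an annotation tool typically means scoring its output against a gold-standard corpus while also recording how long the run takes. The sketch below is a minimal illustration of that idea, not the evaluation method of any of the reviewed tools; the annotation tuple format and the annotate() callable are hypothetical assumptions introduced here for the example.

```python
# Minimal sketch: score a tool's annotations against a gold standard
# (precision, recall, F1) and time the run as a crude proxy for response time.
# The (doc_id, start, end, type) tuple and the annotate() callable are
# illustrative assumptions, not the API of Magpie, KIM, OntoMat, GATE or MnM.
import time
from typing import Callable, Iterable, Set, Tuple

# An annotation is (doc_id, start_offset, end_offset, semantic_type).
Annotation = Tuple[str, int, int, str]

def score(predicted: Set[Annotation], gold: Set[Annotation]) -> dict:
    """Strict-match precision, recall and F1 over two annotation sets."""
    true_pos = len(predicted & gold)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

def benchmark(annotate: Callable[[str], Set[Annotation]],
              corpus: Iterable[Tuple[str, Set[Annotation]]]) -> dict:
    """Run a tool over a gold-annotated corpus; report quality and speed."""
    predicted, gold = set(), set()
    start = time.perf_counter()
    for text, gold_annotations in corpus:
        predicted |= annotate(text)
        gold |= gold_annotations
    elapsed = time.perf_counter() - start
    result = score(predicted, gold)
    result["seconds"] = elapsed  # the response-time side of the tradeoff
    return result
```

Measuring quality and wall-clock time in the same pass is one simple way to surface the scalability-versus-response-time and performance-versus-coverage tradeoffs the paper discusses.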


Presentation Transcript


  1. Benchmarking Textual Annotation Tools for the Semantic Web
     Diana Maynard, University of Sheffield, UK
     • Motivation
       Criteria for benchmarking should include not just performance but also
       • scalability
       • usability
       • flexibility
       • interoperability
       so that users can make an informed decision about the best product for their needs.
     • Requirements
       Requirements for different users
     • Results
       Tradeoffs:
       1. Scalability vs response time
       2. Performance vs coverage
       Suitable tools for different users
     • Tools
       Magpie, KIM, OntoMat, GATE, MnM
