
Query Relevance Feedback and Ontologies


Presentation Transcript


  1. Query Relevance Feedback and Ontologies How to Make Queries Better

  2. Overview • Ranked Retrieval • Relevance Feedback • The Semantic Web and Ontologies

  3. Typical Web Retrieval Process [flow diagram: Need → Keyword Query → results, refined by Link Following and “More Like this”]

  4. Ranked Retrieval How can we present the “best” item to the user first?

  5. What are we trying to do in IR? • Find the document that is most similar to the query • Ranking interpretation: • show the best (most similar) document first • then the next-best document • and so on

  6. Bag of Words Model of Text • Ignore the order of words in the document • Just record whether a word appears in a document
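
A minimal sketch of the binary bag-of-words representation described above (the function and sample text are illustrative, not from the slides):

```python
def bag_of_words(text: str) -> set[str]:
    """Record only which words appear; word order and repetition are ignored."""
    return set(text.lower().split())

print(bag_of_words("The cat sat on the mat"))
# {'the', 'cat', 'sat', 'on', 'mat'}
```

Weighted variants (used by the similarity measures on the next slide) keep word counts rather than a bare presence set.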

  7. Similarity Measures • Cosine formula (reproduced below) • Measures how similar a document is to a query or to another document • See Kowalski, Chapter 7
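
The formula itself does not survive in the transcript; the standard cosine measure between a query vector q and a document vector d, with term weights q_i and d_i, is:

```latex
\mathrm{sim}(q, d) = \frac{\vec{q} \cdot \vec{d}}{\lVert \vec{q} \rVert \, \lVert \vec{d} \rVert}
                   = \frac{\sum_i q_i d_i}{\sqrt{\sum_i q_i^2}\,\sqrt{\sum_i d_i^2}}
```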

  8. Similarity as Ranking • Use the Similarity Measure to rank the documents
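
A minimal sketch of ranking by cosine similarity, using term-frequency weights over toy documents (all names here are illustrative):

```python
import math
from collections import Counter

def cosine(q: Counter, d: Counter) -> float:
    """Cosine similarity between two term-weight vectors."""
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) \
         * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

docs = {
    "d1": Counter("the cat sat on the mat".split()),
    "d2": Counter("dogs and cats make good pets".split()),
}
query = Counter("cat mat".split())

# Show the best (most similar) document first, then the next best, and so on.
ranking = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranking)  # ['d1', 'd2']
```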

  9. Relevance Feedback “More Like this” done properly

  10. Observation • The user is probably in the best position to judge the relevance of a document • Likewise, the user is probably in the best position to judge which returned (highly ranked) documents are irrelevant

  11. Retrieval Process [flow diagram: Need → Analytic Query → results; “More Like this” loops back to refine the query, otherwise (“No More Like This”) the search ends]

  12. Relevance Feedback in a Nutshell • Perform an initial retrieval • Ask the user to indicate which documents are relevant/irrelevant • Add all terms from relevant documents • Remove all terms from irrelevant documents • Re-query (see the sketch below)
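
A minimal sketch of the naive add/remove scheme just described, a simplified relative of the Rocchio algorithm (function and variable names are illustrative):

```python
def feedback_query(query: set[str],
                   relevant: list[set[str]],
                   irrelevant: list[set[str]]) -> set[str]:
    """Add all terms from relevant documents, then remove every term
    that appears in an irrelevant document, and return the new query."""
    expanded = set(query)
    for doc in relevant:
        expanded |= doc      # add all terms from relevant documents
    for doc in irrelevant:
        expanded -= doc      # remove all terms from irrelevant documents
    return expanded

q = {"car", "engine"}
print(feedback_query(q, relevant=[{"car", "wheels", "seat"}],
                        irrelevant=[{"engine", "train"}]))
# {'car', 'wheels', 'seat'} -- note the removal step can drop original query terms
```

The weighted variants on the next slide avoid that last pitfall by down-weighting terms instead of deleting them outright.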

  13. Variants • Using ranking and weighting • Pseudo relevance feedback (sketched below): • use terms from all (highly ranked) retrieved documents • assumes the highly ranked documents are a homogeneous mass of relevant documents (Croft) • very helpful if very few documents were retrieved • but perpetuates errors/misunderstandings from the original query
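
A sketch of the pseudo (blind) variant: no user judgments, just assume the top-k ranked documents are relevant (k is a tuning parameter; the function name is illustrative):

```python
def pseudo_feedback(query: set[str],
                    ranked_docs: list[set[str]],
                    k: int = 3) -> set[str]:
    """Blind feedback: treat the k highest-ranked documents as relevant
    and fold all of their terms into the query; nothing is removed."""
    expanded = set(query)
    for doc in ranked_docs[:k]:
        expanded |= doc
    return expanded
```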

  14. Exercise • What are the advantages of positive feedback? • What are the advantages of negative feedback? • Which is best?

  15. Relevance Feedback Conclusion • Consistently shown to be an effective way to improve retrieval • The biggest problem is getting users to engage in the interaction, especially if no highly relevant documents are in the initially retrieved set

  16. Ontologies

  17. The Semantic Web • Introduced by Tim Berners-Lee and others in 2001 • http://www.sciam.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21 • Essentially about allowing computers and people to share the same world • Central to this communication is the notion of an ontology

  18. Ontology Definition • “To standardize semantic terms, many areas use specific ontologies, which are hierarchical taxonomies of terms describing certain knowledge topics” (Baeza-Yates & Ribeiro-Neto, 1999, p. 143) • Thesauri: ontologies for information retrieval • Entities, relations

  19. Ontology example [diagram] • Automobile (also known as: Car) • Sorts of Automobile: Hot Hatch, Drop-head Coupe • Parts of Automobile: Seat, Wheels, Engine

  20. Improving Recall and/or Precision • If you get too few documents • use a more general term in the query • e.g. “automobile” instead of “drop-head coupe” • or use an alternative term that is more common • e.g. “car” rather than “automobile” • If you get too many (overall) • use a more specific term • e.g. “hot hatch” rather than “car” • (a toy sketch follows below)
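
A toy sketch of the broadening/narrowing moves above, with the slide-19 ontology hand-encoded as lookup tables (the data structures are an illustrative assumption, not a standard API):

```python
# Hand-encoded fragment of the automobile ontology from slide 19.
broader  = {"hot hatch": "automobile", "drop-head coupe": "automobile"}
narrower = {"automobile": ["hot hatch", "drop-head coupe"]}
synonyms = {"automobile": ["car"]}

def broaden(term: str) -> str:
    """Too few documents: move up to a more general term."""
    return broader.get(term, term)

def narrow(term: str) -> list[str]:
    """Too many documents: move down to more specific terms."""
    return narrower.get(term, [term])

print(broaden("drop-head coupe"))  # 'automobile'
print(synonyms["automobile"])      # ['car'] -- a more common alternative term
print(narrow("automobile"))        # ['hot hatch', 'drop-head coupe']
```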

  21. Issues • How are thesauri different from ontologies? • Are we representing the world or words? • Is WordNet an ontology? • Are ontologies meant to be • general • universal • or for a specific purpose?

  22. Thesauri • Provide a map of a given field of knowledge: concepts, relations. • Provide a standard vocabulary for consistent indexing. • Assist users with locating terms for proper query formulation. • Ensure only one term from a synonym set is used for indexing and searching: otherwise a searcher who uses one synonym and retrieves some useful documents may think the correct term has been used and the search has been exhaustive, without knowing that there are other useful documents under other synonyms. • Provide classified hierarchies for broadening or narrowing a search if too many or too few documents are retrieved. • Retrieval based on concepts rather than words (Baeza-Yates & Ribeiro-Neto, 1999).

  23. WordNet Relations • Examples (read as: the first term stands in the named relation to the second): • Synonyms, e.g. couch / sofa / lounge • Antonyms, e.g. love / hate • Hypernyms (broader), e.g. cat is a hypernym of tabby • Hyponyms (narrower), e.g. cat is a hyponym of animal • Meronym (part-of), e.g. finger is part of a hand • Meronym (made-of), e.g. a snowflake is made of snow • (see the code sketch below)
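
These relations can be explored programmatically; a sketch using NLTK's WordNet interface (assumes NLTK is installed and the corpus fetched with nltk.download('wordnet'); the exact synsets returned may differ between WordNet versions):

```python
from nltk.corpus import wordnet as wn

cat = wn.synset("cat.n.01")
print(cat.hypernyms())        # broader terms, e.g. [Synset('feline.n.01')]
print(cat.hyponyms()[:3])     # narrower terms, e.g. domestic_cat, wildcat, ...

hand = wn.synset("hand.n.01")
print(hand.part_meronyms())   # parts of a hand, e.g. finger, palm, ...
```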

  24. WordNet Demos • See vancouver-webpages.com/wordnet • See marimba.d.umn.edu/cgi-bin/similarity.cgi

  25. Conclusions • Ranked Retrieval • similarity matching • Relevance Feedback • positive and negative feedback • The Semantic Web and Ontologies
