
Can People Collaborate to Improve the Relevance of Search Results?




  1. Can People Collaborate to Improve the Relevance of Search Results? Florian Eiteljörge, eiteljoerge@stud.uni-hannover.de

  2. Outline • Web search & social search techniques • Phase one: Study setup & results • Phase two: Study setup & results • Discussion

  3. Web Search • Search engines are heavily used on the internet • Studies indicate that 50% of web search sessions fail • Idea: use social search techniques to improve web search

  4. Social search techniques • Idea • People search for something and give implicit feedback by clicking on result items • Items that are clicked most often appear to be the most relevant, so they are ranked higher next time • Problem • Users tend to click on the top result items • Popular sites get even more popular, even when new high-quality pages would be more relevant ("rich-get-richer" phenomenon; a minimal sketch of this feedback loop follows below)
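To make the feedback loop concrete, here is a minimal Python sketch of click-count-based re-ranking (all names are hypothetical illustrations, not code from the paper):

```python
from collections import defaultdict

# Hypothetical sketch of implicit-feedback re-ranking: items that get
# clicked move up, which earns them more clicks next time.
click_counts = defaultdict(int)

def rerank(results):
    """Order results by accumulated clicks, most-clicked first."""
    return sorted(results, key=lambda url: click_counts[url], reverse=True)

def record_click(url):
    """Implicit feedback: a click is treated as a relevance vote."""
    click_counts[url] += 1

results = ["a.com", "b.com", "c.com"]
record_click("a.com")    # users tend to click the top item...
print(rerank(results))   # ...so "a.com" stays ranked first
# -> ['a.com', 'b.com', 'c.com']  (the "rich-get-richer" loop)
```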

  5. What is the paper about? • The authors had three hypotheses related to social search techniques: • H1: Users will prefer to rate results at the top of the result list, whether the results are randomized or in the order that Google presents them. • H2: Users' explicit relevance ratings are not biased by the rank in the result list [while implicit feedback is biased] • H3: For some types of queries, people's collaborative effort can produce a better ordering of search results. • To test these hypotheses, the authors developed a search engine environment that captures user responses by presenting Google's top ten results in randomized order

  6. Study setup – phase one (rating) • 145 participants were invited by mail to rate search results for their relevance • Participants could rate any number of results for preselected queries in the most frequent categories (shopping, health, technology, business, computers, arts) • Participants were free to choose the categories and queries they wanted to rate • The result items were presented in random order (a sketch of such an interface follows below) • Google-like result item layout • Relevance was measured on a 4-point scale: highly relevant, relevant, don't know, not relevant • After rating queries, each participant was asked to answer a short survey to determine how search experience affects the perception of relevance
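As an illustration of how such a study interface might randomize the presented order and collect explicit ratings, here is a short sketch (an assumption for illustration, not the authors' code):

```python
import random

# Hypothetical sketch: present the top-ten results in random order
# and collect explicit ratings on the study's 4-point scale.
SCALE = ["highly relevant", "relevant", "don't know", "not relevant"]

def present_query(top_ten):
    """Shuffle the top-ten list so ratings are not anchored to rank."""
    shuffled = top_ten[:]      # keep the original ranking intact
    random.shuffle(shuffled)
    return shuffled

def record_rating(ratings, url, label):
    """Store one participant's explicit rating for a result item."""
    assert label in SCALE
    ratings.setdefault(url, []).append(label)

ratings = {}
for url in present_query(["r1", "r2", "r3"]):   # toy top-three list
    record_rating(ratings, url, "relevant")
```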

  7. Results • Figure: for each result position, the first bar shows how often participants selected the item for rating; the second bar shows how often the item was rated highly relevant

  8. Results – phase one • Participants preferred to rate the first two items (H1 confirmed) • Participants' explicit feedback was not biased in general (H2 mostly confirmed) • Feedback for the first item was biased: it was rated highly relevant 70% of the time (even though participants were told the order was randomized)

  9. Study setup – phase two (evaluation) • 20 participants were invited to choose whether they preferred the results ranked by the explicit user feedback or the Google results • The invited participants self-identified as novice searchers • Both result lists were displayed side by side • The new ranking was created with the following formula (see the sketch below): score = 3 × highly-relevant-count + 2 × relevant-count + 1 × don't-know-count − 1 × not-relevant-count
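In code, the slide's scoring formula amounts to a weighted sum over the per-item rating counts (a minimal sketch; the variable names and sample counts are mine, not the paper's):

```python
# Weighted sum from slide 9:
# score = 3*highly_relevant + 2*relevant + 1*dont_know - 1*not_relevant
WEIGHTS = {"highly relevant": 3, "relevant": 2,
           "don't know": 1, "not relevant": -1}

def score(rating_counts):
    """Combine one item's rating counts into a single relevance score."""
    return sum(WEIGHTS[label] * count for label, count in rating_counts.items())

def user_ranking(items):
    """Re-rank result items by their aggregated score, best first."""
    return sorted(items, key=lambda item: score(items[item]), reverse=True)

items = {
    "a.com": {"highly relevant": 5, "relevant": 2, "don't know": 0, "not relevant": 1},
    "b.com": {"highly relevant": 1, "relevant": 4, "don't know": 2, "not relevant": 0},
}
print(user_ranking(items))   # -> ['a.com', 'b.com'] (scores 18 vs 13)
```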

  10. Results – phase two • In some categories the users rated result items very differently from Google, e.g. shopping (digital cameras, walking shoes), with a mean difference in ranking of 4.2 • In some categories users agreed with the Google ranking, e.g. business (Microsoft bid for Yahoo, online advertisement), with a mean difference of 0.8 • 70% of the participants rated the user-based ordering higher than the Google ordering; these participants chose to rate queries in the categories shopping, computers, and arts • The other 30% preferred the Google ranking; they chose to rate queries in the categories business and technology

  11. Conclusion • People prefer to rate the top result items • Explicit feedback is not biased in general • In some categories the Google ranking is very inconsistent with the users' ranking

  12. Discussion

  13. Presentation based on • Agrahri AK, Manickam DAT, Riedl J. Can people collaborate to improve the relevance of search results? Proceedings of the 2008 ACM Conference on Recommender Systems (RecSys '08). 2008:283-286. http://www.grouplens.org/system/files/p283-agrahri.pdf
