
Learning User Interaction Models for Predicting Web Search Result Preference



Presentation Transcript


  1. Learning User Interaction Models for Predicting Web Search Result Preference Eugene Agichtein et al. Microsoft Research SIGIR ‘06

  2. Objective • Provide a rich set of features for representing user behavior • Query-text • Browsing • Clickthrough • Aggregate the various features with RankNet
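The aggregation step uses RankNet, which learns from preference pairs by modeling P(A preferred over B) as a logistic of the score difference and minimizing cross-entropy. A minimal sketch of that pairwise loss for a single (A, B) pair (the function name and scalar-score interface are illustrative, not the paper's exact formulation):

```python
import math

def ranknet_pair_loss(score_a, score_b, a_preferred=True):
    """Pairwise cross-entropy loss for one preference pair, as in RankNet.

    P(A > B) is modeled as a logistic of the score difference; the loss is
    the cross-entropy of that probability against the observed preference.
    """
    diff = score_a - score_b
    p_ab = 1.0 / (1.0 + math.exp(-diff))  # modeled P(A preferred over B)
    target = 1.0 if a_preferred else 0.0
    eps = 1e-12  # clamp to avoid log(0)
    return -(target * math.log(max(p_ab, eps))
             + (1.0 - target) * math.log(max(1.0 - p_ab, eps)))
```

When the scores are equal the loss is ln 2 (maximum uncertainty), and it shrinks toward zero as the preferred result's score pulls ahead.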

  3. Browsing feature • Related work • Reading time has been shown to predict • interest level in news articles • ratings in recommender systems • The amount of scrolling on a page also has a strong relationship with interest

  4. Browsing feature • How are browsing features collected? • Via opt-in client-side instrumentation

  5. Browsing feature • Dwell time

  6. Browsing feature • Average & Deviation • Properties of the click event
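The average and deviation of dwell time can be aggregated per result or per query; a small sketch of that aggregation (function and feature names here are illustrative, not the paper's exact feature schema):

```python
from statistics import mean, pstdev

def dwell_time_features(dwell_times):
    """Aggregate per-visit dwell times (in seconds) into average and
    deviation features, in the spirit of the browsing features above.
    """
    if not dwell_times:
        return {"avg_dwell": 0.0, "dev_dwell": 0.0}
    return {
        "avg_dwell": mean(dwell_times),
        "dev_dwell": pstdev(dwell_times),  # population std. deviation
    }
```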

  7. Clickthrough feature • 1. Clicked vs. unclicked • Skip Above (SA) • Skip Next (SN) • Advantage • Produces preference pairs • Disadvantage • Inconsistency • Noisiness of individual clicks
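The two heuristics can be sketched directly from a ranked result list and the set of clicked results: Skip Above prefers a clicked result over every unclicked result ranked above it, and Skip Next prefers it over the unclicked result immediately below it. A minimal sketch under that reading (identifiers are illustrative):

```python
def preference_pairs(results, clicked):
    """Derive Skip Above (SA) and Skip Next (SN) preference pairs.

    results: result ids in rank order, top first.
    clicked: set of ids the user clicked.
    Returns (preferred, skipped) pairs.
    """
    pairs = []
    for i, r in enumerate(results):
        if r not in clicked:
            continue
        # Skip Above: r is preferred over every unclicked result above it
        for above in results[:i]:
            if above not in clicked:
                pairs.append((r, above))
        # Skip Next: r is preferred over the unclicked result just below it
        if i + 1 < len(results) and results[i + 1] not in clicked:
            pairs.append((r, results[i + 1]))
    return pairs
```

For example, a single click at rank 3 of four results yields three pairs: two SA pairs against the skipped results above it and one SN pair against the result below.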

  8. Clickthrough feature • 2. Position-biased

  9. Clickthrough feature

  10. Clickthrough feature

  11. Clickthrough feature • Disadvantage of SA & SN • Users may click on irrelevant pages

  12. Clickthrough feature • Disadvantage of SA & SN • Users often click on only some of the relevant pages

  13. Clickthrough feature • 3. Feature for learning

  14. Feature set

  15. Feature set

  16. Evaluation • Dataset • Randomly sampled 3,500 queries and their top 10 results • Manually rated on a 6-point scale • 75% training, 25% testing • Converted into pairwise judgments • Tied pairs removed
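Converting graded ratings into pairwise judgments while dropping ties can be sketched as follows (a minimal illustration, assuming per-URL grades on the 6-point scale; names are hypothetical):

```python
from itertools import combinations

def to_pairwise(ratings):
    """Convert per-URL graded relevance ratings into preference pairs,
    discarding tied pairs.

    ratings: dict mapping URL -> relevance grade.
    Returns (preferred_url, other_url) pairs.
    """
    pairs = []
    for (url_a, grade_a), (url_b, grade_b) in combinations(ratings.items(), 2):
        if grade_a == grade_b:
            continue  # remove tied pairs
        if grade_a > grade_b:
            pairs.append((url_a, url_b))
        else:
            pairs.append((url_b, url_a))
    return pairs
```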

  17. Evaluation • Pairwise judgment • Input • UrlA, UrlB • Output • Positive: rel(UrlA) > rel(UrlB) • Negative: rel(UrlA) ≤ rel(UrlB) • Measurement • Average query precision & recall
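With preference pairs as the unit of evaluation, per-query precision and recall reduce to set overlap between predicted and gold pairs; averaging these values over queries gives the average query precision and recall used above. A sketch under that reading (the function interface is illustrative):

```python
def pair_precision_recall(predicted, gold):
    """Precision and recall over preference pairs for one query.

    predicted, gold: sets of (preferred_url, other_url) tuples.
    """
    if not predicted:
        return 0.0, 0.0
    correct = len(predicted & gold)
    precision = correct / len(predicted)
    recall = correct / len(gold) if gold else 0.0
    return precision, recall
```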

  18. Evaluation • 1. Current • Original rank from the search engine • 2. Heuristic rules without parameters • SA, SA+N • 3. Heuristic rules with parameters • CD, CDiff, CD + CDiff • 4. Supervised learning • RankNet

  19. Evaluation

  20. Evaluation

  21. Evaluation

  22. Conclusion • Recall is not an important measurement • Heuristic rules • very low recall and low precision • Feature set • Browsing features have higher precision

  23. Discussion • Is the user interaction model better than the search engine? • Small coverage • Only pairwise judgments • Given the same training data, which is better: a traditional ranking algorithm or user interaction? • Which features are more useful?
