Making DB and IR (socially) meaningful


Presentation Transcript


  1. Making DB and IR (socially) meaningful Sihem Amer-Yahia, Human Social Dynamics Dagstuhl 03/10/2008

  2. Disclaimers • No XML • No querying • No religion • Lots of ranking • Millions of people with different opinions • A hint of DB and IR

  3. Abstract Collaborative tagging and rating sites constitute a unique opportunity to leverage implicit and explicit social ties between users in search and recommendations. In the first part of the talk, we explore different ranking semantics which account for content popularity within a network, thereby going beyond traditional query relevance. We show that the accuracy of ranking is tied to user behavior. In the second part of the talk, we describe a set of novel questions that arise under the new ranking semantics. The first question is to revisit data processing in the presence of power-law distributions and tag sparsity, and indexing in light of different user behaviors. We then explore different ways of explaining recommendations, followed by a discussion on diversifying results. Diversity is a well-known problem in recommender systems, referred to as over-specialization, and in Web search. We propose to leverage explanations to achieve diversity, on the basis that the same users tend to endorse similar content. Finally, we note that different topics (e.g., sports, photography) are popular at different points in time and argue for time-aware recommendations. We conclude with a brief description of the infrastructure of Royal Jelly, a scalable social recommender system built on top of Hadoop.

  4. Outline • Motivation • Ranking • Almost-new questions • Royal Jelly • Wilder ideas

  5. Recommendations (Amazon): but who are these people?

  6. Explaining recommendations in x.qui.site • Leveraging user-user similarities • Multiple recommendation methods: friends network, shared-bookmark-interest, shared-tag-interest, shared-bookmark-tag-interest • Multiple recommendation types: bookmarks, users, tags
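
  The similarity notions behind these methods are not spelled out in the transcript; as a minimal sketch, here is one plausible shape for shared-bookmark-interest and shared-tag-interest as set overlaps (the data layout and function names are assumptions for illustration, not x.qui.site's actual code):

    # Sketch: user-user similarities of the kind x.qui.site could use.
    # Data layout and names are assumptions, not the real implementation.

    def jaccard(a: set, b: set) -> float:
        """Jaccard overlap of two sets; 0.0 when both are empty."""
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def shared_bookmark_interest(bookmarks: dict, u: str, v: str) -> float:
        """Similarity from the URLs two users both bookmarked."""
        return jaccard(bookmarks.get(u, set()), bookmarks.get(v, set()))

    def shared_tag_interest(tags: dict, u: str, v: str) -> float:
        """Similarity from the tags two users both applied."""
        return jaccard(tags.get(u, set()), tags.get(v, set()))

    # Example: two users who share one of two bookmarks.
    bookmarks = {"u1": {"url_a", "url_b"}, "u2": {"url_b"}}
    print(shared_bookmark_interest(bookmarks, "u1", "u2"))  # 0.5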

  7. Yahoo! Movies now

  8. Reviewers' biases in Yahoo! Movies • Leveraging item-item similarities • Socially Meaningful Attribute Collections (SMACs): sets of items which are easy to label and serve as a socially meaningful reference set • Adventure movies starring Johnny Depp • Woody Allen comedies • Scary movies from the 80's • Moderate French restaurants in Southern CA • Similarities between movies are defined based on their SMACs

  9. Social Context Heuristic Recommenders [user-item rating matrix figure] • Content / item-based: discover items similar to i2 (seed items) and see how u2 has rated them • Collaborative / user-based: discover users similar to u2 (seed users) and see how they rate i2 • Fusion / filterbots: leverage both similar items and similar users
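
  The slide names the two heuristics but not a formula; the following toy sketch shows both directions on a small rating matrix (the similarity and weighting choices are assumptions for illustration, not the talk's method):

    # Sketch: item-based vs. user-based prediction on a toy rating matrix
    # (1-5 scale). Similarity measures here are crude stand-ins.

    ratings = {  # user -> {item: rating}
        "u1": {"i1": 5, "i2": 4},
        "u2": {"i1": 5},            # predict u2's rating of i2
        "u3": {"i1": 1, "i2": 2},
    }

    def user_sim(a, b):
        """User-user similarity: rating agreement on co-rated items."""
        common = set(ratings[a]) & set(ratings[b])
        if not common:
            return 0.0
        return sum(1.0 - abs(ratings[a][i] - ratings[b][i]) / 4 for i in common) / len(common)

    def user_based(u, i):
        """Collaborative: weight other users' ratings of i by similarity to u."""
        peers = [(user_sim(u, v), r[i]) for v, r in ratings.items() if v != u and i in r]
        total = sum(s for s, _ in peers)
        return sum(s * r for s, r in peers) / total if total else None

    def item_sim(i, j):
        """Item-item similarity: rating agreement among users who rated both."""
        both = [u for u in ratings if i in ratings[u] and j in ratings[u]]
        if not both:
            return 0.0
        return sum(1.0 - abs(ratings[u][i] - ratings[u][j]) / 4 for u in both) / len(both)

    def item_based(u, i):
        """Content/item-based: weight u's own ratings of items similar to i."""
        rated = [(item_sim(i, j), r) for j, r in ratings[u].items() if j != i]
        total = sum(s for s, _ in rated)
        return sum(s * r for s, r in rated) / total if total else None

    print(user_based("u2", "i2"), item_based("u2", "i2"))  # 4.0 5.0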

  10. Outline • Motivation • Ranking • Almost-new questions • Royal Jelly • Wilder ideas

  11. New ranking semantics • Collaborative tagging/reviewing sites contain a lot of high-quality user-generated content: Flickr, YouTube, del.icio.us, Yahoo! Movies • Users need help to sift through the large number of available items • Ranking is not only about relevance (in the traditional Web sense) but also about people whose opinion matters

  12. Data model • Items: photos in Flickr, movies in Y!Movies, URLs in del.icio.us • Users: seekers or taggers • Tagging/rating/reviewing: endorsements from users • ∀u ∈ Taggers, Items(u) = {i ∈ Items | ∃t Tagged(u, i, t)} • Taggers(i, t) = {v | Tagged(v, i, t)} • Network: implicit and explicit social links • ∀u ∈ Seekers, Network(u) = {v ∈ Taggers | Link(u, v, w)} • Flickr friends, people with similar movie tastes, the del.icio.us network

  13. Search • Given a seeker s and a query Q (set of tags), return items which are most relevant to Q and most popular in s's network • The score combines a per-tag popularity function f with an across-tag aggregation g; f and g are monotone, assume f = count, g = sum
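
  The scoring function itself appeared on the slide as a figure; a reconstruction consistent with the data model of slide 12 and with f = count, g = sum (a sketch, not necessarily the talk's exact notation):

    score(i, s, Q) = g_{t ∈ Q} f(Network(s) ∩ Taggers(i, t)) = Σ_{t ∈ Q} |Network(s) ∩ Taggers(i, t)|

  i.e., an item earns one point for each (tag of Q, endorser in s's network) pair.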

  14. Hotlists • Evaluate different hotlist generation methods in del.icio.us to see how well they predict users' tagging actions • 116,177 users who tagged 175,691 distinct URLs using 903 tags, for a total of 2,322,458 tagging actions over 1 month • Each method is defined by its seed and scope and returns the 10 best-ranked items

  15. People who matter • Scopes: friends, url-interest, tag-url-interest • Coverage: overlap of the hotlist with u's tagging actions, averaged over users in the scope
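
  The slide defines coverage only in prose; one plausible formalization (the exact normalization is an assumption, not given in the talk):

    coverage(method) = (1 / |Scope|) · Σ_{u ∈ Scope} |Hotlist(u) ∩ Items(u)| / |Hotlist(u)|

  i.e., for each user in the method's scope, the fraction of the 10 recommended items (per slide 14) that the user actually tagged, averaged over the scope.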

  16. Coverage [bar chart comparing the methods: 8.6%, 42.9%, 61%, 81.7%]

  17. Outline • Motivation • Ranking • Almost-new questions: pre-processing & indexing; explanation: why a recommendation; diversity: be innovative, stay relevant; time-awareness: what matters when • Royal Jelly • Wilder ideas

  18. Pre-processing • Tags are sparse and may mean different things: co-occurrence analysis, association rules, ontologies, EM • Tails are long, very long: cut tails? average among very different users?
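
  As an illustration of the co-occurrence analysis mentioned above, a minimal sketch (the data layout is an assumption): tags that co-occur with different neighbor sets hint at different senses.

    # Sketch: tag co-occurrence counts as a first step toward
    # disambiguating sparse, ambiguous tags. Layout is an assumption.
    from collections import Counter
    from itertools import combinations

    # tagging actions as (user, item, tag) triples
    actions = [
        ("u1", "url1", "apple"), ("u1", "url1", "fruit"),
        ("u2", "url2", "apple"), ("u2", "url2", "mac"),
    ]

    # tags applied together to the same item hint at a tag's sense
    tags_per_item = {}
    for _, item, tag in actions:
        tags_per_item.setdefault(item, set()).add(tag)

    cooc = Counter()
    for tags in tags_per_item.values():
        for a, b in combinations(sorted(tags), 2):
            cooc[(a, b)] += 1

    # "apple" co-occurs with both "fruit" and "mac": two senses
    print(cooc.most_common())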

  19. Social Meaningfulness in Y! Movies

  20. Indexing • Hotlists: global (1 inverted list), global-tag (900 lists, 1 list per popular tag), friends, url-interest, tag-url-interest (1 list per user) • Search: 1 list per (user, keyword) pair, or 1 list per group of similar users • Cluster indices based on common user behavior • But behavior does change
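
  A sketch of the middle ground between one global list and one list per (user, keyword) pair: one inverted list per (user cluster, tag). The cluster assignment and scoring here are assumptions for illustration:

    # Sketch: one inverted list per (user cluster, tag).
    cluster_of = {"u1": "c1", "u2": "c1", "u3": "c2"}  # from behavior clustering

    index = {}  # (cluster, tag) -> {item: count within cluster}

    def add_tagging(user, item, tag):
        key = (cluster_of[user], tag)
        postings = index.setdefault(key, {})
        postings[item] = postings.get(item, 0) + 1

    def lookup(user, tag, k=10):
        """Top items for this tag, as tagged by the user's cluster."""
        postings = index.get((cluster_of[user], tag), {})
        return sorted(postings.items(), key=lambda p: -p[1])[:k]

    add_tagging("u1", "url_a", "python")
    add_tagging("u2", "url_a", "python")
    add_tagging("u3", "url_b", "python")
    print(lookup("u1", "python"))  # [('url_a', 2)] -- scoped to cluster c1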

  21. Explanation • Users relate to social biases and influences • What to display? all influencers (does not scale), top influencers, or the distribution of opinions among influencers • "80% of your friends bookmarked this link" • "this reviewer rates this movie better than 40% of all reviewers" • How to display it? e.g., a natural-language pattern, a visual pattern • Some relationship to DB annotations
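
  A sketch of how the "80% of your friends" style message could be computed from the data model (names and layout are assumptions for illustration):

    # Sketch: distribution-of-opinions explanation over a user's network.
    def friends_explanation(network, bookmarked, user, item):
        """Share of user's network that endorsed the item, as a message."""
        friends = network.get(user, set())
        if not friends:
            return None
        endorsing = [v for v in friends if item in bookmarked.get(v, set())]
        pct = 100 * len(endorsing) // len(friends)
        return f"{pct}% of your friends bookmarked this link"

    network = {"u1": {"u2", "u3", "u4", "u5", "u6"}}
    bookmarked = {v: {"url_x"} for v in ["u2", "u3", "u4", "u5"]}
    print(friends_explanation(network, bookmarked, "u1", "url_x"))
    # -> "80% of your friends bookmarked this link"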

  22. Diversity • Well-known problem in recommender systems (over-specialization) and in IR (Web search) • In recommendations: stay as close as possible to the user's interests, but not too close (Woody Allen comedies; restaurants serving Chinese in the East Village in NYC) • Post-processing based on items' objective attributes: many possible top-k sets, pick the most diverse • Explanation-based diversity: the same people (items) recommend the same items; does not require objective attributes; independent of the recommendation method
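
  A sketch of explanation-based diversification under the premise above, that the same people endorse the same items: greedily keep items whose endorser sets overlap little with already-picked ones (the greedy rule and threshold are assumptions, not the talk's algorithm):

    # Sketch: greedy explanation-based diversification of a top-k list.
    def overlap(a, b):
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def diversify(candidates, k):
        """candidates: (item, relevance, endorsers), sorted by relevance."""
        picked = []
        for item, rel, endorsers in candidates:
            if all(overlap(endorsers, e) < 0.5 for _, _, e in picked):
                picked.append((item, rel, endorsers))
            if len(picked) == k:
                break
        return [item for item, _, _ in picked]

    candidates = [
        ("url_a", 0.9, {"u1", "u2", "u3"}),
        ("url_b", 0.8, {"u1", "u2"}),       # same endorsers as url_a: skipped
        ("url_c", 0.7, {"u7", "u8"}),
    ]
    print(diversify(candidates, 2))  # ['url_a', 'url_c']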

  23. Time-awareness • Recommender systems focus on the most recent (hot) items • Recovering old URLs in del.icio.us: some URLs are tagged heavily for a certain period and then tagging slows down; how to find those worth recovering? • Anticipating new URLs: new URLs come into the system, often tagged by very few initial users; how to detect those with potential? • Topic grouping and time patterns are key: event-driven activity (election) vs. consistent activity (photography); utilizing per-topic time patterns (a sketch follows the two time-series slides below)

  24. [Time-series chart] Posts with tag "photography": consistent time pattern • Average: 2948, STDEV: 533 • Annotated: New Year, weekends

  25. [Time-series chart] Posts with tag "election": event-driven tagging • Average: 240, STDEV: 105 • Annotated spikes: Iowa, New Hampshire, Michigan, Florida, Thompson out, Richardson out
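
  Those average/STDEV figures suggest a simple test for separating the two patterns; as a sketch (the coefficient-of-variation threshold is an assumption, not from the talk):

    # Sketch: classify a tag's time pattern from the stats on the two
    # slides above. Threshold 0.3 is an illustrative assumption.
    def pattern(avg, stdev, threshold=0.3):
        """High relative variation suggests event-driven tagging."""
        return "event-driven" if stdev / avg > threshold else "consistent"

    print(pattern(2948, 533))  # photography -> 'consistent'   (CV ~ 0.18)
    print(pattern(240, 105))   # election    -> 'event-driven' (CV ~ 0.44)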

  26. Outline • Motivation • Ranking • Almost-new questions • Royal Jelly • Wilder ideas

  27. Royal Jelly

  28. [Architecture diagram] del.icio.us backup database (research9, MySQL) → Extract → Hadoop/Pig-based processing: distributed analysis and index/view generation, daily analysis over a window of several months' worth of data → Load → quicknever database (MySQL), backing Explanation and Diversity
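
  The Pig scripts themselves are not shown in the talk; as a rough stand-in, here is the shape of one such daily analysis (counting tagging actions per URL per day) written as plain map/reduce functions in Python, with every name an assumption:

    # Sketch: the shape of a daily analysis job, map/reduce style.
    from collections import defaultdict

    def map_action(action):
        """(user, url, tag, day) -> key/value pairs for the reducer."""
        user, url, tag, day = action
        yield (url, day), 1

    def reduce_counts(pairs):
        """Sum values per key, as the shuffle/reduce phase would."""
        counts = defaultdict(int)
        for key, value in pairs:
            counts[key] += value
        return dict(counts)

    actions = [
        ("u1", "url_a", "python", "2008-03-01"),
        ("u2", "url_a", "code",   "2008-03-01"),
        ("u3", "url_b", "news",   "2008-03-02"),
    ]
    pairs = (kv for a in actions for kv in map_action(a))
    print(reduce_counts(pairs))
    # {('url_a', '2008-03-01'): 2, ('url_b', '2008-03-02'): 1}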

  29. Wilder ideas • Automatic user assessments • Users are willing to create new content • And rate it! • Let them rate recommendations • And help us define evaluation benchmarks • Make DB social! • Social-awareness in databases and query languages • Different DB organizations • Different query semantics • SQL: a Social Query Language? • Who thinks like me? Who does not?
