
Recommender System



Presentation Transcript


  1. Recommender System http://net.pku.edu.cn/~wbia 黄连恩 hle@net.pku.edu.cn 北京大学信息工程学院 (School of Information Engineering, Peking University) 12/3/2013

  2. Read any good books lately? • Santiago Ramón y Cajal (Spain) • 1906 Nobel Prize in Physiology or Medicine: "the principal representative and advocate of modern neuroscience" • "The most important problems have already been solved" • "Too much focus on applied science" • "Believing oneself to lack the ability"

  3. Outline Today What: Recommender System How: Collaborative Filtering (CF) Algorithm User-based Item-based Model-based Evaluation of recommender systems

  4. What is Recommender System?

  5. The Problem Classification? Retrieval? What other, more effective means are there?

  6. Recommendation

  7. This title is a textbook-style exposition on the topic, with its information organized very clearly into topics such as compression, indexing, and so forth. In addition to diagrams and example text transformations, the authors use "pseudo-code" to present algorithms in a language-independent manner wherever possible. They also supplement the reading with mg--their own implementation of the techniques. The mg C language source code is freely available on the Web.

  8. Personalized Recommendation

  9. Everyday Examples of Recommender Systems… Bestseller lists Top 40 music lists The "recent returns" shelf at the library Many weblogs "Read any good books lately?" .... • Common insight: personal tastes are correlated: • If Mary and Bob both like X and Mary likes Y then Bob is more likely to like Y • especially (perhaps) if Bob knows Mary

  10. Rec System: Applications E-commerce Product recommendations - Amazon Corporate Intranets Recommendation, finding domain experts, … Digital Libraries Finding pages/books people will like Medical Applications Matching patients to doctors, clinical trials, … Customer Relationship Management Matching customer problems to internal experts

  11. Recommender Systems Given a set of users and a set of items Items can be documents, products, other users … Recommend items to a user, based on: attribute information about users and items age, genre, price, … the past behavior of this user and of other users Who has viewed/bought/liked what? to help people make decisions maintain awareness

  12. Recommender systems are software applications that aim to support users in their decision-making while interacting with large information spaces. Recommender systems help overcome the information overload problem by exposing users to the most interesting items, and by offering novelty, surprise, and relevance.

  13. The Web, they say, is leaving the era of search and entering one of discovery. What's the difference? Search is what you do when you're looking for something. Discovery is when something wonderful that you didn't know existed, or didn't know how to ask for, finds you.

  14. Collaborative Filtering Algorithm

  15. Ad Hoc Retrieval and Filtering Ad hoc retrieval: the document collection stays fixed while the queries change. [Diagram: queries Q1–Q5 issued against a fixed-size collection]

  16. Ad Hoc Retrieval and Filtering Filtering: the user's information need stays fixed while documents stream in. [Diagram: a document stream filtered against User 1's and User 2's profiles]

  17. Inputs - more detail Explicit role/domain/content info: content/attributes of documents Document taxonomies Role in an enterprise Interest profiles Past transactions/behavior info from users: which docs viewed, browsing history search(es) issued which products purchased pages bookmarked explicit ratings (movies, books … ) Large space Extremely sparse

  18. The Recommendation Space Links derived from similar attributes, explicit connections (Ratings, purchases, page views, laundry lists, play lists) Links derived from similar attributes, similar content, explicit cross references Users Items User-User Links Item-Item Links Observed preferences

  19. Definitions recommender system A system that provides recommendations / predictions / opinions on items to a user Rule-based systems use manual rules to do this An item similarity/clustering system uses item links A classic collaborative filtering system uses links between users and items Commonly one has hybrid systems that use all three kinds of links above

  20. Link types User attributes-based Recommendation Male, 18-35: Recommend The Matrix Item attributes-based Content Similarity You liked The Matrix: recommend The Matrix Reloaded Collaborative Filtering People with interests like yours also liked Forrest Gump

  21. Example - behavior only [Diagram: users U1, U2 linked to the docs they viewed] U1 viewed d1, d2, d3. U2 views d1, d2. Recommend d3 to U2.

  22. Expert finding - simple example Recommend U1 to U2 as someone to talk to? [Diagram: U1 and U2 linked to docs d1, d2, d3]

  23. Simplest Algorithm: Neighbors Voting U viewed d1, d2, d5. Find the other users who viewed d1, d2, or d5. Recommend to U the docs that are most "popular" among those users. [Diagram: users U, V, W linked to docs d1, d2, d5]
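
A minimal Python sketch of this neighbor-voting idea. The toy `views` data and the user/doc names are made up for illustration; they are not from the slides.

```python
from collections import Counter

# Hypothetical view data: user -> set of viewed docs.
views = {
    "U": {"d1", "d2", "d5"},
    "V": {"d1", "d2", "d3"},
    "W": {"d5", "d3", "d4"},
}

def recommend_by_voting(target, views, top_n=1):
    """Recommend the docs most 'popular' among users who co-viewed something with target."""
    seen = views[target]
    votes = Counter()
    for user, docs in views.items():
        if user == target or not (docs & seen):
            continue                      # only neighbors who share at least one viewed doc
        for d in docs - seen:
            votes[d] += 1                 # one vote per neighbor per unseen doc
    return [d for d, _ in votes.most_common(top_n)]

print(recommend_by_voting("U", views))    # ['d3'] -- viewed by both neighbors V and W
```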

  24. Simple algorithm - shortcoming It treats all other users equally. In reality, past behavior data shows that different users resemble U to different degrees. [Diagram: users U, V, W linked to docs d1, d2, d5] How can we improve this? How do we weight each user's importance to U? User-based Nearest Neighbors

  25. Matrix View • Users-Items Matrix A (rows = users, columns = items) • Aij = 1 if user i viewed item j, = 0 otherwise. • # of items co-viewed by each pair of users = ? Given by AAᵀ.

  26. Voting Algorithm Row i of AAᵀ is a vector ri whose j-th entry is the # of items viewed by both user i and user j. riA is a vector whose k-th entry gives a weighted vote count to item k. Recommend the items with the highest vote counts.
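
A small numpy sketch of the matrix form on slides 25-26: AAᵀ gives co-view counts between users, and riA turns user i's overlaps into weighted vote counts per item. The tiny binary matrix `A` is a hypothetical example.

```python
import numpy as np

# Hypothetical users-x-items matrix: A[i, j] = 1 if user i viewed item j.
A = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1
    [0, 0, 1, 1, 1],   # user 2
])

co_views = A @ A.T           # entry (i, j): # of items viewed by both user i and user j
i = 0
r_i = co_views[i].copy()     # row vector r_i: user i's overlap with every user
r_i[i] = 0                   # drop the self-overlap so user i does not vote for itself
votes = r_i @ A              # k-th entry: overlap-weighted vote count for item k
votes[A[i] == 1] = 0         # do not re-recommend items user i already viewed
print(votes.argmax())        # index of the top-voted unseen item (2 for this toy data)
```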

  27. Add Rating to Algorithm User i gives a real-valued rating Vik for item k. Each user i then has a ratings vector vi, which is sparse with many missing values. For each pair of users i, j, compute a similarity measure wij of how much they agree.

  28. Predict user i's utility for item k As in the voting algorithm, WiV is a vector whose k-th entry sums over user i's nearest neighbors j: ∑j wijVjk. Recommend item k to user i according to this value.

  29. Similarity Measure Cosine similarity (from IR)
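
The cosine formula itself appeared only as an image on the slide; a standard statement, written for the ratings vectors vi and vj used on the surrounding slides, would be:

```latex
w_{ij} = \cos(v_i, v_j)
       = \frac{\sum_k v_{ik}\, v_{jk}}
              {\sqrt{\sum_k v_{ik}^2}\;\sqrt{\sum_k v_{jk}^2}}
```

In practice the sums run only over items that both users have rated, since the vectors are sparse.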

  30. Real data problems Each user has their own rating bias.

  31. Similarity Measure

  32. Correlation Between two random variables Mean Standard deviation Pearson's correlation, indicating the degree of linear dependence between the variables
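
The definitions on this slide were shown only as images; the standard forms are:

```latex
\mu_X = E[X], \qquad
\sigma_X = \sqrt{E\!\left[(X - \mu_X)^2\right]}, \qquad
\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y}
           = \frac{E\!\left[(X - \mu_X)(Y - \mu_Y)\right]}{\sigma_X \sigma_Y}
```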

  33. Correlation Between two random variables

  34. Discussion on Pearson Correlation • Whether two users have co-rated only a few items (on which they may agree by chance) or whether there are many items on which they agree • Significance weighting • An agreement by two users on a more controversial item has more "value" than an agreement on a generally liked item. • Inverse user frequency • Variance weighting factor

  35. Neighborhood Selection • Define a specific minimum threshold of user similarity • Limit the size to a fixed number k • 20 to 50 neighbors seems reasonable

  36. Voting Algorithm - implementation issues Computational complexity? User similarity: w(a,i) Matrix multiply K nearest neighbors Hold all rating data in memory Memory-based algorithm Scalability problem Does pre-computation of the w matrix work?

  37. User-based Nearest Neighbor Algorithm vi,j = vote of user i on item j; Ii = items for which user i has voted. Mean vote for user i: v̄i = (1/|Ii|) ∑j∈Ii vi,j. User u,v similarity is the Pearson correlation of their mean-centered votes over co-rated items, w(u,v) = ∑j (vu,j − v̄u)(vv,j − v̄v) / sqrt( ∑j (vu,j − v̄u)² · ∑j (vv,j − v̄v)² ), which avoids overestimating the similarity of users who happen to have rated a few items identically.

  38. User Nearest Neighbor Algorithm Select the set V of user u's nearest neighbors and compute u's predicted vote on item j as pu,j = v̄u + κ ∑v∈V w(u,v)(vv,j − v̄v), where κ is a normalizing factor (e.g. 1 / ∑v∈V |w(u,v)|).
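
A compact Python sketch of the user-based nearest-neighbor scheme on slides 37-38: Pearson correlation over co-rated items, then a mean-centered prediction weighted by similarity. The `ratings` data, the helper names, and the choice to normalize by the sum of absolute weights are illustrative assumptions, not the slides' exact formulation.

```python
import math

# Hypothetical ratings: user -> {item: rating}; missing keys mean "not rated".
ratings = {
    "u": {"i1": 4, "i2": 5, "i3": 1},
    "v": {"i1": 5, "i2": 4, "i3": 2, "i4": 5},
    "w": {"i1": 2, "i3": 5, "i4": 1},
}

def mean(r):
    return sum(r.values()) / len(r)

def pearson(a, b):
    """Pearson correlation of two users' mean-centered votes over co-rated items."""
    common = set(a) & set(b)
    if len(common) < 2:
        return 0.0
    ma, mb = mean(a), mean(b)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    den = math.sqrt(sum((a[i] - ma) ** 2 for i in common)) * \
          math.sqrt(sum((b[i] - mb) ** 2 for i in common))
    return num / den if den else 0.0

def predict(u, item, k=2):
    """Mean vote of u plus a weighted average of the k most similar raters' deviations."""
    neighbors = [(pearson(ratings[u], ratings[v]), v)
                 for v in ratings if v != u and item in ratings[v]]
    neighbors = sorted(neighbors, reverse=True)[:k]
    norm = sum(abs(w) for w, _ in neighbors)
    if norm == 0:
        return mean(ratings[u])
    return mean(ratings[u]) + sum(w * (ratings[v][item] - mean(ratings[v]))
                                  for w, v in neighbors) / norm

print(round(predict("u", "i4"), 2))   # predicted vote of user u on item i4
```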

  39. Item-based vs. User-based • Amazon online shop, 2003 • 29 million users • Millions of catalog items • Prediction in real time is infeasible • Item-based vs. user-based: pre-computation is much more stable for item similarity than for user similarity.

  40. Item-based Algorithm Let U be the set of users that rated both items a and b; compute the item-item similarity over U. Predict the rating of user u for a product p from u's ratings of the items most similar to p. Also limited to the k nearest neighbors (most similar items).
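
A sketch of one common item-based variant: item-item cosine similarity computed over the set U of users who rated both items, then a similarity-weighted average of the target user's own ratings on the k most similar items. The slide image may use a different weighting (adjusted cosine, which subtracts each user's mean rating, is also widely used); the `item_ratings` data is made up.

```python
import math

# Hypothetical ratings stored item-wise: item -> {user: rating}.
item_ratings = {
    "a": {"u1": 5, "u2": 3, "u3": 4},
    "b": {"u1": 4, "u2": 3, "u4": 2},
    "p": {"u2": 4, "u3": 5},
}

def item_sim(a, b):
    """Cosine similarity between items a and b over the users U who rated both."""
    U = set(item_ratings[a]) & set(item_ratings[b])
    if not U:
        return 0.0
    num = sum(item_ratings[a][u] * item_ratings[b][u] for u in U)
    den = math.sqrt(sum(item_ratings[a][u] ** 2 for u in U)) * \
          math.sqrt(sum(item_ratings[b][u] ** 2 for u in U))
    return num / den if den else 0.0

def predict_item_based(user, p, k=2):
    """Weighted average of the user's ratings on the k items most similar to p."""
    rated = [(item_sim(p, i), i) for i in item_ratings
             if i != p and user in item_ratings[i]]
    rated = sorted(rated, reverse=True)[:k]
    norm = sum(s for s, _ in rated)
    return sum(s * item_ratings[i][user] for s, i in rated) / norm if norm else None

print(round(predict_item_based("u1", "p"), 2))   # predicted rating of u1 for item p
```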

  41. Model-based Algorithm • The item-based algorithm is still memory-based • The original rating database is held in memory and used directly for generating the recommendations • Model-based • Only a precomputed or "learned" model is required to make predictions at runtime • E.g. matrix factorization / latent factor models

  42. Matrix factorization • LSI/SVD • Dimensionality reduction • Noise removal
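
A minimal latent-factor sketch using a truncated SVD in numpy. Treating unknown ratings as 0 is a simplification for illustration only; real systems mean-center the data or learn the factors with regularized optimization rather than a plain SVD of the raw matrix.

```python
import numpy as np

# Hypothetical users-x-items rating matrix (0 = unknown rating, kept for simplicity).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

k = 2                                             # number of latent factors to keep
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # rank-k reconstruction of R

print(round(R_hat[0, 2], 2))                      # predicted score for user 0 on item 2
```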

  43. Challenges of Nearest-Neighbor CF What is “the most optimal weight calculation” to use? Requires fine tuning of weighting algorithm for the particular data set What do we do when the target user has not voted enough to provide a reliable set of nearest-neighbors? One approach: use default votes (popular items) to populate matrix on items neither the target user nor the nearest-neighbor have voted on A different approach: model-based prediction using Dirichlet priors to smooth the votes Other factors include relative vote counts for all items between users, thresholding, clustering (see Sarwar, 2000)

  44. Summary of Advantages of Pure CF No expensive and error-prone user attributes or item attributes Incorporates quality and taste Want not just things that are similar, but things that are similar and good Works on any rate-able item One model applicable to many content domains Users understand it It's rather like asking your friends' opinions

  45. Evaluation

  46. Netflix Prize Netflix: an online DVD-rental company with a collection of 100,000 titles and over 10 million subscribers. They have over 55 million discs and ship 1.9 million a day, on average. A training data set of over 100 million ratings that over 480,000 users gave to nearly 18,000 movies. Submitted predictions are scored against the true grades in terms of root mean squared error (RMSE).
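
For reference, the RMSE used for scoring, over N (predicted, true) rating pairs:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(\hat{r}_i - r_i\right)^2}
```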

  47. Netflix Prize A prize of $1,000,000. A trivial algorithm got an RMSE of 1.0540; Netflix's own system, Cinematch, got an RMSE of 0.9514 on the quiz data, a 9.6% improvement. To win: improve on Cinematch by 10% on the test set. A progress prize of $50,000 is granted every year for the best result so far. By June 2007, over 20,000 teams had registered for the competition from over 150 countries. On June 26, 2009 the team "BellKor's Pragmatic Chaos", a merger of teams "Bellkor in BigChaos" and "Pragmatic Theory", achieved a 10.05% improvement over Cinematch (an RMSE of 0.8558).

  48. Measuring collaborative filtering How good are the predictions? How much of previous opinion do we need? How do we motivate people to offer their opinions?
