
Recommender Systems





Presentation Transcript


  1. Recommender Systems

  2. Recommender Systems • In many cases, users are faced with a wealth of products and information from which they can choose. • To alleviate this, many web sites help users with Recommender Systems, • which present a list of items or pages that are likely to interest them • Once the user makes a choice, a new list can be presented

  3. What Data is used to make the recommendations? • Explicit feedback • Ratings • Reviews • Auctions • Implicit feedback • Page visits • Purchase data • Browsing paths

  4. What are the types of recommendations? • Item-to-Item associations • “Pages similar to this one” • “Users who bought this book also bought X” • User-to-User associations • Which other users have similar interests? • User-to-Item associations • A user is described by their rating history • Items are described by attributes • Items are described by the ratings of other users

  5. Classification of Recommender Systems • Content-based approach • Item is described by a set of attributes • Movies: e.g. director, genre, year, actors • Documents: bag-of-words • A similarity metric defines the relationship between items • e.g. cosine similarity • Examples • “related pages” in a search engine • Google News
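
The cosine similarity mentioned on this slide can be sketched in a few lines of Python; the two bag-of-words count vectors below are hypothetical documents over a shared four-word vocabulary:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical bag-of-words counts for two documents.
doc1 = [2, 1, 0, 1]
doc2 = [1, 1, 1, 0]
print(round(cosine_similarity(doc1, doc2), 3))  # ~0.707
```

A value near 1 means the two items share most of their attributes; 0 means they have none in common.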

  6. Related Approaches Mooney and Roy (2000) • Their approach comes from the Information Retrieval (IR) field • They rely on the content of the items, and use a similarity score to match items based on their content • Burke (2000) • They use content-based recommendation • However, they allow the user to introduce explicit information about his preferences

  7. Types of Recommender Systems • Collaborative filtering • Item is described by user interactions • Matrix V of n (number of users) rows and m (number of items) columns • Elements of matrix V are the user feedback • Examples: • Rating given to item by each user • Users who viewed this item • Similarity metric between items

  8. Related Approaches Collaborative Filtering • Uses historical data gathered from other users to make the recommendation • Ex: if a user wants to rent a movie, he tends to rely on friends to recommend items that they have liked • The goal is to identify those users whose taste is predictive of the taste of a certain person, and to use their recommendations to construct an interesting list for that user
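
Finding "users whose taste is predictive" is often done with a correlation measure over co-rated items. A minimal sketch, with made-up rating data, using Pearson correlation:

```python
import math

# Hypothetical user rating dictionaries over the same items.
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 5, "B": 2, "C": 4},
    "carol": {"A": 1, "B": 5, "C": 2},
}

def pearson(u, v):
    """Pearson correlation over the items both users have rated."""
    common = set(u) & set(v)
    if len(common) < 2:
        return 0.0
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    du = math.sqrt(sum((u[i] - mu) ** 2 for i in common))
    dv = math.sqrt(sum((v[i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0

# Alice's most similar neighbor is bob; carol's taste is anti-correlated.
sims = {v: pearson(ratings["alice"], ratings[v]) for v in ratings if v != "alice"}
print(max(sims, key=sims.get))
```

Users with high positive correlation become the "friends" whose ratings are borrowed for the recommendation list.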

  9. Collaborative Filtering Models • Memory Based • Neighborhood Models • Latent Factors • Model Based • Classification • Bayesian Networks • Association Rules

  10. Memory Based Approaches • Works directly with the user data • Given a user, the system finds the most similar users to make a recommendation • There are two approaches: • Neighborhood • Latent Factor

  11. Neighborhood Approach • An item-oriented approach: the preference of a user for an item is estimated from that user’s ratings of similar items • Users are transformed to item space by viewing them as baskets of rated items: instead of comparing users to items, items are related directly to items • Pros: relies on a few significant neighborhood relations; effective at detecting very localized relationships • Cons: ignores the vast majority of ratings by a user; unable to capture the totality of weak signals in all of a user’s ratings
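
The item-oriented prediction described above can be sketched as a similarity-weighted average of the target user's own ratings of neighboring items. The toy ratings dictionary is hypothetical:

```python
import math

def item_sim(ratings, i, j):
    """Cosine similarity between two items, over users who rated both."""
    users = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not users:
        return 0.0
    dot = sum(ratings[u][i] * ratings[u][j] for u in users)
    ni = math.sqrt(sum(ratings[u][i] ** 2 for u in users))
    nj = math.sqrt(sum(ratings[u][j] ** 2 for u in users))
    return dot / (ni * nj)

def predict(ratings, user, item):
    """Weight this user's own ratings by each item's similarity to `item`."""
    sims = [(item_sim(ratings, item, j), r)
            for j, r in ratings[user].items() if j != item]
    num = sum(s * r for s, r in sims if s > 0)
    den = sum(s for s, r in sims if s > 0)
    return num / den if den else None

# Toy data: user -> {item: rating}; carol has not rated item "C" yet.
R = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 2, "C": 5},
    "carol": {"A": 1, "B": 5},
}
print(predict(R, "carol", "C"))
```

Note that the prediction uses only carol's own ratings, weighted by item-item similarities, which is exactly the "localized relationships" strength (and the "ignores most ratings" weakness) listed on the slide.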

  12. Latent Factor Models • Transform both items and users to the same latent factor space, thus making them directly comparable • The latent space tries to explain ratings by characterizing both products and users on factors automatically inferred from user feedback • Pros: effective at estimating the overall structure that relates simultaneously to most or all items • Cons: poor at detecting strong associations among a small set of closely related items

  13. Singular Value Decomposition • Decompose the ratings matrix R into a coefficients matrix UΣ and a factors matrix V such that the Frobenius norm ‖R − UΣVᵀ‖ is minimized • U = eigenvectors of RRᵀ (N×N) • V = eigenvectors of RᵀR (M×M) • Σ = diag(σ1, …, σM), where the σi² are the eigenvalues of RRᵀ
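
The decomposition on this slide can be sketched with NumPy's `linalg.svd`; the small fully observed ratings matrix is hypothetical:

```python
import numpy as np

# Hypothetical fully observed ratings matrix R (3 users x 3 items).
R = np.array([[5., 3., 4.],
              [4., 2., 5.],
              [1., 5., 2.]])

# Full SVD: R = U @ diag(s) @ Vt.  Keeping only the top-k singular
# values gives the best rank-k approximation of R in the Frobenius
# norm (Eckart-Young theorem).
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The Frobenius error of the rank-k fit equals the root of the sum of
# the squared discarded singular values (here, just s[2]).
err = np.linalg.norm(R - R_k, "fro")
print(round(err, 4))
```

The truncated factors `U[:, :k]` and `Vt[:k, :]` play the role of the user and item latent-factor representations from the previous slide.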

  14. Challenges Collaborative Filtering • User Cold-Start problem: not enough is known about a new user to decide who is similar (and perhaps there are no other users yet)

  15. Challenges Collaborative Filtering • Sparsity: when recommending from a large item set, users will have rated only some of the items (which makes it hard to find similar users)

  16. Challenges Collaborative Filtering • Scalability: with millions of users and items, computations become slow • Item Cold-Start problem: ratings for a new item cannot be predicted until some users have rated it [not a problem for content-based approaches]
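
One common mitigation for the user cold-start problem, not stated on the slides but widely used in practice, is falling back to globally popular items when nothing is known about the user. A minimal sketch over a made-up interaction log:

```python
from collections import Counter

# Hypothetical interaction log: (user, item) pairs.
log = [("u1", "A"), ("u1", "B"), ("u2", "A"), ("u3", "A"), ("u3", "C")]
popularity = Counter(item for _, item in log)
known_users = {u for u, _ in log}

def recommend(user, k=2):
    """Fall back to the globally most popular items for an unknown user."""
    if user not in known_users:
        return [item for item, _ in popularity.most_common(k)]
    # ... otherwise run a personalized recommender (omitted in this sketch)
    return []

print(recommend("new_user"))
```

This gives every new user a non-empty list immediately, at the cost of no personalization until feedback accumulates.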

  17. Related Approaches Srebro & Jaakkola (2003) Weighted SVD • Binary weights: wij = 1 means the element is observed, wij = 0 means the element is missing • Positive weights: weights are inversely proportional to noise variance, allowing for sampling density • e.g. elements are actually sample averages from counties or districts

  18. Related Approaches SVD with Missing Values • Uses Expectation Maximization to calculate the approximation of the matrix • E step: fills in the missing values of the rating matrix with the low-rank approximation matrix • M step: computes the best approximation matrix in the Frobenius norm • Local minima exist for weighted SVD
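
The E/M alternation described above can be sketched with NumPy. The 3×3 matrix and the convention that 0 marks a missing rating are assumptions for illustration:

```python
import numpy as np

def svd_em(R, mask, k, iters=50):
    """EM-style SVD imputation: the E step fills missing cells with the
    current rank-k approximation; the M step recomputes the best rank-k
    fit (truncated SVD) of the completed matrix."""
    X = np.where(mask, R, R[mask].mean())  # init missing cells with the global mean
    M = X
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        M = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # M step: best rank-k fit
        X = np.where(mask, R, M)                   # E step: re-impute missing cells
    return M

# Hypothetical 3x3 ratings matrix; 0 marks a missing rating here.
R = np.array([[5., 3., 0.],
              [4., 0., 5.],
              [1., 5., 2.]])
mask = R > 0
M = svd_em(R, mask, k=2)
```

As the slide warns, this procedure only finds a local minimum; the result depends on the initialization of the missing cells.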

  19. Related Approaches Agarwal (2009) Regression-Based Latent Factor Models • They presented a regression-based factor model that regularizes and deals with both cold-start and warm-start in a single framework • It takes advantage of other users’ ratings, and of item and user features, to predict the missing ratings

  20. Model Based Approaches • User data is compressed into a predictive model • Instead of using ratings directly, develop a model of user ratings • Use the model to predict ratings for new items • To build the model: • Bayesian network (probabilistic) • Clustering (classification) • Rule-based approaches (e.g., association rules between co-purchased items)
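
The rule-based option on this slide can be sketched by mining co-purchase counts from baskets and computing rule confidence; the baskets below are made up:

```python
from itertools import combinations
from collections import Counter

# Hypothetical purchase baskets.
baskets = [
    {"camera", "memory_card", "tripod"},
    {"camera", "memory_card"},
    {"vcr", "tape"},
    {"camera", "tripod"},
]

pair_count = Counter()
item_count = Counter()
for b in baskets:
    item_count.update(b)
    pair_count.update(combinations(sorted(b), 2))

def confidence(x, y):
    """Confidence of the rule 'x => y': P(y in basket | x in basket)."""
    pair = tuple(sorted((x, y)))
    return pair_count[pair] / item_count[x]

print(confidence("camera", "memory_card"))
```

High-confidence rules ("users who bought this also bought X") are then used as the predictive model, instead of raw ratings.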

  21. Related Approaches Stern (2009) Large Scale Online Bayesian Recommender • Integrates Collaborative Filtering with content information • Users and items are compared in the same space • Flexible feedback model • Bayesian probabilistic approach

  22. Value of the Recommendation Many considerations are taken into account to build the list of recommendations: • The likelihood of a recommendation being accepted by the user • The immediate value to the site • The long-term implications of the recommendations on the user’s future choices

  23. Value of the Recommendation Example: suggest a video camera that will be accepted with probability 0.5, or a VCR with probability 0.6 • Recommending the video camera is immediately less profitable than the VCR • In the long term it might be more profitable (the camera has accessories that are likely to be purchased, whereas the VCR does not)

  24. Sequential Nature of the Recommendation Process • The recommender system suggests items to the user • The user can accept or reject the items offered • A new list of items is calculated based on the user’s past ratings

  25. Markov Decision Process (MDP) • An MDP is a model for stochastic decision problems • An MDP is a four-tuple (S, A, Rwd, tr) where S is a set of states, A is a set of actions, Rwd is the reward associated with each state/action pair, and tr is the transition function • The goal is to behave so as to maximize the total reward • The optimal solution is a policy π specifying which action to perform in each state

  26. Markov Decision Process (MDP) • The value function V of the policy π is defined as: Vπ(s) = Rwd(s, π(s)) + γ Σs' tr(s, π(s), s') Vπ(s'), where γ is a discount factor • The optimal value function V* is defined as: V*(s) = maxa [Rwd(s, a) + γ Σs' tr(s, a, s') V*(s')]

  27. Markov Decision Process (MDP) • To find the optimal policy π* and its corresponding value function V*: • We search the space of possible policies, starting with an initial policy π0(s) • At each step we compute the value function based on the former policy, and update the policy based on the new value function
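
The evaluate-then-improve loop described above is policy iteration. A minimal sketch on a tiny hypothetical 2-state, 2-action MDP (the rewards and transition probabilities below are made up for illustration):

```python
# States S, actions A, and discount factor gamma for a toy MDP.
S, A, gamma = [0, 1], [0, 1], 0.9
Rwd = [[0.0, 1.0],                 # Rwd[s][a]: reward for action a in state s
       [2.0, 0.0]]
tr = [[[0.8, 0.2], [0.1, 0.9]],    # tr[s][a][s']: transition probability
      [[1.0, 0.0], [0.5, 0.5]]]

def evaluate(pi, sweeps=200):
    """Approximate V^pi by iterating the Bellman expectation equation."""
    V = [0.0 for _ in S]
    for _ in range(sweeps):
        V = [Rwd[s][pi[s]] + gamma * sum(tr[s][pi[s]][t] * V[t] for t in S)
             for s in S]
    return V

def improve(V):
    """Return the policy that is greedy with respect to V."""
    return [max(A, key=lambda a: Rwd[s][a] + gamma * sum(tr[s][a][t] * V[t] for t in S))
            for s in S]

pi = [0, 0]                        # initial policy pi_0
while True:
    V = evaluate(pi)               # policy evaluation
    new_pi = improve(V)            # policy improvement
    if new_pi == pi:               # policy stable -> pi is optimal
        break
    pi = new_pi
print(pi, [round(v, 3) for v in V])
```

At termination, V satisfies the Bellman optimality equation from the previous slide (up to the evaluation tolerance), and pi is the optimal policy π*.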

  28. Temporal Dynamics in the Recommendations • Item-side effects: • Product perception and popularity are constantly changing • Seasonal patterns influence items’ popularity • User-side effects: • Customers continually redefine their tastes • Transient, short-term bias; anchoring • Drifting rating scale • Change of rater within a household

  29. Temporal Dynamics - Challenges • Multiple sources: both items and users are changing over time • Multiple targets: each user/item forms a unique time series, so data per target is scarce • Inter-related targets: the signal needs to be shared among users (the foundation of collaborative filtering), so the multiple problems cannot be isolated

  30. Time Sensitive Recommenders Koren (2009) Collaborative Filtering with Temporal Dynamics • He uses factor models to separate different aspects of the ratings and observe changes in: • The rating scale of individual users • The popularity of individual items • User preferences

  31. Recommender Systems with Social Networks • Use the interaction of the user with others to make recommendations • Motivation: • Social influence: users adopt the behavior of their friends • Challenges: • How do we define influence between users?

  32. Recommender Systems with Social Networks Preliminary Approaches Jamali & Ester (2009) TrustWalker: A Random Walk Model for Combining Trust-based and Item-based Recommendation • Explores the trust network to find raters • Aggregates the ratings from these raters for the prediction • Different weights for different users

  33. Open Challenges • Transparency • Convince a user to accept a recommendation • Help a user make a good decision • Help a user fit a goal or mood • Exploration versus Exploitation • Cold start problems (for new items, and for new users) • Choosing what questions to ask users • Trade-off between optimizing for this user vs. for all users • How can meta-data on user or item help? • Guided Navigation • Providing a guide over a vast body of content • User's intent detection

  34. Open Challenges • Time Value • Does the value of user input decay with time? • Do items change in relevance with time? • How to adjust for recent user experience? • Evaluation of the recommender’s performance • Scalability • Combining content and collaborative recommenders efficiently
