
Trust-aware Recommender Systems




Presentation Transcript


  1. Trust-aware Recommender Systems Massa, P. & Avesani, P. RecSys 2007 Presented by Danielle Lee

  2. Problem & Purpose • Poor performance of collaborative filtering due to • Data sparsity • Ad hoc user profiles / copy-profile attacks • Cold start users & newly added items (which have few ratings) • Purpose: search for trustworthy users by propagating trust over the trust network, instead of searching for similar users as CF does. • Merely providing a trust statement is an effective way of bootstrapping RSs for new users with very few ratings.

  3. Trust Networks and Trust Metrics • Trust metrics: algorithms whose goal is to predict, based on the trust network, the trustworthiness of “unknown” users. • Local trust metrics: capture the very personal and subjective views of the users; every user assigns a different trust value to every other user → MoleTrust (sketched below). • Global trust metrics: compute a global “reputation” value that approximates how the community as a whole considers a certain user → PageRank.
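A local trust metric like MoleTrust can be sketched compactly. The following is a minimal sketch, assuming the linear distance decay the paper describes for binary trust statements (a user first reached at distance n within horizon d gets predicted trust (d − n + 1)/d); the function and variable names are illustrative, not from the authors' code.

```python
from collections import deque

def moletrust_binary(trust_net, source, horizon):
    """Minimal local trust metric for binary trust statements:
    a user first reached at distance n (BFS = shortest path) within the
    propagation horizon gets predicted trust (horizon - n + 1) / horizon,
    so trust decays linearly with distance from the source user.
    `trust_net` maps each user to the set of users they directly trust."""
    predicted = {}
    queue = deque([(source, 0)])
    seen = {source}
    while queue:
        user, dist = queue.popleft()
        if dist == horizon:          # do not propagate past the horizon
            continue
        for trusted in trust_net.get(user, ()):
            if trusted not in seen:
                seen.add(trusted)
                # first reached at n = dist + 1, so trust = (horizon - dist) / horizon
                predicted[trusted] = (horizon - dist) / horizon
                queue.append((trusted, dist + 1))
    return predicted
```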

  4. Trust-Aware Recommender Architecture
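The architecture figure itself is not reproduced in the transcript; its key idea is that the rating predictor stays the classical weighted average, with the weights supplied by the trust metric instead of by user similarity. A minimal sketch of that prediction step, assuming Resnick-style mean-centered weighting; the parameter names are illustrative.

```python
def predict_rating(active, item, ratings, mean_rating, weights):
    """Weighted-average rating predictor. Plain CF fills `weights` with
    Pearson similarities; the trust-aware variant fills it with the
    (possibly propagated) trust values of the other users instead.
    `ratings[u]` maps item -> rating for user u; `mean_rating[u]` is
    user u's average rating."""
    num = den = 0.0
    for u, w in weights.items():
        if u != active and item in ratings.get(u, {}):
            num += w * (ratings[u][item] - mean_rating[u])
            den += abs(w)
    if den == 0.0:
        return None  # no weighted neighbour rated the item: no prediction
    return mean_rating[active] + num / den
```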

  5. Data Set • Epinions.com, which is a consumer opinion site • Users review and rate items (such as cars, books, movies, software, etc.) • Users express a Web of Trust • Inserting a user in the Web of Trust equals a trust statement of value “1.” • 49,290 users, 139,738 different items, 664,824 reviews and 487,181 trust statements. • 52.84% cold start users, having fewer than 5 reviews • Mean number of users in the Web of Trust is 9.88 (std. dev. 32.85) • Compared with MovieLens, Epinions has many more cold start users and much higher data sparsity.

  6. Evaluation Measures (1) • Mean Absolute Error (MAE) • Mean Absolute User Error (MAUE) • MAE for every user is computed independently, then • all the per-user MAEs are averaged. • Ratings coverage • The fraction of ratings for which the RS algorithm is able to produce a predicted rating. • User coverage • The portion of users for which the RS is able to predict at least one rating. (All four measures are sketched below.)
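All four measures can be computed in one pass. A minimal sketch, assuming predictions are collected per user and `None` marks a rating the algorithm could not predict; the names are illustrative.

```python
def evaluate(per_user_pairs):
    """`per_user_pairs` maps each user to a list of (predicted, actual)
    pairs, with predicted = None when no prediction could be made.
    Returns (MAE, MAUE, ratings coverage, user coverage)."""
    all_errors, user_maes = [], []
    total = 0
    for pairs in per_user_pairs.values():
        errs = [abs(p - a) for p, a in pairs if p is not None]
        total += len(pairs)
        all_errors.extend(errs)
        if errs:                                   # at least one prediction
            user_maes.append(sum(errs) / len(errs))
    mae = sum(all_errors) / len(all_errors)        # over all predicted ratings
    maue = sum(user_maes) / len(user_maes)         # each user weighted equally
    ratings_cov = len(all_errors) / total
    user_cov = len(user_maes) / len(per_user_pairs)
    return mae, maue, ratings_cov, user_cov
```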

  7. Evaluation Measures (2) • Cold start users: provided 1~4 ratings • Heavy users: provided more than 10 ratings • Opinionated users: provided more than 4 ratings with a std. dev. greater than 1.5 • Black sheep: provided more than 4 ratings and for which the average distance of their rating on item i with respect to the mean rating of item i is greater than 1 • Niche items: received fewer than 5 ratings • Controversial items: received ratings whose std. dev. is greater than 1.5
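Since these views overlap (a user can be both heavy and opinionated, for instance), a classifier is clearer if it returns every matching label. A minimal sketch of the slide's definitions, assuming the sample standard deviation; all names are illustrative.

```python
from statistics import mean, stdev

def user_views(user_ratings, item_mean):
    """`user_ratings` maps item -> rating for one user; `item_mean` maps
    item -> mean rating over all users. Returns every view the user
    falls into under the definitions on this slide."""
    views, n = set(), len(user_ratings)
    if 1 <= n <= 4:
        views.add("cold start")
    if n > 10:
        views.add("heavy")
    if n > 4:                       # n > 4 guarantees enough points for stdev
        if stdev(user_ratings.values()) > 1.5:
            views.add("opinionated")
        if mean(abs(r - item_mean[i]) for i, r in user_ratings.items()) > 1:
            views.add("black sheep")
    return views

def item_views(item_ratings):
    """`item_ratings` is the list of ratings one item received."""
    views = set()
    if len(item_ratings) < 5:
        views.add("niche")
    if len(item_ratings) > 1 and stdev(item_ratings) > 1.5:
        views.add("controversial")
    return views
```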

  8. Input_Results (Simple Algorithm) • To explore what MAE a simple algorithm would achieve • Always5: always returns 5 as the predicted rating a user would give to an item • MAE: 1.008 • Average rating: returns the mean of the ratings provided by one user • MAE: 0.9243 • Most of the ratings in the data set are in fact 5, and on controversial items these baselines perform very badly. • An MAE value computed over all ratings is therefore not a useful way to compare different algorithms.
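Both baselines are trivial to state in code, which makes the slide's point concrete: a constant predictor looks deceptively good when most ratings are 5. A sketch with illustrative names:

```python
def always5(_user, _item):
    """Always5: the constant prediction 5 (MAE 1.008 overall, but only
    because most Epinions ratings are in fact 5)."""
    return 5

def average_rating(user, _item, mean_rating):
    """Average rating: the active user's own mean rating (MAE 0.9243).
    Both baselines perform very badly on controversial items."""
    return mean_rating[user]
```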

  9. Input_Results (TrustAll) • TrustAll: predicts as the rating for a certain item the unweighted average of all the ratings given to that item by all the users except the active user. • TrustAll (0.821) outperformed standard CF (0.843) in MAE, and TrustAll's coverage (88.20%) exceeded CF's (51.28%). • On cold start users, TrustAll (0.856) outperformed CF (1.094) in MAE, and TrustAll's coverage (92.92%) far exceeded CF's (3.22%). • On controversial items, CF (1.515) outperformed TrustAll (1.741). • This is due to the sparsity of the data and the relatively low variance in rating values.
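TrustAll can be read as the degenerate trust metric that assigns trust 1 to everybody, which is why it pairs naturally with the predictor sketched under slide 4. A minimal sketch, with illustrative names:

```python
def trust_all(active, item, ratings_by_item):
    """TrustAll: the unweighted average of all ratings given to `item`
    by every user except the active one. `ratings_by_item[i]` maps
    user -> rating for item i."""
    others = [r for u, r in ratings_by_item.get(item, {}).items() if u != active]
    if not others:
        return None  # nobody else rated the item: no prediction
    return sum(others) / len(others)
```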

  10. Input_Results (Overall) • MT1: the users explicitly trusted by the active user, not propagating trust. • MT1 is able to predict fewer ratings than CF, but the predictions are spread more equally over all the users, and MT1's predictions are more accurate than CF's. • MT1 works especially well for cold start users. • MT2, MT3 & MT4: trust propagated up to distance 2, 3 & 4, respectively. • Average number of directly trusted users (MT1): 9.88 • Average number of users reached at distance 2 (MT2): 399.89 • 4,386.32 for MT3 and 16,333.94 for MT4 • The larger the trust propagation horizon, the greater the coverage, but the error can increase as well (see the sketch below).
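The neighbourhood growth quoted above falls straight out of the propagation sketch under slide 3: widening the horizon from 1 to 4 hops multiplies the reachable users by three orders of magnitude, which is what drives the coverage/error trade-off. A small helper, reusing the illustrative `moletrust_binary` from earlier:

```python
def avg_neighbourhood_size(trust_net, horizon):
    """Average number of users reachable within `horizon` trust hops
    (MT1..MT4 correspond to horizon = 1..4); on Epinions this grows from
    9.88 users at horizon 1 to 16,333.94 at horizon 4. Averages over the
    users that appear as sources in `trust_net`."""
    sizes = [len(moletrust_binary(trust_net, u, horizon)) for u in trust_net]
    return sum(sizes) / len(sizes)
```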

  11. Input_Results (Overall)

  12. Input_Results (Overall)

  13. Output_Results • The results of the “rating predictor” • For the combined data of CF + MTx (x from 1 to 4), the coverage is greater than the coverage of either technique alone, but the error lies between CF and MTx (worse than MTx, better than CF); one possible combination is sketched below.
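The transcript does not spell out how the CF and MTx predictions are merged, but the reported behaviour (coverage above either technique alone, error between the two) is consistent with a simple fallback scheme. The following is an illustrative sketch of such a scheme, not the authors' exact method:

```python
def combined_prediction(cf_pred, mt_pred):
    """Fall back to whichever prediction exists; average when both do.
    Any rating predictable by CF *or* MTx becomes predictable (raising
    coverage), while mixing in CF's noisier predictions keeps the error
    between MTx's and CF's."""
    if cf_pred is None:
        return mt_pred
    if mt_pred is None:
        return cf_pred
    return (cf_pred + mt_pred) / 2
```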

  14. Conclusion & Discussion • Using the ratings of directly trusted users achieves the smallest error with an acceptable coverage, particularly for controversial items and black sheep. • For cold start users, CF fails almost completely, while directly trusted users achieve a very small error and good coverage. • Depending on the characteristics of the data set, the algorithms perform differently.
