
IMDB Improved (Random name that just stuck)


Presentation Transcript


  1. IMDB Improved (Random name that just stuck) Joshua Boren, Douglas Hitchcock, Calvin Kern, Ryan Norton

  2. Recommenders • What is a recommender? • How does a recommender learn? • What applications are there for recommenders?

  3. Steer Clear! • Our recommender suggests movie titles that a user should not bother watching. • We have developed the world’s first unrecommender.

  4. Learning from Data • How did our recommender learn from the dataset? • What did our dataset consist of? • What features did we have to use? • Where did we get our data?

  5. Netflix Prize Data Set • Movies file: movie ID, year released, movie title • Training set: one file per movie, each entry containing a user ID, a rating, and the date of the rating
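
The training files follow the contest's published layout: the first line is the movie id followed by a colon, and every later line is 'CustomerID,Rating,Date' with dates written as YYYY-MM-DD. A minimal parsing sketch (the function name and return shape are illustrative, not from the slides):

    from datetime import date

    def load_movie_ratings(path):
        # Parse one per-movie training file. First line: 'MovieID:'.
        # Later lines: 'CustomerID,Rating,Date'.
        with open(path) as f:
            movie_id = int(f.readline().strip().rstrip(':'))
            ratings = {}
            for line in f:
                user_id, rating, ymd = line.strip().split(',')
                ratings[int(user_id)] = (int(rating), date.fromisoformat(ymd))
        return movie_id, ratings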

  6. Organizing the data • To apply the recommender, the data was reorganized • Converted from movies mapped to users into users mapped to movies, as sketched below
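
One way to express that inversion, assuming the per-movie dictionaries produced by the parsing sketch above (names are illustrative):

    from collections import defaultdict

    def invert_to_users(movie_ratings):
        # Regroup {movie_id: {user_id: (rating, date)}} into
        # {user_id: {movie_id: rating}}, dropping the dates, so each
        # user's full rating history is available in one lookup.
        by_user = defaultdict(dict)
        for movie_id, users in movie_ratings.items():
            for user_id, (rating, _day) in users.items():
                by_user[user_id][movie_id] = rating
        return dict(by_user)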

  7. Learning from Data • Learning for one user, then building on that • Re-creating the model whenever a user's ratings are updated • Methods used

  8. Collaborative Filtering • Collaborating: collecting preference or taste information from many users • Filtering • Cold start: no early assumptions were made; conclusions were formed only after reviewing a large data set • Content-based filtering: not all categories are treated equally
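
The slides name collaborative filtering without fixing a particular similarity measure. A common user-based form scores users by cosine similarity over co-rated movies and predicts a rating as a similarity-weighted average; a sketch under that assumption (all names are illustrative):

    import math

    def cosine_sim(a, b):
        # Cosine similarity between two {movie_id: rating} dicts,
        # computed over the movies both users rated.
        common = a.keys() & b.keys()
        if not common:
            return 0.0
        dot = sum(a[m] * b[m] for m in common)
        norm_a = math.sqrt(sum(a[m] ** 2 for m in common))
        norm_b = math.sqrt(sum(b[m] ** 2 for m in common))
        return dot / (norm_a * norm_b)

    def predict_rating(user_id, movie_id, by_user):
        # Similarity-weighted average of the ratings that all other
        # users gave to this movie; None if no neighbor rated it.
        target = by_user[user_id]
        weighted = total = 0.0
        for other_id, other in by_user.items():
            if other_id == user_id or movie_id not in other:
                continue
            w = cosine_sim(target, other)
            weighted += w * other[movie_id]
            total += abs(w)
        return weighted / total if total else None

For the "Steer Clear" use, titles whose predicted rating falls below a chosen threshold would be the ones flagged as not worth watching.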

  9. Results • What have our efforts gotten us? • Precision: 90% • Recall (true positive rate): 100%
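
With the unrecommender's positive class taken to be "steer clear" flags, precision is the fraction of flagged titles the user would truly dislike, and recall is the fraction of truly disliked titles that get flagged. A minimal sketch over sets of movie ids (a hypothetical helper, not from the slides):

    def flag_precision_recall(flagged, truly_bad):
        # 'flagged' and 'truly_bad' are sets of movie ids; a true
        # positive is a flagged title the user would indeed dislike.
        tp = len(flagged & truly_bad)
        precision = tp / len(flagged) if flagged else 0.0
        recall = tp / len(truly_bad) if truly_bad else 0.0
        return precision, recall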

  10. Results • How do we define a successful recommendation? • How did our results stack up against the teams competing in the Netflix Prize? • Trivial algorithm RMSE: 1.0540 • Netflix winner RMSE: 0.8554 • Our RMSE: 0.9820
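
RMSE, the Netflix Prize's scoring metric, is the square root of the mean squared difference between predicted and actual ratings: RMSE = sqrt((1/N) * sum((predicted_i - actual_i)^2)). A short sketch:

    import math

    def rmse(pairs):
        # pairs: iterable of (predicted, actual) rating pairs.
        pairs = list(pairs)
        return math.sqrt(sum((p - a) ** 2 for p, a in pairs) / len(pairs))

    # e.g. rmse([(3.4, 4), (2.1, 2), (4.8, 5)]) -> about 0.37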

  11. Future Plans • Given more time, how would we improve our recommender system to be more accurate? • What other changes would/could we make?

  12. Conclusion
