Principled Regularization for Probabilistic Matrix Factorization


Presentation Transcript


  1. Principled Regularization for Probabilistic Matrix Factorization. Robert Bell, Suhrid Balakrishnan, AT&T Labs-Research. Duke Workshop on Sensing and Analysis of High-Dimensional Data, July 26-28, 2011

  2. Probabilistic Matrix Factorization (PMF) • Approximate a large n-by-m matrix R by M = P′Q • P and Q each have k rows, k << n, m • m_ui = p_u′q_i • R may be sparsely populated • Prime tool in the Netflix Prize • 99% of ratings were missing
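
A minimal numpy sketch of the PMF prediction step (sizes and variable names are illustrative, not from the slides):

```python
import numpy as np

# Illustrative sizes; the slides only require k << n, m.
n, m, k = 1000, 200, 10
rng = np.random.default_rng(0)

P = rng.normal(size=(k, n))   # user factors: k rows, one column per user
Q = rng.normal(size=(k, m))   # item factors: k rows, one column per item

M = P.T @ Q                   # n-by-m low-rank approximation M = P'Q
m_ui = P[:, 3] @ Q[:, 7]      # single entry m_ui = p_u' q_i  (u = 3, i = 7)
assert np.isclose(M[3, 7], m_ui)
```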

  3. Regularization for PMF • Needed to avoid overfitting • Even after limiting rank of M • Critical for sparse, imbalanced data • Penalized least squares • Minimize Σ_observed (u,i) (r_ui − p_u′q_i)² + λ(‖P‖² + ‖Q‖²)

  4. Regularization for PMF • Needed to avoid overfitting • Even after limiting rank of M • Critical for sparse, imbalanced data • Penalized least squares • Minimize Σ_observed (u,i) (r_ui − p_u′q_i)² + λ(‖P‖² + ‖Q‖²) • or Σ_observed (u,i) (r_ui − p_u′q_i)² + λ_P‖P‖² + λ_Q‖Q‖²

  5. Regularization for PMF • Needed to avoid overfitting • Even after limiting rank of M • Critical for sparse, imbalanced data • Penalized least squares • Minimize Σ_observed (u,i) (r_ui − p_u′q_i)² + λ(‖P‖² + ‖Q‖²) • or Σ_observed (u,i) (r_ui − p_u′q_i)² + λ_P‖P‖² + λ_Q‖Q‖² • λ's selected by cross validation
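
A hedged sketch of these two penalized objectives (the mask convention and function name are mine):

```python
import numpy as np

def pmf_objective(R, mask, P, Q, lam_P, lam_Q):
    """Penalized least squares for PMF.

    R    : n-by-m rating matrix (arbitrary values where unobserved)
    mask : n-by-m 0/1 array marking the observed (u, i) pairs
    Setting lam_P == lam_Q gives the single-lambda objective;
    unequal values give the separate-lambda variant.
    """
    err = (R - P.T @ Q) * mask                     # errors on observed entries only
    return (np.sum(err ** 2)
            + lam_P * np.sum(P ** 2)
            + lam_Q * np.sum(Q ** 2))
```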

  6. Research Questions • Should we use separate λ_P and λ_Q?

  7. Research Questions • Should we use separate λ_P and λ_Q? • Should we use k separate λ's for each dimension of P and Q?

  8. Matrix Completion with Noise (Candes and Plan, Proc IEEE, 2010) • Rank reduction without explicit factors • No pre-specification of k, rank(M) • Regularization applied directly to M • Trace norm, a.k.a. nuclear norm • Sum of the singular values of M • Minimize ‖M‖* subject to Σ_observed (u,i) (r_ui − m_ui)² ≤ δ² • “Equivalent” to L2 regularization for P, Q
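
The trace-norm penalty is typically handled by soft-thresholding singular values; a hedged sketch of that generic building block (not necessarily the exact solver used by Candes and Plan or by the authors):

```python
import numpy as np

def trace_norm(M):
    """Nuclear norm: the sum of the singular values of M."""
    return np.linalg.svd(M, compute_uv=False).sum()

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_* at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt     # soft-threshold the spectrum
```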

  9. Research Questions • Should we use separate λ_P and λ_Q? • Should we use k separate λ's for each dimension of P and Q? • Should we use the trace norm for regularization?

  10. Bayesian Matrix Factorization (BPMF) (Salakhutdinov and Mnih, ICML 2008) • Let r_ui ~ N(p_u′q_i, σ²) • No PMF-type regularization • p_u ~ N(μ_P, Λ_P⁻¹) and q_i ~ N(μ_Q, Λ_Q⁻¹) • Priors for σ², μ_P, μ_Q, Λ_P, Λ_Q • Fit by Gibbs sampling • Substantial reduction in prediction error relative to PMF with L2 regularization
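
Because every conditional posterior in BPMF is Gaussian, Gibbs sampling reduces to drawing from closed-form normals. A hedged sketch of the draw for one user's factor vector p_u, following the standard conditional (variable names are mine, not from the paper):

```python
import numpy as np

def sample_user_factor(r_u, Q_u, sigma2, mu_P, Lambda_P, rng):
    """Draw p_u from its Gaussian conditional posterior, given all else.

    r_u      : this user's observed ratings (length n_u)
    Q_u      : k-by-n_u item factors for the items this user rated
    sigma2   : observation noise variance
    mu_P     : k-vector, prior mean of user factors
    Lambda_P : k-by-k prior precision of user factors
    """
    prec = Lambda_P + (Q_u @ Q_u.T) / sigma2                     # posterior precision
    mean = np.linalg.solve(prec, Lambda_P @ mu_P + (Q_u @ r_u) / sigma2)
    return rng.multivariate_normal(mean, np.linalg.inv(prec))    # one Gibbs draw
```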

  11. Research Questions • Should we use separate λ_P and λ_Q? • Should we use k separate reg. parameters for each dimension of P and Q? • Should we use the trace norm for regularization? • Does BPMF “regularize” appropriately?

  12. Matrix Factorization with Biases • Let m_ui = μ + a_u + b_i + p_u′q_i • Regularization similar to before • Minimize Σ_observed (u,i) (r_ui − m_ui)² + λ(‖a‖² + ‖b‖² + ‖P‖² + ‖Q‖²)

  13. Matrix Factorization with Biases • Let m_ui = μ + a_u + b_i + p_u′q_i • Regularization similar to before • Minimize Σ_observed (u,i) (r_ui − m_ui)² + λ(‖a‖² + ‖b‖² + ‖P‖² + ‖Q‖²) • or Σ_observed (u,i) (r_ui − m_ui)² + λ_a‖a‖² + λ_b‖b‖² + λ_P‖P‖² + λ_Q‖Q‖²
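
A sketch of the biased model and its penalized objective with separate λ's (mask convention and names are mine):

```python
import numpy as np

def predict(mu, a, b, P, Q):
    """m_ui = mu + a_u + b_i + p_u' q_i for every (u, i) pair."""
    return mu + a[:, None] + b[None, :] + P.T @ Q

def biased_objective(R, mask, mu, a, b, P, Q, lam_a, lam_b, lam_P, lam_Q):
    """Penalized least squares with biases; equal lambdas recover the single-lambda form."""
    err = (R - predict(mu, a, b, P, Q)) * mask
    return (np.sum(err ** 2)
            + lam_a * np.sum(a ** 2) + lam_b * np.sum(b ** 2)
            + lam_P * np.sum(P ** 2) + lam_Q * np.sum(Q ** 2))
```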

  14. Research Questions • Should we use separate λ_P and λ_Q? • Should we use k separate reg. parameters for each dimension of P and Q? • Should we use the trace norm for regularization? • Does BPMF “regularize” appropriately? • Should we use separate λ's for the biases?

  15. Some Things this Talk Will Not Cover • Various extensions of PMF • Combining explicit and implicit feedback • Time varying factors • Non-negative matrix factorization • L1 regularization • λ's depending on user or item sample sizes • Efficiency of optimization algorithms • Use Newton's method, each coordinate separately • Iterate to convergence
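
A sketch of the coordinate-wise Newton update mentioned above, for a single user-factor coordinate (residual-based form; names are mine). Because the coordinate objective is quadratic plus an L2 term, one Newton step is the exact minimizer:

```python
import numpy as np

def newton_step_coordinate(p_uk, q_k, e_u, lam):
    """One Newton step for coordinate p_uk with everything else fixed.

    q_k : q_ik over the items user u has rated (length n_u)
    e_u : current residuals r_ui - m_ui for those items (length n_u)
    lam : L2 penalty applied to this coordinate
    """
    grad = -2.0 * np.dot(q_k, e_u) + 2.0 * lam * p_uk
    hess = 2.0 * np.dot(q_k, q_k) + 2.0 * lam
    return p_uk - grad / hess          # quadratic objective, so this step is exact
```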

  16. No Need for Separate λ_P and λ_Q • M = (cP)′(c⁻¹Q) is invariant for c ≠ 0 • For initial P and Q, solve for c to minimize λ_P‖cP‖² + λ_Q‖c⁻¹Q‖² • c = (λ_Q‖Q‖² / (λ_P‖P‖²))^(1/4) • Gives minimized penalty 2√(λ_P λ_Q)‖P‖‖Q‖, which depends only on the product λ_P λ_Q • Sufficient to let λ_P = λ_Q = λ_PQ
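
A quick numeric check of this rescaling argument (illustrative values, my own variable names): predictions are unchanged by (cP, c⁻¹Q), and at the optimal c the combined penalty depends only on the product λ_P·λ_Q.

```python
import numpy as np

rng = np.random.default_rng(1)
P, Q = rng.normal(size=(2, 50)), rng.normal(size=(2, 80))
lam_P, lam_Q = 0.3, 1.2

c = (lam_Q * np.sum(Q ** 2) / (lam_P * np.sum(P ** 2))) ** 0.25
penalty = lam_P * np.sum((c * P) ** 2) + lam_Q * np.sum((Q / c) ** 2)

# Predictions are invariant under the rescaling ...
assert np.allclose(P.T @ Q, (c * P).T @ (Q / c))
# ... and the minimized penalty equals 2*sqrt(lam_P*lam_Q)*||P||*||Q||.
assert np.isclose(penalty,
                  2 * np.sqrt(lam_P * lam_Q)
                  * np.linalg.norm(P) * np.linalg.norm(Q))
```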

  17. Bayesian Motivation for L2 Regularization • Simplest case: only one item • R is n-by-1 • r_u1 = a_1 + ε_u1, a_1 ~ N(0, τ²), ε_u1 ~ N(0, σ²) • Posterior mean (or MAP) of a_1 minimizes the penalized criterion Σ_u (r_u1 − a)² + λ_a a² • λ_a = σ²/τ² • Best λ is inversely proportional to τ²
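
A small numeric check of this equivalence (illustrative numbers, assuming the normal model above): the posterior mean of a_1 coincides with the ridge solution when λ = σ²/τ².

```python
import numpy as np

rng = np.random.default_rng(2)
tau2, sigma2, n = 0.09, 1.0, 40          # prior variance, noise variance, sample size
a1 = rng.normal(0, np.sqrt(tau2))
r = a1 + rng.normal(0, np.sqrt(sigma2), size=n)

# Posterior mean of a1 under the normal model ...
post_mean = (n / sigma2) * r.mean() / (n / sigma2 + 1 / tau2)
# ... equals the penalized least squares solution with lambda = sigma2 / tau2.
lam = sigma2 / tau2
ridge = n * r.mean() / (n + lam)
assert np.isclose(post_mean, ridge)
```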

  18. Implications for Regularization of PMF • Allow λ_a ≠ λ_b • If τ_a² ≠ τ_b² • Allow λ_a ≠ λ_b ≠ λ_PQ • Allow λ_PQ1 ≠ λ_PQ2 ≠ … ≠ λ_PQk? • Trace norm does not • BPMF appears to

  19. Simulation Experiment Structure • n = 2,500 users, m = 400 items • 250,000 observed ratings • 150,000 in Training (to estimate a, b, P, Q) • 50,000 in Validation (to tune λ's) • 50,000 in Test (to estimate MSE) • Substantial imbalance in ratings • 8 to 134 ratings per user in Training data • 33 to 988 ratings per item in Training data

  20. Simulation Model • r_ui = a_u + b_i + p_u1 q_i1 + p_u2 q_i2 + ε_ui • Elements of a, b, P, Q, and ε • Independent normals with mean 0 • Var(a_u) = 0.09 • Var(b_i) = 0.16 • Var(p_u1 q_i1) = 0.04 • Var(p_u2 q_i2) = 0.01 • Var(ε_ui) = 1.00
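
A hedged sketch of drawing data from this model (for simplicity it generates the full n-by-m matrix; the actual study observed only 250,000 imbalanced entries):

```python
import numpy as np

n, m = 2500, 400
rng = np.random.default_rng(3)

a = rng.normal(0, np.sqrt(0.09), n)                  # user biases, Var = 0.09
b = rng.normal(0, np.sqrt(0.16), m)                  # item biases, Var = 0.16
# For independent zero-mean factors, Var(p q) = Var(p) Var(q),
# so drawing each with sd = Var(pq)**0.25 gives the stated product variances.
p1, q1 = rng.normal(0, 0.04 ** 0.25, n), rng.normal(0, 0.04 ** 0.25, m)
p2, q2 = rng.normal(0, 0.01 ** 0.25, n), rng.normal(0, 0.01 ** 0.25, m)

M = a[:, None] + b[None, :] + np.outer(p1, q1) + np.outer(p2, q2)
R = M + rng.normal(0, 1.0, size=(n, m))              # noise variance 1.00
```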

  21. Evaluation • Test MSE for estimation of m_ui = E(r_ui) • MSE = average over Test pairs (u,i) of (m̂_ui − m_ui)² • Limitations • Not real data • Only one replication • No standard errors
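
The criterion as code (names are mine):

```python
import numpy as np

def test_mse(m_hat, m_true):
    """Average of (m_hat_ui - m_ui)^2 over the Test (u, i) pairs."""
    return float(np.mean((np.asarray(m_hat) - np.asarray(m_true)) ** 2))
```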

  22.–25. PMF Results for k = 0 [results presented as charts; not captured in the transcript]

  26.–28. PMF Results for k = 1 [results presented as charts; not captured in the transcript]

  29.–31. PMF Results for k = 2 [results presented as charts; not captured in the transcript]

  32. Results for Matrix Completion • Performs poorly on raw ratings • MSE = .0693 • Not designed to estimate biases • Fit to residuals from PMF with k = 0 • MSE = .0477 • “Recovered” rank was 1 • Worse than MSE’s from PMF: .0428 to .0439

  33. Results for BPMF • Raw ratings • MSE = .0498, using k = 3 • Early stopping • Not designed to estimate biases • Fit to residuals from PMF with k = 0 • MSE = .0433, using k = 2 • Near .0428, for best PMF w/ biases

  34. Summary • No need for separate λ_P and λ_Q • Theory suggests using separate λ's for distinct sets of exchangeable parameters • Biases vs. factors • For individual factors • Tentative simulation results support the need for separate λ's across factors • BPMF does so automatically • PMF requires a way to do efficient tuning
