
Direct Convex Relaxations of Sparse SVM


Presentation Transcript


  1. Direct Convex Relaxations of Sparse SVM Antoni B. Chan, Nuno Vasconcelos, and Gert R. G. Lanckriet The 24th International Conference on Machine Learning (ICML 2007) Presented by Shuiwang Ji

  2. Outline • Introduction; • Quadratically Constrained Quadratic Programming (QCQP) formulation; • Semidefinite Programming (SDP) formulation; • Experiments;

  3. Sparsity of SVM The SVM solution is sparse with respect to the data points (only the support vectors contribute), but it is not sparse with respect to the features x1, …, xd.
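
To make the distinction concrete, recall the standard linear SVM decision function (textbook notation, not taken from the slides):

    f(x) = w^\top x + b, \qquad w = \sum_{i=1}^{n} \alpha_i y_i \, x^{(i)}

Only the support vectors have \alpha_i \neq 0, so the solution is sparse in the training points x^{(1)}, …, x^{(n)}; the weight vector w, however, typically has all d of its entries nonzero, i.e. it is dense in the features.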

  4. Motivations & Related Work • Features may be noisy or redundant; • Sparsity enhances interpretability; • Sparse PCA (Zou et al. & d'Aspremont et al.); • Sparse Eigen Methods by D.C. Programming (ICML 2007).

  5. An Example

  6. Vector Norm The 0-norm of a vector x is the number of nonzero entries in x (the norms used throughout are recalled below).
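
In standard notation (the slide's own formulas are not in the transcript), the norms referred to in the talk are

    \|x\|_0 = \#\{\, j : x_j \neq 0 \,\}, \qquad
    \|x\|_1 = \sum_{j=1}^{d} |x_j|, \qquad
    \|x\|_2 = \Big( \sum_{j=1}^{d} x_j^2 \Big)^{1/2}

The 0-norm measures feature sparsity directly but makes the optimization combinatorial; the 1-norm is its usual convex surrogate.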

  7. 2-norm C-SVM Primal and Dual
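
The formulas on this slide are not reproduced in the transcript; for reference, the standard 2-norm soft-margin C-SVM primal and dual (textbook form) are

    \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \xi_i
    \quad \text{s.t.}\quad y_i\big(w^\top x^{(i)} + b\big) \ge 1 - \xi_i, \ \ \xi_i \ge 0,

    \max_{\alpha}\ \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x^{(i)\top} x^{(j)}
    \quad \text{s.t.}\quad 0 \le \alpha_i \le C, \ \ \sum_{i=1}^{n} \alpha_i y_i = 0.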

  8. 1-norm LP-SVM Primal and Dual
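
Again the slide's formulas are not in the transcript; the standard 1-norm LP-SVM replaces the squared 2-norm with the 1-norm, giving a linear program,

    \min_{w,\,b,\,\xi}\ \|w\|_1 + C \sum_{i=1}^{n} \xi_i
    \quad \text{s.t.}\quad y_i\big(w^\top x^{(i)} + b\big) \ge 1 - \xi_i, \ \ \xi_i \ge 0,

whose dual (in the usual textbook form) is

    \max_{\alpha}\ \sum_{i=1}^{n} \alpha_i
    \quad \text{s.t.}\quad \Big\| \sum_{i=1}^{n} \alpha_i y_i \, x^{(i)} \Big\|_\infty \le 1, \ \ 0 \le \alpha_i \le C, \ \ \sum_{i=1}^{n} \alpha_i y_i = 0.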

  9. Convex QCQP Relaxation

  10. Interpretations of QCQP-SSVM • Problems 6 and 7 are equivalent; • QCQP-SSVM is a combination of C-SVM and LP-SVM: the 1-norm encourages sparsity while the 2-norm encourages a large margin (see the sketch below).
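
The paper's exact Problems 6 and 7 are not reproduced in this transcript. The sketch below only illustrates the combination described in the bullet above, a linear SVM whose objective mixes a squared 2-norm term (large margin) with a 1-norm term (feature sparsity), written with cvxpy; the trade-off parameter rho is hypothetical and not from the paper.

    # Sketch only: mixes the C-SVM (2-norm) and LP-SVM (1-norm) objectives.
    # It illustrates the idea on this slide, not the paper's QCQP-SSVM itself.
    import numpy as np
    import cvxpy as cp

    def mixed_norm_svm(X, y, C=1.0, rho=0.5):
        """X: (n, d) data matrix, y: labels in {-1, +1}."""
        n, d = X.shape
        w, b = cp.Variable(d), cp.Variable()
        xi = cp.Variable(n, nonneg=True)                 # hinge slacks
        margins = cp.multiply(y, X @ w + b)              # y_i (w^T x_i + b)
        obj = cp.Minimize(0.5 * cp.sum_squares(w)        # 2-norm: large margin
                          + rho * cp.norm1(w)            # 1-norm: sparse features
                          + C * cp.sum(xi))              # slack penalty
        cp.Problem(obj, [margins >= 1 - xi]).solve()
        return w.value, b.value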

  11. QCQP-SSVM Dual

  12. QCQP-SSVM • QCQP-SSVM automatically learns an adaptive soft-threshold on the original SVM hyperplane.
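
For reference, "soft-threshold" here refers to the elementwise shrinkage operator (generic form; the adaptive, feature-dependent thresholds learned by QCQP-SSVM are not reproduced here):

    \big[ S_{\gamma}(\hat{w}) \big]_j = \operatorname{sign}(\hat{w}_j)\, \max\big( |\hat{w}_j| - \gamma_j,\ 0 \big)

Entries whose magnitude falls below the threshold \gamma_j are set exactly to zero, which is what produces a feature-sparse hyperplane.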

  13. SDP Relaxation

  14. SDP-SSVM Dual • The optimal weighting matrix increases the influence of relevant features and suppresses less relevant ones; • SDP-SSVM learns a weighting on the inner product such that the hyperplane in feature space is sparse (illustrated below).
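
As an illustration of what a learned weighting on the inner product means (notation assumed, not taken from the slides): with a positive semidefinite diagonal weighting \Lambda, the decision function becomes

    f(x) = \sum_{i} \alpha_i y_i \, x^\top \Lambda \, x^{(i)} + b
         = x^\top \Big( \Lambda \sum_{i} \alpha_i y_i \, x^{(i)} \Big) + b,

so features whose weight \Lambda_{jj} is driven to (near) zero drop out of the effective hyperplane, matching the sparsity described on this slide.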

  15. Results on Synthetic Data

  16. Results on 15 UCI data sets

  17. Results on 15 UCI data sets
