Learning The Discriminative Power-Invariance Trade-Off

Presentation Transcript


  1. Learning The Discriminative Power-Invariance Trade-Off. Manik Varma and Debajyoti Ray. Presented by Evan and Suporn.

  2. Outline Motivation Background Learning weights Experiments Discussion

  3. Motivation. The image categorization problem: many descriptors are available, but no single descriptor works best for all tasks, so use learning. [Figure: example digit images 6, 9, 4]

  4. Background: SVM

  5. SVM

  6. SVM: margin = 2/‖w‖
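
The equation behind this slide did not survive the transcript; a minimal reconstruction of the margin geometry the title refers to, assuming the standard max-margin notation:

    % points on the two margin hyperplanes satisfy w^T x + b = +1 and w^T x + b = -1,
    % so the distance between the hyperplanes (the margin) is
    \[
      \text{margin} \;=\; \frac{2}{\lVert w \rVert},
    \]
    % and maximizing the margin is equivalent to minimizing (1/2) ||w||^2.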

  7. SVM

  8. SVM (single kernel) • Primal • Dual, linear kernel • Dual, general kernel
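
The formulas on this slide were lost in the transcript; a sketch of the standard soft-margin SVM primal and dual, assuming labels y_i in {-1, +1}, Y = diag(y), and kernel matrix K with K_ij = k(x_i, x_j):

    % primal (soft margin)
    \[
      \min_{w,b,\xi} \;\tfrac{1}{2} w^\top w + C \sum_i \xi_i
      \quad \text{s.t.} \quad y_i\,(w^\top \phi(x_i) + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0
    \]
    % dual with a general kernel (the linear kernel is the special case k(x_i, x_j) = x_i^T x_j)
    \[
      \max_{\alpha} \;\mathbf{1}^\top \alpha - \tfrac{1}{2}\,\alpha^\top Y K Y \alpha
      \quad \text{s.t.} \quad 0 \le \alpha \le C,\;\; \mathbf{1}^\top Y \alpha = 0
    \]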

  9. Single-kernel classifier Combination of basis kernels Learn α, d simultaneously Multiple Kernel Learning
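
The formula for the kernel combination is missing from the transcript; a sketch, assuming non-negative per-kernel weights d_k as used on the later slides:

    % the learned kernel is a weighted sum of the base kernels
    \[
      K(d) = \sum_k d_k K_k, \qquad d_k \ge 0,
    \]
    % and the SVM coefficients alpha and the kernel weights d are learned jointly.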

  10. Learning weights. The joint problem is no longer a QP, so it is inefficient to solve directly; the objective picks up an extra σᵀd regularization term on the kernel weights (see the sketch below).
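
The objective on this slide is garbled in the transcript; a reconstruction, assuming the l1-regularized formulation the later slides use (combined kernel Σk dk Kk with per-kernel costs σk):

    % soft-margin SVM primal plus an l1 penalty sigma^T d on the kernel weights
    \[
      \min_{w,b,\xi,d} \;\tfrac{1}{2} w^\top w + C \sum_i \xi_i + \sigma^\top d
      \quad \text{s.t.} \quad y_i\,(w^\top \phi_d(x_i) + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; d_k \ge 0,
    \]
    % where phi_d is the feature map of K(d) = sum_k d_k K_k.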

  11. Efficient Multiple Kernel Learning. Reformulate the problem as an outer minimization over the weights d of a function T(d), and work with the dual of T so that the inner problem is a standard SVM (sketched below).
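
The reformulation itself was an equation on the slide and is missing here; a sketch, assuming the nested form that slide 12's alternation implies:

    % outer minimization over the kernel weights, inner standard SVM dual
    \[
      \min_{d \ge 0} T(d), \qquad
      T(d) = \sigma^\top d + \max_{\alpha} \Big\{ \mathbf{1}^\top \alpha
             - \tfrac{1}{2} \sum_k d_k\, \alpha^\top Y K_k Y \alpha \Big\}
      \quad \text{s.t.} \quad 0 \le \alpha \le C,\;\; \mathbf{1}^\top Y \alpha = 0.
    \]
    % for fixed d the inner maximization is an ordinary single-kernel SVM with K(d) = sum_k d_k K_k.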

  12. Efficient Multiple Kernel Learning. Alternate until convergence: starting from d0, solve a standard SVM at the current dn to obtain α*, i.e. maximize 1ᵀα + σᵀd − ½ Σk dk αᵀ Y Kk Y α subject to 0 ≤ α ≤ C and 1ᵀYα = 0, then take a gradient-descent step on d to obtain dn+1; at convergence return (α*, d*).
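
A minimal sketch of this alternating scheme in Python, assuming precomputed base kernel matrices and using scikit-learn's precomputed-kernel SVC for the inner SVM solve; the function name, learning rate, and stopping rule are illustrative, not the paper's exact algorithm:

    import numpy as np
    from sklearn.svm import SVC

    def mkl_alternating(kernels, y, sigma, C=10.0, lr=1e-3, n_iters=50):
        """Slide-12 loop: solve an SVM at the current d, then gradient-step d.

        kernels : list of (n, n) precomputed base kernel matrices K_k
        y       : (n,) labels in {-1, +1}
        sigma   : per-kernel regularization weights sigma_k
        """
        d = np.full(len(kernels), 1.0 / len(kernels))   # initial weights d0
        for _ in range(n_iters):
            # inner step: standard SVM dual with the combined kernel K(d) = sum_k d_k K_k
            K = sum(dk * Kk for dk, Kk in zip(d, kernels))
            svm = SVC(C=C, kernel="precomputed").fit(K, y)
            beta = np.zeros(len(y))                     # beta_i = y_i * alpha_i
            beta[svm.support_] = svm.dual_coef_.ravel()
            # outer step: dT/dd_k = sigma_k - 0.5 * alpha^T Y K_k Y alpha
            grad = np.array([s - 0.5 * beta @ Kk @ beta
                             for s, Kk in zip(sigma, kernels)])
            d = np.maximum(d - lr * grad, 0.0)          # project back onto d_k >= 0
        return d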

  13. Experiment: UIUC Texture

  14. Experiment: UIUC Texture

  15. Experiment: Oxford Flower

  16. Experiment: Oxford Flower

  17. Experiment: CalTech 101

  18. Experiment: CalTech 101

  19. Results

  20. Discussion Why do they not tackle “large-scale problems involving hundreds of kernels”? Would that help? Is the claim “what distinguishes one descriptor from another is the trade-off between discriminative power and invariance” true? Should researchers stop looking for a miracle descriptor? Potential of discriminative classification vs. other uses of distance functions over images (e.g., k-NN)?
