
Image Categorization



Presentation Transcript


  1. 03/11/10 Image Categorization Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem

  2. Last classes • Object recognition: localizing an object instance in an image • Face recognition: matching one face image to another

  3. Today’s class: categorization • Overview of image categorization • Representation • Image histograms • Classification • Important concepts in machine learning • What the classifiers are and when to use them

  4. Image Categorization (figure: training pipeline, Training Images → Image Features → Classifier Training (using Training Labels) → Trained Classifier)

  5. Image Categorization (figure: the training pipeline above, plus the testing pipeline, Test Image → Image Features → Trained Classifier → Prediction, e.g. “Outdoor”)

  6. Part 1: Image features (figure: the training pipeline, with the Image Features stage as the focus of this part)

  7. General Principles of Representation • Coverage: ensure that all relevant info is captured • Concision: minimize number of features without sacrificing coverage • Directness: ideal features are independently useful for prediction (figure: image intensity as an example feature)

  8. Image Representations: Histograms Global histogram • Represent distribution of features • Color, texture, depth, … (figure: “Space Shuttle Cargo Bay” example; images from Dave Kauchak)

  9. Image Representations: Histograms Histogram: probability or count of data in each bin • Joint histogram • Requires lots of data • Loss of resolution to avoid empty bins • Marginal histogram • Requires independent features • More data/bin than joint histogram (images from Dave Kauchak)
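
As a rough illustration of the joint vs. marginal distinction, here is a minimal NumPy sketch (the two toy feature channels, the bin count, and the random data are assumptions made only for the example):

```python
import numpy as np

# Toy "pixel" data: two feature channels (say, two color components), values in [0, 1).
rng = np.random.default_rng(0)
pixels = rng.random((10000, 2))

# Joint histogram: one bin per (channel-1, channel-2) cell -> 16 * 16 = 256 bins.
# Captures dependencies between the channels, but needs much more data per bin.
joint, _, _ = np.histogram2d(pixels[:, 0], pixels[:, 1], bins=16, range=[[0, 1], [0, 1]])

# Marginal histograms: 16 bins per channel -> 32 bins total.
# More data per bin, but only a complete description if the channels are independent.
marg0, _ = np.histogram(pixels[:, 0], bins=16, range=(0, 1))
marg1, _ = np.histogram(pixels[:, 1], bins=16, range=(0, 1))

print(joint.size, marg0.size + marg1.size)  # 256 joint bins vs. 32 marginal bins
```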

  10. Image Representations: Histograms Clustering • Use the same cluster centers for all images (figure: “EASE Truss Assembly” and “Space Shuttle Cargo Bay” examples; images from Dave Kauchak)

  11. Computing histogram distance • Histogram intersection (assuming normalized histograms) • Chi-squared histogram matching distance (figure: cars found by color histogram matching using chi-squared)
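
The two measures named above, written in their standard forms for normalized histograms h_i and h_j with bins indexed by m (the 1/2 in the chi-squared distance follows the common convention; the slide's own notation may differ):

```latex
% Histogram intersection (a similarity; assumes normalized histograms)
\mathrm{sim}(h_i, h_j) = \sum_{m} \min\bigl(h_i(m),\, h_j(m)\bigr)

% Chi-squared histogram matching distance
\chi^2(h_i, h_j) = \frac{1}{2} \sum_{m} \frac{\bigl(h_i(m) - h_j(m)\bigr)^2}{h_i(m) + h_j(m)}
```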

  12. Histograms: Implementation issues • Quantization • Grids: fast but only applicable with few dimensions • Clustering: slower but can quantize data in higher dimensions • Matching • Histogram intersection or Euclidean may be faster • Chi-squared often works better • Earth mover’s distance is good when nearby bins represent similar values • Number of bins: few bins need less data but give a coarser representation; many bins need more data but give a finer representation

  13. What kind of things do we compute histograms of? • Color (e.g., in L*a*b* or HSV color space) • Texture (filter banks or HOG over regions)

  14. What kind of things do we compute histograms of? • Histograms of gradient • Visual words (figure: SIFT – Lowe, IJCV 2004)

  15. Image Categorization: Bag of Words Training • Extract keypoints and descriptors for all training images • Cluster descriptors • Quantize descriptors using cluster centers to get “visual words” • Represent each image by normalized counts of “visual words” • Train classifier on labeled examples using histogram values as features Testing • Extract keypoints/descriptors and quantize into visual words • Compute visual word histogram • Compute label or confidence using classifier
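
A compact sketch of that training/testing recipe, assuming descriptors (e.g., 128-D SIFT) have already been extracted per image; scikit-learn's KMeans and LinearSVC stand in for whichever clusterer and classifier are actually used:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bow_histogram(descriptors, kmeans):
    """Quantize one image's descriptors into visual words; return a normalized count histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_bow(descriptor_sets, labels, n_words=200):
    """descriptor_sets: list of (n_i, 128) arrays, one per training image."""
    kmeans = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(np.vstack(descriptor_sets))
    X = np.array([bow_histogram(d, kmeans) for d in descriptor_sets])
    classifier = LinearSVC().fit(X, labels)          # histogram values as features
    return kmeans, classifier

def predict_bow(descriptors, kmeans, classifier):
    """Testing: quantize into visual words, build the histogram, classify."""
    return classifier.predict(bow_histogram(descriptors, kmeans)[None, :])[0]
```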

  16. But what about layout? All of these images have the same color histogram

  17. Spatial pyramid Compute histogram in each spatial bin
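
A minimal single-level version of that idea, assuming each keypoint already has a visual-word index: compute a word histogram per cell of a coarse spatial grid and concatenate (the 2×2 grid and the per-cell normalization are arbitrary choices for this sketch; a full spatial pyramid stacks several grid resolutions):

```python
import numpy as np

def spatial_histogram(keypoint_xy, word_ids, image_size, n_words, grid=(2, 2)):
    """Concatenate visual-word histograms computed in each cell of a spatial grid.

    keypoint_xy: (n, 2) array of (x, y) keypoint locations
    word_ids:    (n,) integer array of visual-word indices for those keypoints
    image_size:  (width, height) of the image
    """
    w, h = image_size
    gx, gy = grid
    # Grid cell index for every keypoint.
    cx = np.minimum((keypoint_xy[:, 0] * gx / w).astype(int), gx - 1)
    cy = np.minimum((keypoint_xy[:, 1] * gy / h).astype(int), gy - 1)
    cell = cy * gx + cx

    hists = []
    for c in range(gx * gy):
        hist = np.bincount(word_ids[cell == c], minlength=n_words).astype(float)
        hists.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(hists)   # layout-aware feature vector
```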

  18. Part 2: Classifiers (figure: the training pipeline, with the Classifier Training stage as the focus of this part)

  19. Learning a classifier • Given a set of features with corresponding labels, learn a function to predict the labels from the features (figure: labeled points of two classes in a 2D feature space x1, x2)

  20. Many classifiers to choose from • SVM • Neural networks • Naïve Bayes • Bayesian network • Logistic regression • Randomized Forests • Boosted Decision Trees • K-nearest neighbor • RBMs • Etc. Which is the best one?

  21. No Free Lunch Theorem

  22. Bias-Variance Trade-off MSE = bias² + variance
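
Written out for squared-error loss at a fixed input x, with \hat f the predictor learned from a random training set and f the true function (irreducible noise omitted, as on the slide):

```latex
\underbrace{\mathbb{E}\bigl[(\hat f(x) - f(x))^2\bigr]}_{\text{MSE}}
  \;=\; \underbrace{\bigl(\mathbb{E}[\hat f(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  \;+\; \underbrace{\mathbb{E}\bigl[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\bigr]}_{\text{variance}}
```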

  23. Bias and Variance Error = bias² + variance (figure: test error vs. model complexity, from high bias / low variance to low bias / high variance, for few vs. many training examples)

  24. Choosing the trade-off • Need validation set • Validation set not same as test set (figure: training error and test error vs. model complexity, from high bias / low variance to low bias / high variance)

  25. Effect of Training Size (fixed classifier) (figure: training error and testing error converge toward the generalization error as the number of training examples grows)

  26. How to measure complexity? • VC dimension • Upper bound on generalization error: training error + a confidence term that depends on the VC dimension h, the training set size N, and the probability 1 − η with which the bound holds (see below)
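
The bound itself did not survive extraction; its standard statement (Vapnik's) is that, with probability 1 − η over the draw of the training set,

```latex
\text{test error} \;\le\; \text{training error} \;+\;
  \sqrt{\frac{h\left(\ln\frac{2N}{h} + 1\right) - \ln\frac{\eta}{4}}{N}}
```

where N is the training set size and h is the VC dimension; the exact constants on the original slide may differ slightly.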

  27. How to reduce variance? • Choose a simpler classifier • Regularize the parameters • Get more training data

  28. The perfect classification algorithm • Objective function: solves what you want to solve • Parameterization: makes assumptions that fit the problem • Regularization: right level of regularization for amount of training data • Training algorithm: can find parameters that maximize objective on training set • Inference algorithm: can solve for objective function in evaluation

  29. Generative vs. Discriminative Classifiers Generative • Training • Maximize joint likelihood of data and labels • Assume (or learn) probability distribution and dependency structure • Can impose priors • Testing • P(y=1, x) / P(y=0, x) > t? • Examples • Foreground/background GMM • Naïve Bayes classifier • Bayesian network Discriminative • Training • Learn to directly predict the labels from the data • Assume form of boundary • Margin maximization or parameter regularization • Testing • f(x) > t ; e.g., wTx > t • Examples • Logistic regression • SVM • Boosted decision trees

  30. Generative Classifier: Naïve Bayes • Objective • Parameterization • Regularization • Training • Inference (figure: graphical model with label y and conditionally independent features x1, x2, x3)

  31. Using Naïve Bayes • Simple thing to try for categorical data • Very fast to train/test
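
A minimal usage sketch; MultinomialNB on count-style features (e.g., visual-word counts) is an assumption here, since the slides don't prescribe a particular implementation:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy count features (rows = images, columns = visual-word counts) and binary labels.
X_train = np.array([[3, 0, 1], [2, 1, 0], [0, 4, 2], [1, 3, 3]])
y_train = np.array([0, 0, 1, 1])

nb = MultinomialNB(alpha=1.0)    # alpha = Laplace smoothing, a simple prior/regularizer
nb.fit(X_train, y_train)         # training amounts to counting, so it is very fast
print(nb.predict([[1, 0, 0]]))   # inference is a few dot products per class
```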

  32. Classifiers: Logistic Regression • Objective • Parameterization • Regularization • Training • Inference (figure: labeled points of two classes in a 2D feature space x1, x2)

  33. Using Logistic Regression • Quick, simple classifier (try it first) • Use L2 or L1 regularization • L1 does feature selection and is robust to irrelevant features
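
A sketch of those recommendations with scikit-learn (the toy data and the liblinear solver, one of the solvers that supports L1, are assumptions for the example):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                  # 50 features, most of them irrelevant
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # labels depend on only the first two

# L2 shrinks all weights; L1 drives irrelevant weights to exactly zero (feature selection).
l2 = LogisticRegression(penalty='l2', C=1.0).fit(X, y)
l1 = LogisticRegression(penalty='l1', C=1.0, solver='liblinear').fit(X, y)
print((l2.coef_ != 0).sum(), (l1.coef_ != 0).sum())   # L1 keeps far fewer nonzero weights
```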

  34. Classifiers: Linear SVM • Objective • Parameterization • Regularization • Training • Inference (figure: two classes in a 2D feature space x1, x2)

  35. Classifiers: Linear SVM • Objective • Parameterization • Regularization • Training • Inference (figure: the same two classes in a 2D feature space)

  36. Classifiers: Linear SVM • Objective • Parameterization • Regularization • Training • Inference (figure: the two classes, now with one “o” point lying among the “x” points)

  37. Classifiers: Kernelized SVM • Objective • Parameterization • Regularization • Training • Inference (figure: two classes in a 2D feature space that are not linearly separable)

  38. Using SVMs • Good general purpose classifier • Generalization depends on margin, so works well with many weak features • No feature selection • Usually requires some parameter tuning • Choosing kernel • Linear: fast training/testing – start here • RBF: related to neural networks, nearest neighbor • Chi-squared, histogram intersection: good for histograms (but slower, esp. chi-squared) • Can learn a kernel function
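
A sketch of the suggested progression: a linear SVM on the histogram features first, then a histogram-friendly kernel if needed. Passing the histogram-intersection kernel as a precomputed Gram matrix is one way to use a custom kernel with scikit-learn; the helper names are mine:

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

def intersection_kernel(A, B):
    """Histogram-intersection kernel: K[i, j] = sum_m min(A[i, m], B[j, m])."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

def fit_linear(X_train, y_train):
    return LinearSVC(C=1.0).fit(X_train, y_train)             # fast: start here

def fit_intersection(X_train, y_train):
    K = intersection_kernel(X_train, X_train)                  # (n_train, n_train) Gram matrix
    return SVC(kernel='precomputed', C=1.0).fit(K, y_train)

def predict_intersection(svm, X_train, X_test):
    # Kernel between test and training histograms: shape (n_test, n_train).
    return svm.predict(intersection_kernel(X_test, X_train))
```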

  39. Classifiers: Decision Trees • Objective • Parameterization • Regularization • Training • Inference (figure: two classes in a 2D feature space x1, x2)

  40. Ensemble Methods: Boosting (figure from Friedman et al. 2000)

  41. Boosted Decision Trees (figure: an ensemble of small decision trees with node tests such as “High in Image?”, “Gray?”, “Smooth?”, “Green?”, “Many Long Lines?”, “Blue?”, “Very High Vanishing Point?”, combined to estimate P(label | good segment, data) over Ground / Vertical / Sky) [Collins et al. 2002]

  42. Using Boosted Decision Trees • Flexible: can deal with both continuous and categorical variables • How to control bias/variance trade-off • Size of trees • Number of trees • Boosting trees often works best with a small number of well-designed features • Boosting “stumps” (single-split trees) can give a fast classifier
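
A sketch of those bias/variance knobs using scikit-learn's gradient-boosted trees as one concrete boosted-tree implementation (the slides don't tie themselves to a specific library; the dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# max_depth sets the size of each tree (depth 1 = decision stumps -> a very fast classifier);
# n_estimators sets the number of trees; together they control the bias/variance trade-off.
stumps = GradientBoostingClassifier(max_depth=1, n_estimators=200).fit(X_tr, y_tr)
deeper = GradientBoostingClassifier(max_depth=3, n_estimators=200).fit(X_tr, y_tr)
print(stumps.score(X_te, y_te), deeper.score(X_te, y_te))
```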

  43. K-nearest neighbor • Objective • Parameterization • Regularization • Training • Inference (figure: two query points “+” among labeled examples of two classes in a 2D feature space x1, x2)

  44. 1-nearest neighbor (figure: the query points labeled by their single nearest neighbor)

  45. 3-nearest neighbor (figure: the query points labeled by a vote of their 3 nearest neighbors)

  46. 5-nearest neighbor (figure: the query points labeled by a vote of their 5 nearest neighbors)

  47. Using K-NN • Simple, so another good one to try first • With infinite examples, 1-NN provably has error that is at most twice Bayes optimal error
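
A minimal usage sketch (k = 3 and the toy points are arbitrary):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.7, 0.9]])
y_train = np.array([0, 0, 1, 1, 1])

# "Training" just stores the examples; k trades off variance (small k) against bias (large k).
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict([[0.75, 0.85]]))   # label of the majority among the 3 nearest neighbors
```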

  48. Clustering (unsupervised) (figure: unlabeled points grouped into clusters in a 2D feature space x1, x2)

  49. What to remember about classifiers • No free lunch: machine learning algorithms are tools, not dogmas • Try simple classifiers first • Better to have smart features and simple classifiers than simple features and smart classifiers • Use increasingly powerful classifiers with more training data (bias-variance tradeoff)

  50. Next class • Object category detection overview
