
Kernel-Based Methods


Presentation Transcript


  1. Kernel-Based Methods • Presented by Jason Friedman and Lena Gorelick • Advanced Topics in Computer and Human Vision, Spring 2003

  2. Agenda… • Structural Risk Minimization (SRM) • Support Vector Machines (SVM) • Feature Space vs. Input Space • Kernel PCA • Kernel Fisher Discriminant Analysis (KFDA)

  3. Agenda… • Structural Risk Minimization (SRM) • Support Vector Machines (SVM) • Feature Space vs. Input Space • Kernel PCA • Kernel Fisher Discriminant Analysis (KFDA)

  4. Structural Risk Minimization (SRM) • Definition: • Training set with l observations: (x_1, y_1), …, (x_l, y_l). Each observation consists of a pair: a vector x_i ∈ R^n and a label y_i ∈ {-1, 1} (e.g., a 16x16 pixel image gives n = 16x16 = 256)

  5. Structural Risk Minimization (SRM) • The task: "generalization" - find a mapping f: x ↦ y that predicts the label of an unseen observation • Assumption: training and test data are drawn from the same probability distribution P(x, y), i.e. (x, y) is "similar" to (x_1, y_1), …, (x_l, y_l)

  6. Structural Risk Minimization (SRM) – Learning Machine • Definition: • A learning machine is a family of functions {f(·, α)}, where α is a set of parameters. • For a task of learning two classes, f(x, α) ∈ {-1, 1} ∀ x • Example: the class of oriented lines in R^2: f(x, α) = sign(α_1 x_1 + α_2 x_2 + α_3)
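As a small illustration (added here, not part of the original slides), a NumPy sketch of this family: every choice of the parameter vector α = (α_1, α_2, α_3) picks out one oriented line in R^2, and the learning machine is the whole family of such functions.

```python
import numpy as np

def f(x, alpha):
    """One member of the family {f(., alpha)}: an oriented line in R^2.

    x     : point(s) of shape (2,) or (m, 2)
    alpha : (a1, a2, a3), the parameters of the line a1*x1 + a2*x2 + a3 = 0
    Returns labels in {-1, +1}.
    """
    x = np.atleast_2d(x)
    a1, a2, a3 = alpha
    return np.where(a1 * x[:, 0] + a2 * x[:, 1] + a3 >= 0, 1, -1)

# Two parameter settings = two different members of the same learning machine
print(f([[1.0, 2.0], [-1.0, -2.0]], (1.0, 1.0, 0.0)))   # [ 1 -1]
print(f([[1.0, 2.0], [-1.0, -2.0]], (0.0, -1.0, 0.5)))  # [-1  1]
```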

  7. Structural Risk Minimization (SRM) – Capacity vs. Generalization • Definition: • The capacity of a learning machine measures its ability to learn any training set without error. [Figure: too much capacity ("Does it have the same # of leaves?") leads to overfitting; too little capacity ("Is the color green?") leads to underfitting]

  8. Structural Risk Minimization (SRM) – Capacity vs. Generalization • For small sample sizes overfitting or underfitting might occur • Best generalization = right balance between accuracy and capacity

  9. Structural Risk Minimization (SRM) – Capacity vs. Generalization • Solution: Restrict the complexity (capacity) of the function class. • Intuition: “Simple” function that explains most of the data is preferable to a “complex” one.

  10. Structural Risk Minimization (SRM) - VC dimension • What is a "simple"/"complex" function? • Definition: • Given l points (they can be labeled in 2^l ways) • The set of points is shattered by the function class {f(·, α)} if for each labeling there is a function which correctly assigns those labels.

  11. Structural Risk Minimization (SRM) - VC dimension • Definition • The VC dimension of {f(·, α)} is the maximum number of points that can be shattered by {f(·, α)} and is a measure of capacity.

  12. Structural Risk Minimization (SRM) - VC dimension • Theorem: the VC dimension of the set of oriented hyperplanes in R^n is n + 1. • Low # of parameters ⇒ low VC dimension
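A hedged sketch (added, not from the slides) of what shattering means computationally for oriented lines in R^2: enumerate all 2^l labelings of a point set and test each one for linear separability, posed here as feasibility of y_i (w · x_i + b) ≥ 1 and checked with SciPy's linear-programming solver. Three points in general position can be shattered; four points in the XOR configuration cannot, consistent with a VC dimension of n + 1 = 3.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """Feasibility of y_i * (w . x_i + b) >= 1, written as a linear program.
    The variables are (w1, w2, b); the objective is irrelevant (c = 0)."""
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
    b_ub = -np.ones(len(X))
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3)
    return res.success

def shattered(X):
    """True if oriented lines realize every one of the 2^l labelings of X."""
    return all(separable(X, np.array(lab, dtype=float))
               for lab in itertools.product([-1, 1], repeat=len(X)))

print(shattered(np.array([[0., 0.], [1., 0.], [0., 1.]])))            # True
print(shattered(np.array([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])))  # False: the XOR labeling fails
```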

  13. Structural Risk Minimization (SRM) - Bounds • Definition: actual risk R(α) = ∫ ½ |y − f(x, α)| dP(x, y) • Goal: minimize R(α) • But we can't measure the actual risk, since we don't know P(x, y)

  14. Structural Risk Minimization (SRM) - Bounds • Definition: empirical risk R_emp(α) = (1 / 2l) Σ_i |y_i − f(x_i, α)| • R_emp(α) → R(α) as l → ∞, but for a small training set large deviations might occur

  15. Structural Risk Minimization (SRM) - Bounds • Risk bound: with probability (1 − η), R(α) ≤ R_emp(α) + sqrt((h (log(2l / h) + 1) − log(η / 4)) / l), where h is the VC dimension of the function class and the square-root term is the confidence term • Not valid for infinite VC dimension • Note: the bound is independent of P(x, y)
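A small sketch (added here) that evaluates the confidence term of this bound for a few values of h, showing how the bound loosens as the VC dimension grows relative to the sample size l:

```python
import numpy as np

def vc_confidence(h, l, eta=0.05):
    """Confidence term of the VC risk bound, valid for finite h:
    sqrt((h * (log(2l / h) + 1) - log(eta / 4)) / l)."""
    return np.sqrt((h * (np.log(2.0 * l / h) + 1.0) - np.log(eta / 4.0)) / l)

for h in (10, 100, 1000):
    print(h, round(vc_confidence(h, l=10_000), 3))
# The term grows with h: richer function classes pay a larger capacity penalty.
```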

  16. Structural Risk Minimization (SRM) -Bounds

  17. Structural Risk Minimization (SRM) -Bounds

  18. Structural Risk Minimization (SRM) - Principled Method • A principled method for choosing a learning machine for a given task:

  19. SRM [Figure: risk bound as a function of complexity] • Divide the class of functions into nested subsets • Either calculate h for each subset, or get a bound on it • Train each subset to achieve minimal empirical error • Choose the subset with the minimal risk bound (a small numeric sketch follows below)
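To make the recipe concrete, a hedged numeric sketch (added here; the (h, R_emp) values are invented purely for illustration): each nested subset is summarized by its VC dimension and the empirical risk reached after training, and SRM picks the subset that minimizes empirical risk plus confidence term.

```python
import numpy as np

def risk_bound(r_emp, h, l, eta=0.05):
    """Empirical risk plus VC confidence term: the SRM selection criterion."""
    return r_emp + np.sqrt((h * (np.log(2.0 * l / h) + 1.0) - np.log(eta / 4.0)) / l)

l = 1000
# Nested subsets S1 ⊂ S2 ⊂ S3 ⊂ S4: capacity grows, empirical error shrinks.
subsets = [(5, 0.20), (20, 0.10), (80, 0.05), (320, 0.04)]   # (h, R_emp), illustrative
bounds = [risk_bound(r, h, l) for h, r in subsets]
best = int(np.argmin(bounds))
print("risk bounds:", np.round(bounds, 2), "-> choose subset", best + 1)
```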

  20. Agenda… • Structural Risk Minimization (SRM) • Support Vector Machines (SVM) • Feature Space vs. Input Space • Kernel PCA • Kernel Fisher Discriminant Analysis (KFDA)

  21. Support Vector Machines (SVM) • Currently the "en vogue" approach to classification • Successful applications in bioinformatics, text, handwriting recognition, and image processing • Introduced by Boser, Guyon and Vapnik, 1992 • SVMs are a particular instance of kernel machines

  22. Linear SVM – Separable case • Two given classes are linearly separable

  23. Linear SVM - definitions • Separating hyperplane H: w · x + b = 0 • w is normal to H • |b| / ||w|| is the perpendicular distance from H to the origin • d+ (d−) is the shortest distance from H to the closest positive (negative) point.

  24. Linear SVM - definitions

  25. Linear SVM - definitions • If H is a separating hyperplane, then the closest positive and negative points define two hyperplanes H1 and H2 parallel to H • No training points fall between H1 and H2

  26. Linear SVM - definitions • By scaling w and b, we can require that x_i · w + b ≥ +1 for y_i = +1 and x_i · w + b ≤ −1 for y_i = −1 • Or more simply: y_i (x_i · w + b) − 1 ≥ 0 ∀ i • Equality holds ⇔ x_i lies on H1 or H2

  27. Linear SVM - definitions • Note: w is no longer a unit vector • Margin is now 2 / ||w|| • Find hyperplane with the largest margin.
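A tiny sketch (added, not from the slides) of the geometry: for a canonical (w, b) the margin hyperplanes H1 and H2 are the level sets w · x + b = ±1, and the distance between them is 2 / ||w||. The numbers below are hypothetical.

```python
import numpy as np

# A hypothetical canonical hyperplane in R^2
w = np.array([2.0, 0.0])
b = -1.0

margin = 2.0 / np.linalg.norm(w)           # distance between H1 and H2
print("margin:", margin)                   # 1.0

# Points on H1 (w.x + b = +1) and H2 (w.x + b = -1) meet the constraint with equality
x_plus, x_minus = np.array([1.0, 3.0]), np.array([0.0, -2.0])
print(w @ x_plus + b, w @ x_minus + b)     # 1.0 -1.0
```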

  28. Linear SVM – maximizing margin • Maximizing the margin ⇔ minimizing ||w||^2 • ⇒ more room for unseen points to fall • ⇒ restricts the capacity (R is the radius of the smallest ball around the data)

  29. Linear SVM – Constrained Optimization • Introduce Lagrange multipliers α_i ≥ 0, one per constraint • "Primal" formulation: L_P = ½ ||w||^2 − Σ_i α_i y_i (x_i · w + b) + Σ_i α_i • Minimize L_P with respect to w and b • Require that the derivatives of L_P with respect to all α_i vanish, subject to α_i ≥ 0

  30. Linear SVM – Constrained Optimization • The objective function is quadratic • Each linear constraint defines a convex set • The intersection of convex sets is a convex set • ⇒ we can formulate the "Wolfe dual" problem

  31. Linear SVM – Constrained Optimization - The Solution • Maximize L_P with respect to α_i • Require that its gradients with respect to w and b vanish: ∂L_P/∂w = 0 ⇒ w = Σ_i α_i y_i x_i and ∂L_P/∂b = 0 ⇒ Σ_i α_i y_i = 0 • Substitute into L_P to give the dual: L_D = Σ_i α_i − ½ Σ_{i,j} α_i α_j y_i y_j (x_i · x_j) • Maximize L_D with respect to α_i
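A hedged sketch (added here) that solves this dual numerically on a toy separable data set with SciPy's general-purpose SLSQP optimizer, then recovers w = Σ_i α_i y_i x_i and b from a support vector. A dedicated QP solver would normally be used; this only makes the formulas concrete.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data in R^2
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.0],
              [0.0, 0.0], [-1.0, 0.5], [0.5, -1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

H = (y[:, None] * X) @ (y[:, None] * X).T          # H_ij = y_i y_j (x_i . x_j)

def neg_dual(a):                                   # minimizing -L_D maximizes L_D
    return 0.5 * a @ H @ a - a.sum()

res = minimize(neg_dual, x0=np.zeros(len(y)), method="SLSQP",
               bounds=[(0.0, None)] * len(y),
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x

w = (alpha * y) @ X                                # w = sum_i alpha_i y_i x_i
sv = int(np.argmax(alpha))                         # index of a point with alpha_i > 0
b = y[sv] - w @ X[sv]                              # from y_sv (w . x_sv + b) = 1
print("alpha:", np.round(alpha, 3))                # nonzero only for a few points
print("w:", np.round(w, 3), " b:", np.round(b, 3))
print("y_i (w.x_i + b):", np.round(y * (X @ w + b), 3))   # roughly >= 1 for all points
```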

  32. Linear SVM – Constrained Optimization • Using the Karush-Kuhn-Tucker conditions: α_i (y_i (x_i · w + b) − 1) = 0 ∀ i • If α_i > 0 then x_i lies either on H1 or H2 ⇒ the solution is sparse in α_i • Those training points are called "support vectors". Their removal would change the solution
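To see the sparsity in practice, a short sketch (added, not from the slides) using scikit-learn's SVC with a linear kernel and a large C to approximate the hard-margin case: only the support vectors carry nonzero α_i, and they lie on H1 or H2.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.4, size=(50, 2)),
               rng.normal([-2, -2], 0.4, size=(50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear", C=1e6).fit(X, y)        # large C ~ hard margin
print("support vectors:", len(clf.support_), "out of", len(X))
# dual_coef_ stores y_i * alpha_i for the support vectors only; all other alpha_i are 0
print("nonzero alphas:", np.abs(clf.dual_coef_).ravel().round(3))
margins = y[clf.support_] * (X[clf.support_] @ clf.coef_.ravel() + clf.intercept_[0])
print("support vectors sit on H1/H2:", np.round(margins, 3))   # approximately 1
```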

  33. SVM – Test Phase • Given an unseen sample x, we take the class of x to be f(x) = sign(w · x + b) = sign(Σ_i α_i y_i (x_i · x) + b)

  34. Linear SVM – Non-separable case • The separable case corresponds to an empirical risk of zero. • For noisy data this might not be the minimum of the actual risk (overfitting). • The hard-margin problem has no feasible solution in the non-separable case

  35. Linear SVM – Non-separable case • Relax the constraints by introducing positive slack variables ξ_i: x_i · w + b ≥ +1 − ξ_i for y_i = +1, x_i · w + b ≤ −1 + ξ_i for y_i = −1, with ξ_i ≥ 0 ∀ i • Σ_i ξ_i is an upper bound on the number of training errors

  36. Linear SVM – Non-separable case • Assign extra cost to errors • Minimize ½ ||w||^2 + C Σ_i ξ_i, where C is a penalty parameter chosen by the user
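A brief sketch (added here) of how the user-chosen penalty C trades margin width against slack on overlapping classes, using scikit-learn's soft-margin SVC: a small C tolerates many margin violations and keeps a wide margin, a large C tolerates few.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Overlapping classes: the hard-margin problem would have no feasible solution here
X = np.vstack([rng.normal([1, 1], 1.0, size=(100, 2)),
               rng.normal([-1, -1], 1.0, size=(100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_.ravel())
    violations = int(np.sum(y * clf.decision_function(X) < 1))   # points with slack > 0
    print(f"C={C:<7} margin={margin:.2f}  margin violations={violations}")
```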

  37. Linear SVM – Non-separable case • Lagrange formulation again, with additional multipliers for the constraints ξ_i ≥ 0 • "Wolfe dual" problem - maximize L_D = Σ_i α_i − ½ Σ_{i,j} α_i α_j y_i y_j (x_i · x_j) subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0 • The solution: w = Σ_i α_i y_i x_i

  38. Linear SVM – Non-separable case • Using the Karush-Kuhn-Tucker conditions: α_i (y_i (x_i · w + b) − 1 + ξ_i) = 0 • The solution is sparse in α_i

  39. Nonlinear SVM • A nonlinear decision function might be needed

  40. Nonlinear SVM - Feature Space • Map the data to a high-dimensional (possibly infinite-dimensional) feature space F via Φ: R^n → F • The solution depends on the data only through dot products Φ(x_i) · Φ(x_j) • If there were a function k(x_i, x_j) = Φ(x_i) · Φ(x_j) ⇒ no need to know Φ explicitly
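A tiny sketch (added, not from the slides) of why such a k suffices: for the degree-2 homogeneous polynomial kernel on R^2, the explicit map Φ(x) = (x_1^2, √2 x_1 x_2, x_2^2) satisfies Φ(x) · Φ(z) = (x · z)^2, so the feature-space dot product can be computed directly in input space.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 homogeneous polynomial kernel on R^2."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def k(x, z):
    """The same dot product computed in input space: k(x, z) = (x . z)^2."""
    return (x @ z) ** 2

x, z = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(phi(x) @ phi(z), k(x, z))   # both equal (1*3 + 2*(-1))^2 = 1.0
```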

  41. Nonlinear SVM – Toy example [Figure: the same toy data set shown in input space and in feature space]

  42. Nonlinear SVM – Avoid the Curse • Curse of dimensionality: the difficulty of an estimation problem increases drastically with the dimension • But! Learning in F may be simpler if one uses a low-complexity function class (hyperplanes)

  43. Nonlinear SVM - Kernel Functions • Kernel functions exist! • They effectively compute dot products in feature space • We can use them without knowing Φ and F explicitly • Given a kernel, Φ and F are not unique • The F with the smallest dimension is called the minimal embedding space

  44. Nonlinear SVM - Kernel Functions • Mercer's condition: there exists a pair {Φ, F} such that k(x, y) = Φ(x) · Φ(y) iff, for any g(x) such that ∫ g(x)^2 dx is finite, ∫ k(x, y) g(x) g(y) dx dy ≥ 0
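Mercer's condition is hard to verify analytically for an arbitrary function, but a necessary consequence is easy to check numerically (a sketch added here): on any finite sample, the Gram matrix K_ij = k(x_i, x_j) of a valid kernel must be symmetric positive semi-definite, i.e. it can have no negative eigenvalues.

```python
import numpy as np

def gram(kernel, X):
    """Gram matrix K_ij = k(x_i, x_j) on a finite sample."""
    return np.array([[kernel(a, b) for b in X] for a in X])

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))

rbf       = lambda a, b: np.exp(-np.sum((a - b) ** 2) / 2.0)   # a valid Mercer kernel
not_a_ker = lambda a, b: -np.sum((a - b) ** 2)                 # negative squared distance

for name, kern in [("Gaussian RBF", rbf), ("negative squared distance", not_a_ker)]:
    eig_min = np.linalg.eigvalsh(gram(kern, X)).min()
    print(f"{name}: smallest Gram eigenvalue = {eig_min:.4f}")
# The RBF Gram matrix is PSD (eigenvalues >= 0 up to round-off); the second is not.
```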

  45. Nonlinear SVM - Kernel Functions • Formulation of the algorithm in terms of kernels: maximize L_D = Σ_i α_i − ½ Σ_{i,j} α_i α_j y_i y_j k(x_i, x_j), and classify with f(x) = sign(Σ_i α_i y_i k(x_i, x) + b)

  46. Nonlinear SVM - Kernel Functions • Kernels frequently used: • polynomial k(x, y) = (x · y + 1)^p • Gaussian RBF k(x, y) = exp(−||x − y||^2 / 2σ^2) • sigmoid k(x, y) = tanh(κ (x · y) − δ)
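The three families above written out as plain NumPy functions (a sketch added here; the parameter names p, sigma, kappa and delta simply mirror the formulas):

```python
import numpy as np

def polynomial(x, y, p=3):
    return (x @ y + 1.0) ** p

def gaussian_rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sigmoid(x, y, kappa=1.0, delta=1.0):
    # Note: this one satisfies Mercer's condition only for certain kappa and delta
    return np.tanh(kappa * (x @ y) - delta)

x, y = np.array([1.0, 0.5]), np.array([-0.5, 2.0])
print(polynomial(x, y), gaussian_rbf(x, y), sigmoid(x, y))
```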

  47. Nonlinear SVM - Feature Space • d = 256, p = 4 ⇒ dim(F) = 183,181,376 • A hyperplane {w, b} in F requires dim(F) + 1 parameters • Solving the SVM means adjusting only l + 1 parameters
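A quick check (added here) of where that number comes from, assuming dim(F) counts the monomials of degree exactly p in d input variables, i.e. dim(F) = C(d + p − 1, p):

```python
from math import comb

d, p = 256, 4
print(comb(d + p - 1, p))   # 183181376, matching the figure on the slide
```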

  48. SVM - Solution • L_D is convex ⇒ the solution is global • Two types of non-uniqueness: • {w, b} is not unique • {w, b} is unique, but the set {α_i} is not - prefer the set with fewer support vectors (sparser)

  49. Nonlinear SVM-Toy Example

  50. Nonlinear SVM-Toy Example
