
Computational Learning Theory PAC IID VC Dimension SVM


Presentation Transcript


  1. Computational Learning Theory • PAC • IID • VC Dimension • SVM Marius Bulacu Kunstmatige Intelligentie / RuG

  2. The Problem • Why does learning work? • How do we know that the learned hypothesis h is close to the target function f if we do not know what f is? The answer is provided by computational learning theory.

  3. The Answer • Any hypothesis h that is consistent with a sufficiently large number of training examples is unlikely to be seriously wrong. Therefore it must be: Probably Approximately Correct (PAC).

  4. The Stationarity Assumption • The training and test sets are drawn randomly from the same population of examples using the same probability distribution. Therefore training and test data are Independently and Identically Distributed (IID): “the future is like the past”.

  5. How many examples are needed? • Sample complexity: N ≥ (1/ε) (ln|H| + ln(1/δ)) • N: number of examples • |H|: size of the hypothesis space • ε: probability that h and f disagree on a randomly drawn example • δ: probability that a wrong hypothesis consistent with all examples still exists
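A small, purely illustrative calculation of this bound (the function name and the example numbers are mine, not the slide's):

```python
import math

def pac_sample_size(epsilon, delta, ln_h):
    """Smallest integer N with N >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((ln_h + math.log(1.0 / delta)) / epsilon)

# Illustrative example: the space of all boolean functions over 10 boolean
# attributes has |H| = 2^(2^10), so ln|H| = 2^10 * ln 2.
ln_h = (2 ** 10) * math.log(2)
print(pac_sample_size(epsilon=0.05, delta=0.05, ln_h=ln_h))  # ~14256 examples
```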

  6. Formal Derivation • H: the set of all possible hypotheses • Hbad: the subset of “wrong” hypotheses, i.e. those whose error relative to the target f exceeds ε [Figure: Venn diagram showing Hbad inside H, with the target f outside Hbad]
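A compact write-up of the standard counting argument this slide presumably sketches (reconstructed here; the original figure and algebra are not in the transcript):

```latex
\begin{align*}
&\Pr(h_b \text{ consistent with one example}) \le 1-\epsilon
  && \text{since } \mathrm{error}(h_b) > \epsilon \text{ for } h_b \in H_{\mathrm{bad}} \\
&\Pr(h_b \text{ consistent with } N \text{ IID examples}) \le (1-\epsilon)^N \\
&\Pr(\text{some } h_b \in H_{\mathrm{bad}} \text{ is consistent})
  \le |H_{\mathrm{bad}}|\,(1-\epsilon)^N \le |H|\,(1-\epsilon)^N \\
&\text{Requiring } |H|\,(1-\epsilon)^N \le \delta \text{ and using } 1-\epsilon \le e^{-\epsilon}:
  \quad N \ge \frac{1}{\epsilon}\Big(\ln|H| + \ln\tfrac{1}{\delta}\Big)
\end{align*}
```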

  7. What if the hypothesis space is infinite? • We can no longer use our result for finite H (it depends on ln|H|) • We need some other measure of the complexity of H • Vapnik-Chervonenkis (VC) dimension
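The slides stop at naming the measure; the standard way the VC dimension replaces ln|H| in the sample-complexity bound (stated from the general PAC literature, not taken from these slides) is:

```latex
% Sample complexity for an infinite hypothesis space in terms of VC dimension
% (constants vary by source):
N = O\!\left(\frac{1}{\epsilon}\left(\mathrm{VC}(H)\,\ln\frac{1}{\epsilon} + \ln\frac{1}{\delta}\right)\right)
```

For instance, linear separators in the plane have VC dimension 3: any three points in general position can be shattered, but no set of four can.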

  8. SVM (1): Kernels • Implicit mapping to a higher-dimensional space where linear separation is possible • Kernels: polynomial, radial basis, sigmoid [Figure: a complicated separation boundary in the original (f1, f2) space becomes a simple separating hyperplane in the mapped (f1, f2, f3) space]
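A minimal NumPy sketch of the three kernels named on the slide; the parameter names and default values (degree, gamma, coef0) are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    """K(x, y) = (x . y + coef0)^degree"""
    return (np.dot(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=1.0):
    """K(x, y) = exp(-gamma * ||x - y||^2)  (radial basis function)"""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def sigmoid_kernel(x, y, gamma=1.0, coef0=0.0):
    """K(x, y) = tanh(gamma * x . y + coef0)"""
    return np.tanh(gamma * np.dot(x, y) + coef0)

# Each kernel evaluates an inner product in some higher-dimensional feature
# space without constructing the mapping explicitly (the "kernel trick").
x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(polynomial_kernel(x, y), rbf_kernel(x, y), sigmoid_kernel(x, y))
```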

  9. SVM (2): Max Margin • From all the possible separating hyperplanes, select the one that gives the maximum margin – this “best” separating hyperplane gives good generalization. • The solution is found by quadratic optimization – “learning”. [Figure: the max-margin separating hyperplane in the (f1, f2) plane, with the support vectors lying on the margin]
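A small illustration of the max-margin idea using scikit-learn's SVC (an assumed setup; the slides do not reference any library): with a linear kernel and a large C the fit approximates the hard-margin quadratic program, and the points it retains as support vectors are the ones lying on the margin.

```python
import numpy as np
from sklearn.svm import SVC

# Tiny linearly separable toy set (illustrative data, not from the slides).
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# A large C approximates the hard-margin problem; the fit itself is the
# quadratic optimization the slide refers to as "learning".
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print("separating hyperplane: w =", clf.coef_[0], "b =", clf.intercept_[0])
print("support vectors:\n", clf.support_vectors_)
print("margin width =", 2.0 / np.linalg.norm(clf.coef_[0]))
```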
