The Computational Complexity of Searching for Predictive Hypotheses

Presentation Transcript


  1. The Computational Complexity of Searching for Predictive Hypotheses  Shai Ben-David, Computer Science Dept., Technion

  2. Introduction  The complexity of learning is measured mainly along two axes: information and computation. Information complexity enjoys a rich theory that yields rather crisp sample-size and convergence-rate guarantees. The focus of this talk is the computational complexity of learning. While it plays a critical role in any application, its theoretical understanding is far less satisfactory.

  3. Outline of this Talk  1. Some background. 2. Survey of recent pessimistic hardness results. 3. New efficient learning algorithms for some basic learning architectures.

  4. The Label Prediction Problem  Formal definition, with a running example: Given some domain set X (e.g., data files of drivers). A sample S of labeled members of X is generated by some (unknown) distribution (drivers in the sample are labeled according to whether they filed an insurance claim). For a next point x, predict its label (will the current customer file a claim?).

  5. The Agnostic Learning Paradigm  Choose a hypothesis class H of subsets of X. For an input sample S, find some h in H that fits S well. For a new point x, predict a label according to its membership in h.
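
  As a concrete illustration of this paradigm (not part of the talk), here is a minimal Python sketch in which H is taken to be the class of 1-D threshold functions and the learner returns the member of H with the best agreement on the sample; the class choice, data, and function names are illustrative assumptions.

      # Minimal sketch of the agnostic paradigm with a toy hypothesis class:
      # H = 1-D thresholds { h_t : h_t(x) = 1 iff x >= t }.  Illustrative only.

      def agreement(h, sample):
          """Number of sample points that h labels correctly."""
          return sum(1 for x, y in sample if h(x) == y)

      def erm_threshold(sample):
          """Return the threshold hypothesis in H that best fits the sample."""
          thresholds = [x for x, _ in sample]
          best_t = max(thresholds,
                       key=lambda t: agreement(lambda x: int(x >= t), sample))
          return lambda x: int(x >= best_t)

      # Usage: fit on a labeled sample S, then predict the label of a new point.
      S = [(0.1, 0), (0.4, 0), (0.5, 0), (0.6, 1), (0.9, 1)]
      h = erm_threshold(S)
      print(h(0.7))  # predicted label for a new point x = 0.7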

  6. The Mathematical Justification  If H is not too rich (has small VC-dimension) then, for every h in H, the agreement ratio of h on the sample S is a good estimate of its probability of success on a new x.

  7. The Mathematical Justification - Formally  If S is sampled i.i.d. by some D over X × {0, 1}, then with probability > 1 - δ, for every h in H the agreement ratio |{(x,y) ∈ S : h(x) = y}| / |S| is within ε of the probability of success Pr_(x,y)~D[h(x) = y].
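
  For reference, a standard way to spell this guarantee out in full (the slide leaves the bound implicit; the constant c and the exact form below are the usual VC-style statement, not a quotation from the talk), in LaTeX:

      \Pr_{S \sim D^m}\!\left[\ \forall h \in H:\
          \Bigl|\ \tfrac{1}{m}\bigl|\{(x,y) \in S : h(x) = y\}\bigr|
          - \Pr_{(x,y) \sim D}[h(x) = y]\ \Bigr|
          \le c \sqrt{\tfrac{\mathrm{VCdim}(H) + \ln(1/\delta)}{m}}
      \ \right] \ \ge\ 1 - \delta

  Here m = |S| and c is an absolute constant, so the agreement ratio converges to the true success probability at a rate governed by the VC-dimension of H.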

  8. The Model Selection Issue  [Slide diagram: the output of the learning algorithm is compared with the best regressor for P; the gap decomposes into the approximation error of the class H, the estimation error, and the computational error.]
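
  The decomposition depicted on this slide can be written out as an identity (the notation below is introduced here for clarity and is not from the slides): with err_P denoting error with respect to P, A(S) the algorithm's output, and \hat{h}_S the empirical-error minimizer in H,

      \mathrm{err}_P(A(S)) - \inf_{f}\,\mathrm{err}_P(f)
        = \underbrace{\inf_{h \in H}\mathrm{err}_P(h) - \inf_{f}\,\mathrm{err}_P(f)}_{\text{approximation error}}
        + \underbrace{\mathrm{err}_P(\hat{h}_S) - \inf_{h \in H}\mathrm{err}_P(h)}_{\text{estimation error}}
        + \underbrace{\mathrm{err}_P(A(S)) - \mathrm{err}_P(\hat{h}_S)}_{\text{computational error}}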

  9. The Computational Problem  Input: a finite set of {0, 1}-labeled points S in R^n. Output: some ‘hypothesis’ h in H that maximizes the number of correctly classified points of S.
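
  The quantity being maximized is simply the agreement count. A minimal sketch of this objective for the half-space case (the numpy representation and names are assumptions made for illustration):

      # Agreement count of a half-space h_{w,b}(x) = 1 iff <w, x> + b >= 0
      # on a labeled sample S = (X, y).  Illustrative representation only.
      import numpy as np

      def halfspace_agreement(w, b, X, y):
          """X: (m, n) array of points; y: length-m array of {0, 1} labels."""
          predictions = (X @ w + b >= 0).astype(int)
          return int(np.sum(predictions == y))

      # The computational problem: over all (w, b), maximize halfspace_agreement(w, b, X, y).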

  10. Hardness-of-Approximation Results  For each of the following classes, approximating the best agreement rate for h in H (on a given input sample S) up to some constant ratio is NP-hard: monomials, constant-width monotone monomials, half-spaces, balls, axis-aligned rectangles, and threshold NNs with constant 1st-layer width [Ben-David-Eiron-Long; Bartlett-Ben-David].

  11. The SVM Solution  Rather than bothering with non-separable data, make the data separable, by embedding it into some high-dimensional R^n.
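
  As a toy illustration of the embedding idea (not from the talk; the data and feature map are assumptions): 1-D data labeled by membership in an interval is not separable by any threshold on the line, but becomes separable by a half-space after the embedding x -> (x, x^2).

      # Labels: y = 1 iff x lies in [-1, 1].  Not linearly separable in R,
      # but separable in R^2 under phi(x) = (x, x^2).  Illustrative only.
      import numpy as np

      x = np.array([-3.0, -2.0, -0.5, 0.0, 0.7, 2.5])
      y = (np.abs(x) <= 1).astype(int)

      phi = np.stack([x, x ** 2], axis=1)       # embed into R^2
      w, b = np.array([0.0, -1.0]), 1.0         # half-space: 1 - x^2 >= 0
      predictions = (phi @ w + b >= 0).astype(int)
      print(np.array_equal(predictions, y))     # True: the embedded data is separable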

  12. A Problem with the SVM Method  In “most” cases the data cannot be made separable unless the mapping is into dimension Ω(|X|). This happens even for classes of small VC-dimension. For “most” classes, no mapping under which concept-classified data becomes separable has large margins. In all of these cases generalization is lost!

  13. Data-Dependent Success  Note that the definition of success for agnostic learning is data-dependent: the success rate of the learner on S is compared to that of the best h in H. We extend this approach to a data-dependent success definition for approximation: the required success rate is a function of the input data.

  14. A New Success Criterion  A learning algorithm A is m-margin successful if, for every input S ⊆ R^n × {0,1},  |{(x,y) ∈ S : A(S)(x) = y}| ≥ |{(x,y) ∈ S : h(x) = y and d(h, x) > m}|  for every half-space h.
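
  Read as a checkable predicate, the criterion compares the learner's agreement count against the margin-m agreement count of an arbitrary half-space. A sketch (the names, and the choice of d(h, x) as Euclidean distance to the hyperplane, are assumptions):

      # Check the m-margin success criterion against one half-space (w, b).
      # `predict` is the learner's output hypothesis; d(h, x) is taken to be
      # the Euclidean distance from x to the hyperplane.  Illustrative only.
      import numpy as np

      def margin_criterion_holds(predict, w, b, X, y, m):
          learner_correct = int(np.sum(predict(X) == y))
          scores = X @ w + b
          distances = np.abs(scores) / np.linalg.norm(w)            # d(h, x)
          halfspace_correct = ((scores >= 0).astype(int) == y) & (distances > m)
          return learner_correct >= int(np.sum(halfspace_correct))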

  15. Some Intuition  If there exists some optimal h which separates with generous margins, then an m-margin algorithm must produce an optimal separator. On the other hand, if every good separator can be degraded by small perturbations, then an m-margin algorithm can settle for a hypothesis that is far from optimal.

  16. A New Positive Result  For every positive m, there is an efficient m-margin algorithm; that is, an algorithm that classifies correctly as many input points as any half-space can classify correctly with margin m.

  17. The Positive Result  For every positive m, there is an m-margin algorithm whose running time is polynomial in |S| and n.  A Complementing Hardness Result  Unless P = NP, no algorithm can do this in time polynomial in 1/m (as well as in |S| and n).

  18. An m-margin Perceptron Algorithm  On input S, consider all k-size sub-samples. For each such sub-sample, find its largest-margin separating hyperplane. Among all the (~|S|^k) resulting hyperplanes, choose the one with the best performance on S. (The choice of k is a function of the desired margin m: k ~ m^(-2).)
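
  A schematic rendering of this procedure (a sketch, not the authors' implementation; it assumes a helper max_margin_separator that returns the largest-margin hyperplane (w, b) for a small labeled sub-sample, e.g. a hard-margin SVM solver, or None if the sub-sample is not separable):

      # Sketch of the m-margin algorithm: enumerate k-size sub-samples, compute a
      # largest-margin separator for each, and keep the hyperplane that classifies
      # the most points of the full sample S correctly.
      from itertools import combinations
      import numpy as np

      def agreement(w, b, X, y):
          return int(np.sum((X @ w + b >= 0).astype(int) == y))

      def margin_algorithm(X, y, k, max_margin_separator):
          best, best_score = None, -1
          for idx in combinations(range(len(X)), k):
              sub_X, sub_y = X[list(idx)], y[list(idx)]
              separator = max_margin_separator(sub_X, sub_y)   # largest-margin hyperplane
              if separator is None:                            # sub-sample not separable
                  continue
              w, b = separator
              score = agreement(w, b, X, y)                    # performance on all of S
              if score > best_score:
                  best, best_score = (w, b), score
          return best

      # k is chosen as a function of the desired margin m, roughly k ~ m**(-2).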

  19. Other m-margin Algorithms  Each of the following can replace the “find the largest-margin separating hyperplane” step: the usual Perceptron algorithm; “find a point of equal distance from x1, …, xk”; Phil Long's ROMMA algorithm. These are all very fast online algorithms.
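
  For instance, a minimal Perceptron-based replacement for the per-sub-sample step, compatible with the max_margin_separator signature assumed in the sketch above (again illustrative, not the talk's exact procedure):

      # A plain Perceptron pass over a sub-sample, usable in place of the
      # largest-margin step.  Returns (w, b), or None if it does not converge
      # within the iteration budget.  Illustrative assumption.
      import numpy as np

      def perceptron_separator(X, y, max_epochs=100):
          signs = 2 * y - 1                      # map {0, 1} labels to {-1, +1}
          w, b = np.zeros(X.shape[1]), 0.0
          for _ in range(max_epochs):
              mistakes = 0
              for x_i, s in zip(X, signs):
                  if s * (x_i @ w + b) <= 0:     # mistake: update
                      w, b = w + s * x_i, b + s
                      mistakes += 1
              if mistakes == 0:
                  return w, b
          return None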

  20. Directions for Further Research Can similar efficient algorithms be derived for more complex NN architectures? How well do the new algorithms perform on real data sets? Can the ‘local approximation’ results be extended to more geometric functions?
