Machine Learning: k-Nearest Neighbor and Support Vector Machines

Presentation Transcript


  1. CMSC 471 Machine Learning: k-Nearest Neighbor and Support Vector Machines (skim 20.4, 20.6-20.7)

  2. Revised End-of-Semester Schedule • Wed 11/21 Machine Learning IV • Mon 11/26 Philosophy of AI (You must read the three articles!) • Wed 11/28 Special Topics • Mon 12/3 Special Topics • Wed 12/5 Review / Tournament dry run #2 (HW6 due) • Mon 12/10 Tournament • Wed 12/19 FINAL EXAM (1:00pm - 3:00pm) (Project and final report due) NO LATE SUBMISSIONS ALLOWED! • Special Topics • Robotics • AI in Games • Natural language processing • Multi-agent systems

  3. k-Nearest Neighbor Instance-Based Learning Some material adapted from slides by Andrew Moore, CMU. Visit http://www.autonlab.org/tutorials/ for Andrew’s repository of Data Mining tutorials.

  4. 1-Nearest Neighbor • One of the simplest of all machine learning classifiers • Simple idea: label a new point the same as the closest known point • Example figure: the nearest labeled point is red, so label the new point red

  5. 1-Nearest Neighbor • A type of instance-based learning • Also known as “memory-based” learning • Forms a Voronoi tessellation of the instance space

  6. Distance Metrics • Different metrics can change the decision surface • Standard Euclidean distance metric: • Two-dimensional: Dist(a,b) = sqrt((a1 – b1)² + (a2 – b2)²) • Multivariate: Dist(a,b) = sqrt(∑ (ai – bi)²) • Figure captions: Dist(a,b) = (a1 – b1)² + (a2 – b2)² and Dist(a,b) = (a1 – b1)² + (3a2 – 3b2)² Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
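A minimal sketch of the two metrics above, assuming NumPy vectors; the function names and the per-feature weights argument are illustrative, not part of the slides:

```python
import numpy as np

def euclidean(a, b):
    """Standard Euclidean distance: sqrt(sum_i (a_i - b_i)^2)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))

def weighted_euclidean(a, b, weights):
    """Euclidean distance with per-feature scaling; e.g. weights=[1, 3]
    reproduces the slide's second metric, which changes which stored
    point counts as 'nearest' and therefore the decision surface."""
    a, b, w = (np.asarray(v, dtype=float) for v in (a, b, weights))
    return np.sqrt(np.sum((w * (a - b)) ** 2))
```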

  7. Four Aspects of an Instance-Based Learner: • A distance metric • How many nearby neighbors to look at? • A weighting function (optional) • How to fit with the local points? Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.

  8. 1-NN’s Four Aspects as an Instance-Based Learner: • A distance metric • Euclidean • How many nearby neighbors to look at? • One • A weighting function (optional) • Unused • How to fit with the local points? • Just predict the same output as the nearest neighbor. Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.
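Putting those four choices together, here is a minimal 1-NN sketch (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def predict_1nn(X_train, y_train, x_query):
    """1-NN with Euclidean distance, one neighbor, no weighting:
    predict the same output as the single nearest training point."""
    X_train = np.asarray(X_train, dtype=float)
    dists = np.sqrt(np.sum((X_train - np.asarray(x_query, dtype=float)) ** 2, axis=1))
    return y_train[int(np.argmin(dists))]
```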

  9. Zen Gardens Mystery of renowned zen garden revealed [CNN Article] Thursday, September 26, 2002 Posted: 10:11 AM EDT (1411 GMT) LONDON (Reuters) -- For centuries visitors to the renowned Ryoanji Temple garden in Kyoto, Japan have been entranced and mystified by the simple arrangement of rocks. The five sparse clusters on a rectangle of raked gravel are said to be pleasing to the eyes of the hundreds of thousands of tourists who visit the garden each year. Scientists in Japan said on Wednesday they now believe they have discovered its mysterious appeal. "We have uncovered the implicit structure of the Ryoanji garden's visual ground and have shown that it includes an abstract, minimalist depiction of natural scenery," said Gert Van Tonder of Kyoto University. The researchers discovered that the empty space of the garden evokes a hidden image of a branching tree that is sensed by the unconscious mind. "We believe that the unconscious perception of this pattern contributes to the enigmatic appeal of the garden," Van Tonder added. He and his colleagues believe that whoever created the garden during the Muromachi era between 1333-1573 knew exactly what they were doing and placed the rocks around the tree image. By using a concept called medial-axis transformation, the scientists showed that the hidden branched tree converges on the main area from which the garden is viewed. The trunk leads to the prime viewing site in the ancient temple that once overlooked the garden. It is thought that abstract art may have a similar impact. "There is a growing realisation that scientific analysis can reveal unexpected structural features hidden in controversial abstract paintings," Van Tonder said Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.

  10. k-Nearest Neighbor • Generalizes 1-NN to smooth away noise in the labels • A new point is now assigned the most frequent label of its k nearest neighbors • Example figure: the new point is labeled red when k = 3, but blue when k = 7
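A sketch of the majority vote described above (NumPy assumed; the helper name is illustrative):

```python
import numpy as np
from collections import Counter

def predict_knn(X_train, y_train, x_query, k=3):
    """Assign the most frequent label among the k nearest training points."""
    X_train = np.asarray(X_train, dtype=float)
    dists = np.sqrt(np.sum((X_train - np.asarray(x_query, dtype=float)) ** 2, axis=1))
    nearest = np.argsort(dists)[:k]            # indices of the k closest points
    labels = [y_train[i] for i in nearest]
    return Counter(labels).most_common(1)[0][0]  # majority vote
```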

  11. k-Nearest Neighbor (k = 9) Commentary on three example regression fits (plots not shown): • “Appalling behavior! Loses all the detail that 1-nearest neighbor would give. The tails are horrible!” • “A magnificent job of noise smoothing. Three cheers for 9-nearest-neighbor. But the lack of gradients and the jerkiness isn’t good.” • “Fits much less of the noise, captures trends. But still, frankly, pathetic compared with linear regression.” Adapted from “Instance-Based Learning” lecture slides by Andrew Moore, CMU.

  12. Support Vector Machines and Kernels Doing Really Well with Linear Decision Surfaces Adapted from slides by Tim Oates Cognition, Robotics, and Learning (CORAL) Lab University of Maryland Baltimore County

  13. Outline • Prediction • Why might predictions be wrong? • Support vector machines • Doing really well with linear models • Kernels • Making the non-linear linear

  14. Supervised ML = Prediction • Given training instances (x,y) • Learn a model f • Such that f(x) = y • Use f to predict y for new x • Many variations on this basic theme

  15. Why might predictions be wrong? • True Non-Determinism • Flip a biased coin • p(heads) = θ • Estimate θ • If θ > 0.5 predict heads, else tails • Lots of ML research on problems like this • Learn a model • Do the best you can in expectation
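The biased-coin example as a few lines of code (a sketch; the encoding 1 = heads, 0 = tails is an assumption):

```python
def coin_predictor(flips):
    """Estimate theta = p(heads) from observed flips, then always
    predict the more likely outcome (the best you can do in expectation)."""
    theta_hat = sum(flips) / len(flips)
    return "heads" if theta_hat > 0.5 else "tails"

# coin_predictor([1, 0, 1, 1, 0])  -> "heads"  (theta_hat = 0.6)
```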

  16. Why might predictions be wrong? • Partial Observability • Something needed to predict y is missing from observation x • N-bit parity problem • x contains N-1 bits (hard PO) • x contains N bits but learner ignores some of them (soft PO)

  17. Why might predictions be wrong? • True non-determinism • Partial observability • hard, soft • Representational bias • Algorithmic bias • Bounded resources

  18. Representational Bias • Having the right features (x) is crucial [figure: the same X and O points shown under two representations; with the right features the classes become linearly separable]

  19. Support Vector Machines Doing Really Well with Linear Decision Surfaces

  20. Strengths of SVMs • Good generalization in theory • Good generalization in practice • Work well with few training instances • Find globally best model • Efficient algorithms • Amenable to the kernel trick

  21. Linear Separators • Training instances • x ∈ ℝⁿ • y ∈ {-1, 1} • w ∈ ℝⁿ • b ∈ ℝ • Hyperplane • <w, x> + b = 0 • w1x1 + w2x2 + … + wnxn + b = 0 • Decision function • f(x) = sign(<w, x> + b) • Math Review • Inner (dot) product: • <a, b> = a · b = ∑ ai·bi = a1b1 + a2b2 + … + anbn
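The decision function above as a NumPy sketch (the function name is illustrative; ties on the hyperplane are assigned to +1):

```python
import numpy as np

def linear_decision(w, b, x):
    """f(x) = sign(<w, x> + b): the hyperplane <w, x> + b = 0 splits the
    space, and points are classified by which side they fall on."""
    return 1 if np.dot(w, x) + b >= 0 else -1
```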

  22. Intuitions [figure: X and O training points with a candidate linear separator]

  23. Intuitions [figure: the same points with a different candidate separator]

  24. Intuitions [figure: the same points with another candidate separator]

  25. Intuitions [figure: the same points with yet another candidate separator]

  26. A “Good” Separator [figure: a separator with clear space between it and both classes]

  27. Noise in the Observations [figure: the same data with noise around each observation]

  28. Ruling Out Some Separators [figure: separators that pass too close to the data are ruled out]

  29. Lots of Noise [figure: with more noise, even more candidate separators are ruled out]

  30. Maximizing the Margin [figure: the separator that maximizes the distance to the closest points]

  31. “Fat” Separators [figure: the separator drawn as the widest possible band between the classes]

  32. Support Vectors [figure: the training points that touch the margin are the support vectors]

  33. The Math • Training instances • x ∈ ℝⁿ • y ∈ {-1, 1} • Decision function • f(x) = sign(<w, x> + b) • w ∈ ℝⁿ • b ∈ ℝ • Find w and b that • Perfectly classify training instances • Assuming linear separability • Maximize margin

  34. The Math • For perfect classification, we want • yi (<w, xi> + b) ≥ 0 for all i • Why? • To maximize the margin, we want • w that minimizes |w|²

  35. Dual Optimization Problem • Maximize over α • W(α) = Σi αi – 1/2 Σi,j αi αj yi yj <xi, xj> • Subject to • αi ≥ 0 • Σi αi yi = 0 • Decision function • f(x) = sign(Σi αi yi <x, xi> + b)
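A sketch of the dual-form decision function, assuming the multipliers α and the offset b have already been found by solving the quadratic program above (the solver itself is not shown):

```python
import numpy as np

def dual_decision(alphas, X_train, y_train, b, x):
    """f(x) = sign(sum_i alpha_i * y_i * <x, x_i> + b).
    Only training points with alpha_i > 0 (the support vectors)
    actually contribute to the sum."""
    s = sum(a * y * np.dot(x, xi)
            for a, y, xi in zip(alphas, y_train, X_train) if a > 0)
    return 1 if s + b >= 0 else -1
```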

  36. Strengths of SVMs • Good generalization in theory • Good generalization in practice • Work well with few training instances • Find globally best model • Efficient algorithms • Amenable to the kernel trick …

  37. What if Surface is Non-Linear? [figure: a cluster of X points surrounded by O points, so no straight line can separate the classes] Image from http://www.atrandomresearch.com/iclass/

  38. Kernel Methods Making the Non-Linear Linear

  39. When Linear Separators Fail [figure: in the original (x1, x2) space the X and O points cannot be separated by a line; re-plotting them against x1 and x1² makes them linearly separable]

  40. Mapping into a New Feature Space • Rather than run SVM on xi, run it on Φ(xi) • Find non-linear separator in input space • What if Φ(xi) is really big? • Use kernels to compute it implicitly! • Φ : x ↦ Φ(x), e.g. Φ(x1, x2) = (x1, x2, x1², x2², x1x2) Image from http://web.engr.oregonstate.edu/~afern/classes/cs534/
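The slide's explicit map Φ as code (a sketch; in practice the point of kernels is to avoid ever materializing this vector):

```python
import numpy as np

def phi(x):
    """Phi(x1, x2) = (x1, x2, x1^2, x2^2, x1*x2)."""
    x1, x2 = x
    return np.array([x1, x2, x1**2, x2**2, x1 * x2])
```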

  41. Kernels • Find kernel K such that • K(x1,x2) = <Φ(x1), Φ(x2)> • Computing K(x1,x2) should be efficient, much more so than computing Φ(x1) and Φ(x2) • Use K(x1,x2) in SVM algorithm rather than <x1,x2> • Remarkably, this is possible

  42. The Polynomial Kernel • K(x1,x2) = <x1, x2>² • x1 = (x11, x12) • x2 = (x21, x22) • <x1, x2> = (x11x21 + x12x22) • <x1, x2>² = (x11²x21² + x12²x22² + 2x11x12x21x22) • Φ(x1) = (x11², x12², √2·x11x12) • Φ(x2) = (x21², x22², √2·x21x22) • K(x1,x2) = <Φ(x1), Φ(x2)>
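A quick numeric check of the identity on this slide, i.e. that <x1, x2>² equals the inner product of the explicit degree-2 features (the test values are chosen arbitrarily):

```python
import numpy as np

def poly2_kernel(x1, x2):
    """K(x1, x2) = <x1, x2>^2."""
    return np.dot(x1, x2) ** 2

def phi_deg2(x):
    """Explicit degree-2 map: (a, b) -> (a^2, b^2, sqrt(2)*a*b)."""
    a, b = x
    return np.array([a**2, b**2, np.sqrt(2) * a * b])

x1, x2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])
# Both sides evaluate to 121.0: the kernel computes the feature-space
# inner product without ever forming the features explicitly.
assert np.isclose(poly2_kernel(x1, x2), np.dot(phi_deg2(x1), phi_deg2(x2)))
```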

  43. The Polynomial Kernel • Φ(x) contains all monomials of degree d • Useful in visual pattern recognition • Number of monomials • 16x16 pixel image • ~10¹⁰ monomials of degree 5 • Never explicitly compute Φ(x)! • Variation: K(x1,x2) = (<x1, x2> + 1)², which also includes the lower-degree monomials

  44. A Few Good Kernels • Dot product kernel • K(x1,x2) = <x1, x2> • Polynomial kernel • K(x1,x2) = <x1, x2>^d (Monomials of degree d) • K(x1,x2) = (<x1, x2> + 1)^d (All monomials of degree 1, 2, …, d) • Gaussian kernel • K(x1,x2) = exp(–|x1 – x2|² / 2σ²) • Radial basis functions • Sigmoid kernel • K(x1,x2) = tanh(<x1, x2> + θ) • Neural networks • Establishing “kernel-hood” from first principles is non-trivial
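Sketches of these kernels as plain functions (NumPy assumed; the default parameter values are illustrative):

```python
import numpy as np

def dot_kernel(x1, x2):
    """K(x1, x2) = <x1, x2>."""
    return np.dot(x1, x2)

def poly_kernel(x1, x2, d=3, c=1.0):
    """(<x1, x2> + c)^d; with c = 0 this is the pure degree-d kernel."""
    return (np.dot(x1, x2) + c) ** d

def gaussian_kernel(x1, x2, sigma=1.0):
    """exp(-|x1 - x2|^2 / (2 sigma^2)), a radial basis function."""
    diff = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma**2))

def sigmoid_kernel(x1, x2, theta=0.0):
    """tanh(<x1, x2> + theta), related to neural-network activations."""
    return np.tanh(np.dot(x1, x2) + theta)
```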

  45. The Kernel Trick “Given an algorithm which is formulated in terms of a positive definite kernel K1, one can construct an alternative algorithm by replacing K1 with another positive definite kernel K2” • SVMs can use the kernel trick

  46. Using a Different Kernel in the Dual Optimization Problem • The inner products <xi, xj> and <x, xi> in the dual problem are kernels, so by the kernel trick we can simply replace them with a different kernel, for example the polynomial kernel with d = 4 (including lower-order terms): (<xi, xj> + 1)⁴ • Maximize over α • W(α) = Σi αi – 1/2 Σi,j αi αj yi yj (<xi, xj> + 1)⁴ • Subject to • αi ≥ 0 • Σi αi yi = 0 • Decision function • f(x) = sign(Σi αi yi (<x, xi> + 1)⁴ + b)
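For a concrete run, a hedged usage example with scikit-learn (not part of the course slides): in SVC, kernel='poly' with degree=4, coef0=1 and gamma=1 corresponds to K(x, x') = (<x, x'> + 1)⁴, the kernel substituted above. The toy data set is made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: +1 inside a disc, -1 outside -- not linearly separable.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.where((X ** 2).sum(axis=1) < 0.5, 1, -1)

# kernel='poly', degree=4, coef0=1, gamma=1 gives K(x, x') = (<x, x'> + 1)^4.
clf = SVC(kernel="poly", degree=4, coef0=1.0, gamma=1.0, C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y),
      "| support vectors:", len(clf.support_))
```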

  47. Conclusion • SVMs find the optimal (maximum-margin) linear separator • The kernel trick makes SVMs non-linear learning algorithms
