Computacion inteligente

Presentation Transcript


  1. Computacion inteligente Introduction to Classification

  2. Outline • Introduction • Feature Vectors and Feature Spaces • Decision boundaries • The nearest-neighbor classifier • Classification Accuracy • Accuracy Assessment • Linear Separability • Linear Classifiers • Classification learning

  3. Introduction

  4. Classification • Classification: • A task of induction to find patterns from data • Inferring knowledge from data

  5. Learning from data • You may find different names for learning from data: identification, estimation, regression, classification, pattern recognition, function approximation, curve or surface fitting, etc.

  6. Sample data Composition of mammalian milk

  7. Classification • Classification is an important component of intelligent systems • We have a special discrete-valued variable called the Class, C • C takes values in {c1, c2, ......, cm}

  8. Classification • Problem is to decide what class an object is • i.e., what value the class variable C takes for a given object • given measurements on the object, e.g., x1, x2, … • These measurements are called “features” • We wish to learn a mapping from Features -> Class

  9. Classification Functions • Notation • Input space: X (X ⊆ Rⁿ) • Output domain: Y • binary classification: Y = {-1, 1} • m-class classification: Y = {c1, c2, c3, …, cm} • Training set: S Each class is described by a label

  10. Classification Functions • We want a mapping or function which: • takes any combination of values x = (a, b, d, …, z) and, • produces a prediction C, • i.e., a function C = f(a, b, d, …, z) which produces a value c1 or c2, etc. The problem is that we don’t know this mapping: we have to learn it from data!

  11. Classification Functions • [Diagram: feature values a, b, d, …, z (which are known, measured) flow into the Classifier box, which outputs C, the predicted class value (the true class is unknown to the classifier)]
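A minimal sketch (not from the slides) of a classifier viewed as a function in Python; the features a, b and the threshold rule inside are purely illustrative assumptions, since in practice the mapping is learned from data rather than written by hand:

    # A classifier is just a function from feature values to a class label.
    def classify(a: float, b: float) -> str:
        """Map the feature vector (a, b) to a predicted class label."""
        if a + b > 1.0:            # an arbitrary, hand-picked decision rule
            return "c1"
        return "c2"

    print(classify(0.8, 0.5))      # -> c1
    print(classify(0.1, 0.2))      # -> c2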

  12. Classification: Model Construction • Classification algorithms build a classifier (model) from the training data • [Diagram: Training Data → Classification Algorithm → Classifier (Model)] • Example of a learned rule: IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’

  13. Classification: Prediction Using the Model • [Diagram: the Classifier is applied to Testing Data and to Unseen Data] • Example of an unseen record: (Jeff, Professor, 4) → Tenured?
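The two slides above can be read as a tiny Python sketch; the tenure rule is the one shown on the model-construction slide, and the record layout (name, rank, years) is assumed from the example, so treat this as an illustration rather than an actual learning algorithm:

    # The learned model from the model-construction slide, written as a function:
    # IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
    def tenured(name: str, rank: str, years: int) -> str:
        if rank.lower() == "professor" or years > 6:
            return "yes"
        return "no"

    # Prediction on the unseen record (Jeff, Professor, 4):
    print(tenured("Jeff", "Professor", 4))   # -> yes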

  14. Applications of Classification • Medical Diagnosis • classification of cancerous cells • Credit card and Loan approval • Most major banks • Speech recognition • IBM, Dragon Systems, AT&T, Microsoft, etc • Optical Character/Handwriting Recognition • Post Offices, Banks, Gateway, Motorola, Microsoft, Xerox, etc • Email classification • classify email as “junk” or “non-junk” • Many other applications • one of the most successful applications of AI technology

  15. Examples of Features and Classes

  16. Examples of Features and Classes

  17. Examples of Features and Classes

  18. Examples: Classification of Galaxies • [Images: two galaxy types, labeled Class 1 and Class 2]

  19. Examples: Image Classification • Remote sensing • [Images: original image and classified image]

  20. Feature Vectors and Feature Spaces

  21. Feature Vectors and Feature Spaces • Feature Vector: Say we have 2 features: • we can think of the features as a 2-component vector (i.e., a 2-dimensional vector, [a b]) • So the features correspond to a 2-dimensional space • We can generalize to d-dimensional space. This is called the “feature space”

  22. Feature Vectors and Feature Spaces • Each feature vector represents the “coordinates” of a particular object in feature space • If the feature-space is 2-dimensional (for example), and the features a and b are real-valued • we can visually examine and plot the locations of the feature vectors

  23. Data with 2 Features

  24. Data with 3 Features • Given a set of balls, classify each by “color” • [Images: additive RGB color model, HSV color space]

  25. Data from Two Classes • Classes: data sets D1 and D2: • sets of points from classes 1 and 2 • data are of dimension d • i.e., d-dimensional vectors If d = 2 (2 features), we can plot the data
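A quick sketch of plotting two classes when d = 2, using synthetic data (not the milk data set) and assuming NumPy and Matplotlib are available:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    D1 = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))   # class 1 points
    D2 = rng.normal(loc=[2.5, 2.0], scale=0.3, size=(50, 2))   # class 2 points

    plt.scatter(D1[:, 0], D1[:, 1], marker="o", label="class 1")
    plt.scatter(D2[:, 0], D2[:, 1], marker="x", label="class 2")
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")
    plt.legend()
    plt.show()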

  26. Example of Data from 2 Classes

  27. Another Example: Red Blood Cells

  28. Data from Multiple Classes • Now consider that we have data from m classes • e.g., m = 5

  29. Data from Multiple Classes • Now consider that we have data from m classes • e.g., m = 5 • We can imagine the data from each class being in a “cloud” in feature space

  30. Example of Data from 5 Classes • Composition of mammalian milk • [Plot: Fat (%) vs. Proteins (%), with points grouped by class]

  31. Decision Boundaries

  32. Decision Boundaries • What is a Classifier? • A classifier is a mapping from feature space to the class labels {1, 2, …, m} • Thus, a classifier partitions the feature space into m decision regions • The line or surface separating any 2 classes is the decision boundary

  33. Decision Boundaries • Linear Classifiers • a linear classifier is a mapping which partitions feature space using a linear function • it is one of the simplest classifiers we can imagine • In 2 dimensions the decision boundary is a straight line
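A minimal sketch of a 2-dimensional linear classifier; the weight vector w and bias b below are hand-picked assumptions just to show the form of the decision rule (the sign of w·x + b), whereas a learning algorithm would fit them from data:

    import numpy as np

    w = np.array([1.0, -1.0])   # hypothetical weight vector
    b = -0.5                    # hypothetical bias

    def linear_classify(x: np.ndarray) -> int:
        """Return +1 or -1 depending on which side of the line w.x + b = 0 x lies."""
        return 1 if np.dot(w, x) + b >= 0 else -1

    print(linear_classify(np.array([2.0, 0.5])))   # -> 1
    print(linear_classify(np.array([0.0, 2.0])))   # -> -1

In 2 dimensions the set of points satisfying w·x + b = 0 is exactly the straight-line decision boundary mentioned above.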

  34. 2-Class Data with a Linear Decision Boundary

  35. Class Overlap • Consider the two-class case • data from D1 and D2 may overlap • features = {age, body temperature}, classes = {flu, not-flu} • features = {income, savings}, classes = {good/bad risk}

  36. Class Overlap • common in practice that the classes will naturally overlap • this means that our features are usually not able to perfectly discriminate between the classes • note: with more expensive/more detailed additional features (e.g., a specific test for the flu) we might be able to get perfect separation If there is overlap => classes are not linearly separable

  37. Classification Problem with Overlap

  38. Solution: A More Complex Decision Boundary • [Plot: two-class data in a two-dimensional feature space (Feature 1 vs. Feature 2), showing Decision Region 1, Decision Region 2, and a nonlinear decision boundary between them]

  39. The Nearest Neighbor Classifier

  40. Training Data and Test Data • Training data • examples with class values for learning. Used to build a classifier • Test data • new data, not used in the training process, to evaluate how well a classifier does on new data

  41. Some Notation • Feature Vectors • x(i) is the ith training data feature vector • in MATLAB this could be the ith column of a d x N matrix • Class Labels • c(i) is the class label of the ith feature vector • in general, c(i) can take m different class values (e.g., c = 1, c = 2, …)

  42. Some Notation • Training Data • Dtrain = {[x(1), c(1)] , [x(2), c(2)] , ……, [x(N), c(N)]} • N pairs of feature vectors and class labels • Test Data • Dtest = {[x(1), c(1)] , [x(2), c(2)] , ……, [x(M), c(M)]} • M pairs of feature vectors and class labels Let y be a new feature vector whose class label we do not know, i.e., we wish to classify it.
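One way to hold this notation in code (a sketch using NumPy rather than the MATLAB convention mentioned above; the numbers are made up): the feature vectors x(i) are the columns of a d x N matrix and the labels c(i) sit in a length-N vector.

    import numpy as np

    # d = 2 features, N = 4 training examples: x(i) is the i-th column of X_train
    X_train = np.array([[1.0, 2.0, 0.5, 3.0],    # feature 1 of each example
                        [0.2, 1.5, 0.8, 2.2]])   # feature 2 of each example
    c_train = np.array([1, 2, 1, 2])             # c(i): class label of x(i)

    i = 2                                        # third training pair (0-based index)
    print(X_train[:, i], c_train[i])             # -> [0.5 0.8] 1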

  43. Nearest Neighbor Classifier • y is a new feature vector whose class label is unknown • Search Dtrain for the closest feature vector to y • let this “closest feature vector” be x(j) • Classify y with the same label as x(j), i.e. • y is assigned label c(j)

  44. Nearest Neighbor Classifier • How are “closest x” vectors determined? We have to define a distance • Euclidean distance • Manhattan distance

  45. Nearest Neighbor Classifier • How are “closest x” vectors determined? We have to define a distance • Mahalanobis distance

  46. Nearest Neighbor Classifier • How are “closest x” vectors determined? • We have to define a distance • typically the minimum Euclidean distance is used: dE(x, y) = sqrt( Σi (xi - yi)² ) • Side note: this produces what is called a “Voronoi tessellation” of the d-space • each point “claims” a cell surrounding it • cell boundaries are polygons Analogous to “memory-based” reasoning in humans
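A compact sketch of the nearest-neighbor rule described above, using the Euclidean distance (with the Manhattan distance shown as an alternative); the column-per-example layout of X_train and c_train follows the earlier notation sketch, and the data values are invented:

    import numpy as np

    def euclidean(x, y):
        return np.sqrt(np.sum((x - y) ** 2))     # dE(x, y) = sqrt( sum_i (x_i - y_i)^2 )

    def manhattan(x, y):
        return np.sum(np.abs(x - y))             # dM(x, y) = sum_i |x_i - y_i|

    def nearest_neighbor(y, X_train, c_train, dist=euclidean):
        """Label y with the class of its closest training vector x(j)."""
        dists = [dist(X_train[:, i], y) for i in range(X_train.shape[1])]
        j = int(np.argmin(dists))                # index of the closest feature vector
        return c_train[j]

    X_train = np.array([[1.0, 2.0, 0.5, 3.0],
                        [0.2, 1.5, 0.8, 2.2]])
    c_train = np.array([1, 2, 1, 2])
    print(nearest_neighbor(np.array([2.8, 2.0]), X_train, c_train))   # -> 2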

  47. Geometric Interpretation of Nearest Neighbor • [Plot: training points labeled 1 and 2 scattered in a feature space (Feature 1 vs. Feature 2)]
