Machine Learning II - Outline • Instance-Based Learning • k-Nearest Neighbour Learning • Case-Based Reasoning • Unsupervised Learning • Numeric Clustering Methods • Conceptual Clustering • Other Machine Learning Paradigms • Artificial Neural Networks • Evolutionary Techniques J. Kubalík, Gerstner Laboratory for Intelligent Decision Making and Control
Instance-Based Learning • Lazy methods • do not estimate the target function once for the entire instance space • instead, estimate it locally and differently for each new instance • generalising beyond the training examples is postponed until a new instance must be classified • Instances may be represented as points in a Euclidean space or by more complex symbolic representations
Instance-Based Learning • Straightforward approaches to approximating real-valued or discrete-valued target functions • Key idea: simply store all training examples ⟨xi, f(xi)⟩ • Given a new query instance xq, • first locate the set of related (similar) training examples, • then use them to classify the new instance • Construct a local approximation to the target function that applies in the neighbourhood of the new instance • Even a very complex target function can be described by a collection of less complex local approximations
k-Nearest Neighbour Learning • Instances map to points in the n-dimensional space Rn • The distance between two instances is defined as the standard Euclidean distance $d(x_i, x_j) = \sqrt{\sum_{r=1}^{n} (a_r(x_i) - a_r(x_j))^2}$ • Given xq, take a vote among its k nearest neighbours (if the target function is discrete-valued): $\hat{f}(x_q) \leftarrow \arg\max_{v \in V} \sum_{i=1}^{k} \delta(v, f(x_i))$, where $\delta(a, b) = 1$ if $a = b$ and $\delta(a, b) = 0$ otherwise • Take the mean of the f values of the k nearest neighbours (if real-valued): $\hat{f}(x_q) \leftarrow \frac{1}{k} \sum_{i=1}^{k} f(x_i)$
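Both prediction rules fit in a few lines of code. Below is a minimal Python sketch (not the lecture's own implementation; the data layout and names are illustrative assumptions):

```python
# Minimal k-NN sketch: Euclidean distance, majority vote for a discrete
# target, mean for a real-valued target.
import math
from collections import Counter

def euclidean(a, b):
    """Standard Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(training, x_q, k=3, discrete=True):
    """training: list of (attribute_vector, target_value) pairs."""
    # Keep the k stored examples nearest to the query instance x_q.
    neighbours = sorted(training, key=lambda ex: euclidean(ex[0], x_q))[:k]
    targets = [t for _, t in neighbours]
    if discrete:
        # Discrete-valued target: majority vote among the k neighbours.
        return Counter(targets).most_common(1)[0][0]
    # Real-valued target: mean of the neighbours' f values.
    return sum(targets) / k

# Example: two classes in R^2
data = [((1.0, 1.0), 'A'), ((1.2, 0.8), 'A'), ((5.0, 5.2), 'B'), ((4.8, 5.1), 'B')]
print(knn_predict(data, (1.1, 1.0), k=3))   # -> 'A'
```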
Operation of the k-NN algorithm • Voronoi diagram - the shape of the decision surface induced by 1-NN over the entire instance space
Distance-Weighted k-NN Algorithm • Weights the contributions of the neighbours according to their distance from the query point: $\hat{f}(x_q) \leftarrow \arg\max_{v \in V} \sum_{i=1}^{k} w_i \, \delta(v, f(x_i))$ for a discrete-valued target, or $\hat{f}(x_q) \leftarrow \frac{\sum_{i=1}^{k} w_i f(x_i)}{\sum_{i=1}^{k} w_i}$ for a real-valued target, where $w_i = \frac{1}{d(x_q, x_i)^2}$ • Note that it now makes sense to use all training examples instead of just k (Shepard's method)
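A hedged sketch of the real-valued, distance-weighted variant, using the weights $w_i = 1/d(x_q, x_i)^2$ from the slide (math.dist is the standard-library Euclidean distance; names are illustrative):

```python
# Distance-weighted k-NN for a real-valued target.
from math import dist

def weighted_knn_regress(training, x_q, k=5):
    """training: list of (attribute_vector, real_value) pairs."""
    neighbours = sorted(training, key=lambda ex: dist(ex[0], x_q))[:k]
    num = den = 0.0
    for x_i, f_i in neighbours:
        d = dist(x_i, x_q)
        if d == 0.0:
            return f_i          # the query coincides with a stored example
        w = 1.0 / d ** 2        # closer neighbours receive larger weights
        num += w * f_i
        den += w
    return num / den
```

Setting k to the number of training examples gives Shepard's method, where every stored example contributes according to its weight.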
Remarks on k-NN Algorithm • Robust to noisy training data • Effective for large sets of training data • The cost of classifying a new instance can be high - hence techniques for efficiently indexing training examples • All attributes of the instance are considered when calculating a distance • dangerous if the concept depends on only a few relevant attributes • curse of dimensionality
Curse of Dimensionality • Imagine instances described by 20 attributes, of which only 2 are relevant to the target function • Curse of dimensionality: nearest neighbour is easily misled in high-dimensional spaces • The distance between two neighbours will be dominated by the large number of irrelevant attributes • One approach to remedy this: • weight each attribute differently when calculating the distance • this corresponds to lengthening (shortening) the axes that correspond to more (less) relevant attributes
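Attribute weighting amounts to a small change in the distance function. A sketch, assuming the per-attribute weights z are given (e.g. chosen by cross-validation; names are illustrative):

```python
import math

def weighted_euclidean(a, b, z):
    # z_r scales attribute r: a large weight lengthens its axis (more relevant),
    # a small weight shortens it (less relevant), zero removes it entirely.
    return math.sqrt(sum(z_r * (x - y) ** 2 for z_r, x, y in zip(z, a, b)))

# Attribute 0 treated as relevant, attribute 1 as irrelevant:
print(weighted_euclidean((1.0, 9.0), (2.0, 0.0), z=(1.0, 0.0)))  # -> 1.0
```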
IBL: Case Study - Prediction of Gas Consumption
Case-Based Reasoning • Symbolic representation of instances • function graphs, logical descriptions, sequences (or trees) of operations • Methods for retrieving similar instances are more elaborate • e.g. the size of the largest shared subgraph between two function graphs • Multiple cases may be retrieved, modified and combined to form a solution to a new problem • Applications • storing and reusing experience at a help desk • conceptual design of mechanical devices • solving complex planning and scheduling problems by reusing relevant portions of previously solved problems
Case-Based Reasoning in CADET • CADET: • Conceptual design of simple mechanical devices - water faucets • 75 stored examples of mechanical devices • CBR: • training example: ⟨qualitative function, mechanical structure⟩ • new query: desired function • target value: mechanical structure for this function • distance metric: match of qualitative function descriptions
Case-Based Reasoning in CADET
Unsupervised Learning • No teacher providing a classification of the training examples • Learner forms and evaluates concepts on its own • must discover patterns, features, regularities, correlations, or categories in the input data • Scientists: • do not have the benefit of a teacher, • propose hypotheses to explain observations, • evaluate & test hypotheses • Clustering problem • Numeric taxonomy • Conceptual clustering
Conceptual Clustering • Objects represented as sets of symbolic rather than numeric features • Measuring the similarity of objects • proportion of common features, e.g. similarity({small, red, rubber, ball}, {small, blue, rubber, ball}) = 3/4 • Background knowledge is used in the formation of categories • Intensional definition rather than extensional, enumerative representation, e.g. {X | X has been elected secretary-general of the United Nations} vs. a list of particular individuals • Machine learning techniques are used to produce general concept descriptions
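The proportion-of-common-features measure can be stated directly in code. A small sketch; normalising by the size of the larger description is an assumption that reproduces the 3/4 from the example above:

```python
def similarity(obj1, obj2):
    a, b = set(obj1), set(obj2)
    # features shared by both objects, relative to the larger description
    return len(a & b) / max(len(a), len(b))

print(similarity({'small', 'red', 'rubber', 'ball'},
                 {'small', 'blue', 'rubber', 'ball'}))   # -> 0.75
```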
Conceptual Clustering - CLUSTER/2 • CLUSTER/2 (Michalski and Stepp, 1983) - forms k categories: 1. Select k seeds from the set of observed objects. 2. For each seed, using that seed as a positive instance and all other seeds as negative instances, produce a maximally general definition covering the positive seed and none of the negative seeds. 3. Classify all objects in the sample according to these descriptions. Replace each description with a maximally specific description that covers all objects in the category. 4. Adjust overlapping definitions. 5. Select an element closest to the centre of each class. 6. Using these central elements as new seeds, repeat steps 1-5. Stop when the clusters are satisfactory - e.g. judged by the complexity of the general descriptions of the classes. 7. If the clusters are unsatisfactory and no improvement occurs, select the new seeds closest to the edge of the cluster rather than its centre.
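The loop structure of CLUSTER/2 can be illustrated with a deliberately simplified sketch. It is not a faithful implementation: objects are plain feature sets, the maximally general/specific descriptions are replaced by feature intersections, and classification is by most shared features with a seed:

```python
# Loose sketch of the CLUSTER/2 seed loop (simplified; illustrative names).
import random

def cluster2_sketch(objects, k, iterations=5):
    """objects: list of frozensets of symbolic features."""
    seeds = random.sample(objects, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Classify every object by the seed it shares the most features with.
        clusters = [[] for _ in range(k)]
        for obj in objects:
            best = max(range(k), key=lambda i: len(obj & seeds[i]))
            clusters[best].append(obj)
        # Describe each category by the features shared by all its members and
        # pick the member closest to that description as the new central seed.
        new_seeds = []
        for members in clusters:
            if not members:
                new_seeds.append(random.choice(objects))
                continue
            description = frozenset.intersection(*members)
            new_seeds.append(max(members, key=lambda o: len(o & description)))
        if new_seeds == seeds:      # clusters stable: stop
            break
        seeds = new_seeds
    return clusters
```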
Steps of a CLUSTER/2 run
Numeric Taxonomy • Objects represented as collections of numeric-valued features • Similarity measure - similarity of two objects • Euclidean distance in the n-dimensional space • Distance between two clusters • minimum, maximum or average distance between members, or the distance between the cluster means • Criterion of the quality of a clustering • maximising the similarity of objects in the same class • Methods: • iterative optimisation • agglomerative clustering
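The cluster-to-cluster distance measures listed above can be collected in one helper. A sketch, assuming clusters are lists of numeric attribute vectors (math.dist is the standard-library Euclidean distance; names are illustrative):

```python
from math import dist

def cluster_distance(c1, c2, mode='min'):
    pairwise = [dist(a, b) for a in c1 for b in c2]
    if mode == 'min':                      # minimal member-to-member distance
        return min(pairwise)
    if mode == 'max':                      # maximal member-to-member distance
        return max(pairwise)
    if mode == 'avg':                      # average over all pairs
        return sum(pairwise) / len(pairwise)
    # 'mean': distance between the clusters' mean vectors
    mean1 = tuple(sum(v) / len(c1) for v in zip(*c1))
    mean2 = tuple(sum(v) / len(c2) for v in zip(*c2))
    return dist(mean1, mean2)
```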
Distribution of Training Examples a) proper distribution b) improper distribution
Iterative Optimisation: Algorithm 1. Split the space by a randomly chosen border line so that two non-empty sets of objects are created. 2. For each set of objects calculate the vector of mean values of each attribute - the etalons. Classify all objects according to these etalons. 3. The new separation line is given by the etalons. If the classification according to the etalons changed for at least one object, continue with step 2; otherwise stop.
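A hedged sketch of this two-class loop in Python (random initial split, per-class attribute means as etalons, reclassification by the nearest etalon until nothing changes; names are illustrative):

```python
import random
from math import dist

def mean_vector(points):
    return tuple(sum(vals) / len(points) for vals in zip(*points))

def iterative_optimisation(objects):
    # Step 1: a random split into two non-empty sets.
    assignment = [random.randint(0, 1) for _ in objects]
    while len(set(assignment)) < 2:
        assignment = [random.randint(0, 1) for _ in objects]
    while True:
        # Step 2: compute the etalon (vector of attribute means) of each class.
        etalons = [mean_vector([o for o, a in zip(objects, assignment) if a == c])
                   for c in (0, 1)]
        # Step 3: reclassify every object according to the nearest etalon.
        new_assignment = [min((0, 1), key=lambda c: dist(o, etalons[c]))
                          for o in objects]
        if len(set(new_assignment)) < 2:     # degenerate split: stop
            return assignment, etalons
        if new_assignment == assignment:     # no object changed class: stop
            return new_assignment, etalons
        assignment = new_assignment
```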
Iterative Optimisation
Agglomerative Clustering - Dendrogram
Agglomerative Clustering - Algorithm 1. Examining all pairs of objects, select the pair with the highest degree of similarity and make that pair a cluster • replace the component members of the cluster with the cluster definition 2. Repeat step 1 on the collection of objects and clusters until all objects have been reduced to a single cluster • Result: a binary tree whose leaf nodes are instances and whose internal nodes are clusters of increasing size
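A sketch of this bottom-up procedure, assuming numeric instances and using the distance between cluster means as the (dis)similarity measure, one of the options from the Numeric Taxonomy slide; the nested-tuple result plays the role of the dendrogram:

```python
from math import dist

def centroid(points):
    return tuple(sum(vals) / len(points) for vals in zip(*points))

def agglomerate(objects):
    # Each cluster: (tree, member points). Initially every object is a leaf.
    clusters = [(obj, [obj]) for obj in objects]
    while len(clusters) > 1:
        # Select the pair of clusters whose centroids are closest.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(centroid(clusters[ij[0]][1]),
                                       centroid(clusters[ij[1]][1])))
        merged = ((clusters[i][0], clusters[j][0]),
                  clusters[i][1] + clusters[j][1])
        # Replace the two component clusters with their cluster definition.
        clusters = [c for n, c in enumerate(clusters) if n not in (i, j)] + [merged]
    return clusters[0][0]   # binary tree: leaves are instances, nodes are merges

print(agglomerate([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]))
```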