
Additional Material: Mahalanobis Distance






Presentation Transcript


  1. Additional Material: Mahalanobis Distance

  2. Interpretation of a Covariance Matrix • A univariate normal distribution has the density function f(x) = 1/(√(2π) σ) · exp(−(x − μ)² / (2σ²)) • A multivariate normal distribution has the density function f(x) = 1/((2π)^(d/2) |Σ|^(1/2)) · exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ))

  3. Variance and Standard Deviation • Univariate Normal/Gaussian Distribution: The variance/standard deviation provides information about the height of the mode and the width of the curve.

  4. Interpretation of a Covariance Matrix • The variance/standard deviation relates the spread of the distribution to the spread of a standard normal distribution • The covariance matrix relates the spread of the distribution to the spread of a multivariate standard normal distribution • Example: bivariate normal distribution • Question: Is there a multivariate analog of standard deviation?

  5. Eigenvalue Decomposition • Yields an analog of standard deviation. • Let S be a symmetric, positive definite matrix (e.g. a covariance matrix). Then S can be written as S = U Λ Uᵀ with an orthogonal matrix U of eigenvectors and a diagonal matrix Λ of positive eigenvalues; the matrix square root S^(1/2) = U Λ^(1/2) Uᵀ plays the role of a standard deviation.
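The decomposition on this slide can be sketched in a few lines of numpy; the 2×2 covariance matrix below is a hypothetical example, not from the slides:

```python
import numpy as np

# Hypothetical 2x2 covariance matrix (assumption, for illustration)
S = np.array([[4.0, 1.2],
              [1.2, 1.0]])

# Eigenvalue decomposition of the symmetric matrix: S = U @ diag(lam) @ U.T
lam, U = np.linalg.eigh(S)

# The matrix square root S^(1/2) = U @ diag(sqrt(lam)) @ U.T acts as a
# multivariate analog of the standard deviation: it maps the unit sphere
# of a standard normal distribution onto the ellipsoid of this one.
S_sqrt = U @ np.diag(np.sqrt(lam)) @ U.T

# Check: multiplying the square root by itself recovers S
print(np.allclose(S_sqrt @ S_sqrt, S))  # True
```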

  6. Eigenvalue Decomposition Special Case: Two Dimensions

  7. Eigenvalue Decomposition

  8. Eigenvalue Decomposition

  9. Eigenvalue Decomposition Special Case: Two Dimensions

  10. Cluster-Specific Distance Functions The similarity of a data point to a prototype depends on their distance. • If the cluster prototype is a simple cluster center, a general distance measure can be defined on the data space. In this case the Euclidean distance is most often used due to its rotation invariance. It leads to (hyper-)spherical clusters. • However, more flexible clustering approaches (with size and shape parameters) use cluster-specific distance functions. The most common approach is to use a Mahalanobis distance with a cluster-specific covariance matrix. The covariance matrix comprises shape and size parameters. The Euclidean distance is the special case that results for an identity covariance matrix Σ = 1.
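A minimal sketch of the cluster-specific Mahalanobis distance described above; the center and covariance values are hypothetical:

```python
import numpy as np

def mahalanobis(x, center, cov):
    """Mahalanobis distance of point x to a cluster prototype with covariance cov."""
    diff = np.asarray(x) - np.asarray(center)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

center = np.array([0.0, 0.0])
# Cluster-specific covariance: cluster elongated along the x-axis (hypothetical)
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])

# Two points at equal Euclidean distance get different Mahalanobis distances:
d_along = mahalanobis([2.0, 0.0], center, cov)   # along the long axis -> 1.0
d_across = mahalanobis([0.0, 2.0], center, cov)  # across it           -> 2.0

# With the identity matrix the Mahalanobis distance reduces to Euclidean:
d_eucl = mahalanobis([3.0, 4.0], center, np.eye(2))  # -> 5.0
```

The covariance matrix thus encodes the cluster's shape and size: directions of large variance count as "close", directions of small variance as "far".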

  11. Additional Material: Neuro-Fuzzy Systems

  12. Example: Automatic Transmission Task: improving the VW automatic transmission - no additional sensors - individual adaptation of the shifting behavior Idea (1995): the vehicle "observes" the driver and classifies him by sportiness - calm, normal, sporty Determination of a sport factor from [0, 1] - nervous → calming the driver Test vehicle: - various drivers, classification by experts (co-drivers) - simultaneous measurements: • speed, • position, • speed of the accelerator pedal, • steering wheel angle, ... (14 attributes).

  13. Modeling Vague Information with Fuzzy Sets Degree of membership

  14. Example: Continuously Adapting Gear Shift Schedule in the VW New Beetle

  15. Fuzzy controller with 7 rules Optimized program AG 4 Runtime 80 ms; a new sport factor is determined 12 times per second In series production within the VW group Learning of rule systems with the help of artificial neural networks, optimization with evolutionary algorithms

  16. Example: Fuzzy Database TOP MANAGEMENT TALENT BANK SUCCESSOR MANAGEMENT Successors for top management positions

  17. Example: Automated Sensor-Based Landing

  18. Neuro-Fuzzy Systems • Building a fuzzy system requires • prior knowledge (fuzzy rules, fuzzy sets) • manual tuning: time consuming and error-prone • Therefore: support this process by learning • learning fuzzy rules (structure learning) • learning fuzzy sets (parameter learning) Approaches from neural networks can be used

  19. Example: Prognosis of the Daily Proportional Changes of the DAX at the Frankfurt Stock Exchange (Siemens) Database: time series from 1986 - 1997

  20. Fuzzy Rules in Finance • Trend rule: IF DAX = decreasing AND US-$ = decreasing THEN DAX prediction = decrease WITH high certainty • Turning point rule: IF DAX = decreasing AND US-$ = increasing THEN DAX prediction = increase WITH low certainty • Delay rule: IF DAX = stable AND US-$ = decreasing THEN DAX prediction = decrease WITH very high certainty • In general: IF x1 is μ1 AND x2 is μ2 THEN y = h WITH weight k
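A rule like the trend rule above can be evaluated with simple membership functions; the triangular shapes, their ranges, and the certainty weight below are assumptions chosen for illustration, not the values used in the Siemens system:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets on a daily percentage change
decreasing = lambda x: tri(x, -3.0, -1.5, 0.0)
stable     = lambda x: tri(x, -0.5,  0.0, 0.5)
increasing = lambda x: tri(x,  0.0,  1.5, 3.0)

def rule(antecedent_degrees, weight):
    """Rule activity: min over the antecedent degrees, scaled by the certainty weight."""
    return weight * min(antecedent_degrees)

dax, usd = -1.0, -1.0  # today's changes in percent (hypothetical inputs)

# Trend rule: IF DAX = decreasing AND US-$ = decreasing THEN decrease (high certainty)
trend = rule([decreasing(dax), decreasing(usd)], weight=0.9)
print(round(trend, 3))  # 0.6
```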

  21. Classical Probabilistic Expert Opinion Pooling Method The decision maker (DM) analyzes each source (human expert, data + forecasting model) in terms of (1) statistical accuracy and (2) informativeness, by asking the source to assess quantities (quantile assessment) The DM obtains a "weight" for each source The DM "eliminates" bad sources The DM determines the weighted sum of the source outputs Determination of the "return on investment"

  22. With E experts and R quantiles for N quantities, each expert has to assess R·N values. [The slide showed the formulas for statistical accuracy, the information score, the weight of expert e, and the output (return on investment).]

  23. Formal Analysis Sources of information: R1 rule set given by expert 1, R2 rule set given by expert 2, D data set (time series) Operator schema: fuse(R1, R2) fuses two rule sets, induce(D) induces a rule set from D, revise(R, D) revises a rule set R by D

  24. Formal Analysis • Strategies: • fuse(fuse(R1, R2), induce(D)) • revise(fuse(R1, R2), D) • fuse(revise(R1, D), revise(R2, D)) • Technique: Neuro-Fuzzy Systems • Nauck, Klawonn, Kruse: Foundations of Neuro-Fuzzy Systems, Wiley, 1997 • SENN (commercial neural network environment, Siemens)

  25. Neuro-Fuzzy Architecture

  26. From Rules to Neural Networks 1. Evaluation of membership degrees 2. Evaluation of rules (rule activity) 3. Accumulation of rule inputs and normalization
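The three evaluation steps above can be sketched as a forward pass; the Gaussian membership functions, the two-rule base, and the consequent values are assumptions for illustration:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def forward(x1, x2):
    # 1. Evaluation of membership degrees (hypothetical fuzzy sets)
    small_x1, large_x1 = gauss(x1, 0.0, 1.0), gauss(x1, 2.0, 1.0)
    small_x2, large_x2 = gauss(x2, 0.0, 1.0), gauss(x2, 2.0, 1.0)

    # 2. Evaluation of rules (rule activity; product as AND)
    activities = np.array([small_x1 * small_x2,   # R1: ... THEN y = 0
                           large_x1 * large_x2])  # R2: ... THEN y = 1
    h = np.array([0.0, 1.0])  # rule consequents (hypothetical)

    # 3. Accumulation of rule outputs and normalization
    return float(activities @ h / activities.sum())

# Halfway between both prototypes the two rules fire equally -> output 0.5
print(forward(1.0, 1.0))  # 0.5
```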

  27. The Semantics-Preserving Learning Algorithm Reduction of the dimension of the weight space: 1. Membership functions of different inputs share their parameters 2. Membership functions of the same input variable are not allowed to pass each other; they must keep their original order Benefits: the optimized rule base can still be interpreted, and the number of free parameters is reduced

  28. Return-on-Investment Curves of the Different Models Validation data from March 01, 1994 until April 1997

  29. Neuro-Fuzzy Systems in Data Analysis • Neuro-Fuzzy System: • System of linguistic rules (fuzzy rules). • Not rules in a logical sense, but function approximation. • Fuzzy rule = vague prototype / sample. • Neuro-Fuzzy-System: • Adding a learning algorithm inspired by neural networks. • Feature: local adaptation of parameters.

  30. A Neuro-Fuzzy System • is a fuzzy system trained by heuristic learning techniques derived from neural networks • can be viewed as a 3-layer neural network with fuzzy weights and special activation functions • is always interpretable as a fuzzy system • uses constraint learning procedures • is a function approximator (classifier, controller)

  31. Learning Fuzzy Rules • Cluster-oriented approaches => find clusters in data, each cluster is a rule • Hyperbox-oriented approaches => find clusters in the form of hyperboxes • Structure-oriented approaches => use predefined fuzzy sets to structure the data space, pick rules from grid cells

  32. Hyperbox-Oriented Rule Learning Search for hyperboxes in the data space Create fuzzy rules by projecting the hyperboxes Fuzzy rules and fuzzy sets are created at the same time Usually very fast

  33. Hyperbox-Oriented Rule Learning • Detect hyperboxes in the data, example: XOR function • Advantage over fuzzy cluster analysis: • No loss of information when hyperboxes are represented as fuzzy rules • Not all variables need to be used; don't-care variables can be discovered • Disadvantage: each fuzzy rule uses individual fuzzy sets, i.e. the rule base is complex.

  34. Structure-Oriented Rule Learning Provide initial fuzzy sets for all variables. The data space is partitioned by a fuzzy grid. Detect all grid cells that contain data (approach by Wang/Mendel 1992). Compute the best consequents and select the best rules (extension by Nauck/Kruse 1995, NEFCLASS model).
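The grid-based detection step can be sketched as follows. This is a simplified illustration of the Wang/Mendel idea, not the full method: the fuzzy partition (three triangles on [0, 1]) and the toy data set are assumptions, and the consequent is chosen by a simple majority over the classes in each cell:

```python
from collections import Counter, defaultdict

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Predefined fuzzy partition per variable: three triangles on [0, 1] (assumption)
SETS = {"low": (-0.5, 0.0, 0.5), "med": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

def best_label(x):
    """Fuzzy set with the highest membership degree for value x."""
    return max(SETS, key=lambda name: tri(x, *SETS[name]))

def learn_rules(data):
    """One rule candidate per occupied grid cell; consequent = majority class."""
    cells = defaultdict(Counter)
    for (x1, x2), cls in data:
        cells[(best_label(x1), best_label(x2))][cls] += 1
    return {ante: cnt.most_common(1)[0][0] for ante, cnt in cells.items()}

data = [((0.1, 0.9), "c1"), ((0.2, 0.8), "c1"), ((0.9, 0.9), "c2")]
print(learn_rules(data))
# {('low', 'high'): 'c1', ('high', 'high'): 'c2'}
```

Note that all rules share the same fuzzy sets, which is exactly the advantage over the hyperbox approach mentioned on the next slide.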

  35. Structure-Oriented Rule Learning • Simple: Rule base available after two cycles through the training data • 1. Cycle: discover all antecedents • 2. Cycle: determine best consequents • Missing values can be handled • Numeric and symbolic attributes can be processed at the same time (mixed fuzzy rules) • Advantage: All rules share the same fuzzy sets • Disadvantage: Fuzzy sets must be given

  36. Learning Fuzzy Sets • Gradient descent procedures are only applicable if differentiation is possible, e.g. for Sugeno-type fuzzy systems. • Otherwise: special heuristic procedures that do not use gradient information. • The learning algorithms are based on the idea of backpropagation.

  37. Learning Fuzzy Sets: Constraints • Mandatory constraints: • Fuzzy sets must stay normal and convex • Fuzzy sets must not exchange their relative positions (they must not "pass" each other) • Fuzzy sets must always overlap • Optional constraints: • Fuzzy sets must stay symmetric • Degrees of membership must add up to 1.0 • The learning algorithm must enforce these constraints.
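The "must not pass each other" constraint can be enforced by a simple projection step after each parameter update. A minimal sketch, assuming the fuzzy sets are parameterized by their centers and a hypothetical minimum gap:

```python
def enforce_order(centers, min_gap=1e-3):
    """Restore the original left-to-right order of membership-function
    centers after a learning step (they must not 'pass' each other)."""
    fixed = list(centers)
    for i in range(1, len(fixed)):
        # If a center was pushed past its left neighbor, clip it back
        if fixed[i] < fixed[i - 1] + min_gap:
            fixed[i] = fixed[i - 1] + min_gap
    return fixed

# A gradient step moved the middle center past its left neighbor:
print(enforce_order([0.0, -0.2, 1.0]))  # [0.0, 0.001, 1.0]
```

Constraints like normality and convexity can be enforced analogously, e.g. by clipping a fuzzy set's left and right feet to its peak.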

  38. Example: Medical Diagnosis Results from patients tested for breast cancer (Wisconsin Breast Cancer Data). Decision support: Do the data indicate a malignant or a benign case? A surgeon must be able to check the classification for plausibility. We are looking for a simple and interpretable classifier: knowledge discovery.

  39. Example: WBC Data Set • 699 cases (16 cases have missing values). • 2 classes: benign (458), malignant (241). • 9 attributes with values from {1, ..., 10} (ordinal scale, but usually interpreted as a numerical scale). • Experiment: x3 and x6 are interpreted as nominal attributes. • x3 and x6 are usually seen as "important" attributes.

  40. Applying NEFCLASS-J • Tool for developing neuro-fuzzy classifiers • Written in Java • Free version for research available • Project started at the Neuro-Fuzzy Group of the University of Magdeburg, Germany

  41. NEFCLASS: Neuro-Fuzzy Classifier Output variables (class labels) Unweighted connections Fuzzy rules Fuzzy sets (antecedents) Input variables (attributes)

  42. NEFCLASS: Features Automatic induction of a fuzzy rule base from data Training of several forms of fuzzy sets Processing of numeric and symbolic attributes Treatment of missing values (no imputation) Automatic pruning strategies Fusion of expert knowledge and knowledge obtained from data

  43. Representation of Fuzzy Rules c1 c2 R1 R2 small large large x y Example: 2 rules R1: if x is large and y is small, then class is c1. R2: if x is large and y is large, then class is c2. The connections x→R1 and x→R2 are linked. The fuzzy set large is a shared weight. That means the term large always has the same meaning in both rules.

  44. 1. Training Step: Initialisation c1 c2 x y Specify initial fuzzy partitions for all input variables

  45. 2. Training Step: Rule Base Algorithm:
  for (all patterns p) do
    find antecedent A such that A(p) is maximal;
    if (A ∉ L) then add A to L;
  end;
  for (all antecedents A ∈ L) do
    find best consequent C for A;
    create rule base candidate R = (A, C);
    determine the performance of R;
    add R to B;
  end;
  select a rule base from B;
  Variations: fuzzy rule bases can also be created by using prior knowledge, fuzzy cluster analysis, fuzzy decision trees, genetic algorithms, ...

  46. Selection of a Rule Base • Order rules by performance. • Either select the best r rules or the best r/m rules per class. • r is either given or is determined automatically such that all patterns are covered.
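The per-class selection can be sketched as follows; the rule representation (a dict with class and performance fields) and the candidate values are assumptions for illustration:

```python
from collections import defaultdict

def select_rules(candidates, r_per_class):
    """Keep the best r rules per class, ordered by performance (sketch)."""
    by_class = defaultdict(list)
    # Walk through the candidates from best to worst performance
    for rule in sorted(candidates, key=lambda r: r["perf"], reverse=True):
        if len(by_class[rule["cls"]]) < r_per_class:
            by_class[rule["cls"]].append(rule)
    return [r for rules in by_class.values() for r in rules]

candidates = [
    {"cls": "c1", "perf": 0.9}, {"cls": "c1", "perf": 0.4},
    {"cls": "c1", "perf": 0.7}, {"cls": "c2", "perf": 0.6},
]
selected = select_rules(candidates, r_per_class=2)
# keeps the two best c1 rules (0.9 and 0.7) and the single c2 rule
```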

  47. Rule Base Induction c1 c2 R1 R2 R3 x y NEFCLASS uses a modified Wang-Mendel procedure
