
A Similarity Evaluation Technique for Data Mining with Ensemble of Classifiers

International Workshop on Similarity Search, 1-2 September 1999, Florence, Italy. A Similarity Evaluation Technique for Data Mining with Ensemble of Classifiers. Authors: Seppo Puuronen (sepi@jytko.jyu.fi), Vagan Terziyan (vagan@jytko.jyu.fi).



Presentation Transcript


  1. International Workshop on Similarity Search A Similarity Evaluation Technique for Data Mining with Ensemble of Classifiers Seppo Puuronen, Vagan Terziyan 1-2 September, 1999 Florence (Italy)

  2. Authors Seppo Puuronen sepi@jytko.jyu.fi Vagan Terziyan vagan@jytko.jyu.fi Department of Computer Science and Information Systems University of Jyvaskyla FINLAND Department of Artificial Intelligence Kharkov State Technical University of Radioelectronics, UKRAINE

  3. Contents • The Research Problem and Goal • Basic Concepts • External Similarity Evaluation • Evaluation of Classifiers Competence • An Example • Internal Similarity Evaluation • Conclusions

  4. The Research Problem During the past several years, in a variety of application domains, researchers in machine learning, computational learning theory, pattern recognition and statistics have tried to combine efforts to learn how to create and combine an ensemble of classifiers. The primary goal of combining several classifiers is to obtain a more accurate prediction than can be obtained from any single classifier alone.

  5. Goal • The goal of this research is to develop a simple similarity evaluation technique to be used for classification based on an ensemble of classifiers • Classification here means finding an appropriate class, among the available ones, for a certain instance, based on the classifications produced by an ensemble of classifiers

  6. Basic Concepts: Training Set (TS) • TS of an ensemble of classifiers is a quadruple <D, C, S, P> • D is the set of instances D1, D2, ..., Dn to be classified; • C is the set of classes C1, C2, ..., Cm that are used to classify the instances; • S is the set of classifiers S1, S2, ..., Sr, which select classes to classify the instances; • P is the set of semantic predicates that define relationships between D, C, and S

  7. Basic Concepts: Semantic Predicate P
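The formula on this slide is not preserved in the transcript, but the example slides later use the values -1, 0, and 1. On that reading, P can be sketched as a function P(D, C, S) → {-1, 0, +1}: +1 when classifier S selects class C for instance D, -1 when it refuses that class, and 0 when it abstains. A minimal sketch (the dict representation and `predicate` helper are illustrative, not from the slides):

```python
# Semantic predicate P(D, C, S) -> {-1, 0, +1}:
#   +1: classifier S selects class C for instance D
#   -1: classifier S refuses class C for instance D
#    0: classifier S neither selects nor refuses
# Stored sparsely as a dict keyed by (instance, class, classifier).
P = {
    ("D1", "C4", "S2"): 1,   # H.R. selects Virtual Reality for Paper 1
    ("D1", "C2", "S2"): -1,  # H.R. refuses Analytical Technique
    ("D1", "C1", "S2"): 0,   # H.R. abstains on AI and Intelligent Systems
}

def predicate(instance, cls, classifier):
    """Look up the vote; treat a missing entry as abstention (0)."""
    return P.get((instance, cls, classifier), 0)
```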

  8. Problem 1: Deriving External Similarity Values [Diagram: classes, instances, classifiers]

  9. External Similarity Values External Similarity Values (ESV) are binary relations DC, SC, and SD between the elements of (sub)sets of D and C; S and C; and S and D. ESV are based on the total support among all the classifiers voting for (or refusing to vote for) the appropriate classification
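One plain way to read "total support among all the classifiers" for the DC relation is a sum of the predicate values over the classifier set; this is an assumption, since the slide's formula was an image and is not preserved. A sketch with NumPy, using the "Paper 1" votes from slide 28 (array shape: instances × classifiers × classes):

```python
import numpy as np

# P[i, k, j] in {-1, 0, +1}: vote of classifier k on class j for instance i
# (the "Paper 1" vote matrix from slide 28, rows S1..S4, columns C1..C5)
P = np.array([[[ 1, -1, -1,  0, -1],
               [ 0, -1,  0,  1, -1],
               [ 0,  0, -1,  1,  0],
               [ 1, -1,  0,  0,  1]]])

# DC(i, j): total support, over classifiers, for "class j fits instance i"
DC = P.sum(axis=1)
print(DC[0].tolist())  # -> [2, -3, -2, 2, -1]
```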

  10. Problem 2: Deriving Internal Similarity Values [Diagram: classes, instances, classifiers]

  11. Internal Similarity Values Internal Similarity Values (ISV) are binary relations between two subsets of D, two subsets of C, and two subsets of S. ISV are based on the total support among all the classifiers voting for (or refusing to vote for) the appropriate connection

  12. Why Do We Need Similarity Values (or a Distance Measure)? • Distance between instances is used by classifiers to recognize the nearest neighbors of any classified instance • Distance between classes is necessary to define the misclassification error during the learning phase • Distance between classifiers is useful to evaluate the weights of all classifiers so that they can be integrated by weighted voting

  13. Deriving External Relation DC: How Well a Class Fits an Instance [Diagram: classes, instances, classifiers]

  14. Deriving External Relation SC: Measures a Classifier's Competence in the Area of Classes • The value of the relation (Sk, Cj) represents the total support that the classifier Sk obtains when selecting (or refusing to select) the class Cj to classify all the instances.

  15. Example of SC Relation [Diagram: classes, instances, classifiers]

  16. Deriving External Relation SD: Measures a Classifier's "Competence" in the Area of Instances • The value of the relation (Sk, Di) represents the total support that the classifier Sk receives when selecting (or refusing to select) all the classes to classify the instance Di.
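Slides 14 and 16 give the intended reading of SC and SD but their formulas are not preserved. One plausible formalization, stated here as an assumption, scores each vote by its agreement with the rest of the ensemble on the same (instance, class) cell, then sums over instances (for SC) or over classes (for SD):

```python
import numpy as np

# P[i, k, j]: vote of classifier k on class j for instance i
# (the "Paper 1" vote matrix from slide 28)
P = np.array([[[ 1, -1, -1,  0, -1],
               [ 0, -1,  0,  1, -1],
               [ 0,  0, -1,  1,  0],
               [ 1, -1,  0,  0,  1]]])

# Support for one vote: product of the vote with the sum of the OTHER
# classifiers' votes on the same cell -- an assumed formalization.
totals = P.sum(axis=1, keepdims=True)   # ensemble sum per (instance, class)
support = P * (totals - P)              # exclude the classifier's own vote

SC = support.sum(axis=0)   # classifiers x classes: competence in the class area
SD = support.sum(axis=2)   # instances x classifiers: competence per instance
print(SD[0].tolist())      # per-classifier support on "Paper 1"
```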

  17. Example of SD Relation [Diagram: instances, classes, classifiers]

  18. Standardizing External Relations to the Interval [0, 1] • n is the number of instances • m is the number of classes • r is the number of classifiers
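The standardization formulas themselves are lost in the transcript; a linear rescaling from the raw support range onto [0, 1] is one natural choice and is shown here purely as an assumption. For the DC relation, a plain sum over r classifiers with votes in {-1, 0, +1} lies in [-r, +r]:

```python
def standardize(raw_support, max_abs):
    """Map a raw support sum in [-max_abs, +max_abs] linearly onto [0, 1].

    Assumed rescaling: for DC, max_abs would be r (classifiers); for a
    per-class SC sum it would scale with n (instances), and for a
    per-instance SD sum with m (classes).
    """
    return (raw_support + max_abs) / (2 * max_abs)

r = 4                       # classifiers in the example
print(standardize(2, r))    # raw DC support of 2 -> 0.75
print(standardize(-3, r))   # raw DC support of -3 -> 0.125
```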

  19. Competence of a Classifier [Diagram: a classifier linked to the conceptual pattern of class definition Cj (competence in the area of classes) and to the conceptual pattern of features Di (competence in the instance area)]

  20. Classifier’s Evaluation: Competence Quality in an Instance Area - a measure of the “classification abilities” of a classifier relative to the instances, from the support point of view

  21. Classifier’s Evaluation: Competence Quality in the Area of Classes - a measure of the “classification abilities” of a classifier in the correct use of classes, from the support point of view

  22. Quality Balance Theorem The evaluation of a classifier’s competence (ranking, weighting, quality evaluation) does not depend on the competence area, whether the “real world of instances” or the “conceptual world of classes”, because both competence values are always equal

  23. Proof [derivation formulas not preserved in the transcript]

  24. An Example • Let us suppose that four classifiers have to classify three papers submitted to a conference with five conference topics • The classifiers should define their selection of appropriate conference topic for every paper • The final goal is to obtain a cooperative result of all the classifiers concerning the “paper - topic” relation

  25. C (classes) Set in the Example Classes (conference topics) and notation: AI and Intelligent Systems - C1; Analytical Technique - C2; Real-Time Systems - C3; Virtual Reality - C4; Formal Methods - C5

  26. S (classifiers) Set in the Example Classifiers (“referees”) and notation: A.B. - S1; H.R. - S2; M.L. - S3; R.S. - S4

  27. D (instances) Set in the Example Instances (submitted papers) and notation: “Paper 1” - D1; “Paper 2” - D2; “Paper 3” - D3

  28. Selections Made for the Instance “Paper 1” (D1)
  P(D1, C, S): C1 C2 C3 C4 C5
  S1: 1 -1 -1 0 -1
  S2: 0 -1 0 1 -1
  S3: 0 0 -1 1 0
  S4: 1 -1 0 0 1
  Classifier H.R. (S2) considers “Paper 1” to fit the topic Virtual Reality (C4) and refuses to include it in Analytical Technique (C2) or Formal Methods (C5). H.R. neither chooses nor refuses the AI and Intelligent Systems (C1) and Real-Time Systems (C3) topics for “Paper 1”.

  29. Selections Made for the Instance “Paper 2” (D2)
  P(D2, C, S): C1 C2 C3 C4 C5
  S1: -1 0 -1 0 1
  S2: 1 -1 -1 0 0
  S3: 1 -1 0 1 1
  S4: -1 0 0 1 0

  30. Selections Made for the Instance “Paper 3” (D3)
  P(D3, C, S): C1 C2 C3 C4 C5
  S1: 1 0 1 -1 0
  S2: 0 1 0 -1 1
  S3: -1 -1 1 -1 1
  S4: -1 -1 1 -1 1

  31. Result of Cooperative Paper Classification Based on DC Relation
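The slide's own result table is not preserved, but under the plain-sum reading of DC the cooperative supports can be recomputed from the three vote matrices of slides 28-30 alone (the slides may additionally break ties, e.g. by classifier weighting; that step is not reproduced here):

```python
import numpy as np

topics = ["C1", "C2", "C3", "C4", "C5"]
# P[i, k, j]: slides 28-30, rows S1..S4, columns C1..C5
P = np.array([
    [[ 1, -1, -1,  0, -1], [ 0, -1,  0,  1, -1],
     [ 0,  0, -1,  1,  0], [ 1, -1,  0,  0,  1]],   # Paper 1
    [[-1,  0, -1,  0,  1], [ 1, -1, -1,  0,  0],
     [ 1, -1,  0,  1,  1], [-1,  0,  0,  1,  0]],   # Paper 2
    [[ 1,  0,  1, -1,  0], [ 0,  1,  0, -1,  1],
     [-1, -1,  1, -1,  1], [-1, -1,  1, -1,  1]],   # Paper 3
])

DC = P.sum(axis=1)  # total support per (paper, topic)
for i, row in enumerate(DC, start=1):
    best = [topics[j] for j in np.flatnonzero(row == row.max())]
    print(f"Paper {i}: supports {row.tolist()}, most supported: {best}")
```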

  32. Results of Classifiers’ Competence Evaluation (based on SC and SD sets) … Proposals obtained from the classifier A.B. should be accepted if they concern the topics Real-Time Systems and Virtual Reality or the instances “Paper 1” and “Paper 3”, and these proposals should be rejected if they concern AI and Intelligent Systems or “Paper 2”. In some cases it seems possible to accept classification proposals from the classifier A.B. if they concern Analytical Technique and Formal Methods. All four classifiers are expected to give acceptable proposals concerning “Paper 3”, and only the suggestion of the classifier M.L. can be accepted if it concerns “Paper 2” ...

  33. Deriving Internal Similarity Values • Via one intermediate set • Via two intermediate sets

  34. Internal Similarity for Classifiers: Instance-Based Similarity [Diagram: instances, classifiers]

  35. Internal Similarity for Classifiers: Class-Based Similarity [Diagram: classes, classifiers]

  36. Internal Similarity for Classifiers: Class-Instance-Based Similarity [Diagram: classes, instances, classifiers]
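The formulas behind slides 34-36 are not preserved. As one illustrative (assumed) instance-based measure, the similarity of two classifiers can be taken as the mean product of their votes over all (instance, class) cells, rescaled from [-1, 1] to [0, 1], so that identical voting gives 1 and opposite voting gives 0:

```python
import numpy as np

# P[i, k, j]: votes on "Paper 3" (slide 30), rows S1..S4, columns C1..C5
P = np.array([[[ 1,  0,  1, -1,  0],
               [ 0,  1,  0, -1,  1],
               [-1, -1,  1, -1,  1],
               [-1, -1,  1, -1,  1]]])

def classifier_similarity(P, k, l):
    """Mean product of votes of classifiers k and l over all
    (instance, class) cells, mapped from [-1, 1] onto [0, 1].
    An assumed, illustrative measure, not the slides' formula."""
    agreement = np.mean(P[:, k, :] * P[:, l, :])
    return (agreement + 1) / 2

print(classifier_similarity(P, 2, 3))  # S3 and S4 voted identically -> 1.0
```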

  37. Conclusion • We discussed methods of deriving the total support of each binary similarity relation. This can be used, for example, to derive the most supported classification result and to evaluate the classifiers according to their competence • We also discussed relations between elements taken from the same set: instances, classes, or classifiers. This can be used, for example, to divide classifiers into groups of similar competence relative to the instance-class environment
