
On Reducing Classifier Granularity in Mining Concept-Drifting Data Streams






Presentation Transcript


  1. On Reducing Classifier Granularity in Mining Concept-Drifting Data Streams Peng Wang, H. Wang, X. Wu, W. Wang, and B. Shi Proc. of the Fifth IEEE International Conference on Data Mining (ICDM’05) Speaker: Yu Jiun Liu Date : 2006/9/26

  2. Introduction • State of the art • Incrementally updated classifiers. • Ensemble classifiers. • Model granularity • Traditional: monolithic • This paper: semantic decomposition

  3. Motivation • The model is decomposable into smaller components. • The decomposition is semantic-aware, in the sense that each component (e.g., a classification rule) carries meaning on its own, so concept drifts can be localized to the affected components.

  4. Monolithic Models • Stream: an unbounded sequence of records r1, r2, … • Attributes: A1, …, Ad • Class label: C • Window Wi: the most recent w records • Model (classifier): Ci, trained on window Wi

  5. Rule-based Models • A rule has the form P ⇒ Ci, where P is a conjunction of attribute-value conditions and Ci is a class label; a rule is valid if its support ≥ minsup and its confidence ≥ minconf • minsup = 0.3 and minconf = 0.8 • Valid rules of W1 are: • Valid rules of W3 are:
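The rule-based model on this slide can be sketched in a few lines: enumerate every conjunction of attribute-value pairs in a window and keep those meeting the support and confidence thresholds. The record format, function name, and toy data below are illustrative, not taken from the paper.

```python
from collections import Counter
from itertools import combinations

def valid_rules(window, minsup=0.3, minconf=0.8):
    """Enumerate valid rules P => c over a window of (attributes, label) records.

    P is a set of attribute-value pairs; sup(P => c) is the fraction of records
    matching both P and c, and conf(P => c) = count(P and c) / count(P).
    """
    n = len(window)
    pat_count, rule_count = Counter(), Counter()
    for attrs, label in window:
        items = sorted(attrs.items())
        # every non-empty subset of the record's attribute-value pairs is a pattern
        for k in range(1, len(items) + 1):
            for p in combinations(items, k):
                pat_count[p] += 1
                rule_count[(p, label)] += 1
    rules = []
    for (p, c), cnt in rule_count.items():
        sup, conf = cnt / n, cnt / pat_count[p]
        if sup >= minsup and conf >= minconf:
            rules.append((dict(p), c, sup, conf))
    return rules
```

With minsup = 0.3 and minconf = 0.8 as on the slide, only rules that both cover enough of the window and predict their class reliably survive.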

  6. Algorithm • Phase 1 : Initialization • Use the first w records to train all valid rules for window W1. • Construct the RS-tree and REC-tree. • Phase 2 : Update • When a record arrives, insert it into the REC-tree and update the support and confidence of the rules it matches. • Delete the oldest record and update the counts of the rules it matched.
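Phase 2 above amounts to incrementing counts for the arriving record and decrementing them for the expired one. A minimal sketch of that sliding-window bookkeeping follows; the class and method names are illustrative, and the paper's tree structures are replaced by flat counters for brevity.

```python
from collections import Counter, deque
from itertools import combinations

class SlidingRuleCounts:
    """Maintain pattern/rule counts over the most recent w records."""

    def __init__(self, w):
        self.w = w
        self.window = deque()
        self.pat = Counter()    # records matching each pattern P
        self.rule = Counter()   # records matching (P, class label)

    def _update(self, record, delta):
        attrs, label = record
        items = sorted(attrs.items())
        for k in range(1, len(items) + 1):
            for p in combinations(items, k):
                self.pat[p] += delta
                self.rule[(p, label)] += delta

    def add(self, record):
        self.window.append(record)
        self._update(record, +1)        # update rules matched by the new record
        if len(self.window) > self.w:   # delete the oldest record's contribution
            self._update(self.window.popleft(), -1)

    def conf(self, p, label):
        p = tuple(sorted(p.items()))
        return self.rule[(p, label)] / self.pat[p] if self.pat[p] else 0.0
```

Each update touches only the rules the record matches, which is the point of the fine-grained decomposition: no retraining of a monolithic model per window slide.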

  7. Data Structure

  8. RS-Tree • A prefix tree with attribute order • Each node N represents a unique rule R : P  Ci • N’ (P’  Cj) is a child node of N, iff: P ⊂ P’, |P’| = |P| + 1, and the attribute-value pair added in P’ follows every attribute of P in the attribute order
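A prefix tree over rule antecedents can be sketched as below: each child extends its parent's antecedent by one attribute-value pair under a fixed attribute order, so every antecedent is stored exactly once. This is a simplified illustration, not the paper's exact RS-tree; the field and function names are invented.

```python
class RSNode:
    """One node of a prefix tree over rule antecedents (simplified sketch)."""

    def __init__(self, item=None):
        self.item = item       # (attribute, value) pair added at this node
        self.children = {}     # item -> RSNode
        self.labels = {}       # class label -> (sup, conf) for the rule P => c

def insert(root, antecedent, label, stats, order):
    """Insert rule P => label; antecedent is a dict, order maps attribute -> rank."""
    node = root
    # walk/extend the path in the fixed attribute order
    for item in sorted(antecedent.items(), key=lambda kv: order[kv[0]]):
        node = node.children.setdefault(item, RSNode(item))
    node.labels[label] = stats
```

Because siblings share their common prefix, rules with overlapping antecedents share storage, and matching a record against all rules becomes a tree traversal rather than a scan over the rule set.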

  9. REC-Tree • Each record r is stored as a sequence of its attribute values • A node N points to a rule R in the RS-tree if the path from the root to N matches R’s antecedent P

  10. Detecting Concept Drifts • Percentage vs. the distribution of misclassified records: the overall error percentage cannot tell us which part of the classifier gives rise to the inaccuracy, whereas the distribution of misclassified records over the rules can localize the drift.
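The contrast on this slide can be made concrete: attribute each misclassified record to the rule that covered it, then flag rules whose local error rate is high. The function name and threshold below are illustrative, not the paper's exact statistical test.

```python
def drifted_rules(misclassified_by_rule, matched_by_rule, threshold=0.5):
    """Flag rules whose local error rate exceeds a threshold.

    A single global error percentage hides *where* the classifier fails;
    per-rule error rates point at the components affected by the drift.
    """
    return [r for r, m in matched_by_rule.items()
            if m and misclassified_by_rule.get(r, 0) / m > threshold]
```

Two classifiers with the same overall accuracy can thus behave very differently: one may be uniformly slightly wrong, the other broken only on the rules covering the drifted region, and only the latter's components need replacing.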

  11. Definition

  12. Finding Rule Algorithm

  13. Update Algorithm

  14. Experiments • CPU : 1.7 GHz • Memory : 256MB • Datasets : synthetic and a real-life dataset. • Synthetic : • Real-life dataset : 10,344 records and 8 dimensions.

  15. Synthetic data • 10 dimensions • Window size : 5000 • 4 dimensions changing • Effect of model updating

  16. The relation of concept drifts and

  17. Effect of rule composition

  18. Accuracy and Time • Window size : 10,000 • EC (ensemble classifier) : 10 classifiers, each trained on 1000 records. • Synthetic data.

  19. Real life data

  20. Conclusion • Overcome the effects of concept drifts. • By reducing granularity, change detection and model update can be more efficient without compromising classification accuracy.
