Presentation Transcript


  1. ACE • AUTONOMOUS CLASSIFICATION ENGINE • Gautam Bhattacharya • MUMT 621 • Winter 2012

  2. Contents • Introduction - What is ACE? • Limitations of existing systems • ACE Framework • Weka and Weka-related issues • ACE XML • Testing ACE

  3. Introduction • ACE (Autonomous Classification Engine) is a standardized classification framework designed specifically for MIR-related research. • The ACE system is designed with the dual goals of: • Increasing classification success rates • Facilitating the process of classification for users of all skill levels.

  4. Limitations of existing classification systems • General pattern-recognition software - PRTools (Matlab), Weka (Java) • General pattern-recognition frameworks can work well for some limited applications, but one inevitably encounters complications, limitations, and difficulties due to the particularities of music. • Frameworks specifically adapted to MIR include Marsyas (Tzanetakis, 1999) and M2K (Downie, 2004).

  5. ACE Framework • Choosing the best algorithm(s) for a particular application and parameterizing them effectively are not tasks that inexperienced researchers can perform optimally. • The Autonomous Classification Engine (ACE) was developed as a solution to this problem, and it tackles the problem automatically. • ACE performs optimization experiments using different dimensionality-reduction techniques, classifiers, classifier parameters, and classifier ensemble architectures. - Particular effort has been made to investigate the power of feature weighting.
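As a minimal illustration of why feature weighting matters (the data, weights, and class of distance function here are invented for demonstration and are not taken from ACE), a weighted Euclidean distance lets a nearest-neighbour classifier discount a noisy feature:

```java
public class WeightedNN {
    // Weighted Euclidean distance: features with larger weights
    // contribute more to the distance. All values are illustrative.
    static double distance(double[] a, double[] b, double[] w) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += w[i] * d * d;
        }
        return Math.sqrt(sum);
    }

    // Return the index of the training instance nearest to the query.
    static int nearest(double[][] train, double[] query, double[] w) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < train.length; i++) {
            double d = distance(train[i], query, w);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] train = {{1.0, 50.0}, {3.0, 0.0}};
        double[] query = {1.0, 0.0};
        // With equal weights, the noisy second feature dominates: index 1 wins.
        System.out.println(nearest(train, query, new double[]{1.0, 1.0}));    // 1
        // Weighting the second feature down flips the decision: index 0 wins.
        System.out.println(nearest(train, query, new double[]{1.0, 0.0001})); // 0
    }
}
```

Searching over such weights per feature is one simple form of the optimization experiments described above.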

  6. ACE Framework • ACE is a framework for using and optimizing classifiers. • ACE analyzes the effectiveness of different approaches not only in terms of classification accuracy, but also training time and classification time. • ACE also allows users to specify limits on how long the system has to arrive at a solution. • ACE may also be used directly as a classifier. • ACE makes use of classifier ensembles, which can help improve classification success rates.
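One common way an ensemble combines its member classifiers is simple majority voting; a minimal sketch (the class labels here are invented, and this is not ACE's actual combination code):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MajorityVote {
    // Combine the predictions of several classifiers by majority vote.
    static String vote(List<String> predictions) {
        Map<String, Integer> counts = new HashMap<>();
        for (String p : predictions) {
            counts.merge(p, 1, Integer::sum); // tally each predicted label
        }
        // Return the label with the highest count.
        return Collections.max(counts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        List<String> preds = Arrays.asList("jazz", "rock", "jazz");
        System.out.println(vote(preds)); // prints "jazz"
    }
}
```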

  7. Introduction • An important advantage of ACE is that it is open source and freely distributable. • ACE is also implemented in Java, which makes the framework portable across operating systems and easy to install. • ACE is built upon the Weka framework and was designed with a modular and extensible philosophy. McKay, C., R. Fiebrink, D. McEnnis, B. Li, and I. Fujinaga. 2005. ACE: A framework for optimizing music classification. Proceedings of the International Conference on Music Information Retrieval. 42–9.

  8. Weka • Weka is a collection of machine learning algorithms for data mining tasks, developed by researchers at the University of Waikato in New Zealand. • The algorithms can either be applied directly to a dataset or called from your own Java code. • Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. It is also well suited for developing new machine learning schemes.
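As a hedged sketch of calling Weka from your own Java code (this assumes weka.jar is on the classpath, and "features.arff" is a placeholder file name, not a file shipped with ACE):

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaFromJava {
    public static void main(String[] args) throws Exception {
        // Load a dataset from an ARFF file and mark the last attribute as the class.
        Instances data = DataSource.read("features.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Run 10-fold cross-validation of a C4.5 decision tree (J48).
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```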

  9. Problems with Weka • There is no good way to assign more than one class to a given instance. • A second problem is that ARFF files do not permit any logical grouping of features. • A third problem is that ARFF files do not allow any labelling or structuring of instances. • A fourth problem is that there is no way of imposing a structure on the class labels.
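A small ARFF example makes the first two problems concrete (the feature names and values are illustrative): each instance carries exactly one nominal class, and the feature attributes form one flat list with no grouping.

```
@relation genre_classification

@attribute spectral_centroid numeric
@attribute zero_crossing_rate numeric
@attribute class {jazz, rock, classical}

@data
1523.4, 0.087, jazz
812.9,  0.042, classical
```

A recording that belongs to both jazz and rock, for example, cannot be expressed directly in this format.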

  10. ACE XML • Why XML? • An important priority while developing a feature file format was to enforce a clear separation between the feature extraction and classification tasks. The file format makes it possible for any feature extractor to communicate any features of any type to any classification system. • The reusability of files is another important consideration: it could be useful to use the same set of extracted features for a variety of tasks, such as genre classification and artist identification. Similarly, it could be convenient to reuse the same model classifications with different sets of features.

  11. ACE XML • The use of two separate files is therefore proposed for what is traditionally contained in one file: McKay, C., R. Fiebrink, D. McEnnis, B. Li, and I. Fujinaga. 2005. ACE: A framework for optimizing music classification. Proceedings of the International Conference on Music Information Retrieval. 42–9.
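To illustrate the split (the tag names below are invented for illustration and do not reproduce the actual ACE XML schema), the extracted features and the model classifications could live in two files linked by an instance identifier:

```
<!-- features file: per-instance feature values only -->
<feature_file>
  <instance id="track_01.wav">
    <feature name="spectral_centroid"><value>1523.4</value></feature>
  </instance>
</feature_file>

<!-- classifications file: labels only, linked by instance id -->
<classification_file>
  <instance id="track_01.wav">
    <class>jazz</class>
    <class>bebop</class> <!-- more than one class per instance is possible -->
  </instance>
</classification_file>
```

This is what allows the same features file to be reused for different tasks, and the same classifications file to be paired with different feature sets.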

  12. ACE XML • Two additional optional files are also available for: • specifying class taxonomies • storing metadata about features, such as basic descriptions or details about the cardinality of multi-dimensional features.

  13. ACE XML McKay, C., R. Fiebrink, D. McEnnis, B. Li, and I. Fujinaga. 2005. ACE: A framework for optimizing music classification. Proceedings of the International Conference on Music Information Retrieval. 42–9.

  14. Testing ACE • ACE achieved a classification success rate of 95.6% on the five-class beat-box classification experiment using AdaBoost. • A reproduction of a previous seven-class percussion identification experiment (Tindale et al. 2004) was also performed. - Tindale's best success rate of 94.9% was improved to 96.3% by ACE, a reduction in error rate of 27.5%.
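The 27.5% figure follows from comparing error rates rather than success rates; a quick check of the arithmetic:

```java
public class ErrorReduction {
    public static void main(String[] args) {
        double errBefore = 1.0 - 0.949; // Tindale et al.: 94.9% correct -> 5.1% error
        double errAfter  = 1.0 - 0.963; // ACE: 96.3% correct -> 3.7% error
        // Relative error reduction: (0.051 - 0.037) / 0.051 ≈ 0.275
        double reduction = (errBefore - errAfter) / errBefore;
        System.out.printf("%.1f%%%n", reduction * 100); // prints "27.5%"
    }
}
```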

  15. Testing ACE • ACE was run on ten UCI datasets (Blake and Merz 1998) from a variety of research domains. McKay, C., R. Fiebrink, D. McEnnis, B. Li, and I. Fujinaga. 2005. ACE: A framework for optimizing music classification. Proceedings of the International Conference on Music Information Retrieval. 42–9.

  16. Thank you

  17. Questions
