Explore support vector machines for pattern recognition and image categorization through the maximum margin formulation. Learn about the separable case, hyperplane parameters, KKT conditions, and more. Discover applications in digital libraries, space science, web searching, and more.
Support Vector Machines • Pattern Recognition, Sergios Theodoridis and Konstantinos Koutroumbas, Second Edition • A Tutorial on Support Vector Machines for Pattern Recognition, C. J. C. Burges, Data Mining and Knowledge Discovery, 1998
Separable Case • Label the training data $\{(\mathbf{x}_i, y_i)\}$, $i = 1, \dots, l$, $y_i \in \{-1, +1\}$ • Points on the hyperplane satisfy $\mathbf{w} \cdot \mathbf{x} + b = 0$ • $\mathbf{w}$: normal to the hyperplane • $|b| / \|\mathbf{w}\|$: perpendicular distance from the hyperplane to the origin • $d_+$ ($d_-$): shortest distance from the hyperplane to the closest positive (negative) example; the margin is $d_+ + d_-$
Separable Case • [Figure: positive and negative examples separated by a hyperplane, with margins $d_+$ and $d_-$]
Separable Case • Suppose that all the training data satisfy the constraints $\mathbf{x}_i \cdot \mathbf{w} + b \geq +1$ for $y_i = +1$ (class 1) and $\mathbf{x}_i \cdot \mathbf{w} + b \leq -1$ for $y_i = -1$ (class 2) • These can be combined into one set of inequalities: $y_i(\mathbf{x}_i \cdot \mathbf{w} + b) - 1 \geq 0 \;\forall i$ • The distance of a point $\mathbf{x}$ from the hyperplane is $|\mathbf{w} \cdot \mathbf{x} + b| / \|\mathbf{w}\|$
Separable Case • The margin is $d_+ + d_- = 2 / \|\mathbf{w}\|$ • Task: compute the parameters $\mathbf{w}$, $b$ of the hyperplane that maximize the margin, i.e. minimize $\|\mathbf{w}\|^2 / 2$ subject to $y_i(\mathbf{x}_i \cdot \mathbf{w} + b) \geq 1$
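To make the margin geometry concrete, here is a minimal sketch (not from the tutorial) that fits a hard-margin linear SVM on toy 2-D data with scikit-learn and reads off $\mathbf{w}$, $b$, and the margin $2/\|\mathbf{w}\|$; using a very large C to approximate the separable case is our assumption.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes labeled +1 / -1.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

# A very large C approximates the hard-margin (separable) formulation.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("margin 2/||w|| =", 2.0 / np.linalg.norm(w))
# Every training point should satisfy y_i (w . x_i + b) >= 1 (up to tolerance).
print("constraints:", y * (X @ w + b))
```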
Separable Case • Karush-Kuhn-Tucker (KKT) conditions • $\boldsymbol{\lambda}$: vector of the Lagrange multipliers • Lagrangian function: $L_P = \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{i=1}^{l} \lambda_i \left[ y_i(\mathbf{x}_i \cdot \mathbf{w} + b) - 1 \right]$ • At the solution: $\partial L_P / \partial \mathbf{w} = 0 \Rightarrow \mathbf{w} = \sum_i \lambda_i y_i \mathbf{x}_i$; $\partial L_P / \partial b = 0 \Rightarrow \sum_i \lambda_i y_i = 0$; $\lambda_i \geq 0$; and $\lambda_i \left[ y_i(\mathbf{x}_i \cdot \mathbf{w} + b) - 1 \right] = 0$
Separable Case • Wolfe dual representation form: maximize $L_D = \sum_i \lambda_i - \frac{1}{2}\sum_{i,j} \lambda_i \lambda_j y_i y_j \,\mathbf{x}_i \cdot \mathbf{x}_j$ subject to $\lambda_i \geq 0$ and $\sum_i \lambda_i y_i = 0$
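As a hedged illustration of the dual, the sketch below (scikit-learn again, on the same toy data as above) checks the stationarity condition $\mathbf{w} = \sum_i \lambda_i y_i \mathbf{x}_i$ and the constraint $\sum_i \lambda_i y_i = 0$ recovered from the fitted model.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)

# dual_coef_ stores lambda_i * y_i for the support vectors only.
lam_y = clf.dual_coef_[0]
w_from_dual = lam_y @ X[clf.support_]
print(np.allclose(w_from_dual, clf.coef_[0]))  # True: w = sum_i lambda_i y_i x_i
print("sum_i lambda_i y_i =", lam_y.sum())     # ~0 (the dual equality constraint)
```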
Image Categorization by Learning and Reasoning with Regions • Yixin Chen, University of New Orleans • James Z. Wang, The Pennsylvania State University • Journal of Machine Learning Research 5 (2004) (Submitted 7/03; Revised 11/03; Published 8/04)
Introduction • Automatic image categorization • Difficulties • Variable & uncontrolled image conditions • Complex and hard-to-describe objects in images • Objects occluding other objects • Applications • Digital libraries, Space science, Web searching, Geographic information systems, Biomedicine, Surveillance and sensor systems, Commerce, Education
Overview • Given a set of labeled images, can a computer program learn such knowledge or semantic concepts from the implicit information about objects contained in images?
Related Work • Multiple-Instance Learning • Diverse Density Function (1998) • MI-SVM (2003) • Image Categorization • Color Histograms (1998-2001) • Subimage-based Methods (1994-2004)
Motivation • Correct categorization of an image depends on identifying multiple aspects of the image • Extension of MIL → a bag must contain a number of instances satisfying various properties
A New Formulation of Multiple-Instance Learning • Maximum margin problem in a new feature space defined by the DD function • DD-SVM • In the instance feature space, a collection of feature vectors, each of which is called an instance prototype, is determined according to DD
A New Formulation of Multiple-Instance Learning • Instance prototype: a class of instances (or regions) that is more likely to appear in bags (or images) with the specific label than in the other bags • Every bag is mapped to a point in the bag feature space • Standard SVMs are then trained in the bag feature space
Outline • Image segmentation & feature representation • DD-SVM, and extension of MIL • Experiments & result • Conclusions & future work
Image Segmentation • Partitions the image into non-overlapping blocks of size 4x4 pixels • Each feature vector consists of six features • Average color components in a block • LUV color space • Square root of the second order moment of wavelet coefficients in high-frequency bands
Image Segmentation • Daubechies-4 wavelet transform [Figure: one decomposition step maps a 4x4 block to 2x2 coefficients in each of the LL, LH, HL, HH subbands] • Moments of wavelet coefficients in various frequency bands are effective for representing texture (Unser, 1995)
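A sketch of the per-block feature extraction described on the last two slides, assuming scikit-image for the LUV conversion and PyWavelets for the transform; taking pywt's 'db2' (a 4-tap filter) as the Daubechies-4 wavelet, with periodized boundary handling, is our assumption rather than a detail from the paper.

```python
import numpy as np
import pywt
from skimage.color import rgb2luv

def block_features(rgb_image):
    """Return one 6-D feature vector per non-overlapping 4x4 block."""
    luv = rgb2luv(rgb_image)
    L = luv[..., 0]
    h, w = L.shape[0] // 4 * 4, L.shape[1] // 4 * 4
    feats = []
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            # Average color components (L, U, V) over the block.
            color = luv[i:i + 4, j:j + 4].reshape(-1, 3).mean(axis=0)
            # One decomposition step on the 4x4 L channel yields 2x2
            # coefficients in each of the LH, HL, HH bands.
            _, (cH, cV, cD) = pywt.dwt2(L[i:i + 4, j:j + 4],
                                        'db2', mode='periodization')
            # Square root of the second-order moment of each band.
            texture = [np.sqrt(np.mean(c ** 2)) for c in (cH, cV, cD)]
            feats.append(np.concatenate([color, texture]))
    return np.array(feats)
```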
Image Segmentation • k-means algorithm: cluster the feature vectors into several classes, with every class corresponding to one "region" • Adaptively select the number of clusters N by gradually increasing N until a stopping criterion is met (Wang et al., 2001)
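The adaptive choice of N might look like the following sketch, which grows the number of k-means clusters until the relative drop in distortion falls below a threshold; the exact stopping criterion of Wang et al. (2001) differs in detail, so the threshold here is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_blocks(feats, max_regions=8, rel_tol=0.05):
    """Cluster block feature vectors into regions, choosing N adaptively."""
    prev_inertia = None
    for n in range(2, max_regions + 1):
        km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(feats)
        # Stop when adding a cluster no longer reduces distortion much.
        if prev_inertia is not None and \
           (prev_inertia - km.inertia_) < rel_tol * prev_inertia:
            return km.labels_, n
        prev_inertia = km.inertia_
    return km.labels_, max_regions
```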
Image Representation • $\hat{\mathbf{x}}_j$: the mean of the set of feature vectors corresponding to each region $R_j$ • Shape properties of each region: normalized inertia of order 1, 2, 3 (Gersho, 1979)
Image Representation • The shape feature of region $R_j$ consists of its normalized inertias of order 1, 2, and 3 • An image $B_i$ is represented by its segmentation $\{R_j : j = 1, \dots, N_i\}$ and feature vectors $\{\mathbf{x}_{ij} : j = 1, \dots, N_i\}$, where each $\mathbf{x}_{ij}$ is a 9-dimensional feature vector (six block features plus three shape features)
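A hedged sketch of the normalized-inertia computation, with each region given as an array of pixel coordinates; the paper additionally normalizes by the inertia of a sphere of the same volume, which is omitted here.

```python
import numpy as np

def normalized_inertia(coords, gamma):
    """coords: (n, 2) array of pixel (row, col) positions in one region."""
    centroid = coords.mean(axis=0)
    dists = np.linalg.norm(coords - centroid, axis=1)
    # Dividing by volume^(1 + gamma/2) makes the measure scale-invariant.
    return (dists ** gamma).sum() / len(coords) ** (1.0 + gamma / 2.0)

def shape_features(coords):
    """Normalized inertia of order 1, 2, 3 for one region."""
    return np.array([normalized_inertia(coords, g) for g in (1, 2, 3)])
```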
An extension of Multiple-Instance Learning • Maximum margin formulation of MIL in a bag feature space • Constructing a bag feature space • Diverse density • Learning instance prototypes • Computing bag features
Maximum Margin Formulation of MIL in a Bag Feature Space • Basic idea of the new MIL framework: map every bag to a point in a new feature space, named the bag feature space, and train SVMs there • Standard (soft-margin) formulation in the bag feature space: minimize $\frac{1}{2}\|\mathbf{w}\|^2 + C \sum_i \xi_i$ subject to $y_i(\mathbf{w} \cdot \phi(B_i) + b) \geq 1 - \xi_i$, $\xi_i \geq 0$
Constructing a Bag Feature Space • Clues for classifier design: • What is common in positive bags and does not appear in the negative bags • Instance prototypes computed from the DD function • A bag feature space is then constructed using the instance prototypes
Diverse Density (Maron and Lozano-Perez, 1998) • A function defined over the instance space • The DD value at a point in the feature space is the probability that the point agrees with the underlying distribution of positive and negative bags
Diverse Density • It measures a co-occurrence of instances from different (diverse) positive bags
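Concretely, the noisy-or DD model of Maron and Lozano-Perez can be sketched as below; the per-feature scaling weights that the full model also learns are omitted here for brevity.

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags):
    """t: candidate point (d,); each bag is an (n_i, d) array of instances."""
    dd = 1.0
    for bag in pos_bags:
        # Noisy-or: the bag is "explained" if any instance is close to t.
        p = 1.0 - np.prod(1.0 - np.exp(-np.sum((bag - t) ** 2, axis=1)))
        dd *= p
    for bag in neg_bags:
        # A negative bag should have no instance close to t.
        dd *= np.prod(1.0 - np.exp(-np.sum((bag - t) ** 2, axis=1)))
    return dd
```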
Learning Instance Prototypes • An instance prototype represents a class of instances that is more likely to appear in positive bags than in negative bags • Learning instance prototypes then becomes an optimization problem: finding local maximizers of the DD function in a high-dimensional feature space
Learning Instance Prototypes • How do we find the local maximizers? • Start an optimization at every instance in every positive bag • Constraints: the prototypes need to be distinct from each other and have large DD values (see the sketch below)
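A sketch of this search, reusing diverse_density from the previous snippet: a scipy quasi-Newton optimizer is started at every positive instance, and the resulting maximizers are kept in order of DD value if they are distinct. The separation threshold is an illustrative assumption, not the paper's setting.

```python
import numpy as np
from scipy.optimize import minimize

def learn_prototypes(pos_bags, neg_bags, min_sep=1e-3):
    """Return distinct local maximizers of the DD function."""
    starts = np.vstack(pos_bags)
    candidates = []
    for x0 in starts:
        # Maximize log DD by minimizing its negative (gradient-free start).
        res = minimize(lambda t: -np.log(diverse_density(t, pos_bags,
                                                         neg_bags) + 1e-300),
                       x0, method="L-BFGS-B")
        candidates.append((diverse_density(res.x, pos_bags, neg_bags), res.x))
    # Keep maximizers with the largest DD values that are mutually distinct.
    candidates.sort(key=lambda c: -c[0])
    prototypes = []
    for dd_val, t in candidates:
        if all(np.linalg.norm(t - p) > min_sep for p in prototypes):
            prototypes.append(t)
    return prototypes
```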
Computing Bag Features • Let $\{\mathbf{t}_k : k = 1, \dots, m\}$ be the collection of instance prototypes • Bag features: $\phi(B_i) = \left[\min_j \|\mathbf{x}_{ij} - \mathbf{t}_1\|, \dots, \min_j \|\mathbf{x}_{ij} - \mathbf{t}_m\|\right]^T$
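In code, the bag feature map can be sketched as each coordinate being the distance from a prototype to the closest instance in the bag; the paper additionally weights each feature dimension with scales learned alongside the prototype, which this sketch leaves out.

```python
import numpy as np

def bag_features(bag, prototypes):
    """bag: (n, d) instance array; prototypes: list of d-vectors."""
    return np.array([np.min(np.linalg.norm(bag - t, axis=1))
                     for t in prototypes])
```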
Experimental Setup for Image Categorization • COREL Corp.: 2,000 images, 20 image categories • JPEG format, size 384×256 (or 256×384) • Each category is randomly divided into a training set and a test set (50/50) • The SVMLight software [Joachims, 1999] is used to train the SVMs
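A sketch of one run of this setup, substituting scikit-learn's SVC for the SVMLight package used in the paper; the kernel and parameter choices here are illustrative assumptions, not the published configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def run_category_experiment(bag_feats, labels, seed=0):
    """bag_feats: (n_images, n_prototypes); labels: integer category ids."""
    # 50/50 split within each category, as in the experimental setup.
    X_tr, X_te, y_tr, y_te = train_test_split(
        bag_feats, labels, test_size=0.5, stratify=labels, random_state=seed)
    # Multi-class SVM on the bag features (the paper trains one-vs-rest
    # binary classifiers; sklearn handles the multi-class reduction).
    clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)
```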
Image Categorization Performance • 5 random test sets, 95% confidence intervals • The images belong to Cat. 0 ~ Cat. 9 • [Table: average accuracies; DD-SVM improves on the color-histogram SVM of Chapelle et al. (1999) by 14.8% and on MI-SVM of Andrews et al. (2003) by 6.8%]
Sensitivity to Image Segmentation • k-means clustering algorithm with 5 different stopping criteria • 1,000 images from Cat. 0 ~ Cat. 9
Robustness to Image Segmentation • [Figure: classification accuracies of DD-SVM under the five segmentation stopping criteria]
Robustness to the Number of Categories in a Data Set • [Figure: average accuracy decreases from 81.5% on the 10-category (1,000-image) set to 67.5% on the 20-category (2,000-image) set]
Speed • 40 minutes for a training set of 500 images (4.31 regions per image on average) • Pentium III 700 MHz PC running the Linux operating system • The algorithm is implemented in Matlab and C • The majority of the time is spent on learning instance prototypes
Conclusions • A region-based image categorization method using an extension of MIL → DD-SVM • Image → collection of regions → k-means alg. • Image → a point in a bag feature space (defined by a set of instance prototypes learned with the DD func.) • SVM-based image classifiers are trained in the bag feature space • DD-SVM outperforms two other methods • DD-SVM generates highly competitive results on MUSK data set
Future Work • Limitations • Region naming (Barnard et al., 2003) • Texture dependence • Improvement • Image segmentation algorithm • DD function • Scene category can be a vector • Semantically-adaptive searching • Art & biomedical images