
Context-based object-class recognition and retrieval by generalized correlograms


Presentation Transcript


  1. Context-based object-class recognition and retrieval by generalized correlograms, by J. Amores, N. Sebe and P. Radeva. Discussion led by Qi An, Duke University

  2. Outline • Introduction • Overview of the approach • Image representation • Learning and matching • Implementation with boosting • Experimental results • Conclusions

  3. Introduction • Information retrieval from images • Keyword-based • Content-based • Direct comparison • Machine learning models • Constraints • Low burden to the user • Fast learning and testing

  4. Overview of the approach • Image representation with Generalized Correlograms (GCs) • Match homologous parts across the training set • Learn a classifier capturing the key characteristics of these parts and their spatial arrangement • Match the remaining images against the initial model • Re-learn the classifier • Output the learning results

  5. Image representation • Image representation is crucial for learning relevant information efficiently • Pre-processing to obtain the contours • Region segmentation (edge-finding) • Smoothing • The images are represented by a constellation of GCs, each one describing one part of the image (both local and spatial information) • Only informative locations (contour points) are considered
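
As an illustration of the pre-processing described on the slide above, the sketch below extracts contour points by smoothing followed by edge finding. OpenCV (cv2) and the specific blur/Canny parameters are assumptions for illustration only, not the detector or settings used in the paper.

# Minimal sketch of the contour pre-processing step (smoothing + edge finding).
# OpenCV's Gaussian blur and Canny detector stand in for whichever detector the
# paper actually uses; parameters are illustrative defaults.
import cv2
import numpy as np

def contour_points(image_path, blur_ksize=5, canny_lo=50, canny_hi=150):
    """Return an (N, 2) array of (x, y) coordinates of detected contour points."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    smoothed = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)  # smoothing
    edges = cv2.Canny(smoothed, canny_lo, canny_hi)                 # edge finding
    ys, xs = np.nonzero(edges)                                      # edge pixels
    return np.stack([xs, ys], axis=1)

These points play the role of the dense contour set {pj} described on the next slide.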

  6. A dense set of all contour points {pj} is extracted, and GC descriptors (feature vectors) are computed at a sampled set of reference points {xi}. One image is represented with M descriptors localized at the {xi}. Each contour point is associated with a feature vector lj, which may contain both local and spatial information. All values are quantized into several bins, so the dimensionality of a GC descriptor is nα×nr×nL (it can be very long and sparse).
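
A minimal sketch, assuming the contour points and their quantized local labels lj are already available, of how a single GC descriptor could be computed at one reference point xi. The log-polar binning and the normalization below are illustrative choices, not necessarily the paper's exact scheme.

# Hypothetical sketch of one generalized-correlogram descriptor: a joint
# histogram over quantized angle (n_alpha bins), quantized radius (n_r bins)
# and a quantized local feature (n_L bins), giving n_alpha * n_r * n_L entries.
import numpy as np

def gc_descriptor(ref_xy, contour_xy, local_labels,
                  n_alpha=8, n_r=5, n_L=4, object_size=1.0):
    """contour_xy: (N, 2) points p_j; local_labels: (N,) ints in [0, n_L)."""
    d = contour_xy - ref_xy                              # vectors x_i -> p_j
    r = np.hypot(d[:, 0], d[:, 1]) / object_size         # radius normalized by object size
    alpha = np.arctan2(d[:, 1], d[:, 0])                 # angle in [-pi, pi)

    a_bin = np.clip(((alpha + np.pi) / (2 * np.pi) * n_alpha).astype(int), 0, n_alpha - 1)
    r_bin = np.clip((np.log1p(r) / np.log1p(1.0) * n_r).astype(int), 0, n_r - 1)  # log-polar radius

    hist = np.zeros((n_alpha, n_r, n_L))
    for a, rr, l in zip(a_bin, r_bin, local_labels):
        hist[a, rr, l] += 1                              # count point in its joint bin
    return (hist / max(len(contour_xy), 1)).ravel()      # long, typically sparse vector

Stacking M such descriptors, one per reference point xi, gives the constellation that represents the image.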

  7. Besides the angle of the tangent, color information can also be used as the local feature. To provide scale invariance, the radius is normalized by the size of the object.

  8. Learning and matching • Assume an object category has C parts, and each part is modeled with a set of parameters. • If the models and parameters are known, a new test image can be evaluated (i.e., we can decide whether an object is present or not) by computing the likelihood.

  9. A test image is represented with M contextual descriptors H = {hi}. The likelihood that a model context (part) wc is represented by any descriptor in H is obtained by combining the likelihoods p(wc | hi) that wc is represented by a particular descriptor hi. From these part likelihoods, the likelihood that an object (image) Ω is present in H is computed. Multiple scales of the test image are also considered: the probability that the object is present in one of the scaled representations is taken over all scales, where s is the index of the scale (see the reconstruction below).
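
The equations referred to above appeared as images on the original slide; the following is a hedged reconstruction based only on the slide's wording (best-matching descriptor per part, all parts combined, best scale), not a verified copy of the paper's formulas.

% Hedged reconstruction; the max / product combination rules are assumptions.
\[
  p(w_c \mid H) = \max_{i = 1, \dots, M} p(w_c \mid h_i)
  \qquad \text{(part } w_c \text{ explained by its best-matching descriptor)}
\]
\[
  p(\Omega \mid H) = \prod_{c = 1}^{C} p(w_c \mid H)
  \qquad \text{(all } C \text{ parts of the object must be present)}
\]
\[
  p(\Omega) = \max_{s} \, p(\Omega \mid H^{(s)})
  \qquad \text{(} s \text{ indexes the scaled representations of the test image)}
\]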

  10. Matching with low supervision

  11. Implementation with boosting • To train the parameters of each part of the object, the authors apply AdaBoost with decision stumps as the weak classifiers. • Since each weak classifier (decision stump) uses only a single feature, boosting amounts to an implicit feature-selection process.

  12. Procedure of the AdaBoost classifier
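
A self-contained sketch of the AdaBoost-with-decision-stumps training loop, assuming each training part is given as a GC feature vector with a ±1 label. Function and parameter names are illustrative, not the authors' implementation; the point is to show why each boosting round selects a single feature.

# Hypothetical sketch: AdaBoost with decision stumps on GC feature vectors.
# Each stump thresholds one descriptor dimension, so every round implicitly
# selects one feature of the (long, sparse) GC descriptor.
import numpy as np

def fit_stump(X, y, w):
    """Find the (dimension, threshold, polarity) stump with lowest weighted error."""
    best = (0, 0.0, 1, np.inf)                                  # (dim, thr, pol, err)
    for dim in range(X.shape[1]):
        for thr in np.unique(X[:, dim]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, dim] - thr) > 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (dim, thr, pol, err)
    return best

def adaboost(X, y, n_rounds=50):
    """X: (n, d) descriptors, y: (n,) labels in {-1, +1}; returns the ensemble."""
    n = len(y)
    w = np.full(n, 1.0 / n)                                     # example weights
    ensemble = []
    for _ in range(n_rounds):
        dim, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)                   # weak-learner weight
        pred = np.where(pol * (X[:, dim] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)                          # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, dim, thr, pol))
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted vote of all selected stumps."""
    score = sum(alpha * (1 if pol * (x[dim] - thr) > 0 else -1)
                for alpha, dim, thr, pol in ensemble)
    return 1 if score > 0 else -1

The dimensions chosen across rounds correspond to the selected local-structure and color characteristics mentioned on the next slide.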

  13. Some local structure and/or color characteristics are selected

  14. Experimental results • The proposed algorithm is applied to the CALTECH dataset with seven object categories and three background types. • Approximately half of each object's data set and half of the background's data set are used for training. • A pre-specified partition is used when available; otherwise 5 different random partitions are used.

  15. Conclusions • A novel type of part-based object representation is proposed • Both local attributes and spatial relationships are considered • The computational complexity is significantly lower than that of other state-of-the-art graph-based object representations • The method works with weak supervision, and only very few manually segmented images are required
