Recognition using Regions



  1. Recognition using Regions CVPR 2009

  2. Outline • Introduction • Overview of the Approach • Experimental Results • Conclusion

  3. Introduction • Region features • they naturally encode the shape and scale information of objects • they are only mildly affected by background clutter • but they are sensitive to segmentation errors

  4. Overview of the Approach • each image is represented by a bag of regions derived from a region tree [2] • region weights are learned using a discriminative max-margin framework • a generalized Hough voting scheme is applied [2] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. From contours to regions: An empirical evaluation. In CVPR, 2009.

  5. Region Description • describe a region by evenly subdividing its bounding box into an n × n grid (n = 4) • Contour shape, given by the histogram of oriented responses of the contour detector gPb [22] • Edge shape, given by the local image gradient computed by convolution with a filter • Color (CIELAB color space) • Texture, described by texton histograms [22] M. Maire, P. Arbeláez, C. Fowlkes, and J. Malik. Using contours to detect and localize junctions in natural images. In CVPR, 2008.
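As an illustration of the grid-based descriptor, here is a minimal Python sketch (not the authors' code). It assumes per-pixel cue maps (e.g. gPb orientation responses, gradient orientation, L*a*b* channels, texton ids) and a boolean region mask, and concatenates L1-normalized cell histograms over a 4 × 4 grid of the region's bounding box. Function names, bin counts, and value ranges are illustrative assumptions and would need to be adapted per cue.

```python
import numpy as np

def grid_cell_histograms(cue_values, mask, bbox, n=4, bins=8, value_range=(0.0, 1.0)):
    """Histogram a per-pixel cue inside each cell of an n x n grid over the
    region's bounding box, counting only pixels that belong to the region."""
    x0, y0, x1, y1 = bbox
    xs = np.linspace(x0, x1, n + 1).astype(int)
    ys = np.linspace(y0, y1, n + 1).astype(int)
    hists = []
    for i in range(n):
        for j in range(n):
            cell_mask = mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            cell_vals = cue_values[ys[i]:ys[i + 1], xs[j]:xs[j + 1]][cell_mask]
            h, _ = np.histogram(cell_vals, bins=bins, range=value_range)
            hists.append(h / (h.sum() + 1e-8))   # L1-normalize each cell histogram
    return np.concatenate(hists)                 # n * n * bins values per cue

def region_descriptor(cues, mask, bbox, n=4):
    """Concatenate grid histograms over all cue maps (contour shape, edge
    shape, color channels, texton ids) into one region descriptor."""
    return np.concatenate([grid_cell_histograms(c, mask, bbox, n=n) for c in cues])
```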

  6. Region Description • the descriptor allows comparing regions regardless of their relative sizes • background clutter interferes with region representations only mildly, compared to interest point descriptors • our region descriptor inherits insights from recent popular image representations such as GIST, HOG and SIFT

  7. Discriminative Weight Learning • Not all regions are equally significant for discriminating one object from another • adapt the framework of [13] for learning region weights [13] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In NIPS, 2006.

  8. Discriminative Weight Learning • Exemplar E: an image represented by its set of regions, with a learned non-negative weight vector w over those regions • Query Q: an image represented by its bag of regions • The distance from E to Q is defined as the weighted sum D(E → Q) = Σj wj · dj(E, Q), where dj(E, Q) is the distance from the j-th region of E to its closest region in Q
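A minimal sketch of this weighted exemplar-to-query distance, assuming precomputed region descriptors and an L2 region-to-region distance (the distance on histograms is an assumption here; a different histogram distance could be substituted). The function and variable names are illustrative, not the authors' API.

```python
import numpy as np

def region_to_image_distance(region_desc, query_descs):
    """Elementary distance d_j(E, Q): from one exemplar region to its
    best-matching region in the query image (L2 between descriptors,
    as an assumption)."""
    return float(np.min(np.linalg.norm(query_descs - region_desc, axis=1)))

def exemplar_to_query_distance(exemplar_descs, weights, query_descs):
    """D(E -> Q) = sum_j w_j * d_j(E, Q): a weighted sum of elementary
    distances, with the per-region weights w learned discriminatively."""
    d = np.array([region_to_image_distance(r, query_descs) for r in exemplar_descs])
    return float(weights @ d)
```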

  9. Discriminative Weight Learning
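The original slide presents the max-margin optimization as an image, which is not reproduced in this transcript. Below is a rough sketch of how per-exemplar weights could be learned in the spirit of the local distance function framework of [13]: a triplet hinge loss asks that every same-class image be closer to the exemplar than every other-class image by a margin, with non-negative weights, solved here by projected subgradient descent. The exact objective, learning rate, and regularization constant are assumptions, not the paper's specification.

```python
import numpy as np

def learn_region_weights(d_pos, d_neg, lam=1e-2, lr=0.05, epochs=200):
    """Learn non-negative per-region weights for one exemplar with a triplet
    hinge loss (a sketch in the spirit of [13]).

    d_pos: elementary region-to-image distances to same-class images, shape (P, m)
    d_neg: elementary region-to-image distances to other-class images, shape (N, m)
    """
    m = d_pos.shape[1]
    w = np.ones(m)
    for _ in range(epochs):
        grad = lam * w                              # L2 regularization term
        for dp in d_pos:
            for dn in d_neg:
                if 1.0 + w @ dp - w @ dn > 0.0:     # margin violated by this pair
                    grad += dp - dn
        w = np.maximum(w - lr * grad, 0.0)          # project onto w >= 0
    return w
```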

  10. Detection and Segmentation Algorithms • the pipeline consists of three components: voting, verification and segmentation

  11. Voting • The goal here, given a query image and an object category, is to generate hypotheses of bounding boxes and (partial) support of objects of that category in the image
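A rough sketch of this kind of generalized Hough voting (not the paper's exact procedure): each query region is compared to each exemplar region, and every match casts a weighted vote for an object bounding box obtained by transferring the exemplar's object box through the translation and scale that relate the two region boxes. The dictionary keys (`desc`, `bbox`, `object_bbox`) and the similarity weighting are assumptions.

```python
import numpy as np

def transfer_box(exemplar_obj_box, exemplar_region_box, query_region_box):
    """Map the exemplar's object box into the query image using the
    translation and scale that align the two region bounding boxes."""
    ex0, ey0, ex1, ey1 = exemplar_region_box
    qx0, qy0, qx1, qy1 = query_region_box
    sx = (qx1 - qx0) / max(ex1 - ex0, 1e-6)
    sy = (qy1 - qy0) / max(ey1 - ey0, 1e-6)
    ox0, oy0, ox1, oy1 = exemplar_obj_box
    return (qx0 + sx * (ox0 - ex0), qy0 + sy * (oy0 - ey0),
            qx0 + sx * (ox1 - ex0), qy0 + sy * (oy1 - ey0))

def vote_boxes(query_regions, exemplar_regions, weights, top_k=10):
    """Each (query region, exemplar region) match votes for an object box;
    votes are weighted by the learned region weight times a descriptor
    similarity term. Returns the highest-scoring hypotheses."""
    hypotheses = []
    for q in query_regions:                          # dicts: {'desc', 'bbox'}
        for e, w in zip(exemplar_regions, weights):  # dicts: {'desc', 'bbox', 'object_bbox'}
            if w <= 0.0:
                continue                             # region carries no discriminative weight
            sim = np.exp(-np.linalg.norm(q['desc'] - e['desc']))
            hypotheses.append((w * sim, transfer_box(e['object_bbox'], e['bbox'], q['bbox'])))
    hypotheses.sort(key=lambda h: -h[0])
    return hypotheses[:top_k]
```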

  12. Verification • A verification classifier is applied to each bounding box hypothesis from voting [13] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In NIPS, 2006.
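The verification step is only named on this slide; as a hedged illustration, one could re-score each hypothesis using only the query regions that fall inside the hypothesized box, as below. The threshold-based decision stands in for a trained verification classifier; `box_contains`, `verify_hypothesis`, and the threshold value are hypothetical.

```python
import numpy as np

def box_contains(box, region_box):
    """True if region_box lies entirely inside box (both as x0, y0, x1, y1)."""
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = region_box
    return rx0 >= bx0 and ry0 >= by0 and rx1 <= bx1 and ry1 <= by1

def verify_hypothesis(box, query_regions, exemplar_descs, weights, threshold=0.5):
    """Re-score a voting hypothesis with the weighted exemplar-to-query
    distance, restricted to query regions inside the hypothesized box."""
    inside = np.array([q['desc'] for q in query_regions
                       if box_contains(box, q['bbox'])])
    if inside.size == 0:
        return False, float('inf')
    d = np.array([np.min(np.linalg.norm(inside - r, axis=1)) for r in exemplar_descs])
    score = float(weights @ d)
    return score < threshold, score
```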

  13. Segmentation • The segmentation task we consider is that of precisely extracting the support of the object
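As a sketch of one plausible way to realize this step (not necessarily the paper's method): take the union of the masks of query regions that lie inside the detection box and match some exemplar region well. The matching threshold and dictionary keys are assumptions; `box_contains` is the same helper as in the verification sketch above.

```python
import numpy as np

def box_contains(box, region_box):
    """Same helper as in the verification sketch."""
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = region_box
    return rx0 >= bx0 and ry0 >= by0 and rx1 <= bx1 and ry1 <= by1

def segment_support(box, query_regions, exemplar_descs, match_thresh=1.0):
    """Approximate the object's support as the union of query-region masks
    that lie inside the detection box and match some exemplar region well."""
    support = None
    for q in query_regions:                 # dicts: {'desc', 'bbox', 'mask'}
        if not box_contains(box, q['bbox']):
            continue
        if np.min(np.linalg.norm(exemplar_descs - q['desc'], axis=1)) < match_thresh:
            support = q['mask'].copy() if support is None else (support | q['mask'])
    return support                          # boolean mask, or None if nothing matched
```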

  14. Experimental Results • Data sets • ETHZ Shape [12] • Caltech-101 [12] V. Ferrari, T. Tuytelaars, and L. Van Gool. Object detection by contour segment networks. In ECCV, 2006.

  15. ETHZ Shape • on average ∼ 100 regions per image • color and texture cues are not very useful in this database • Asp(R) is the aspect ratio of the bounding box of R • parameter settings of 2 and 0.6 are used • split the entire set into half training and half test for each category

  16. ETHZ Shape [11] V. Ferrari, F. Jurie, and C. Schmid. Accurate object detection with deformable shape models learnt from images. In CVPR, 2007.

  17. ETHZ Shape • Detection rates (%) at 0.3 FPPI • Pixel-wise mean Average Precision (AP) over 5 trials
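For concreteness, here is a small sketch of how a detection rate at a given FPPI budget (e.g. 0.3 false positives per image) can be computed from a scored detection list. This is the standard metric convention, not code from the paper, and the argument names are illustrative.

```python
import numpy as np

def detection_rate_at_fppi(scores, is_true_positive, n_images, n_ground_truth, fppi=0.3):
    """Walk detections from highest to lowest score and return the recall
    reached just before false positives per image exceed the FPPI budget."""
    order = np.argsort(-np.asarray(scores))
    tp = fp = 0
    recall = 0.0
    for i in order:
        if is_true_positive[i]:
            tp += 1
        else:
            fp += 1
            if fp / n_images > fppi:
                break
        recall = tp / n_ground_truth
    return recall
```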

  18. Caltech-101 • For each category, we randomly pick 5, 15 or 30 images for training and up to 15 images in a disjoint set for testing • Geometric Blur [4] [4] A. Berg and J. Malik. Geometric blur for template matching. In CVPR, 2001.
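A trivial sketch of the per-category split described above (a hypothetical helper, not the authors' protocol code): shuffle a category's images, take the first 5, 15 or 30 for training, and up to 15 disjoint images for testing.

```python
import random

def split_caltech_category(image_ids, n_train=30, n_test=15, seed=0):
    """Randomly pick n_train images for training (5, 15 or 30 in the paper)
    and up to n_test disjoint images for testing, within one category."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    return ids[:n_train], ids[n_train:n_train + n_test]
```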

  19. Caltech-101

  20. Conclusion • presented a unified framework for object detection, segmentation, and classification using regions • further shown that cue combination significantly boosts recognition performance
