
Content-Based Image Retrieval






Presentation Transcript


  1. Content-Based Image Retrieval Natalia Vassilieva natalia@ntc-it.ru Alexander Dolnik alexander.dolnik@gmail.com Il’ya Markov ilya.markov@gmail.com Saint-Petersburg State University February 26, 2007

  2. Our team • Natalia Vassilieva • Alexander Dolnik • Ilya Markov • Maria Teplyh • Maria Davydova • Dmitry Shubakov • Alexander Yaremchuk

  3. General problems • The semantic gap between the system's and the human's mode of image analysis • Specifics of human visual perception • How to capture the semantics of an image • Signature calculation and response time • Combining different features and metrics

  4. How to minimize the “semantic gap” between low-level features and the semantics of an image? General goal: an image retrieval system • that is able to process natural-language queries • that is able to search among annotated and non-annotated images • that takes human visual perception into account • that processes various features (color, texture, shapes) • that uses relevance feedback for query refinement and adaptive search

  5. CBIR: the traditional approach • Indexing: signature calculation for each database image; color space partition according to human perception; auto-annotation; multidimensional indexing (vp-tree) • Retrieval: query signature calculation; comparison; fusion of results from independent searches by different features • Relevance feedback: query refinement; annotation refinement

  6. Research directions • Color space partition according to human visual perception • Correspondence between low-level features and semantics: auto-annotation • Fusion of retrieval result sets • Adaptive search: color and texture fusion • Using relevance feedback

  7. Human visual perception: colors Experiments with color partition in HSV space: (H=9; S=2; V=3) – 72% (H=11; S=2; V=3) – 66% (H=13; S=2; V=3) – 63% (H=15; S=2; V=3) – 60% Compare partitions of different spaces (RGB, HSV, Lab)
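The partitions compared above split HSV space into a fixed grid of cells (e.g. H=9, S=2, V=3 gives 54 cells). A minimal sketch of such a quantization, assuming hue in degrees and saturation/value in [0, 1] (the bin counts and value ranges are illustrative assumptions, not the authors' exact scheme):

```python
# Assumed partition: H=9, S=2, V=3 -> 9 * 2 * 3 = 54 cells.
H_BINS, S_BINS, V_BINS = 9, 2, 3

def hsv_cell(h, s, v):
    """Map h in [0, 360), s and v in [0, 1] to a flat cell index."""
    hi = min(int(h / 360.0 * H_BINS), H_BINS - 1)
    si = min(int(s * S_BINS), S_BINS - 1)
    vi = min(int(v * V_BINS), V_BINS - 1)
    return (hi * S_BINS + si) * V_BINS + vi

def hsv_histogram(pixels):
    """Normalized histogram of (h, s, v) pixels over the partition."""
    hist = [0.0] * (H_BINS * S_BINS * V_BINS)
    for h, s, v in pixels:
        hist[hsv_cell(h, s, v)] += 1.0
    n = len(pixels)
    return [c / n for c in hist] if n else hist
```

Changing the three bin counts reproduces the other partitions listed on the slide.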

  8. Research directions • Color space partition according to human visual perception • Correspondence between low-level features and semantics: auto-annotation • Fusion of retrieval result sets • Adaptive search: color and texture fusion • Using relevance feedback

  9. Auto-annotation • Training set selection • Color feature extraction for every image in the set • Similarity calculation for every pair of images in the set • Training set clustering • Basis color feature calculation: one per cluster • Definition of basis lexical features • Correspondence between basis color features and basis lexical features Natalia Vassilieva, Boris Novikov. Establishing a correspondence between low-level features and semantics of fixed images. In Proceedings of the Seventh National Russian Research Conference RCDL'2005, Yaroslavl, October 04-06, 2005
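The last steps of the pipeline above can be sketched as follows. This is a hypothetical illustration, not the paper's exact method: it assumes the clusters are already built, takes the centroid of each cluster's members as its basis color feature, and annotates a new image with the lexical features of the nearest centroid (Euclidean distance is an assumption):

```python
def centroid(features):
    """Mean vector of a list of equal-length feature vectors."""
    n, dim = len(features), len(features[0])
    return [sum(f[i] for f in features) / n for i in range(dim)]

def build_basis(clusters):
    """clusters: list of (member_features, words) -> list of (basis_feature, words)."""
    return [(centroid(feats), words) for feats, words in clusters]

def annotate(feature, basis):
    """Return the lexical features of the closest basis color feature."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(basis, key=lambda cw: dist2(feature, cw[0]))[1]
```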

  10. Examples • city, night, road, river • snow, winter, sky, mountain

  11. Retrieval by textual query • The image database is divided into clusters • Search for the appropriate cluster by textual query using the clusters' annotations • Browse the images from the appropriate cluster • Use relevance feedback to refine the query • Use relevance feedback to reorganize the clusters and assign new annotations N. Vassilieva and B. Novikov. A Similarity Retrieval Algorithm for Natural Images. Proc. of the Baltic DB&IS'2004, Riga, Latvia, Scientific Papers University of Latvia, June 2004

  12. Feature extraction: color • Color: histograms • Color: statistical approach: first moments of the color distribution (per channel) and covariances
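The statistical approach above can be sketched as a compact color signature: the per-channel means plus the channel covariance matrix. This is a minimal illustration of the idea, assuming RGB pixel triples:

```python
def color_moments(pixels):
    """pixels: list of (r, g, b) triples.
    Returns (per-channel means, 3x3 channel covariance matrix)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    cov = [[sum((p[i] - means[i]) * (p[j] - means[j]) for p in pixels) / n
            for j in range(3)] for i in range(3)]
    return means, cov
```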

  13. Feature extraction: texture • Texture: use independent component filters that result from ICA • Distance between images I1 and I2 over N filters: dist(I1, I2) = Σ_{i=1}^{N} KLH(H1_i, H2_i), where H1_i and H2_i are the histograms of the i-th filter responses for I1 and I2 H. Borgne, A. Guerin-Dugue, A. Antoniadis, “Representation of images for classification with independent features”
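The texture distance in the formula above can be sketched as a sum of divergences between per-filter response histograms. Here KLH is assumed to be a (symmetrised) Kullback-Leibler divergence; the ICA filters themselves come from training and are not shown:

```python
import math

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two histograms (eps avoids log 0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def texture_dist(hists1, hists2):
    """Sum of symmetrised KL divergences over the N per-filter histograms.
    Symmetrisation is an assumption; the slide only names 'KLH'."""
    return sum(kl(h1, h2) + kl(h2, h1) for h1, h2 in zip(hists1, hists2))
```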

  14. Research directions • Color space partition according to human visual perception • Correspondence between low-level features and semantics: auto-annotation • Fusion of retrieval result sets • Adaptive search: color and texture fusion • Using relevance feedback

  15. Fusion of retrieval result sets Fusion of weighted lists with ranked elements: • How to merge fairly? • How to merge efficiently? • How to merge effectively? ω1: (x11, r11), (x12, r12), …, (x1n, r1n) ω2: (x21, r21), (x22, r22), …, (x2k, r2k) … ωm: (xm1, rm1), (xm2, rm2), …, (xml, rml) → ?

  16. Ranked lists fusion: application area • Supplement fusion: uniting textual results (textual viewpoints) • Collage fusion: combining texture results (texture viewpoint) with color results (color viewpoint), or results of different color methods (different color viewpoints)

  17. Ranked lists fusion: application area • Search by textual query in a partly annotated image database (diagram: the textual query is matched against annotations, producing ranked textual results such as (TextResult1, textrank1), (TR2, tr2), …, which are then fused into the final result)

  18. Three main native fusion properties • commutativity • associativity • the rank of a result object is independent of the ranks of other objects Examples: the COMBSUM, COMBMIN, COMBMAX merge functions
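The three classical merge functions named above can be sketched directly. The representation (a dict mapping object id to score per input list) and the choice to aggregate only over the lists that actually contain an object are illustrative assumptions; note how each function is commutative, associative, and scores an object independently of the other objects:

```python
def combine(lists, op):
    """lists: iterable of {object_id: score} dicts; op aggregates the scores."""
    merged = {}
    for lst in lists:
        for oid, score in lst.items():
            merged.setdefault(oid, []).append(score)
    return {oid: op(scores) for oid, scores in merged.items()}

def combsum(lists):
    return combine(lists, sum)   # sum of scores across lists

def combmin(lists):
    return combine(lists, min)   # most pessimistic score

def combmax(lists):
    return combine(lists, max)   # most optimistic score
```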

  19. Additional native fusion properties • normalization & delimitation property • conic property: the attraction of the current object in the merged result depends on the value of a function g(rank, weight) ≥ 0 • snare condition (given by a formula shown as an image in the original slide)

  20. Conic property: the function g • g decreases monotonically when the weight parameter is fixed • g decreases monotonically when the rank parameter is fixed • g must satisfy the boundary conditions: g(0, w) > 0 if w ≠ 0; g(r, 0) = 0

  21. Ranked lists fusion: formulas (the fusion formula and its auxiliary definitions were shown as images in the original slide and are not preserved in this transcript)

  22. Ranked lists fusion: algorithm • All lists are sorted by object id • Merge the lists step by step, always advancing the list with the smaller current object id • If object_id1 ≠ object_id2, the object with the smaller id is absent from the other list
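The merge walk above can be sketched for two id-sorted lists; the `fuse` combiner and the default rank for an object missing from one list are hypothetical stand-ins for the slide's (unpreserved) fusion formula:

```python
def merge_sorted(list1, list2, fuse=lambda r1, r2: r1 + r2, missing=0):
    """list1, list2: lists of (object_id, rank) pairs sorted by object_id.
    Advances the pointer with the smaller id; equal ids are fused together."""
    i = j = 0
    result = []
    while i < len(list1) and j < len(list2):
        id1, r1 = list1[i]
        id2, r2 = list2[j]
        if id1 == id2:
            result.append((id1, fuse(r1, r2)))
            i += 1
            j += 1
        elif id1 < id2:                       # id1 is absent from list2
            result.append((id1, fuse(r1, missing)))
            i += 1
        else:                                 # id2 is absent from list1
            result.append((id2, fuse(missing, r2)))
            j += 1
    result.extend((oid, fuse(r, missing)) for oid, r in list1[i:])
    result.extend((oid, fuse(missing, r)) for oid, r in list2[j:])
    return result
```

Each input list is scanned once, so the merge is linear in the total number of entries.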

  23. Ranked lists fusion: experiments Necessary conditions: • Each viewpoint should provide some “valuable” information: the retrieval system's performance should at least be better than that of a random system • The information is not fully duplicated: there should be partial disagreement among the viewpoints

  24. Ranked lists fusion: experiments Parameters: • Roverlap and Noverlap conditions • Intercomparison of methods: • Classical methods: COMBSUM, COMBMIN, COMBMAX • Probabilistic methods: probFuse • Random method: random values that satisfy the merge properties

  25. Research directions • Color space partition according to human visual perception • Correspondence between low-level features and semantics: auto-annotation • Fusion of retrieval result sets • Adaptive search: color and texture fusion • Using relevance feedback

  26. Adaptive merge: color and texture Dist(I, Q) = α·C(I, Q) + (1 − α)·T(I, Q), where C(I, Q) is the color distance between I and Q, T(I, Q) is the texture distance between I and Q, and 0 ≤ α ≤ 1. Hypothesis: the optimal α depends on features of the query Q. It is possible to distinguish common features among images that share the same “best” α.
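The convex combination above transcribes directly into code; under the slide's hypothesis, α would be chosen per query rather than fixed globally:

```python
def adaptive_dist(color_dist, texture_dist, alpha):
    """Dist(I, Q) = alpha * C(I, Q) + (1 - alpha) * T(I, Q), 0 <= alpha <= 1.
    color_dist = C(I, Q), texture_dist = T(I, Q)."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * color_dist + (1 - alpha) * texture_dist
```

With α = 1 the measure degenerates to pure color distance, with α = 0 to pure texture distance.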

  27. Adaptive merge: experiments

  28. Estimation tool • Web application • Provides interfaces for developers of search methods • Uses common measures to estimate search methods: • Precision • Pseudo-recall • Collects user opinions → builds a test database
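The two measures named above can be sketched as follows. “Pseudo-recall” is interpreted here (an assumption; the slide does not define it) as recall against the relevant images known to the tool from collected user opinions, rather than the true, unknown relevant set:

```python
def precision(retrieved, relevant):
    """Fraction of retrieved images that are relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)

def pseudo_recall(retrieved, known_relevant):
    """Fraction of *known* relevant images that were retrieved
    (assumed interpretation of the slide's 'pseudo-recall')."""
    if not known_relevant:
        return 0.0
    return len(set(retrieved) & set(known_relevant)) / len(known_relevant)
```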

  29. Datasets • Own photo collection (~2000 images) • Subset of own photo collection (150 images) • Flickr collection (~15000, ~1.5 mln images) • Corel photoset (1100 images)

  30. Research directions • Color space partition according to human visual perception • Correspondence between low-level features and semantics: auto-annotation • Fusion of retrieval result sets • Adaptive search: color and texture fusion • Using relevance feedback
