
Gist


Presentation Transcript


  1. Gist: The Essence of the Scene. Trayambaka Karra (KT) and Garold Fuks

  2. The “Gist” of a scene: if this is a street, this must be a pedestrian.

  3. Physiological Evidence • People are excellent at identifying pictures (Standing, L., Q. J. Exp. Psychol., 1973) • Change blindness (seconds) (Simons, D.J., Levin, D.T., Trends Cogn. Sci., 1997) • Gist: the abstract meaning of a scene • Obtained within 150 ms (Biederman, 1981; Thorpe, S. et al., 1996) • Obtained without attention (Oliva & Schyns, 1997; Wolfe, J.M., 1998) • Possibly derived via statistics of low-level structures (e.g. Swain & Ballard, 1991)

  4. What is the “gist”? • Inventory of the objects (2–3 objects in 150 ms; Luck & Vogel, Nature 390, 1997) • Relations between objects (layout) (J. Wolfe, Curr. Biol. 1998, 8) • Presence of other objects • “Visual stuff” – an impression of low-level features

  5. How does the “Gist” work? • Statistical properties vs. object properties (R.A. Rensink, lecture notes)

  6. Outline • Context Modeling: previous models, scene-based context model • Context-Based Applications: place identification, object priming, control of focus of attention, scale selection, scene classification • Joint Local and Global Features: object detection and localization • Summary

  7. Probabilistic Framework: MAP estimator • v – image measurements • O – object properties: • category (o) • location (x) • scale (σ)
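A hedged LaTeX sketch of the MAP estimator behind this framework, assuming (following the cited scene-context literature, not this transcript) that the measurements split into local features v_L and contextual features v_C:

```latex
% Hedged reconstruction of the MAP estimator over object properties O = (o, x, sigma);
% the split of v into local (v_L) and contextual (v_C) measurements is an assumption.
\[
  \hat{O} = \arg\max_{O} P(O \mid v)
          = \arg\max_{o,\, x,\, \sigma} \; p(v_L \mid o, x, \sigma, v_C)\; P(o, x, \sigma \mid v_C)
\]
```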

  8. Object-Centered Object Detection • The only image features relevant to object detection are those belonging to the object, not the background (B. Moghaddam, A. Pentland, IEEE PAMI-19, 1997)

  9. The “Gist” of a scene • Local features can be ambiguous • Context can provide a prior

  10. Scene-Based Context Model • The background provides a likelihood of finding an object: Prob(Car/image) = low, Prob(Person/image) = high

  11. Context Modeling • Previous context models (Fu, Hammond and Swain, 1994; Haralick, 1983; Song et al., 2000) • Rule-based context model • Object-based context model • Scene-centered context representation (Oliva and Torralba, 2001, 2002)

  12. Rule-Based Context Model • Structural description: a graph of objects (O1–O4) linked by spatial relations such as Above, Touch, Left-of and Right-of

  13. Rule-Based Context Model (Fu, Hammond and Swain, 1994)
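For intuition only, a small hedged Python sketch of what a rule-based context model might look like; the object labels, relations and pixel threshold are invented for illustration and are not taken from Fu, Hammond and Swain:

```python
# Hypothetical sketch of a rule-based context model: hand-coded spatial
# relations between object labels are used to reject implausible detections.
RULES = {
    ("monitor", "keyboard"): "above",   # a monitor is expected above a keyboard
    ("keyboard", "desk"): "touch",      # a keyboard rests on a desk
}

def consistent(label_a, box_a, label_b, box_b):
    """Check whether two detections satisfy the expected spatial relation.

    Boxes are dicts with top-left corner (x, y) and size (w, h) in pixels.
    """
    relation = RULES.get((label_a, label_b))
    if relation is None:
        return True                                   # no rule: nothing to violate
    if relation == "above":
        return box_a["y"] + box_a["h"] <= box_b["y"]  # A's bottom above B's top
    if relation == "touch":
        return abs((box_a["y"] + box_a["h"]) - box_b["y"]) < 5  # within 5 px
    return True
```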

  14. Object-Based Context Model • Context is incorporated only through the prior probability of object combinations in the world (R. Haralick, IEEE PAMI-5, 1983)

  15. Scene-Based Context Model • Which features represent the scene? • Statistics of local low-level features • Color histograms • Oriented band-pass filters

  16. Context Features – vC • The image is passed through a bank of filters g1(x), …, gK(x), giving local measurements v(x, 1), …, v(x, K)

  17. Context Features – vC • Gabor filter responses distinguish scenes, e.g. “people, no car” vs. “car, no people”

  18. Context Features – vC • Dimensionality reduction with PCA

  19. Context Features – Summary • I(x) → bank of filters → dimensionality reduction (PCA)
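A hedged Python sketch of one way such context features could be computed, assuming a small Gabor filter bank, coarse 4×4 spatial pooling, and PCA; the exact filters, pooling grid and dimensionality used in the cited work may differ:

```python
# Hedged sketch of gist-style context features: Gabor filter bank -> coarse
# spatial pooling -> PCA. All parameter choices below are illustrative.
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA

def gist_features(images, n_components=20):
    """images: list of 2-D grayscale arrays -> array (n_images, n_components)."""
    raw = []
    for img in images:
        responses = []
        for frequency in (0.1, 0.25):                  # 2 assumed scales
            for theta in np.arange(4) * np.pi / 4:     # 4 assumed orientations
                real, imag = gabor(img, frequency=frequency, theta=theta)
                energy = np.hypot(real, imag)          # filter energy
                h, w = energy.shape
                cropped = energy[: h - h % 4, : w - w % 4]
                # average the energy over a coarse 4x4 grid of blocks
                pooled = cropped.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3))
                responses.append(pooled.ravel())
        raw.append(np.concatenate(responses))
    raw = np.array(raw)
    pca = PCA(n_components=min(n_components, len(raw)))
    return pca.fit_transform(raw)
```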

  20. Probability from Features • How do we obtain context-based probability priors P(O/vc) on object properties? • GMM – Gaussian mixture model • Logistic regression • Parzen window

  21. Probability from Features: GMM • Goal: P(object property/context) • Two likelihoods must be learnt: • P(vc/O) – likelihood of the features given the presence of the object • P(vc/¬O) – likelihood of the features given the absence of the object • Each is modelled as a Gaussian mixture whose unknown parameters are learnt with the EM algorithm
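A hedged Python sketch of this GMM scheme using scikit-learn (which fits the mixtures with EM internally); the number of mixture components and the prior are illustrative assumptions:

```python
# Hedged sketch: model P(vc | O) and P(vc | not O) with Gaussian mixtures
# and combine them with Bayes' rule to get P(O | vc).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_context_prior(vc_with_object, vc_without_object, prior=0.5):
    """Both inputs are (n_samples, n_features) arrays of context features."""
    gmm_pos = GaussianMixture(n_components=3).fit(vc_with_object)    # P(vc | O)
    gmm_neg = GaussianMixture(n_components=3).fit(vc_without_object) # P(vc | not O)

    def p_object_given_context(vc):
        """Return P(O | vc) for each row of vc via Bayes' rule."""
        like_pos = np.exp(gmm_pos.score_samples(vc)) * prior
        like_neg = np.exp(gmm_neg.score_samples(vc)) * (1 - prior)
        return like_pos / (like_pos + like_neg)

    return p_object_given_context
```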

  22. Probability from Features • How do we obtain context-based probability priors P(O/vc) on object properties? • GMM – Gaussian mixture model • Logistic regression • Parzen window

  23. Probability from Features: Logistic Regression

  24. Probability from Features: Logistic Regression Example • O = having back problems, vc = age • Training stage: learn the regression coefficients • The intercept term gives the log odds for a 20-year-old person • The slope gives the log-odds ratio when comparing two persons who differ by 1 year in age • Working stage: apply the fitted model to a new value of vc
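A hedged reconstruction of the logistic model behind this example, with assumed symbols β0 and β1 (O = having back problems, vC = age):

```latex
% Hedged reconstruction of the logistic-regression example; symbols are assumed.
\[
  \log\frac{P(O = 1 \mid v_C)}{1 - P(O = 1 \mid v_C)} \;=\; \beta_0 + \beta_1\, v_C ,
  \qquad
  P(O = 1 \mid v_C) \;=\; \frac{1}{1 + e^{-(\beta_0 + \beta_1 v_C)}}
\]
% beta_1 is the log-odds ratio for two persons who differ by one year of age;
% beta_0 + 20*beta_1 gives the log odds for a 20-year-old person.
```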

  25. Probability from Features • How do we obtain context-based probability priors P(O/vc) on object properties? • GMM – Gaussian mixture model • Logistic regression • Parzen window

  26. Probability from Features: Parzen Window • A density estimate built from a radial Gaussian kernel placed on each training sample
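A hedged Python sketch of a Parzen-window density estimate with a radial Gaussian kernel; the bandwidth h is an illustrative assumption:

```python
# Hedged sketch: Parzen-window estimate of p(vc) as an average of isotropic
# Gaussian kernels centred on the training samples.
import numpy as np

def parzen_density(vc_query, vc_train, h=1.0):
    """vc_query: (d,) query vector; vc_train: (n, d) training samples."""
    n, d = vc_train.shape
    diff = vc_query[None, :] - vc_train              # (n, d) differences
    sq_dist = np.sum(diff ** 2, axis=1)              # squared Euclidean distances
    norm = (2 * np.pi * h ** 2) ** (d / 2)           # Gaussian normalisation
    return np.mean(np.exp(-sq_dist / (2 * h ** 2)) / norm)
```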

  27. What have we covered so far… • Context Modeling • Context-Based Applications: • Place Identification • Object Priming • Control of Focus of Attention • Scale Selection • Scene Classification

  28. Place Identification Goal: Recognize specific locations

  29. Place Identification (A. Torralba, K. Murphy, W. Freeman, M. Rubin, ICCV 2003)

  30. Place Identification • Decide only when the classifier is sufficiently confident • Precision vs. recall rate (A. Torralba, P. Sinha, MIT AIM 2001-015)

  31. Object Priming • How do we detect objects in an image? Search the whole image for the object model. • What if I am searching in images where the object does not exist at all? Obviously, I am wasting “my precious” computational resources – Gollum. • Can we do better, and if so, how? Use the “great eye”, the contextual features of the image (vC), to predict the probability of finding our object of interest o in the image, i.e. P(o/vC).

  32. Object Priming ….. • What to do? Use experience: learn from a database of images labelled with the presence or absence of the object • How to do it? Learn the PDF P(vc/o = 1) with a mixture of Gaussians • Also learn the PDF P(vc/o = 0)
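Under this model the primed probability plausibly follows from Bayes' rule; a hedged reconstruction, with the prior P(o = 1) assumed to be estimated from the training set:

```latex
% Hedged reconstruction: object priming combines the two learnt likelihoods
% with Bayes' rule; the prior P(o = 1) is an assumed training-set estimate.
\[
  P(o = 1 \mid v_C)
  = \frac{p(v_C \mid o = 1)\, P(o = 1)}
         {p(v_C \mid o = 1)\, P(o = 1) + p(v_C \mid o = 0)\, P(o = 0)}
\]
```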

  33. Object Priming …..

  34. Object Priming …..

  35. Control of Focus of Attention • How do biological visual systems deal with the analysis of complex real-world scenes? • By focusing attention on image regions that require detailed analysis.

  36. Modeling the Control of Focus of Attention • How do we decide which regions are “more” important than others? • Local methods: • Low-level saliency maps – regions whose properties differ from their neighborhood are considered salient • Object-centered methods • Global methods: • Contextual control of the focus of attention

  37. Contextual Control of Focus of Attention • Contextual control is both task-driven (looking for a particular object o) and context-driven (given global context information vC) • No use of object models (i.e. it ignores object-centered features)

  38. Contextual Control of Focus of Attention …

  39. Contextual Control of Focus of Attention … • Focus on spatial regions that have a high probability of containing the target object o given the context information vC • For each location x, let us calculate the probability of the presence of the object o at x given the context vC • Evaluate this PDF based on the past experience of the system

  40. Contextual Control of Focus of Attention … Learning Stage: Use the Swiss Army Knife, the EM algorithm, to estimate the parameters
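A hedged sketch of how this location prior could be written, assuming (consistent with the Gaussian-mixture modelling used earlier in the deck) a joint mixture of Gaussians over location and context, fit with EM:

```latex
% Hedged sketch: joint mixture of Gaussians over (x, v_C), fit with EM,
% then conditioned on the observed context to give a contextual saliency map.
\[
  p(x, v_C \mid o) = \sum_{k=1}^{K} \pi_k \,
      \mathcal{N}(x;\, \mu_k, \Sigma_k)\,
      \mathcal{N}(v_C;\, \nu_k, \Lambda_k),
  \qquad
  P(x \mid o, v_C) = \frac{p(x, v_C \mid o)}{\int p(x', v_C \mid o)\, dx'}
\]
```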

  41. Contextual Control of Focus of Attention …

  42. Scale Selection • Scale selection is a fundamental problem in computer vision and a key bottleneck for object-centered object detection algorithms • Can we estimate scale in a pre-processing stage? • Yes, using saliency measures of low-level operators across spatial scales • Other methods? Of course, …..

  43. Context-Driven Scale Selection • Preferred scale of the target object, predicted from the context (see the sketch below)
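A hedged reconstruction of the preferred-scale estimate, consistent with the probabilistic framework above; whether the mode or the expectation of the scale distribution is used is an assumption:

```latex
% Hedged reconstruction of the context-driven preferred scale:
\[
  \hat{\sigma} = \arg\max_{\sigma} P(\sigma \mid o, v_C)
  \qquad \text{or} \qquad
  \hat{\sigma} = E[\sigma \mid o, v_C] = \int \sigma\, p(\sigma \mid o, v_C)\, d\sigma
\]
```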

  44. Context-Driven Scale Selection ….

  45. Context-Driven Scale Selection ….

  46. Scene Classification • Strong correlation between the presence of many types of objects • Do not model this correlation directly; rather, use a “common” cause, which we shall call the “scene” • Train a classifier to identify scenes • Then all we need is to calculate the object probability given the scene (see the sketch below)
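A hedged completion of that calculation, marginalizing over the scene label s; the trained scene classifier supplies one factor and the other is assumed to come from per-scene object statistics:

```latex
% Hedged completion: object presence conditioned on context via the scene label.
\[
  P(o \mid v_C) = \sum_{s} P(o \mid s)\, P(s \mid v_C)
\]
% P(s | v_C): output of the trained scene classifier;
% P(o | s): assumed to be estimated from object co-occurrence counts per scene.
```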

  47. What have we covered so far… • Context Modeling • Context-Based Applications • Next: Joint Local and Global Features Applications • Object Detection and Localization • We need new tools: learning and boosting

  48. Weak Learners • Given training examples (x1, y1), …, (xm, ym), where xi ∈ X and yi ∈ {spam, non-spam} • Can we extract “rules of thumb” for classification purposes? • A weak learner finds a weak hypothesis (rule of thumb) h : X → {spam, non-spam}

  49. Decision Stumps • Consider the following simple family of component classifiers generating ±1 labels: h(x; p) = a·1[xk > t] − b, where p = {a, b, k, t}. These are called decision stumps. • The sign of h gives the classification; its magnitude gives a confidence measure. • Each decision stump pays attention to only a single component of the input vector.
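A hedged Python sketch of the decision stump h(x; p) = a·1[xk > t] − b described on this slide; the example feature vector and parameter values are invented:

```python
# Hedged sketch of a decision stump over a single feature component.
import numpy as np

def stump(x, a, b, k, t):
    """Evaluate h(x; p) = a*[x_k > t] - b on a 1-D feature vector x."""
    return a * float(x[k] > t) - b

# Example: with a = 2, b = 1 the stump outputs +1 when feature 2 exceeds 0.5
# and -1 otherwise, so sign(h) is the label and |h| the confidence.
x = np.array([0.1, 0.3, 0.9])
print(stump(x, a=2.0, b=1.0, k=2, t=0.5))   # prints 1.0
```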

  50. Ponders his maker, ponders his will • Can we combine weak classifiers to produce a single strong classifier in a simple manner: hm(x) = h(x; p1) + … + h(x; pm), where the predicted label for x is the sign of hm(x)? • Is it beneficial to allow some of the weak classifiers to have more “votes” than others: hm(x) = α1h(x; p1) + … + αmh(x; pm), where the non-negative votes αi can be used to emphasize the components that are more reliable than others?
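A hedged Python sketch of the weighted vote hm(x) = α1h(x; p1) + … + αmh(x; pm); the stump parameters and vote weights below are toy values, not the result of any boosting run:

```python
# Hedged sketch: a weighted committee of decision stumps; the predicted
# label is the sign of the weighted sum and its magnitude a confidence.
import numpy as np

def stump(x, a, b, k, t):
    return a * float(x[k] > t) - b

def strong_classifier(x, stumps, alphas):
    votes = sum(alpha * stump(x, **p) for alpha, p in zip(alphas, stumps))
    return np.sign(votes), abs(votes)        # (label, confidence)

stumps = [dict(a=2.0, b=1.0, k=0, t=0.2),    # toy stump parameters
          dict(a=2.0, b=1.0, k=1, t=0.7)]
alphas = [0.8, 0.3]                          # the more reliable stump gets more weight
print(strong_classifier(np.array([0.5, 0.1]), stumps, alphas))  # label +1, confidence 0.5
```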
