Analysis of scores, datasets, and models in visual saliency modeling


Presentation Transcript


  1. Analysis of scores, datasets, and models in visual saliency modeling • Ali Borji, Hamed R. Tavakoli, Dicky N. Sihite, and Laurent Itti

  2.–6. Toronto dataset (five image-only slides)

  7. Visual Saliency • Why is it important? • What is the current status? • Methods: numerous, grouped into 8 categories (Borji and Itti, PAMI, 2012) • Databases • Measures: scan-path analysis, correlation-based measures, ROC analysis • How well does my method work?
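
The three measure families named above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a 2-D saliency map `sal`, an (N, 2) array of fixation coordinates `fix_points` in (row, column) order, and a continuous fixation map `fix_map`; the function names and the negative-sampling scheme are illustrative, not the exact implementations used in the paper.

```python
import numpy as np

def cc(sal, fix_map):
    """Correlation coefficient between a saliency map and a fixation map."""
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    f = (fix_map - fix_map.mean()) / (fix_map.std() + 1e-12)
    return float((s * f).mean())

def nss(sal, fix_points):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    s = (sal - sal.mean()) / (sal.std() + 1e-12)
    return float(s[fix_points[:, 0], fix_points[:, 1]].mean())

def auc(sal, fix_points, n_neg=10_000, seed=0):
    """ROC area: saliency at fixations (positives) vs. random pixels (negatives)."""
    rng = np.random.default_rng(seed)
    pos = sal[fix_points[:, 0], fix_points[:, 1]]
    neg = sal.ravel()[rng.integers(0, sal.size, n_neg)]
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(pos.size), np.zeros(neg.size)])
    order = np.argsort(-scores)
    tpr = np.cumsum(labels[order]) / pos.size
    fpr = np.cumsum(1 - labels[order]) / neg.size
    return float(np.trapz(tpr, fpr))
```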

  8. Benchmarks • Judd et al. http://people.csail.mit.edu/tjudd/SaliencyBenchmark/ • Borji and Itti https://sites.google.com/site/saliencyevaluation/ • Yet another benchmark!!!?

  9. Dataset challenge (Toronto, MIT, Le Meur datasets) • Dataset bias: center bias (CB) and the border effect • Metrics are affected by these phenomena.
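
To see why center bias matters, note that a fixed central Gaussian, which never looks at the image at all, already scores well under plain AUC on center-biased datasets. A minimal sketch (the `sigma_frac` default is an arbitrary assumption):

```python
import numpy as np

def center_prior(h, w, sigma_frac=0.25):
    """Image-independent Gaussian 'saliency' map centered on the frame."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_frac * min(h, w)
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

# A model can inflate non-shuffled metrics simply by multiplying its map
# by this prior, without predicting fixations any better.
```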

  10. Tricking the metric. Solutions? • shuffled AUC (sAUC) • the best smoothing factor • more than one metric
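
Shuffled AUC differs from plain AUC only in where the negatives come from: instead of uniformly random pixels, they are fixations pooled from other images, so a pure center prior falls back to chance. A sketch under the same conventions as the `auc` function above:

```python
import numpy as np

def shuffled_auc(sal, fix_points, other_fix_points):
    """fix_points: (N, 2) fixations on this image; other_fix_points: (M, 2)
    fixations pooled from *other* images, used as the negative set."""
    pos = sal[fix_points[:, 0], fix_points[:, 1]]
    neg = sal[other_fix_points[:, 0], other_fix_points[:, 1]]
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(pos.size), np.zeros(neg.size)])
    order = np.argsort(-scores)
    tpr = np.cumsum(labels[order]) / pos.size
    fpr = np.cumsum(1 - labels[order]) / neg.size
    return float(np.trapz(tpr, fpr))
```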

  11. The Benchmark: Fixation prediction

  12. Features • Low level: intensity, color, orientation, symmetry, depth, size • High level: people, cars, signs, text • The feature crisis: does a model capture any semantic scene property or affective stimuli? • Challenge: performance on stimulus categories & affective stimuli

  13. The Benchmark: Image categories and affective data

  14. The Benchmark: Image categories and affective data (results chart comparing emotional vs. non-emotional stimuli; 0.64 on non-emotional)

  15. The Benchmark: Predicting scanpaths • Scanpaths are encoded as region-label strings (e.g., aAbBcCaA, aAdDbBcCaA, aAcCbBcCaAaA, …) and compared with a string matching score
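
One common way to obtain such a matching score is a normalized string edit distance over the region-label strings. The normalization below (1 minus the Levenshtein distance over the longer length) is one standard choice, not necessarily the exact score used here.

```python
def edit_distance(a, b):
    """Levenshtein distance between two scanpath label strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def matching_score(a, b):
    """Similarity in [0, 1]; identical scanpaths score 1."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)

print(matching_score("aAbBcCaA", "aAdDbBcC"))  # partial match, below 1.0
```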

  16. The Benchmark: Predicting scanpaths (scores)

  17. Category Decoding
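
A plausible reading of this slide is a supervised decoder: summarize each trial's eye movements (and the saliency they land on) as a small feature vector, then train an off-the-shelf classifier to predict the image category. The feature set and classifier below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def eye_features(fix_rc, durations, sal):
    """fix_rc: (N, 2) fixation (row, col) coords; durations in ms; sal: saliency map."""
    sacc = np.linalg.norm(np.diff(fix_rc, axis=0), axis=1)  # saccade amplitudes
    return np.array([
        len(fix_rc),                             # number of fixations
        durations.mean(),                        # mean fixation duration
        sacc.mean() if sacc.size else 0.0,       # mean saccade amplitude
        sal[fix_rc[:, 0], fix_rc[:, 1]].mean(),  # mean saliency at fixations
    ])

# X_train / X_test: stacked feature vectors; y_train / y_test: category labels
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# accuracy = (clf.predict(X_test) == y_test).mean()
```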

  18. Lessons learned • We recommend using the shuffled AUC score for model evaluation. • The choice of stimuli affects performance. • A combination of saliency and eye-movement statistics can be used for category recognition. • The gap between models and the inter-observer (IO) model appears small (though statistically significant), which signals the need for new datasets. • The challenge of task decoding from eye-movement statistics remains open. • New saliency evaluation scores can still be introduced.

  19. Questions?
