
ViPER Video Performance Evaluation Resource


Presentation Transcript


  1. ViPER Video Performance Evaluation Resource • University of Maryland

  2. Problem and Motivation • Unified video performance evaluation resource, including: • ViPER-GT – a Java toolkit for marking up videos with truth data. • ViPER-PE – a command line tool for comparing truth data to result data. • A set of scripts for running several sets of results with different options and generating graphs.

  3. Solutions • Object level matching. • First, do matching. • For each ground truth object, get the output object that is the closest. • Alternatively, for each subset of truth objects, get the subset of output objects that minimizes the total overall distance. • Measure of precision / recall for all objects. • Score for each object match. • O(eˣ) in the worst case. • Pixel/object frame-level and single-match tracking. • For each frame, generate a series of metrics looking at the truth and result pixels and box sets. • Using keys, or the location of the object in frame k, get success rates for matching individual moving boxes.
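The greedy half of the object-level matching described on this slide can be sketched in a few lines. The Java fragment below is a minimal illustration, not ViPER-PE's actual code: it assumes a precomputed distance matrix between truth and result objects and a hypothetical acceptance threshold maxDistance, pairs each truth object with its closest unused result object, and reports precision and recall over the matches.

```java
// Minimal sketch of greedy object-level matching (illustrative, not ViPER-PE code).
class GreedyObjectMatcher {
    /** distance[t][r] is any distance between truth object t and result object r. */
    static double[] matchAndScore(double[][] distance, double maxDistance) {
        int nTruth = distance.length;
        int nResult = nTruth > 0 ? distance[0].length : 0;
        boolean[] resultUsed = new boolean[nResult];
        int matches = 0;

        for (int t = 0; t < nTruth; t++) {
            int best = -1;
            double bestDist = maxDistance;          // only accept matches closer than the threshold
            for (int r = 0; r < nResult; r++) {
                if (!resultUsed[r] && distance[t][r] < bestDist) {
                    best = r;
                    bestDist = distance[t][r];
                }
            }
            if (best >= 0) {
                resultUsed[best] = true;            // one-to-one greedy assignment
                matches++;
            }
        }
        double precision = nResult == 0 ? 0 : (double) matches / nResult;
        double recall    = nTruth  == 0 ? 0 : (double) matches / nTruth;
        return new double[] { precision, recall };
    }
}
```

The exhaustive subset-to-subset variant mentioned above follows the same scoring idea but enumerates groups of objects on both sides, which is where the exponential cost comes from.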

  4. Pixel Graphs

  5. Pixel-Object Graphs

  6. Tracking Graphs

  7. Progress • Polygons added. • Slight improvements in memory usage. • Various responses to user feedback. • Changed the way certain metrics are calculated.

  8. Goals and Milestones • Defining formats for tracking people, and metrics to operate on them. • Adding new types of graphs to the script output. • Replacing or upgrading the current graph toolkit. • Reducing memory usage.

  9. Fin • Dr. David Doermann • David Mihalcik • Ilya Makedon • & many others

  10. Object Level Matching • Most obvious solution: many-to-many matching. • Allows matching on any data type, at a price.
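One way to read "matching on any data type" is that the matcher only ever needs a distance between a truth value and a result value. The sketch below is hypothetical (the interface name AttributeDistance and the box distance are illustrative, not ViPER's actual API): a generic distance interface plus one instance for axis-aligned boxes based on 1 − intersection/union.

```java
// Hypothetical sketch: matching stays independent of the data type because each
// attribute type (box, polygon, text, ...) only has to expose a distance.
interface AttributeDistance<T> {
    double distance(T truth, T result);   // 0 = identical, larger = worse
}

// Example instance for axis-aligned boxes: 1 - (intersection area / union area).
class BoxOverlapDistance implements AttributeDistance<java.awt.Rectangle> {
    public double distance(java.awt.Rectangle t, java.awt.Rectangle r) {
        java.awt.Rectangle inter = t.intersection(r);
        double interArea = inter.isEmpty() ? 0 : (double) inter.width * inter.height;
        double unionArea = (double) t.width * t.height
                         + (double) r.width * r.height - interArea;
        return unionArea == 0 ? 1.0 : 1.0 - interArea / unionArea;
    }
}
```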

  11. Pixel-Frame-Box Metrics • Look at each frame and ask a specific question about its contents. • Number of pixels correctly matched. • Number of boxes that have some overlap. • Or overlap greater than some threshold. • How many boxes overlap a given box? (Fragmentation) • Look at all frames and ask a question: • Number of frames correctly detected. • Proper number of objects counted.
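As a concrete illustration of the per-frame questions above, the following hypothetical Java sketch computes pixel precision/recall from per-frame bit masks and a simple fragmentation count (how many result boxes overlap a given truth box). The class and method names are assumptions for illustration, not ViPER-PE's metric implementations.

```java
import java.awt.Rectangle;
import java.util.BitSet;
import java.util.List;

// Illustrative per-frame metrics (not ViPER-PE's actual code).
class FrameMetrics {
    /** Pixel precision and recall for one frame; each mask is a BitSet over the
     *  frame's pixels, with a set bit meaning the pixel belongs to an object. */
    static double[] pixelPrecisionRecall(BitSet truth, BitSet result) {
        BitSet hit = (BitSet) truth.clone();
        hit.and(result);                                   // correctly detected pixels
        double tp = hit.cardinality();
        double precision = result.cardinality() == 0 ? 0 : tp / result.cardinality();
        double recall    = truth.cardinality()  == 0 ? 0 : tp / truth.cardinality();
        return new double[] { precision, recall };
    }

    /** Fragmentation: number of result boxes overlapping a given truth box. */
    static int fragmentation(Rectangle truthBox, List<Rectangle> resultBoxes) {
        int count = 0;
        for (Rectangle r : resultBoxes) {
            if (truthBox.intersects(r)) {
                count++;
            }
        }
        return count;
    }
}
```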

  12. Individual Box Tracking Metrics • Mostly useful for the retrieval problem, this solution compares pairs consisting of a ground truth box and a result box. • Metrics are: • Position • Size • Orientation
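The three distances can be made concrete as below. This is a hypothetical sketch: the OrientedBox fields and the particular normalisations (centre distance, relative area difference, smallest angle between orientations) are illustrative choices, not ViPER's published definitions.

```java
// Illustrative per-box tracking distances (not ViPER's definitions).
class OrientedBox {
    double cx, cy;        // centre
    double width, height; // extents
    double angle;         // orientation in radians

    OrientedBox(double cx, double cy, double w, double h, double angle) {
        this.cx = cx; this.cy = cy; this.width = w; this.height = h; this.angle = angle;
    }
}

class TrackingMetrics {
    /** Euclidean distance between box centres. */
    static double positionError(OrientedBox t, OrientedBox r) {
        return Math.hypot(t.cx - r.cx, t.cy - r.cy);
    }

    /** Absolute area difference as a fraction of the (assumed non-empty) truth box area. */
    static double sizeError(OrientedBox t, OrientedBox r) {
        double truthArea = t.width * t.height;
        return Math.abs(truthArea - r.width * r.height) / truthArea;
    }

    /** Smallest absolute angle between the two orientations. */
    static double orientationError(OrientedBox t, OrientedBox r) {
        double d = Math.abs(t.angle - r.angle) % Math.PI;
        return Math.min(d, Math.PI - d);
    }
}
```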

  13. Questions: Ignoring Ground Truth • Assume the evaluation routine is given a set of objects to ignore (or rules for determining what type of object to ignore). How does this affect the output? • For pixel measures, simply don't count pixels in ignored regions. • For object matches, do the complete match; when finished, ignore result data that matches ignored truth.
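The second rule (match first, then discard results paired with ignored truth) might look like the hypothetical sketch below; the array/set representation of the match is an assumption for illustration.

```java
import java.util.*;

// Illustrative post-match filter for ignored ground truth (not ViPER-PE code).
class IgnoreFilter {
    /**
     * @param matchedTruth  matchedTruth[r] = index of the truth object that
     *                      result object r was matched to, or -1 if unmatched
     * @param ignoredTruth  indices of truth objects marked "ignore"
     * @return indices of result objects that should still be scored
     */
    static List<Integer> resultsToScore(int[] matchedTruth, Set<Integer> ignoredTruth) {
        List<Integer> keep = new ArrayList<>();
        for (int r = 0; r < matchedTruth.length; r++) {
            int t = matchedTruth[r];
            if (t < 0 || !ignoredTruth.contains(t)) {
                keep.add(r);                 // unmatched, or matched to non-ignored truth
            }
        }
        return keep;
    }
}
```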

  14. Questions: Presenting the Results
