
Segmenting Video Into Classes of Algorithm-Suitability


Presentation Transcript


  1. Oisin Mac Aodha (UCL), Gabriel Brostow (UCL), Marc Pollefeys (ETH). Segmenting Video Into Classes of Algorithm-Suitability

  2. Which algorithm should I (use / download / implement) to track things in this video? Video from Dorothy Kuipers

  3. The Optical Flow Problem • #2 all-time Computer Vision problem (disputable) • “Where did each pixel go?”
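
To make the question concrete, here is a minimal sketch (not the authors' method) that computes a dense flow field between two consecutive frames with OpenCV's Farneback algorithm; the frame filenames are placeholders.

```python
import cv2

# Hypothetical consecutive grayscale frames from a video
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# flow[y, x] = (dx, dy): where pixel (x, y) of the previous frame went
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5,  # pyramid scale
                                    3,    # pyramid levels
                                    15,   # window size
                                    3,    # iterations
                                    5,    # poly_n
                                    1.2,  # poly_sigma
                                    0)    # flags
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (pixels):", magnitude.mean())
```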

  4. Optical Flow Solutions • Compared against each other on the “blind” Middlebury test set

  5. Anonymous. Secrets of optical flow estimation and their principles. CVPR 2010 submission 477

  6. Anonymous. Secrets of optical flow estimation and their principles. CVPR 2010 submission 477

  7. 1st Best Algorithm (7th overall as of 17-12-2009), DPOF [18]: C. Lei and Y.-H. Yang. Optical flow estimation on coarse-to-fine region-trees using discrete optimization. ICCV 2009

  8. 2nd Best Algorithm (3rd overall as of 17-12-2009), Classic+Area [31]: Anonymous. Secrets of optical flow estimation and their principles. CVPR 2010 submission 477

  9. Should use algorithm A Anonymous. Secrets of optical flow estimation and their principles. CVPR 2010 submission 477 Should use algorithm B

  10. Sky, SignSymbol, Building, Car, Tree, Fence, Sidewalk, Pedestrian, ColumnPole, Road

  11. (Artistic version; object-boundaries don’t interest us) Image regions labelled by best algorithm: Algorithm A, Algorithm B, Algorithm C, Algorithm D

  12. Hypothesis: • that the most suitable algorithm can be chosen for each video automatically, through supervised training of a classifier

  13. Hypothesis: • that the most suitable algorithm can be chosen for each video automatically, through supervised training of a classifier • that one can predict the space-time segments of the video that are best-served by each available algorithm • (Can always come back to choose a per-frame or per-video algorithm)

  14. Experimental Framework (training pipeline): image data and ground-truth labels → feature extraction → learning algorithm → trained classifier → estimated class labels

  15. Experimental Framework: new test data is also fed through feature extraction and the trained classifier to produce estimated class labels

  16. Experimental Framework: the learning algorithm is Random Forests (Breiman, 2001)

  17. Experimental Framework (full pipeline shown again)
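
A minimal sketch of the learning step in this framework, assuming per-pixel feature vectors and best-algorithm labels have already been extracted (the arrays and file names below are hypothetical); scikit-learn's RandomForestClassifier stands in for the Random Forests of Breiman (2001).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # Random Forests (Breiman, 2001)

# Hypothetical pre-extracted training data
X_train = np.load("train_features.npy")   # shape (n_pixels, n_features)
y_train = np.load("train_labels.npy")     # best-algorithm class per pixel

forest = RandomForestClassifier(n_estimators=100, n_jobs=-1)
forest.fit(X_train, y_train)

# New test data: estimated class labels plus a per-pixel decision confidence
X_test = np.load("test_features.npy")
labels = forest.predict(X_test)
confidence = forest.predict_proba(X_test).max(axis=1)
```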

  18. “Making” more data

  19. Formulation • Training data D consists of feature vectors x and class labels c (i.e. best-algorithm per pixel) • Feature vector x is multi-scale, and includes: • Spatial Gradient • Distance Transform • Temporal Gradient • Residual Error (after bicubic reconstruction)

  20. Formulation • Training data D consists of feature vectors x and class labels c (i.e. best-algorithm per pixel) • Feature vector x is multi-scale, and includes: • Spatial Gradient • Distance Transform • Temporal Gradient • Residual Error (after bicubic reconstruction)

  21. Formulation • Training data D consists of feature vectors x and class labels c (i.e. best-algorithm per pixel) • Feature vector x is multi-scale, and includes: • Spatial Gradient • Distance Transform • Temporal Gradient • Residual Error (after bicubic reconstruction)

  22. Formulation Details • Temporal Gradient • Residual Error
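
A rough single-scale sketch of the four feature channels listed above (the paper's features are multi-scale); the specific operators chosen here, e.g. Sobel gradients and Canny edges for the distance transform, are illustrative assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np
from scipy import ndimage

def per_pixel_features(prev_gray, curr_gray):
    """prev_gray, curr_gray: 8-bit grayscale frames. Returns (n_pixels, 4) features."""
    curr = curr_gray.astype(np.float32)
    prev = prev_gray.astype(np.float32)

    # Spatial gradient magnitude
    gx = cv2.Sobel(curr, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(curr, cv2.CV_32F, 0, 1, ksize=3)
    spatial_grad = np.sqrt(gx ** 2 + gy ** 2)

    # Distance transform: distance of each pixel to the nearest edge
    edges = cv2.Canny(curr_gray, 50, 150)
    dist = ndimage.distance_transform_edt(edges == 0)

    # Temporal gradient: absolute frame difference
    temporal_grad = np.abs(curr - prev)

    # Residual error after bicubic down/up-sampling (reconstruction error)
    h, w = curr.shape
    small = cv2.resize(curr, (w // 2, h // 2), interpolation=cv2.INTER_CUBIC)
    recon = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
    residual = np.abs(curr - recon)

    return np.stack([spatial_grad, dist, temporal_grad, residual], axis=-1).reshape(-1, 4)
```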

  23. Application I: Optical Flow

  24. [Plots: FlowLib decision confidence, true positive rate vs. false positive rate (for EPE < 1.0); average EPE vs. number of pixels (×10^5)]
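
The quantities behind these plots are the standard endpoint error (EPE) and true/false positive rates obtained by thresholding the classifier's confidence; a minimal sketch, assuming hypothetical flow_est, flow_gt and confidence arrays:

```python
import numpy as np

def endpoint_error(flow_est, flow_gt):
    # Per-pixel EPE: Euclidean distance between estimated and ground-truth flow vectors
    return np.sqrt(((flow_est - flow_gt) ** 2).sum(axis=-1))

def tpr_fpr(confidence, epe, conf_threshold, epe_threshold=1.0):
    # A pixel counts as a positive prediction when confidence exceeds the threshold,
    # and as truly good when its EPE is below epe_threshold (1.0, as on the slide).
    predicted = confidence.ravel() > conf_threshold
    good = epe.ravel() < epe_threshold
    tpr = (predicted & good).sum() / max(good.sum(), 1)
    fpr = (predicted & ~good).sum() / max((~good).sum(), 1)
    return tpr, fpr
```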

  25. Application II: Feature Matching

  26. Comparing 2 Descriptions • What is a match? Details are important… • Nearest neighbor (see also FLANN) • Distance Ratio • PCA • Evaluation: density, # correct matches, tolerance “192 correct matches (yellow) and 208 false matches (blue)”
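
A minimal sketch of the matching recipe on this slide: SIFT descriptors, FLANN nearest-neighbour search, and the distance-ratio test (the image file names and the 0.7 ratio threshold are illustrative assumptions).

```python
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder images
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with a KD-tree index; return the 2 nearest neighbours per descriptor
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
knn = flann.knnMatch(des1, des2, k=2)

# Distance-ratio test: keep a match only if it clearly beats the runner-up
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
print(len(good), "ratio-test matches")
```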

  27. SIFT Decision Confidence

  28. SIFT Decision Confidence

  29. SIFT Decision Confidence

  30. Hindsight / Future Work • Current results don’t quite live up to the theory: • The best available algorithm’s flaws set the upper bound (ok) • Training data does not fit in memory (fixable) • “Winning” the race is about more than rank (problem!)

  31. Summary • Overall, predictions are correlated with the best algorithm for each segment (expressed as a probability) • Training data where one class dominates is dangerous – needs improvement • Other features could help make better predictions • Results don’t yet do the idea justice • One size does NOT fit all • At least in terms of algorithm suitability • Could use “bad” algorithms!

  32. Ground Truth Best vs. Based on Prediction

  33. FlowLib vs. Based on Prediction (white = 30 pixel end-point error)

  34. FlowLib vs. Based on Prediction (contrast enhanced)
