
Motion Segmentation at Any Speed

Presentation Transcript


  1. Motion Segmentation at Any Speed
  Shrinivas Pundlik and Stan Birchfield, Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, USA

  2. The problem of motion segmentation
  • Carve an image according to motion vectors
  • Gestalt theory:
    • Focus on well-organized patterns rather than disparate parts
    • “Grouping” - the key idea behind visual perception (the principle of common fate)
  • But motion is inherently differential!

  3. Previous approaches
  • Extraction of motion layers: Wang and Adelson 1994, Ayer and Sawhney 1995, Xiao and Shah 2005
  • Eigenvector based: Shi and Malik 1998
  • Rank constraint: Ke and Kanade 2001
  • Multi-body factorization: Vidal and Sastry 2003
  • Object level grouping: Sivic, Schaffalitzky and Zisserman 2004; Rothganger, Lazebnik, Schmid and Ponce 2004
  • Functional: Cremers and Soatto 2005

  4. Traditional approach
  [Figure: two frames vs. a spatiotemporal volume (a block of frames over time)]
  • Two unanswered questions:
    • What are the limitations of processing a block of frames?
    • How to integrate information over time?

  5. Batch processing
  [Plot: displacement x vs. time t for fast, medium, and slow features, with a fixed threshold]
  • time window → dependent upon speed

  6. Incremental processing
  [Plot: displacement x vs. time t for fast, medium, slow, and crawling features, with a fixed threshold]
  • → independent of speed, dependent only upon the amount of information
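To make the contrast concrete, here is a minimal sketch (not from the paper) of the incremental criterion: a feature accumulates motion evidence frame by frame and qualifies for grouping once a fixed evidence threshold is crossed, so a slow object simply needs more frames than a fast one. The threshold value and function name are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the incremental criterion (names and value are illustrative):
# evidence is accumulated displacement, and grouping is triggered by a fixed
# evidence threshold rather than a fixed time window.
EVIDENCE_THRESHOLD = 5.0  # pixels of accumulated displacement (assumed value)

def ready_to_group(per_frame_displacements):
    """True once a feature has accumulated enough motion evidence."""
    return float(np.sum(per_frame_displacements)) >= EVIDENCE_THRESHOLD

# A fast feature (5 px/frame) qualifies after 1 frame, a slow one (0.5 px/frame)
# after 10 frames: same amount of evidence, different amounts of time.
print(ready_to_group([5.0]))        # True
print(ready_to_group([0.5] * 9))    # False
print(ready_to_group([0.5] * 10))   # True
```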

  7. Algorithm overview
  • Detect and track Kanade-Lucas-Tomasi (KLT) feature points
  • Accumulate groups using region growing (neighbors from Delaunay triangulation)
  • Retain consistent groups
  • Maintain groups over time
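A small sketch of the first two ingredients, feature tracking and the Delaunay neighbor graph, using OpenCV's pyramidal Lucas-Kanade tracker and SciPy's Delaunay triangulation as stand-ins for the KLT tracker and triangulation referred to on the slide; the function names are illustrative, not from the paper.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def track_features(prev_gray, gray, prev_pts):
    """Track feature points between two grayscale frames with OpenCV's
    pyramidal Lucas-Kanade tracker (a stand-in for the KLT tracker).
    prev_pts: float32 array of shape (N, 1, 2)."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok].reshape(-1, 2), next_pts[ok].reshape(-1, 2)

def delaunay_neighbors(points):
    """Neighbor relation over features from a Delaunay triangulation."""
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[int(a)].add(int(b))
    return neighbors
```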

  8. Region growing
  Between two frames:
  • Repeat until all features have been considered:
    • Randomly select a seed feature
    • Fit a motion model to the seed's neighbors
    • Repeat until the group does not change:
      • Discard all features except the one nearest the centroid
      • Grow the group by recursively including neighboring features with similar motion
      • Update the motion model
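A sketch of the inner growth loop under a deliberately simplified assumption: the motion model is just a 2-D translation (the mean displacement of the group) and similarity is the Euclidean distance between displacement vectors. The tolerance value and function signature are illustrative, not the paper's actual model.

```python
import numpy as np

def grow_group(seed, points, motion, neighbors, motion_tol=1.5):
    """Grow one feature group from a seed (illustrative sketch of the slide's
    pseudocode; a simple translational motion model is assumed).
    points: (N, 2) feature positions; motion: (N, 2) displacement vectors;
    neighbors: dict {feature index: set of Delaunay neighbor indices}."""
    group = {seed}
    model = motion[seed].copy()  # initial motion model from the seed alone
    while True:
        prev_group = group
        # keep only the feature nearest the group centroid, then regrow from it
        centroid = points[list(group)].mean(axis=0)
        start = min(group, key=lambda i: np.linalg.norm(points[i] - centroid))
        group, frontier = {start}, [start]
        while frontier:  # grow by recursively including neighbors with similar motion
            f = frontier.pop()
            for n in neighbors[f]:
                if n not in group and np.linalg.norm(motion[n] - model) < motion_tol:
                    group.add(n)
                    frontier.append(n)
        model = motion[list(group)].mean(axis=0)  # update the motion model
        if group == prev_group:
            return group
```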

  9. Region growing for a single group: the choice of seed heavily influences the resulting group

  10. Finding consistent groups
  Consistency check: features that are always grouped together, no matter the seed point
  [Illustration: groupings of features a, b, c, d obtained from two different seed points; summing the per-seed co-grouping matrices shows which features were grouped together every time]
  In practice, we use 7 seed points
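A sketch of the consistency check: region growing is run from several seed points, the number of times each pair of features lands in the same group is counted, and only features that were grouped together in every run are merged. The count-matrix representation is an illustrative choice, not necessarily the data structure used in the paper.

```python
import numpy as np

def consistent_groups(all_groupings, n_features):
    """all_groupings: one grouping result per seed point, each a list of groups
    (sets of feature indices). Returns the consistent groups: features that
    co-occur in the same group for every seed point."""
    n_runs = len(all_groupings)
    co = np.zeros((n_features, n_features), dtype=int)
    for groups in all_groupings:
        for g in groups:
            for i in g:
                for j in g:
                    co[i, j] += 1
    consistent = co == n_runs  # grouped together in every run
    # collect equivalence classes of the consistency relation
    seen, result = set(), []
    for i in range(n_features):
        if i in seen:
            continue
        comp, stack = set(), [i]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(n_features) if consistent[u, v] and v not in comp)
        seen |= comp
        result.append(comp)
    return result
```

With the 7 seed points mentioned on the slide, all_groupings would simply hold the 7 region-growing results for the same pair of frames.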

  11. Single consistent group
  [Figure: groups obtained from seed points 1, 2, and 3, and the resulting consistent group]

  12. Multiple consistent groups
  [Figure: groups obtained from seed points 1, 2, and 3; only 3 groups appear in the initial results, 4 groups in the final result]

  13. Maintaining groups over time
  • Find new groups (when new objects enter the scene)
  • Split existing groups (when the configuration changes)
  • Add new features to existing groups (when new information is available)
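A structural sketch of how these three operations could be sequenced each frame; the three callables are hypothetical hooks standing in for the mechanisms detailed on the next three slides.

```python
def maintain_groups(groups, ungrouped, find_new_groups, split_if_needed, try_add_feature):
    """Per-frame group maintenance (structural sketch; the three callables are
    hypothetical hooks corresponding to the three bullets above)."""
    # 1. group features that entered the scene and are not yet assigned
    groups = groups + find_new_groups(ungrouped)
    # 2. re-segment groups whose configuration has changed
    groups = [g for old in groups for g in split_if_needed(old)]
    # 3. try to attach remaining ungrouped features to existing groups
    for f in list(ungrouped):
        if try_add_feature(f, groups):
            ungrouped.remove(f)
    return groups, ungrouped
```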

  14. Finding new groups
  [Figure: ungrouped features adjacent to group 1 and group 2 are passed through the consistent-grouping step, forming a new group 3; one feature remains ungrouped]

  15. Splitting existing groups
  • If lost features > x % of the original features, try to regroup (find consistent groups again)
  • [Figure: features tracked from frame k to frame k + n; some features are lost and new ones are added; after regrouping, either all features are regrouped into one group or multiple groups are found]
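A sketch of the splitting trigger only: the x % threshold on lost features is the tunable parameter, and the 20 % default below is an assumed value, not taken from the talk.

```python
def needs_regrouping(current_feature_count, original_feature_count, lost_fraction=0.20):
    """Trigger for re-running the consistent-grouping step on an existing group.
    The slide's rule: regroup when more than x% of the group's original features
    have been lost (the 0.20 default here is an assumed value)."""
    lost = original_feature_count - current_feature_count
    return lost > lost_fraction * original_feature_count
```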

  16. Adding new features
  [Figure: new (ungrouped) features 1, 2, and 3 adjacent to group 1 (with motion model 1) and group 2 (with motion model 2)]
  • Features 1 and 3 are each a neighbor to only one group: compare the feature's motion with that group's motion model; add if similar
  • Feature 2 is a neighbor to multiple groups: compare its motion with all the groups' motion models; add if similar to one and dissimilar from the rest
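A sketch of the two cases on this slide, again assuming a translational motion model per group (its mean displacement) and Euclidean similarity; the tolerance value and function name are illustrative.

```python
import numpy as np

def assign_new_feature(feature_motion, neighbor_group_ids, group_models, motion_tol=1.5):
    """Assign an ungrouped feature to an existing group, or return None.
    neighbor_group_ids: ids of groups adjacent to the feature in the Delaunay graph.
    group_models: dict {group id: mean motion vector of that group} (assumed model)."""
    dists = {g: np.linalg.norm(feature_motion - group_models[g]) for g in neighbor_group_ids}
    similar = [g for g, d in dists.items() if d < motion_tol]
    if len(neighbor_group_ids) == 1:
        # neighbor to only one group: add if the feature's motion matches it
        return similar[0] if similar else None
    # neighbor to multiple groups: add only if similar to exactly one of them
    return similar[0] if len(similar) == 1 else None
```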

  17. Experimental results
  [Figure: statue sequence at several frames (64, 185, 279, 395, 468, 497, 520)]

  18. Experimental results: the number of groups is determined automatically and dynamically

  19. Experimental results
  [Figure: mobile-calendar sequence (frames 14, 70, 100); car-map sequence (frames 11, 20, 35); free-throw sequence (frames 10, 15, 20)]

  20. Videos Videos available at http://www.ces.clemson.edu/~stb/research/motion_segmentation

  21. Insensitivity to speed
  [Figure: results at corresponding time instants for the normal sequence (frames 64, 185, 395, 480), with half the frames dropped (frames 32, 93, 197, 240), and with frames doubled (frames 128, 370, 790, 960)]

  22. Insensitivity to parameter
  [Figure: results at frames 4, 8, 12, and 64 for the normal threshold, half the threshold, and twice the threshold]

  23. Future application: Mobile robot obstacle avoidance
  • Speed of algorithm: 20 ms per image frame (plus feature tracking, which is real time)
  • → Can apply the algorithm to real-time problems

  24. Conclusion
  • Motion is inherently differential
  • Motion segmentation should take this into account
  • Proposed algorithm:
    • segments based upon available evidence, independently of object speed
    • incrementally processes video
    • contains one primary parameter, namely the amount of evidence needed to split a group
    • works in real time
    • automatically and dynamically computes the number of objects
  • Future work: dense segmentation
