
Local spatio-temporal image features for motion interpretation


Presentation Transcript


  1. Local spatio-temporal image features for motion interpretation. Ivan Laptev. Computational Vision and Active Perception Laboratory (CVAP), Dept. of Numerical Analysis and Computer Science, KTH (Royal Institute of Technology), SE-100 44 Stockholm, Sweden.

  2. Motivation. Goal: interpretation of dynamic scenes. Common methods: camera stabilization, segmentation, tracking. Common problems: complex and changing background, appearance of new objects. ⇒ Make no global assumptions about the scene.

  3. Space-time. No global assumptions ⇒ consider local spatio-temporal neighborhoods. (Example sequences: hand waving, boxing.)


  5. Questions: • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICPR'04) • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)

  6. Questions (recap): • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICPR'04) • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)

  7. Space-Time interest points. What neighborhoods to consider? Distinctive neighborhoods ⇒ high image variation in space and time ⇒ look at the distribution of the gradient. Space-time gradient: ∇L = (Lx, Ly, Lt)^T; consider its covariance over a neighborhood, at spatial scale σ and temporal scale τ.

  8. Space-Time interest points. Second-moment matrix, describing the distribution of ∇L within a local neighborhood: μ = g(·; σ², τ²) * (∇L ∇L^T) (similar to the Harris operator [Harris and Stephens, 1988]). High variation of ∇L ⇒ large eigenvalues of μ ⇒ detect local maxima of H = det(μ) − k·trace³(μ) over (x, y, t).
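
The detection step on slides 7-8 can be illustrated with a short NumPy/SciPy sketch. It is only a minimal illustration, not the implementation used in the talk: the scales, the constant k = 0.005, the 5x5x5 neighborhood for the local maxima, the positivity threshold and the synthetic test sequence are all assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def space_time_interest_points(f, sigma=2.0, tau=2.0, s=2.0, k=0.005, n_points=50):
        """Minimal sketch of a space-time Harris detector on a video volume
        f of shape (T, Y, X). Scales sigma/tau and the constant k are assumptions."""
        # Scale-space representation L = g(.; sigma^2, tau^2) * f
        L = gaussian_filter(f.astype(float), sigma=(tau, sigma, sigma))

        # Space-time gradient (Lx, Ly, Lt)
        Lt, Ly, Lx = np.gradient(L)

        # Second-moment matrix mu: Gaussian-weighted outer products of the gradient,
        # integrated at scales (s*sigma, s*tau)
        w = (s * tau, s * sigma, s * sigma)
        mu = {}
        for a, na in [(Lx, 'x'), (Ly, 'y'), (Lt, 't')]:
            for b, nb in [(Lx, 'x'), (Ly, 'y'), (Lt, 't')]:
                mu[na + nb] = gaussian_filter(a * b, sigma=w)

        # H = det(mu) - k * trace(mu)^3
        det = (mu['xx'] * (mu['yy'] * mu['tt'] - mu['yt'] ** 2)
               - mu['xy'] * (mu['xy'] * mu['tt'] - mu['yt'] * mu['xt'])
               + mu['xt'] * (mu['xy'] * mu['yt'] - mu['yy'] * mu['xt']))
        trace = mu['xx'] + mu['yy'] + mu['tt']
        H = det - k * trace ** 3

        # Local maxima of H over (x, y, t); keep the strongest responses
        is_max = (H == maximum_filter(H, size=5)) & (H > 0)
        t, y, x = np.nonzero(is_max)
        order = np.argsort(H[t, y, x])[::-1][:n_points]
        return np.stack([t[order], y[order], x[order]], axis=1)

    # Example on a synthetic point that reverses its direction of motion
    video = np.zeros((32, 64, 64))
    for t in range(32):
        x = 10 + t if t < 16 else 42 - t   # direction change around t = 16
        video[t, 32, x] = 1.0
    print(space_time_interest_points(gaussian_filter(video, 1.5))[:5])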

  9. Space-Time interest points: motion event detection.


  11. Space-Time interest points: motion event detection against a complex background.

  12. Space-Time interest points: examples of detected events include appearance/disappearance, accelerations, and split/merge.

  13. Relations to psychology. "... The world presents us with a continuous stream of activity which the mind parses into events. Like objects, they are bounded; they have beginnings, (middles,) and ends. Like objects, they are structured, composed of parts. However, in contrast to objects, events are structured in time..." Tversky et al. (2002), in "The Imitative Mind". • Events are well localized in time and are consistently identified by different people. • The ability to memorize activities has been shown to depend on how finely we subdivide the motion into units.

  14. Questions (recap): • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICPR'04) • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)

  15. Questions (recap): • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICCV'03): scale and frequency transformations • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)

  16. Spatio-temporal scale selection. An image sequence f can be influenced by changes in spatial and temporal resolution: point transformation p' = S p; covariance transformation Σ' = S Σ S^T.

  17. Spatio-temporal scale selection. Want to estimate S from the data ⇒ estimate the spatial and temporal extents of image structures ⇒ scale selection. Scale selection in space [Lindeberg, IJCV'98]; extension to space-time: find normalization parameters a, b, c, d for the scale-normalized Laplacian ∇²norm L = σ^(2a) τ^(2b) (Lxx + Lyy) + σ^(2c) τ^(2d) Ltt.

  18. Spatio-temporal scale selection. Analyze a prototype spatio-temporal (Gaussian) blob: requiring the normalized Laplacian to assume extrema at the blob's center and at scales matching its spatial and temporal extent gives the parameter values a = 1, b = 1/4, c = 1/2, d = 3/4.

  19. Spatio-temporal scale selection ⇒ the normalized spatio-temporal Laplacian operator ∇²norm L = σ² τ^(1/2) (Lxx + Lyy) + σ τ^(3/2) Ltt assumes extremal values at positions and scales corresponding to the centers and the spatio-temporal extents of Gaussian blobs.
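
A minimal sketch of scale selection with this operator, assuming a small discrete set of candidate scales and a plain argmax of the magnitude of the normalized response at a given point (the candidate scale values and the test blob are illustrative, not from the talk):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def normalized_laplacian(f, sigma, tau):
        """Scale-normalized spatio-temporal Laplacian with (a, b, c, d) = (1, 1/4, 1/2, 3/4):
            nabla^2_norm L = sigma^2 tau^(1/2) (Lxx + Lyy) + sigma tau^(3/2) Ltt
        for a video volume f of shape (T, Y, X)."""
        L = gaussian_filter(f.astype(float), sigma=(tau, sigma, sigma))
        Lxx = np.gradient(np.gradient(L, axis=2), axis=2)
        Lyy = np.gradient(np.gradient(L, axis=1), axis=1)
        Ltt = np.gradient(np.gradient(L, axis=0), axis=0)
        return sigma**2 * tau**0.5 * (Lxx + Lyy) + sigma * tau**1.5 * Ltt

    def select_scales(f, point, sigmas=(1, 2, 4, 8), taus=(1, 2, 4)):
        """Pick (sigma, tau) maximizing the normalized response at a given (t, y, x)."""
        t, y, x = point
        best = max((abs(normalized_laplacian(f, s, u)[t, y, x]), s, u)
                   for s in sigmas for u in taus)
        return best[1], best[2]

    # Example: a Gaussian blob of known spatial/temporal extent
    T, Y, X = np.mgrid[0:32, 0:64, 0:64]
    blob = np.exp(-((X - 32)**2 + (Y - 32)**2) / (2 * 4**2) - (T - 16)**2 / (2 * 2**2))
    print(select_scales(blob, (16, 32, 32)))   # extremum expected near (sigma, tau) = (4, 2)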

  20. Space-Time interest points under a scale transformation S. H depends on μ and, hence, on Σ and on S ⇒ adapt interest points by iteratively computing scale estimation (*) and interest point detection (**): 1. Fix Σ. 2. Detect interest points (**). 3. For each detected interest point: 4. estimate the scales σ², τ² (*); 5. update the covariance Σ; 6. re-detect using the updated Σ. 7. Iterate 3-6 until the scale estimates σ, τ converge.

  21. Spatio-temporal scale selection: stability to size changes, e.g. camera zoom.

  22. Spatio-temporal scale selection: selection of temporal scales captures the frequency of events.

  23. Questions (recap): • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICCV'03): scale and frequency transformations • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)

  24. Questions (recap): • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICPR'04): transformations due to camera motion, illustrated over time for a stabilized vs. a stationary camera • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)

  25. Effect of camera motion: local jet descriptors computed for the stabilized and the stationary camera.

  26. Galilean transformation G. Point transformation: p' = G p, with G = (1 0 vx; 0 1 vy; 0 0 1). Covariance transformation: Σ' = G Σ G^T.

  27. Estimation of G. Want to "undo" the effect of G. Estimating G directly would require known point correspondences ⇒ bad. Instead, consider local measurements: the space-time gradient ∇L and the second-moment matrix μ.

  28. Estimation of G. Transformations of ∇L and μ under G: ∇L' = G^(-T) ∇L and μ' = G^(-T) μ G^(-1). Idea: fix a "normal" form of μ and estimate G by bringing the measured second-moment matrix into that form (normalization).

  29. Estimation of G. The velocity can be solved for from μ (similar to Lucas & Kanade optic flow) ... however, μ itself depends on the neighborhood used and hence on G, which motivates an iterative method for estimating G and μ: 1. Fix an initial Σ. 2. Estimate the velocity according to (*). 3. Update Σ (and G). 4. Iterate 2-3-4 until the velocity estimate converges.
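
The remark that the velocity can be solved for from μ, as in Lucas and Kanade's optic flow, can be sketched as below. This is a generic least-squares flow estimate under a local constant-velocity assumption, not the exact normalization scheme of the talk; the scales and the test pattern are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_velocity(f, point, sigma=2.0, tau=2.0, s=2.0):
        """Estimate (vx, vy) at a space-time point from the second-moment matrix mu
        by solving the Lucas-Kanade normal equations
            [mu_xx mu_xy] [vx]     [mu_xt]
            [mu_xy mu_yy] [vy] = - [mu_yt].
        f has shape (T, Y, X); sigma, tau, s are illustrative values."""
        L = gaussian_filter(f.astype(float), sigma=(tau, sigma, sigma))
        Lt, Ly, Lx = np.gradient(L)
        w = (s * tau, s * sigma, s * sigma)
        mu = lambda a, b: gaussian_filter(a * b, sigma=w)[point]
        A = np.array([[mu(Lx, Lx), mu(Lx, Ly)],
                      [mu(Lx, Ly), mu(Ly, Ly)]])
        b = -np.array([mu(Lx, Lt), mu(Ly, Lt)])
        vx, vy = np.linalg.solve(A, b)
        # Galilean transformation parameterized by the estimated velocity (cf. slide 26)
        G = np.array([[1.0, 0.0, vx],
                      [0.0, 1.0, vy],
                      [0.0, 0.0, 1.0]])
        return (vx, vy), G

    # Example: a pattern translating one pixel per frame in x
    T, Y, X = np.mgrid[0:32, 0:32, 0:32]
    video = np.sin(0.8 * (X - T)) + np.cos(0.6 * Y)
    v, G = local_velocity(video, (16, 16, 16))
    print(np.round(v, 2))   # vx should be close to 1, vy close to 0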

  30. Estimation of G: experiments. Comparison of local jet responses computed in corresponding neighborhoods, for non-adapted vs. Galilei-adapted neighborhoods.

  31. Space-Time interest points under a velocity transformation G. H depends on μ and, hence, on the velocity transformation G ⇒ adapt interest points by iteratively computing velocity estimation (*) and interest point detection (**): 1. Fix Σ. 2. Detect interest points (**). 3. For each detected interest point: 4. estimate the velocity (*); 5. update the covariance Σ; 6. re-detect using the updated Σ. 7. Iterate 3-6 until the velocity and position estimates converge.

  32. Adapted interest points: interest points vs. velocity-adapted interest points, shown for a stabilized and a stationary camera.

  33. Evaluation: repeatability. Synthetic experiments in which G is known: warp f into f' with G (and back with G⁻¹) and count how many of the points detected in f and f' correspond.
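
A minimal sketch of such a repeatability measure, assuming the two detected point sets and the known warp G are given, and using an arbitrary matching tolerance:

    import numpy as np

    def repeatability(points_f, points_fprime, G, tol=1.5):
        """Fraction of interest points detected in f whose image under the known
        transformation G has a detection in f' within distance tol.
        Points are arrays of shape (N, 3) with rows (x, y, t)."""
        mapped = points_f @ G.T                      # map detections from f into f'
        d = np.linalg.norm(mapped[:, None, :] - points_fprime[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) <= tol))

    # Example with a Galilean warp G (velocity vx = 1, vy = 0), i.e. x' = x + t
    G = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    pts_f = np.array([[10.0, 12.0, 3.0], [20.0, 5.0, 7.0], [30.0, 30.0, 1.0]])
    pts_fprime = pts_f @ G.T + 0.3                   # detections in f' (slightly perturbed)
    print(repeatability(pts_f, pts_fprime, G))       # 1.0: all points repeat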

  34. Stability of descriptors. Define local jet descriptors (Gaussian derivatives at the detected points and scales, see slide 39) and measure the distance between descriptors computed at corresponding points.

  35. Questions (recap): • How to find informative neighborhoods? (ICCV'03) • How to deal with transformations in the data? (ICCV'03) • How to describe the neighborhoods? (SCMVP'04) • How to use the obtained features for applications? (ICPR'04)


  37. Features from human actions

  38. Space-time neighborhoods (examples: boxing, walking, hand waving).

  39. Local space-time descriptors. A well-founded choice of local descriptor is the local jet (Koenderink and van Doorn, 1987) computed from spatio-temporal Gaussian derivatives, here at the interest points pi: j(pi) = (Lx, Ly, Lt, Lxx, ..., Ltttt), where each component is a Gaussian derivative of the image sequence evaluated at the locally adapted scales.
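
A sketch of a spatio-temporal local-jet descriptor built from Gaussian derivatives at a detected point is shown below; the derivative order (2, for brevity), the scales and the omission of scale normalization are simplifications, not the exact descriptor of the talk.

    import numpy as np
    from itertools import combinations_with_replacement
    from scipy.ndimage import gaussian_filter

    def local_jet(f, point, sigma, tau, order=2):
        """Local jet at a space-time point: all Gaussian derivatives of total order
        1..order over (t, y, x), computed at the (detected) scales sigma, tau.
        Scale normalization of the derivatives is omitted here for brevity.
        f has shape (T, Y, X); point is (t, y, x)."""
        L = gaussian_filter(f.astype(float), sigma=(tau, sigma, sigma))
        jet = []
        for n in range(1, order + 1):
            # every derivative multi-index of total order n over axes (t, y, x)
            for axes in combinations_with_replacement((0, 1, 2), n):
                D = L
                for ax in axes:
                    D = np.gradient(D, axis=ax)
                jet.append(D[point])
        return np.array(jet)

    # Example: 9-dimensional order-2 jet at the centre of a random volume
    rng = np.random.default_rng(0)
    video = rng.standard_normal((16, 32, 32))
    print(local_jet(video, (8, 16, 16), sigma=2.0, tau=2.0).shape)   # (9,)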

  40. Use of descriptors: clustering. • Group similar points in the space of image descriptors using K-means clustering. • Select significant clusters (c1, c2, c3, c4) and use them for classification.
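
The clustering step can be sketched with scikit-learn's KMeans; the number of clusters and the "significance" criterion (plain cluster size here) are assumptions, not the criterion used in the talk.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_descriptors(descriptors, n_clusters=4, min_size=20):
        """Group local space-time descriptors with K-means and keep 'significant'
        clusters, here simply defined as clusters with at least min_size members."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(descriptors)
        labels, counts = np.unique(km.labels_, return_counts=True)
        significant = labels[counts >= min_size]
        return km, significant

    # Example on synthetic 9-dimensional jet-like descriptors from four groups
    rng = np.random.default_rng(0)
    descriptors = np.vstack([rng.normal(loc=c, size=(50, 9)) for c in (0.0, 3.0, 6.0, 9.0)])
    km, significant = cluster_descriptors(descriptors)
    print(sorted(significant))   # all four clusters pass the size criterion here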

  41. Use of descriptors: clustering.

  42. Use of descriptors: matching. • Find similar events in pairs of video sequences.

  43. Are other descriptors better? For a spatio-temporal neighborhood, consider the following choices: • multi-scale spatio-temporal derivatives • projections onto orthogonal bases obtained with PCA • histogram-based descriptors.

  44. Multi-scale derivative filters. Derivatives up to order 2 or 4, at 3 spatial scales and 3 temporal scales: 9 x 3 x 3 = 81 or 34 x 3 x 3 = 306 dimensional descriptors.
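
The quoted dimensionalities follow from counting spatio-temporal derivative multi-indices; a quick check, with the 3 x 3 scale grid from the slide:

    from itertools import combinations_with_replacement

    def n_derivatives(max_order):
        """Number of distinct spatio-temporal partial derivatives of order 1..max_order."""
        return sum(len(list(combinations_with_replacement('xyt', n)))
                   for n in range(1, max_order + 1))

    for max_order in (2, 4):
        dim = n_derivatives(max_order) * 3 * 3      # 3 spatial x 3 temporal scales
        print(max_order, n_derivatives(max_order), dim)   # 2 9 81  /  4 34 306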

  45. PCA descriptors. • Compute normal flow or optic flow in locally adapted spatio-temporal neighborhoods of features. • Subsample the flow fields to a resolution of 9 x 9 x 9 pixels. • Learn PCA basis vectors (separately for each type of flow) from the features in training sequences. • Project the flow fields of new features onto the 100 most significant eigen-flow-vectors.
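
A sketch of the PCA projection step with scikit-learn; the flow computation itself is not shown, and the subsampled 9 x 9 x 9 two-component flow fields are assumed to be given as flattened vectors (the random data stands in for real flow):

    import numpy as np
    from sklearn.decomposition import PCA

    # Assume each feature's optic (or normal) flow has been subsampled to 9x9x9 and
    # flattened; for a 2-component flow this gives 9*9*9*2 = 1458 values per feature.
    rng = np.random.default_rng(0)
    train_flows = rng.standard_normal((500, 9 * 9 * 9 * 2))   # flows of training features
    new_flows = rng.standard_normal((10, 9 * 9 * 9 * 2))      # flows of new features

    # Learn the eigen-flow-vectors from training features ...
    pca = PCA(n_components=100).fit(train_flows)
    # ... and describe new features by their projections onto that basis
    descriptors = pca.transform(new_flows)
    print(descriptors.shape)   # (10, 100)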

  46. Position-dependent histograms. • Divide the neighborhood of each point pi into M³ sub-neighborhoods, here M = 1, 2, 3. • Compute space-time gradients (Lx, Ly, Lt)^T or optic flow (vx, vy)^T at combinations of 3 temporal and 3 spatial scales, defined relative to the locally adapted detection scales. • Compute separable histograms over all sub-neighborhoods, derivative/velocity components and scales.
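
A minimal sketch of a position-dependent, separable-histogram descriptor over space-time gradients; the grid size M = 2, the number of bins, the value range and the single scale used here are simplifications of the slide.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def position_dependent_histograms(f, point, radius=8, M=2, bins=8, sigma=2.0, tau=2.0):
        """Descriptor for a cubic space-time neighborhood of half-size `radius` around
        `point` (t, y, x): split it into M^3 sub-neighborhoods and, in each, build
        separate (separable) histograms of the gradient components Lx, Ly, Lt."""
        L = gaussian_filter(f.astype(float), sigma=(tau, sigma, sigma))
        t, y, x = point
        sl = np.s_[t - radius:t + radius, y - radius:y + radius, x - radius:x + radius]
        Lt, Ly, Lx = (g[sl] for g in np.gradient(L))

        desc = []
        step = 2 * radius // M
        for i in range(M):
            for j in range(M):
                for k in range(M):
                    sub = np.s_[i * step:(i + 1) * step,
                                j * step:(j + 1) * step,
                                k * step:(k + 1) * step]
                    for comp in (Lx, Ly, Lt):
                        hist, _ = np.histogram(comp[sub], bins=bins, range=(-1.0, 1.0))
                        desc.append(hist / max(hist.sum(), 1))
        return np.concatenate(desc)        # length M^3 * 3 * bins

    rng = np.random.default_rng(0)
    video = rng.standard_normal((32, 64, 64))
    print(position_dependent_histograms(video, (16, 32, 32)).shape)   # (2^3 * 3 * 8,) = (192,)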

  47. Evaluation: action recognition. Database: walking, running, jogging, handwaving, handclapping, boxing. Initially, recognition with a nearest-neighbor classifier (NNC): • Take the sequences of X subjects for training (S_train). • For each test sequence s_test, find the closest training sequence s_train,i by minimizing the distance between their feature representations. • The action of s_test is regarded as recognized if class(s_test) = class(s_train,i).
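
The nearest-neighbor step can be sketched as follows, assuming each sequence has already been reduced to a fixed-length feature vector (e.g. via the clustering above); this simplifies the sequence distance actually used in the talk, and the toy data are synthetic.

    import numpy as np

    def nn_classify(train_vectors, train_labels, test_vectors):
        """Nearest-neighbor classification of sequences: each test sequence gets the
        label of the closest training sequence (Euclidean distance)."""
        d = np.linalg.norm(test_vectors[:, None, :] - train_vectors[None, :, :], axis=2)
        return train_labels[np.argmin(d, axis=1)]

    # Toy example: 6 action classes, sequences represented by fixed-length vectors
    rng = np.random.default_rng(0)
    classes = np.array(['walking', 'running', 'jogging', 'handwaving', 'handclapping', 'boxing'])
    centres = rng.standard_normal((6, 16)) * 3
    train_labels = np.repeat(classes, 8)
    train_vectors = np.vstack([centres[i] + rng.standard_normal((8, 16)) for i in range(6)])
    test_vectors = centres + rng.standard_normal((6, 16))
    print(nn_classify(train_vectors, train_labels, test_vectors))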

  48. Results: recognition rates for all descriptor types, comparing scale-adapted features with scale- and velocity-adapted features.

  49. Results: recognition rates for the histogram-based descriptors (Hist), scale-adapted vs. scale- and velocity-adapted features.

  50. Results: recognition rates for the local jet descriptors (Jets), scale-adapted vs. scale- and velocity-adapted features.
