Non-linear tracking
Presentation Transcript

  1. Non-linear tracking Marc Pollefeys COMP 256 Some slides and illustrations from D. Forsyth, M. Isard, T. Darrell …

  2. Tentative class schedule

  3. Final project presentation. No further assignments; focus on the project. Final presentation:
  • Presentation and/or demo (your choice, but let me know)
  • Short paper (due April 22 by 23:59; preferably LaTeX, IEEE proc. style)
  • Final presentation/demo April 24 and 26

  4. Bayes Filters: estimating the system state from noisy observations.
  System state dynamics: x_t ~ p(x_t | x_{t-1})
  Observation dynamics: z_t ~ p(z_t | x_t)
  We are interested in the belief or posterior density: p(x_t | z_1, ..., z_t)

  5. Recall the "law of total probability" and "Bayes' rule". From these, the two steps of the Bayes filter are constructed:
  Predict: p(x_t | z_1, ..., z_{t-1}) = ∫ p(x_t | x_{t-1}, z_1, ..., z_{t-1}) p(x_{t-1} | z_1, ..., z_{t-1}) dx_{t-1}
  Update: p(x_t | z_1, ..., z_t) ∝ p(z_t | x_t, z_1, ..., z_{t-1}) p(x_t | z_1, ..., z_{t-1})

  6. Assumptions: Markov process, i.e. p(x_t | x_{t-1}, z_1, ..., z_{t-1}) = p(x_t | x_{t-1}) and p(z_t | x_t, z_1, ..., z_{t-1}) = p(z_t | x_t). The two steps then simplify to:
  Predict: p(x_t | z_1, ..., z_{t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_1, ..., z_{t-1}) dx_{t-1}
  Update: p(x_t | z_1, ..., z_t) ∝ p(z_t | x_t) p(x_t | z_1, ..., z_{t-1})

  7. Bayes Filter: how do we use it, and what else do we need to know?
  • A motion model p(x_t | x_{t-1})
  • A perceptual model p(z_t | x_t)
  • A prior p(x_0) to start from (a minimal sketch of the resulting predict/update loop follows)
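The slides give no code, but for a discrete state space the two steps above reduce to a matrix-vector product and a pointwise reweighting. A minimal Python sketch; the 10-state cyclic corridor, the motion matrix, and the likelihood vector are all illustrative assumptions, not from the slides:

```python
import numpy as np

def predict(belief, motion_model):
    # Predict: p(x_t | z_1..z_{t-1}) = sum over x_{t-1} of
    # p(x_t | x_{t-1}) * p(x_{t-1} | z_1..z_{t-1}); a matrix-vector product here.
    return motion_model @ belief

def update(belief, likelihood):
    # Update: multiply by the perceptual model p(z_t | x_t) and re-normalize.
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Illustrative 10-state cyclic corridor: mostly move right, sometimes stay put.
n = 10
motion = 0.8 * np.roll(np.eye(n), 1, axis=0) + 0.2 * np.eye(n)
belief = np.full(n, 1.0 / n)                        # uniform prior p(x_0)
z_lik = np.array([0.1] * 4 + [0.9] + [0.1] * 5)     # assumed p(z | x) for one measurement
belief = update(predict(belief, motion), z_lik)
```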

  8. Example 1. Step 0: initialization. Step 1: updating.

  9. Example 1 (continued). Step 2: predicting. Step 3: updating. Step 4: predicting.

  10. Several types of Bayes filters. They differ in how they represent probability densities:
  • Kalman filter
  • Multi-hypothesis filter
  • Grid-based approach
  • Topological approach
  • Particle filter

  11. Kalman Filter. Recall the general problem. The Kalman filter assumes linear dynamics and observation models with Gaussian noise, so its belief is a unimodal Gaussian.
  • Advantage: computational efficiency
  • Disadvantage: the assumptions are too restrictive
  A minimal sketch follows.
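A minimal 1-D Kalman filter sketch under the linear-Gaussian assumptions above, using a constant-position motion model; the noise variances and measurements are illustrative assumptions:

```python
def kalman_step(mu, var, z, motion_var=0.5, meas_var=1.0):
    # Predict: constant-position model, so the mean stays and uncertainty grows.
    mu_pred, var_pred = mu, var + motion_var
    # Update: blend prediction and measurement via the Kalman gain.
    gain = var_pred / (var_pred + meas_var)
    mu_new = mu_pred + gain * (z - mu_pred)
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new

# Broad Gaussian prior, then fold in a few measurements.
mu, var = 0.0, 10.0
for z in [1.2, 0.9, 1.1]:
    mu, var = kalman_step(mu, var, z)
```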

  12. Multi-hypothesis Tracking
  • The belief is a mixture of Gaussians
  • Each Gaussian hypothesis is tracked with its own Kalman filter
  • Weights are decided on the basis of how well each hypothesis predicts the sensor measurements (sketched below)
  • Advantage: can represent multimodal distributions
  • Disadvantages: computationally expensive, and it is difficult to decide which hypotheses to keep
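A sketch of just the weighting step: each hypothesis's Kalman filter supplies a predicted measurement distribution, and the mixture weights are rescaled by the resulting likelihoods. All numbers here are illustrative assumptions:

```python
import math

import numpy as np

def gaussian_pdf(z, mean, var):
    return math.exp(-0.5 * (z - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

# Predicted measurement mean/variance from each hypothesis's Kalman filter (assumed).
pred_means = np.array([1.0, 4.0, 9.0])
pred_vars = np.array([1.0, 1.0, 2.0])
weights = np.array([0.5, 0.3, 0.2])

z = 4.2  # new sensor measurement
likelihoods = np.array([gaussian_pdf(z, m, v) for m, v in zip(pred_means, pred_vars)])
weights = weights * likelihoods      # hypotheses that predicted z well gain weight
weights /= weights.sum()
# In a full tracker, hypotheses with negligible weight would be pruned here.
```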

  13. Grid-based Approaches
  • Use discrete, piecewise-constant representations of the belief
  • Tessellate the environment into small patches, each patch holding the belief that the object is in it (sketched below)
  • Advantage: able to represent arbitrary distributions over the discrete state space
  • Disadvantage: the computational and space complexity required to keep the position grid in memory and update it
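A sketch of one predict/update cycle on a 2-D tessellation, where prediction spreads probability mass to neighboring cells by convolution. The grid size, motion kernel, and Gaussian likelihood are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

# Uniform belief over a 100x100 tessellation (the whole grid must be kept in memory).
grid = np.full((100, 100), 1.0 / (100 * 100))

# Predict: a random-walk motion model spreads each cell's mass to its neighbors.
motion_kernel = np.array([[0.05, 0.10, 0.05],
                          [0.10, 0.40, 0.10],
                          [0.05, 0.10, 0.05]])
grid = convolve(grid, motion_kernel, mode="constant", cval=0.0)

# Update: multiply by an (assumed) Gaussian measurement likelihood, then normalize.
yy, xx = np.mgrid[0:100, 0:100]
likelihood = np.exp(-((xx - 60) ** 2 + (yy - 30) ** 2) / (2.0 * 5.0 ** 2))
grid *= likelihood
grid /= grid.sum()
```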

  14. Topological Approaches
  • A graph represents the state space: nodes represent the object's possible locations (e.g. a room), edges represent connectivity (e.g. a hallway)
  • Advantage: efficiency, because the state space is small
  • Disadvantage: coarseness of the representation

  15. Particle Filters
  • Also known as sequential Monte Carlo methods
  • Represent the belief by a set of samples, or particles, with nonnegative weights π(i) called importance factors
  • The updating procedure is sequential importance sampling with re-sampling (a skeleton is sketched below)
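A minimal skeleton of sequential importance sampling with re-sampling for a 1-D state; the dynamics and the Gaussian likelihood are placeholder assumptions, not the method of any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal(0.0, 5.0, size=N)   # samples from a broad prior
weights = np.full(N, 1.0 / N)              # uniform importance factors

def sir_step(particles, weights, z):
    # Predict: propagate each particle through placeholder dynamics plus noise.
    particles = 0.95 * particles + rng.normal(0.0, 0.5, size=len(particles))
    # Update: reweight by an assumed Gaussian measurement likelihood p(z | x).
    weights = weights * np.exp(-0.5 * (z - particles) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights, then reset weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles, weights = sir_step(particles, weights, z=1.3)
```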

  16. Example 2: Particle Filter. Step 0: initialization; each particle has the same weight. Step 1: updating weights; weights are proportional to p(z|x).

  17. Example 2: Particle Filter (continued). Step 2: predicting; predict the new locations of the particles. Step 3: updating weights; weights are proportional to p(z|x). Step 4: predicting. Particles become more concentrated in the region where the person is more likely to be.

  18. Compare the particle filter with a Bayes filter with a known distribution. [Figure: the predicting and updating steps of Example 1 and Example 2 shown side by side.]

  19. Comments on Particle Filters
  • Advantages: able to represent arbitrary densities; converge to the true posterior even for non-Gaussian, nonlinear systems; efficient in the sense that particles tend to focus on regions of high probability
  • Disadvantage: worst-case complexity grows exponentially in the dimension of the state space

  20. Particle Filtering in CV: Initial Particle Set
  • Particles at t = 0 are drawn from a wide prior because of the large initial uncertainty: a Gaussian with large covariance, or a uniform distribution (both options are sketched below)
  • The state includes shape and position; the prior is more constrained for shape (from MacCormick & Blake, 1998)
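A sketch of the two initialization options for a position-only state on the image plane; the image size, prior mean, and covariance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Option 1: wide Gaussian prior around an assumed initial guess (large covariance).
particles = rng.multivariate_normal([320.0, 240.0], np.diag([150.0**2, 150.0**2]), size=N)

# Option 2: uniform prior over an assumed 640x480 image.
particles = rng.uniform(low=[0.0, 0.0], high=[640.0, 480.0], size=(N, 2))

weights = np.full(N, 1.0 / N)  # every particle starts with the same importance factor
```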

  21. Particle Filtering: Sampling [Figure: a roulette wheel with arc lengths π(1), π(2), π(3), ..., π(N-1), π(N); courtesy of D. Fox]
  • Normalize the N particle weights so that they sum to 1
  • Resample particles by picking randomly and uniformly in the [0, 1] range N times (sketched below)
  • Analogous to spinning a roulette wheel with the arc length of each bin equal to the particle's weight
  • Adaptively focuses on promising areas of the state space
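A sketch of this roulette-wheel resampling as inverse-CDF sampling over the cumulative weights; the clamp guards against floating-point round-off at the last bin:

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_resample(particles, weights):
    weights = weights / weights.sum()          # normalize so the arcs sum to 1
    cdf = np.cumsum(weights)                   # arc boundaries around the wheel
    spins = rng.uniform(0.0, 1.0, size=len(weights))   # N uniform picks in [0, 1]
    idx = np.searchsorted(cdf, spins)          # bin each spin lands in
    idx = np.minimum(idx, len(weights) - 1)    # guard against round-off at 1.0
    return particles[idx]
```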

  22. Particle Filtering: Prediction
  • Update each particle using the generative form of the dynamics: a deterministic component (aka "drift") plus a random component (aka "diffusion")
  • The drift may be nonlinear (i.e., a different displacement for each particle)
  • Each particle diffuses independently, typically modeled with a Gaussian (sketched below)
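A sketch of the prediction step with a placeholder nonlinear drift and independent Gaussian diffusion; the drift function and noise scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, diffusion_std=2.0):
    # Drift: deterministic, possibly nonlinear, so each particle can move differently.
    drifted = particles + 3.0 * np.sin(particles / 50.0)   # placeholder nonlinear drift
    # Diffusion: independent Gaussian noise per particle.
    return drifted + rng.normal(0.0, diffusion_std, size=particles.shape)
```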

  23. Particle Filtering: Measurement
  • For each particle s(i), compute the new weight π(i) as the measurement likelihood: π(i) = p(z | s(i))
  • Enforcing plausibility: particles that represent impossible configurations are given zero likelihood, e.g. positions outside of the image (sketched below)
  • [Figure: a snake measurement-likelihood method, from MacCormick & Blake, 1998]
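A sketch of the measurement step for a 1-D position state. A Gaussian likelihood stands in for a real contour/snake likelihood, and the image width is an illustrative assumption; the plausibility gate zeroes out-of-image particles as the slide describes:

```python
import numpy as np

def measure(particles, z, meas_std=3.0, image_width=640):
    # New weight per particle s(i) is the measurement likelihood p(z | s(i)).
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    # Plausibility gate: positions outside the image get zero likelihood.
    w[(particles < 0) | (particles >= image_width)] = 0.0
    total = w.sum()
    # If every particle is implausible, fall back to uniform weights.
    return w / total if total > 0 else np.full(len(w), 1.0 / len(w))
```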

  24. Particle Filtering Steps (aka CONDENSATION) [Figure from Isard & Blake, 1998: one iteration of the filter, showing where sampling occurs, then drift, diffusion, and measurement of the likelihood]

  25. Particle Filtering Visualization (courtesy of M. Isard). A 1-D system; the red curve is the measurement likelihood.

  26. CONDENSATION: Example State Posterior (from Isard & Blake, 1998). Note how the initial distribution "sharpens".

  27. Example: Contour-based Head Template Tracking (courtesy of A. Blake)

  28. Example: Recovering from Distraction (from Isard & Blake, 1998)

  29. Obtaining a State Estimate
  • Note that no explicit state estimate is maintained, just a "cloud" of particles
  • An estimate at a particular time can be obtained by querying the current particle set
  • Some approaches: the "mean" particle, i.e. a weighted sum of particles, with confidence given by the inverse variance (sketched below)
  • What we really want is a mode finder: the mean of the tallest peak
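A sketch of the weighted-mean query with an inverse-variance confidence; the function name and the small epsilon are illustrative assumptions:

```python
import numpy as np

def estimate(particles, weights):
    # "Mean" particle: weighted sum of the particle states.
    mean = np.average(particles, weights=weights, axis=0)
    # Confidence: inverse of the weighted variance (tighter cloud => more confident).
    var = np.average((particles - mean) ** 2, weights=weights, axis=0)
    confidence = 1.0 / (np.sum(var) + 1e-12)
    return mean, confidence
```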

  30. Condensation: Estimating Target State (from Isard & Blake, 1998) [Figure: state samples, with line thickness proportional to weight, and the mean of the weighted state samples]

  31. More examples

  32. Multi-Modal Posteriors [Figure adapted from Hong, 1995: multiple peaks in the measurement likelihood]
  • When there are multiple peaks in the posterior, the MAP estimate is just the tallest one
  • This is fine when one peak dominates, but when the peaks are of comparable heights we may sometimes pick the wrong one
  • Committing to just one possibility can lead to mistracking
  • We want a wider sense of the posterior distribution, to keep track of other good candidate states (a simple mode finder is sketched below)
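A sketch of one simple mode finder for a 1-D state: histogram the weighted particles, take the tallest bin, and average the particles under it. The bin count is an illustrative assumption:

```python
import numpy as np

def tallest_peak_mean(particles, weights, bins=50):
    # Histogram the weighted particles and locate the most probable bin.
    hist, edges = np.histogram(particles, bins=bins, weights=weights)
    k = int(np.argmax(hist))
    in_peak = (particles >= edges[k]) & (particles <= edges[k + 1])
    # Mean of the particles under the tallest peak (the sought mode estimate).
    return np.average(particles[in_peak], weights=weights[in_peak])
```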

  33. MCMC-based particle filter (Khan, Balch & Dellaert, PAMI 2005): models interaction between targets (a higher-dimensional state space). CNN video.

  34. Next class: recognition