
CONDENSATION – Conditional Density Propagation for Visual Tracking


Presentation Transcript


  1. Michael Isard and Andrew Blake, IJCV 1998 CONDENSATION – Conditional Density Propagation for Visual Tracking Presented by Wen Li Department of Computer Science & Engineering Texas A&M University

  2. Outline • Problem Description • Previous Methods • CONDENSATION • Experiment • Conclusion

  3. Problem Description • The task • Track the outlines and features of foreground objects • At video frame-rate • In visual clutter

  4. Problem Description • Challenges • Elements in background clutter may mimic parts of foreground features • Efficiency

  5. Previous Methods • Directed matching • Geometric model of the object, combined with a motion model • Kalman filter

  6. Kalman Filter • Main Idea • Model the object • Prediction – predict where the object would be • Measurement – observe features that imply where the object is • Update – Combine measurement and prediction to update the object model
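
The predict/measure/update cycle can be made concrete with a minimal one-dimensional sketch; the function and parameter names below are illustrative, not from the paper.

```python
def kalman_step(x_est, P_est, z, A=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/measure/update cycle of a 1-D Kalman filter (illustrative sketch)."""
    # Predict: propagate the estimate and its variance with the motion model (A, Q)
    x_pred = A * x_est
    P_pred = A * P_est * A + Q
    # Measure/update: blend prediction and observation z via the Kalman gain K
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```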

  7. Kalman Filter • Assumption • Gaussian prior • Markov assumption

  8. Kalman Filter

  9. Kalman Filter • Essential technique • Bayes filter • Limitations • Restricted to Gaussian densities • Does not work well against cluttered backgrounds

  10. CONDENSATION • Stochastic framework + random sampling • Difference from the Kalman filter • Kalman filter – Gaussian densities only • CONDENSATION – general, possibly multi-modal densities

  11. CONDENSATION • Symbols + goal • Assumptions • Modelling • Dynamic model • Observation model • Factored sampling • CONDENSATION algorithm

  12. CONDENSATION • Symbols • xt – the state of the object at time t • Xt – the history of xt, {x1,…, xt} • zt – the set of image features at time t • Zt – the history of zt, {z1,…, zt} • Goal • Estimate the state density at time t, given the history of measurements – p(xt | Zt)

  13. CONDENSATION • Assumptions • Markov assumption • The new state is conditioned directly only on the immediately preceding state: p(xt | Xt-1) = p(xt | xt-1) • Observations zt are independent, mutually and with respect to the dynamical process: p(Zt | Xt) = ∏i p(zi | xi) • The observation density is stationary in time: p(zi | xi) = p(z | x)

  14. CONDENSATION • Dynamic model – p(xt | xt-1) • Observation model – p(zt | xt)
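
In the paper the object dynamics are a linear stochastic model learned in shape space (an autoregressive process driven by Gaussian noise); a first-order form of such a model, with A and B standing in for the learned coefficients, is

```latex
\[
  x_t = A\,x_{t-1} + B\,w_t, \qquad w_t \sim \mathcal{N}(0, I),
  \quad\text{so that}\quad
  p(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\; A\,x_{t-1},\; B B^{\top}\right).
\]
```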

  15. CONDENSATION • Propagation – applying Bayes’ rule • In clutter the resulting density cannot be evaluated in closed form
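
For reference, the propagation rule referred to here (Bayes' rule combined with the Markov assumption, as in the paper) is

```latex
\[
  p(x_t \mid Z_t) = k_t \, p(z_t \mid x_t) \, p(x_t \mid Z_{t-1}),
  \qquad
  p(x_t \mid Z_{t-1}) = \int p(x_t \mid x_{t-1}) \, p(x_{t-1} \mid Z_{t-1}) \, dx_{t-1},
\]
```

where kt is a normalization constant independent of xt. With the non-Gaussian observation densities that arise in clutter, the integral has no closed form, which motivates the sampling approach described next.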

  16. CONDENSATION • Factored Sampling • Approximate the posterior density p(x | z) in a single image • Step 1: generate a sample set {s(1),…, s(N)} from the prior p(x) • Step 2: compute a weight πn for each s(n), proportional to p(z | s(n)), and normalize so the weights sum to one • Step 3: estimate moments of x, e.g. its mean position, as the weighted average ∑n πn s(n)
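
A minimal sketch of factored sampling for a single image, assuming generic sample_from_prior and observation_density callables (these names are illustrative, not the paper's):

```python
import numpy as np

def factored_sampling(sample_from_prior, observation_density, z, N=1000):
    """Approximate the mean of p(x | z) by factored sampling (illustrative sketch)."""
    # Step 1: draw N samples s(1..N) from the prior p(x)
    samples = np.array([sample_from_prior() for _ in range(N)]).reshape(N, -1)
    # Step 2: weight each sample by its observation likelihood p(z | s(n)) and normalize
    weights = np.array([observation_density(z, s) for s in samples])
    weights /= weights.sum()
    # Step 3: the weighted mean of the samples approximates the mean of p(x | z)
    return np.sum(weights[:, None] * samples, axis=0)
```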

  17. CONDENSATION • Factored Sampling -- illustration

  18. CONDENSATION • The CONDENSATION algorithm – finally! • Initialize the sample set from the prior p(x0) • At each time step t • Select: draw a sample set {s’t(1),…, s’t(N)} from the old set {st-1(1),…, st-1(N)} with probabilities proportional to the weights πt-1(n) • Predict: propagate each selected sample through the dynamic model above to obtain the new set {st(1),…, st(N)} • Measure: compute weights πt(n) from the observed features via p(zt | st(n)), then estimate the mean position of xt as in the single-image case
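
One full select/predict/measure iteration can be sketched as follows, assuming the state is a NumPy vector and that dynamics and observation_density are supplied by the application (illustrative names, not the paper's code):

```python
import numpy as np

def condensation_step(samples, weights, z, dynamics, observation_density, rng=None):
    """One CONDENSATION iteration: select, predict, measure (illustrative sketch).

    samples : (N, d) array, sample set from time t-1
    weights : (N,) normalized weights from time t-1
    z       : image features observed at time t
    """
    rng = rng or np.random.default_rng()
    N = len(samples)
    # Select: resample indices with probability proportional to the old weights
    idx = rng.choice(N, size=N, p=weights)
    # Predict: push each selected sample through the stochastic dynamic model
    new_samples = np.array([dynamics(s) for s in samples[idx]])
    # Measure: reweight by the observation density and normalize
    new_weights = np.array([observation_density(z, s) for s in new_samples])
    new_weights /= new_weights.sum()
    # Estimate: weighted mean of the new sample set
    estimate = np.sum(new_weights[:, None] * new_samples, axis=0)
    return new_samples, new_weights, estimate
```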

  19. CONDENSATION

  20. Experiment • On Multi-Modal Distributions • The shape-space for tracking is built from a hand-drawn template of the head and shoulders

  21. N = 1000 samples, one frame every 40 ms

  22. Experiment • On Rapid Motions Through Clutter

  23. Experiment

  24. Experiment • On an Articulated Object

  25. Experiment

  26. Experiment • On a Camouflaged Object

  27. Conclusion • Good news: • Works on general distributions • Handles multi-modal densities • Robust to background clutter • Computationally efficient • Performance controllable via the sample size N • Not too difficult to implement

  28. Conclusion • Possible problems • Initialization • The “hand-drawn” shape-space
