
MODELING AND RECOGNITION OF DYNAMIC VISUAL PROCESSES






Presentation Transcript


  1. MODELING AND RECOGNITION OF DYNAMIC VISUAL PROCESSES Rene Vidal and Stefano Soatto, UC Berkeley / UCLA

  2. Overview • Motivation: modeling dynamic visual processes for recognition • Review of results for stationary processes • Dynamic textures: modeling, synthesis, classification • Human gaits: modeling with HOS, recognition • Extension to hybrid models • Jump-Markov systems • Lack of uniqueness guarantees (inference) • Analysis of the observability and identifiability of jump-linear systems

  3. Motivation: modeling dynamic visual processes • Visual data alone are not sufficient to recover the correct (Euclidean) geometry, arbitrary (non-Lambertian) photometry, and (non-linear) dynamics • In vision, assumptions on some unknowns are used to recover the others (e.g. photometric invariants to recover geometric invariants – the shape of rigid objects). These assumptions cannot be validated. • When the assumptions are violated, what kind of model can we retrieve? The REPRESENTATION depends on what TASK the model is used for.

  4. Modeling dynamic visual processes for classification • HP: the image process is (second-order) stationary (simplest case) • Images are realizations of a stochastic process • Recover a model from the data • The model should be • Generative (reproduce the statistics) • Predictive (allow extrapolation)

  5. DYNAMIC IMAGE MODELS • Filter response: spatial filters, receptive fields, ICA/PCA, wavelets, … • The filter response is modeled as the output of a dynamical system driven by an IID process
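The model equation on this slide was an image and is lost in the transcript; a standard linear-Gaussian state-space form consistent with the later slides (the symbols A, B, C, q reappear on slides 22–23; the exact noise parameterization on the original slide is assumed) is:

```latex
\begin{aligned}
x(t+1) &= A\,x(t) + B\,v(t), & v(t) &\overset{\mathrm{iid}}{\sim} \mathcal{N}(0, Q),\\
y(t)   &= C\,x(t) + w(t),    & w(t) &\overset{\mathrm{iid}}{\sim} \mathcal{N}(0, R),
\end{aligned}
```

where x(t) is the hidden state, y(t) the (filtered) image, and v(t) the IID driving process.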

  6. DYNAMIC IMAGE MODELS • The output process is second-order stationary • Stochastic realization + details (spectral factorization, innovation form)

  7. UNIQUENESS OF REPRESENTATION • “Learning” = identification • Equivalence class of models (choice of state-space basis) • Canonical realization

  8. Learning (ID) • Nonlinear (bi-linear) problem • Typically solved with E-M • (P) Global convergence not guaranteed • (P) Convergence only up to an equivalence class; cannot use for recognition • Instead, use subspace methods [Van Overschee & De Moor ’95]

  9. BASIC IDEA • (P) A static low-rank approximation does not take the dynamics into account • The state is the rank-n approximation of the data that makes the future conditionally independent of the past (canonical correlations) • Look for the “best” n-dimensional subspace of the past that predicts the future (subspace ID) • HP: the state is reconstructible in one step (WLOG in a dimensionality-reduction scenario)

  10. SUBSPACE IDENTIFICATION • Closed-form, unique, asymptotically efficient (maximum likelihood)
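The closed-form learning step can be sketched as follows. This is a minimal PCA-based variant in the style often used for dynamic textures (SVD of the data matrix, then least squares for the dynamics); the full subspace-ID algorithm of Van Overschee & De Moor referenced on slide 8 differs in details, and the function name and noise estimate here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def learn_lds(Y, n):
    """Closed-form learning of a linear dynamical system from a data
    matrix Y (p x tau), one vectorized image per column.
    Returns the measurement matrix C, dynamics A, driving-noise
    covariance Q, and the estimated state sequence X."""
    # Best rank-n factorization of the data: Y ~ C X
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n]                      # orthonormal measurement matrix
    X = np.diag(s[:n]) @ Vt[:n, :]    # state sequence, n x tau
    # Dynamics by least squares on one-step state transitions
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    # Driving-noise covariance from the state residuals
    E = X[:, 1:] - A @ X[:, :-1]
    Q = (E @ E.T) / (X.shape[1] - 1)
    return C, A, Q, X
```

The solution is unique up to the choice of state-space basis, matching the equivalence-class discussion on slide 7.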

  11. WHAT CAN WE DO WITH A MODEL? Compression (maximize mutual information)

  12. WHAT CAN WE DO WITH A MODEL? Extrapolation

  13. WHAT CAN WE DO WITH A MODEL? Synthesis • Learning ≈ 3 min in Matlab • Synthesis is instantaneous
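The “instantaneous” synthesis on this slide is just forward simulation of the learned model; a minimal sketch, assuming a measurement matrix C, dynamics A, driving-noise covariance Q, and initial state x0 from a learning step (the function name and regularization are illustrative):

```python
import numpy as np

def synthesize(C, A, Q, x0, tau, seed=0):
    """Simulate tau frames from a learned LDS: drive the state with
    IID Gaussian noise of covariance Q and map states to frames
    through C."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    L = np.linalg.cholesky(Q + 1e-10 * np.eye(n))  # noise factor (regularized)
    x = np.asarray(x0, dtype=float)
    frames = []
    for _ in range(tau):
        frames.append(C @ x)                       # render the current frame
        x = A @ x + L @ rng.standard_normal(n)     # propagate the state
    return np.column_stack(frames)
```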

  14. RECOGNITION ? • Given samples of “water”, “foliage”, “steam” • Given new sample, classify it • What is the “average” model? • Can “uncertainty” be inferred from data?

  15. RECOGNITION • Given samples of “water”, “foliage”, “steam” • Given a new sample: • What is the “average” model? • Probability distribution on the Stiefel manifold • What is the “distance” between two models?

  16. Langevin distributions (also Gibbs, Fisher) [See also Jupp & Mardia ’00]

  17. Langevin distributions (also Gibbs, Fisher) • Likelihood ratios: compute from data (ML)
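The density on these slides was an image; the matrix Langevin (von Mises–Fisher) family on the Stiefel manifold, as treated by Jupp & Mardia, has the form below (which exact member of the Gibbs/Fisher family the slide showed is assumed; the normalizing constant c(F) involves a hypergeometric function of matrix argument):

```latex
p(X) = \frac{1}{c(F)} \exp\!\big(\operatorname{tr}(F^{\top} X)\big),
\qquad X \in \mathrm{St}(n,p) = \{X \in \mathbb{R}^{p \times n} : X^{\top} X = I_n\},
```

with the mode and concentration encoded in the parameter matrix F.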

  18. DISTRIBUTION-INDEPENDENT DECISIONS • Compute distances between models: length of geodesic connecting them

  19. DISTRIBUTION-INDEPENDENT DECISIONS • Compute distances between models: length of geodesic connecting them • Canonical metric • Geodesic trajectories

  20. COMPARING MODELS: how to measure distances, compute statistics/likelihood ratios, and describe uncertainty? [Martin ’00, De Cock & De Moor ’00] • Also, robust control techniques [Mazzaro, Camps, Sznaier, Bissacco, Soatto 2002]
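The distances cited here compare two models through the principal angles between the column spans of their extended observability matrices. A sketch under that definition, using a finite-horizon (m block rows) approximation of the Martin / De Cock–De Moor metric (the function names and the horizon m are illustrative choices, not the papers' exact algorithm):

```python
import numpy as np

def observability(A, C, m):
    """Extended observability matrix [C; CA; ...; CA^(m-1)]."""
    blocks, M = [], np.eye(A.shape[0])
    for _ in range(m):
        blocks.append(C @ M)
        M = A @ M
    return np.vstack(blocks)

def martin_distance(A1, C1, A2, C2, m=20):
    """Distance between two LDS models from the principal angles
    theta_i between the spans of their observability matrices:
    d = sqrt(-2 * sum_i log cos(theta_i))."""
    Q1, _ = np.linalg.qr(observability(A1, C1, m))
    Q2, _ = np.linalg.qr(observability(A2, C2, m))
    cosines = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    cosines = np.clip(cosines, 1e-12, 1.0)        # guard the log
    return np.sqrt(-2.0 * np.sum(np.log(cosines)))
```

By construction the distance is invariant to the state-space basis, so it is well defined on the equivalence classes of slide 7.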

  21. DISTRIBUTION-INDEPENDENT DECISIONS

  22. Walking [block diagram: gait data → learn model (A, B, C, q) → synthesis from x(0) = x0]

  23. Running [block diagram: gait data → learn model (A, B, C, q) → synthesis from x(0) = x0]

  24. Limping [side-by-side comparison: data vs. synthesis]

  25. EXTENSIONS • Nonlinear dynamic textures • Higher-order statistics (dynam-ICA) • First step: Jump-linear systems

  26. Jump-linear systems
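The jump-linear model on this slide was an image; in the notation of the earlier slides, with a discrete mode λ(t) selecting among N linear models (jump-Markov when λ(t) evolves as a Markov chain, as on slide 2; the exact parameterization on the original slide is assumed):

```latex
\begin{aligned}
x(t+1) &= A_{\lambda(t)}\,x(t) + B_{\lambda(t)}\,v(t),\\
y(t)   &= C_{\lambda(t)}\,x(t) + w(t),
\end{aligned}
\qquad \lambda(t) \in \{1, \dots, N\}.
```

The observability and identifiability questions of slide 2 ask when the modes λ(t) and the per-mode parameters can be uniquely recovered from the output y(t).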

  27. http://vision.ucla.edu
