
Computer Vision: Vision and Modeling

Computer Vision: Vision and Modeling. Lucas-Kanade Extensions; Support Maps / Layers: Robust Norm, Layered Motion, Background Subtraction, Color Layers; Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork Chap. 1-5).


Presentation Transcript


  1. Computer Vision: Vision and Modeling

  2. Computer Vision: Vision and Modeling • Lucas-Kanade Extensions • Support Maps / Layers: • Robust Norm, Layered Motion, Background Subtraction, Color Layers • Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5) • - Bayesian Decision Theory • - Density Estimation

  3. A Different View of Lucas-Kanade

  E(v) = \sum_i \left( \nabla I(i)^\top v - I_t(i) \right)^2
       = \left\| \begin{bmatrix} \nabla I(1)^\top \\ \nabla I(2)^\top \\ \vdots \\ \nabla I(n)^\top \end{bmatrix} v - \begin{bmatrix} I_t(1) \\ I_t(2) \\ \vdots \\ I_t(n) \end{bmatrix} \right\|^2

  High gradient has higher weight → white board
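
As a concrete illustration of this least-squares view, here is a minimal numpy sketch (the function and variable names are mine, not from the slides). It stacks the per-pixel gradient rows into a matrix A and the temporal derivatives into a vector b, then solves A v ≈ b; rows with large gradients dominate the normal equations, which is exactly the "high gradient has higher weight" observation. The sign convention follows the slide, ∇I·v − I_t.

    import numpy as np

    def lucas_kanade_translation(Ix, Iy, It):
        # Ix, Iy, It: spatial and temporal derivative images (same shape).
        # Solves min_v sum_i (grad I(i)^T v - I_t(i))^2 for a single 2D flow v.
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # n x 2 gradient rows
        b = It.ravel()                                   # temporal derivatives
        # Least squares: high-gradient rows dominate A^T A, hence higher weight.
        v, *_ = np.linalg.lstsq(A, b, rcond=None)
        return v                                         # (vx, vy)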

  4. Constrained Optimization. Constrain V = (v_1, …, v_n) to lie in a subspace:

  E(V) = \left\| \begin{bmatrix} \nabla I(1)^\top v_1 - I_t(1) \\ \nabla I(2)^\top v_2 - I_t(2) \\ \vdots \\ \nabla I(n)^\top v_n - I_t(n) \end{bmatrix} \right\|^2

  5. Constraints = Subspaces. Constrain V to a subspace and minimize E(V). Analytically derived: affine / twist / exponential map. Learned: linear / non-linear subspaces.

  6. Motion Constraints • Optical Flow: local constraints • Region Layers: rigid/affine constraints • Articulated: kinematic chain constraints • Nonrigid: implicit / learned constraints

  7. Constrained Function Minimization. Constrain V via a parametric model V = M(q) and minimize over q:

  E(V) = \sum_i \left( \nabla I(i)^\top v_i - I_t(i) \right)^2

  8. 2D Translation: Lucas-Kanade. 2D constraint: every pixel has the same flow, v_i = (dx, dy)^\top, i.e. q = (dx, dy):

  E(V) = \sum_i \left( \nabla I(i)^\top \begin{bmatrix} dx \\ dy \end{bmatrix} - I_t(i) \right)^2

  9. 2D Affine: Bergen et al., Shi-Tomasi. 6D constraint:

  v_i = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} dx \\ dy \end{bmatrix}, \quad E(V) = \sum_i \left( \nabla I(i)^\top v_i - I_t(i) \right)^2
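
A sketch of this 6-DOF model in numpy (names are illustrative): because v_i is linear in q = (a1, a2, a3, a4, dx, dy), each pixel still contributes one linear equation, and the affine parameters come out of a single least-squares solve, just as in the translational case.

    import numpy as np

    def solve_affine_flow(Ix, Iy, It, xs, ys):
        # All inputs are flat arrays over the n pixels of the region.
        # grad I(i)^T v_i is linear in q = (a1, a2, a3, a4, dx, dy):
        #   Ix*(a1*x + a2*y + dx) + Iy*(a3*x + a4*y + dy)
        A = np.stack([Ix * xs, Ix * ys, Iy * xs, Iy * ys, Ix, Iy], axis=1)
        q, *_ = np.linalg.lstsq(A, It, rcond=None)
        return q   # affine parameters minimizing the SSD error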

  10. Affine Extension • Affine Motion Model: • 2D Translation • 2D Rotation • Scale in X / Y • Shear. Matlab demo →

  11. Affine Extension. Affine motion model → Lucas-Kanade: Matlab demo →

  12. 2D Affine: Bergen et al., Shi-Tomasi. 6D constraint on V (affine subspace).

  13. K-DOF Models. K-DOF constraint V = M(q), q ∈ R^K:

  E(V) = \sum_i \left( \nabla I(i)^\top v_i - I_t(i) \right)^2

  14. Quadratic Error Norm (SSD)??? With V = M(q):

  E(V) = \sum_i \left( \nabla I(i)^\top v_i - I_t(i) \right)^2

  → white board (what about outliers?)

  15. Support Maps / Layers • L2 norm vs robust norm • Dangers of least-squares fitting: [plot: L2 norm as a function of the residual Δ]

  16. Support Maps / Layers • L2 norm vs robust norm • Dangers of least-squares fitting: [plots: L2 norm vs robust norm as functions of the residual Δ]

  17. Support Maps / Layers • Robust norm: good for outliers • requires nonlinear optimization [plot: robust norm as a function of the residual Δ]

  18. Support Maps / Layers • Iterative technique: add a weight w_i to each pixel's equation (white board)

  19. Support Maps / Layers • How to compute the weights? • → from the previous iteration: how well does the warped image G match F? • → probabilistic distance, Gaussian of the residual r_i: w_i = exp(−r_i² / (2σ²))
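
Putting slides 18-19 together, here is a hedged sketch of the resulting iteratively reweighted least-squares loop (sigma, the iteration count, and the exact weight schedule are assumptions in the spirit of the slides): each pass down-weights pixels whose residuals the current model explains poorly, so outliers lose influence.

    import numpy as np

    def irls(A, b, sigma=1.0, iters=10):
        # A: n x k stacked gradient rows, b: temporal derivatives.
        v = np.linalg.lstsq(A, b, rcond=None)[0]         # unweighted start
        for _ in range(iters):
            r = A @ v - b                                # current residuals
            w = np.exp(-r**2 / (2 * sigma**2))           # Gaussian weights
            sw = np.sqrt(w)                              # weighted least squares
            v = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
        return v, w                                      # flow + support map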

  20. Error Norms / Optimization Techniques
  • SSD: Lucas-Kanade (1981), Newton-Raphson
  • SSD: Bergen et al. (1992), Coarse-to-Fine
  • SSD: Shi-Tomasi (1994), Good Features
  • Robust Norm: Jepson-Black (1993), EM
  • Robust Norm: Ayer-Sawhney (1995), EM + MRF
  • MAP: Weiss-Adelson (1996), EM + MRF
  • ML/MAP: Bregler-Malik (1998), Twists / EM
  • ML/MAP: Irani (+Anandan) (2000), SVD

  21. Computer Vision: Vision and Modeling • Lucas-Kanade Extensions • Support Maps / Layers: • Robust Norm, Layered Motion, Background Subtraction, Color Layers • Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5) • - Bayesian Decision Theory • - Density Estimation

  22. Support Maps / Layers • Black-Jepson-95

  23. Support Maps / Layers • More General: Layered Motion (Jepson/Black, Weiss/Adelson, …)

  24. Support Maps / Layers • Special cases of layered motion: • - Background subtraction • - Outlier rejection (== robust norm) • - Simplest case: each layer has uniform color
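
For the background-subtraction special case, a minimal sketch (the threshold tau and grayscale float frames are assumptions): the layer's support map is simply the set of pixels that deviate from a background model.

    import numpy as np

    def foreground_support(frame, background, tau=25.0):
        # frame, background: grayscale float images; tau: deviation threshold.
        # Returns a binary support map: 1 = foreground layer, 0 = background.
        return (np.abs(frame - background) > tau).astype(np.uint8)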

  25. Support Maps / Layers • Color Layers: P(skin | F(x,y))
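
One way to realize such a color layer, sketched under the assumption of Gaussian color models for skin and non-skin (the means, covariances, and prior below are placeholders, not from the slides): Bayes' rule applied per pixel yields the support map P(skin | F(x,y)).

    import numpy as np

    def skin_support(pixels, mu_s, cov_s, mu_n, cov_n, prior_skin=0.1):
        # pixels: n x 3 colors; (mu_s, cov_s) / (mu_n, cov_n): hypothetical
        # Gaussian models for skin / non-skin color; prior_skin: P(skin).
        def gauss(x, mu, cov):
            d = x - mu
            e = np.einsum('ni,ij,nj->n', d, np.linalg.inv(cov), d)
            z = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
            return np.exp(-0.5 * e) / z
        p_s = gauss(pixels, mu_s, cov_s) * prior_skin
        p_n = gauss(pixels, mu_n, cov_n) * (1 - prior_skin)
        return p_s / (p_s + p_n)      # posterior P(skin | color) per pixel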

  26. Computer Vision: Vision and Modeling • Lucas-Kanade Extensions • Support Maps / Layers: • Robust Norm, Layered Motion, Background Subtraction, Color Layers • Statistical Models (Duda+Hart+Stork: Chap. 1-5) • - Bayesian Decision Theory • - Density Estimation

  27. Statistical Models / Probability Theory • Statistical Models: represent uncertainty and variability • Probability Theory: the proper mechanism for uncertainty • Basic facts → white board

  28. General Performance Criteria: Optimal Bayes, with applications to classification

  29. Bayes Decision Theory. Example: character recognition. Goal: classify a new character so as to minimize the probability of misclassification

  30. Bayes Decision Theory • 1st Concept: Priors. P(a) = 0.75, P(b) = 0.25. Sample: a a b a b a a b a b a a a a b a a b a a b a a a a b b a b a b a a b a a

  31. Bayes Decision Theory • 2nd Concept: Conditional Probability [figure: class-conditional histograms over x = # black pixels]
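
These class-conditionals can be estimated from labeled examples; a small sketch under the assumption that the feature x is the black-pixel count of each character image:

    import numpy as np

    def class_conditional(features, labels, cls, bins=20):
        # features: black-pixel count per example; labels: class per example.
        # Returns a normalized histogram approximating P(x | class).
        hist, edges = np.histogram(features[labels == cls], bins=bins)
        return hist / hist.sum(), edges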

  32. Bayes Decision Theory • Example: X=7

  33. Bayes Decision Theory • Example: X=8

  34. Bayes Decision Theory • Example: Well… P(a)=0.75 P(b)=0.25 X=8

  35. Bayes Decision Theory • Example: P(a)=0.75 P(b)=0.25 X=9

  36. Bayes Decision Theory • Bayes Theorem:

  37. Bayes Decision Theory • Bayes Theorem:

  38. Bayes Decision Theory • Bayes Theorem: posterior = (likelihood × prior) / normalization factor, i.e. P(C_k | x) = P(x | C_k) P(C_k) / P(x)
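
A small numeric sketch of the character example (the class-conditional values for X = 8 are hypothetical; the priors are from the slides):

    # Posterior = likelihood * prior / normalization factor.
    p_x_given_a, p_x_given_b = 0.10, 0.30   # hypothetical P(X=8 | class)
    p_a, p_b = 0.75, 0.25                   # priors from the slides

    p_x = p_x_given_a * p_a + p_x_given_b * p_b   # normalization factor P(X=8)
    print(p_x_given_a * p_a / p_x)   # P(a | X=8) = 0.5
    print(p_x_given_b * p_b / p_x)   # P(b | X=8) = 0.5 -> the ambiguous case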

  39. Bayes Decision Theory • Example:

  40. Bayes Decision Theory • Example:

  41. Bayes Decision Theory • Example: X > 8 → class b

  42. Bayes Decision Theory Goal: classify a new character so as to minimize the probability of misclassification. Decision boundaries:

  43. Bayes Decision Theory Goal: classify a new character so as to minimize the probability of misclassification. Decision boundaries:

  44. Bayes Decision Theory. Decision regions: R1, R2, R3 [figure]

  45. Bayes Decision Theory Goal: minimize probability of misclassification

  46. Bayes Decision Theory Goal: minimize probability of misclassification

  47. Bayes Decision Theory Goal: minimize probability of misclassification

  48. Bayes Decision Theory Goal: minimize probability of misclassification

  49. Bayes Decision Theory Discriminant functions: • class membership depends only on the relative sizes of the g_k • reformulate the classification process in terms of discriminant functions g_k(x): x is assigned to C_k if g_k(x) > g_j(x) for all j ≠ k
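
A sketch of classification with discriminant functions (the 1D Gaussian class-conditionals are an assumption for illustration): compute g_k(x) = ln p(x | C_k) + ln P(C_k) for each class and pick the largest; any monotone transform of the g_k yields the same decision.

    import numpy as np

    def classify(x, mus, sigmas, priors):
        # g_k(x) = ln p(x | C_k) + ln P(C_k), dropping the shared
        # constant -ln sqrt(2*pi), which cannot change the argmax.
        g = [-0.5 * ((x - m) / s) ** 2 - np.log(s) + np.log(p)
             for m, s, p in zip(mus, sigmas, priors)]
        return int(np.argmax(g))   # index k of the assigned class C_k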

  50. Bayes Decision Theory Discriminant function examples:
