
Visual Tracking


Presentation Transcript


  1. Visual Tracking CMPUT 613/498

  2. Why Visual Tracking? • Related technology is cheap! • Computers are fast enough! • Applications abound: motion control, HCI, measurement

  3. Applications: Watching a Moving Target • Camera + computer can determine how things move in a scene over time. Uses: • Security: e.g. monitoring people moving in a subway station or store • Measurement: speed, alerts on colliding trajectories, etc.

  4. Applications: Human-Computer Interfaces • Camera + computer tracks the motions of a human user and interprets them in an on-line interaction. • Can interact with menus, buttons and e.g. drawing programs, using hand movements as mouse movements and gestures as clicks • Furthermore, can interpret physical interactions

  5. Applications: Human-Machine Interfaces • Camera + computer tracks the motions of a human user and interprets them; a machine/robot carries out the task. • Remote manipulation • Service robotics for the handicapped and elderly

  6. Applications: Human-Human Interfaces • Camera + computer tracks the motions and expressions of a human user; these are interpreted, coded and transmitted for rendering at a remote location • Ultra-low-bandwidth video communication • Handheld video cell phone

  7. Image Streams -> Computer [Block diagram: Camera -> analog signal -> Digitizer -> Image Processor -> digital signal -> Host Computer -> Display]

  8. A Modern Digital Camera (FireWire) [Block diagram: Camera -> digital signal over IEEE 1394 -> Host Computer -> Display (X window)]

  9. HW Bandwidth Requirements • Binary: 1 bit * 640x480 * 30 = 9.2 Mbits/second • Grey: 1 byte * 640x480 * 30 = 9.2 Mbytes/second • Color: 3 bytes * 640x480 * 30 = 27.6 Mbytes/second (actually about 37 Mbytes/sec) • Typical operation: an 8x8 convolution on a grey image costs 64 multiplies + adds per pixel -> roughly 600 Mflops • Today's PCs are just getting to the point where they can process images at frame rate
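
A quick check of this arithmetic in Python (the frame size and rate are the slide's own assumptions):

```python
# Bandwidth/compute arithmetic for 640x480 video at 30 Hz.
W, H, FPS = 640, 480, 30
pix_per_sec = W * H * FPS                               # 9.216 Mpixels/s

print(f"binary: {pix_per_sec / 1e6:.1f} Mbits/s")       # 1 bit per pixel
print(f"grey:   {pix_per_sec / 1e6:.1f} Mbytes/s")      # 1 byte per pixel
print(f"color:  {3 * pix_per_sec / 1e6:.1f} Mbytes/s")  # 3 bytes per pixel

# 8x8 convolution = 64 multiply-adds per pixel.
print(f"8x8 convolution: {64 * pix_per_sec / 1e6:.0f} Mflops")
```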

  10. Characteristics of Video Processing • abs(Image 1 - Image 2) = ? Note: almost all pixels change! • Constancy: the physical scene is the same. How do we make use of this?
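
A minimal sketch of this observation, using synthetic frames as stand-ins for real video: a one-pixel camera shift plus sensor noise already makes nearly every pixel of the difference image nonzero, even though the underlying scene is constant.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(480, 640))            # the constant physical scene
frame1 = scene + rng.normal(0, 2, scene.shape)          # frame = scene + sensor noise
frame2 = np.roll(scene, 1, axis=1) + rng.normal(0, 2, scene.shape)  # 1-pixel shift

diff = np.abs(frame1 - frame2)
print(f"{np.mean(diff > 1) * 100:.0f}% of pixels changed by more than 1 grey level")
```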

  11. Fundamental types of video processing: "Visual motion detection" • Relating two adjacent frames (small differences)

  12. Fundamental types of video processing: "Visual Tracking" / Stabilization • Globally relating all frames (large differences)

  13. Types of tracking • Point tracking: extract the point (pixel location) of a particular image feature in each image. • Segmentation-based tracking: define an identifying region property (e.g. color, texture statistics) and repeatedly extract this region in each image. • Stabilization-based tracking: formulate image geometry and appearance equations and use these to acquire image transform parameters for each image.

  14. Point tracking • Simplest technique; already commercialized in motion capture systems. • Features are commonly LEDs (visible or IR) or special markers (reflective, patterned). • Detection: e.g. pick the brightest pixel(s), or cross-correlate the image to find the best match for a known pattern.
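
A minimal sketch of both detectors on a grey-level numpy image; a real system would use an optimized (e.g. FFT-based) correlation rather than the brute-force loop shown here:

```python
import numpy as np

def brightest_pixel(image):
    """Detect a point feature (e.g. an LED) as the brightest pixel."""
    return np.unravel_index(np.argmax(image), image.shape)   # (row, col)

def best_match(image, template):
    """Locate a known marker pattern by brute-force cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            score = np.sum((w - w.mean()) * t)               # zero-mean correlation
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

# Example: a bright 3x3 marker at row 10, column 20.
img = np.zeros((32, 48)); img[10:13, 20:23] = 1.0
print(brightest_pixel(img))                  # (10, 20)
print(best_match(img, img[9:14, 19:24]))     # (9, 19): top-left of the 5x5 template
```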

  15. Region Tracking (Segmentation-Based) • Select a "cue": foreground enhancement, background subtraction • Segment: threshold, then "clean up" • Compute region geometry: centroid (first moment), orientation (second moment), scale (second moment)
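
A sketch of the geometry step, assuming the cue has already produced a binary mask; the centroid comes from the first moments, and orientation/scale from the eigen-decomposition of the second moments:

```python
import numpy as np

def region_geometry(mask):
    """Centroid, orientation and scale of a binary region from its moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                    # first moments: centroid
    dx, dy = xs - cx, ys - cy
    cov = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                    [np.mean(dx * dy), np.mean(dy * dy)]])   # second moments
    evals, evecs = np.linalg.eigh(cov)
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])   # major-axis direction
    scale = np.sqrt(evals)                           # minor/major axis lengths
    return (cx, cy), angle, scale

# Segment by threshold, then compute region geometry.
image = np.zeros((100, 100)); image[40:60, 20:80] = 200.0
print(region_geometry(image > 128))
```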

  16. Regions • Intrinsic coloration, viewed in color space • Homogeneous region: coloration is roughly constant • Textured region: has significant intensity gradients horizontally and vertically • Contour: (local) rapid change in contrast

  17. Homogeneous Color Region: Photometry

  18. Color Representation • The color representation is built from the irradiance of region R • DRM [Klinker et al., 1990]: if P is Lambertian, the color distribution has a matte line and a highlight line • User selects matte pixels in R • PCA fits an ellipsoid to the matte cluster • Color similarity is defined by Mahalanobis distance
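
A sketch of the similarity test, using the sample mean and covariance of the selected matte pixels in place of the PCA-fitted ellipsoid (the ellipsoid is exactly a level set of this Mahalanobis distance):

```python
import numpy as np

def fit_matte_cluster(matte_pixels):
    """Fit mean and inverse covariance to user-selected matte pixels (N x 3 RGB)."""
    mean = matte_pixels.mean(axis=0)
    return mean, np.linalg.inv(np.cov(matte_pixels, rowvar=False))

def mahalanobis(pixels, mean, cov_inv):
    """Color similarity of N x 3 pixels to the matte cluster."""
    d = pixels - mean
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Example: a reddish matte cluster; the red test pixel is close, the green one far.
rng = np.random.default_rng(1)
matte = rng.normal([180, 60, 50], [8, 4, 4], size=(500, 3))
mean, cov_inv = fit_matte_cluster(matte)
print(mahalanobis(np.array([[182., 58., 52.], [40., 200., 40.]]), mean, cov_inv))
```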

  19. Homogeneous Region: Photometry [Figure: sample pixels in color space with the PCA-fitted ellipsoid]

  20. Color Extension (Contd.): Color Histograms (Swain et al.) • H_i = number of pixels in class i • Histograms are vectors of bin counts • Given histograms H and G, compare by intersection: ∩(H, G) = Σ_i min(H_i, G_i) / Σ_i G_i • Dense, stable signature • Relies on segmentation • Relative color and feature locations are lost • Affine transformations preserve area ratios of planar objects
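
A sketch of histogram intersection as the comparison measure (the match score from Swain & Ballard's color indexing, which this slide appears to reference); bin count and range are illustrative:

```python
import numpy as np

def intersection(H, G):
    """Fraction of model histogram G that is found in image histogram H."""
    return np.sum(np.minimum(H, G)) / np.sum(G)

# Example with 8-bin grey histograms (color histograms work the same way).
rng = np.random.default_rng(2)
img1 = rng.integers(0, 256, size=10_000)
img2 = np.clip(img1 + rng.normal(0, 10, size=img1.shape), 0, 255)
H, _ = np.histogram(img1, bins=8, range=(0, 256))
G, _ = np.histogram(img2, bins=8, range=(0, 256))
print(f"similarity: {intersection(H, G):.2f}")          # near 1 for similar content
```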

  21. Principles of Stabilization Tracking • Variability model: I_t = g(I_0, p_t) • Incremental estimation: from I_0, I_{t+1} and p_t, compute Δp_{t+1} such that || I_0 - g(I_{t+1}, p_{t+1}) ||^2 -> min • Visual Tracking = Visual Stabilization

  22. Tracking Cycle • Prediction: prior states predict new appearance • Image warping: generate a "normalized view" • Model inverse: compute error from nominal • State integration: apply correction to state [Block diagram: Reference -> Image Warping -> Model Inverse -> Δp, fed back into the state p]

  23. Stabilization Tracking: Planar Case • Planar object + linear camera => affine motion model: u'_i = A u_i + d • Subtracting raw frames gives nonsense, but subtracting after warping leaves only a small variation: I_t = g(p_t, I_0) [Figure: frame differences before and after warping]

  24. State information • Position • Orientation • Scale • Aspect ratio/shear • Pose (SO(3); subgroup of Affine x R(2)) • Kinematic configuration (chains in SO(2) or SO(3)) • Non-rigid information (eigenmodes) • Photometric information (parametric illumination models) [Diagram labels: SO(2) + scale; Affine (SL(2) x R(2))]

  25. Mathematical Formulation • Define a "warped image": f(p, x) = x' (warping function); I(x, t) (image at location x at time t); g(p, I_t) = (I(f(p, x_1), t), I(f(p, x_2), t), ..., I(f(p, x_N), t))' • Define the Jacobian of the warping function: M(p, t) = ∂g(p, I_t)/∂p • Consider the "incremental least squares" formulation: O(Δp, t+Δt) = || g(p_t + Δp, I_{t+Δt}) - g(0, I_0) ||^2

  26. Stabilization Formulation • Model: I_0 = g(p_t, I_t) (image I, variation model g, parameters p) • Local linearization: ΔI = M(p_t, I_t) Δp • Define an error: e_{t+1} = g(p_t, I_{t+1}) - I_0 • Close the loop: p_{t+1} = p_t - (M^T M)^{-1} M^T e_{t+1}, where M = M(p_t, I_t) • M is N x m and is time-varying!
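
A minimal sketch of this loop for the simplest warp, pure translation, so the two columns of M are just the image gradients; nearest-neighbour sampling stands in for proper interpolation, and the test image is synthetic:

```python
import numpy as np

def track_translation(I0, I, p, n_iter=10):
    """Stabilization tracking for translation p = (x, y):
    iterate p <- p - (M^T M)^-1 M^T e with e = g(p, I) - I0."""
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):
        xw = np.clip((xs + p[0]).astype(int), 0, I.shape[1] - 1)
        yw = np.clip((ys + p[1]).astype(int), 0, I.shape[0] - 1)
        warped = I[yw, xw]                               # g(p, I): "normalized view"
        e = (warped - I0).ravel()                        # error image
        gy, gx = np.gradient(warped)
        M = np.stack([gx.ravel(), gy.ravel()], axis=1)   # N x 2 Jacobian
        p = p - np.linalg.lstsq(M, e, rcond=None)[0]     # (M^T M)^-1 M^T e
    return p

# Example: the template sits at (x, y) = (40, 30); start the search nearby.
yy, xx = np.mgrid[0:100, 0:100]
frame = np.sin(xx / 6.0) + np.cos(yy / 8.0)              # smooth synthetic scene
I0 = frame[30:60, 40:70]
print(track_translation(I0, frame, np.array([37.0, 33.0])))   # -> approx [40, 30]
```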

  27. A Factoring Result (Hager & Belhumeur 1998) • Suppose I = g(I_t, p) at pixel location u is defined as I(u) = I(f(p, u), t), and the warp Jacobian factors as ∂f/∂p (p, u) = L(u) S(p) • Then M(p, I_t) = M_0 S(p), where M_0 = M(0, I_0)

  28. Stabilization Revisited • In general, solve [S^T G S] Δp = S^T M_0^T e_{t+1}, where G = M_0^T M_0 is constant! Then p_{t+1} = p_t - Δp • If S is invertible, this reduces to p_{t+1} = p_t - S^{-1} G e_{t+1}, where now G = (M_0^T M_0)^{-1} M_0^T • Sizes: G is m x N, e is N x 1, S is m x m => only O(mN) operations
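
A sketch of why the factoring pays off: the reference-dependent pseudo-inverse is computed once offline, and per frame only the small m x m matrix S(p) has to be handled. M_0 and S_p are assumed given (e.g. built from the motion basis of slide 29):

```python
import numpy as np

def precompute_gamma(M0):
    """Offline: G = (M0^T M0)^-1 M0^T depends only on the reference image."""
    return np.linalg.pinv(M0)                    # m x N

def stabilization_step(p, S_p, gamma, e):
    """Per frame: p <- p - S(p)^-1 G e, which is only O(mN) work."""
    return p - np.linalg.solve(S_p, gamma @ e)
```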

  29. On the Structure of M • Planar object -> affine motion model: u'_i = A u_i + d [Figure: the columns of M correspond to X translation, Y translation, rotation, scale, aspect and shear]

  30. Putting it all together

  31. Video

  32. Tracking 3D Objects • What is the set of all images of a 3D object? • Challenges: motion, illumination, occlusion

  33. 3D Case: Local Geometry • Non-planar object: u'_i = A u_i + b z_i + d [Figure: motion parameters x, y, rot z, scale, aspect, rot x, rot y]

  34. 3D Case: Illumination Modeling • Illumination basis: I_t = B a + I_0 • Observations: Lambertian object, single source, no cast shadows => a 3D image space; with shadows => a cone • Empirical evidence suggests 5 to 6 basis images suffice

  35. Putting It Together • Variability model: I_0 = g(p_t, I_t) + B a (variation model g, parameters p, basis B) • Local linearization: ΔI = M(p) Δp + B a • Combine motion and illumination: e_{t+1} = g(p_t, I_{t+1}) - I_0 - B a_t and p_{t+1} = p_t + (J^T J)^{-1} J^T e_{t+1}, where J = [M, B] • Or, project into the illumination kernel: p_{t+1} = p_t + (M^T N M)^{-1} M^T N e_{t+1}, where N = I - B (B^T B)^{-1} B^T
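
A sketch of the illumination-kernel variant, applying N without ever forming the N x N projector explicitly; B is assumed to hold the 5-6 illumination basis images of slide 34, flattened into columns:

```python
import numpy as np

def project_out(B, X):
    """Apply N = I - B (B^T B)^-1 B^T to X without building N explicitly."""
    return X - B @ np.linalg.solve(B.T @ B, B.T @ X)

def motion_update(p, M, B, e):
    """p_{t+1} = p_t + (M^T N M)^-1 M^T N e_{t+1}: the motion update never
    chases error that the illumination basis can explain."""
    NM = project_out(B, M)                       # N M, shape N x m
    Ne = project_out(B, e)                       # N e, shape N
    return p + np.linalg.solve(M.T @ NM, M.T @ Ne)
```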

  36. Handling Occlusion • Robust error metric: Δp = arg min ρ(e) • Huber & Dutter 1981: modified IRLS estimation, Δp_{k+1} = (M^T W_k M)^{-1} M^T W_k e with weights W_k = W(Δp_k) recomputed from the current residuals • Temporal propagation of weighting: W_{t+1} = F(W_t)
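
A sketch of the robust update with Huber weights; the MAD-based residual scale is an added detail not on the slide, included so the weighting is invariant to the error magnitude:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Quadratic for small residuals, linear beyond k: large (occluded)
    residuals get weight k/|r| instead of 1."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def irls_step(M, e, n_iter=5):
    """Modified IRLS: re-solve the weighted least squares problem with
    weights recomputed from the current residuals."""
    dp = np.zeros(M.shape[1])
    for _ in range(n_iter):
        r = e - M @ dp
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12    # robust scale (MAD)
        w = huber_weights(r / scale)
        Mw = M * w[:, None]                              # W M with W = diag(w)
        dp = np.linalg.solve(Mw.T @ M, Mw.T @ e)         # (M^T W M)^-1 M^T W e
    return dp, w
```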

  37. Handling Occlusion [Block diagram: the tracking cycle of slide 22 with a Weighting stage S inserted between Image Warping and the Model Inverse]

  38. Pose

  39. Related Work: Modalities • Color • Histogram [Birchfield, 1998; Bradski, 1998] • Volume [Wren et al., 1995; Bregler, 1997; Darrell, 1998] • Shape • Deformable curve [Kass et al., 1988] • Template [Blake et al., 1993; Birchfield, 1998] • Example-based [Cootes et al., 1993; Baumberg & Hogg, 1994] • Appearance • Correlation [Lucas & Kanade, 1981; Shi & Tomasi, 1994] • Photometric variation [Hager & Belhumeur, 1998] • Outliers [Black et al., 1998; Hager & Belhumeur, 1998] • Nonrigidity [Black et al., 1998; Sclaroff & Isidoro, 1998]

  40. Edges: An Illustrative Example • Classical approach to feature-based vision: feature extraction (e.g. Canny edge detector); feature grouping (edgels -> edges); feature characterization (length, orientation, magnitude); feature matching (a combinatorial problem!) • XVision approach: develop a "canonical" edge with fixed state - a vertical step edge with position, orientation, strength; assume prior information and use warping to acquire the candidate region; optimize the detection algorithm to compute from/to state

  41. Edges (XVision Approach) • Rotational warp • Apply a derivative of a triangle (IR filter) across rows • Sum and interpolate to get position and orientation
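
A sketch of this pipeline on an already-rectified window (the rotational warp is omitted), with an illustrative derivative-of-triangle kernel and quadratic interpolation of the response peak:

```python
import numpy as np

def localize_edge(window):
    """Sub-pixel column of a near-vertical step edge in a rectified window."""
    kernel = np.array([1., 2., 3., 0., -3., -2., -1.])   # derivative of a triangle
    resp = np.array([np.convolve(row, kernel, mode='valid') for row in window])
    col_sum = resp.sum(axis=0)                           # sum responses down the rows
    i = int(np.argmax(np.abs(col_sum)))
    if 0 < i < len(col_sum) - 1:                         # quadratic peak interpolation
        y0, y1, y2 = col_sum[i - 1], col_sum[i], col_sum[i + 1]
        i = i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return i + len(kernel) // 2                          # back to window coordinates

# Synthetic step edge between columns 14 and 15.
window = np.zeros((9, 30)); window[:, 15:] = 1.0
print(localize_edge(window))                             # -> 14.5
```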

  42. Snakes • Contour C: continuous curve on a smooth surface in R^3 • Snake S: projection of C to the image • Curve types: • Edge between regions on the surface with contrasting properties • Line that contrasts with surface properties on both sides • Silhouette of the surface against a contrasting background • General algorithm: • Perform edge detection • Fit a parametric or non-parametric curve to the data

  43. Snakes: Basic Approach • Parameterize a closed contour, e.g. as a B-spline curve r(s) with control vector q • Given a predicted state q, search radially for edges • Solve a least squares problem for the new state

  44. Snake: Details • Constrained deformations of B-spline templates [Blake et al., 1993] • z_i: nearest edge at point r(s_i) along the curve, with normal n(s_i) • X' and S': existing state and weight (covariance) estimate • Initialize Z_0 = 0, S_0 = 0 • Iterate i = 1..N: v_i = (z_i - r(s_i)) . n(s_i); h(s_i)^T = n(s_i)^T U(s_i) W; S_i = S_{i-1} + (1/q_i^2) h(s_i) h(s_i)^T; Z_i = Z_{i-1} + (1/q_i^2) h(s_i) v_i • Compute X = X' + (S' + S_N)^{-1} Z_N, where Q = W X + Q_0 (optional shape template)
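
A sketch of the accumulation loop, assuming the measurement vectors h(s_i), displacements v_i and noise levels q_i have already been computed (U and W come from the B-spline basis and the shape template, omitted here):

```python
import numpy as np

def snake_update(X_prior, S_prior, h, v, q):
    """Information-filter accumulation from the slide, followed by the
    correction X = X' + (S' + S_N)^-1 Z_N.
    h: N x m vectors, v: N normal displacements, q: N noise levels."""
    m = h.shape[1]
    S, Z = np.zeros((m, m)), np.zeros(m)
    for hi, vi, qi in zip(h, v, q):
        S += np.outer(hi, hi) / qi**2
        Z += hi * vi / qi**2
    return X_prior + np.linalg.solve(S_prior + S, Z)
```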

  45. Snake: Alternative Approach • Perform gradient ascent on an edge-strength measure integrated along the contour (e.g. |∇I| along r(s))

  46. Textured Region • Mean intensity difference between I and affine warp of template image [Shi & Tomasi, 1994]

  47. Illumination

  48. Pose and Illumination

  49. XVision: Desktop Feature Tracking • Graphics-like system • Primitive features • Geometric constraints • Fast local image processing • Perturbation-based algorithms • Easily reconfigurable • Set of C++ classes • State-based conceptual model of information propagation • Goal • Flexible, fast, easy-to-use substrate

  50. Abstraction: Feature Tracking Cycle • Prediction: prior states predict new appearance • Image rectification: generate a "normalized view" • Offset computation: compute error from nominal • State update: apply correction to fundamental state [Block diagram: Window Manager, Prediction, Offset Computation and State Update connected by the state s and offsets ds]
