
Feature-Based Alignment


Presentation Transcript


  1. CS 636 Computer Vision Feature-Based Alignment Nathan Jacobs

  2. Announcements • New office: 319 Marksbury • Project 2 released (partially)

  3. review from last week… • Point features • robustly finding (Harris and DoG) and matching (local invariant descriptors) • Line features • robustly finding • This week: fitting and aligning

  4. Fitting slides by Lazebnik

  5. Fitting: Motivation [Image: 9300 Harris Corners Pkwy, Charlotte, NC] • We’ve learned how to detect edges, corners, and blobs. Now what? • We would like to form a higher-level, more compact representation of the features in the image by grouping multiple features according to a simple model

  6. Fitting • Choose a parametric model to represent a set of features [Images: simple model: lines; simple model: circles; complicated model: car] Source: K. Grauman

  7. Fitting • Choose a parametric model to represent a set of features • Line, ellipse, spline, etc. • Three main questions: • What model represents this set of features best? • Which of several model instances gets which feature? • How many model instances are there? • Computational complexity is important • It is infeasible to examine every possible set of parameters and every possible combination of features

  8. Fitting: Issues Case study: Line detection • Noise in the measured feature locations • Extraneous data: clutter (outliers), multiple lines • Missing data: occlusions

  9. Fitting: Issues • If we know which points belong to the line, how do we find the “optimal” line parameters? • Least squares • What if there are outliers? • Robust fitting, RANSAC • What if there are many lines? • Voting methods: RANSAC, Hough transform • What if we’re not even sure it’s a line? • Model selection

  10. Least squares line fitting • Data: (x1, y1), …, (xn, yn) • Line equation: yi = m xi + b • Find (m, b) to minimize E = Σi (yi – m xi – b)² [Figure: line y = mx + b with the vertical residual to a point (xi, yi)]

  11. Least squares line fitting • Data: (x1, y1), …, (xn, yn) • Line equation: yi = m xi + b • Find (m, b) to minimize E = Σi (yi – m xi – b)² = ||Y – XB||², where X has rows [xi, 1], B = [m, b]ᵀ, and Y = [y1, …, yn]ᵀ • Normal equations: XᵀXB = XᵀY, the least squares solution to XB = Y
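A minimal NumPy sketch of this fit (the sample data, noise level, and names are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical noisy samples from y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

# Design matrix X with rows [xi, 1]; lstsq solves the normal
# equations X^T X B = X^T Y in a numerically stable way.
X = np.column_stack([x, np.ones_like(x)])
(m, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"m = {m:.3f}, b = {b:.3f}")
```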

  12. Problem with “vertical” least squares • Not rotation-invariant • Fails completely for vertical lines

  13. Total least squares • Distance between point (xi, yi) and line ax + by = d (a² + b² = 1): |a xi + b yi – d| [Figure: line ax + by = d with unit normal N = (a, b) and point (xi, yi)]

  14. Total least squares • Distance between point (xi, yi) and line ax + by = d (a² + b² = 1): |a xi + b yi – d| • Find (a, b, d) to minimize the sum of squared perpendicular distances E = Σi (a xi + b yi – d)²

  15. Total least squares • Find (a, b, d) to minimize E = Σi (a xi + b yi – d)² • Setting ∂E/∂d = 0 gives d = a x̄ + b ȳ, so E = ||UN||² where U has rows [xi – x̄, yi – ȳ] • Solution to (UᵀU)N = 0 subject to ||N||² = 1: the eigenvector of UᵀU associated with the smallest eigenvalue (least squares solution to the homogeneous linear system UN = 0)

  16. Total least squares • UᵀU = Σi [(xi – x̄)², (xi – x̄)(yi – ȳ); (xi – x̄)(yi – ȳ), (yi – ȳ)²] is the second moment matrix of the points about their centroid

  17. Total least squares • The normal N = (a, b) is the eigenvector of the second moment matrix with the smallest eigenvalue; the fitted line passes through the centroid (x̄, ȳ)
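A sketch of this eigenvector computation, assuming NumPy (the function name is ours):

```python
import numpy as np

def total_least_squares_line(pts):
    """Fit a*x + b*y = d minimizing perpendicular distances.

    pts: (n, 2) array. Returns (a, b, d) with a^2 + b^2 = 1.
    """
    centroid = pts.mean(axis=0)
    U = pts - centroid                     # rows [xi - x_bar, yi - y_bar]
    # eigh returns eigenvalues in ascending order, so column 0
    # is the eigenvector with the smallest eigenvalue.
    _, vecs = np.linalg.eigh(U.T @ U)
    a, b = vecs[:, 0]
    d = a * centroid[0] + b * centroid[1]  # line passes through centroid
    return a, b, d
```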

  18. Least squares as likelihood maximization • Generative model: line points are corrupted by Gaussian noise in the direction perpendicular to the line: (x, y) = (u, v) + ε N, where (u, v) is a point on the line, N = (a, b) is the normal direction, and the noise ε is sampled from a zero-mean Gaussian with std. dev. σ

  19. Least squares as likelihood maximization • Likelihood of points given line parameters (a, b, d): P(x1, …, xn | a, b, d) ∝ Πi exp(–(a xi + b yi – d)² / (2σ²)) • Log-likelihood: L = –(1/2σ²) Σi (a xi + b yi – d)², so maximizing the likelihood is equivalent to minimizing the total least squares error

  20. Least squares for general curves • We would like to minimize the sum of squared geometric distances d((xi, yi), C) between the data points (xi, yi) and the curve C

  21. Least squares for conics • Equation of a general conic: C(a, x) = a · x = ax² + bxy + cy² + dx + ey + f = 0, a = [a, b, c, d, e, f], x = [x², xy, y², x, y, 1] • Minimizing the geometric distance is non-linear even for a conic • Algebraic distance: C(a, x) • Algebraic distance minimization by linear least squares: minimize Σi (a · xi)²

  22. Least squares for conics • Least squares system: Da = 0 • Need a constraint on a to prevent the trivial solution a = 0 • Discriminant b² – 4ac: • Negative: ellipse • Zero: parabola • Positive: hyperbola • Minimizing squared algebraic distance subject to constraints leads to a generalized eigenvalue problem • For more information: A. Fitzgibbon, M. Pilu, and R. Fisher, Direct least-squares fitting of ellipses, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5), 476–480, May 1999
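A sketch of that generalized eigenvalue problem in the spirit of Fitzgibbon et al., using the ellipse constraint 4ac – b² = 1; this is illustrative, not the authors' reference implementation, and it assumes SciPy:

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit (sketch).

    Minimizes the algebraic distance ||Da||^2 subject to
    4ac - b^2 = 1, which restricts the conic to an ellipse.
    Returns conic coefficients [a, b, c, d, e, f].
    """
    # Design matrix D: one row [x^2, xy, y^2, x, y, 1] per point.
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    S = D.T @ D                   # scatter matrix
    C = np.zeros((6, 6))          # constraint matrix: a^T C a = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem S a = lambda C a; the ellipse solution
    # is the eigenvector with the single positive finite eigenvalue
    # (assumes the data admit a valid ellipse fit).
    vals, vecs = eig(S, C)
    vals = np.real(vals)
    idx = np.flatnonzero(np.isfinite(vals) & (vals > 0))[0]
    return np.real(vecs[:, idx])
```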

  23. Least squares: Robustness to noise • Least squares fit to the red points:

  24. Least squares: Robustness to noise • Least squares fit with an outlier: Problem: squared error heavily penalizes outliers

  25. Robust estimators • General approach: minimize Σi ρ(ri(xi, θ); σ) • ri(xi, θ) – residual of the ith point w.r.t. model parameters θ • ρ – robust function with scale parameter σ • The robust function ρ behaves like squared distance for small values of the residual u but saturates for larger values of u
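One way to try this in practice is SciPy's least_squares with a Huber loss, which plays the role of ρ; f_scale corresponds to the scale parameter σ (the data below are hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

# Points on y = 2x + 1 with one gross outlier injected.
x = np.linspace(0, 10, 30)
y = 2 * x + 1
y[15] += 40.0

def residuals(theta, x, y):
    m, b = theta
    return y - (m * x + b)           # r_i(x_i; theta)

# loss='huber' is quadratic for small residuals and linear beyond
# f_scale, so the outlier's influence saturates. Note the x0
# initialization: robust fitting is solved iteratively (cf. slide 29).
fit = least_squares(residuals, x0=[1.0, 0.0], args=(x, y),
                    loss='huber', f_scale=1.0)
print(fit.x)                         # close to (2, 1) despite the outlier
```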

  26. Choosing the scale: Just right The effect of the outlier is minimized

  27. Choosing the scale: Too small The error value is almost the same for every point and the fit is very poor

  28. Choosing the scale: Too large Behaves much the same as least squares

  29. Robust estimation: Notes • Robust fitting is a nonlinear optimization problem that must be solved iteratively • Least squares solution can be used for initialization

  30. RANSAC • Robust fitting can deal with a few outliers – what if we have very many? • Random sample consensus (RANSAC): Very general framework for model fitting in the presence of outliers • Outline • Choose a small subset of points uniformly at random • Fit a model to that subset • Find all remaining points that are “close” to the model and reject the rest as outliers • Do this many times and choose the best model M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol 24, pp 381-395, 1981.

  31. RANSAC for line fitting • Repeat N times: • Draw s points uniformly at random • Fit line to these s points • Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t) • If there are d or more inliers, accept the line and refit using all inliers
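A sketch of this loop, assuming NumPy; the defaults for n_iters, t, and d are illustrative, not prescribed by the slides:

```python
import numpy as np

def ransac_line(pts, n_iters=100, t=1.0, d=20, rng=None):
    """RANSAC line fitting (sketch): returns (a, b, d0) for the
    line a*x + b*y = d0 refit to the best consensus set, or None.
    """
    rng = rng or np.random.default_rng()
    best, best_count = None, 0
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        # Unit normal of the line through the two sampled points.
        tangent = pts[j] - pts[i]
        norm = np.hypot(*tangent)
        if norm == 0:
            continue
        a, b = -tangent[1] / norm, tangent[0] / norm
        d0 = a * pts[i, 0] + b * pts[i, 1]
        dist = np.abs(a * pts[:, 0] + b * pts[:, 1] - d0)
        inliers = pts[dist < t]
        if len(inliers) >= d and len(inliers) > best_count:
            best_count = len(inliers)
            # Refit to all inliers with total least squares.
            c = inliers.mean(axis=0)
            U = inliers - c
            _, vecs = np.linalg.eigh(U.T @ U)
            a, b = vecs[:, 0]
            best = (a, b, a * c[0] + b * c[1])
    return best
```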

  32–35. Choosing the parameters • Initial number of points s • Typically minimum number needed to fit the model • Distance threshold t • Choose t so probability for inlier is p (e.g. 0.95) • Zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ² • Number of samples N • Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99), given outlier ratio e: N = log(1 – p) / log(1 – (1 – e)^s) • Consensus set size d • Should match expected inlier ratio Source: M. Pollefeys
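The formula for N can be evaluated directly; for instance, line fitting (s = 2) with 50% outliers at p = 0.99 needs only 17 samples:

```python
import math

def ransac_num_samples(s, e, p=0.99):
    """Smallest N with 1 - (1 - (1 - e)**s)**N >= p: with
    probability p, at least one size-s sample is outlier-free."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

print(ransac_num_samples(s=2, e=0.5))   # 17
```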

  36. Adaptively determining the number of samples • Inlier ratio e is often unknown a priori, so pick the worst case, e.g. 50%, and adapt if more inliers are found, e.g. 80% would yield e = 0.2 • Adaptive procedure: • N = ∞, sample_count = 0 • While N > sample_count: • Choose a sample and count the number of inliers • Set e = 1 – (number of inliers)/(total number of points) • Recompute N from e: N = log(1 – p) / log(1 – (1 – e)^s) • Increment sample_count by 1 Source: M. Pollefeys
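A runnable sketch of this loop; sample_and_count is a hypothetical callback that draws one minimal sample, fits the model, and returns the inlier count (we track the best count seen so far, a common refinement of the procedure above):

```python
import math

def adaptive_ransac(sample_and_count, n_total, s=2, p=0.99):
    """Adapt the number of RANSAC samples as the inlier ratio
    estimate improves, per the procedure above."""
    N, sample_count, best_inliers = math.inf, 0, 0
    while N > sample_count:
        best_inliers = max(best_inliers, sample_and_count())
        e = 1 - best_inliers / n_total     # current outlier estimate
        if e <= 0:
            break                          # every point is an inlier
        if e < 1:                          # recompute N once e is known
            N = math.log(1 - p) / math.log(1 - (1 - e) ** s)
        sample_count += 1
    return sample_count
```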

  37. RANSAC pros and cons • Pros • Simple and general • Applicable to many different problems • Often works well in practice • Cons • Lots of parameters to tune • Can’t always get a good initialization of the model based on the minimum number of samples • Sometimes too many iterations are required • Can fail for extremely low inlier ratios • We can often do better than brute-force sampling

  38. Questions? • fitting: least squares and total least squares • robust fitting • RANSAC • … rest of the lecture is about image alignment: fitting planes onto planes

  39. A look into the past http://blog.flickr.net/en/2010/01/27/a-look-into-the-past/

  40. A look into the past • Leningrad during the blockade http://komen-dant.livejournal.com/345684.html

  41. Bing streetside images http://www.bing.com/community/blogs/maps/archive/2010/01/12/new-bing-maps-application-streetside-photos.aspx

  42. Image alignment: Applications • Panorama stitching • Recognition of object instances

  43. Image alignment: Challenges • Small degree of overlap • Intensity changes • Occlusion, clutter

  44. Image alignment • Two broad approaches: • Direct (pixel-based) alignment • Search for alignment where most pixels agree • Feature-based alignment • Search for alignment where extracted features agree • Can be verified using pixel-based alignment

  45. Alignment as fitting • Fitting: find the model M that minimizes the sum of residuals Σi residual(xi, M) • Alignment: fitting a model to a transformation between pairs of matched features (xi, xi′) in two images; find the transformation T that minimizes Σi residual(T(xi), xi′)

  46. 2D transformation models • Similarity (translation, scale, rotation) • Affine • Projective (homography)

  47. Let’s start with affine transformations • Simple fitting procedure (linear least squares) • Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras • Can be used to initialize fitting for more complex models

  48. Fitting an affine transformation • Assume we know the correspondences; how do we get the transformation? • Model: (xi′, yi′)ᵀ = A (xi, yi)ᵀ + t, with a 2×2 matrix A and translation t

  49. Fitting an affine transformation • Stacking the equations from all matches gives a linear system with six unknowns (the entries of A and t), as sketched below • Each match gives us two linearly independent equations: need at least three matches to solve for the transformation parameters
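A linear least squares sketch of that system, assuming NumPy, with the unknowns ordered [a11, a12, a21, a22, tx, ty]:

```python
import numpy as np

def fit_affine(src, dst):
    """Estimate A (2x2) and t (2,) with dst ~ A @ src + t from
    n >= 3 point matches (two equations per match, six unknowns).
    """
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src        # x' rows: [x, y, 0, 0, 1, 0]
    M[0::2, 4] = 1
    M[1::2, 2:4] = src        # y' rows: [0, 0, x, y, 0, 1]
    M[1::2, 5] = 1
    rhs = dst.reshape(-1)     # interleaved [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    A = params[:4].reshape(2, 2)
    t = params[4:]
    return A, t
```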

  50. Beyond affine transformations • Homography: plane projective transformation (transformation taking a quad to another arbitrary quad)
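The section stops here, but the standard estimator for a homography from point matches is the direct linear transform (DLT); the following sketch omits the coordinate normalization one would add in practice:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate H (3x3, up to scale) with dst ~ H @ [x, y, 1]
    from n >= 4 matches, via homogeneous least squares.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of A with the
    # smallest singular value (cf. the total least squares slides).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```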
