
Interest Points Detection




Presentation Transcript


  1. Interest Points Detection CS485/685 Computer Vision Dr. George Bebis

  2. Interest Points Local features associated with a significant change of an image property, or of several properties simultaneously (e.g., intensity, color, texture).

  3. Why Extract Interest Points? • Corresponding points (or features) between images enable the estimation of parameters describing geometric transforms between the images.

  4. What if we don’t know the correspondences? Need to compare feature descriptors of local patches surrounding interest points: is featuredescriptor( patch1 ) = featuredescriptor( patch2 )?

  5. What if we don’t know the correspondences? (cont’d) Lots of possibilities (this is a popular research area). Simple option: match square windows around the point. State of the art approach: SIFT (David Lowe, UBC, http://www.cs.ubc.ca/~lowe/keypoints/)

  6. Invariance Features should be detected despite geometric or photometric changes in the image. Given two transformed versions of the same image, features should be detected in corresponding locations.

  7. How to achieve invariance? 1. Detector must be invariant to geometric and photometric transformations. 2. Descriptors must be invariant (if matching the descriptions is required).

  8. Applications • Image alignment • 3D reconstruction • Object recognition • Indexing and database retrieval • Object tracking • Robot navigation

  9. Example: Object Recognition occlusion, clutter

  10. Example: Panorama Stitching • How do we combine these two images?

  11. Panorama stitching (cont’d) Step 2: match features Step 1: extract features

  12. Panorama stitching (cont’d) Step 1: extract features Step 2: match features Step 3: align images

  13. What features should we use? Use features with gradients in at least two (significantly) different orientations, e.g., corners.

  14. What features should we use? (cont’d) (auto-correlation)

  15. Corners • Corners are easier to localize than lines when considering the correspondence problem (aperture problem). A point on a line is hard to match between frames t and t+1; a corner is easier.

  16. Characteristics of good features • Repeatability: the same feature can be found in several images despite geometric and photometric transformations. • Saliency: each feature has a distinctive description. • Compactness and efficiency: many fewer features than image pixels. • Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion.

  17. Main Steps in Corner Detection 1. For each pixel in the input image, the corner operator is applied to obtain a cornerness measure for this pixel. 2. Threshold the cornerness map to eliminate weak corners. 3. Apply non-maximal suppression to eliminate points whose cornerness measure is not larger than the cornerness values of all points within a certain distance.
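
Steps 2 and 3 above can be sketched in a few lines of NumPy; this is a minimal illustration, where the function name and the strict-maximum tie-breaking rule are my own choices rather than anything prescribed by the slides:

```python
import numpy as np

def select_corners(cornerness, threshold, radius=1):
    """Steps 2-3 of corner detection: threshold the cornerness map, then
    apply non-maximal suppression so only strict local maxima within a
    (2*radius+1)^2 neighborhood survive."""
    h, w = cornerness.shape
    corners = []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            c = cornerness[y, x]
            if c < threshold:
                continue  # step 2: eliminate weak corners
            patch = cornerness[y - radius:y + radius + 1,
                               x - radius:x + radius + 1]
            # step 3: keep (y, x) only if it strictly dominates its neighbors
            if c >= patch.max() and np.sum(patch == c) == 1:
                corners.append((y, x))
    return corners
```

A real detector would typically use a larger suppression radius and a threshold chosen relative to the maximum cornerness value.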

  18. Main Steps in Corner Detection (cont’d)

  19. Corner Types Example of L-junction, Y-junction, T-junction, Arrow-junction, and X-junction corner types

  20. Corner Detection Methods • Contour based • Extract contours and search for maximal curvature or inflexion points along the contour. • Intensity based • Compute a measure that indicates the presence of an interest point directly from gray (or color) values. • Parametric model based • Fit parametric intensity model to the image. • Can provide sub-pixel accuracy but are limited to specific types of interest points (e.g., L-corners).

  21. A contour-based approach: Curvature Scale Space • Assumes the object has been segmented • Parametric contour representation: (x(t), y(t)) • The contour is smoothed with a Gaussian g(t,σ) and its curvature is computed at each scale σ.

  22. Curvature Scale Space (cont’d) G. Bebis, G. Papadourakis and S. Orphanoudakis, "Curvature Scale Space Driven Object Recognition with an Indexing Scheme based on Artificial Neural Networks", Pattern Recognition, Vol. 32, No. 7, pp. 1175-1201, 1999.

  23. A parametric model approach: Zuniga-Haralick Detector • Approximate the image function in the neighborhood of the pixel (i,j) by a cubic polynomial (use SVD to find the coefficients). • A measure of "cornerness" is then computed from the polynomial coefficients.
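
The cubic fit at the heart of this detector can be sketched as a least-squares solve (NumPy's `lstsq` uses the SVD internally, matching the slide's note); the cornerness formula that combines the coefficients is omitted here, and the function name and coefficient ordering are illustrative assumptions:

```python
import numpy as np

def fit_cubic_patch(patch):
    """Least-squares fit of a bicubic polynomial to a square image patch
    centered on a pixel.  Returns the 10 coefficients of
    f(r,c) = k1 + k2*r + k3*c + k4*r^2 + k5*r*c + k6*c^2
             + k7*r^3 + k8*r^2*c + k9*r*c^2 + k10*c^3."""
    n = patch.shape[0]
    half = n // 2
    rs, cs = np.mgrid[-half:half + 1, -half:half + 1]
    r, c = rs.ravel().astype(float), cs.ravel().astype(float)
    # design matrix: one column per polynomial basis function
    A = np.stack([np.ones_like(r), r, c, r * r, r * c, c * c,
                  r ** 3, r * r * c, r * c * c, c ** 3], axis=1)
    k, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    return k
```

Fitting a patch that is exactly linear in the row and column coordinates recovers the corresponding coefficients and leaves the higher-order ones at zero.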

  24. Corner Detection Using Edge Detection? • Edge detectors are not stable at corners. • Gradient is ambiguous at corner tip. • Discontinuity of gradient direction near corner.

  25. Corner Detection Using Intensity: Basic Idea Image gradient has two or more dominant directions near a corner. Shifting a window in any direction should give a large change in intensity. • “Flat” region: no change in all directions. • “Edge”: no change along the edge direction. • “Corner”: significant change in all directions.

  26. Moravec Detector (1977) • Measure intensity variation at (x,y) by shifting a small window (3x3 or 5x5) by one pixel in each of the eight principal directions (horizontally, vertically, and the four diagonals).

  27. Moravec Detector (1977) • Calculate the intensity variation by taking the sum of squares of intensity differences of corresponding pixels in these two windows, giving eight values SW(∆x,∆y) for ∆x, ∆y in {-1,0,1} (excluding ∆x = ∆y = 0): SW(-1,-1), SW(-1,0), ..., SW(1,1).

  28. Moravec Detector (cont’d) • The “cornerness” of a pixel is the minimum intensity variation found over the eight shift directions: Cornerness(x,y) = min{SW(-1,-1), SW(-1,0), ...SW(1,1)} Cornerness Map (normalized) Note response to isolated points!
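
Slides 26-28 translate almost directly into NumPy; the sketch below is an illustration, with the window size and function name being my own choices:

```python
import numpy as np

def moravec_cornerness(image, win=1):
    """Moravec cornerness map: for each pixel, the minimum sum of squared
    differences between the window centered there and the window shifted
    by one pixel in each of the eight principal directions."""
    img = image.astype(float)
    h, w = img.shape
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    corner = np.zeros_like(img)
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            sw = [np.sum((img[y + dy - win:y + dy + win + 1,
                              x + dx - win:x + dx + win + 1] - patch) ** 2)
                  for dy, dx in shifts]
            corner[y, x] = min(sw)  # minimum over the 8 shift directions
    return corner
```

On a synthetic step corner the map behaves as the slides describe: zero in flat regions, zero along an edge aligned with one of the eight shifts, and positive at the corner tip.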

  29. Moravec Detector (cont’d) • Non-maximal suppression will yield the final corners.

  30. Moravec Detector (cont’d) • Does a reasonable job of finding the majority of true corners. • Edge points not in one of the eight principal directions will be assigned a relatively large cornerness value.

  31. Moravec Detector (cont’d) • The response is anisotropic as the intensity variation is only calculated at a discrete set of shifts (i.e., not rotationally invariant)

  32. Harris Detector • Improves the Moravec operator by avoiding the use of discrete directions and discrete shifts. • Uses a Gaussian window instead of a square window. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.

  33. Harris Detector (cont’d) • Using a first-order Taylor expansion: f(x+∆x, y+∆y) ≈ f(x,y) + fx(x,y)∆x + fy(x,y)∆y

  34. Harris Detector (cont’d) Since f(x+∆x, y+∆y) − f(x,y) ≈ fx∆x + fy∆y, the intensity variation becomes: SW(∆x,∆y) ≈ Σ(x,y)∈W [fx(x,y)∆x + fy(x,y)∆y]^2

  35. Harris Detector (cont’d) This is a quadratic form: SW(∆x,∆y) ≈ [∆x ∆y] AW(x,y) [∆x ∆y]^T, where AW(x,y) = Σ [fx^2 fxfy; fxfy fy^2] is a 2 x 2 matrix (Hessian or auto-correlation or second moment).

  36. Auto-correlation matrix Describes the gradient distribution (i.e., local structure) inside the window! Does not depend on the shift (∆x, ∆y).

  37. Harris Detector (cont’d) • General case – use window function: default window function w(x,y) : 1 in window, 0 outside

  38. Harris Detector (cont’d) Gaussian • Harris uses a Gaussian window: w(x,y)=G(x,y,σI) where σI is called the “integration” scale window function w(x,y) :
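
The Gaussian-weighted auto-correlation matrix at a single pixel can be sketched as follows, assuming central-difference gradients from `np.gradient`; the truncation of the window at 3σ and the unnormalized Gaussian weights are illustrative choices, not from the slides:

```python
import numpy as np

def structure_tensor_at(img, y, x, sigma_i=1.0):
    """Auto-correlation (second moment) matrix AW at pixel (y, x), using
    a Gaussian window w(x,y) = G(x,y,sigma_I) as in the Harris detector."""
    img = img.astype(float)
    fy, fx = np.gradient(img)        # image gradients (rows, then columns)
    half = int(3 * sigma_i)          # truncate the Gaussian window at 3 sigma
    A = np.zeros((2, 2))
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_i ** 2))
            gx, gy = fx[y + dy, x + dx], fy[y + dy, x + dx]
            A += w * np.array([[gx * gx, gx * gy],
                               [gx * gy, gy * gy]])
    return A
```

On a vertical step edge only the fx-based entry is nonzero, and in a flat region the matrix vanishes, matching the eigenvalue-based classification discussed in the later slides.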

  39. Auto-correlation matrix (cont’d)

  40. Harris Detector (cont’d) Since AW is symmetric, we can visualize it as an ellipse with axis lengths determined by the eigenvalues ((λmax)^(-1/2) and (λmin)^(-1/2)) and orientation determined by R: the short axis, (λmax)^(-1/2), points in the direction of the fastest change; the long axis, (λmin)^(-1/2), in the direction of the slowest change. Ellipse equation: [∆x ∆y] AW [∆x ∆y]^T = const.

  41. Harris Detector (cont’d) • Eigenvectors encode edge direction (the directions of fastest and slowest change) • Eigenvalues encode edge strength.

  42. Distribution of fx and fy

  43. Distribution of fx and fy (cont’d)

  44. Harris Detector (cont’d) Classification of image points using the eigenvalues of AW: • “Flat” region: λ1 and λ2 are small; SW is almost constant in all directions. • “Edge”: λ1 >> λ2 or λ2 >> λ1. • “Corner”: λ1 and λ2 are large and λ1 ~ λ2; SW increases in all directions.

  45. Harris Detector (cont’d) A simple cornerness measure is the smaller eigenvalue λ2 (assuming that λ1 > λ2).

  46. Harris Detector (cont’d) • To avoid eigenvalue computation, the following response function is used: R(A) = det(A) – k·trace^2(A) • It can be shown that: R(A) = λ1λ2 – k(λ1 + λ2)^2
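
The equivalence on this slide is easy to check numerically; a minimal sketch using NumPy (`eigvalsh` handles the symmetric 2x2 matrix):

```python
import numpy as np

def harris_response(A, k=0.04):
    """Harris response R(A) = det(A) - k * trace(A)^2,
    computed without an explicit eigen-decomposition."""
    return np.linalg.det(A) - k * np.trace(A) ** 2
```

Because det(A) = λ1λ2 and trace(A) = λ1 + λ2 for a 2x2 matrix, this agrees exactly with λ1λ2 − k(λ1 + λ2)^2, and it is positive for corner-like matrices (two large, comparable eigenvalues) and negative for edge-like ones (one dominant eigenvalue).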

  47. Harris Detector (cont’d) R(A) = det(A) – k·trace^2(A), where k is a constant, usually between 0.04 and 0.06. • “Corner”: R > 0 • “Edge”: R < 0 • “Flat” region: |R| small.

  48. Harris Detector (cont’d) σD is called the “differentiation” scale

  49. Harris Detector - Example

  50. Harris Detector - Example Compute corner response R
