
Presentation Transcript


  1. CS598: Visual Information Retrieval Lecture III: Image Representation: Invariant Local Image Descriptors

  2. Recap of Lecture II • Color and texture descriptors • Color histogram • Color correlogram • LBP descriptors • Histogram of oriented gradients • Spatial pyramid matching • Distance & similarity measures • Lp distances • Chi-square distance • KL divergence • EMD distance • Histogram intersection

  3. Lecture III, Part I: Local Feature Detector

  4. Outline • Blob detection • Brief review of the Gaussian filter • Scale selection • Laplacian of Gaussian (LoG) detector • Difference of Gaussian (DoG) detector • Affine covariant regions

  5. Gaussian Kernel • Constant factor at front makes the volume sum to 1 (can be ignored when computing the filter values, as we should renormalize the weights to sum to 1 in any case) • 5 x 5 kernel, σ = 1:
0.003 0.013 0.022 0.013 0.003
0.013 0.059 0.097 0.059 0.013
0.022 0.097 0.159 0.097 0.022
0.013 0.059 0.097 0.059 0.013
0.003 0.013 0.022 0.013 0.003
Source: C. Rasmussen
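(Not from the slides: a minimal NumPy sketch of where these numbers come from. It evaluates the 2D Gaussian with the 1/(2πσ²) factor on a 5 x 5 integer grid; the function name gaussian_kernel is just illustrative.)

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Evaluate the 2D Gaussian, including the 1/(2*pi*sigma^2) factor, on an integer grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

k = gaussian_kernel(5, 1.0)
print(np.round(k, 3))            # reproduces the 5 x 5, sigma = 1 values above
print(np.round(k / k.sum(), 3))  # in practice, renormalize so the weights sum to exactly 1
```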

  6. Gaussian Kernel • Standard deviation σ determines the extent of smoothing: σ = 2 with a 30 x 30 kernel vs. σ = 5 with a 30 x 30 kernel Source: K. Grauman

  7. Choosing kernel width • The Gaussian function has infinite support, but discrete filters use finite kernels Source: K. Grauman

  8. Choosing kernel width • Rule of thumb: set filter half-width to about 3σ

  9. Gaussian vs. box filtering

  10. Gaussian filters • Remove “high-frequency” components from the image (low-pass filter) • Convolution with self is another Gaussian • So we can smooth with a small-σ kernel, repeat, and get the same result a larger-σ kernel would have given • Convolving twice with a Gaussian kernel of std. dev. σ is the same as convolving once with a kernel of std. dev. σ√2 • Separable kernel • Factors into a product of two 1D Gaussians Source: K. Grauman

  11. Separability of the Gaussian filter Source: D. Lowe

  12. Separability example • 2D convolution (center location only) • The filter factors into a product of 1D filters • Perform convolution along rows • Followed by convolution along the remaining column Source: K. Grauman

  13. Why is separability useful? • What is the complexity of filtering an n×n image with an m×m kernel? • O(n²m²) • What if the kernel is separable? • O(n²m)
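(Illustrative sketch, assuming NumPy/SciPy: the same Gaussian smoothing done as one 2D convolution and as two 1D passes. The O(n²m) separable version should agree with the O(n²m²) direct version up to boundary handling; σ = 1 with a 7-tap kernel follows the 3σ half-width rule of thumb from slide 8.)

```python
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import convolve2d

sigma, half = 1.0, 3                       # half-width ~ 3*sigma (rule of thumb)
x = np.arange(-half, half + 1)
g1 = np.exp(-x**2 / (2 * sigma**2))
g1 /= g1.sum()                             # 1D Gaussian, weights sum to 1
g2 = np.outer(g1, g1)                      # separable: 2D kernel = outer product of 1D kernels

img = np.random.rand(64, 64)

direct = convolve2d(img, g2, mode='same', boundary='symm')   # O(n^2 m^2)
by_rows = convolve1d(img, g1, axis=1, mode='reflect')        # O(n^2 m)
separable = convolve1d(by_rows, g1, axis=0, mode='reflect')  # O(n^2 m)

print(np.abs(direct - separable).max())    # should be ~0 (identical up to floating point)
```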

  14. Outline • Blob detection • Brief review of the Gaussian filter • Scale selection • Laplacian of Gaussian (LoG) detector • Difference of Gaussian (DoG) detector • Affine covariant regions

  15. Blob detection in 2D • Laplacian of Gaussian: circularly symmetric operator for blob detection in 2D: ∇²g = ∂²g/∂x² + ∂²g/∂y²

  16. Blob detection in 2D • Laplacian of Gaussian: circularly symmetric operator for blob detection in 2D • Scale-normalized: ∇²norm g = σ² (∂²g/∂x² + ∂²g/∂y²)

  17. Scale selection • At what scale does the Laplacian achieve a maximum response to a binary circle of radius r? (figure: circle of radius r in the image and the corresponding Laplacian response)

  18. Scale selection • At what scale does the Laplacian achieve a maximum response to a binary circle of radius r? • To get maximum response, the zeros of the Laplacian have to be aligned with the circle • The Laplacian is given (up to scale) by (x² + y² − 2σ²) e^−(x² + y²)/(2σ²), so its zeros lie on the circle x² + y² = 2σ² • Therefore, the maximum response occurs at σ = r/√2
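(A quick numerical check of that claim, assuming SciPy; the disk radius and σ range are arbitrary test values.)

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

r, size = 20, 201
y, x = np.mgrid[:size, :size] - size // 2
disk = ((x**2 + y**2) <= r**2).astype(float)      # binary circle of radius r

sigmas = np.linspace(5, 30, 101)
# magnitude of the scale-normalized Laplacian response at the blob center
center = [abs(s**2 * gaussian_laplace(disk, s)[size // 2, size // 2]) for s in sigmas]

print(sigmas[int(np.argmax(center))], r / np.sqrt(2))   # the two values should be close (~14.1)
```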

  19. Characteristic scale • We define the characteristic scale of a blob as the scale that produces the peak of the Laplacian response at the blob center T. Lindeberg (1998). "Feature detection with automatic scale selection." International Journal of Computer Vision 30(2): 77-116.

  20. Scale-space blob detector • Convolve image with scale-normalized Laplacian at several scales

  21. Scale-space blob detector: Example

  22. Scale-space blob detector: Example

  23. Scale-space blob detector • Convolve image with scale-normalized Laplacian at several scales • Find maxima of squared Laplacian response in scale-space
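(A compact sketch of that two-step procedure, assuming SciPy; the function name, the 3x3x3 neighborhood, and the threshold are illustrative choices, not Lowe's or Lindeberg's exact implementation. scikit-image ships a ready-made version as skimage.feature.blob_log.)

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(img, sigmas, threshold=1e-2):
    """Blobs = local maxima of the squared, scale-normalized Laplacian in (x, y, sigma)."""
    # Step 1: convolve with the scale-normalized Laplacian at several scales
    stack = np.stack([(s**2 * gaussian_laplace(img, s))**2 for s in sigmas])
    # Step 2: keep points that are maxima of the squared response in their 3x3x3 neighborhood
    is_max = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
    s_idx, ys, xs = np.nonzero(is_max)
    return [(y, x, sigmas[i]) for i, y, x in zip(s_idx, ys, xs)]
```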

  24. Scale-space blob detector: Example

  25. Outline • Blob detection • Brief review of the Gaussian filter • Scale selection • Laplacian of Gaussian (LoG) detector • Difference of Gaussian (DoG) detector • Affine covariant regions

  26. Efficient implementation • Approximating the Laplacian with a difference of Gaussians: L = σ² (Gxx(x, y, σ) + Gyy(x, y, σ)) (Laplacian) and DoG = G(x, y, kσ) − G(x, y, σ) (Difference of Gaussians)
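(A numerical illustration of the approximation, assuming SciPy; the scale ratio k = 1.6 and the random test image are arbitrary. It compares G(kσ) − G(σ) applied to an image against (k − 1)σ²∇²G.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

img = np.random.rand(128, 128)
sigma, k = 2.0, 1.6                         # k is the scale ratio between the two Gaussians

dog = gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)   # Difference of Gaussians
log = (k - 1) * sigma**2 * gaussian_laplace(img, sigma)               # scaled Laplacian of Gaussian

print(np.corrcoef(dog.ravel(), log.ravel())[0, 1])   # should be close to 1
```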

  27. Efficient implementation David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60(2), pp. 91-110, 2004.

  28. Invariance and covariance properties • Laplacian (blob) response is invariant w.r.t. rotation and scaling • Blob location and scale is covariant w.r.t. rotation and scaling • What about intensity change?

  29. Outline • Blob detection • Brief review of the Gaussian filter • Scale selection • Laplacian of Gaussian (LoG) detector • Difference of Gaussian (DoG) detector • Affine covariant regions

  30. Achieving affine covariance • Affine transformation approximates viewpoint changes for roughly planar objects and roughly orthographic cameras

  31. Achieving affine covariance • Consider the second moment matrix of the window containing the blob: M = Σ w(x, y) [Ix² IxIy; IxIy Iy²] • Recall: the eigenvectors and eigenvalues of M define an ellipse with axis lengths (λmax)^(−1/2) and (λmin)^(−1/2) along the directions of the fastest and slowest intensity change • This ellipse visualizes the “characteristic shape” of the window
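(A minimal NumPy sketch of that computation; unweighted sums over the patch are used instead of a Gaussian window w(x, y), which is a simplification.)

```python
import numpy as np

def second_moment_ellipse(patch):
    """Second moment matrix of a patch and the axis lengths/directions of its ellipse."""
    Iy, Ix = np.gradient(patch.astype(float))            # image gradients
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    lam, vecs = np.linalg.eigh(M)                         # eigenvalues ascending: lam_min, lam_max
    # Axis lengths are lambda^(-1/2); the short axis lies along the direction of fastest change.
    # Assumes a textured patch (both eigenvalues nonzero).
    return M, (lam[1]**-0.5, lam[0]**-0.5), vecs
```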

  32. Affine adaptation example Scale-invariant regions (blobs)

  33. Affine adaptation example Affine-adapted blobs

  34. From covariant detection to invariant description • Geometrically transformed versions of the same neighborhood will give rise to regions that are related by the same transformation • What to do if we want to compare the appearance of these image regions? • Normalization: transform these regions into same-size circles

  35. Affine normalization • Problem: There is no unique transformation from an ellipse to a unit circle • We can rotate or flip a unit circle, and it still stays a unit circle

  36. Eliminating rotation ambiguity • To assign a unique orientation to circular image windows: • Create a histogram of local gradient directions (over [0, 2π)) in the patch • Assign the canonical orientation at the peak of the smoothed histogram
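(A sketch of that orientation assignment in NumPy; the 36-bin histogram and the small smoothing kernel are common choices, assumed here rather than taken from the slides.)

```python
import numpy as np

def dominant_orientation(patch, n_bins=36):
    """Canonical orientation = peak of a smoothed, magnitude-weighted gradient-direction histogram."""
    Iy, Ix = np.gradient(patch.astype(float))
    mag = np.hypot(Ix, Iy)
    ang = np.arctan2(Iy, Ix) % (2 * np.pi)                       # directions in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    kernel = np.array([1, 4, 6, 4, 1]) / 16.0                    # smooth circularly before taking the peak
    hist = np.convolve(np.r_[hist[-2:], hist, hist[:2]], kernel, mode='valid')
    peak = int(np.argmax(hist))
    return (peak + 0.5) * 2 * np.pi / n_bins                     # center of the peak bin, in radians
```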

  37. From covariant regions to invariant features • Extract affine regions → Normalize regions → Eliminate rotational ambiguity → Compute appearance descriptors • SIFT (Lowe ’04)

  38. Invariance vs. covariance • Invariance: • features(transform(image)) = features(image) • Covariance: • features(transform(image)) = transform(features(image)) • Covariant detection => invariant description

  39. David G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110 • 6,291 citations as of 02/28/2010; 22,481 as of 02/03/2014 • Our goal is to design the best local image descriptors in the world. Lecture III, Part II: Learning Local Feature Descriptors

  40. Panoramic stitching [Brown et al. CVPR05] • Real-world face recognition [Wright & Hua CVPR09], [Hua & Akbarzadeh ICCV09] • Image databases [Mikolajczyk & Schmid ICCV01] • Object or location recognition [Nister & Stewenius CVPR06], [Schindler et al. CVPR07] • Robot navigation [Deans et al. AC05] • 3D reconstruction [Snavely et al. SIGGRAPH06] Courtesy of Seitz & Szeliski

  41. Typical matching process • Dense sampling of image patches, or • Interest point/region detection (sparse) (figure: Image 1 and Image 2 under each sampling strategy)

  42. Typical matching process (figure: patches P and Q from Image 1 and Image 2 mapped into a common descriptor space)

  43. Problem to solve • To obtain the most discriminative, compact, and computationally efficient local image descriptors • Learning a function of a local image patch, descriptor = f(patch), such that a nearest-neighbor classifier is optimal • How can we get ground truth data? • What is the form of the descriptor function f(·)? • What is the measure for optimality?
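(A small sketch of the nearest-neighbor matching that the learned descriptor should make optimal; the L2 metric and the 0.8 ratio-test threshold are illustrative assumptions.)

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each row of desc1 to its nearest row of desc2, keeping only confident matches."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)     # L2 distances to every descriptor in image 2
        nn1, nn2 = np.argsort(dists)[:2]              # nearest and second-nearest neighbors
        if dists[nn1] < ratio * dists[nn2]:           # ratio test: reject ambiguous matches
            matches.append((i, int(nn1)))
    return matches
```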

  44. How can we get ground truth data?

  45. Training data • 3D point cloud + multiview stereo = training data [Goesele et al. - ICCV’07] [Snavely et al. - SIGGRAPH’06]

  46. Training data 3D Point Cloud [Goesele et al. - ICCV’07] [Snavely et al. - SIGGRAPH’06 ]

  47. Training data 3D Point Cloud [Goesele et al. - ICCV’07] [Snavely et al. - SIGGRAPH’06 ]

  48. Training data 3D Point Cloud [Goesele et al. - ICCV’07] [Snavely et al. - SIGGRAPH’06 ]

  49. Statue of Liberty (New York) – Liberty • Notre Dame (Paris) – Notre Dame • Half Dome (Yosemite) – Yosemite • http://www.cs.ubc.ca/~mbrown/patchdata/patchdata.html
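(A hedged sketch of reading such patch data; it assumes the layout described on that page, bitmaps tiled with 64 x 64 grayscale patches plus an info.txt whose first column is the 3D point ID, so treat the details as assumptions rather than a spec.)

```python
import numpy as np
from PIL import Image

def load_patches(bmp_path, patch_size=64):
    """Split one dataset bitmap into individual grayscale patches (row-major order)."""
    img = np.array(Image.open(bmp_path).convert('L'))
    rows, cols = img.shape[0] // patch_size, img.shape[1] // patch_size
    return [img[r * patch_size:(r + 1) * patch_size, c * patch_size:(c + 1) * patch_size]
            for r in range(rows) for c in range(cols)]

def load_point_ids(info_path):
    """3D point ID per patch: equal IDs give 'match' pairs, different IDs give 'non-match' pairs."""
    with open(info_path) as f:
        return [int(line.split()[0]) for line in f]
```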

  50. What is the form of the descriptor function?
