
Approximate Nearest Subspace Search with Applications to Pattern Recognition


Presentation Transcript


  1. Approximate Nearest Subspace Search with Applications to Pattern Recognition Ronen Basri, Tal Hassner, Lihi Zelnik-Manor presented by Andrew Guillory and Ian Simon

  2. The Problem • Given n linear subspaces S_i of dimension k in R^d.

  3. The Problem • Given n linear subspaces S_i of dimension k in R^d. • And a query point q in R^d.

  4. The Problem • Given n linear subspaces S_i of dimension k in R^d. • And a query point q in R^d. • Find the subspace S_i that minimizes dist(S_i, q).
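
As a baseline for what the paper speeds up, here is a minimal brute-force sketch (my own illustration, not the authors' code): each subspace S_i is represented by a d-by-k matrix B_i with orthonormal columns, and every query scans all n subspaces.

```python
import numpy as np

def point_subspace_dist(B, q):
    """Distance from q to span(B); B has orthonormal columns."""
    proj = B @ (B.T @ q)              # orthogonal projection of q onto the subspace
    return np.linalg.norm(q - proj)

def nearest_subspace_bruteforce(bases, q):
    """Index of the closest subspace; O(n d k) per query."""
    return min(range(len(bases)),
               key=lambda i: point_subspace_dist(bases[i], q))
```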

  5. Why? • object appearance variation = subspace (e.g., the images of a face under varying illumination span a low-dimensional subspace) • fast queries on an object database

  6. Why? • object appearance variation = subspace (e.g., the images of a face under varying illumination span a low-dimensional subspace) • fast queries on an object database • Other reasons?

  7. Approach • Solve by reduction to nearest neighbor. • point-to-point distances

  8. Approach • Solve by reduction to nearest neighbor. • point-to-point distances (not a formal reduction)

  9. Approach • Solve by reduction to nearest neighbor. • point-to-point distances (not a formal reduction) • In a higher-dimensional space.

  10. Point-Subspace Distance • Use squared distance.

  11. Point-Subspace Distance • Use squared distance.

  12. Point-Subspace Distance • Use squared distance. • Squared point-subspace distance can be represented as a dot product.
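
A quick numerical check of this identity (a sketch with my own variable names): if the columns of Z span the orthogonal complement of S, then dist^2(S, q) = ||Z^T q||^2 = vec(Z Z^T) · vec(q q^T).

```python
import numpy as np

d, k = 5, 2
rng = np.random.default_rng(0)
Z, _ = np.linalg.qr(rng.standard_normal((d, d - k)))  # orthonormal basis of the complement
q = rng.standard_normal(d)

dist_sq = np.linalg.norm(Z.T @ q) ** 2                    # squared point-subspace distance
dot = np.dot((Z @ Z.T).ravel(), np.outer(q, q).ravel())   # dot-product form
assert np.isclose(dist_sq, dot)
```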

  13. The Reduction • Let: u = vec(Z Z^T), v = -vec(q q^T). Remember: dist^2(S, q) = vec(Z Z^T) · vec(q q^T).

  14. The Reduction • Let: u = vec(Z Z^T), v = -vec(q q^T). • Then: u · v = -dist^2(S, q). Remember: dist^2(S, q) = vec(Z Z^T) · vec(q q^T).

  15. The Reduction ||u - v||^2 = ||u||^2 + ||v||^2 - 2 u · v

  16. The Reduction ||u - v||^2 = ||u||^2 + ||q||^4 + 2 dist^2(S, q) constant over query: ||v||^2 = ||q||^4

  17. The Reduction ||u||^2 = ? constant over query: ||v||^2 = ||q||^4

  18. The Reduction ||u||^2 = tr(Z Z^T Z Z^T) = tr(Z^T Z Z^T Z), and Z^T Z = I

  19. The Reduction ||u||^2 = tr(I) = d - k, since Z is d-by-(d-k), columns orthonormal.

  20. The Reduction ||u - v||^2 = (d - k) + ||q||^4 + 2 dist^2(S, q)
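
Putting slides 13-20 together, a sketch of the basic lift (my naming), with a sanity check of the final identity:

```python
import numpy as np

def lift_subspace(Z):
    """u = vec(Z Z^T); Z is d x (d-k), orthonormal columns spanning S's complement."""
    return (Z @ Z.T).ravel()

def lift_query(q):
    """v = -vec(q q^T)."""
    return -np.outer(q, q).ravel()

d, k = 6, 2
rng = np.random.default_rng(1)
Z, _ = np.linalg.qr(rng.standard_normal((d, d - k)))
q = rng.standard_normal(d)
u, v = lift_subspace(Z), lift_query(q)
dist_sq = np.linalg.norm(Z.T @ q) ** 2
# ||u - v||^2 = (d - k) + ||q||^4 + 2 dist^2(S, q)
assert np.isclose(np.sum((u - v) ** 2),
                  (d - k) + np.linalg.norm(q) ** 4 + 2 * dist_sq)
```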

  21. The Reduction • For query point q: the nearest mapped point minimizes ||u - v||^2 = (d - k) + ||q||^4 + 2 dist^2(S, q), i.e. it is the nearest subspace.

  22. The Reduction • For query point q: the nearest mapped point is the nearest subspace, up to the additive constants (d - k) + ||q||^4. • Can we decrease the additive constant? (Large constants shrink the relative gaps between distances, weakening (1 + ε)-approximate search.)

  23. Observation 1 • All data points lie on a hyperplane: the diagonal entries of every u sum to tr(Z Z^T) = d - k.

  24. Observation 1 • All data points lie on a hyperplane: tr(Z Z^T) = d - k. • Let: u' = u - ((d - k)/d) vec(I). • Now the hyperplane contains the origin.
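
A sketch of this centering step (assuming the lift above): subtracting ((d - k)/d) vec(I) makes every lifted subspace traceless, i.e., moves the common hyperplane to the origin.

```python
import numpy as np

def center_lifted_subspace(Z):
    d, dk = Z.shape                          # dk = d - k
    U = Z @ Z.T - (dk / d) * np.eye(d)       # traceless version of Z Z^T
    return U.ravel()

d, k = 6, 2
rng = np.random.default_rng(2)
Z, _ = np.linalg.qr(rng.standard_normal((d, d - k)))
u_centered = center_lifted_subspace(Z)
assert np.isclose(u_centered.reshape(d, d).trace(), 0.0)
```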

  25. Observation 2 • After hyperplane projection: • All data points lie on a hypersphere: ||u'||^2 = (d - k) - (d - k)^2/d = k(d - k)/d for every subspace.

  26. Observation 2 • After hyperplane projection: • All data points lie on a hypersphere. • Let: v' = the lifted query, projected onto the hyperplane and rescaled to norm sqrt(k(d - k)/d). • Now the query point lies on the hypersphere.

  27. Observation 2 • Rescaling v by a positive factor preserves the nearest neighbor, since ||u'|| is the same for every subspace.
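
A sketch of the query-side step (my construction, following the slide): project the lifted query onto the same traceless hyperplane, then rescale it to the data hypersphere's radius sqrt(k(d - k)/d).

```python
import numpy as np

def center_and_rescale_query(q, d, k):
    V = -np.outer(q, q)
    V -= (V.trace() / d) * np.eye(d)         # project onto the traceless hyperplane
    radius = np.sqrt(k * (d - k) / d)        # radius of the data hypersphere
    return radius * V.ravel() / np.linalg.norm(V)
```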

  28. Reduction Geometry • What is happening?

  29. Reduction Geometry • What is happening?

  30. Finally • The additive constants now depend only on the dimensions of the points and subspaces (d and k), not on the data. • This applies to linear subspaces, all of the same dimension.
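
A quick numerical check (my own, under the constructions sketched above) that the mapped distances still rank subspaces exactly by point-to-subspace distance:

```python
import numpy as np

d, k, n = 7, 3, 50
rng = np.random.default_rng(4)
Zs = [np.linalg.qr(rng.standard_normal((d, d - k)))[0] for _ in range(n)]
q = rng.standard_normal(d)

us = [(Z @ Z.T - ((d - k) / d) * np.eye(d)).ravel() for Z in Zs]
V = -np.outer(q, q)
V -= (V.trace() / d) * np.eye(d)
v = np.sqrt(k * (d - k) / d) * V.ravel() / np.linalg.norm(V)

mapped = np.array([np.linalg.norm(u - v) for u in us])
true = np.array([np.linalg.norm(Z.T @ q) for Z in Zs])
assert (np.argsort(mapped) == np.argsort(true)).all()
```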

  31. Extensions • subspaces of different dimension • e.g., lines and planes • Not all data points have the same norm. • Add an extra dimension to fix this.

  32. Extensions • subspaces of different dimension • e.g., lines and planes • Not all data points have the same norm. • Add an extra dimension to fix this (see the sketch below). • affine subspaces • Again, not all data points have the same norm.
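
A sketch of the extra-coordinate trick the slides allude to (my helper names): pad every lifted database point to a common norm with one extra coordinate, and give the query a 0 there. Distances then depend on the database point only through u · v, so the nearest neighbor is unchanged.

```python
import numpy as np

def pad_to_common_norm(points):
    """points: list of 1-D lifted vectors of possibly different norms."""
    m = max(float(np.dot(u, u)) for u in points)          # largest squared norm M
    return [np.append(u, np.sqrt(m - np.dot(u, u))) for u in points]

def pad_query(v):
    return np.append(v, 0.0)                              # query's extra coordinate is 0
```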

  33. Approximate Nearest Neighbor Search • Find a point x with d(x, q) ≤ (1 + ε) min_i d(x_i, q) • Tree-based approaches: KD-trees, metric/ball trees, cover trees • Locality-sensitive hashing • This paper uses multiple KD-trees with (different) random projections
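
An end-to-end sketch (not the authors' implementation, which uses multiple KD-trees with random projections): lift the subspaces as above, index them with a single KD-tree, and answer (1 + ε)-approximate queries; SciPy's cKDTree exposes ε through its eps argument.

```python
import numpy as np
from scipy.spatial import cKDTree

d, k, n = 8, 3, 1000
rng = np.random.default_rng(3)
Zs = [np.linalg.qr(rng.standard_normal((d, d - k)))[0] for _ in range(n)]
tree = cKDTree(np.array([(Z @ Z.T).ravel() for Z in Zs]))   # lifted database

q = rng.standard_normal(d)
_, idx = tree.query(-np.outer(q, q).ravel(), k=1, eps=0.5)  # approximate NN
exact = min(range(n), key=lambda i: np.linalg.norm(Zs[i].T @ q))
print(idx, exact)  # usually equal; eps trades accuracy for speed
```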

  34. KD-Trees • Decompose space into axis-aligned rectangles. (Image from Dan Pelleg.)

  35. Random Projections • Multiply the data by a random matrix X with X_ij drawn from N(0, 1) • Several different justifications: • Johnson-Lindenstrauss (data set that is small compared to the dimensionality) • Compressed sensing (data set that is sparse in some linear basis) • RP-trees (data set that has small doubling dimension)
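
A minimal Gaussian random projection sketch (my helper name): scale by 1/sqrt(m) so squared distances are preserved in expectation.

```python
import numpy as np

def random_project(X, m, rng):
    """X: (n, D) data; returns (n, m) projection with i.i.d. N(0, 1) entries."""
    R = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)
    return X @ R
```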

  36. Results • Two goals • show their method is fast • show nearest subspace is useful • Four experiments • Synthetic Experiments • Image Approximation • Yale Faces • Yale Patches

  37. Image Reconstruction

  38. Yale Faces

  39. Questions / Issues • Should random projections be applied before or after the reduction? • Why does the effective distance error go down with the ambient dimensionality? • The reduction tends to make query points far away from the points in the database. Are there better approximate nearest neighbor algorithms in this case?
