
Locality Sensitive Hashing and Large Scale Image Search


Presentation Transcript


  1. Locality Sensitive Hashing and Large Scale Image Search Yunchao Gong UNC Chapel Hill yunchao@cs.unc.edu

  2. The problem • Large scale image search: • We have a candidate image • Want to search a large database to find similar images • Search the internet to find similar images • Fast • Accurate

  3. Large Scale Image Search in Database • Find similar images in a large database Kristen Grauman et al

  4. Large scale image search on the Internet • The Internet contains billions of images • Search the Internet for similar images • The Challenge: • Need a way of measuring similarity between images (distance metric learning) • Needs to scale to the Internet (How?)

  5. Large scale image search • Representation must fit in memory (disk too slow) • Facebook has ~10 billion images (10^10) • A PC has ~10 Gbytes of memory (10^11 bits) → Budget of 10^1 bits/image Fergus et al

  6. Requirements for image search • Search must be fast, accurate, and scalable to large data sets • Fast • Kd-trees: tree data structure to improve search speed • Locality Sensitive Hashing: hash tables to improve search speed • Small code: binary small code (010101101) • Scalable • Require very little memory, enabling their use on standard hardware or even on handheld devices • Accurate • Learned distance metric

  7. Categorization of existing large scale image search algorithms • Tree Based Structure • Spatial partitions (e.g. kd-trees) and recursive hyperplane decomposition provide an efficient means to search low-dimensional vector data exactly. • Hashing • Locality-sensitive hashing offers sub-linear time search by hashing highly similar examples together. • Binary Small Code • Compact binary code, with a few hundred bits per image

  8. Tree Based Structure • Kd-tree • The kd-tree is a binary tree in which every node is a k-dimensional point • (No theoretical guarantee!) They are known to break down in practice for high-dimensional data, and cannot provide better than a worst-case linear query time guarantee.
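A minimal kd-tree nearest-neighbor sketch using SciPy's cKDTree; the dataset, dimensionality, and query below are illustrative assumptions, not from the slides:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((10000, 8))        # 10k points in 8-D, where kd-trees still work well
tree = cKDTree(data)                 # recursive spatial partition built once

query = rng.random(8)
dists, idx = tree.query(query, k=5)  # exact 5-nearest-neighbor search
print(idx, dists)

In high dimensions the same query degenerates toward a linear scan, which is the breakdown noted above.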

  9. Locality Sensitive Hashing • Hashing methods to do fast Nearest Neighbor (NN) Search • Sub-linear time search by hashing highly similar examples together in a hash table • Take random projections of data • Quantize each projection with few bits • Strong theoretical guarantees • More detail later

  10. Binary Small Code • 1110101010101010 • Binary? • 0101010010101010101 • Only use binary code (0/1) • Small? • A small number of bits to code each image • e.g. 32 bits, 256 bits • How could this kind of small code improve the image search speed? More detail later.

  11. Details of these algorithms 1. Locality sensitive hashing • Basic LSH • LSH for learned metrics 2. Small binary code • Basic small code idea • Spectral hashing

  12. 1. Locality Sensitive Hashing • The basic idea behind LSH is to project the data into a low-dimensional binary (Hamming) space; that is, each data point is mapped to a b-bit vector, called the hash key. • Each hash function h must satisfy the locality sensitive hashing property: Pr[h(x_i) = h(x_j)] = sim(x_i, x_j), where sim(x_i, x_j) ∈ [0, 1] is the similarity function of interest M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-Sensitive Hashing Scheme Based on p-Stable Distributions. In SOCG, 2004. Kristen Grauman et al

  13. LSH functions for dot products • The LSH hash function used to produce the hash code is a random hyperplane separating the space: h_r(x) = 1 if r · x ≥ 0, and 0 otherwise, for a random vector r (see the next slide for an example)

  14. 1. Locality Sensitive Hashing • Take random projections of data • Quantize each projection with few bits • No learning involved [Figure: a feature vector is projected onto random hyperplanes; each projection is quantized to one bit, e.g. 1 0 1] Fergus et al
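A minimal NumPy sketch of the random-projection hashing described above; the feature dimension, number of bits, and data are illustrative assumptions:

import numpy as np

def lsh_hash(X, R):
    # one bit per random hyperplane: h_r(x) = 1 if r.x >= 0 else 0
    return (X @ R.T >= 0).astype(np.uint8)

rng = np.random.default_rng(0)
d, b = 128, 12                      # feature dimension, bits per hash key
R = rng.standard_normal((b, d))     # b random projections (no learning involved)
X = rng.standard_normal((1000, d))  # 1000 feature vectors
codes = lsh_hash(X, R)              # each row is a b-bit hash key, e.g. 1 0 1 ...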

  15. How to search from a hash table? [Figure: a set of N data points Xi is indexed into a hash table by hash functions h with random vectors r1…rk; a new query Q is hashed with the same functions (keys such as 110101, 110111, 111101) and only the images in its bucket, a small set << N, are returned as results] [Kristen Grauman et al, modified by me]
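A minimal sketch of the table lookup above: index the database by hash key, then compare the query only against its own bucket (a small set << N). Sizes and data are illustrative assumptions:

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, b = 128, 12
R = rng.standard_normal((b, d))                          # random hyperplanes r1...rk

def hash_key(x):
    return ((x @ R.T >= 0).astype(np.uint8)).tobytes()   # b-bit key, e.g. 110101

X = rng.standard_normal((10000, d))                      # N database points Xi
table = defaultdict(list)
for i, x in enumerate(X):
    table[hash_key(x)].append(i)                         # hash each image into the table

q = rng.standard_normal(d)                               # new query Q
candidates = table[hash_key(q)]                          # small candidate set to score exactly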

  16. Could we improve LSH? • Could we utilize a learned metric to improve LSH? • How to improve LSH with a learned metric? • Assume we have already learned a distance metric A from domain knowledge • The learned Mahalanobis distance (x − y)^T A (x − y) captures similarity better than simple metrics such as the Euclidean distance

  17. How to learn a distance metric? • First assume we have a set of domain knowledge (e.g. pairs of similar and dissimilar images) • Use the methods described in the last lecture to learn the distance metric A • As discussed before, A can be factored as A = G^T G • Thus x → Gx is a linear embedding function that embeds the data into a lower dimensional space • Define G = A^{1/2} (so that A = G^T G)

  18. LSH functions for learned metrics • Given a learned metric with A = G^T G, Gx can be viewed as a linear parametric function, or a linear embedding, of the data x • Thus the LSH function can be: h_{r,A}(x) = 1 if r^T G x ≥ 0, and 0 otherwise • The key idea is to first embed the data into a lower-dimensional space by G (data embedding) and then do LSH in the lower-dimensional space P. Jain, B. Kulis, and K. Grauman. Fast Image Search for Learned Metrics. In CVPR, 2008
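A minimal sketch of this idea: factor the learned matrix as A = G^T G, embed each point as Gx, then hash with random hyperplanes in the embedded space. The matrix A below is a random positive-definite stand-in for a learned metric, purely an illustrative assumption:

import numpy as np

rng = np.random.default_rng(0)
d, b = 64, 16

M = rng.standard_normal((d, d))
A = M.T @ M + 1e-3 * np.eye(d)      # stand-in for a learned metric (positive definite)
G = np.linalg.cholesky(A).T         # A = G^T G, so x -> Gx is the linear embedding
R = rng.standard_normal((b, d))     # random hyperplanes drawn in the embedded space

def lsh_learned(x):
    # h_{r,A}(x) = 1 if r^T G x >= 0 else 0
    return (R @ (G @ x) >= 0).astype(np.uint8)

print(lsh_learned(rng.standard_normal(d)))   # b-bit hash key under the learned metric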

  19. Some results for LSH • Caltech-101 data set • Goal: Exemplar-based Object Categorization • Some exemplars • Want to categorize the whole data set

  20. Results: object categorization • Caltech-101 database • ML = metric learning [Figure: categorization accuracy results] Kristen Grauman et al

  21. Question ? • Is hashing fast enough? • Is sub-linear search time fast enough? • The time for retrieving (1 + ε)-near neighbors is bounded by O(n^(1/(1+ε))) • Is it fast enough? • Is it scalable enough? (does it fit in the memory of a PC?)

  22. NO! • Small binary codes could do better. • Cast an image to a compact binary code, with a few hundred bits per image. • Small codes make it possible to perform real-time searches over millions of images from the Internet using a single large PC. • Within 1 second! (for 80 million images → 0.146 sec.) • 80 million images (~300 GB) → 120 MB

  23. Binary Small Code • First introduced in text search/retrieval • [3] introduced it for text document retrieval • Introduced to computer vision by Antonio Torralba et al. [4] Semantic Hashing. Ruslan Salakhutdinov and Geoffrey Hinton. International Journal of Approximate Reasoning, 2009. A. Torralba, R. Fergus, and Y. Weiss. Small Codes and Large Databases for Recognition. In CVPR, 2008.

  24. Semantic Hashing Semantic Hashing. Ruslan Salakhutdinov and Geoffrey Hinton. International Journal of Approximate Reasoning, 2009. [Figure: a query image is passed through a semantic hash function to produce a binary code, which is used as an address into the address space of the images in the database; the query address points to semantically similar images. Quite different to a (conventional) randomizing hash.] Fergus et al

  25. Semantic Hashing • Similar points are mapped to similar small codes • These codes are then stored in memory and compared by Hamming distance (very fast, carried out by hardware)

  26. Overall Query Scheme [Figure: the query image is converted to a feature vector (~1 ms in Matlab), a small binary code is generated from it (<10 μs), the database codes are stored in memory, and Hamming distances are computed in hardware to retrieve similar images (<1 ms)] Fergus et al

  27. Searching Framework • Produce binary codes (01010011010) • Store these binary codes in memory • Use hardware to compute the Hamming distances (very fast) • Sort the Hamming distances to get the final ranking results
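A minimal sketch of this framework, assuming the binary codes have already been produced (by LSH, an RBM, or spectral hashing); the database size and code length are illustrative assumptions, and NumPy bit operations stand in for the hardware popcount:

import numpy as np

rng = np.random.default_rng(0)
n, bits = 100000, 256

db_bits = rng.integers(0, 2, size=(n, bits), dtype=np.uint8)  # stand-in binary codes
db_codes = np.packbits(db_bits, axis=1)                       # store compactly in memory

def rank(query_bits, k=12):
    q_code = np.packbits(query_bits)
    xor = np.bitwise_xor(db_codes, q_code)                    # differing bits
    dists = np.unpackbits(xor, axis=1).sum(axis=1)            # Hamming distance per image
    return np.argsort(dists)[:k]                              # final ranking

print(rank(rng.integers(0, 2, size=bits, dtype=np.uint8)))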

  28. The problem is reduced to how to learn small binary codes • Simplest method (use the median) • LSH is already able to produce binary codes • Restricted Boltzmann Machines (RBM) • Optimal small binary codes by spectral hashing

  29. 1. Simple Binarization Strategy • Set a threshold (unsupervised), e.g. use the median [Figure: each feature dimension is thresholded to one bit, e.g. 0 1 0 1] Fergus et al
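A minimal sketch of this median-threshold binarization; the feature matrix is an illustrative assumption:

import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 32))      # 1000 images, 32-D features
thresh = np.median(feats, axis=0)            # unsupervised per-dimension threshold
codes = (feats > thresh).astype(np.uint8)    # 32-bit code per image, e.g. 0 1 0 1 ...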

  30. 2. Locality Sensitive Hashing • LSH is ready to generate binary codes (unsupervised) • Take random projections of data • Quantize each projection with few bits • No learning involved [Figure: as on slide 14, a feature vector is projected onto random hyperplanes and each projection is quantized to one bit, e.g. 1 0 1] Fergus et al

  31. 3. RBM [3] to generate codes • Not going into detail; see [3] for details • Use a deep neural network to train small codes • Supervised method R. R. Salakhutdinov and G. E. Hinton. Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In AISTATS, 2007.

  32. LabelMe retrieval • LabelMe is a large database with human-annotated images • The goal of this experiment is to: • First generate small codes • Use Hamming distance to search for similar images • Sort the results to produce the final ranking • Gist descriptor: ground truth

  33. Examples of LabelMe retrieval • 12 closest neighbors under different distance metrics Fergus et al

  34. Test set 2: Web images • 12.9 million images from the Tiny Images data set • Collected from the Internet • No labels, so the Euclidean distance between Gist vectors is used as the ground-truth distance • Note: the Gist descriptor is a kind of feature widely used in computer vision

  35. Examples of Web retrieval • 12 neighbors using different distance metrics Fergus et al

  36. Web images retrieval • Observation: longer codes (more bits) give better performance

  37. Retrieval Timings Fergus et al

  38. 4. Spectral hashing Y. Weiss, A. Torralba, and R. Fergus. Spectral Hashing. In NIPS, 2008. • Closely related to the problem of spectral graph partitioning • What makes a good code? • easily computed for a novel input • requires a small number of bits to code the full dataset • maps similar items to similar binary code words

  39. Spectral Hashing • To simplify the problem, first assume that the items have already been embedded in a Euclidean space • Try to embed the data into a Hamming space • A Hamming space is a binary space, e.g. 010101001… Fergus et al

  40. Some definitions • Let {y_i}, i = 1…n, be the list of code words (binary vectors of length k) for the n data points • W is the n × n affinity matrix characterizing similarities between data points, e.g. W_ij = exp(−‖x_i − x_j‖² / ε²)

  41. Objective function • Choose codes so that the average Hamming distance between similar points is minimal • What does this objective function mean?

  42. Objective of Spectral Hashing • Minimize the average Hamming distance between similar neighbors in the Euclidean space • Subject to: • The code is binary • Each bit has a 50% chance to be 0 or 1 • The bits are uncorrelated (these are the constraints bounding the objective)
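Written out with ±1 bits, following the formulation in the Spectral Hashing paper cited above, the objective and constraints are:

\begin{aligned}
\text{minimize: } & \sum_{ij} W_{ij}\,\lVert y_i - y_j \rVert^2 \\
\text{subject to: } & y_i \in \{-1, 1\}^k && \text{(the code is binary)} \\
& \textstyle\sum_i y_i = 0 && \text{(each bit is on half the time)} \\
& \tfrac{1}{n} \textstyle\sum_i y_i y_i^{\mathsf T} = I && \text{(the bits are uncorrelated)}
\end{aligned}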

  43. Graph illustration [Figure: points that are nearby in the Euclidean space should stay near each other in the Hamming space]

  44. Spectral Relaxation • Relaxing the binary constraint, we obtain an easy problem whose solutions are simply the k eigenvectors of D − W with minimal eigenvalues • Observation: similar to spectral graph partitioning • Can be solved by computing a generalized eigenvalue problem
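A minimal sketch of this relaxation on a toy training set: build an affinity matrix, form D − W, take the eigenvectors with the smallest non-trivial eigenvalues, and threshold them. The Gaussian affinity and all sizes are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))                    # data already embedded in Euclidean space
eps, k = 1.0, 4

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
W = np.exp(-sq / eps ** 2)                           # affinity matrix
D = np.diag(W.sum(axis=1))

vals, vecs = np.linalg.eigh(D - W)                   # eigenvalues in ascending order
Y = vecs[:, 1:k + 1]                                 # k eigenvectors with smallest non-trivial eigenvalues
codes = (Y > 0).astype(np.uint8)                     # threshold to get k-bit codes (training set only)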

  45. Problem? • This only tells us how to compute the code representation of items in the training set • What about the testing set? • Computing the codes for the testing set is called the “out-of-sample extension”

  46. Recall the problem • Compute the eigenvectors and eigenvalues of the graph D − W • The eigenproblem is computationally expensive: O(n^3) • It cannot handle very large data sets • The solution is to use eigenfunctions

  47. One-dimensional eigenfunctions • Multi-dimensional eigenfunctions are a difficult problem • One-dimensional eigenfunctions for simple distributions are well studied! • For example, for a 1D uniform distribution on [a, b]: eigenfunction Φ_k(x) = sin(π/2 + (kπ/(b − a)) x), eigenvalue λ_k = 1 − e^(−(ε²/2) |kπ/(b − a)|²)

  48. Finding independent coordinates • The problem is reduced to finding several independent one-dimensional coordinates • s1, s2, s3, …, sk are single coordinates • This means that the whole distribution can be separated (factorized) along these coordinates • The same holds for the eigenvectors and eigenvalues

  49. The spectral hashing algorithm • Select a set of n data points • Find k independent coordinates from the data (e.g. by PCA) • For each coordinate, assume the data distribution is uniform and learn the analytical eigenfunctions (using the closed form on slide 47) • Use the analytical eigenfunctions to obtain the eigenvectors and eigenvalues for the whole data set • Choose the top k eigenfunctions (those with the smallest eigenvalues) from all the eigenfunctions learned • Threshold the analytical eigenfunctions to obtain binary codes (a simplified sketch of these steps follows)
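A simplified sketch of these steps, assuming (as the algorithm does) a separable, roughly uniform distribution along PCA directions; the data, bit budget, and number of candidate modes are illustrative assumptions, and the exact eigenvalue formula is replaced by the equivalent ordering by mode frequency:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((2000, 8))                        # n data points
nbits, modes = 16, 10                            # bits per code, candidate modes per coordinate

# 1-2. find independent coordinates via PCA
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = (X - mean) @ Vt.T

# 3-4. per coordinate, fit a uniform range [a, b]; analytical eigenvalues grow with k*pi/(b-a)
a, b = P.min(axis=0), P.max(axis=0)
dims, ks = np.meshgrid(np.arange(P.shape[1]), np.arange(1, modes + 1), indexing="ij")
omegas = ks * np.pi / (b[dims] - a[dims])        # mode frequencies; smaller frequency = smaller eigenvalue

# 5. choose the nbits eigenfunctions with smallest eigenvalues
order = np.argsort(omegas.ravel())[:nbits]
sel_dim, sel_omega = dims.ravel()[order], omegas.ravel()[order]

# 6. threshold the analytical eigenfunctions sin(pi/2 + omega * x) to obtain binary codes
def encode(Xnew):
    Pnew = (Xnew - mean) @ Vt.T
    vals = np.sin(np.pi / 2 + sel_omega * (Pnew[:, sel_dim] - a[sel_dim]))
    return (vals > 0).astype(np.uint8)           # nbits-bit binary code per point

codes = encode(X)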

  50. Results for spectral hashing • Synthetic results on uniform distribution • LabelMe retrieval results using spectral hashing to produce small binary code
