
Hash-Based Indexes




Presentation Transcript


  1. Hash-Based Indexes CPSC 461 Instructor: Marina Gavrilova

  2. Goal • The goal of today’s lecture is to introduce the concept of a “hash index”. Hashing is an alternative to tree structures that allows near-constant access time to ANY record in a very large database.

  3. Presentation Outline • Introduction • Did you know that? • Static hashing definition and methods • Good hash function • Collision resolution (open addressing, chaining) • Extendible hashing • Linear hashing • Useful links and current market trends • Summary

  4. Introduction to Hashing • Approaches to Search: • 1. Sequential and list methods (lists, tables, arrays) • 2. Direct access by key value (hashing) • 3. Tree indexing methods

  5. Introduction to Hashing • Definition • Hashing is the process of mapping a key value to a position in a table. A hash function maps key values to positions. A hash table is an array that holds the records. Searching in a hash table can be done in O(1) expected time, regardless of the hash table size.
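As a minimal sketch of this definition (Python; the table size and names are illustrative, not from the course):

```python
# A hash function maps a key value to a position; the hash table is an
# array holding the records. Searching costs one hash computation plus
# one array access, independent of how many records exist.
TABLE_SIZE = 10
table = [None] * TABLE_SIZE

def h(key):
    return key % TABLE_SIZE          # maps any key to a position 0..9

def insert(key, record):
    table[h(key)] = record           # collisions ignored here; see later slides

def search(key):
    return table[h(key)]             # O(1) regardless of table size

insert(42, "record 42")
print(search(42))                    # -> "record 42"
```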

  6. Introduction to Hashing • [Figure omitted from the transcript.]

  7. Introduction to Hashing • Example of Usefulness • 10 stock details, 10 table positions • Stock numbers are between 0 and 1000. Using the stock numbers themselves as addresses would require 1000 storage locations, an obvious waste of memory; hashing maps them into the 10 available positions.

  8. Introduction to Hashing • Applications of Hashing • Compilers use hash tables to keep track of declared variables • A hash table can be used for on-line spelling checkers: if misspelling detection (rather than correction) is important, an entire dictionary can be hashed and words checked in constant time • Game-playing programs use hash tables to store seen positions, saving computation time if a position is encountered again • Hash functions can be used to quickly check for inequality: if two elements hash to different values, they must be different • Storing sparse data

  9. Did you know that? • Cryptography was once known only to key people in the National Security Agency and a few academics. • Until 1996, it was illegal to export strong cryptography from the United States. • Fast forward to 2006, and the Payment Card Industry Data Security Standard (PCI DSS) requires merchants to encrypt cardholder information. Visa and MasterCard can levy fines of up to $500,000 for not complying! • Among the recommended methods are: • Strong one-way hash functions (hashed indexes) • Truncation • Index tokens and pads (pads must be securely stored) • Strong cryptography [Hashing for Fun and Profit: Demystifying Encryption for PCI DSS, Roger Nebel]

  10. Did you know that? • Transport Layer Security protocol on networks (TLS) uses the Rivest, Shamir, and Adleman (RSA) public key algorithm for the TLS key exchange and authentication, and the Secure Hashing Algorithm 1 (SHA-1) for the key exchange and hashing. [System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing, Microsoft TechNews, 2005]

  11. Did you know that? • Spatial hashing studies performed at Microsoft Research, Redmond, combine hashing with computer graphics to create a new set of tools for rendering, mesh reconstruction, and collision optimization (see the public poster by Sylvain Lefebvre and Hugues Hoppe, summarized on the next slide)

  12. Perfect Spatial Hashing — Sylvain Lefebvre and Hugues Hoppe (Microsoft Research) • We design a perfect hash function to losslessly pack sparse data while retaining efficient random access. Simply (modulo the table sizes): hash(p) = h0(p) + offset_table[h1(p)], i.e. a hash table lookup corrected by a small offset table. • Perfect hash on multidimensional (2D and 3D) data: no collisions, hence ideal for the GPU • Single lookup into a small offset table; offsets cost only ~4 bits per defined data entry; access takes only ~4 instructions on the GPU • Optimized spatial coherence • Applications: 3D textures, 3D painting, vector images, sprite maps, simulation, collision detection, alpha compression • [Poster images, table sizes, memory footprints and frame rates omitted from the transcript.]

  13. Did you know that? • Combining hashing and encryption provides a much stronger tool for database and password protection. • http://msdn.microsoft.com/msdnmag/issues/03/08/SecurityBriefs/ [Security Briefs, MSDN Magazine]

  14. Hash Functions • Hash Functions • Hashing is the process of chopping up the key and mixing it in various ways in order to obtain an index which will be uniformly distributed over the range of indices; hence the name ‘hashing’. There are several common ways of doing this: • Truncation • Folding • Modular Arithmetic

  15. Hash Functions • Hash Functions – Truncation • Truncation is a method in which parts of the key are ignored and the remaining portion becomes the index: we take the given key and produce a hash location from portions of the key (truncating the key). • Example – If a hash table can hold 1000 entries and an 8-digit number is used as the key, the 3rd, 5th and 7th digits (counting from the left) could be used to produce the index; e.g., the key 62538194 hashes to location 589. • Advantage: simple and easy to implement. • Problems: clustering and repetition.

  16. Hash Functions • Hash Functions – Folding • Folding breaks the key into several parts and combines the parts to form an index. • The parts may be recombined by addition, subtraction or multiplication, and may have to be truncated as well. • Such a process is usually better than truncation by itself, since it produces a better distribution: all of the digits in the key are considered. • Breaking the key 62538194 into parts of 3, 3 and 2 digits gives 625, 381 and 94. These can be added to get 1100, which can be truncated to 100. They could also be multiplied together and three digits chosen from the middle of the product.

  17. Hash Functions • Hash Functions – (Modular Arithmetic) • Modular arithmetic essentially ensures that the index produced is within a specified range: the key is converted to an integer, which is divided by the range of the index, and the remainder becomes the index. Uses: biometrics, encryption, compression. • If the value of the modulus is a prime number, the distribution of indices obtained is quite uniform. • A table whose size has many factors makes it likely that many keys produce the same index, so the size should be a prime number.
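A sketch of the three methods in Python (the digit positions and groupings follow the 62538194 example above; the function names are illustrative):

```python
def truncation_hash(key):
    """Keep the 3rd, 5th and 7th digits (from the left) of an 8-digit key."""
    d = str(key).zfill(8)
    return int(d[2] + d[4] + d[6])                    # 62538194 -> 589

def folding_hash(key):
    """Break the key into parts of 3 + 3 + 2 digits and add them."""
    d = str(key).zfill(8)
    total = int(d[0:3]) + int(d[3:6]) + int(d[6:8])   # 625 + 381 + 94 = 1100
    return total % 1000                               # truncate to 3 digits -> 100

def modular_hash(key, table_size=997):
    """Divide by the table size (a prime) and keep the remainder."""
    return key % table_size

print(truncation_hash(62538194))   # 589
print(folding_hash(62538194))      # 100
print(modular_hash(62538194))      # an index in 0..996
```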

  18. Hash Functions • Good Hash Functions • Hash functions which use all of the key are almost always better than those which use only some of the key. • When only portions are used, information is lost and the number of possibilities for the final index is reduced. • If we deal with the integer in its binary form, the number of pieces that can be manipulated by the hash function is greatly increased.

  19. Collision Resolution • Collision • No matter what hash function is used, the possibility exists that it will produce an index which duplicates an index that already exists. This is a Collision. Collision resolution strategies: • Open addressing: store the key/entry in a different position • Chaining: chain together several keys/entries in each position

  20. Collision Resolution • Collision – Example • Hash table size: 11 • Hash function: key MOD hash size • [Table of keys and their resulting positions omitted from the transcript.] Some collisions occur with this hash function.

  21. Collision Resolution • Collision Resolution – Open Addressing • Open addressing resolves a collision by taking the next open space, as determined by rehashing the key according to some algorithm. • Two main open addressing techniques: • Linear probing: increase the position by 1 each time [mod table size!] • Quadratic probing: to the original position, add 1, 4, 9, 16, … • In some cases a key-dependent increment technique is also used. • Probing: if the table position given by the hashed key is already occupied, increase the position by some amount until an empty position is found.

  22. Collision Resolution • Collision Resolution – Open Addressing • Linear Probing: new position = (current position + 1) MOD hash size • Example – [before/after tables omitted from the transcript] • Problem – Clustering occurs; that is, the used spaces tend to appear in groups which tend to grow, increasing the search time to reach an open space.
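A minimal open-addressing sketch with linear probing (hash size 11, matching the earlier example; names are illustrative):

```python
HASH_SIZE = 11
table = [None] * HASH_SIZE

def insert(key):
    pos = key % HASH_SIZE
    for _ in range(HASH_SIZE):           # probe at most table-size slots
        if table[pos] is None:
            table[pos] = key
            return pos
        pos = (pos + 1) % HASH_SIZE      # linear probe: +1 mod hash size
    raise RuntimeError("hash table full")

def search(key):
    pos = key % HASH_SIZE
    for _ in range(HASH_SIZE):
        if table[pos] == key:
            return pos
        if table[pos] is None:           # empty slot: the key is absent
            return None
        pos = (pos + 1) % HASH_SIZE
    return None

for k in (23, 34, 45):                   # 23, 34, 45 all hash to 1 (mod 11)
    insert(k)
print(search(45))                        # -> 3, found after probing 1, 2, 3
```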

  23. Collision Resolution • Collision Resolution – Open Addressing • In order to avoid clustering, a method which does not look for the first open space must be used. • Two common methods are: • Quadratic Probing • Key-dependent Increments

  24. Collision Resolution • Collision Resolution – Open Addressing • Quadratic Probing: new position = (collision position + j^2) MOD hash size { j = 1, 2, 3, 4, … } • Example – [before/after tables omitted from the transcript] • Problem – Overflow may occur even while there is still space in the hash table.
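The only change from linear probing is the probe sequence; a short sketch (hash size 11 as before) that also demonstrates the overflow problem:

```python
def quadratic_probes(key, hash_size=11):
    """Positions tried for a key: home, home+1, home+4, home+9, ...
    (all mod hash size)."""
    home = key % hash_size
    return [(home + j * j) % hash_size for j in range(hash_size)]

# 23 hashes to 1; the sequence revisits slots and never reaches positions
# 0, 3, 7, 8 or 9: the overflow problem mentioned above.
print(quadratic_probes(23))   # [1, 2, 5, 10, 6, 4, 4, 6, 10, 5, 2]
```

Only six of the eleven slots (1, 2, 4, 5, 6, 10) are ever probed, so an insertion can fail while the table still has room.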

  25. Collision Resolution • Collision Resolution – Open Addressing • Key-dependent Increments • This technique is used to solve the overflow problem of the quadratic probing method. • The increments vary according to the key used for the hash function. If the original hash function results in a good distribution, then key-dependent functions work quite well for rehashing, and all locations in the table will eventually be probed for a free position. • Key-dependent increments are determined by using the key to calculate a new value, which is then used as an increment to determine successive probes.

  26. Collision Resolution • Collision Resolution – Open Addressing • Key-dependent Increments • For example, since the original hash function was key MOD 11, we might derive the increment from the key with a second function such as key MOD 7 or (key DIV 11) MOD 11. The probe step then becomes: new position = (current position + increment) MOD hash size. • Example – [before/after tables omitted from the transcript]
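A hedged sketch of one such scheme (double-hashing style; the `1 +` guard prevents the zero increment discussed on the next slide). Because the table size 11 is prime, any nonzero step eventually probes every position:

```python
HASH_SIZE = 11

def key_dependent_probes(key):
    """Probe sequence whose step size is derived from the key itself."""
    pos = key % HASH_SIZE
    step = 1 + key % 7                 # key-dependent increment, never zero
    seq = []
    for _ in range(HASH_SIZE):
        seq.append(pos)
        pos = (pos + step) % HASH_SIZE
    return seq

# The step for key 23 is 1 + 23 % 7 = 3; all eleven positions get probed.
print(key_dependent_probes(23))   # [1, 4, 7, 10, 2, 5, 8, 0, 3, 6, 9]
```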

  27. Collision Resolution • Collision Resolution – Open Addressing • Key-dependent Increments • In all closed hashing (open addressing) schemes it is important to ensure that an increment of 0 never arises. An increment equal to the hash size is equivalent to 0 (the same position would be probed every time), so that value cannot be used either. • If the hash size is prime, the divisors used by the hash and rehash functions are prime, and the rehash function never produces a 0 increment, then this method will usually access all positions, as linear probing does. • A key-dependent method usually reduces clustering, so searches for an empty position should not take as long as with the linear method.

  28. Collision Resolution • Collision Resolution – Chaining • Each table position is a linked list • Add the keys and entries anywhere in the list (the front is easiest) • Advantages over open addressing: simpler insertion and removal; the array size is not a limitation (but collisions should still be minimized: make the table size roughly equal to the expected number of keys and entries) • Disadvantage: memory overhead is large if entries are small

  29. Collision Resolution • Collision Resolution – Chaining • Example: [before/after tables omitted from the transcript]
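A minimal chaining sketch (Python lists stand in for the linked lists; hash size 11 as in the earlier examples):

```python
HASH_SIZE = 11
table = [[] for _ in range(HASH_SIZE)]    # one chain per table position

def insert(key, entry):
    table[key % HASH_SIZE].insert(0, (key, entry))   # front of list: easiest

def search(key):
    for k, entry in table[key % HASH_SIZE]:          # walk the chain
        if k == key:
            return entry
    return None

insert(23, "first"); insert(34, "second")   # both hash to position 1 and chain
print(search(34))                           # -> "second"
```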

  30. Analysis of Searching using Hash Tables • In analyzing search efficiency, the average is usually used. Searching with hash tables is highly dependent on how full the table is, since as the table approaches a full state more rehashes are necessary. The proportion of the table which is full is called the Load Factor. • When collisions are resolved using open addressing, the maximum load factor is 1. • Using chaining, however, the load factor can be greater than 1, since when the table is full the linked list attached to each hash address can hold more than one element. • Chaining consistently requires fewer probes than open addressing. • Traversal of the linked list is slow, however, and if the records are small it may be just as well to use open addressing. • Chaining is best under two conditions: when the number of unsuccessful searches is large, or when the records are large. • Open addressing is a reasonable choice when most searches are likely to be successful, the load factor is moderate, and the records are relatively small.

  31. Analysis of Searching using Hash Tables • Average number of probes for different collision resolution methods: [table omitted from the transcript; the values are for large hash tables, in this case larger than 500]
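Since the slide's table is not reproduced here, the standard textbook approximations (due to Knuth) for the average number of probes at load factor λ are given below as a hedged supplement; the original table's values may differ slightly.

```latex
% Average probes at load factor \lambda; S = successful, U = unsuccessful.
\begin{align*}
\text{Chaining:}       \quad & S \approx 1 + \tfrac{\lambda}{2},
                             & U \approx \lambda \\
\text{Linear probing:} \quad & S \approx \tfrac{1}{2}\left(1 + \tfrac{1}{1-\lambda}\right),
                             & U \approx \tfrac{1}{2}\left(1 + \tfrac{1}{(1-\lambda)^{2}}\right)
\end{align*}
```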

  32. Analysis of Searching using Hash Tables • When are other representations more suitable than hashing? • Hash tables are very good if there is a need for many searches in a reasonably stable database • Hash tables are not so good if there are many insertions and deletions, or if table traversals are needed; in those cases, trees are better for indexing • Also, hashing is very slow for any operation which requires the entries to be sorted (e.g., a query to find the minimum key)

  33. Perfect Hashing • A perfect hashing function maps a key into a unique address. If the range of potential addresses is the same as the number of keys, the function is a minimal (in space) perfect hashing function. • What makes perfect hashing distinctive is that it is a process for mapping a key space to a unique address in a smaller address space, that is, hash(key) → unique address. • Not only does a perfect hashing function improve retrieval performance, but a minimal perfect hashing function would provide 100 percent storage utilization.

  34. Perfect Hashing • Process of creating a perfect hash function • A general form of a perfect hashing function is: p.hash(key) = (h0(key) + g[h1(key)] + g[h2(key)]) mod N

  35. Cichelli’s Algorithm • In Cichelli’s algorithm, the component functions are: h0 = length(key), h1 = first_character(key), h2 = last_character(key), and g = T(x), where T is a table of values associated with the individual characters x that may appear in a key. The time-consuming part of Cichelli’s algorithm is determining T.

  36. Cichelli’s Algorithm • Table 1: Values associated with the characters of the Pascal reserved words [table values omitted from the transcript] • When we apply Cichelli’s perfect hashing function to the keyword begin using Table 1, the keyword begin is stored in location 33. Since the hash values run from 2 through 37 for this set of data, the hash function is a minimal perfect hashing function.
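A sketch of the resulting function in Python. The character values in T below are hypothetical stand-ins for Table 1, chosen only so that the begin example lands at location 33 as stated above:

```python
# Hypothetical character-value table standing in for Table 1; finding
# values that make the function perfect (and minimal) is the hard,
# time-consuming part of Cichelli's algorithm.
T = {'b': 14, 'e': 3, 'g': 0, 'i': 9, 'n': 14}   # illustrative values only

def cichelli_hash(key):
    # h0 = length(key); g = T applied to the first and last characters
    return len(key) + T[key[0]] + T[key[-1]]

print(cichelli_hash("begin"))   # 5 + T['b'] + T['n'] = 5 + 14 + 14 = 33
```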

  37. Some Links to Hashing Animation Links for interactive hashing examples: • http://www.engin.umd.umich.edu/CIS/course.des/cis350/hashing/WEB/HashApplet.htm • http://www.cs.auckland.ac.nz/software/AlgAnim/hash_tables.html • http://www.cse.yorku.ca/~aaw/Hang/hash/Hash.html • http://www.cs.pitt.edu/~kirk/cs1501/animations/Hashing.html

  38. Hashing as Database Index • The basic idea is to use a hashing function, which maps a search key value (of a field) to a record or a bucket of records. • As for any index, 3 alternatives for data entries k*: • Data record with key value k • <k, rid of data record with search key value k> • <k, list of rids of data records with search key k> • Hash-based indexes are best for equality selections. They cannot support range searches. • Static and dynamic hashing techniques exist; the trade-offs are similar to ISAM vs. B+ trees.

  39. Static Hashing • # primary pages fixed, allocated sequentially, never de-allocated; overflow pages allowed if needed. • h(k) mod M = bucket to which the data entry with key k belongs (M = # of buckets). • [Diagram: h(key) mod M directs each key to one of the primary bucket pages 0 .. M-1; each primary page may have a chain of overflow pages.]

  40. Static Hashing (Contd.) • Buckets contain data entries. • The hash function works on the search key field of record r. It must distribute values over the range 0 ... M-1 (table size). • h(key) = (a * key + b) mod M usually works well. • a and b are constants; a lot is known about how to tune h. • Long overflow chains can develop and degrade performance. • Extendible and Linear Hashing: dynamic techniques to fix this problem.
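For concreteness, a tiny sketch of such a function (the constants and table size are illustrative, not tuned values):

```python
M = 97              # number of buckets; a prime works well
a, b = 31, 7        # constants of the hash; tuning them is well studied

def h(key):
    return (a * key + b) % M     # distributes key values over 0 .. M-1

print(h(62538194))               # the bucket number for this key
```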

  41. Rule of thumb • Try to keep space utilization between 50% and 80% • If < 50%, space is being wasted • If > 80%, overflow becomes significant • The exact thresholds depend on how good the hash function is and on the # of keys per bucket.

  42. Hash Functions for Extendible Hashing • Extendible Hashing (Fagin et al. 1979) • Expandable Hashing (Knott 1971) • Dynamic Hashing (Larson 1978)

  43. Extendible Hashing Assume that a hashing technique is applied to a dynamically changing file composed of buckets, and each bucket can hold only a fixed number of items. Extendible hashing accesses the data stored in buckets indirectly through an index that is dynamically adjusted to reflect changes in the file. The characteristic feature of extendible hashing is the organization of the index, which is an expandable table.

  44. Extendible Hashing • A hash function applied to a certain key indicates a position in the index, not in the file (or table of keys). Values returned by such a hash function are called pseudokeys. • The database/file requires no reorganization when data are added to or deleted from it, since these changes are indicated in the index. Only one hash function h is used, but depending on the size of the index, only a portion of the address h(K) is utilized. • A simple way to achieve this effect is to view the address as a string of bits of which only the i leftmost bits are used. The number i is the depth of the directory. In Figure 1(a) (on the next slide), the depth is equal to two.

  45. Example Figure 1. An example of extendible hashing (Drozdek Textbook)

  46. Extendible Hashing as Index • Situation: a bucket (primary page) becomes full. Why not re-organize the file by doubling the # of buckets? • Reading and writing all pages is expensive! • Idea: use a directory of pointers to buckets; double the # of buckets by doubling the directory, splitting just the bucket that overflowed! • The directory is much smaller than the file, so doubling it is much cheaper. Only one page of data entries is split. No overflow page! • The trick lies in how the hash function is adjusted!

  47. Example • [Diagram: directory of size 4 with GLOBAL DEPTH 2; bucket A holds 16*, 4*, 12*, 32*; bucket B holds 1*, 5*, 21*, 13*; bucket C holds 10*; bucket D holds 15*, 7*, 19*; each bucket has LOCAL DEPTH 2.] • The directory is an array of size 4. • To find the bucket for r, take the last `global depth’ # of bits of h(r); we denote r by h(r). If h(r) = 5 = binary 101, it is in the bucket pointed to by 01. • Insert: if the bucket is full, split it (allocate a new page, re-distribute). If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing the global depth with the local depth of the split bucket.)

  48. Insert h(r)=20 (Causes Doubling) • [Diagram: inserting 20 into full bucket A forces a split. GLOBAL DEPTH goes from 2 to 3 and the directory doubles from 4 entries (00..11) to 8 entries (000..111). Bucket A keeps 32*, 16* and its `split image’ A2 receives 4*, 12*, 20*; both now have LOCAL DEPTH 3, while buckets B (1*, 5*, 21*, 13*), C (10*) and D (15*, 7*, 19*) keep LOCAL DEPTH 2.]

  49. Points to Note • 20 = binary 10100. Last 2 bits (00) tell us r belongs in A or A2. Last 3 bits needed to tell which. • Global depth of directory: Max # of bits needed to tell which bucket an entry belongs to. • Local depth of a bucket: # of bits used to determine if an entry belongs to this bucket. • When does bucket split cause directory doubling? • Before insert, local depth of bucket = global depth. Insert causes local depth to become > global depth; directory is doubled by copying it over and `fixing’ pointer to split image page. (Use of least significant bits enables efficient doubling via copying of directory!)

  50. Directory Doubling • Why use least significant bits in the directory? Allows for doubling via copying! • [Diagram: for key 6 = 110, a directory indexed by least-significant bits doubles by simply copying its entries, with the old directory becoming both halves of the new one, whereas a directory indexed by most-significant bits would have to interleave its entries.]
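Putting slides 46-50 together, a hedged, minimal sketch of extendible hashing (identity hash and illustrative names, not the course's code). The `directory * 2` line is exactly the doubling-via-copying that least-significant-bit indexing enables:

```python
BUCKET_CAPACITY = 4          # illustrative: four data entries per bucket

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]      # indexed by last bits of key

    def _index(self, key):
        return key & ((1 << self.global_depth) - 1)  # last `global depth` bits

    def search(self, key):
        return key in self.directory[self._index(key)].keys

    def insert(self, key):
        bucket = self.directory[self._index(key)]
        if len(bucket.keys) < BUCKET_CAPACITY:
            bucket.keys.append(key)
            return
        if bucket.local_depth == self.global_depth:
            self.directory = self.directory * 2      # double via copying!
            self.global_depth += 1
        self._split(bucket)
        self.insert(key)                             # retry after the split

    def _split(self, bucket):
        bucket.local_depth += 1
        image = Bucket(bucket.local_depth)           # the bucket's `split image`
        bit = 1 << (bucket.local_depth - 1)          # newly significant bit
        old, bucket.keys = bucket.keys, []
        for k in old:                                # re-distribute the entries
            (image if k & bit else bucket).keys.append(k)
        for i, b in enumerate(self.directory):       # re-aim half the pointers
            if b is bucket and (i & bit):
                self.directory[i] = image

eh = ExtendibleHash()
for k in (32, 16, 4, 12, 20):      # bucket A's keys; inserting 20 forces a split
    eh.insert(k)
print(eh.global_depth)             # 3: the directory doubled, as in slide 48
print(eh.search(20))               # True
```

Running this leaves 32 and 16 in one bucket and 4, 12, 20 in its split image, matching the before/after picture on slide 48.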
