
Reading and Review Chapter 12: Indexing and Hashing



Presentation Transcript


  1. Reading and Review Chapter 12: Indexing and Hashing • 12.1 Basic Concepts • 12.2 Ordered Indices • 12.3 B+-tree Index Files • lots of stuff • 12.4 B-tree Index Files • 12.5 Static Hashing • lots of stuff • 12.6 Dynamic Hashing (Extendable Hashing) • lots of stuff • 12.7 Comparison of Ordered Indexing and Hashing • 12.9 Multiple-Key Access • Multiple-key indices, Grid files, Bitmap Indices • 12.10 Summary

  2. Reading and Review Chapter 13: Query Processing • 13.1 Overview • 13.2 Measures of Query Cost • 13.3 Selection Operation • 13.4 Sorting (Sort-Merge Algorithm) • 13.5 Join Operation • Nested-loop Join (regular, block, indexed) • Merge Join • Hash Join • Complex Joins • 13.6 Other Operations • 13.7 Evaluation of Expressions • 13.8 Summary

  3. Reading and Review Chapter 14: Query Optimization • 14.1 Overview • 14.2 Estimating Statistics of Expression Results • Catalog information • Selection size estimation • Join size estimation • Size estimation for other operations • Estimation of number of distinct values • 14.3 Transformation of Relational Expressions • Equivalence rules and examples • Join ordering, enumeration of equivalent expressions • 14.4 Choice of Evaluation Plans • from 14.4 on, ignore any section with a “**” at the end of the section title • 14.6 Summary

  4. Reading and Review Chapter 15: Transaction Management • 15.1 Transaction Concept • ACID properties • 15.2 Transaction State • state diagram of a transaction • (15.3 Implementation of Atomicity and Durability) • not important -- we covered this material better in section 17.4 • 15.4 Concurrent Execution and Scheduling • 15.5 Serializability (Conflict Serializability) • (ignore 15.5.2 View Serializability) • 15.6 Recoverability • (15.7 Isolation -- not important, see 16.1 on locking instead) • 15.9 Testing for Serializability (Precedence Graphs) • 15.10 Summary

  5. Reading and Review Chapters 16 and 17 • 16.1: Lock-Based Protocols • granting locks • deadlocks • two-phase locking protocol • ignore 16.1.4 and 16.1.5 • 16.6.3 Deadlock Detection and Recovery • Wait-for Graph • 17.1, 17.2.1 -- skim these sections. Material should be familiar to you as background, but I won’t be testing on definitions or memorization of it • 17.4: Log-Based Recovery • logs, redo/undo, basic concepts • checkpoints

  6. Indexing and Hashing: Motivation • Query response speed is a major issue in database design • Some queries only need to access a very small proportion of the records in a database • “Find all accounts at the Perryridge bank branch” • “Find the balance of account number A-101” • “Find all accounts owned by Zephraim Cochrane” • Checking every single record for the queries above is very inefficient and slow. • To allow fast access for those sorts of queries, we create additional structures that we associate with files: indices (index files).

  7. Basic Concepts • Indexing methods are used to speed up access to desired data • e.g. Author card catalog in a library • Search key -- an attribute or set of attributes used to look up records in a file. This use of the word key differs from that used before in class. • An index file consists of records (index entries) of the form: (search-key, pointer) where the pointer is a link to a location in the original file • Index files are typically much smaller than the original file • Two basic types of index: • ordered indices: search keys are stored in sorted order • hash indices: search keys are distributed uniformly across “buckets” using a “hash function”
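The (search-key, pointer) entry structure above can be sketched as a sorted list searched by binary search; the account numbers and file offsets below are invented illustration data, not from the text.

```python
import bisect

# Toy ordered index: sorted (search_key, pointer) entries, where each
# "pointer" is just a record offset into the original data file.
index = [("A-101", 0), ("A-215", 1), ("A-217", 2), ("A-305", 3)]

def index_lookup(index, key):
    """Binary-search the ordered index for an exact search-key match."""
    keys = [k for k, _ in index]
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return index[i][1]   # pointer into the original file
    return None              # search key not present
```

Because the index is much smaller than the data file, this lookup touches far fewer disk blocks than scanning every record.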

  8. Index Evaluation Metrics • We will look at a number of techniques for both ordered indexing and hashing. No one technique is best in all circumstances -- each has its advantages and disadvantages. The factors that can be used to evaluate different indices are: • Access types supported efficiently. • Finding records with a specified attribute value • Finding records with attribute values within a specified range of values • Access (retrieval) time. Finding a single desired tuple in the file. • Insertion time • Deletion time • Update time • Space overhead for the index

  9. Index Evaluation Metrics (2) • Speed issues are • Access time • Insertion time • Deletion time • Update time • Access is the operation that occurs the most frequently, because it is also used for insert, delete, update • For insert, delete, and update operations we must consider not only the time for the operation itself (inserting a new record into the file) but also any time required to update the index structure to reflect changes. • We will often want to have more than one index for a file • e.g., card catalogs for author, subject, and title in a library

  10. Ordered Indices • An ordered index stores the values of the search keys in sorted order • records in the original file may themselves be stored in some sorted order (or not) • the original file may have several indices, on different search keys • when there are multiple indices, a primary index is an index whose search key also determines the sort order of the original file. Primary indices are also called clustering indices • secondary indices are indices whose search key specifies an order different from the sequential order of the file. Also called non-clustering indices • an index-sequential file is an ordered sequential file with a primary index.

  11. Dense and Sparse Indices • A dense index is where an index record appears for every search-key value in the file • A sparse index contains index records for only some search-key values • applicable when records are sequentially ordered on the search key • to locate a record with search-key value K we must: • find index record with largest search-key value <=K • search file sequentially starting at the location pointed to • stop (fail) when we hit a record with search-key value >K • less space and less maintenance overhead for insert, delete • generally slower than dense index for locating records
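The three-step sparse-index lookup described above can be sketched as follows; the branch names, block layout, and one-entry-per-block index are assumptions made for illustration.

```python
import bisect

# Sparse index: one entry per block of the sequentially ordered file.
sparse_index = [("Brighton", 0), ("Mianus", 3), ("Redwood", 6)]
records = ["Brighton", "Downtown", "Downtown",     # block 0
           "Mianus", "Perryridge", "Perryridge",   # block 1
           "Redwood", "Round Hill"]                # block 2

def sparse_lookup(key):
    keys = [k for k, _ in sparse_index]
    i = bisect.bisect_right(keys, key) - 1   # largest index key <= key
    if i < 0:
        return None
    pos = sparse_index[i][1]
    while pos < len(records) and records[pos] <= key:  # sequential scan
        if records[pos] == key:
            return pos
        pos += 1
    return None   # stopped at a record with search-key value > key: fail
```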

  12. Example of a Dense Index

  13. Example of a Sparse Index

  14. Problems with Index-Sequential Files • Retrieve: search until the key value or a larger key value is found • individual key access BAD • scan the file in order of the key GOOD • Insert is hard -- all higher key records must be shifted to place the new record • Delete may leave holes • Update is equivalent to a combined delete and insert • updating a search key field may cause the combined disadvantages of an insert and a delete by shifting the record’s location in the sorted file • differential files are often used to hold the recent updates until the database can be reorganized off-line

  15. Multi-level Index • If the primary index does not fit in memory, access becomes expensive (disk reads cost) • To reduce the number of disk accesses to index records, we treat the primary index kept on disk as a sequential file and construct a sparse index on it • If the outer index is still too large, we can create another level of index to index it, and so on • Indices at all levels must be updated on insertion or deletion from the file

  16. Index Update: Deletion • When deletions occur in the primary file, the index will sometimes need to change • If the deleted record was the only record in the file with its particular search-key value, the search-key is deleted from the index also • Single-level index deletion: • dense indices -- deletion of search-key is similar to record deletion • sparse indices -- if an entry for the search key exists in the index, it is replaced by the next search key value in the original file (taken in search-key order) • Multi-level index deletion is a simple extension of the above

  17. Index Update: Insertion • As with deletions, when insertions occur in the primary file, the index will sometimes need to change • Single-level index insertion: • first perform a lookup using the search-key value appearing in the record to be inserted • dense indices -- if the search-key value does not appear in the index, insert it • sparse indices -- depends upon the design of the sparse index. If the index is designed to store an entry for each block of the file, then no change is necessary to the index unless the insertion creates a new block (if the old block overflows). If that happens, the first search-key value in the new block is inserted in the index • multi-level insertion is a simple extension of the above

  18. Secondary Index Motivation • Good database design is to have an index to handle all common (frequently requested) queries. • Queries based upon values of the primary key can use the primary index • Queries based upon values of other attributes will require other (secondary) indices.

  19. Secondary Indices vs. Primary Indices • Secondary indices must be dense, with an index entry for every search-key value, and a pointer to every record in the file. A primary index may be dense or sparse. (why?) • A secondary index on a candidate key looks just like a dense primary index, except that the records pointed to by successive values of the index are not sequential • when a primary index is not on a candidate key it suffices if the index points to the first record with a particular value for the search key, as subsequent records can be found with a sequential scan from that point

  20. Secondary Indices vs. Primary Indices (2) • When the search key of a secondary index is not a candidate key (i.e., there may be more than one tuple in the relation with a given search-key value) it isn’t enough to just point to the first record with that value -- other records with that value may be scattered throughout the file. A secondary index must contain pointers to all the records • so an index record points to a bucket that contains pointers to every record in the file with that particular search-key value

  21. Secondary Index on balance

  22. Primary and Secondary Indices • As mentioned earlier: • secondary indices must be dense • Indices offer substantial benefits when searching for records • When a file is modified, every index on the file must be updated • this overhead imposes limits on the number of indices • relatively static files will reasonably permit more indices • relatively dynamic files make index maintenance quite expensive • Sequential scan using primary index is efficient; sequential scan on secondary indices is very expensive • each record access may fetch a new block from disk (oy!)

  23. Disadvantages of Index-Sequential Files • The main disadvantage of the index-sequential file organization is that performance degrades as the file grows, both for index lookups and for sequential scans through the data. Although this degradation can be remedied through file reorganization, frequent reorgs are expensive and therefore undesirable. • Next we’ll start examining index structures that maintain their efficiency despite insertion and deletion of data: the B+-tree (section 12.3)

  24. B+- Tree Index Files • Main disadvantage of ISAM files is that performance degrades as the file grows, creating many overflow blocks and the need for periodic reorganization of the entire file • B+- trees are an alternative to indexed-sequential files • used for both primary and secondary indexing • B+- trees are a multi-level index • B+- tree index files automatically reorganize themselves with small local changes on insertion and deletion. • No reorg of entire file is required to maintain performance • disadvantages: extra insertion, deletion, and space overhead • advantages outweigh disadvantages. B+-trees are used extensively

  25. B+- Tree Index Files (2) Definition: A B+-tree of order n has: • All leaves at the same level • balanced tree (“B” in the name stands for “balanced”) • logarithmic performance • root has between 1 and n-1 keys • all other nodes have between ⌈n/2⌉-1 and n-1 keys, i.e., between ⌈n/2⌉ and n pointers (>= 50% space utilization) • we construct the tree with order n such that one node corresponds to one disk block I/O (in other words, each disk page read brings up one full tree node).

  26. B+- Tree Index Files (3) A B+-tree is a rooted tree satisfying the following properties: • All paths from the root to a leaf are the same length • Search for an index value takes time according to the height of the tree (whether successful or unsuccessful)

  27. B+- Tree Node Structure • The B+-tree is constructed so that each node (when full) fits on a single disk page • parameters: B: size of a block in bytes (e.g., 4096) K: size of the key in bytes (e.g., 8) P: size of a pointer in bytes (e.g., 4) • an internal node must have n such that: (n-1)*K + n*P <= B, so n <= (B+K)/(K+P) • with the example values above, this becomes n <= (4096+8)/(8+4) = 4104/12, so n <= 342
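The arithmetic can be checked directly. This quick sketch uses the example block, key, and pointer sizes from the slide:

```python
# An internal node with n pointers holds n-1 keys, so it fits in a block
# when (n-1)*K + n*P <= B, which rearranges to n <= (B+K)/(K+P).
B, K, P = 4096, 8, 4      # block, key, and pointer sizes in bytes
n = (B + K) // (K + P)    # maximum number of pointers per node
```

With these values n = 342, and 341*8 + 342*4 = 4096 bytes, so a full node exactly fills the block.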

  28. B+- Tree Node Structure (2) • Typical B+-tree Node Ki are the search-key values Pi are the pointers to children (for non-leaf nodes) or pointers to records or buckets of records (for leaf nodes) • the search keys in a node are ordered: K1 < K2 < K3 < … < Kn-1

  29. Non-Leaf Nodes in B+-Trees • Non-leaf nodes form a multi-level sparse index on the leaf nodes. For a non-leaf node with n pointers: • all the search keys in the subtree to which P1 points are less than K1 • For 2 <= i <= n-1, all the search keys in the subtree to which Pi points have values greater than or equal to Ki-1 and less than Ki

  30. Leaf Nodes in B+-Trees • As mentioned last class, primary indices may be sparse indices. So B+-trees constructed on a primary key (that is, where the search key order corresponds to the sort order of the original file) can have the pointers of their leaf nodes point to an appropriate position in the original file that represents the first occurrence of that key value. • Secondary indices must be dense indices. B+-trees constructed as a secondary index must have the pointers of their leaf nodes point to a bucket storing all locations where a given search key value occurs; this set of buckets is often called an occurrence file

  31. Example of a B+-tree • B+-tree for the account file (n=3)

  32. Another Example of a B+-tree • B+-tree for the account file (n=5) • Leaf nodes must have between 2 and 4 values ((n-1)/2 and (n-1), with n=5) • Non-leaf nodes other than the root must have between 3 and 5 children (n/2 and n, with n=5) • Root must have at least 2 children

  33. Observations about B+-trees • Since the inter-node connections are done by pointers, “logically” close blocks need not be “physically” close • The non-leaf levels of the B+-tree form a hierarchy of sparse indices • The B+-tree contains a relatively small number of levels (logarithmic in the size of the main file), thus searches can be conducted efficiently • Insertions and deletions to the main file can be handled efficiently, as the index can be restructured in logarithmic time (as we shall examine later in class)

  34. Queries on B+-trees • Find all records with a search-key value of k • start with the root node (assume it has m pointers) • examine the node for the smallest search-key value > k • if we find such a value, say at Kj , follow the pointer Pj to its child node • if no such value exists, then k >= Km-1, so follow Pm • if the node reached is not a leaf node, repeat the procedure above and follow the corresponding pointer • eventually we reach a leaf node. If we find a matching key value (our search value k = Ki for some i) then we follow Pi to the desired record or bucket. If we find no matching value, the search is unsuccessful and we are done.
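The descent described above can be sketched on a tiny two-level tree. The dict-based node layout and the account-number keys are assumptions for illustration, not the textbook's representation.

```python
# Internal nodes hold "keys" and "children"; leaves hold "keys" and
# "pointers" to records (illustration data).
leaf1 = {"leaf": True, "keys": ["A-101", "A-110"], "pointers": [0, 1]}
leaf2 = {"leaf": True, "keys": ["A-215", "A-305"], "pointers": [2, 3]}
root = {"leaf": False, "keys": ["A-215"], "children": [leaf1, leaf2]}

def btree_find(node, k):
    while not node["leaf"]:
        # follow the pointer for the smallest key value > k, else the
        # last pointer (k >= all keys in this node)
        for i, key in enumerate(node["keys"]):
            if k < key:
                node = node["children"][i]
                break
        else:
            node = node["children"][-1]
    if k in node["keys"]:
        return node["pointers"][node["keys"].index(k)]
    return None   # unsuccessful search
```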

  35. Queries on B+-trees (2) • Processing a query traces a path from the root node to a leaf node • If there are K search-key values in the file, the path is no longer than ⌈log⌈n/2⌉(K)⌉ • A node is generally the same size as a disk block, typically 4 kilobytes, and n is typically around 100 (40 bytes per index entry) • With 1 million search key values and n=100, at most ⌈log50(1,000,000)⌉ = 4 nodes are accessed in a lookup • In a balanced binary tree with 1 million search key values, around 20 nodes are accessed in a lookup • the difference is significant since every node access might need a disk I/O, costing around 20 milliseconds
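Both node counts above can be verified numerically, assuming the path-length bound with base ⌈n/2⌉ = 50:

```python
import math

# Worst-case B+-tree lookup path vs. a balanced binary tree,
# for 1 million search-key values and fanout n = 100.
K_count, n = 1_000_000, 100
bptree_height = math.ceil(math.log(K_count, n // 2))  # base ceil(n/2) = 50
binary_height = math.ceil(math.log2(K_count))
```

The B+-tree needs at most 4 node accesses where the binary tree needs about 20, and each access may cost a disk I/O.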

  36. Insertion on B+-trees • Find the leaf node in which the search-key value would appear • If the search key value is already present, add the record to the main file and (if necessary) add a pointer to the record to the appropriate occurrence file bucket • If the search-key value is not there, add the record to the main file as above (including creating a new occurrence file bucket if necessary). Then: • if there is room in the leaf node, insert (key-value, pointer) in the leaf node • otherwise, overflow. Split the leaf node (along with the new entry)

  37. Insertion on B+-trees (2) • Splitting a node: • take the n (search-key-value, pointer) pairs, including the one being inserted, in sorted order. Place half in the original node, and the other half in a new node. • Let the new node be p, and let k be the least key value in p. Insert (k, p) in the parent of the node being split. • If the parent becomes full by this new insertion, split it as described above, and propagate the split as far up as necessary • The splitting of nodes proceeds upwards until a node that is not full is found. In the worst case the root node may be split, increasing the height of the tree by 1.
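The leaf-split step above can be sketched as a small function; the list-based node layout is an assumption for illustration.

```python
def split_leaf(entries, new_entry):
    """Split a full leaf: take all n (key, pointer) pairs, including the
    one being inserted, in sorted order; keep half in the original node
    and move the rest to a new node. Returns (original, new, k), where
    k is the least key in the new node, to be inserted in the parent."""
    all_entries = sorted(entries + [new_entry])
    half = (len(all_entries) + 1) // 2
    return all_entries[:half], all_entries[half:], all_entries[half][0]
```

If inserting k into the parent overflows it too, the same split is applied there, propagating upward as the slide describes.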

  38. Insertion on B+-trees Example

  39. Deletion on B+-trees • Find the record to be deleted, and remove it from the main file and the bucket (if necessary) • If there is no occurrence-file bucket, or if the deletion caused the bucket to become empty, then delete (key-value, pointer) from the B+-tree leaf-node • If the leaf-node now has too few entries, underflow has occurred. If the active leaf-node has a sibling with few enough entries that the combined entries can fit in a single node, then • combine all the entries of both nodes in a single one • delete the (K,P) pair pointing to the deleted node from the parent. Follow this procedure recursively if the parent node underflows.

  40. Deletion on B+-trees (2) • Otherwise, if no sibling node is small enough to combine with the active node without causing overflow, then: • Redistribute the pointers between the active node and the sibling so that both of them have sufficient pointers to avoid underflow • Update the corresponding search key value in the parent node • No deletion occurs in the parent node, so no further recursion is necessary in this case. • Deletions may cascade upwards until a node with n/2 or more pointers is found. If the root node has only one pointer after deletion, it is removed and the sole child becomes the root (reducing the height of the tree by 1)
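The merge-versus-redistribute decision above can be sketched as follows; the list-based nodes and integer keys are assumptions for illustration.

```python
def handle_underflow(node, sibling, max_entries):
    """Merge the underflowed node with a sibling if the combined entries
    fit in one node; otherwise redistribute them evenly. Returns
    (left, right), with right=None when the nodes were merged."""
    combined = sorted(node + sibling)
    if len(combined) <= max_entries:
        return combined, None        # merged: parent drops one (K, P) pair
    half = len(combined) // 2
    # redistributed: the parent's separator key is updated, not deleted
    return combined[:half], combined[half:]
```

Only the merge case removes an entry from the parent, which is why only merging can cascade further up the tree.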

  41. Deletion on B+-trees Example 1

  42. Deletion on B+-trees Example 2

  43. Deletion on B+-trees Example 3

  44. B+-tree File Organization • B+-Tree Indices solve the problem of index file degradation. The original data file will still degrade upon a stream of insert/delete operations. • Solve data-file degradation by using a B+-tree file organization • Leaf nodes in a B+-tree file organization store records, not pointers into a separate original datafile • since records are larger than pointers, the maximum number of records that can be stored in a leaf node is less than the number of pointers in a non-leaf node • leaf nodes must still be maintained at least half full • insert and delete are handled in the same way as insert and delete for entries in a B+-tree index

  45. B+-tree File Organization Example • Records are much bigger than pointers, so good space usage is important • To improve space usage, involve more sibling nodes in redistribution during splits and merges (to avoid split/merge when possible) • involving one sibling guarantees 50% space use • involving two guarantees at least 2/3 space use, etc.

  46. B-tree Index Files • B-trees are similar to B+-trees, but search-key values appear only once in the index (eliminates redundant storage of key values) • search keys in non-leaf nodes don’t appear in the leaf nodes, so an additional pointer field for each search key in a non-leaf node must be stored to point to the bucket or record for that key value • leaf nodes look like B+-tree leaf nodes: (P1, K1, P2, K2, …, Pn) • non-leaf nodes look like so: (P1, B1, K1, P2, B2, K2, …, Pn) where the Bi are pointers to buckets or file records.

  47. B-tree Index File Example B-tree and B+-tree

  48. B-tree Index Files (cont.) • Advantages of B-tree Indices (vs. B+-trees) • May use fewer tree nodes than a B+-tree on the same data • Sometimes possible to find a specific key value before reaching a leaf node • Disadvantages of B-tree Indices • Only a small fraction of key values are found early • Non-leaf nodes are larger, so fanout is reduced, and B-trees may be slightly taller than B+-trees on the same data • Insertion and deletion are more complicated than on B+-trees • Implementation is more difficult than B+-trees • In general, advantages don’t outweigh disadvantages

  49. Hashing • We’ve examined Ordered Indices (design based upon sorting or ordering search key values); the other type of major indexing technique is Hashing • Underlying concept is very simple: • observation: small files don’t require indices or complicated search methods • use some clever method, based upon the search key, to split a large file into a lot of little buckets • each bucket is sufficiently small • use the same method to find the bucket for a given search key

  50. Hashing Basics • A bucket is a unit of storage containing one or more records (typically a bucket is one disk block in size) • In a hash file organization we find the bucket for a record directly from its search-key value using a hash function • A hash function is a function that maps from the set of all search-key values K to the set of all bucket addresses B • The hash function is used to locate records for access, insertion, and deletion • Records with different search-key values may be mapped to the same bucket • the entire bucket must be searched to find a record • buckets are designed to be small, so this task is usually not onerous
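The hash file organization above can be sketched in a few lines. The bucket count and the hash function are toy assumptions; a real system would use a function with much better distribution properties.

```python
NBUCKETS = 8   # number of bucket addresses (illustrative choice)
buckets = [[] for _ in range(NBUCKETS)]

def h(search_key):
    # toy hash function from search-key strings to bucket addresses
    return sum(ord(c) for c in search_key) % NBUCKETS

def hash_insert(key, record):
    buckets[h(key)].append((key, record))

def hash_lookup(key):
    # different search keys may map to the same bucket, so the whole
    # bucket is scanned for exact matches
    return [rec for k, rec in buckets[h(key)] if k == key]
```

Access, insertion, and deletion all start with the same h(key) computation, which is what makes hash lookup a single-step bucket address calculation rather than a tree descent.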
