SQL Unit 19: Data Management: Databases and Organizations, Richard Watson. Summary of Selections from Chapter 11 prepared by Kirk Scott
Outline of Topics • Relationship between O/S and dbms • Indexing • Hashing • File organization and access • Joining • B+ trees
Relationship between O/S and dbms • Most performance concerns in dbms internals eventually hinge on secondary storage • In other words, by definition, a db is persistent, stored on disk • The dbms is responsible for managing and accessing this data on disk • As such, dbms internals are either closely related to or integrated with O/S functions
In general, db records may be longer or shorter than O/S pages • It’s convenient to think of them as shorter—many records per page • The performance goal can be stated succinctly: • Keep paging to a minimum
If the dbms and O/S are integrated, the db administrator may have the ability to specify physical storage characteristics of tables: • Clustering in sectors, tracks, cylinders, etc.
Recall that in relational db’s, everything is value based • There is no such thing as following linked data structures in order to find related data • One of the fundamental problems of implementing dbms internals is mapping from values to locations (in secondary storage)
Indexing • Indexes were introduced when considering SQL • In simple terms, they provide key based access to the contents of tables • It turns out that devising a special kind of index was one of the critical parts of making relational dbms’s practical to implement
Indexes are one of the fundamental structures used to provide access to data • They also can be used to implement operations like joining
In simplest terms, the index can be visualized as a two column look-up table • This applies to an index on a primary key, for example • The index is sorted by the look-up key, the primary key, for example
Simple look-up could be O(n): linear search through the index • A better scheme would be O(log₂ n): binary search, for example • The value obtained is the relative record number (RRN) or the address of the corresponding record
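As a minimal sketch of this look-up-table view (Python, with invented key values and RRNs; the textbook gives no code), binary search over a sorted index finds the RRN in O(log₂ n) probes:

```python
from bisect import bisect_left

# Toy index: two parallel lists, sorted by look-up key.
keys = [1001, 1004, 1009, 1013, 1021]   # e.g. primary key values
rrns = [7, 2, 5, 0, 3]                  # relative record numbers

def rrn_lookup(key):
    """Binary search of the sorted index; returns the RRN or None."""
    pos = bisect_left(keys, key)
    if pos < len(keys) and keys[pos] == key:
        return rrns[pos]
    return None

print(rrn_lookup(1009))   # 5
print(rrn_lookup(9999))   # None
```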
Indexes on non-primary key fields are also OK • In this case, the index can be visualized as something slightly more complex than a 2 column look-up table • There may be duplicate values of non-primary key fields
Therefore, for any look-up key, there may be more than one corresponding record/address • These multiple record addresses could be managed as linked lists extending from the look-up key
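A sketch of the non-key case, with Python lists standing in for the linked lists of record addresses described above (the table contents are invented):

```python
from collections import defaultdict

# Records as (RRN, city); city is a non-primary-key field with duplicates.
records = [(0, "Anchorage"), (1, "Juneau"), (2, "Anchorage"), (3, "Nome")]

# Each look-up key maps to the list of all matching record addresses.
city_index = defaultdict(list)
for rrn, city in records:
    city_index[city].append(rrn)

print(city_index["Anchorage"])   # [0, 2]
```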
In general, it is possible to have more than one index on a table, on different fields • It is also possible to specify a single index on more than one field at a time (city + state for example)
In reality, an index is not typically implemented as a simple look-up table • The full scale details of one kind of indexing scheme are given in the section on B+ trees • In the meantime, it is worth considering one nuance that results from the interaction between dbms records and O/S pages
Sparse Indexes • A given file may be stored in order, sorted by a given field of interest • Superficially, this might suggest that an index on that field is not needed • However, the size of database tables means that you don’t want to have to do linear search through secondary storage in order to find a desired record
The reality is that what you want is not an RRN or an address; what you want is the page that the desired record would be on • This is because the O/S returns data in pages anyway • An RRN request would be translated into a page request, and the complete set of records on the page would be returned as a block anyway
For a sorted file, an index may be sparse • The index can again be envisioned as a simple look-up table • The look-up key values would correspond only to the first records on each page • If a desired value fell between two entries in the index, it would be on the page of the first of those two entries • Note again that this only works if the table is stored in sorted order on the look-up key
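A sketch of sparse-index look-up, assuming the file is sorted on the key and the index holds only the first key value on each page (page numbers and key values invented):

```python
from bisect import bisect_right

# One index entry per page: the first key value stored on that page.
first_keys = [1001, 1050, 1117, 1300]   # pages 0 through 3

def page_for(key):
    """Return the page a record with this key would be stored on."""
    # The last page whose first key is <= the search key.
    return max(bisect_right(first_keys, key) - 1, 0)

print(page_for(1060))   # 1: 1060 falls between the entries 1050 and 1117
```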
Clustering Tables • The issue of whether a table is stored in some sorted order is significant and will be treated in general in a later section • In the meantime, note that SQL supports this with the keyword CLUSTER
This is an example of its use: • CREATE INDEX indexname ON tablename(fieldname) CLUSTER • This means that as records are entered, the table is organized in sorted order in secondary storage
The term inter-file clustering refers to storing records from related tables in order • For example, you could have mothers followed by their children • This violates every precept of relational databases • However, in rare circumstances this may be within the db administrator’s power for performance reasons
Index Access • Indexing supports two kinds of access into a file: • Random access: Given a single look-up key value, it’s possible to find the one (or more) corresponding record(s) • Sequential access: Reading through the index from beginning to end produces all of the records in a table sorted in the order of the index key field
Index Support for Queries • Not only do indexes support the simple access schemes given above, they can also support various aspects of SQL queries • Take a query with a WHERE clause, for example • Let the table be indexed on the field in the WHERE clause • Then a query optimizer could use the index to restrict the results of the query without having to search through the whole table looking for matches
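A sketch of the idea, reusing the kind of non-key index shown earlier (the index contents are invented); the point is that only the index is probed, never the base table:

```python
# Hypothetical index on the field named in the WHERE clause.
city_index = {"Anchorage": [0, 2], "Juneau": [1], "Nome": [3]}

def where_city_equals(value):
    """Answer WHERE city = value from the index alone, returning RRNs."""
    return city_index.get(value, [])

print(where_city_equals("Anchorage"))   # [0, 2]: fetch only these records
```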
Hashing • Hashing has many uses in computer science • It turns out to have particularly useful applications in dbms internals • In a perfect world, a primary key field might be an unbroken set of integers • The identifiers for records would map directly into a linear address space
In reality, typically the set of values for a key is sparse • You have a few records with widely varying key values • Suppose you have 100 records • It would be helpful to have a scheme that would map those records into a linear address space from 0 to 99
The reason for this is the following: • In general, you have alternatives on how to store the records of a table • They can be stored in arrival sequence • You could cluster them • If you hash them, they can be saved at a particular address (offset), without wasting space due to the sparseness of the key values
The utility of hashing comes from the following: • The location of a record is computed based on the key on its way in • That means that, given the key value, the location of the corresponding record can be computed again for easy access upon retrieval
Indexing supports both direct access and sequential access • Hashing doesn’t support sequential access, but it does support direct access • As a matter of fact, no better scheme for implementing direct access exists • It is quicker to hash than it is to search an index
The classic hashing algorithm, which is relatively easy to illustrate, is division-remainder hashing • The look-up or hashing key of interest may be of any type • If it’s not actually an integer field, let it be converted into a unique integer value • Let the number of expected records, the size of the desired address space, be n
Then choose p to be the smallest prime number larger than n • A prime number is desirable because it will tend to minimize problems like collisions (see below) • Why this is the case will not be explained • It is rooted in the mysteries of abstract algebra
The idea is that for key values of integer form, whether larger or smaller than p, you can do integer division by p • What you are interested in is the remainder, in other words, the key mod p • The range of possible remainder values when dividing by p is 0 through p – 1 • This is the new, limited address space defined by the hashing scheme
A simple example will illustrate the idea • Let the key of interest be 9 digit social security numbers • Let the desired address space be 20 • 23 is the smallest prime number larger than 20
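A minimal sketch of division-remainder hashing under these assumptions (the social security numbers below are made up):

```python
p = 23   # smallest prime larger than the desired address space of 20

def hash_key(ssn):
    """Division-remainder hashing: the remainder mod p is the address."""
    return int(ssn) % p

for ssn in ["172843121", "525082996", "033429831"]:
    print(ssn, "hashes to address", hash_key(ssn))
```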
The table on the next overhead shows the results of hashing for a given set of values • It also illustrates what a collision is • Many different 9 digit numbers, mod 23, might give the same remainder • A collision is an occurrence of such a situation
Collisions are a natural result of this scheme • They are not an insuperable problem • The thing to keep in mind is that when doing look-up, you repeat what you did at hashing time—you hash again
In other words, you get the same hash value back • This means that the collision occurs again, but this is not a problem • The only thing you have to worry about is where you store two different things that hash to the same location in the 0-22 address space • There are basically two approaches
The first approach is to maintain an overflow area at the end of a page • Suppose you hash something on look-up • You go to the hash address obtained • When you get there, you do not find the desired key value • Then go to the overflow area at the end of the page and do linear search in it
Alternatively, if records collide, they can simply be displaced • In other words, let there be a collision upon data entry • Simply search forward in the address space until the next empty slot is found and place the new record there • The same strategy is used for finding the right record when accessing data later
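A sketch of this second approach (linear probing), assuming the whole 0 through 22 address space is searched with wrap-around rather than stopping at a page boundary, and ignoring deletions:

```python
p = 23
slots = [None] * p   # the 0..22 address space; each slot holds (key, record)

def probe_insert(key, record):
    """On a collision, search forward (wrapping) for the next empty slot."""
    for i in range(p):
        slot = (key % p + i) % p
        if slots[slot] is None:
            slots[slot] = (key, record)
            return slot
    raise RuntimeError("address space full; the file needs reorganizing")

def probe_lookup(key):
    """Re-hash and search forward exactly the way the insert did."""
    for i in range(p):
        slot = (key % p + i) % p
        if slots[slot] is None:
            return None   # an empty slot means the key was never stored
        if slots[slot][0] == key:
            return slots[slot][1]
    return None
```

The overflow-area alternative would instead append colliding records to a fixed area at the end of the page and search that area linearly on look-up.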
The address space and the placement of records after the first six hashes in the table above are illustrated in the table on the following overhead
Note that in order to be practical, there have to be limitations on collision handling in this way • If an overflow area is used, it should be of fixed size • If you just move to the next available space, you might be limited to spaces on the same page
If such limitations didn't exist, existence queries and searches on incorrect key values wouldn't be practical • If the key input on a search isn't valid, you would begin a fruitless search for the location that value would have hashed to • A consequence of these limitations is that it is possible to run out of space • When that happens, a hashed file has to be reorganized
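A sketch of reorganization under the same linear-probing scheme as above: pick a larger prime and re-insert every record under the new modulus (all names here are illustrative):

```python
def reorganize(old_slots, new_p):
    """Rebuild the hashed file in a larger prime-sized address space.
    Assumes new_p exceeds the number of stored records."""
    new_slots = [None] * new_p
    for entry in old_slots:
        if entry is None:
            continue
        key, record = entry
        addr = key % new_p
        while new_slots[addr] is not None:   # same linear probing as before
            addr = (addr + 1) % new_p
        new_slots[addr] = (key, record)
    return new_slots
```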
File Organization and Access • The previous discussions of indexing and hashing may have seemed somewhat disjointed • Now that both topics have been covered, it’s possible to summarize some of the choices for maintaining tables in secondary storage and their advantages and disadvantages • Choices like indexing are available to users • Other choices, like clustering and hashing, would only be available to database administrators
Arrival Order Files • File organization: Arrival order—this is standard • Indexed: Yes, possibly on >1 field • Access: Random and sequential by index • Performance: Good for both • Maintenance and cost: None on base file; update and deletion maintenance costs on index(es)
Clustered Files • File organization: Sequential, in other words, maintained in sorted order by some field • Indexed: Not necessarily; possibly desirable on non-key fields, sparse if on the key field • Access: Sequential on key; no other unless indexed • Performance: Perfect sequential access on the key • Maintenance and cost: Initial sorting, overflow, and reorganization of the base table; the cost is horrendous
Hashed Files • File organization: Hashed (on one field only) • Indexed: Typically, no—hashing implies that the principal goal is random access on a single field; you don’t need sequential access and don’t want the cost of index maintenance • Access: Direct/random (only) • Performance: The best possible direct access • Maintenance and cost: Reorganization if the address space fills or there are too many collisions
Notice that choosing hashed file organization is a specialized option that would be available to a database administrator • It is not necessarily part of a standard database application • It is used when direct access is critical to performance • Historically, things like airline ticketing databases have driven the need for extremely quick direct access
Joining • Historically, the performance costs of joining were one of the things that made the relational model impractical • As noted in the chapter on the relational model, implementing the theoretical definition of a join is out of the question: • Form the Cartesian product of two tables and then perform selection and projection on the results…
Merge Join • In theory, the cost of joining would be manageable under these conditions: • The two tables were each maintained in sequential order on their respective joining fields • This would make merge-join possible • However, the cost of maintaining sequential files is prohibitive
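A sketch of merge-join under those conditions, assuming both inputs are already sorted on the join key and that the left table's keys are unique (the table contents are invented):

```python
def merge_join(depts, emps):
    """Single pass over two inputs sorted on the join key.
    depts: (key, row) pairs with unique keys; emps: keys may repeat."""
    out, i, j = [], 0, 0
    while i < len(depts) and j < len(emps):
        dkey, ekey = depts[i][0], emps[j][0]
        if dkey == ekey:
            out.append((dkey, depts[i][1], emps[j][1]))
            j += 1   # the same dept may match the next employee too
        elif dkey < ekey:
            i += 1
        else:
            j += 1
    return out

depts = [(10, "Sales"), (20, "Ops"), (30, "R&D")]
emps  = [(10, "Ann"), (10, "Bob"), (20, "Cy")]
print(merge_join(depts, emps))
# [(10, 'Sales', 'Ann'), (10, 'Sales', 'Bob'), (20, 'Ops', 'Cy')]
```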
Interfile Clustering • There is also a concept known as interfile clustering • This means that the records for two different tables are stored intermixed • In other words, the physical data follows a pattern such as this: a mother record, followed immediately by the records of her children, then the next mother record, and so on