
Chapter 16: Concurrency Control








  1. Chapter 16: Concurrency Control

  2. Chapter 16: Concurrency Control • Lock-Based Protocols • Timestamp-Based Protocols • Validation-Based Protocols • Multiple Granularity • Multiversion Schemes • Insert and Delete Operations • Concurrency in Index Structures

  3. Goal of Concurrency Control • Transactions should be executed so that it is as though they executed in some serial order • Also called Isolation or Serializability

  4. Transactional Concurrency Control • Four ways to ensure a serial-equivalent order on conflicts: • Option 1, execute transactions serially. • “single shot” transactions • Option 2, pessimistic concurrency control: block T until transactions with conflicting operations are done. • Done using locks • Option 3, optimistic concurrency control: proceed as if no conflicts will occur, and recover if constraints are violated. • Repair the damage by rolling back (aborting) one of the conflicting transactions. • Option 4, hybrid: timestamp ordering using versions.

  5. Locking • Locking is the most frequently used technique to control concurrent execution of database transactions • Operating systems provide a binary locking system that is too restrictive for database transactions • That's why a DBMS contains its own lock manager • A lock_value(X) is a variable associated with each database data item X • The lock_value(X) describes the status of the data item X, by telling which operations can be applied to X

  6. Concurrency Control using Locking • The locking technique operates by preventing a transaction from improperly accessing data that is being used by another transaction. • Before a transaction can perform a read or write operation, it must claim a read (shared) or write (exclusive) lock on the relevant data item. • Once a read lock has been granted on a particular data item, other transactions may read the data, but not update it. • A write lock prevents all access to the data by other transactions.

  7. Kinds of Locks • Generally, the lock manager of a DBMS offers two kinds of locks: • A shared (read) lock, and • An exclusive (write) lock • If a transaction T issues a read_lock(X) command, it will be added to the list of transactions that share a lock on item X, unless there is a transaction already holding a write lock on X • If a transaction T issues a write_lock(X) command, it will be granted an exclusive lock on X, unless another transaction is already holding a lock on X • Accordingly, lock_value(X) ∈ {read_locked, write_locked, unlocked}

  8. Lock Semantics • Since read operations cannot conflict, it is acceptable for more than one transaction to hold read locks simultaneously on the same item. • On the other hand, a write lock gives a transaction exclusive access to the data item • Locks are used in the following way: • A transaction needing access to a data item must first lock the item, requesting a read lock for read-only access or a write-lock for read-write access. • If the item is not already locked by another transaction, the lock request will be granted. • If the item is currently locked, the DBMS determines whether the request is compatible with the current lock. If a read lock is requested on an item that is already read locked, the request is granted, otherwise the transaction must wait until the existing write lock is released. • A transaction holds a lock until it explicitly releases it, commits, or aborts.

  9. Basic Locking Rules • The basic locking rules are: • T must issue a read_lock(X) or write_lock(X) command before any read_item(X) operation • T must issue a write_lock(X) command before any write_item(X) operation • T must issue an unlock(X) command when all read_item(X) or write_item(X) operations are completed • Some DBMS lock managers perform automatic locking, granting the appropriate database item lock to a transaction when it attempts to read or write an item in the database • So, an item lock request can be either explicit or implicit

  10. Locking Rules • The part of the DBMS that keeps track of locks issued to transactions is called the lock manager; it maintains a lock table • Data items can be locked in two modes: 1. exclusive (X) mode: the data item can be both read and written; an X-lock is requested using the lock-X instruction 2. shared (S) mode: the data item can only be read; an S-lock is requested using the lock-S instruction • Lock requests are made to the concurrency-control manager; a transaction can proceed only after its request is granted

  11. Lock-Based Protocols • Lock-compatibility matrix: a transaction may be granted a lock on an item if the requested lock is compatible with locks already held on the item by other transactions • Any number of transactions can hold shared locks on an item, but if any transaction holds an exclusive lock on the item, no other transaction may hold any lock on the item • If a lock cannot be granted, the requesting transaction has to wait until all incompatible locks held by other transactions have been released; the lock is then granted
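The compatibility check just described can be captured in a tiny table. Below is a minimal sketch in Python (COMPATIBLE and can_grant are illustrative names, not from any particular DBMS): a shared request is compatible only with shared locks already held on the item, while an exclusive request is compatible with nothing.

    # Lock-compatibility matrix: COMPATIBLE[requested][held] is True if a lock
    # of mode `requested` can be granted while a lock of mode `held` is
    # already held on the same item by another transaction.
    COMPATIBLE = {
        "S": {"S": True,  "X": False},
        "X": {"S": False, "X": False},
    }

    def can_grant(requested_mode, held_modes):
        """Grant the request only if it is compatible with every lock
        currently held on the item by other transactions."""
        return all(COMPATIBLE[requested_mode][m] for m in held_modes)

    # Example: a shared request coexists with shared locks,
    # but an exclusive request must wait.
    assert can_grant("S", ["S", "S"]) is True
    assert can_grant("X", ["S"]) is False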

  12. Lost Update Problem and Locking • A locking protocol is a set of rules followed by all transactions while requesting and releasing locks; locking protocols restrict the set of possible schedules • Example schedule (time runs downward):
T1: read_lock(X); read_item(X); unlock(X)
T2: write_lock(X); read_item(X); X = X + M; write_item(X); unlock(X)
T1: write_lock(X); X = X – N; write_item(X); unlock(X)
• T2's update to X is lost because T1 wrote over X, and this happened despite the fact that both transactions issue lock and unlock commands • The problem is that T1 releases its lock on X too early, allowing T2 to start updating X • We need a protocol that will guarantee serializability

  13. The Basic Two Phase Locking Protocol • All lock operations must precede the first unlock operation • Now, it can be considered as if a transaction has two phases: • Growing (or expanding), when all locks are being acquired, and • Shrinking, when locks are being downgraded and released, but none can be acquired or upgraded • A theorem: If all transactions in a schedule obey the locking rules and the two phase locking protocol, the schedule is conflict serializable • Consequently: • A schedule that obeys the two phase locking protocol does not have to be tested for conflict serializability
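To make the two-phase rule concrete, here is a minimal sketch (an assumed helper class, not an actual lock manager) that rejects any lock request issued after the transaction has released its first lock:

    class TwoPhaseTransaction:
        """Tracks the growing/shrinking phases of basic 2PL for one transaction."""
        def __init__(self):
            self.locks = set()
            self.shrinking = False   # becomes True after the first unlock

        def lock(self, item):
            if self.shrinking:
                raise RuntimeError("2PL violation: lock requested after first unlock")
            self.locks.add(item)     # growing phase: locks may be acquired freely

        def unlock(self, item):
            self.shrinking = True    # the transaction enters the shrinking phase
            self.locks.discard(item)

    t = TwoPhaseTransaction()
    t.lock("X"); t.lock("Y")   # growing phase
    t.unlock("X")              # lock point has passed
    # t.lock("Z") would now raise: no lock may follow the first unlock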

  14. Lost Update and Two Phase Locking • Example schedule (time runs downward):
T1: read_lock(X); read_item(X); X = X – N
T2: write_lock(X) // has to wait
T1: write_lock(X); write_item(X); unlock(X)
T2: (write_lock(X) granted) read_item(X); X = X + M; write_item(X); unlock(X)
• T2 cannot obtain a write_lock on X since T1 holds a read lock on X, so it has to wait • When T1 releases its lock on X, T2 acquires a lock on X and finishes successfully • Two phase locking provides a safe, conflict serializable schedule

  15. The Two-Phase Locking Protocol • Many database systems employ a two-phase locking protocol to control the manner in which locks are acquired and released; this protocol ensures conflict-serializable schedules • Phase 1: Growing Phase • the transaction may obtain locks • the transaction may not release locks • Once all locks have been acquired, the transaction is at its lock point • Phase 2: Shrinking Phase • the transaction may release locks • the transaction may not obtain locks • The rules of the protocol are as follows: • A transaction must acquire a lock on an item before operating on the item; the lock may be read or write, depending on the type of access needed • Once the transaction releases a lock, it can never acquire any new locks • The protocol assures serializability; it can be proved that the transactions can be serialized in the order of their lock points (i.e. the point where a transaction acquired its final lock, at the end of its growing phase)

  16. The Two-Phase Locking Protocol • Two-phase locking is governed by the following rules: • Two transactions cannot hold conflicting locks at the same time • No unlock operation can precede a lock operation in the same transaction • The point in the schedule where the final lock is obtained is called the lock point • (Diagram: the number of locks held grows during Phase 1, the growing (lock) phase, and shrinks during Phase 2, the shrinking (unlock) phase)

  17. A Question for You • This slide describes the dirty read problem • Schedule (time runs downward): T1: read_item(X); X = X – N; write_item(X) • T2: read_item(X); X = X + M; write_item(X) • T1: read_item(Y); then T1 fails • The question: Does two phase locking solve the dirty read problem? • Answers: • Yes • No, because the dirty read problem is not a consequence of conflicting operations • The Strict Two Phase Locking protocol solves the “dirty read” problem

  18. Two Phase Locking: Dirty Read • Schedule (time runs downward):
T1: write_lock(X); read_item(X); X = X – N; write_item(X); write_lock(Y); unlock(X)
T2: write_lock(X); read_item(X); X = X + M; write_item(X); unlock(X)
T1: read_item(Y); Y = Y + Q; write_item(Y); unlock(Y) // T1 fails before it commits
• If T1 gets the exclusive lock on X first, T2 has to wait until T1 unlocks X • Note that interleaving is still possible, only not within transactions that access the same data items • Two phase locking alone does not solve the dirty read problem, because T2 is allowed to read the uncommitted database item X

  19. Strict Two Phase Locking • A variant of the two phase locking protocol • Protocol: • A transaction T does not release any of its exclusive locks until after it commits or aborts • Hence, no other transaction can read or write an item X that is written by T unless T has committed • The strict two phase locking protocol is safe against dirty reads • Rigorous two-phase locking is even stricter: here all locks are held till commit/abort; in this protocol transactions can be serialized in the order in which they commit • Most DBMSs implement either strict or rigorous two phase locking

  20. Schedule for Strict 2PL with Serial Execution • T1: X(A); R(A); W(A); X(B); R(B); W(B); Commit • T2: X(A); R(A); W(A); X(B); R(B); W(B); Commit

  21. Disadvantages of Locking • Pessimistic concurrency control has a number of key disadvantages, particularly in distributed systems: • Overhead • Locks cost, and you pay even if no conflict occurs • Even read-only actions must acquire locks • High overhead forces careful choices about lock granularity • Low concurrency • If locks are too coarse, they reduce concurrency unnecessarily • The need for strict 2PL to avoid cascading aborts makes it even worse • Low availability • A client cannot make progress if the server or lock holder is temporarily unreachable • Deadlock • Two phase locking can introduce undesirable effects: waits, deadlocks, and starvation

  22. Problems with Locking • The use of locks and the 2PL protocol prevents many of the problems arising from concurrent access to the database • However, it does not solve all problems, and it can even introduce new ones • Firstly, there is the issue of cascading rollbacks: • 2PL allows locks to be released before the final commit or rollback of a transaction • During this time, another transaction may acquire the locks released by the first transaction, and operate on the results of the first transaction • If the first transaction subsequently aborts, the second transaction must abort since it has used data now being rolled back by the first transaction • This problem can be avoided by preventing the release of locks until the final commit or abort action • So cascading rollback is possible under basic two-phase locking; cascadeless schedules are not guaranteed

  23. Deadlock • Deadlock is also called deadly embrace • Deadlock occurs when two or more transactions reach an impasse because they are waiting to acquire locks held by each other • A typical sequence of operations (time runs downward):
T1: write_lock(X)
T2: write_lock(Y)
T1: write_lock(Y) // has to wait
T2: write_lock(X) // has to wait
• T1 acquired an exclusive lock on X • T2 acquired an exclusive lock on Y • Neither can finish, because both are in the waiting state • To handle a deadlock, one of T1 or T2 must be rolled back and its locks released

  24. Deadlock (continued) • Deadlock examples: a) • T1 has locked X and waits to lock Y • T2 has locked Y and waits to lock Z • T3 has locked Z and waits to lock X b) • Both T1 and T2 have acquired sharable locks on X and wait to lock X exclusively • All of this results in a cycle in the wait-for graph (e.g. T1 waits for T2 and T2 waits for T1)
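Deadlocks such as these are typically detected by looking for a cycle in the wait-for graph. A minimal sketch of that check (illustrative only; a real DBMS maintains the graph incrementally inside the lock manager):

    def has_cycle(wait_for):
        """wait_for maps a transaction to the set of transactions it waits for.
        Returns True if the wait-for graph contains a cycle (a deadlock)."""
        visited, on_stack = set(), set()

        def visit(t):
            visited.add(t)
            on_stack.add(t)
            for u in wait_for.get(t, ()):
                if u in on_stack or (u not in visited and visit(u)):
                    return True
            on_stack.discard(t)
            return False

        return any(visit(t) for t in wait_for if t not in visited)

    # Example from the slide: T1 waits for T2 and T2 waits for T1.
    assert has_cycle({"T1": {"T2"}, "T2": {"T1"}}) is True
    assert has_cycle({"T1": {"T2"}, "T2": set()}) is False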

  25. Starvation • Starvation is also possible if the concurrency control manager is badly designed • For example: a transaction may be waiting for an X-lock on an item, while a sequence of other transactions request and are granted an S-lock on the same item • If T2 has an S-lock on a data item and T1 requests an X-lock on the same data item, T1 has to wait for T2 to release the S-lock • Meanwhile T3 requests an S-lock on the same data item; the request is compatible with the lock granted to T2, so T3 may be granted the S-lock • Now even when T2 releases its lock, T1 is still not granted the X-lock until T3 finishes • The concurrency control manager can be designed to prevent starvation

  26. Starvation • Starvation is a problem that appears when using locks, or deadlock detection protocols • Starvation occurs when a transaction cannot make any progress for an indefinite period of time, while other transactions proceed • Starvation can occur when: • The waiting protocol for locked items is unfair (e.g. a stack is used instead of a queue) • The same transaction is selected as the “victim” repeatedly

  27. Lock Conversion • A transaction T that already holds a lock on item X can convert it to another state • Lock conversion rules are: • T can upgrade a read_lock(X) to a write_lock(X) if it is the only transaction that holds a lock on item X (otherwise, T has to wait) • T can always downgrade a write_lock(X) to a read_lock(X)
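A small sketch of these two conversion rules (the helper functions and the item_holders parameter are assumptions made for illustration); the key point is the single-holder test required for an upgrade:

    def try_upgrade(item_holders, tx):
        """Upgrade tx's shared lock on an item to an exclusive lock.
        item_holders: set of transactions currently holding a lock on the item.
        Succeeds only if tx is the sole holder; otherwise tx must wait."""
        if item_holders == {tx}:
            return True            # upgrade granted
        return False               # other holders exist: tx has to wait

    def downgrade(lock_mode):
        """A write lock can always be downgraded to a read lock."""
        return "S" if lock_mode == "X" else lock_mode

    assert try_upgrade({"T1"}, "T1") is True
    assert try_upgrade({"T1", "T2"}, "T1") is False
    assert downgrade("X") == "S"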

  28. Lock Conversions • Two-phase locking with lock conversions: to get more concurrency, 2PL is used with lock conversions • First phase: • can acquire a lock-S on an item • can acquire a lock-X on an item • can convert a lock-S to a lock-X (upgrade) • upgrading is possible only in the growing phase • Second phase: • can release a lock-S • can release a lock-X • can convert a lock-X to a lock-S (downgrade) • downgrading is possible only in the shrinking phase

  29. Lock Conversions • Example (time runs downward): • T1: read_item(a1); read_item(a2); read_item(a3); write_item(a1) • T2: read_item(a1); read_item(a2); P = a1 + a2 • Normal 2PL would make T2 wait till write_item(a1) is executed • Lock conversion allows higher concurrency and allows the shared lock to be acquired by T2 • T1 could upgrade its shared lock to exclusive just before the write instruction

  30. Lock Conversions • Transactions attempting to upgrade may need to wait • Lock conversions generate serializable schedules, serialized by their lock points • If exclusive locks are held till the end of the transactions then schedules are cascadeless

  31. Automatic Acquisition of Locks • A transaction Ti issues the standard read/write instruction, without explicit locking calls • The operation read(D) is processed as:
if Ti has a lock on D
  then read(D)
else begin
  if necessary, wait until no other transaction has a lock-X on D;
  grant Ti a lock-S on D;
  read(D)
end

  32. Automatic Acquisition of Locks (Cont.) • write(D) is processed as:
if Ti has a lock-X on D
  then write(D)
else begin
  if necessary, wait until no other transaction has any lock on D;
  if Ti has a lock-S on D
    then upgrade the lock on D to lock-X
    else grant Ti a lock-X on D;
  write(D)
end
• All locks are released after commit or abort
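The same automatic-acquisition logic, rendered as a small single-threaded Python sketch (the held dictionary and the two wrapper functions are assumptions for illustration; real blocking is only indicated in comments):

    # A minimal, single-threaded sketch of automatic lock acquisition.
    # `held` maps (transaction, item) -> mode ("S" or "X"); where a real lock
    # manager would block and wait, this sketch only notes it in a comment.
    held = {}

    def auto_read(tx, item):
        if (tx, item) not in held:
            # ... wait here until no other transaction holds lock-X on item ...
            held[(tx, item)] = "S"        # then grant lock-S
        return f"read({item})"

    def auto_write(tx, item):
        if held.get((tx, item)) != "X":
            # ... wait here until no other transaction holds any lock on item ...
            held[(tx, item)] = "X"        # upgrade S to X, or grant a fresh X
        return f"write({item})"

    auto_read("T1", "D")     # grants lock-S on D, then reads
    auto_write("T1", "D")    # upgrades to lock-X on D, then writes
    # All of T1's locks are released when it commits or aborts.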

  33. Implementation of Locking • A lock manager can be implemented as a separate process to which transactions send lock and unlock requests • The lock manager replies to a lock request by sending a lock grant messages (or a message asking the transaction to roll back, in case of a deadlock) • The requesting transaction waits until its request is answered • The lock manager maintains a data-structure called a lock table to record granted locks and pending requests • The lock table is usually implemented as an in-memory hash table indexed on the name of the data item being locked

  34. Lock Management • Lock table entry: • Number of transactions currently holding a lock • Type of lock held (shared or exclusive) • Pointer to queue of lock requests • Locking and unlocking have to be atomic operations • Lock upgrade: transaction that holds a shared lock can be upgraded to hold an exclusive lock

  35. Lock Table • In the lock table figure, black rectangles indicate granted locks and white ones indicate waiting requests • The lock table also records the type of lock granted or requested • A new request is added to the end of the queue of requests for the data item, and granted if it is compatible with all earlier locks • Unlock requests result in the request being deleted, and later requests are checked to see if they can now be granted • If a transaction aborts, all waiting or granted requests of the transaction are deleted • The lock manager may keep a list of locks held by each transaction, to implement this efficiently
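A toy version of such a lock-table entry, assuming an in-memory dictionary keyed by item name (all names here are illustrative):

    from collections import deque

    # Each lock-table entry keeps the granted locks and a FIFO queue of waiting
    # requests; a new request is granted only if it is compatible with every
    # earlier (granted or queued) request, which also avoids starvation.
    lock_table = {}

    def request_lock(item, tx, mode):
        entry = lock_table.setdefault(item, {"granted": {}, "waiting": deque()})
        no_queue = not entry["waiting"]
        only_shared = all(m == "S" for m in entry["granted"].values())
        if (not entry["granted"] and no_queue) or (mode == "S" and no_queue and only_shared):
            entry["granted"][tx] = mode          # grant immediately
            return "granted"
        entry["waiting"].append((tx, mode))      # append to the end of the queue
        return "waiting"

    print(request_lock("X", "T1", "S"))   # granted
    print(request_lock("X", "T2", "S"))   # granted: shared locks are compatible
    print(request_lock("X", "T3", "X"))   # waiting behind the shared locks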

  36. Other Protocols than 2PL – Graph-Based • Graph-based protocols are an alternative to two-phase locking • Assumption: we have prior knowledge about the order in which data items will be accessed • (hierarchical) ordering on the data items, e.g. the pages of a B-tree • (Figure: a small tree of data items A, B, C)

  37. Graph-Based Protocols • Impose a partial ordering → on the set D = {d1, d2, ..., dh} of all data items • If di → dj, then any transaction accessing both di and dj must access di before accessing dj • This implies that the set D may now be viewed as a directed acyclic graph, called a database graph • The tree protocol is a simple kind of graph protocol • Transactions access items starting from the root of this partial order

  38. Tree Protocol • Only exclusive locks are allowed • The first lock by Ti may be on any data item • Subsequently, a data item Q can be locked by Ti only if the parent of Q is currently locked by Ti • Data items may be unlocked at any time • A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti
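A sketch of these rules as a checker (the parent map and the class are assumptions for illustration):

    class TreeProtocolTx:
        """Checks the tree-protocol rules for a single transaction.
        `parent` maps each data item to its parent in the database tree."""
        def __init__(self, parent):
            self.parent = parent
            self.held = set()         # exclusive locks currently held
            self.ever_locked = set()  # items locked at any point so far

        def lock(self, item):
            if self.ever_locked:                       # not the first lock
                if item in self.ever_locked:
                    raise RuntimeError("item may not be locked twice")
                if self.parent.get(item) not in self.held:
                    raise RuntimeError("parent must be locked first")
            self.held.add(item)
            self.ever_locked.add(item)

        def unlock(self, item):
            self.held.discard(item)   # unlocking is allowed at any time

    # Tree of data items: A is the root; B and C are children of A.
    t = TreeProtocolTx(parent={"B": "A", "C": "A"})
    t.lock("A")        # the first lock may be on any item
    t.lock("B")        # allowed: parent A is currently held
    t.unlock("A")
    # t.lock("C") would now raise: its parent A is no longer held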

  39. Tree Protocol – Example • (Figure: a tree of data items A–I, with A at the root) • Interleaved schedule of T1 and T2, with L = lock and U = unlock: L(B), L(D), L(H), U(D), L(E), U(E), L(D), U(B), U(H), L(G), U(D), U(G) • Questions: Is this 2PL? Does it follow the tree protocol? Is it “correct”?

  40. Graph-Based Protocols (Cont.) • The tree protocol ensures conflict serializability as well as freedom from deadlock • Advantages of the tree protocol over the two phase locking protocol: • shorter waiting times: unlocking may occur earlier, which increases concurrency • the protocol is deadlock-free, so no rollbacks are required • Drawbacks: • the protocol does not guarantee recoverability or cascade freedom • commit dependencies must be introduced to ensure recoverability: if Ti performs a read of an uncommitted data item, we record a commit dependency of Ti on the transaction that performed the last write on the data item • transactions may have to lock data items that they do not access • increased locking overhead, and additional waiting time • without prior knowledge of what data items will be locked, transactions may lock the root of the tree, reducing concurrency • Schedules not possible under two-phase locking are possible under the tree protocol, and vice versa • Main application: used for locking in B+-trees, to allow high-concurrency update access; otherwise, a root page lock is a bottleneck

  41. Timestamp-Based Protocols • Basic timestamp ordering: • A timestamp is a unique identifier created by the DBMS to identify a transaction • Timestamp values are assigned in the order in which transactions are submitted to the system; each transaction is issued a timestamp when it enters the system • If an old transaction Ti has timestamp TS(Ti), a new transaction Tj is assigned timestamp TS(Tj) such that TS(Ti) < TS(Tj) • The timestamp could use the value of the system clock or a logical counter • The protocol manages concurrent execution such that the timestamps determine the serializability order • In order to assure such behavior, the protocol maintains for each data item Q two timestamp values: • W-timestamp(Q) is the largest timestamp of any transaction that executed write(Q) successfully • R-timestamp(Q) is the largest timestamp of any transaction that executed read(Q) successfully

  42. Timestamp-Ordering Protocol • The timestamp ordering protocol ensures that any conflicting read and write operations are executed in timestamp order • Suppose a transaction Ti issues a read(Q): • If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten; hence, the read operation is rejected, and Ti is rolled back • If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to max(R-timestamp(Q), TS(Ti))

  43. Timestamp Ordering Protocols (Cont.) • Suppose that transaction Ti issues write(Q): • If TS(Ti) < R-timestamp(Q), then abort and roll back Ti and reject the operation; this must be done because some younger transaction with a timestamp greater than TS(Ti), i.e. after Ti in timestamp ordering, has already read or written the value of item Q before Ti had a chance to write Q, thus violating the timestamp ordering • If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q; hence, this write operation is rejected, and Ti is rolled back • Otherwise (if TS(Ti) ≥ R-timestamp(Q) and TS(Ti) ≥ W-timestamp(Q)), the write operation is executed, and W-timestamp(Q) is set to TS(Ti)
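The read and write rules of the two preceding slides, combined into one minimal sketch (per-item R/W timestamps kept in a dictionary, rollback signalled with an exception; all names are illustrative):

    class Rollback(Exception):
        """Raised when an operation violates timestamp order."""

    # Per-item timestamps: item -> {"R": read_ts, "W": write_ts}
    ts = {}

    def read(item, tx_ts):
        t = ts.setdefault(item, {"R": 0, "W": 0})
        if tx_ts < t["W"]:
            raise Rollback("needed value was already overwritten")   # reject read
        t["R"] = max(t["R"], tx_ts)

    def write(item, tx_ts):
        t = ts.setdefault(item, {"R": 0, "W": 0})
        if tx_ts < t["R"] or tx_ts < t["W"]:
            raise Rollback("a younger transaction already read or wrote the item")
        t["W"] = tx_ts

    write("Q", 5)          # W-timestamp(Q) = 5
    read("Q", 7)           # ok: 7 >= 5, so R-timestamp(Q) becomes 7
    try:
        write("Q", 6)      # rejected: TS 6 < R-timestamp(Q) = 7
    except Rollback as e:
        print("rolled back:", e)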

  44. Example of Timestamp Ordering Protocol • Schedule of T14 and T15 (time runs downward): • T14: read(B) • T15: read(B); B := B – 50; write(B) • T14: read(A) • T15: read(A) • T14: display(A + B) • T15: A := A + 50; write(A); display(A + B)

  45. Timestamp Ordering Protocols (Cont.) • The timestamp ordering protocol ensures conflict serializability as well as freedom from deadlock • However, there is a possibility of starvation if a sequence of conflicting transactions causes repeated restarts • Consider the two transactions T1 and T2 shown below, interleaving the operations Read(p), Read(q), Write(p), Write(q): the write(q) of T1 fails, and this rolls back T1, and in effect T2 as well, since T2 is dependent on T1 • This protocol can therefore generate schedules that are not recoverable • Recoverability and cascadelessness can be ensured by performing all writes at the end of the transaction; recoverability alone can be ensured by tracking uncommitted dependencies

  46. Strict Timestamp Ordering • A variation of basic TO called strict TO ensures schedules are both strict (and hence recoverable) and conflict serializable • A transaction T that issues read_item(X) or write_item(X) such that TS(T) > write_TS(X) has its read or write delayed until the transaction T' with TS(T') = write_TS(X) has committed or aborted • Thomas’ Write Rule • A modified version of the timestamp-ordering protocol in which obsolete or outdated write operations may be ignored under certain circumstances • When Ti attempts to write data item Q, if TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete or outdated value of Q • Rather than rolling back Ti as the timestamp ordering protocol would have done, this write operation can simply be ignored

  47. Otherwise this protocol is the same as the timestamp ordering protocol • Thomas’ Write Rule allows greater potential concurrency • It allows some view-serializable schedules that are not conflict-serializable • Example (time runs downward): • T1: R(A) • T2: W(A); commit • T1: W(A); commit • This is a view serializable schedule, equivalent to the serial schedule <T1, T2>
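The only change from the basic write rule is that an obsolete write is skipped rather than causing a rollback. A self-contained sketch under that assumption (the function and its arguments are illustrative):

    def thomas_write(item_ts, tx_ts):
        """item_ts: {"R": read_ts, "W": write_ts} for one data item.
        Returns 'written' or 'ignored', or raises on a real conflict."""
        if tx_ts < item_ts["R"]:
            raise RuntimeError("rollback: a younger transaction already read the item")
        if tx_ts < item_ts["W"]:
            return "ignored"          # Thomas' rule: skip the obsolete write
        item_ts["W"] = tx_ts
        return "written"

    q = {"R": 0, "W": 10}
    print(thomas_write(q, 5))   # 'ignored' -- basic TO would have rolled back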

  48. Validation-Based Protocol (Optimistic Method for Concurrency Control) • Optimistic concurrency control techniques are also known as validation or certification methods • These techniques are called optimistic because they assume that conflicts of database operations are rare and hence that there is no need to do checking during transaction execution • Checking represents overhead during transaction execution, with the effect of slowing down the transaction, so checking is done only before commit • Updates in a transaction are not applied directly to the database items until the transaction reaches its end; they are carried out in a temporary database file • Execution of a transaction is done in three phases • The phases of concurrently executing transactions can be interleaved, but each transaction must go through the three phases in that order

  49. Validation-Based Protocol (Optimistic Method for Concurrency Control) • Read and execution phase: • Transaction Ti reads values of committed items from the database; updates are applied only to temporary local copies of the data items / a temporary update file • Validation phase (certification phase): • Transaction Ti performs a “validation test” to determine whether its local copies can be written without violating serializability; if the validation test is positive, it moves to the write phase, otherwise the transaction is discarded (rolled back and restarted) • Write phase: • If Ti is validated successfully, the updates are applied to the database; otherwise, Ti is rolled back and then restarted • This phase is performed only for read-write transactions, not for read-only transactions

  50. Optimistic Concurrency Control • Each transaction T is given 3 timestamps: • Start(T): when the transaction starts • Validation(T): when the transaction enters the validation phase • Finish(T): when the transaction finishes • Goal: to ensure that the transactions appear to follow a serial schedule determined by Validation(T)
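These three timestamps feed the usual validation test: for every already-validated transaction Tj, either Tj finished before Ti started, or Tj finished before Ti entered validation and Tj's write set does not intersect Ti's read set. A minimal sketch of that test (the field names are assumptions for illustration):

    def validate(t_new, committed):
        """t_new and each element of committed are dicts with keys:
        'start', 'validation', 'finish', 'read_set', 'write_set'.
        Returns True if t_new passes the validation test."""
        for t_old in committed:
            if t_old["finish"] < t_new["start"]:
                continue                      # t_old finished before t_new began
            overlap_ok = (t_old["finish"] < t_new["validation"]
                          and not (t_old["write_set"] & t_new["read_set"]))
            if not overlap_ok:
                return False                  # conflict: t_new must be restarted
        return True

    t1 = {"start": 1, "validation": 3, "finish": 4,
          "read_set": {"A"}, "write_set": {"A"}}
    t2 = {"start": 2, "validation": 5, "finish": 6,
          "read_set": {"B"}, "write_set": {"B"}}
    print(validate(t2, [t1]))   # True: t1 wrote only A, while t2 read only B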
