
Chapter 8 Lock Implementation



  1. Chapter 8 Lock Implementation COP 6730

  2. Lock Implementation • Locks record all the current requests, either granted or waiting, for a named resource. • They are a simple data structure with two basic operations: lock() and unlock(), along with support operations.

  3. Lock Names Each lock has a name:
typedef struct          /* definition for lock name */
{
    RMID rmid;          /* resource manager id */
    char resource[14];  /* object name is 14 bytes */
} lock_name;
• The RM ID in each lock name allows RMs to have disjoint sets of locks and to pick lock names at will from their own name spaces. • We would like object names to be of unlimited length, but for performance reasons they are usually limited to a small, fixed length. • The definition above assumes object names fit in 14 characters. • In general, each RM must hash long object names into this smaller unit. If two objects hash to the same lock name and the two request modes are incompatible, the collision may cause spurious waits.
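The name-hashing step can be sketched in C. The XOR fold below is an assumed placeholder (each RM picks its own hash), and `make_lock_name` is a helper invented for illustration:

```c
#include <string.h>

typedef int RMID;

typedef struct            /* definition for lock name */
{
    RMID rmid;            /* resource manager id */
    char resource[14];    /* object name is 14 bytes */
} lock_name;

/* Fold an arbitrarily long object name into the fixed 14-byte field.
   Bytes beyond position 14 wrap around and are XORed into earlier
   slots, so two distinct names can collide. */
void make_lock_name(lock_name *ln, RMID rmid, const char *obj)
{
    ln->rmid = rmid;
    memset(ln->resource, 0, sizeof ln->resource);
    for (size_t i = 0; obj[i] != '\0'; i++)
        ln->resource[i % sizeof ln->resource] ^= (char)obj[i];
}
```

Identical object names always map to the same lock name; different names usually differ, but collisions are possible, which is exactly the source of the spurious waits mentioned above.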

  4. Group Mode Each granted group has a summary mode that is the maximum, under the lock conversion lattice, of the modes of the group members. The slide illustrates the lock conversion lattice and the group mode in the initial state and after the IX mode request is unlocked.
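A sketch of the group-mode computation in C, assuming the standard mode set NL, IS, IX, S, SIX, X and encoding the conversion lattice's least upper bound as a table:

```c
typedef enum { NL, IS, IX, S, SIX, X } lock_mode;

/* sup[a][b]: least upper bound of two modes in the conversion lattice */
static const lock_mode sup[6][6] = {
 /*        NL   IS   IX   S    SIX  X  */
 /*NL */ { NL,  IS,  IX,  S,   SIX, X },
 /*IS */ { IS,  IS,  IX,  S,   SIX, X },
 /*IX */ { IX,  IX,  IX,  SIX, SIX, X },
 /*S  */ { S,   S,   SIX, S,   SIX, X },
 /*SIX*/ { SIX, SIX, SIX, SIX, SIX, X },
 /*X  */ { X,   X,   X,   X,   X,   X },
};

/* Group mode = lattice maximum over all granted members. */
lock_mode group_mode(const lock_mode *granted, int n)
{
    lock_mode g = NL;
    for (int i = 0; i < n; i++)
        g = sup[g][granted[i]];
    return g;
}
```

For example, a group holding { IS, IX, S } has group mode SIX, since SIX is the smallest mode covering both S and IX.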

  5. Lock Conversion • A conversion needs to wait until the converted mode is compatible with all requests granted to other transactions. • When conversions are waiting, no new members are admitted to the granted group until all conversions have been granted.

  6. Lock Conversion Examples The slide steps through an example: • The IX lock requests a lock conversion to S. • SIX is compatible with IS. • IX is incompatible with SIX, so the conversion is still waiting. • The SIX lock is released, and the conversion is granted.

  7. Lock Class • Each lock is requested and held in some class. • Instant, short, and long are examples of classes: Instant: the lock operation immediately calls unlock on requests that are acquired for instant duration. Short: the lock is released at the end of a particular operation. Long: the lock is held until transaction commit.

  8. Lock Class • The class names are represented as integers and are accordingly ranked: longer duration locks have larger class numbers. • If a lock is held in one class and requested in a second, then the resulting class is the max of the two (class escalation). • Most of the lock classes are understood only by the RMs.
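The ranking and escalation rule can be sketched as follows (the class names and their numeric ranks are illustrative, not the book's actual values):

```c
/* Longer-duration classes carry larger class numbers. */
typedef enum {
    LOCK_INSTANT = 1,   /* released immediately after acquisition */
    LOCK_SHORT   = 2,   /* released at the end of an operation */
    LOCK_LONG    = 3    /* held until transaction commit */
} lock_class;

/* Class escalation: if a lock is held in one class and requested
   in a second, the resulting class is the max of the two. */
lock_class escalate(lock_class held, lock_class requested)
{
    return held > requested ? held : requested;
}
```

So a short lock re-requested in long class is silently promoted, and a long lock re-requested in a weaker class stays long.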

  9. Lock Manager: lock • A lock request specifies a lock name, a mode, a class, and a timeout for waits. lock_reply lock ( lock_name name, lock_mode mode, lock_class class, long timeout); • The lock request returns either OK, timeout, or deadlock. If it returns OK, the lock was acquired.

  10. Lock Manager: unlock lock_reply unlock_name(lock_name name); lock_reply unlock_class(lock_class class, Boolean all_le, RMID rmid); • The all_le parameter is a Boolean that controls unlocking of all classes at or below the specified class. • The unlock can be restricted to a particular RM if the rmid is nonzero.

  11. Lock Manager: Data Structure The slide diagrams the lock hash table: hash chains of locked-object (lock header) blocks, each carrying its queue of lock requests.

  12. Lock Manager: Data Structure (Cont’d) • Adding a new locked object (lock header) to the hash chain requires getting the semaphore on the hash chain. • Adding a new lock request to a lock queue requires only the semaphore on the lock queue. • The lock header free pool is a preallocated and pre-formatted pool of blocks for quick allocation. • A similar lock request pool is also maintained. • The transaction lock list is used to accelerate generic unlock operations (e.g., releasing all locks at transaction commit).
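A minimal C sketch of these structures, with field names and types assumed for illustration (the real blocks also carry modes, classes, and the semaphores mentioned above):

```c
#include <stddef.h>

typedef struct
{
    int  rmid;                       /* resource manager id */
    char resource[14];               /* hashed object name */
} lock_name;

typedef struct lock_request
{
    int  tid;                        /* owning transaction */
    int  granted;                    /* granted (1) or waiting (0) */
    struct lock_request *queue_next; /* next request on this lock queue */
    struct lock_request *tran_next;  /* next entry in transaction lock list */
} lock_request;

typedef struct lock_header
{
    lock_name name;                  /* the locked object's name */
    struct lock_header *chain_next;  /* next header on this hash chain */
    lock_request *queue;             /* granted and waiting requests */
} lock_header;

/* Count the requests queued on one locked object. */
int queue_length(const lock_header *h)
{
    int n = 0;
    for (const lock_request *r = h->queue; r != NULL; r = r->queue_next)
        n++;
    return n;
}
```

The two link fields reflect the two access paths: `queue_next` serves lock/unlock on one object, while `tran_next` lets commit walk all of one transaction's locks without touching the hash table.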

  13. Transaction Commit Two strategies for releasing locks: • Each RM can use the generic unlock routine to ask the lock manager to release all the RM’s locks for that transaction. This approach gives the RM more control. • The lock manager can join the transaction and thereby get prepare, commit, and abort callbacks from the TM when the transaction changes state. This approach provides a simpler interface to the resource manager.

  14. Transaction Commit (Cont’d) The slide diagrams the two strategies, showing the message flow among RM1, RM2, RM3, the TM, and the lock manager: under strategy #1 the RMs call the lock manager themselves, while under strategy #2 the TM calls the lock manager back directly at commit.

  15. Transaction Savepoints At each savepoint, it is necessary to save the transaction state so that the state can be restored later. From a locking perspective, this state restoration consists of: • unlocking resources acquired since the savepoint, and • reacquiring locks released since the savepoint

  16. Transaction Savepoints (Cont’d) • Unlocking resources acquired since the savepoint: • Insert a dummy lock request block in the transaction lock list at each savepoint. • After rollback to a savepoint, all locks after the dummy lock request block in the transaction lock list are released. • Reacquiring locks released since the savepoint: • The lock manager writes a log record recording all the locks held by the transaction at the current savepoint.
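The dummy-block technique can be sketched in C, assuming (for this sketch) that the transaction lock list keeps the newest request at its head:

```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct req
{
    bool dummy;           /* true for a savepoint marker block */
    struct req *next;     /* newer requests precede older ones */
} req;

/* Insert a dummy lock request block at the head of the list. */
req *mark_savepoint(req **list)
{
    req *d = malloc(sizeof *d);
    d->dummy = true;
    d->next = *list;
    *list = d;
    return d;
}

/* Rollback: release every request acquired after the savepoint
   (everything ahead of the dummy block), then unlink the dummy.
   Returns the number of requests released. */
int rollback_to(req **list, req *savepoint)
{
    int released = 0;
    while (*list != savepoint) {
        req *r = *list;
        *list = r->next;
        free(r);
        released++;
    }
    *list = savepoint->next;
    free(savepoint);
    return released;
}
```

Everything in front of the dummy was, by construction, acquired after the savepoint, so one pointer walk finds exactly the locks to release.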

  17. Deadlock Detection: The Idea • The algorithm takes each node (transaction), in turn, as the root of what it hopes will be a tree (i.e., no cycles), and does a depth-first search on it. • If it ever comes back to a node it has already encountered, then it has found a cycle. • If it exhausts all the arcs from any given node, it backtracks to the previous node. • If it backtracks to the root and cannot go further, the subgraph reachable from the current node does not contain any cycles. • If property 4 is true for all nodes (transactions), the entire graph is cycle-free, so the system is not deadlocked.

  18. Deadlock Detection Algorithm Data Structure: L = a list of nodes (transactions). Algorithm: For each node, N, in the graph, perform the following 5 steps with N as the starting node. • Initialize L to the empty list, and designate all the arcs as unmarked. • Add the current node to the end of L and check to see if the node now appears in L two times. If it does, the graph contains a cycle (listed in L) and the algorithm terminates. • From the given node, see if there are any unmarked outgoing arcs. If so, go to step 4; else, go to step 5.

  19. Deadlock Detection Algorithm (Cont’d) • Pick an unmarked outgoing arc at random and mark it. Then follow it to the new current node and go to step 2. • We have now reached a dead end. Remove the current node from L, go back to the previous node (the one that was current just before this one), and make it the current node. • If the new current node is the initial node, the subgraph does not contain any cycles; otherwise, go to step 2.
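The steps above can be condensed into a recursive depth-first search. This C sketch replaces the explicit list L and the arc marks with an `on_path` array (nodes currently on the search path), which is equivalent for cycle detection:

```c
#include <stdbool.h>

#define MAX_NODES 16

/* Wait-for graph as an adjacency matrix: arc[i][j] means
   transaction i is waiting for transaction j. */
static bool arc[MAX_NODES][MAX_NODES];

static bool on_path[MAX_NODES];   /* membership in the list L */

/* Depth-first search from node n; true if a cycle is found. */
static bool dfs(int n, int nodes)
{
    if (on_path[n])               /* n appears in L twice: cycle */
        return true;
    on_path[n] = true;            /* add n to the end of L */
    for (int j = 0; j < nodes; j++)
        if (arc[n][j] && dfs(j, nodes))
            return true;
    on_path[n] = false;           /* dead end: remove n from L */
    return false;
}

/* Run the search with every node as root, as the algorithm
   prescribes; true if the wait-for graph contains any cycle. */
bool deadlocked(int nodes)
{
    for (int root = 0; root < nodes; root++)
        if (dfs(root, nodes))
            return true;
    return false;
}
```

Marking arcs (rather than re-scanning them) is a performance refinement; the recursive form above keeps the sketch short at the cost of revisiting arcs.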

  20. Deadlock Detection: Examples • The tree rooted at T1 (formed by depth-first search) has no cycle: L = [ T1 ]. T1 is not involved in a deadlock. • The “tree” rooted at T6 contains a cycle: L = [ T6, T3, T7, T11, T7 ]. A deadlock is detected, and either T7 or T11 must be rolled back. (The slide shows a wait-for graph over transactions T1–T12 with the cycle highlighted.)

  21. Distributed Deadlock Detection • When a transaction is blocked, it sends a special probe message to the blocking transaction. The message consists of three numbers: • the transaction that just blocked, • the transaction sending the message, • and the transaction to whom it is being sent. • When the message arrives, the recipient checks to see whether it is itself waiting for any transaction. If so, • the message is updated, replacing the second field with its own TID and the third with the TID of the transaction it is waiting for. • The message is then sent to that blocking transaction. • If a message goes all the way around and comes back to the original sender, a deadlock is detected.
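One probe hop can be sketched in C. `forward_probe` is a helper invented for illustration, and in the real protocol the messages travel between machines rather than within one process:

```c
#include <stdbool.h>

typedef int TID;

typedef struct
{
    TID blocked;    /* the transaction that just blocked (never changes) */
    TID sender;     /* the transaction sending the message */
    TID receiver;   /* the transaction to whom it is being sent */
} probe;

/* Recipient `self`, itself waiting for `waiting_for`, updates the
   probe and passes it on.  Returns true if the probe has come all
   the way around to the transaction that originally blocked,
   i.e. a deadlock is detected. */
bool forward_probe(probe *p, TID self, TID waiting_for)
{
    p->sender = self;           /* replace the second field */
    p->receiver = waiting_for;  /* replace the third field */
    return waiting_for == p->blocked;
}
```

Replaying a wait-for chain 0 → 1 → 2 → 3 → 0, the probe starts as (0, 0, 1) and is detected as a deadlock when transaction 3 forwards it back to transaction 0.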

  22. Distributed Deadlock Detection Example The slide traces an example wait-for chain spread over Machine 0, Machine 1, and Machine 2, with probes (0, 0, 1), (0, 1, 2), (0, 2, 3), (0, 4, 6), (0, 5, 7), and (0, 8, 0) passing among transactions 0–8; the probe returning to transaction 0 signals a deadlock.

  23. Distributed Deadlock Resolution Strategy 1: Have the transaction that initiated the probe commit suicide. Problem: If several transactions involved in the same cycle simultaneously invoke the algorithm, then each will eventually discover the deadlock, and each will kill itself.

  24. Distributed Deadlock Resolution Strategy 2: Have each transaction add its TID to the end of the probe message, so that when the probe returns to the initial sender, the complete cycle is listed. The sender can then request the transaction with the largest TID (the youngest) to kill itself. Note: If multiple transactions discover the same cycle at the same time, they will all choose the same victim.

  25. Complexities • A real deadlock detector handling all the cases would be about 200 lines of code. • The whole body of a lock manager (including a deadlock detector) typically comes to about 1000 lines of code.
