1. PMIT-6102 Advanced Database Systems By Jesmin Akhter, Assistant Professor, IIT, Jahangirnagar University

2. Lecture 11: Distributed Concurrency Control

3. Outline
• Transaction
• Role of the distributed execution monitor
• Schedule
• Detailed Model of the Distributed Execution Monitor
• Distributed Concurrency Control
• Serializability in Distributed DBMS
• Concurrency Control Algorithms
  • Timestamp ordering
• Deadlock
  • Centralized Deadlock Detection
  • Distributed Deadlock Detection

4. Role of the distributed execution monitor
• The distributed execution monitor consists of two modules: a transaction manager (TM) and a scheduler (SC).
• The transaction manager is responsible for coordinating the execution of the database operations on behalf of an application.
• The scheduler, on the other hand, is responsible for the implementation of a specific concurrency control algorithm for synchronizing access to the database.
• A third component that participates in the management of distributed transactions is the local recovery manager (LRM) that exists at each site. Its function is to implement the local procedures by which the local database can be recovered to a consistent state following a failure.

5. Role of the distributed execution monitor
• Each transaction originates at one site, which we will call its originating site.
• The execution of the database operations of a transaction is coordinated by the TM at that transaction's originating site.
• The transaction managers implement an interface for the application programs which consists of five commands: begin_transaction, read, write, commit, and abort.

6. Role of the distributed execution monitor
• Begin_transaction: This is an indicator to the TM that a new transaction is starting. The TM does some bookkeeping, such as recording the transaction's name, the originating application, and so on, in coordination with the data processor.
• Read: If the data item to be read is stored locally, its value is read and returned to the transaction. Otherwise, the TM finds where the data item is stored and requests its value to be returned (after appropriate concurrency control measures are taken).
• Write: If the data item is stored locally, its value is updated (in coordination with the data processor). Otherwise, the TM finds where the data item is located and requests the update to be carried out at that site (after appropriate concurrency control measures are taken).
• Commit: The TM coordinates the sites involved in updating data items on behalf of this transaction so that the updates are made permanent at every site.
• Abort: The TM makes sure that no effects of the transaction are reflected in any of the databases at the sites where it updated data items.
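To make the interface concrete, here is a minimal Python sketch of these five commands as an abstract class. The class and method names are hypothetical (the slides contain no code), and each call would in practice involve messages to schedulers and data processors:

```python
from abc import ABC, abstractmethod

class TransactionManager(ABC):
    """The five-command interface the application sees (a sketch)."""

    @abstractmethod
    def begin_transaction(self, name: str) -> int:
        """Record bookkeeping info (name, originating application, ...)."""

    @abstractmethod
    def read(self, tid: int, item: str):
        """Return the item's value, read locally or fetched from the owning site."""

    @abstractmethod
    def write(self, tid: int, item: str, value) -> None:
        """Update the item at whichever site stores it."""

    @abstractmethod
    def commit(self, tid: int) -> None:
        """Make the transaction's updates permanent at every site."""

    @abstractmethod
    def abort(self, tid: int) -> None:
        """Erase the transaction's effects everywhere it wrote."""
```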

7. Detailed Model of the Distributed Execution Monitor
[Figure: applications issue Begin_transaction, Read, Write, Commit, and Abort to the transaction manager (TM) and receive results back; the TM exchanges scheduling/descheduling requests with the scheduler (SC); the TM and SC also communicate with the TMs and SCs at other sites; the SC passes operations down to the data processor.]

8. Serializability in Distributed DBMS
• Serializability is somewhat more involved than in a centralized DBMS. Two kinds of histories have to be considered:
  • local histories
  • the global history
• For global transactions (i.e., the global history) to be serializable, two conditions are necessary:
  • Each local history should be serializable.
  • Two conflicting operations should be in the same relative order in all of the local histories where they appear together.

9. Global Non-serializability
T1: Read(x)           T2: Read(x)
    x ← x + 5             x ← x * 15
    Write(x)              Write(x)
    Commit                Commit
The following two local histories are individually serializable (in fact serial), but the two transactions are not globally serializable:
LH1 = {R1(x), W1(x), C1, R2(x), W2(x), C2}
LH2 = {R2(x), W2(x), C2, R1(x), W1(x), C1}
LH1 orders T1 before T2 while LH2 orders T2 before T1, so the conflicting operations do not appear in the same relative order in both local histories.

10. Concurrency Control Algorithms
• Pessimistic
  • Two-Phase Locking-based (2PL)
    • Centralized (primary site) 2PL
    • Primary copy 2PL
    • Distributed 2PL
  • Timestamp Ordering (TO)
    • Basic TO
    • Multiversion TO
    • Conservative TO
  • Hybrid
• Optimistic
  • Locking-based
  • Timestamp ordering-based

11. Locking-Based Algorithms
• Transactions indicate their intentions by requesting locks from the scheduler (called the lock manager).
• Locks are either read locks (rl) [also called shared locks] or write locks (wl) [also called exclusive locks].
• Read locks and write locks conflict (because Read and Write operations are incompatible):

        rl    wl
  rl   yes    no
  wl    no    no

• Locking works nicely to allow concurrent processing of transactions.
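The compatibility table fits in a few lines of code. A minimal sketch with invented names (COMPATIBLE, can_grant); a real lock manager would also queue the blocked requests:

```python
# Compatibility of read (shared) and write (exclusive) locks.
COMPATIBLE = {
    ("rl", "rl"): True,   # two read locks may be held together
    ("rl", "wl"): False,  # a write lock conflicts with a read lock
    ("wl", "rl"): False,
    ("wl", "wl"): False,  # two write locks always conflict
}

def can_grant(requested: str, held: list) -> bool:
    """Grant a lock only if it is compatible with every lock already held."""
    return all(COMPATIBLE[(h, requested)] for h in held)

assert can_grant("rl", ["rl", "rl"])   # readers coexist
assert not can_grant("wl", ["rl"])     # a writer must wait for readers
assert can_grant("wl", [])             # free item: any lock can be granted
```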

12. Two-Phase Locking (2PL)
• A transaction locks an object before using it.
• When an object is locked by another transaction, the requesting transaction must wait.
• When a transaction releases a lock, it may not request another lock (see the sketch below).

[Figure: number of locks held over time between BEGIN and END. Phase 1 (growing): locks are obtained up to the lock point; Phase 2 (shrinking): locks are released from the lock point onward.]
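The third bullet is the essence of 2PL. A minimal sketch that enforces just that rule, with invented names (Transaction2PL, TwoPhaseViolation); lock conflicts between transactions are not modeled here:

```python
class TwoPhaseViolation(Exception):
    pass

class Transaction2PL:
    """Once the first lock is released, the shrinking phase begins
    and any further lock request is an error."""

    def __init__(self):
        self.held = set()
        self.shrinking = False   # True once the lock point has passed

    def lock(self, item):
        if self.shrinking:
            raise TwoPhaseViolation("lock requested after an unlock")
        self.held.add(item)      # growing phase (Phase 1)

    def unlock(self, item):
        self.shrinking = True    # lock point reached; Phase 2 begins
        self.held.discard(item)

t = Transaction2PL()
t.lock("x"); t.lock("y")   # Phase 1: obtain locks
t.unlock("x")              # lock point: first release
try:
    t.lock("z")            # violates the two-phase rule
except TwoPhaseViolation as e:
    print("rejected:", e)
```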

13. Strict 2PL
Hold locks until the end.

[Figure: locks are obtained during the period of data item use within the transaction duration, and all of them are released together at END; the shrinking phase collapses into the transaction's termination.]

14. Testing for Serializability
• Consider transactions T1, T2, …, Tk.
• Create a directed graph (called a conflict graph) whose nodes are the transactions.
• Consider a history of transactions. If T1 unlocks an item and T2 locks it afterwards, draw an edge from T1 to T2, implying that T1 must precede T2 in any serial history: T1 → T2.
• Repeat this for all unlock and lock actions of different transactions.
• If the graph has a cycle, the history is not serializable.
• If the graph is acyclic, a topological sort will give a serial history.

15. Example
T1: Lock X
T1: Unlock X
T2: Lock X
T2: Lock Y
T2: Unlock X
T2: Unlock Y
T3: Lock Y
T3: Unlock Y

Edges: T1 → T2 (item X) and T2 → T3 (item Y), giving the acyclic graph T1 → T2 → T3.
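Here is how slide 14's test looks in code, run on the example just given. A sketch with invented names (serial_order); Kahn's topological sort doubles as the cycle check:

```python
from collections import defaultdict

def serial_order(nodes, edges):
    """Topologically sort the conflict graph. Returns a serial history
    if the graph is acyclic, or None if there is a cycle (in which case
    the history is not serializable)."""
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    ready = [n for n in nodes if indegree[n] == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for m in successors[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order if len(order) == len(nodes) else None

# The example above: T1 -> T2 (item X) and T2 -> T3 (item Y).
print(serial_order(["T1", "T2", "T3"], [("T1", "T2"), ("T2", "T3")]))
# ['T1', 'T2', 'T3']  -- acyclic, so this serial history is equivalent
print(serial_order(["T1", "T2"], [("T1", "T2"), ("T2", "T1")]))
# None -- a cycle, so no serial history exists
```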

16. Theorem
Two-phase locking is a sufficient condition to ensure serializability.
Proof (by contradiction): If a history is not serializable, a cycle must exist in the conflict graph. This means the existence of a path T1 → T2 → T3 → … → Tk → T1. Along the path, T1 unlocked an item before T2 locked it, and the final edge Tk → T1 means T1 locked an item after Tk unlocked one, hence after T1's own earlier unlock. So T1 requested a lock after releasing a lock, which violates the two-phase condition.

17. Centralized 2PL
• Only one of the sites has a lock manager; the transaction managers at the other sites communicate with it rather than with their own lock managers. This approach is also known as the primary site 2PL algorithm.
• The communication between the cooperating sites in executing a transaction under the centralized 2PL (C2PL) algorithm is depicted in the figure on the next slide.
• This communication is between the transaction manager at the site where the transaction is initiated (called the coordinating TM), the lock manager at the central site, and the data processors (DP) at the other participating sites.
• The participating sites are those that store the data item and at which the operation is to be carried out.
• The order of messages is denoted in the figure.

18. Centralized 2PL
• There is only one 2PL scheduler in the distributed system.
• Lock requests are issued to the central scheduler.

[Figure: message flow between the coordinating TM, the LM at the central site, and the data processors at the participating sites: 1) Lock Request (TM → central LM), 2) Lock Granted (LM → TM), 3) Operation (TM → DPs), 4) End of Operation (DPs → TM), 5) Release Locks (TM → LM).]
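The central scheduler can be caricatured in a few lines. A toy sketch with invented names; real lock requests and grants are messages between sites rather than function calls, and blocked requests would be queued rather than simply refused:

```python
class CentralLockManager:
    """The single 2PL scheduler in the system (a sketch)."""

    def __init__(self):
        self.locks = {}                      # item -> holding transaction id

    def request(self, tid, item):
        """Message 1: Lock Request. Returns True for 'Lock Granted'."""
        holder = self.locks.setdefault(item, tid)
        return holder == tid                 # False: conflicting lock, wait

    def release_all(self, tid):
        """Message 5: Release Locks, sent after the operations complete."""
        self.locks = {i: t for i, t in self.locks.items() if t != tid}

lm = CentralLockManager()
assert lm.request(tid=1, item="x")           # T1: lock granted
# ... T1's coordinating TM now sends the Operation to the DPs (messages 3-4)
assert not lm.request(tid=2, item="x")       # T2 must wait for T1
lm.release_all(tid=1)
assert lm.request(tid=2, item="x")           # now T2 gets the lock
```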

19. Distributed 2PL
• 2PL schedulers are placed at each site. Each scheduler handles lock requests for data at that site.
• A transaction may read any one of the replicated copies of item x by obtaining a read lock on one of the copies of x. Writing into x requires obtaining write locks on all copies of x.
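This read-one/write-all (ROWA) treatment of replicas can be sketched as follows; the names and the per-site dict lock tables are invented, and a real scheduler would block the transaction rather than return False or None:

```python
# Replicated item x has copies at sites 1, 2 and 3.
replicas = {"x": [1, 2, 3]}
lock_tables = {s: {} for s in (1, 2, 3)}          # site -> {item: (mode, tid)}

def read_lock(tid, item):
    """Reading needs a read lock on any ONE copy of the item."""
    for site in replicas[item]:
        held = lock_tables[site].get(item)
        if held is None or held[0] == "rl":       # shared with other readers
            lock_tables[site][item] = ("rl", tid)
            return site                            # read this copy
    return None                                    # every copy write-locked

def write_lock(tid, item):
    """Writing needs write locks on ALL copies of the item."""
    if any(lock_tables[s].get(item) for s in replicas[item]):
        return False                               # some copy is locked: wait
    for site in replicas[item]:
        lock_tables[site][item] = ("wl", tid)
    return True

print(read_lock(1, "x"))     # 1     -- T1 reads the copy at site 1
print(write_lock(2, "x"))    # False -- T2 is blocked by T1's one read lock
```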

20. Distributed 2PL Execution

[Figure: message flow between the coordinating TM, the participating LMs, and the participating DPs: 1) Lock Request (TM → LMs), 2) Operation (LMs → DPs, passed on once the lock is granted), 3) End of Operation (DPs → TM), 4) Release Locks (TM → LMs).]

21. Timestamp Ordering
• Each transaction Ti is assigned a globally unique timestamp ts(Ti).
• The transaction manager attaches the timestamp to all operations issued by the transaction.
• Each data item x is assigned a write timestamp wts(x) and a read timestamp rts(x):
  • rts(x) = largest timestamp of any read on x
  • wts(x) = largest timestamp of any write on x
• Conflicting operations are resolved by timestamp order. Basic T/O:

for Ri(x):                                for Wi(x):
  if ts(Ti) < wts(x)                        if ts(Ti) < rts(x) or ts(Ti) < wts(x)
  then reject Ri(x)                         then reject Wi(x)
  else accept Ri(x); rts(x) ← ts(Ti)        else accept Wi(x); wts(x) ← ts(Ti)
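A minimal sketch of the Basic T/O rules in Python (invented names; rts and wts default to 0). A rejected operation would in practice abort the transaction, which restarts with a new, larger timestamp:

```python
rts, wts = {}, {}   # read / write timestamps per data item

def read(ts, x):
    if ts < wts.get(x, 0):
        return "reject"                 # x was already written by a younger Tj
    rts[x] = max(rts.get(x, 0), ts)     # remember the youngest reader of x
    return "accept"

def write(ts, x):
    if ts < rts.get(x, 0) or ts < wts.get(x, 0):
        return "reject"                 # a younger Tj has already read/written x
    wts[x] = ts
    return "accept"

print(write(5, "a"))   # accept: wts(a) = 5
print(read(3, "a"))    # reject: T3 is older than the last writer of a
print(read(7, "a"))    # accept: rts(a) = 7
print(write(6, "a"))   # reject: T7 has already read a
```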

22. Deadlock
• A transaction is deadlocked if it is blocked and will remain blocked until there is intervention.
• Locking-based CC algorithms may cause deadlocks.
• TO-based algorithms that involve waiting may cause deadlocks.
• Wait-for graph (WFG): if transaction Ti waits for another transaction Tj to release a lock on an entity, then the edge Ti → Tj is in the WFG.

23. Local versus Global WFG
Assume T1 and T2 run at site 1, and T3 and T4 run at site 2. Also assume T3 waits for a lock held by T4, which waits for a lock held by T1, which waits for a lock held by T2, which, in turn, waits for a lock held by T3.

[Figure: the local WFGs (site 1: T1 → T2; site 2: T3 → T4, plus the inter-site edges T2 → T3 and T4 → T1 crossing the site boundary) contain no cycle, while the global WFG contains the cycle T1 → T2 → T3 → T4 → T1.]
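Detecting that cycle is ordinary graph search. A sketch with invented names (find_cycle), using depth-first search over a WFG represented as an adjacency dict:

```python
def find_cycle(wfg):
    """Return one cycle in the wait-for graph, or None if there is none.
    wfg maps each transaction to the transactions it waits for."""
    visiting, done = set(), set()

    def dfs(t, path):
        visiting.add(t)
        for u in wfg.get(t, []):
            if u in visiting:                     # back edge: a cycle
                return path[path.index(u):] + [u]
            if u not in done:
                cycle = dfs(u, path + [u])
                if cycle:
                    return cycle
        visiting.discard(t)
        done.add(t)
        return None

    for t in wfg:
        if t not in done:
            cycle = dfs(t, [t])
            if cycle:
                return cycle
    return None

# The global WFG above: T1 -> T2 -> T3 -> T4 -> T1.
wfg = {"T1": ["T2"], "T2": ["T3"], "T3": ["T4"], "T4": ["T1"]}
print(find_cycle(wfg))   # ['T1', 'T2', 'T3', 'T4', 'T1'] -- deadlock
```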

24. Deadlock Management
• Ignore: Let the application programmer deal with it, or restart the system.
• Prevention: Guarantee that deadlocks can never occur in the first place by checking a transaction when it is initiated. Requires no run-time support.
• Avoidance: Detect potential deadlocks in advance and take action to ensure that deadlock will not occur. Requires run-time support.
• Detection and Recovery: Allow deadlocks to form, then find and break them. As in the avoidance scheme, this requires run-time support.

25. Deadlock Avoidance
• Transactions are not required to request resources a priori.
• Transactions are allowed to proceed unless a requested resource is unavailable.
• In case of conflict, transactions may be allowed to wait for a fixed time interval.
• Order either the data items or the sites, and always request locks in that order.
• More attractive than prevention in a database environment.

26. Deadlock Avoidance: Wait-Die and Wound-Wait Algorithms
WAIT-DIE rule: If Ti requests a lock on a data item which is already locked by Tj, then Ti is permitted to wait iff ts(Ti) < ts(Tj). If ts(Ti) > ts(Tj), then Ti is aborted and restarted with the same timestamp.
• if ts(Ti) < ts(Tj) then Ti waits else Ti dies
• non-preemptive: Ti never preempts Tj
• prefers younger transactions
WOUND-WAIT rule: If Ti requests a lock on a data item which is already locked by Tj, then Ti is permitted to wait iff ts(Ti) > ts(Tj). If ts(Ti) < ts(Tj), then Tj is aborted and the lock is granted to Ti.
• if ts(Ti) < ts(Tj) then Tj is wounded else Ti waits
• preemptive: Ti preempts Tj if Tj is younger
• prefers older transactions
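Each rule reduces to a single timestamp comparison on the requester Ti and the holder Tj. A minimal sketch (invented names; a smaller timestamp means an older transaction):

```python
def wait_die(ts_i, ts_j):
    """Decision when Ti (ts_i) requests a lock held by Tj (ts_j)."""
    return "Ti waits" if ts_i < ts_j else "Ti dies"

def wound_wait(ts_i, ts_j):
    """Same situation under the wound-wait rule."""
    return "Tj is wounded" if ts_i < ts_j else "Ti waits"

print(wait_die(1, 2))     # Ti waits      (older requester may wait)
print(wait_die(2, 1))     # Ti dies       (younger requester restarts)
print(wound_wait(1, 2))   # Tj is wounded (older requester preempts Tj)
print(wound_wait(2, 1))   # Ti waits      (younger requester waits)
```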

27. Deadlock Detection
• Transactions are allowed to wait freely.
• Wait-for graphs and cycles.
• Topologies for deadlock detection algorithms:
  • Centralized
  • Distributed
  • Hierarchical

28. Centralized Deadlock Detection
• One site is designated as the deadlock detector for the system. Each scheduler periodically sends its local WFG to the central site, which merges them into a global WFG to determine cycles.
• How often to transmit?
  • Too often → higher communication cost but lower delays due to undetected deadlocks
  • Too seldom → higher delays due to deadlocks, but lower communication cost
• A reasonable choice if the concurrency control algorithm is also centralized.
• Proposed for Distributed INGRES.
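Merging the transmitted local WFGs at the detector site is a simple union of edge lists. A sketch with invented names that reuses find_cycle from the sketch after slide 23; note that each local WFG here already includes the edges from local transactions to remote ones:

```python
def merge_wfgs(local_wfgs):
    """Union the local WFGs into one global WFG (a sketch)."""
    global_wfg = {}
    for wfg in local_wfgs:
        for t, waits_for in wfg.items():
            global_wfg.setdefault(t, []).extend(waits_for)
    return global_wfg

site1 = {"T1": ["T2"], "T2": ["T3"]}   # T2 waits for remote T3
site2 = {"T3": ["T4"], "T4": ["T1"]}   # T4 waits for remote T1
global_wfg = merge_wfgs([site1, site2])
print(find_cycle(global_wfg))          # ['T1','T2','T3','T4','T1']
# Neither site sees a cycle locally; only the merged graph reveals it.
```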

29. Distributed Deadlock Detection
• Sites cooperate in the detection of deadlocks.
• One example: the local WFGs are formed at each site and passed on to the other sites. Each local WFG is modified as follows:
  • Since each site receives the potential deadlock cycles from other sites, these edges are added to the local WFG.
  • The edges in the local WFG which show that local transactions are waiting for transactions at other sites are joined with the edges in the local WFGs which show that remote transactions are waiting for local ones.
• Each local deadlock detector:
  • looks for a cycle that does not involve the external edge. If it exists, there is a local deadlock which can be handled locally.
  • looks for a cycle involving the external edge. If it exists, it indicates a potential global deadlock; the information is passed on to the next site.

  30. Thank you
