
Distributed Algorithms



  1. Distributed Mutual Exclusion Ludovic HENRIO Ludovic.henrio@inria.fr Borrowed from plenty of sources, including Ghosh's lecture Distributed Algorithms

  2. Distributed Mutual Exclusion • Mostly from Sukumar Ghosh's book and handouts: • 1 – Introduction • 2 – Solutions Using Message Passing • 3 – Token Passing Algorithms • Also in Ghosh's book (not covered by this lecture): • Group-based mutual exclusion • Mutual exclusion using special instructions • Solution using Test-and-Set • Solution using the DEC LL and SC instructions

  3. Distributed Mutual Exclusion • 1 – Introduction • 2 – Solutions Using Message Passing • 3 – Token Passing Algorithms

  4. 1- Introduction Why Do We Need Distributed Mutual Exclusion (DME)? • Atomicity exists only up to a certain level: • Atomic instructions define the granularity of the computation and the types of possible interleaving • At which level is an operation atomic: an assembly-language instruction? A remote procedure call?

  5. 1- Introduction Why Do We Need Distributed Mutual Exclusion (DME) ? • Some applications are: • Resource sharing • Avoiding concurrent update on shared data • Controlling the grain of atomicity • Medium Access Control in Ethernet • Collision avoidance in wireless broadcasts

  6. 1- Introduction Why Do We Need Distributed Mutual Exclusion (DME)? • Example: Bank Account Operations • shared n : integer • Process P: the account receives amount nP; computation n = n + nP: P1. Load Reg_P, n P2. Add Reg_P, nP P3. Store Reg_P, n • Process Q: the account receives amount nQ; computation n = n + nQ: Q1. Load Reg_Q, n Q2. Add Reg_Q, nQ Q3. Store Reg_Q, n

  7. 1- Introduction Why Do We Need DME? (example cont'd) • Possible interleavings of the executions of P and Q: • 2 give the expected result n = n + nP + nQ: • P1, P2, P3, Q1, Q2, Q3 • Q1, Q2, Q3, P1, P2, P3 • 5 give the erroneous result n = n + nQ: • P1, Q1, P2, Q2, P3, Q3 • P1, P2, Q1, Q2, P3, Q3 • P1, Q1, Q2, P2, P3, Q3 • Q1, P1, Q2, P2, P3, Q3 • Q1, Q2, P1, P2, P3, Q3 • 5 give the erroneous result n = n + nP: • Q1, P1, Q2, P2, Q3, P3 • Q1, Q2, P1, P2, Q3, P3 • Q1, P1, P2, Q2, Q3, P3 • P1, Q1, P2, Q2, Q3, P3 • P1, P2, Q1, Q2, Q3, P3
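To make the failed interleavings concrete, here is a minimal Python sketch (not from the slides; the thread roles and amounts are illustrative) that mimics the Load/Add/Store sequence with two unsynchronized threads:

    import threading

    n = 0  # shared account balance

    def deposit(amount):
        # Mimic the three-step Load / Add / Store sequence without any lock,
        # so the erroneous interleavings listed above can occur.
        global n
        reg = n             # Load
        reg = reg + amount  # Add
        n = reg             # Store

    # Two concurrent deposits, playing the roles of P and Q.
    p = threading.Thread(target=deposit, args=(100,))
    q = threading.Thread(target=deposit, args=(50,))
    p.start(); q.start()
    p.join(); q.join()
    print(n)  # expected 150; an unlucky interleaving yields 100 or 50
    # (For a single deposit the race rarely triggers in practice; looping
    #  many deposits per thread makes the lost update easy to observe.)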

  8. 1- Introduction Solutions to the Mutual Exclusion Problem [Figure: processes p0, p1, p2, p3 entering the critical section (CS) one at a time]

  9. 1- Introduction Principle of the Mutual Exclusion Problem (2) • Each process, before entering the CS, acquires the authorization to do so. • The critical section should eventually terminate. [Figure: two processes each perform Acquire authorisation; Enter CS; <critical section>; Exit CS]

  10. Mutual exclusion in the shared memory model – a solution for 2 processes
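The transcript does not reproduce the slide's shared-memory construction; as an illustration only, here is a sketch of one classic 2-process solution, Peterson's algorithm (the variable names are mine, not the slide's):

    # Peterson's algorithm for two processes with ids 0 and 1 (sketch).
    flag = [False, False]  # flag[i]: process i wants to enter the CS
    turn = 0               # which process yields when both are interested

    def lock(i):
        global turn
        other = 1 - i
        flag[i] = True     # announce interest
        turn = other       # politely give priority to the other process
        while flag[other] and turn == other:
            pass           # busy-wait while the other is interested and has priority

    def unlock(i):
        flag[i] = False    # leave the critical section

    # Usage (one thread per process id):
    #   lock(0); ... critical section ...; unlock(0)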

  11. 2- Solutions using Message Passing Correctness Conditions... • ME1: Mutual Exclusion • At most one process can remain in the CS at any time • Safety property • ME2: Freedom from deadlock • At least one process is eligible to enter the CS • Liveness property • ME3: Fairness • Every process trying to enter must eventually succeed • Absence of starvation • A measure of fairness: bounded waiting • Specifies an upper bound on the number of times a process waits for its turn to enter the CS -> n-fairness (n is the MAXIMUM number of rounds) • FIFO fairness when n = 0

  12. How to Measure Performance • Number of messages per CR invocation. • Fairness measure (next slide). • Synchronization Delay (SD). • Response Time (RT). • System Throughput (ST): • ST = 1/(SD + E) • where E is the average CR execution time. [Timing diagram: SD runs from the moment the last request exits the CR to the moment the next request enters it; RT runs from the CR request to the end of its execution E; the label "Used here" marks the measures retained in this lecture]

  13. Last words before getting to the algorithms … • Fault tolerance: in case of failure, the algorithm can reorganize itself so that it continues to function without any disruption -> this is largely not true for the algorithms presented here. • Two kinds of algorithms: logical-clock-based and token-based.

  14. Distributed Mutual Exclusion • 1 – Introduction • 2 – Solutions using Message Passing • 3 – Token Passing Algorithms

  15. 2- Solutions using Message Passing Problem formulation • Assumptions • n processes (n > 1), numbered 0 ... n-1, noted Pi, communicating by sending / receiving messages • topology: completely connected graph • each Pi periodically wants to: • enter the Critical Section (CS) • execute the CS code • eventually exit the CS code • Devise a protocol that satisfies: ME1: Mutual Exclusion ME2: Freedom from deadlock ME3: Progress → Fairness

  16. A trivial solution: centralized solution • The trivial solution consists in having one centralised server distributing authorisations • Queue requests and authorize one by one • Drawbacks: • Centralized server • Faults, contention [Figure: processes 0-4 acquire authorisation from a central Server, which records the pending requests, e.g. req=[1,1,1,0,0]]

  17. 2- Solutions using Message Passing Centralized solution • Use a coordinator process • Either an external process • Or one of the Pi-s • Problems: • Single point of failure • Unable to achieve FIFO fairness (except if CO, i.e. causally ordered channels) [Figure: clients i and j send "request CS" messages to the coordinator, which keeps busy: boolean and a queue, and answers with reply / release messages] • How to anticipate a late-arriving request? What if a timestamp is given when sending the CS request?
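A minimal sketch of the centralized scheme discussed above (class and message names are illustrative; message passing is replaced by a reply callback):

    from collections import deque

    class Coordinator:
        # Grants the CS to one client at a time; other requests wait in a queue
        # ordered by arrival at the server (hence only server-side FIFO fairness).
        def __init__(self):
            self.busy = False
            self.queue = deque()

        def on_request(self, client, reply):
            if self.busy:
                self.queue.append((client, reply))  # keep the client waiting
            else:
                self.busy = True
                reply(client)                       # grant the CS immediately

        def on_release(self):
            if self.queue:
                client, reply = self.queue.popleft()
                reply(client)                       # hand the grant to the next waiter
            else:
                self.busy = False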

  18. 2- Solutions using Message Passing Lamport's Solution • Assumptions: • Each communication channel is FIFO • Each process maintains a request queue Q • Algorithm described by 5 rules: LA1. To request entry, send a time-stamped message to every other process and enqueue the request in the local Q (of the sender) LA2. Upon reception, place the request in Q and send a time-stamped ACK, but only once out of the CS (possibly immediately if already out) LA3. Enter the CS when: • the request is first in Q (chronological order) • all ACKs have been received from the others LA4. To exit the CS, a process must: • delete the request from Q • send a time-stamped release message to the others LA5. When receiving a release message, remove the request from Q • Run an example with 3 processes and different interleavings
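A compact Python sketch of rules LA1-LA5 (the send function and message tags are assumptions; a real implementation needs a transport layer with FIFO channels):

    class LamportMutex:
        def __init__(self, my_id, peers, send):
            self.id, self.peers, self.send = my_id, peers, send
            self.clock = 0
            self.queue = []        # pending requests: (timestamp, process id)
            self.my_ts = None
            self.acks = set()

        def request_cs(self):                      # LA1
            self.clock += 1
            self.my_ts = (self.clock, self.id)
            self.queue.append(self.my_ts)
            self.acks = set()
            for p in self.peers:
                self.send(p, ('REQ', self.my_ts))

        def on_request(self, sender, ts):          # LA2 (ACK deferral while in CS omitted)
            self.clock = max(self.clock, ts[0]) + 1
            self.queue.append(ts)
            self.send(sender, ('ACK', self.id))

        def on_ack(self, sender):
            self.acks.add(sender)

        def can_enter_cs(self):                    # LA3
            return self.my_ts == min(self.queue) and self.acks == set(self.peers)

        def release_cs(self):                      # LA4
            self.queue.remove(self.my_ts)
            for p in self.peers:
                self.send(p, ('REL', self.my_ts))

        def on_release(self, ts):                  # LA5
            self.queue.remove(ts)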

  19. 2- Solutions using Message Passing Analysis of Lamport's Solution • Can you show that it satisfies all the properties (i.e. ME1, ME2, ME3) of a correct solution? • Observation. Processes taking the decision to enter the CS must have identical views of their local queues once all ACKs have been received. WHY? • Proof of ME1. At most one process can be in its CS at any time. • Suppose not, and both j and k enter their CS. This implies • j in CS ⇒ Qj.ts.j < Qk.ts.k • k in CS ⇒ Qk.ts.k < Qj.ts.j • Impossible. WHY?

  20. 2- Solutions Using Message Passing Analysis of Lamport's Solution (2) • Proof of ME2. (No deadlock) • The waiting chain is acyclic: • i waits for j ⇒ i is behind j in all queues (or j is in its CS) ⇒ j does not wait for i. WHY? • Proof of ME3. (Progress) New requests join the end of the queues, so new requests do not pass the old ones. ALWAYS?

  21. Analysis of Lamport's Solution (3) • Proof of FIFO fairness. • timestamp(j) < timestamp(k) ⇒ j enters its CS before k does. • Suppose not, so k enters its CS before j. Then k did not receive j's request, yet k received the ack from j for its own request. • This is impossible if the channels are FIFO. [Figure: requests Req(20) and Req(30) and the ack from j, showing that on a FIFO channel the ack cannot overtake j's earlier request] • Message complexity = 3(N-1) per trip to the CS • (N-1 requests + N-1 acks + N-1 releases) • Exercise: IS THIS REALLY FIFO FAIRNESS? (remember there is no global time) If not: is the system n-fair? For which n? Why? (note: a lot of messages are exchanged, which gives the opportunity for different clocks to synchronise)

  22. 2- Solutions Using Message Passing Ricart & Agrawala's Solution • What is new? • 1. Broadcast a timestamped request to all. • 2. Upon receiving a request, send an ack if • - you do not want to enter your CS, or • - you are trying to enter your CS, but your timestamp is higher than that of the sender. • (If you are already in the CS, then buffer the request.) • 3. Enter the CS when you have received an ack from all. • 4. Upon exit from the CS, send an ack to each pending request before making a new request. • (No release message is necessary.) • Run an example with 3 processes and different interleavings
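The key difference, deferring acks instead of sending releases, in a short sketch (send and the message tags are assumptions, as in the Lamport sketch above):

    class RicartAgrawala:
        def __init__(self, my_id, peers, send):
            self.id, self.peers, self.send = my_id, peers, send
            self.clock = 0
            self.requesting = False   # True while trying to enter or inside the CS
            self.my_ts = None
            self.acks = set()
            self.deferred = []        # requests to answer after leaving the CS

        def request_cs(self):
            self.clock += 1
            self.my_ts = (self.clock, self.id)
            self.requesting, self.acks = True, set()
            for p in self.peers:
                self.send(p, ('REQ', self.my_ts))

        def on_request(self, sender, ts):
            self.clock = max(self.clock, ts[0]) + 1
            if self.requesting and self.my_ts < ts:
                self.deferred.append(sender)   # I have priority (or am in CS): defer the ack
            else:
                self.send(sender, ('ACK', self.id))

        def on_ack(self, sender):
            self.acks.add(sender)              # enter the CS once acks == set(peers)

        def release_cs(self):
            self.requesting = False
            for p in self.deferred:            # one ack per pending request, no release msg
                self.send(p, ('ACK', self.id))
            self.deferred = []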

  23. 2- Solutions Using Message Passing Analysis of Ricart & Agrawala's Solution • Exercise • ME1. Prove that at most one process can be in the CS. • ME2. Prove that deadlock is not possible. • ME3. Prove that FIFO fairness holds even if channels are not FIFO (note: this is the same fairness as in Lamport's solution). • Message complexity = 2(N-1) • (N-1 requests + N-1 acks – no release message) [Figure: processes j and k exchange Req(j), Req(k) and Ack(j), with TS(j) < TS(k)]

  24. Exercises • A generalized version of the mutual exclusion problem in which up to L processes (L ≥ 1) are allowed to be in their critical sections simultaneously is known as the L-exclusion problem. Precisely, if fewer than L processes are in the CS at any time and one more process wants to enter it, it must be allowed to do so. Modify the R.-A. algorithm to solve the L-exclusion problem.

  25. Distributed Mutual Exclusion • 1 – Introduction • 2 – Solutions Using Message Passing • 3 – Token Passing Algorithms

  26. Token Ring Approach • Processes are organized in a logical ring: pi has a communication channel to p(i+1) mod n. • Operations: • Only the process holding the token can enter the CS. • To enter the critical section, wait passively for the token. When in the CS, hold on to the token. • To exit the CS, the process sends the token on to its neighbor. • If a process does not want to enter the CS when it receives the token, it forwards the token to the next neighbor. [Figure: ring of processes P0 … PN-1; P0 is the previous holder of the token, P1 the current holder, P3 a next holder]
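A sketch of a ring node (send is an assumed transport primitive; the critical-section body is left to the caller):

    class RingNode:
        # Only the token holder may enter the CS; on exit (or if uninterested)
        # it forwards the token to its neighbour (id + 1) mod n.
        def __init__(self, my_id, n, send):
            self.id, self.n, self.send = my_id, n, send
            self.want_cs = False
            self.has_token = False

        def on_token(self):
            self.has_token = True
            if not self.want_cs:
                self.pass_token()    # not interested: forward immediately
            # otherwise hold the token, enter the CS, then call exit_cs()

        def exit_cs(self):
            self.want_cs = False
            self.pass_token()

        def pass_token(self):
            self.has_token = False
            self.send((self.id + 1) % self.n, 'TOKEN')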

  27. The basic ring approach • Features: • Safety & liveness are guaranteed, but ordering is not. • Bandwidth: 1 message per exit • (N-1)-fairness • Delay between one process's exit from the CS and the next process's entry is between 1 and N-1 message transmissions.

  28. 3- Token passing algorithms Suzuki-Kasami Solution • A mix of the Lamport queue and the token approach • Completely connected network of processes • There is one token in the network. The holder of the token has the permission to enter the CS. • Any other process trying to enter the CS must acquire that token. Thus the token moves from one process to another based on demand. [Figure: several processes announce "I want to enter CS"]

  29. 3- Token passing algorithms Suzuki-Kasami Algorithm • Process i broadcasts (i, num) • Each process maintains • - an array req: req[j] denotes the sequence number of the latest request from process j • (Some requests will soon be stale) • Additionally, the holder of the token maintains • - an array last: last[j] denotes the sequence number of the latest visit to the CS by process j • - a queue Q of waiting processes • Declarations: req: array[0..n-1] of integer; last: array[0..n-1] of integer [Figure: every process holds its req array; the token holder also keeps last and the request queue Q]

  30. 3- Token passing algorithms Suzuki-Kasami Algorithm (2) • When a process i receives a request (k, num) from process k, it sets req[k] to max(req[k], num). • The holder of the token • -- completes its CS • -- sets last[i] := its own num • -- updates Q by retaining each process k only if 1 + last[k] = req[k] • (This guarantees the freshness of the request) • -- sends the token to the head of Q, along with the array last and the tail of Q • In fact, token = (Q, last)

  31. 3- Token passing algorithms Suzuki-Kasami Algorithm (3) • {Program of process j} • Initially, ∀i: req[i] = last[i] = 0 • * Entry protocol * • req[j] := req[j] + 1 • Send (j, req[j]) to all • Wait until token (Q, last) arrives • Critical Section • * Exit protocol * • last[j] := req[j] • ∀k ≠ j: (k ∉ Q) ∧ (req[k] = last[k] + 1) ⇒ append k to Q; • if Q is not empty ⇒ send (tail-of-Q, last) to head-of-Q fi • * Upon receiving a request (k, num) * • req[k] := max(req[k], num)
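The same program rendered as a Python sketch (send is an assumed transport primitive; the token is represented as the pair (Q, last)):

    class SuzukiKasami:
        def __init__(self, my_id, n, send):
            self.id, self.n, self.send = my_id, n, send
            self.req = [0] * n       # highest request number seen from each process
            self.token = None        # (Q, last) when held, else None

        def request_cs(self):        # * Entry protocol *
            self.req[self.id] += 1
            for k in range(self.n):
                if k != self.id:
                    self.send(k, ('REQ', self.id, self.req[self.id]))
            # then wait until the token (Q, last) arrives, and enter the CS

        def on_request(self, k, num):   # * Upon receiving a request (k, num) *
            self.req[k] = max(self.req[k], num)

        def release_cs(self):        # * Exit protocol * (we hold the token)
            Q, last = self.token
            last[self.id] = self.req[self.id]
            for k in range(self.n):  # keep only fresh, not-yet-queued requests
                if k != self.id and k not in Q and self.req[k] == last[k] + 1:
                    Q.append(k)
            if Q:
                head, tail = Q[0], Q[1:]
                self.token = None
                self.send(head, ('TOKEN', (tail, last)))

        def on_token(self, token):
            self.token = token       # permission to enter the CS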

  32. 3- Token passing algorithms Example of Suzuki-Kasami Algorithm Execution • Initial state: process 0 has sent a request to all, and grabbed the token. • Every process has req=[1,0,0,0,0]; the token holder (0) has last=[0,0,0,0,0] and an empty queue.

  33. 3- Token passing algorithms Example of Suzuki-Kasami Algorithm Execution • 1 & 2 send requests to enter the CS. • Every process now has req=[1,1,1,0,0]; the token holder (0) still has last=[0,0,0,0,0].

  34. 3- Token passing algorithms Example of Suzuki-Kasami Algorithm Execution • 0 prepares to exit the CS. • Every process has req=[1,1,1,0,0]; the token now carries last=[1,0,0,0,0] and Q=(1,2).

  35. 3- Token passing algorithms Example of Suzuki-Kasami Algorithm Execution • 0 passes the token (Q and last) to 1. • Every process has req=[1,1,1,0,0]; the token carries last=[1,0,0,0,0] and Q=(2).

  36. 3- Token passing algorithms Example of Suzuki-Kasami Algorithm Execution • 0 and 3 send requests. • Every process now has req=[2,1,1,1,0]; the token holder (1) has last=[1,0,0,0,0] and Q=(2,0,3).

  37. 3- Token passing algorithms Example of Suzuki-Kasami Algorithm Execution • 1 sends the token to 2. • Every process has req=[2,1,1,1,0]; the token now carries last=[1,1,0,0,0] and Q=(0,3).

  38. Summary and advantages • Token-based + queue: • Satisfies ME1 to ME3. WHY? • Fewer messages: N per CS. WHY? • Question: is this algorithm fair? All requests received during the CS are enqueued at the same position; can't we do better? • Note: the indices can be bounded. • Note 2: a similar algorithm was published by Ricart and Agrawala at the same period.

  39. Back to message passing: Maekawa's Solution • First solution with a sublinear O(√N) message complexity. • "Close to" Ricart-Agrawala's solution, but each process is required to obtain permission from only a subset of peers.

  40. 2- Solutions Using Message Passing Maekawa's Algorithm • With each process i, associate a subset Si. Divide the set of processes into subsets that satisfy the following two conditions: • i ∈ Si • ∀i,j : 0 ≤ i,j ≤ n-1 :: Si ∩ Sj ≠ ∅ • Main idea. Each process i is required to receive permission from Si only. Correctness requires that multiple processes can never receive permission from all members of their respective subsets. [Figure: S0 = {0,1,2}, S1 = {1,3,5}, S2 = {2,4,5} drawn as overlapping sets]

  41. 2- Solutions Using Message Passing Maekawa’s Algorithm • Example. Let there be seven processes 0, 1, 2, 3, 4, 5, 6 • S0 = {0, 1, 2} • S1 = {1, 3, 5} • S2 = {2, 4, 5} • S3 = {0, 3, 4} • S4 = {1, 4, 6} • S5 = {0, 5, 6} • S6 = {2, 3, 6}
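A few lines (illustrative, not from the slides) checking that these seven subsets satisfy the two conditions of the previous slide:

    S = [{0, 1, 2}, {1, 3, 5}, {2, 4, 5}, {0, 3, 4}, {1, 4, 6}, {0, 5, 6}, {2, 3, 6}]

    assert all(i in S[i] for i in range(7))                       # i is in its own Si
    assert all(S[i] & S[j] for i in range(7) for j in range(7))   # any two Si, Sj intersect
    print("both Maekawa conditions hold")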

  42. 2- Solutions Using Message Passing Maekawa's Algorithm (example cont'd) • Version 1 {Life of process i} • 1. Send a timestamped request to each process in Si. • 2. Request received ⇒ send ack to the process with the lowest timestamp. Thereafter, "lock" (i.e. commit) yourself to that process, and keep the others waiting. • 3. Enter the CS if you receive an ack from each member of Si. • 4. To exit the CS, send release to every process in Si. • 5. Release received ⇒ unlock yourself. Then send ack to the next process with the lowest timestamp. • (Quorums S0 … S6 as listed on slide 41)
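A sketch of version 1 (which, as the following slides show, can deadlock); send and the message tags are assumptions, and requests carry Lamport-style (timestamp, id) pairs:

    import heapq

    class MaekawaV1:
        def __init__(self, my_id, quorum, send):
            self.id, self.quorum, self.send = my_id, quorum, send
            self.clock = 0
            self.acks = set()
            self.locked_for = None   # request this node is currently committed to
            self.waiting = []        # heap of deferred requests (timestamp, requester)

        def request_cs(self):                      # rule 1
            self.clock += 1
            self.acks = set()
            for p in self.quorum:
                self.send(p, ('REQ', (self.clock, self.id)))

        def on_request(self, request):             # rule 2
            if self.locked_for is None:
                self.locked_for = request          # "lock" yourself to this requester
                self.send(request[1], ('ACK', self.id))
            else:
                heapq.heappush(self.waiting, request)  # keep the others waiting

        def on_ack(self, member):                  # rule 3: enter CS once acks == quorum
            self.acks.add(member)

        def release_cs(self):                      # rule 4
            for p in self.quorum:
                self.send(p, ('RELEASE', self.id))

        def on_release(self):                      # rule 5
            self.locked_for = None
            if self.waiting:                       # ack the waiter with the lowest timestamp
                self.locked_for = heapq.heappop(self.waiting)
                self.send(self.locked_for[1], ('ACK', self.id))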

  43. 2- Solutions Using Message Passing Analysis of Maekawa's Algorithm (version 1) • ME1. At most one process can enter its critical section at any time. • Let i and j attempt to enter their critical sections. • Si ∩ Sj ≠ ∅ ⇒ there is a process k ∈ Si ∩ Sj. • Process k will never send ack to both. • So it acts as the arbitrator and establishes ME1. • (Quorums S0 … S6 as listed on slide 41)

  44. 2- Solutions Using Message Passing Analysis of Maekawa's Algorithm (version 1) • ME2. No deadlock. Unfortunately deadlock is possible! Assume 0, 1, 2 want to enter their critical sections. • From S0 = {0,1,2}: 0 and 2 send ack to 0, but 1 sends ack to 1. • From S1 = {1,3,5}: 1 and 3 send ack to 1, but 5 sends ack to 2. • From S2 = {2,4,5}: 4 and 5 send ack to 2, but 2 sends ack to 0. • Now, 0 waits for 1 (to send a release), 1 waits for 2 (to send a release), and 2 waits for 0 (to send a release). So deadlock is possible! • (Quorums S0 … S6 as listed on slide 41)

  45. Problem: Potential of Deadlock • Without loss of generality, assume that three sites Si, Sj and Sk simultaneously invoke mutual exclusion, and suppose: • Ri ∩ Rj = {Su} • Rj ∩ Rk = {Sv} • Rk ∩ Ri = {Sw} • Solution: extra messages to detect deadlock; maximum number of messages = 5(√n + 1). [Figure: Si, Sj and Sk each wait (W) for an Ack held up at the arbitrating sites Su, Sv, Sw, forming a cycle]

  46. 2- Solutions Using Message Passing Maekawa's Algorithm, avoiding deadlocks • Avoiding deadlock • If processes always received messages in increasing order of timestamp, then deadlock could be avoided. But this is too strong an assumption. • A more complex version exists that ensures the absence of deadlocks (not presented here). • (Quorums S0 … S6 as listed on slide 41)

  47. EXERCISES

  48. Dekker's algorithm • Designed for shared memory (oldest) • Does it verify ME1 – ME2 – ME3?
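To support the exercise, a sketch of Dekker's algorithm (two processes, ids 0 and 1; the variable names are mine):

    want = [False, False]   # want[i]: process i wants the CS
    turn = 0                # which process may insist when both want the CS

    def lock(i):
        global turn
        other = 1 - i
        want[i] = True
        while want[other]:          # contention: the other process also wants the CS
            if turn == other:       # not my turn: back off ...
                want[i] = False
                while turn == other:
                    pass            # ... wait for my turn ...
                want[i] = True      # ... then insist again
        # the critical section can now be entered

    def unlock(i):
        global turn
        turn = 1 - i                # hand the turn to the other process
        want[i] = False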

  49. Bounded timestamps • Can we apply this idea to the algorithms using timestamps shown in this lecture?

  50. Fault tolerance • Take the basic token algorithm (RING). • Suppose processes are given a number. • Suppose a process crashes but a new ring is automatically created (keeping the old numbers). • What happens if the disappeared process did not have the token? • What if it had the token? • What information can we add to the token so that it can be regenerated if necessary? • if the disappeared process was neither P0 nor Pn • if it was P0 • if it was Pn (how to detect that it was Pn?) • See Le Lann 77
