
O(log n / log log n) RMRs Randomized Mutual Exclusion






Presentation Transcript


  1. O(log n / log log n) RMRs Randomized Mutual Exclusion. Danny Hendler (Ben-Gurion University) and Philipp Woelfel (University of Calgary). PODC 2009.

  2. Talk outline • Prior art and our results • Basic Algorithm (CC) • Enhanced Algorithm (CC) • Pseudo-code • Open questions

  3. Most Relevant Prior Art
  • Best upper bound for mutual exclusion: O(log n) RMRs (Yang and Anderson, Distributed Computing '96)
  • A tight Θ(n log n) RMRs lower bound for deterministic mutex (Attiya, Hendler and Woelfel, STOC '08)
  • Compare-and-swap (CAS) is equivalent to read/write for RMR complexity (Golab, Hadzilacos, Hendler and Woelfel, PODC '07)

  4. Our Results
  Randomized mutual exclusion algorithms (for both CC/DSM) that have:
  • O(log N / log log N) expected RMR complexity against a strong adversary, and
  • O(log N) deterministic worst-case RMR complexity
  This gives a separation in terms of RMR complexity between deterministic and randomized mutual exclusion algorithms.

  5. Shared-memory scheduling adversary types
  • Oblivious adversary: makes all scheduling decisions in advance
  • Weak adversary: sees a process' coin-flip only after the process takes the following step; can change future scheduling based on history
  • Strong adversary: can change future scheduling after each coin-flip / step, based on history

  6. Talk outline • Prior art and our results • Basic algorithm (CC model) • Enhanced Algorithm (CC model) • Pseudo-code • Open questions

  7. Basic Algorithm – Data Structures
  Key idea: processes apply randomized promotion.
  [Figure: a Δ-ary arbitration tree with levels labelled from Δ at the root down to 0 at the leaves 1 … n, where Δ = Θ(log n / log log n).]
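Not on the slide, but a quick check of why this choice of fan-out gives a tree of the stated height:

```latex
% Height of a \Delta-ary tree with n leaves, for \Delta = \Theta(\log n / \log\log n):
h \;=\; \log_\Delta n \;=\; \frac{\log n}{\log \Delta}
  \;=\; \frac{\log n}{\Theta(\log\log n)}
  \;=\; \Theta\!\left(\frac{\log n}{\log\log n}\right)
```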

  8. Basic Algorithm – Data Structures (cont'd)
  Per-node structure:
  • lock ∈ P ∪ {⊥}
  • apply: <v1, v2, …, vΔ>
  Global structures:
  • notified[1…n]
  • Promotion Queue (pi1, pi2, …, pik)
  [Figure: the Δ-ary tree from the previous slide, annotated with the per-node fields and the global promotion queue.]
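To make the per-node record concrete, here is a minimal Go sketch of the structures on slides 7 and 8. All names (rmrmutex, treeNode, nProcs, delta, free, notified, promotion) are illustrative choices, not identifiers from the paper, and the small constant delta merely stands in for Δ = Θ(log n / log log n).

```go
// Package rmrmutex is an illustrative sketch, not the authors' code.
package rmrmutex

import "sync/atomic"

const (
	nProcs       = 64 // number of processes n (arbitrary example value)
	delta        = 4  // stands in for Δ = Θ(log n / log log n)
	free   int64 = -1 // plays the role of ⊥ (no process)
)

// treeNode models one node of the Δ-ary arbitration tree (slide 8).
type treeNode struct {
	lock  atomic.Int64        // id of the process holding the node, or free (⊥)
	apply [delta]atomic.Int64 // apply[c]: id of a process waiting via child c, or free
}

// Global state shared by all processes (slide 8).
var (
	notified  [nProcs]atomic.Bool        // notified[i]: process i may enter the CS
	promotion = make(chan int64, nProcs) // stands in for the promotion queue
)
```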

  9. Basic Algorithm – Entry Section
  [Figure: process i climbs from its leaf and captures a free node by performing CAS(⊥, i) on its lock field, so lock = i; the node's apply array is <v1, ⊥, …, vΔ>.]

  10. Basic Algorithm – Entry Section: scenario #2
  [Figure: process i's CAS(⊥, i) fails because the node's lock is already held by q (lock = q); the apply array is <v1, ⊥, …, vΔ>.]

  11. Basic Algorithm – Entry Section: scenario #2
  [Figure: with the lock still held by q, process i writes its id into the apply slot of the child it arrived through and waits: await (n.lock = ⊥ || apply[ch] = ⊥).]

  12. Basic Algorithm – Entry Section: scenario #2
  [Figure: same configuration; process i keeps spinning on await (n.lock = ⊥ || apply[ch] = ⊥) while q holds the lock.]

  13. Basic Algorithm – Entry Section: scenario #2
  [Figure: process i has been promoted; it waits on await (notified[i] = true) and then enters the critical section (CS).]
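The following Go function sketches the entry-section flow pictured on slides 9 through 13, building on the treeNode/notified/promotion declarations from the previous sketch. The parameters path and childIdx (for each node, the child through which the process arrives) are hypothetical; the published algorithm handles races and bookkeeping that this skeleton deliberately elides.

```go
package rmrmutex

import "runtime"

// lockEntry sketches the entry section for process i, given the nodes on its
// leaf-to-root path and, for each node, the index of the child it arrives from.
func lockEntry(i int64, path []*treeNode, childIdx []int) {
	for level, n := range path {
		ch := childIdx[level]
		for {
			// Try to capture the node with CAS(⊥, i) (slide 9).
			if n.lock.CompareAndSwap(free, i) {
				break // captured; climb to the next node
			}
			// The lock is held by some q: announce ourselves in apply[ch]
			// and wait until the lock is freed or our slot is cleared,
			// i.e. await (n.lock = ⊥ || apply[ch] = ⊥) (slides 10-12).
			n.apply[ch].Store(i)
			for n.lock.Load() != free && n.apply[ch].Load() == i {
				runtime.Gosched() // local spinning; only a sketch of the CC busy-wait
			}
			if n.apply[ch].Load() != i {
				// Our slot was cleared: we were promoted. Wait for the
				// notification and enter the CS (slide 13).
				for !notified[i].Load() {
					runtime.Gosched()
				}
				return
			}
			// Otherwise the lock became free: retry the CAS.
			// (The real algorithm must also deal with the stale apply[ch]
			// entry at this point; that race is elided here.)
		}
	}
	// Captured every lock up to the root: enter the CS directly.
}
```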

  14. Basic Algorithm – Exit Section
  Climb up from the leaf until the last node captured in the entry section.
  [Figure: the exiting process p holds the node's lock (lock = p); the apply array <v1, q, …, vΔ> contains a waiting process q, and a lottery is performed over the apply slots.]

  15. Basic Algorithm – Exit Section
  Perform a lottery on the root.
  [Figure: p holds the root lock (lock = p, apply: <v1, ⊥, …, vΔ>) and performs the lottery; the winner is appended to the promotion queue.]

  16. Basic Algorithm – Exit Section
  [Figure: the process at the head of the promotion queue, t, is notified; its await (notified[i] = true) completes and t enters the CS.]

  17. Basic Algorithm – Exit Section (scenario #2)
  [Figure: the promotion queue is empty, so the exiting process frees the root lock.]
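A similarly hedged Go sketch of the exit-section flow on slides 14 through 17, in the same illustrative package. The parameter captured is assumed to list the nodes this process locked in its entry section, from the lowest one up to the root; the precise handling of the locks and of the shared queue is simplified compared to the paper.

```go
package rmrmutex

import "math/rand"

// unlockExit sketches the exit section: a lottery (randomized promotion) over
// the apply slots, then hand-over of the root lock via the promotion queue.
func unlockExit(captured []*treeNode) {
	root := captured[len(captured)-1]
	for _, n := range captured {
		// Lottery: pick a uniformly random apply slot; if a process waits
		// there, clear the slot and append it to the queue (slides 14-15).
		c := rand.Intn(delta)
		if w := n.apply[c].Swap(free); w != free {
			promotion <- w
		}
		if n != root {
			n.lock.Store(free) // release every captured lock below the root
		}
	}
	select {
	case t := <-promotion:
		// Pass the root lock on without freeing it: the promoted process
		// is notified and enters the CS (slide 16).
		notified[t].Store(true)
	default:
		root.lock.Store(free) // queue empty: free the root lock (slide 17)
	}
}
```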

  18. Basic Algorithm – Properties
  Lemma: mutual exclusion is satisfied.
  Proof intuition: when a process exits, it either
  • signals a single process without releasing the root's lock, or
  • if the promoted-processes queue is empty, releases the lock
  • When the lock is free, it is captured atomically by CAS

  19. Basic Algorithm – Properties (cont'd)
  Lemma: Expected RMR complexity is Θ(log N / log log N).
  • While spinning on await (n.lock = ⊥ || apply[ch] = ⊥), a waiting process participates in a lottery every constant number of RMRs it incurs.
  • The probability of winning a lottery is 1/Δ.
  • Hence the expected number of RMRs incurred before promotion is Θ(log N / log log N).
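A hedged reconstruction of the calculation behind the lemma, combining the three bullets above (the constants are not from the paper):

```latex
% A fixed waiter wins each lottery with probability 1/\Delta, so the number of
% lotteries it sits through is geometric with mean \Delta; between consecutive
% lotteries it incurs only O(1) RMRs (local spinning in the CC model). Hence
\mathbb{E}[\text{RMRs before promotion}]
  \;=\; O(1)\cdot\mathbb{E}[\#\text{lotteries}]
  \;=\; O(\Delta)
  \;=\; O\!\left(\frac{\log N}{\log\log N}\right)
```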

  20. Basic Algorithm – Properties (cont'd)
  • Mutual exclusion
  • Expected RMR complexity: Θ(log N / log log N)
  • Non-optimal worst-case complexity, and (even worse) starvation is possible

  21. Talk outline • Prior art and our results • Basic algorithm (CC) • Enhanced Algorithm (CC) • Pseudo-code • Open questions

  22. The enhanced algorithm
  Key idea: quit the randomized algorithm after incurring "too many" RMRs, and then execute a deterministic algorithm.
  Problems:
  • How do we count the number of RMRs incurred?
  • How do we "quit" the randomized algorithm?

  23. Enhanced algorithm: counting RMRs problem
  The problem: while spinning on await (n.lock = ⊥ || apply[ch] = ⊥), a process may incur an unbounded number of RMRs without being aware of it.

  24. Counting RMRs: solution
  Key idea: perform both randomized and deterministic promotion.
  • Increment the promotion token whenever releasing a node.
  • Perform deterministic promotion according to the promotion index, in addition to randomized promotion.
  [Figure: the per-node record, now with a token field next to lock (= p) and apply: <v1, q, …, vΔ>.]
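A sketch of how the release step might combine the two kinds of promotion described on this slide, assuming the node record gains an atomic token counter; enhancedNode and releaseNode are illustrative names, not the paper's, and the round-robin use of the token is my reading of "deterministic promotion according to the promotion index".

```go
package rmrmutex

import (
	"math/rand"
	"sync/atomic"
)

// enhancedNode extends the basic node with the promotion token of slide 24.
type enhancedNode struct {
	treeNode
	token atomic.Int64 // incremented on every release of this node
}

// releaseNode promotes one randomly chosen waiter and, in addition, the waiter
// selected deterministically by the round-robin token, then frees the node.
func releaseNode(n *enhancedNode) {
	// Randomized promotion (the lottery of the basic algorithm).
	c := rand.Intn(delta)
	if w := n.apply[c].Swap(free); w != free {
		promotion <- w
	}
	// Deterministic promotion: the token cycles over the Δ apply slots, so a
	// waiter in a given slot is promoted after at most Δ further releases of
	// the node.
	d := int(n.token.Add(1)) % delta
	if w := n.apply[d].Swap(free); w != free {
		promotion <- w
	}
	n.lock.Store(free)
}
```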

  25. The enhanced algorithm: quitting problem
  Upon exceeding the allowed number of RMRs, why can't a process simply release its captured locks and revert to a deterministic algorithm?
  Waiting processes may incur RMRs without participating in lotteries!

  26. Quitting problem: solution
  Add a deterministic Δ-process mutex object to each node.
  Per-node structure:
  • lock ∈ P ∪ {⊥}
  • apply: <v1, v2, …, vΔ>
  • token
  • MX: Δ-process mutex

  27. Quitting problem: solution (cont'd)
  (Per-node structure: lock ∈ P ∪ {⊥}, apply: <v1, v2, …, vΔ>, token, MX: Δ-process mutex.)
  • After incurring O(log Δ) RMRs on a node, compete for the MX lock; then spin trying to capture the node lock.
  • In addition to randomized and deterministic promotion, an exiting process also promotes the process that holds the MX lock, if any.

  28. Quitting problem: solution (cont'd)
  • After incurring O(log Δ) RMRs on a node, compete for the MX lock; then spin trying to capture the node lock.
  • Worst-case number of RMRs: O(Δ log Δ) = O(log n).
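Expanding the arithmetic on the slide: the path to the root has Θ(Δ) nodes, and each node contributes O(log Δ) RMRs in the worst case (e.g., through a deterministic Δ-process mutex in the style of Yang and Anderson), so

```latex
% With \Delta = \Theta(\log n / \log\log n), so that \log\Delta = \Theta(\log\log n):
\Delta \cdot O(\log \Delta)
  \;=\; \Theta\!\left(\frac{\log n}{\log\log n}\right)\cdot O(\log\log n)
  \;=\; O(\log n)
```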

  29. Talk outline • Prior art and our results • Basic algorithm (CC) • Enhanced Algorithm (CC) • Pseudo-code • Open questions

  30. Data structures
  [Pseudo-code figure, not transcribed; the i'th leaf is associated with process i.]

  31. The entry section
  [Pseudo-code figure, not transcribed.]

  32. The exit section
  [Pseudo-code figure, not transcribed.]

  33. Open Problems
  • Is this the best possible?
    • For a strong adversary?
    • For a weak adversary?
    • For an oblivious adversary?
  • Is there an abortable randomized algorithm?
  • Is there an adaptive one?
