
An experimental comparison of lock-based distributed mutual exclusion algorithms


Presentation Transcript


  1. An experimental comparison of lock-based distributed mutual exclusion algorithms
  Victor Lee, Kent State University, Department of Computer Science

  2. Distributed Mutual Exclusion Algorithms
  • Ricart-Agrawala
    • Messages: REQUEST, REPLY
    • Lock needed from every process
  • Maekawa
    • Messages: REQUEST, GRANT, FAIL, INQUIRE, YIELD, RELEASE
    • Lock needed from a quorum, where
      • A process is a member of its own quorum
      • Each quorum intersects with every other quorum
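  The two quorum properties above can be checked mechanically. The following is a minimal sketch, written in Java for concreteness (the presentation does not show this code); the quorum assignment itself is assumed to be given, e.g., by the billiard construction on slide 5.

    import java.util.List;
    import java.util.Set;

    // Sketch (not from the slides): verifies the two Maekawa quorum properties.
    // Assumes quorums.get(i) is the quorum of process i.
    public class QuorumCheck {
        static boolean isValid(List<Set<Integer>> quorums) {
            int n = quorums.size();
            for (int i = 0; i < n; i++) {
                // Property 1: a process is a member of its own quorum
                if (!quorums.get(i).contains(i)) return false;
                // Property 2: each quorum intersects with every other quorum
                for (int j = i + 1; j < n; j++) {
                    boolean intersects = false;
                    for (int member : quorums.get(i)) {
                        if (quorums.get(j).contains(member)) { intersects = true; break; }
                    }
                    if (!intersects) return false;
                }
            }
            return true;
        }
    }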

  3. Performance Measures
  • Maekawa's quorum and arbitration scheme
    • Intent is to reduce the number of messages needed per CS entry
  • Messages per CS entry:
    • Ricart-Agrawala: 2(N − 1), i.e., one REQUEST and one REPLY exchanged with each of the other N − 1 processes
    • Maekawa: K√N, where
      • Quorum size ~ √N
      • K(min) = 3 (REQUEST + GRANT + RELEASE)
      • As load increases, K increases because FAIL, INQUIRE, and YIELD messages come into use
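  To make the formulas concrete, here is a small illustrative calculation (not from the presentation) of the expected message counts for the four N values used later in the experiments, assuming K = 3 at low load and quorum size ≈ √N as stated above:

    // Sketch: expected messages per CS entry under the formulas on this slide.
    public class MessageCountEstimate {
        public static void main(String[] args) {
            int[] ns = {4, 12, 24, 40};   // the N values used in the experiments
            int k = 3;                    // REQUEST + GRANT + RELEASE per quorum member at low load
            for (int n : ns) {
                int ricartAgrawala = 2 * (n - 1);      // (REQUEST + REPLY) to each of N-1 processes
                double maekawa = k * Math.sqrt(n);     // K messages per quorum member, |quorum| ~ sqrt(N)
                System.out.printf("N=%d  RA=%d  Maekawa~%.1f%n", n, ricartAgrawala, maekawa);
            }
        }
    }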

  4. Experiment Proposal
  • Measure how the average number of messages per CS entry varies with:
    • The number of processes N, with the load held constant
    • The load L, with the number of processes held constant
  • Compare Ricart-Agrawala to Maekawa

  5. Quorums and Limitations on N
  • Need an algorithm to construct quorums
  • Billiard quorum algorithm: N = (Q² − 1)/2, where Q must be odd:
    • Q = 3, N = 4
    • Q = 5, N = 12
    • Q = 7, N = 24
    • Q = 9, N = 40
  • Experiments use these four values of N
  Reference: Agrawal et al., "Billiard quorums on the grid," Information Processing Letters 64 (1997) 9–16, Elsevier.
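  The table above can be reproduced by evaluating N = (Q² − 1)/2 for odd Q. The sketch below only computes these sizes; it does not reproduce the billiard quorum construction itself from Agrawal et al.:

    // Sketch: the N values reachable with the billiard quorum scheme, N = (Q^2 - 1)/2 for odd Q.
    public class BilliardSizes {
        public static void main(String[] args) {
            for (int q = 3; q <= 9; q += 2) {
                int n = (q * q - 1) / 2;
                System.out.printf("Q=%d -> N=%d%n", q, n);   // 3->4, 5->12, 7->24, 9->40
            }
        }
    }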

  6. Defining and Controlling Load
  • Load L is the number of processes that are contending for the CS
    • Ranges from 1 to N
    • Experiments use L = 1, ½N, and N
  • The simulator initializes a run by selecting L random processes to contend for the CS
  • When a process exits the CS, the actual load drops below L, so the simulator selects another random process to contend for the CS, restoring the load to L (see the sketch below)
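  A minimal sketch of this load-control rule in Java; the list names follow slides 17–18, but the class name, method names, and everything else here are assumptions rather than the actual simulator code:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Sketch: keeps the number of contending processes at L by moving a random
    // ready process into contention whenever one exits the CS.
    public class LoadManager {
        private final List<Integer> readyList = new ArrayList<>();
        private final List<Integer> contendList = new ArrayList<>();
        private final Random rng = new Random();

        LoadManager(int n, int l) {
            for (int p = 0; p < n; p++) readyList.add(p);
            for (int i = 0; i < l; i++) moveOneToContention();   // initial load of L processes
        }

        // Pick a random ready process and make it contend for the CS.
        void moveOneToContention() {
            int p = readyList.remove(rng.nextInt(readyList.size()));
            contendList.add(p);
        }

        // Called when process p leaves the CS: restore the load back to L.
        void onCsExit(int p) {
            contendList.remove(Integer.valueOf(p));
            readyList.add(p);
            moveOneToContention();
        }
    }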

  7. Experimental Results: Msg/CS vs. N
  Ricart-Agrawala: expected results; all trials were identical
  Test Conditions:
  • Low load (L = 1)
  • CS = 100 entries/exits
  • Contending processes are chosen randomly

  8. Experimental Results: Msg/CS vs. N

  9. Experimental Results: Msg/CS vs. L
  Ricart-Agrawala: expected results, no variation
  Test Conditions:
  • N = 40
  • CS = 100 entries/exits
  • Contending processes are chosen randomly

  10. Experimental Results: Msg/CS vs. L

  11. Discussion of Results
  • Ricart-Agrawala results were exactly as expected
  • Maekawa trends were as expected, but
    • K was slightly higher than expected at low load (expected 3.0)
    • K did not increase as much as expected with load (expected ~6)

  12. Explanation of the Behavior of K
  • Test logs show that FAIL, INQUIRE, and YIELD in fact occur even when only one process contends
  • Example:
    • A process P1 that exits the CS sends RELEASE to all its quorum members
    • Another process P2 may now send out REQUESTs
    • A third process P3 might receive P2's REQUEST before receiving P1's RELEASE
  • K does not reach 6 because even at maximum load, not every REQUEST is followed by FAIL, INQUIRE, YIELD, GRANT, and RELEASE

  13. Future Investigations
  • Investigate the frequency distribution of FAIL, INQUIRE, and YIELD in Maekawa's algorithm
    • Example: In a 100-CS run with 1098 messages, what percentage of the messages were of each type?
  • CS latency increases as load increases, and Maekawa's algorithm permits CS entry out of timestamp order. Can we make predictions about the expected latency?

  14. Implementation Notes: Object-Oriented Components
  • Process (abstract)
  • TMessage (sender, recvr, timestamp, body)
  • Channel (linked list of Messages)
  ME:
  • MEProcess (abstract send(), receive())
  • MEChannel (queuing and delivery)
  • MESimulator (basic execution cycle)
  Ricart-Agrawala:
  • RAProcess (RA algorithm)
  • RASimulator (RA topology)
  Maekawa:
  • MaekProcess (Maekawa algorithm)
  • MaekSimulator (Maekawa topology)
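  The layering might look roughly as follows in Java; only the class names and the responsibilities in parentheses come from the slide, while all modifiers, signatures, and bodies are assumptions:

    // Base layer
    abstract class Process { /* common process state: id, logical clock, ... */ }
    class TMessage {                          // sender, recvr, timestamp, body
        int sender, recvr, timestamp;
        String body;
    }
    class Channel { /* linked list of TMessages (later: one list per sender) */ }

    // Mutual-exclusion layer
    abstract class MEProcess extends Process {
        abstract void send(TMessage m);
        abstract void receive(TMessage m);
    }
    class MEChannel extends Channel { /* queuing and delivery */ }
    class MESimulator { /* basic execution cycle */ }

    // Algorithm-specific layer
    class RAProcess extends MEProcess {       // Ricart-Agrawala algorithm
        void send(TMessage m) { }
        void receive(TMessage m) { }
    }
    class MaekProcess extends MEProcess {     // Maekawa algorithm
        void send(TMessage m) { }
        void receive(TMessage m) { }
    }
    class RASimulator extends MESimulator { /* RA topology */ }
    class MaekSimulator extends MESimulator { /* Maekawa topology */ }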

  15. Implementation Notes: The Channel
  • Initial implementation of the Channel was a linked list (Project 1 observed global in-order delivery)
  • Ricart-Agrawala supports out-of-order delivery
    • In Project 2, send() inserts a message at a random location in the list; receive() always removes the first item in the list
  • Maekawa requires in-order delivery
    • Didn't notice this during Project 2, not until Project 3…
  • Problem: I use one global channel to represent the many local process-to-process channels. Ordering must be observed locally but not necessarily globally
    • Don't want to create N² or N·Q separate channel objects!

  16. Implementation Notes: The Channel
  • Solution: Replace the single list with N senderLists
    • Messages in one list are from a single sender but may be addressed to any other process
  • For send(), if Maekawa:
    • Search from the end of the sender's list for the first occurrence of a message with the same receiver as the new message
    • Insert the new message anywhere between that point and the end of the list
  • For send(), if Ricart-Agrawala: can still insert anywhere in the list
  • For receive():
    • Remove the first message from a randomly chosen senderList. If that list is empty, try again.
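  A sketch of this per-sender channel, reusing the TMessage fields from the component sketch above. The insertion and removal rules follow the slide; the class name, constructor, and the fifoPerReceiver flag are assumptions:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Sketch: one message list per sender; per-receiver FIFO is preserved for
    // Maekawa, while Ricart-Agrawala messages may be reordered freely.
    class SenderListChannel {
        private final List<List<TMessage>> senderLists = new ArrayList<>();
        private final Random rng = new Random();
        private final boolean fifoPerReceiver;   // true for Maekawa, false for Ricart-Agrawala

        SenderListChannel(int n, boolean fifoPerReceiver) {
            for (int i = 0; i < n; i++) senderLists.add(new ArrayList<>());
            this.fifoPerReceiver = fifoPerReceiver;
        }

        void send(TMessage m) {
            List<TMessage> list = senderLists.get(m.sender);
            int lowest = 0;
            if (fifoPerReceiver) {
                // Find the last message already queued for the same receiver;
                // the new message must not be placed before it.
                for (int i = list.size() - 1; i >= 0; i--) {
                    if (list.get(i).recvr == m.recvr) { lowest = i + 1; break; }
                }
            }
            // Insert at a random position between 'lowest' and the end of the list.
            int pos = lowest + rng.nextInt(list.size() - lowest + 1);
            list.add(pos, m);
        }

        TMessage receive() {
            // Remove the head of a randomly chosen non-empty sender list.
            List<List<TMessage>> nonEmpty = new ArrayList<>();
            for (List<TMessage> l : senderLists) if (!l.isEmpty()) nonEmpty.add(l);
            if (nonEmpty.isEmpty()) return null;
            return nonEmpty.get(rng.nextInt(nonEmpty.size())).remove(0);
        }
    }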

  17. Implementation Notes: Simulator Cycle & Load Management
  initializeCSreq(N, L)
      Execute CSReq() L times
  CSReq()
      Move one randomly chosen process P from readyList to contendList
      Tell P to RequestCS
  EnteringCS(Pid)
      inCS = true
      CSid = Pid
  CSExit()
      inCS = false
      numCSexits++
      Move process CSid from contendList to readyList
      Tell process CSid to exit

  18. Implementation Notes: Simulator Cycle & Load Management
  contendList = {}          // procs in contention
  readyList = {all P's}     // procs not in contention
  messageCount = 0
  initializeCSreq(N, L);
  while (numCSexits < maxCSexits && Channel.notEmpty) {
      if (inCS && ProbabilityOfExiting) { CSExit; }
      if (contendList.size < L) { CSReq; }
      if (Channel.notEmpty) {
          Message m = Channel.remove;
          messageCount++;
          Deliver(m);
      }
  }

  19. Implementation Notes: Tracking Locks and Fails in Maekawa
  • FAILs are just as important as locks
  • A process must track not only that it has received a FAIL, but also the identity of the FAIL senders
    • Fail is a state variable, affecting how a process responds to INQUIRE
  • A FAIL gets "erased" when the sender of the FAIL later sends a GRANT. If a requestor receives FAIL from two different processes, it remains in the Fail state until it receives GRANT from both of those processes.
  • Thus, we need a Fail array just like the Lock array
    • Fail state = any Fail[i] is set
    • Lock state = all Lock[i] are set

  20. Implementation Notes: Tracking Locks and Fails in Maekawa
  • A requesting Maekawa process must track which quorum members have granted it a lock
    • When all locks are received, it may enter the CS
  • Maekawa discusses the need for a lock list or array: conceptually, an associative array of size Q, indexed by a quorum member's process ID
  • When would a process give up its own lock?
    • If it receives a preceding request from another process
  • When would it get it back?
    • When its request pops up to the top of the request queue
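  A possible shape for the Lock and Fail bookkeeping from slides 19–20, using an associative array indexed by quorum-member ID as described. The class name, method names, and the exact update rules on GRANT/FAIL are assumptions, not taken from the presentation:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: per-requestor Lock[] and Fail[] arrays, one entry per quorum member.
    class QuorumState {
        private final Map<Integer, Boolean> lock = new HashMap<>();  // Lock[i]: member i has sent GRANT
        private final Map<Integer, Boolean> fail = new HashMap<>();  // Fail[i]: member i has sent FAIL

        QuorumState(Iterable<Integer> quorumMembers) {
            for (int m : quorumMembers) { lock.put(m, false); fail.put(m, false); }
        }

        void onGrant(int member) {
            lock.put(member, true);
            fail.put(member, false);   // a later GRANT from the same member "erases" its FAIL
        }

        void onFail(int member) { fail.put(member, true); }

        void reset() {                 // e.g., after exiting the CS
            for (Integer m : lock.keySet()) lock.put(m, false);
            for (Integer m : fail.keySet()) fail.put(m, false);
        }

        // Fail state: at least one Fail[i] is set (affects the response to INQUIRE).
        boolean inFailState() { return fail.containsValue(true); }

        // Lock state: all Lock[i] are set, so the process may enter the CS.
        boolean allLocked() { return !lock.containsValue(false); }
    }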
