
Fair K-Mutual Exclusion Algorithm for Peer to Peer Systems






Presentation Transcript


  1. Fair K-Mutual Exclusion Algorithm for Peer to Peer Systems Vijay Anand, Prateek Mittal and Indranil Gupta Department of Computer Science University of Illinois at Urbana-Champaign

  2. Motivation Central statistics collection server (PlanetLab, data centers) [HP OpenView] [CoMon]. • Control and data logs. • Limited bandwidth is a bottleneck. • Timeliness of data collection is required. Control over bandwidth (by varying k). Services with limited computational resources. • Downloading large multimedia files (GridFTP, CoBlitz). A k-mutual exclusion algorithm that provides fairness is required.

  3. Definition • The k-mutual exclusion problem involves a group of processes, each of which intermittently requires access to an identical resource called the critical section (CS). • Safety means that at most k processes, 1 ≤ k ≤ n, may be in the CS at any given time. • Liveness means that every request for critical section access is satisfied in finite time.
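As a minimal sketch of the safety property only (a toy token pool, not the paper's algorithm; all names here are illustrative):

```python
class TokenPool:
    """Toy illustration of the k-mutual-exclusion safety rule:
    at most k processes hold a token (are in the CS) at once."""
    def __init__(self, k):
        self.k = k
        self.holders = set()

    def try_enter(self, pid):
        # Safety: admit only while fewer than k processes are in the CS.
        if len(self.holders) < self.k:
            self.holders.add(pid)
            return True
        return False

    def leave(self, pid):
        self.holders.discard(pid)

pool = TokenPool(k=3)
admitted = [p for p in range(5) if pool.try_enter(p)]
print(admitted)             # → [0, 1, 2]: only the first k=3 requesters get in
pool.leave(admitted[0])
print(pool.try_enter(99))   # → True: a freed token admits a waiter
```

Liveness is the harder half and is what the queueing machinery in the later slides provides; this sketch only enforces the "at most k" bound.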

  4. Prior Work & Fairness Prior works rarely accounted for fairness. • Optimized the mean time to access the critical section [BV95] [SR92] [RA81]. Fairness [LK00]: FIFO ordering with respect to request timestamps. Access time = time of CS entry − time of CS request. Spread = width of the access-time distribution across clients. Our fairness metric: the access time spread for the critical section should be small across requesting clients, across multiple requests. • Not a binary property (the smaller, the better). • More practical. • A generalization of FIFO ordering.
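The metric above (spread between the maximum and the mean access time) can be computed directly; the access times below are made-up illustrative values, not measurements from the paper:

```python
def access_time_spread(access_times):
    """Fairness metric from the slides: maximum access time
    minus the mean access time across requesting clients."""
    mean = sum(access_times) / len(access_times)
    return max(access_times) - mean

# Hypothetical access times (seconds) for five clients.
print(access_time_spread([10, 11, 9, 10, 60]))  # → 40.0: one starved client blows up the spread
print(access_time_spread([10, 11, 9, 10, 12]))  # small spread: roughly fair service
```

A scheme can have an excellent mean access time and still a huge spread, which is exactly the failure mode the algorithm targets.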

  5. Our Contributions Proposed a practical fair algorithm for the k-mutual exclusion problem. • Minimizes the difference between the maximum and mean time to access the critical section. Proved that our algorithm satisfies both safety and liveness requirements. Proposed a fault-tolerant methodology for our algorithm using the Chord peer-to-peer system. • Showed that it is resilient against churn.

  6. System Model The communication channel is reliable and does not duplicate messages. Message delivery to the destination is time-bounded. No joins and no failures: in practice we handle failures by relying on the Chord DHT. Constant critical section (CS) time. • Generalizes to varying but known CS times.

  7. First Cut Algorithm • Token-based algorithm (k tokens). • Modeled on a real-world scenario: a cinema hall with k ticket counters. • A customer has no idea about the lengths of the queues and picks a queue at random. This approach gives a good average time to get the ticket (token), but there is no load balancing, so the access time spread is very high.
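A toy simulation of this first-cut scheme (the customer and counter counts are invented parameters) shows why queue lengths, and hence waiting times, diverge:

```python
import random

random.seed(7)  # fixed seed so the illustration is repeatable

def random_queues(customers, k):
    """First-cut scheme from the slides: each arriving customer picks
    one of the k counter queues uniformly at random (no load balancing)."""
    queues = [0] * k
    for _ in range(customers):
        queues[random.randrange(k)] += 1
    return queues

lengths = random_queues(customers=30, k=3)
print(lengths)  # typically uneven, so per-customer waits (access times) spread out
```

With constant CS time, a customer's wait is proportional to the length of the queue it joined, so uneven queues translate directly into a large access time spread.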

  8. Algorithm(2): Coordinator • A coordinator guides customers to the ticket counters in a round-robin manner. • The presence of a coordinator leads to a centralized solution (single point of failure).

  9. Algorithm(3): Distributed Approach Distributed approach: every customer entering the hall acts as a coordinator and guides the next customer to the next ticket counter in round-robin order. This provides load balancing of ticket requests among the counters.

  10. Algorithm(3): Distributed Approach (continued) This scenario maps exactly onto our k-mutual exclusion algorithm.
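The round-robin hand-off can be sketched the same way; with the same invented parameters as the random-choice sketch, the queues come out perfectly balanced:

```python
def round_robin_queues(customers, k):
    """Distributed scheme from the slides: each customer, acting as the
    current coordinator, directs the next arrival to the next counter in
    round-robin order, so queue lengths never differ by more than one."""
    queues = [0] * k
    for i in range(customers):
        queues[i % k] += 1
    return queues

lengths = round_robin_queues(customers=30, k=3)
print(lengths)  # → [10, 10, 10]: balanced queues keep access times close together
```

Balanced queues bound the difference in waits between any two customers, which is what keeps the access time spread small.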

  11. Distributed Data Structures in the Algorithm Entry into the cinema hall – single mutual exclusion. • A dynamic tree is used to represent the queue (O(log N)). • The tree is based on the path-reversal technique of [NTA96]. k counter queues – k distributed token queues. Coordinator node: • has the addresses of the tails of the k token queues, and • a counter variable, incremented in round-robin fashion. • Passes both to the next element in the queue (dynamic tree).
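The coordinator state described above (tails of the k token queues plus a round-robin counter, handed to the next requester) can be sketched as follows; field and method names are illustrative, not from the paper:

```python
class Coordinator:
    """Sketch of the per-node coordinator state from the slides:
    the addresses of the k token-queue tails and a round-robin
    counter, both passed on to the next requester in the tree."""
    def __init__(self, tails):
        self.counter = 0          # incremented round-robin on each hand-off
        self.tails = list(tails)  # addresses of the k token-queue tails

    def hand_off(self, requester):
        # Append the requester to the next token queue in round-robin
        # order; the requester then becomes the new coordinator.
        q = self.counter
        self.counter = (self.counter + 1) % len(self.tails)
        old_tail, self.tails[q] = self.tails[q], requester
        return q, old_tail  # queue index and the predecessor to wait behind

coord = Coordinator(tails=["tail0", "tail1", "tail2"])
print(coord.hand_off("node4"))  # → (0, 'tail0')
print(coord.hand_off("node5"))  # → (1, 'tail1')
```

In the real algorithm this state travels with the Message_Token_Locations hand-off rather than living in one object, so there is no single point of failure.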

  12. Example N = 8, K = 3. [Figure: tree of nodes 1–8 with father and child pointers; legend: node with token, coordinator node.] Initial configuration: node 3 is the coordinator.

  13. Example [Figure: node 4 sends Message_Request toward the coordinator/root; Message_Child and Message_Token_Locations are exchanged.] Node 4 requests the CS.

  14. Example [Figure: the same message exchange for node 5.] Node 5 requests the CS.

  15. Example [Figure.] Node 5 has been appended to the child queue.

  16. Example [Figure.] Nodes 6 and 7 request the CS (formation of the privileged queue not shown).

  17. Example [Figure.] Node 1 exits the CS.

  18. Experiment Methodology Critical section duration = 10 s. Number of nodes = 100. Number of tokens = 3. Latencies: • LAN setting: normalized 1 s between every pair of nodes. • WAN setting: King data set (highly heterogeneous; average RTT = 182 ms, maximum RTT = 800 ms). Compared with BV95 [the best-known algorithm]. • The remaining algorithms have worse mean access times.

  19. MTTT: Mean Time to get the Token [Figures: our algorithm and BV95 in the LAN setting; our algorithm and BV95 in the WAN setting.] Our algorithm is comparable to BV95 (similar MTTT values under both settings).

  20. Results: Fairness [Figure: globally maximum time to get the token vs. request rate; BV95 in the LAN setting exceeds 1600 s, our algorithm stays around 370 s.] The access time spread (maximum − mean) for our algorithm is less than 15 s, whereas for BV95 it is more than 1250 s.

  21. Trade-off [Figure: average messages vs. request rate, our algorithm and BV95 in the LAN setting.] Message complexity is the same in both algorithms (O(log N)). The increase in average messages is due to: 1) an additional constant number of messages (Message_Child etc.); 2) unlike BV95, our algorithm does not cache requests for the CS, in order to preserve fairness.

  22. Conclusion A practical fairness metric: access time spread. A new distributed fair mutual exclusion algorithm. • Similar MTTT performance compared to BV95. • An order-of-magnitude improvement in the fairness metric (15 s vs. 1250 s). • Trade-off: an additional constant number of messages.
