
Reliable Distributed Systems



  1. Reliable Distributed Systems Membership Adopted from Prof. Birman’s slides with minor changes

  2. Motivation: why membership? • Consider a frequently occurring scenario: there are many client processes and many replicated server processes that operate on many replicated database servers. • The server processes need to coordinate. • Broadcast, covered in the last lecture in its various flavors, provides a means for “transporting” messages so that the server processes can communicate in a “semantically correct” fashion. • But who should receive the broadcast messages? How can the correct processes know that the replicated data they read are up-to-date? How can they coordinate (e.g., commit or not) even though some of them may fail from time to time? • We need a group membership service (who is alive?)

  3. Motivation: why membership? • A bit on the relationship between broadcast and group membership • The former aims to “virtually” emulate a “perfect” transportation system between cities • The latter aims to make all cities act in a consistent way • Two philosophies of membership • Static: always keep the total list, but without knowing who is alive • Dynamic: the list includes only the processes that are currently alive

  4. Agreement on Membership • Recall our approach: • Detecting failure is a lost cause • Too many things can mimic failure • Being accurate would mean waiting for a process to recover • Substitute agreement on membership • Now we can drop a process because it isn’t fast enough • This can seem “arbitrary”, e.g. A kills B… • GMS implements this service for everyone else

  5. Architecture [Figure: layered architecture. Applications use replicated data for high availability; 2PC-like protocols use membership changes instead of failure notifications; at the bottom, a membership-agreement layer handles “join/leave” and “P seems to be unresponsive”.]

  6. Architecture [Figure: application processes X, Y, Z interact with GMS processes A, B, C, D. As members join, leave, and are suspected of failure (“A seems to have failed”), the membership views evolve: {A}, {A,B,D}, {A,D}, {A,D,C}, {D,C}.]
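As a rough illustration of the view sequence in the figure, here is a minimal sketch (the class and method names are my own, not from the slides) of a GMS that turns join, leave, and failure reports into a numbered series of membership views:

    # Minimal sketch, assuming an in-memory GMS; names are illustrative only.
    class GMS:
        def __init__(self, initial):
            self.views = [set(initial)]            # view 0

        def _install(self, members):
            self.views.append(set(members))
            return len(self.views) - 1             # index of the new view

        def join(self, p):
            return self._install(self.views[-1] | {p})

        def leave(self, p):
            return self._install(self.views[-1] - {p})

        def report_failure(self, p):
            # "p seems to be unresponsive" is treated like a forced leave
            return self._install(self.views[-1] - {p})

    gms = GMS({"A"})
    gms.join("B"); gms.join("D")    # views grow toward {A,B,D}
    gms.leave("B")                  # B leaves: {A,D}
    gms.join("C")                   # {A,D,C}
    gms.report_failure("A")         # "A seems to have failed": {D,C}
    print(gms.views)                # every client sees this same sequence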

  7. Contrast dynamic with static model • Static model: fixed set of processes “tied” to resources • Processes may be unreachable (while failed or partitioned away) but later recover • Think: “cluster of PCs” • Dynamic model: changing set of processes launched while system runs, some fail/terminate • Failed processes never recover (partitioned process may reconnect, but uses a new pid) • And can still own a physical resource, allowing us to emulate a static model

  8. Consistency options • Could require that system always be consistent with actions taken at a process even if that process fails immediately after taking the action • This property is needed in systems that take external actions, like advising an air traffic controller • May not be needed in high availability systems • Alternative is to require that operational part of system remain continuously self-consistent

  9. Obstacles to progress • Fischer, Lynch and Paterson result: proof that agreement protocols cannot be both externally consistent and live in asynchronous environments • Suggests that the choice between internal consistency and external consistency is a fundamental one! • Can show that this result also applies to dynamic membership problems

  10. Usual response to FLP: Chandra/Toueg • Consider system as having a failure detector that provides input to the basic system itself • Agreement protocols within system are considered safe and live if they satisfy their properties and are live when the failure detector is live • Babaoglu: expresses similar result in terms of reachability of processes: protocols are live during periods of reachability

  11. Towards an Alternative • In this lecture, focus on systems with self-defined membership • Idea is that if p can’t talk to q it will initiate a membership change that removes q from p’s system “membership view” • Illustrated on next slide
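A minimal sketch of that rule (the timeout value and names are assumptions of mine, not from the slides): p never declares q dead; after a period of silence it only asks the membership service to install a view without q.

    # Sketch only: p suspects q after a silence period and proposes a view change.
    import time

    SUSPECT_TIMEOUT = 2.0          # seconds; an arbitrary choice for this sketch

    class SuspicionMonitor:
        def __init__(self, gms):
            self.gms = gms                      # e.g. the GMS sketch shown after slide 6
            self.last_heard = {}                # peer -> time of last message received

        def heard_from(self, peer):
            self.last_heard[peer] = time.monotonic()

        def check(self):
            now = time.monotonic()
            for peer, t in list(self.last_heard.items()):
                if now - t > SUSPECT_TIMEOUT:
                    # Not "peer failed", only "peer seems unresponsive":
                    # ask for a membership change that drops it from the view.
                    self.gms.report_failure(peer)
                    del self.last_heard[peer]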

  12. Commit protocol from when we discussed transactions [Figure: a 2PC exchange. The coordinator asks “ok to commit?”, participants vote “ok”, and the decision is sent; processes that are cut off are left with “vote unknown!” and “decision unknown!”.]

  13. Suppose this is a partitioning failure [Figure: the same 2PC exchange, but the silent processes are only partitioned away, still stuck at “vote unknown!” and “decision unknown!”.] Do these processes actually need to be consistent with the others?

  14. Primary partition concept • Idea is to identify notion of “the system” with a unique component of the partitioned system • Call this distinguished component the “primary” partition of the system as a whole. • Primary partition can speak with authority for the system as a whole • Non-primary partitions have weaker consistency guarantees and limited ability to initiate new actions

  15. Ricciardi: Group Membership Protocol • For use in a group membership service (usually just a few processes that run on behalf of the whole system) • Tracks its own membership; its members use this to maintain the membership list for the whole system • All users of the service see subsequences of a single system-wide group membership history • GMS also tracks the primary partition

  16. GMP protocol itself • Used only to track membership of the “core” GMS • Designates one GMS member as the coordinator • Switches between 2PC and 3PC • 2PC if the coordinator didn’t fail and other members failed or are joining • 3PC if the coordinator failed and some other member is taking over as new coordinator • Question: how to avoid “logical partitioning”?

  17. GMS majority requirement • To move from system “view” i to view i+1, GMS requires explicit acknowledgement by a majority of the processes in view i • Can’t get a majority: causes GMS to lose its primaryness information • Dahlia Malkhi has extended GMP to support partitioning and remerging; similar idea used by Yair Amir and others in Totem system
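A small sketch of the majority rule (function names are mine): view i+1 is installed only when a majority of the members of view i have acknowledged the proposal; otherwise the GMS cannot advance and cannot claim primaryness.

    # Sketch of the view-transition check; not the actual GMP messages.
    def majority_of(view):
        return len(view) // 2 + 1

    def try_install_next_view(view_i, proposed_view, acks):
        """acks: the members of view_i that explicitly acknowledged the proposal."""
        voters = acks & view_i                    # only members of view i may vote
        if len(voters) >= majority_of(view_i):
            return proposed_view                  # view i+1 becomes the primary view
        return None                               # no majority: cannot claim primaryness

    old = {"p0", "p1", "p2", "p3", "p4"}
    new = {"p1", "p2", "p3", "p4"}                # dropping p0
    print(try_install_next_view(old, new, {"p1", "p2", "p3"}))   # 3 of 5: installed
    print(try_install_next_view(old, new, {"p1", "p2"}))         # 2 of 5: None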

  18. GMS in Action [Figure: processes p0 … p5.] p0 is the initial coordinator. p1 and p2 join, then p3 … p5 join. But p0 fails during the join protocol, and later so does p3. Notice the use of majority consent to avoid partitioning!

  19. GMS in Action [Figure: processes p0 … p5 running the GMP. A 2-phase commit while p0 is coordinator, then a 3-phase round as p1 takes over, then 2-phase again with p1 as the new coordinator.]

  20. What if system has thousands of processes? • Idea is to build a GMS subsystem that runs on just a few nodes • GMS members track themselves • Other processes ask to be admitted to system or for faulty processes to be excluded • GMS treats overall system membership as a form of replicated data that it manages, reports to its “listeners”
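A minimal sketch of that arrangement (the API is hypothetical): the GMS keeps the overall system membership as data it manages, and pushes every new view to its registered listeners, so all of them observe the same membership history.

    # Sketch only: a single-process stand-in for the replicated GMS state.
    class MembershipService:
        def __init__(self, initial):
            self.view = set(initial)
            self.view_id = 0
            self.listeners = []

        def register(self, callback):
            self.listeners.append(callback)
            callback(self.view_id, set(self.view))    # report the current view immediately

        def admit(self, p):                            # "ask to be admitted"
            self._change(self.view | {p})

        def exclude(self, p):                          # "exclude a faulty process"
            self._change(self.view - {p})

        def _change(self, new_view):
            self.view, self.view_id = new_view, self.view_id + 1
            for cb in self.listeners:                  # every listener sees the same history
                cb(self.view_id, set(new_view))

    svc = MembershipService({"p", "q"})
    svc.register(lambda vid, v: print("view", vid, sorted(v)))
    svc.admit("r")
    svc.exclude("q")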

  21. Uses of membership? • If we rewire TCP and RPC to use membership changes as trigger for breaking connections, can eliminate split-brain problems! • But nobody really does this • Problem is that networks lack standard GMS subsystems now! • But we can still use it ourselves

  22. Replicated data within groups • A very general requirement: • Data actually managed by group • Inputs and outputs, in a server replicated for fault-tolerance • Coordination and synchronization data • Will see how to solve this, and then will use solution to implement “process groups” which are subgroups of the overall system membership

  23. Replicated data • Assume that we have a (dynamically defined) group of processes G and that its members manage a replicated data item • Goal: update by sending a multicast to G • Should be able to safely read any copy “locally” • Consider situation where members of G may fail or recover
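As a toy illustration (everything runs in one process here, standing in for a real multicast): an update is multicast to every member of G, after which any copy can be read locally.

    # Sketch only: replicas apply the same multicast update, reads are local.
    class Replica:
        def __init__(self, name):
            self.name = name
            self.value = 0

        def deliver(self, update):          # invoked when the multicast is delivered
            self.value = update

    class Group:
        def __init__(self, members):
            self.members = list(members)

        def multicast(self, update):        # "update by sending a multicast to G"
            for m in self.members:
                m.deliver(update)

    G = Group([Replica("p"), Replica("q"), Replica("r")])
    G.multicast(42)
    print([m.value for m in G.members])     # every local read now returns 42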

  24. Some Initial Assumptions • For now, assume that we work directly on the real network, not using Ricciardi’s GMS • Later will need to put GMS in to solve a problem this raises, but for now, the model will be the very simple one: processes that communicate using messages, asynchronous network, crash failures • We’ll also need our own implementation of TCP-style reliable point-to-point channels using GMS as input

  25. Process group model • Initially, we’ll assume we are simply given the model • Later will see that we can use reliable multicast to implement the model • First approximation: a process group is defined by a series of “views” of its membership. All members see the same sequence of view changes. Failures, joins reported by changing membership

  26. Process groups with joins, failures [Figure: timeline for processes p, q, r, s, t. r and s request to join and are added with a state transfer; p fails (crash); t requests to join and is added with a state transfer. The resulting views: G0={p,q}, G1={p,q,r,s}, G2={q,r,s}, G3={q,r,s,t}.]

  27. State transfer • Method for passing information about state of a group to a joining member • Looks instantaneous, at time the member is added to the view
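A minimal sketch of state transfer (the snapshot/install API is an assumption of mine): at the moment the joiner appears in the new view, an existing member hands it a snapshot, and the joiner installs it before processing anything delivered in that view.

    # Sketch only: state transfer captured at the instant of the view change.
    import copy

    class Member:
        def __init__(self, name, state=None):
            self.name = name
            self.state = state if state is not None else {}

        def snapshot(self):
            return copy.deepcopy(self.state)     # the group state as of the view change

        def install(self, snapshot):
            self.state = snapshot                # the joiner begins from this state

    def admit(group, joiner):
        donor = group[0]                         # any existing member can be the donor
        joiner.install(donor.snapshot())         # "looks instantaneous" to the joiner
        group.append(joiner)

    group = [Member("q", {"x": 1}), Member("r", {"x": 1})]
    t = Member("t")
    admit(group, t)
    print(t.state)                               # {'x': 1}: t starts with the group state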

  28. Outline of treatment • First, look at reliability and failure atomicity • Next, look at options for “ordering” in group multicast • Next, discuss implementation of the group view mechanisms themselves • Finally, return to state transfer • Outcome: process groups, group communication, state transfer, and fault-tolerance properties

  29. Atomic delivery • Atomic or failure-atomic delivery • If any process receives the message and remains operational, all operational destinations receive it [Figure: multicasts a and b among p, q, r, s, where p and s fail. All processes that receive a subsequently fail; all processes receive b.]

  30. Additional properties • A multicast is dynamically uniform if: • If any process delivers the multicast, all group members that don’t fail will deliver it (even if the initial recipient fails immediately after delivery). • Otherwise we say that the multicast is “not uniform”

  31. Uniform and non-uniform delivery [Figures: two runs with processes p, q, r, s (p and s fail) and multicasts a and b. The first shows uniform delivery of a and b; the second shows non-uniform delivery of a.]

  32. Stronger properties cost more • Weaker ordering guarantees are cheaper than stronger ones • Non-uniform delivery is cheap • Dynamic uniformity is costly • Dynamic membership is cheap • Static membership is more costly

  33. Conceptual cost graph [Figure: cost grows along two axes, from less ordered through local total order to global total order, and from non-uniform dynamic groups to uniform static groups. Two reference points: a uniform and globally total “abcast” in a static group (total, safe abcast in Totem or Transis) runs at about 600 msgs/second with 750 ms sender-to-destination latency; an asynchronous, non-uniform “cbcast” to a dynamically defined group (cbcast in Horus) runs at about 85,000 msgs/second with 85 µs latency.]

  34. Implementing multicast primitives • Initially assume a static process group • Crash failures: permanent failures, a process fails by crashing undetectably. No GMS (at first). • Unreliable communication: messages can be lost in the channels • ... looks like the asynchronous model of FLP

  35. Failures? • Message loss: overcome with retransmission • Process failures: assume they “crash” silently • Network failures: also called “partitioning” • Can’t distinguish between these cases! [Figure: the network partitions; p times out and concludes “q failed!” while q times out and concludes “p failed!”.]

  36. Multicast by “flooding” • All recipients echo the message to all other recipients, O(n^2) messages exchanged • Reject duplicates on the basis of message id • When can we garbage collect the id? • Important because remembering the ids costs resources! [Figure: message a flooded among p, q, r, s; p and s fail.]

  37.–39. Multicast by “flooding”, continued [Figures: successive frames of the same flooding run among p, q, r, s. All recipients echo the message to all other recipients (O(n^2) messages), duplicates are rejected by message id, and the question of when the id can be garbage collected remains open; p and s fail during the run. A sketch of the scheme follows.]
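A minimal sketch of the flooding scheme of slides 36–39 (class names are mine): every recipient echoes the message to all others, and a per-node set of seen ids rejects duplicates; that set is exactly the state we do not yet know when to garbage collect.

    # Sketch only: O(n^2) flooding with duplicate suppression by message id.
    class FloodNode:
        def __init__(self, name):
            self.name = name
            self.peers = []             # the other FloodNode objects
            self.seen = set()           # ids we must remember (the gc problem)
            self.delivered = []

        def receive(self, msg_id, payload):
            if msg_id in self.seen:
                return                  # duplicate: drop it
            self.seen.add(msg_id)
            self.delivered.append(payload)
            for peer in self.peers:     # echo to every other recipient
                peer.receive(msg_id, payload)

    nodes = [FloodNode(n) for n in ("p", "q", "r", "s")]
    for n in nodes:
        n.peers = [m for m in nodes if m is not n]

    nodes[0].receive("m1", "a")                  # the sender injects message a
    print([n.delivered for n in nodes])          # each node delivered "a" exactly once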

  40. Garbage collection issue • Must remember the id as long as we might still see a duplicate copy • If no process fails: garbage collect after the message has been echoed by all destinations • Very similar to the 3PC protocol... correctness of this protocol depends upon having an accurate way to detect failure! We return to this point in a few minutes.

  41. “Lazy” flooding and garbage collection • Idea is to delay “non-urgent” messages • Recipients delay the echo in the hope that the sender will confirm successful delivery: O(n) messages [Figure: sender p multicasts a to q, r, s, which reply with acks.]

  42. “Lazy” flooding • Recipients delay the echo in the hope that the sender will confirm successful delivery: O(n) messages • The sender aggregates the acknowledgements [Figure: p collects the acks for a and multicasts “all got it”.]

  43. “Lazy” flooding • Recipients delay the echo in the hope that the sender will confirm successful delivery: O(n) messages • Notice that garbage collection occurs in the 3rd phase [Figure: the three phases for message a: multicast, acks back to the sender, then “all got it” and garbage collection; p and s fail during the run.]
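A sketch of the three phases (message handling is simplified and the names are mine): recipients hold back their echoes, the sender aggregates the acks, and a final “all got it” notice lets everyone garbage collect the id.

    # Sketch only: sender-driven lazy multicast with third-phase garbage collection.
    class LazyNode:
        def __init__(self, name):
            self.name = name
            self.delivered = []
            self.pending = {}            # msg_id -> payload, kept in case we must re-echo

        def receive(self, msg_id, payload):           # phase 1
            self.delivered.append(payload)
            self.pending[msg_id] = payload
            return ("ack", msg_id, self.name)         # phase 2: ack goes back to the sender

        def all_got_it(self, msg_id):                 # phase 3
            self.pending.pop(msg_id, None)            # safe to garbage collect the id

    def lazy_multicast(sender, recipients, msg_id, payload):
        acks = {sender}
        for r in recipients:                          # O(n) messages instead of O(n^2)
            _, _, who = r.receive(msg_id, payload)
            acks.add(who)
        if len(acks) == len(recipients) + 1:          # the sender aggregates the answers
            for r in recipients:
                r.all_got_it(msg_id)

    group = [LazyNode(n) for n in ("q", "r", "s")]
    lazy_multicast("p", group, "m1", "a")
    print([g.pending for g in group])                 # [{}, {}, {}]: ids already collected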

  44. “Lazy” flooding, delayed phases • “Background” acknowledgements (not shown) • Piggyback the 2nd and 3rd phases on other multicasts [Figure: p multicasts m1 to q, r, s.]

  45. “Lazy” flooding, delayed phases • “Background” acknowledgements (not shown) • Piggyback the 2nd and 3rd phases on other multicasts [Figure: p multicasts m2, piggybacking “all got m1”.]

  46. “Lazy” flooding, delayed phases • “Background” acknowledgements (not shown) • Piggyback the 2nd and 3rd phases on other multicasts [Figure: p sends m1; then “m2, all got m1”; then “m3, gc m2”; s fails.]

  47. “Lazy” flooding, delayed phases • “Background” acknowledgements (not shown) • Piggyback the 2nd and 3rd phases on other multicasts • Reliable multicasts now look cheap! • Gossiping is one alternative; Parker next lecture [Figure: p sends m1; then “m2, all got m1”; then “m3, gc m2”; then “m4, gc m2”; s fails.]
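A sketch of the piggybacked wire format (the field names are my own): each new data message also carries stability (“all got it”) and garbage-collection notices for earlier messages, so the 2nd and 3rd phases add no extra packets.

    # Sketch only: later multicasts carry the bookkeeping for earlier ones.
    def make_message(msg_id, payload, stable=(), collectable=()):
        return {
            "id": msg_id,                  # this multicast
            "payload": payload,
            "stable": list(stable),        # ids the sender knows all members received
            "gc": list(collectable),       # ids whose duplicate-suppression state can go
        }

    m1 = make_message("m1", "a")
    m2 = make_message("m2", "b", stable=["m1"])                       # "m2, all got m1"
    m3 = make_message("m3", "c", stable=["m2"], collectable=["m1"])   # m3 piggybacks gc
    for m in (m1, m2, m3):
        print(m)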

  48. Lazy scheme continued • If sender fails, recipients switch to flood-style algorithm ... but now we have the same garbage collection problem: if sender fails we may never be able to garbage collect the id! • Problem is caused by lack of failure detector

  49. Garbage collection with inaccurate failure detection • ... we lack an accurate way to detect failure • If a process does seem to fail, but is really still operational and merely partitioned away, the connection might later be fixed • That process might “wake up” and send a duplicate • Hence, if we are not sure a process has failed, we can’t garbage collect our duplicate-suppression data yet!

  50. Exploiting a failure detector • Suppose that we had a fail-stop environment • Process group membership managed by an oracle, perhaps the GMS we saw earlier • Failures reported as “new group views” • All see the same sequence of views: • G = {p,q,r,s} {p,r,s} {r,s} • Now we can assume failures are accurately detected
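A closing sketch (names are mine) of how the accurately reported views help: once a view excluding the sender is installed, a recipient can finish the failed sender's unfinished multicasts and then drop the associated state, instead of holding it forever.

    # Sketch only: garbage collection driven by view changes from the GMS.
    class ViewDrivenNode:
        def __init__(self, name):
            self.name = name
            self.view = set()
            self.unstable = {}                     # msg_id -> (sender, payload)

        def install_view(self, new_view):
            departed = self.view - new_view
            for msg_id, (sender, payload) in list(self.unstable.items()):
                if sender in departed:
                    self.re_echo(msg_id, payload)  # take over the failed sender's job
                    del self.unstable[msg_id]      # then the state can be collected
            self.view = new_view

        def re_echo(self, msg_id, payload):
            pass                                   # resend to the surviving members (omitted)

    n = ViewDrivenNode("r")
    n.install_view({"p", "q", "r", "s"})
    n.unstable["m1"] = ("p", "a")                  # p's multicast, not yet known stable
    n.install_view({"q", "r", "s"})                # the GMS reports p's failure
    print(n.unstable)                              # {}: state for p's message cleaned up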
