
CS514: Intermediate Course in Operating Systems

CS514: Intermediate Course in Operating Systems. Professor Ken Birman; Ben Atkin, TA. Lecture 21, Nov. 7. Bimodal Multicast. Joint work: Ken Birman, Mark Hayden, Oznur Ozkasap, Zhen Xiao, Mihai Budiu, Yaron Minsky. Paper appeared in ACM Transactions on Computer Systems, May 1999.



Presentation Transcript


  1. CS514: Intermediate Course in Operating Systems • Professor Ken Birman • Ben Atkin: TA • Lecture 21, Nov. 7

  2. Bimodal Multicast • Work is joint: • Ken Birman, Mark Hayden, Oznur Ozkasap, Zhen Xiao, Mihai Budiu, Yaron Minsky • Paper appeared in ACM Transactions on Computer Systems • May, 1999

  3. Reliable Multicast • Family of protocols for 1-n communication • Increasingly important in modern networks • “Reliability” tends to be at one of two extremes on the spectrum: • Very strong guarantees, for example to replicate a controller position in an ATC system • Best effort guarantees, for applications like Internet conferencing or radio

  4. Reliable Multicast • Common to try to reduce problems such as this to theory • Issue is to specify the goal of the multicast protocol and then demonstrate that a particular implementation satisfies its spec. • Each form of reliability has its own specification and its own pros and cons

  5. Classical “multicast” problems • Related to “coordinated attack,” “consensus” • The two “schools” of reliable multicast • Virtual synchrony model (Isis, Horus, Ensemble) • All or nothing message delivery with ordering • Membership managed on behalf of group • State transfer to joining member • Scalable reliable multicast • Local repair of problems but no end-to-end guarantees

  6. Virtual Synchrony Model • [Figure: timeline for processes p, q, r, s, t. Views: G0={p,q}; r and s request to join → G1={p,q,r,s} (r, s added; state xfer); p fails (crash) → G2={q,r,s}; t requests to join → G3={q,r,s,t} (t added; state xfer)] • … to date, the only widely adopted model for consistency and fault-tolerance in highly available networked applications

  7. Virtual Synchrony Tools • Various forms of replication: • Replicated data, replicate an object, state transfer for starting new replicas... • 1-many event streams (“network news”) • Load-balanced and fault-tolerant request execution • Management of groups of nodes or machines in a network setting

  8. ATC system (simplified) • [Figure: onboard radar, controllers, an X.500 directory, and an air traffic database (flight plans, etc.)]

  9. Roles for reliable multicast? • Replicate the state of controller consoles so that a group of consoles can support a team of controllers fault-tolerantly

  10. Roles for reliable multicast? • Distribute routine updates from the radar system, every 3-5 seconds (two kinds: images and digital tracking info)

  11. Roles for reliable multicast? • Distribute flight plan updates and decisions rarely (very critical information)

  12. Other settings that use reliable multicast • Basis of several stock exchanges • Could be used to implement enterprise database systems and wide-area network services • Could be the basis of web caching tools

  13. But vsync protocols don’t scale well • For small systems, performance is great. • For large systems we can demonstrate very good performance… but only if applications “cooperate” • In large systems, under heavy load, with applications that may act a little flaky, performance is very hard to maintain

  14. Stock Exchange Problem: Vsync multicast is too “fragile” • Most members are healthy… • …but one is slow

  15. [Figure: Throughput as one member of a multicast group is “perturbed” by forcing it to sleep for varying amounts of time]

  16. Why does this happen? • Superficially, because data for the slow process piles up in the sender’s buffer, causing flow control to kick in (prematurely) • But why does the problem grow worse as a function of group size, with just one “red” process? • Our theory is that small perturbations happen all the time (but more study is needed)

  17. Dilemma confronting developers • Application is extremely critical: stock market, air traffic control, medical system • Hence need a strong model, guarantees • SRM, RMTP, etc: model is too weak • Anyhow, we’ve seen that they don’t scale either! • But these applications often have a soft-realtime subsystem • Steady data generation • May need to deliver over a large scale

  18. Today introduce a new design pt. • Bimodal multicast is reliable in a sense that can be formalized, at least for some networks • Generalization for larger class of networks should be possible but maybe not easy • Protocol is also very stable under steady load even if 25% of processes are perturbed • Scalable in much the same way as SRM

  19. Bimodal Attack • General commands soldiers • If almost all attack, victory is certain • If very few attack, they will perish but the Empire survives • If many attack, the Empire is lost • Consensus is impossible! Attack!

  20. Worse and worse... • General sends orders via couriers, who might perish or get lost in the forest (no spies or traitors) • Some of the armies are sick with flu. They won’t fight and may not even have healthy couriers to participate in protocol (Message omission, crash and timing faults.)

  21. Properties Desired • Bimodal distribution of multicast delivery • Ordering: FIFO on sender basis • Stability: lets us garbage collect without violating bimodal distribution • Detection of lost messages

  22. More desired properties • Formal model allows us to reason about how the primitive will behave in real systems • Scalable protocol: costs (background messages, throughput) do not degrade as a function of system size • Extremely stable throughput

  23. Naming our new protocol • Bimodal doesn’t lend itself to traditional contractions (bcast is taken, bicast and bimcast sound pretty awful) • Focus on probabilistic nature of guarantees • So we call it pbcast • Historical footnote: these are multicast protocols but everyone uses “bcast” in names

  24. Environment • Will assume that most links have known throughput and loss properties • Also assume that most processes are responsive to messages in bounded time • But can tolerate some flaky links and some crashed or slow processes.

  25. Start by using unreliable multicast to rapidly distribute the message. But some messages may not get through, and some processes may be faulty. So initial state involves partial distribution of multicast(s)

  26. Periodically (e.g. every 100ms) each process sends a digest describing its state to some randomly selected group member. The digest identifies messages. It doesn’t include them.

  27. Recipient checks the gossip digest against its own history and solicits a copy of any missing message from the process that sent the gossip

  28. Processes respond to solicitations received during a round of gossip by retransmitting the requested message. The round lasts much longer than a typical RPC time.
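The digest/solicit/retransmit cycle of slides 26–28 can be sketched as a toy simulation. All names and the one-digest-per-round structure here are illustrative assumptions, not the paper's implementation:

```python
import random

def gossip_round(histories, rng):
    """Each process sends a digest (message ids only, not bodies) to one
    randomly selected peer; the peer solicits anything it is missing, and
    the sender retransmits those messages."""
    procs = list(histories)
    for p in procs:
        q = rng.choice([x for x in procs if x != p])
        digest = set(histories[p])        # digest identifies messages
        missing = digest - histories[q]   # q checks digest against its history
        histories[q] |= missing           # p retransmits the solicited copies

rng = random.Random(1)
# partial initial distribution: the unreliable multicast of ids 0..9
# reached only 3 of 8 processes (slide 25's starting state)
histories = {i: set(range(10)) if i < 3 else set() for i in range(8)}
for _ in range(6):                        # a few ~100 ms gossip rounds
    gossip_round(histories, rng)
```

Starting from a partial distribution, repeated rounds drive every history toward the full message set, which is the anti-entropy behavior the protocol relies on.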

  29. Delivery? Garbage Collection? • Deliver a message when it is in FIFO order • Garbage collect a message when you believe that no “healthy” process could still need a copy (we used to wait 10 rounds, but now are using gossip to detect this condition) • Match parameters to intended environment
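Per-sender FIFO delivery, as in slide 21's ordering property, amounts to holding back out-of-order messages until the gap is filled. A minimal sketch (class and field names are assumptions):

```python
class FifoBuffer:
    """Deliver message (sender, seq) only after all earlier seqs from
    that sender have been delivered; hold later ones in the meantime."""

    def __init__(self):
        self.next_seq = {}   # per-sender next expected sequence number
        self.pending = {}    # (sender, seq) -> message held out of order

    def receive(self, sender, seq, msg):
        self.pending[(sender, seq)] = msg
        delivered = []
        want = self.next_seq.get(sender, 0)
        while (sender, want) in self.pending:   # drain any filled-in prefix
            delivered.append(self.pending.pop((sender, want)))
            want += 1
        self.next_seq[sender] = want
        return delivered

buf = FifoBuffer()
first = buf.receive("p", 1, "b")    # held: seq 0 from "p" still missing
second = buf.receive("p", 0, "a")   # gap filled, both delivered in order
```

Messages still sitting in `pending` are exactly the ones a garbage-collection rule must be careful about: a copy can only be discarded once no healthy process could still need it.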

  30. Need to bound costs • Worries: • Someone could fall behind and never catch up, endlessly loading everyone else • What if some process has lots of stuff others want and they bombard him with requests? • What about scalability in buffering and in list of members of the system, or costs of updating that list?

  31. Optimizations • Request retransmissions most recent multicast first • Idea is to “catch up quickly” leaving at most one gap in the retrieved sequence

  32. Optimizations • Participants bound the amount of data they will retransmit during any given round of gossip. If too much is solicited they ignore the excess requests

  33. Optimizations • Label each gossip message with the sender’s gossip round number • Ignore solicitations that carry an expired round number, reasoning that they arrived very late and hence are probably no longer correct

  34. Optimizations • Don’t retransmit same message twice in a row to any given destination (the copy may still be in transit hence request may be redundant)

  35. Optimizations • Use IP multicast when retransmitting a message if several processes lack a copy • For example, if solicited twice • Also, if a retransmission is received from “far away” • Tradeoff: excess messages versus low latency • Use regional TTL to restrict multicast scope
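Two of the optimizations above, the per-round retransmission bound (slide 32) and the expired-round filter (slide 33), can be combined in a small sketch; the budget value and function names are illustrative assumptions:

```python
RETRANS_BUDGET = 3   # max messages retransmitted per gossip round (assumed)

def answer_solicitations(current_round, requests, store):
    """requests: list of (round_sent, msg_id) solicitations.
    Drop solicitations labeled with an expired round number, and stop
    once this round's retransmission budget is exhausted."""
    sent = []
    for round_sent, msg_id in requests:
        if round_sent != current_round:    # arrived very late: ignore
            continue
        if len(sent) >= RETRANS_BUDGET:    # too much solicited: ignore excess
            break
        if msg_id in store:
            sent.append(msg_id)
    return sent

store = {1: "m1", 2: "m2", 3: "m3", 4: "m4"}
reqs = [(7, 1), (6, 2), (7, 3), (7, 4), (7, 2)]
answered = answer_solicitations(7, reqs, store)
```

Here the request from round 6 is discarded as stale, and the final round-7 request is dropped because the budget of 3 is already spent, so no process can be bombarded into overload.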

  36. Scalability • Protocol is scalable except for its use of the membership of the full process group • Updates could be costly • Size of list could be costly • In large groups, would also prefer not to gossip over long high-latency links

  37. Can extend pbcast to solve both • Could use IP multicast to send initial message. (Right now, we have a tree-structured alternative, but to use it, need to know the membership) • Tell each process only about some subset k of the processes, k << N • Keeps costs constant.

  38. Router overload problem • Random gossip can overload a central router • Yet information flowing through this router is of diminishing quality as rate of gossip rises • Insight: constant rate of gossip is achievable and adequate

  39. [Figure: two sender–receiver pairs. One path crosses a limited-bandwidth link (1500 Kbit/s) with a high noise rate; bandwidth of the other links is 10 Mbit/s]

  40. Hierarchical Gossip • Weight gossip so that probability of gossip to a remote cluster is smaller • Can adjust weight to have constant load on router • Now propagation delays rise… but just increase rate of gossip to compensate
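Weighting gossip so that a remote cluster is chosen only rarely can be sketched as biased peer selection. The 5% remote probability is an illustrative assumption, to be tuned so the aggregate load on the router stays constant:

```python
import random

REMOTE_PROB = 0.05   # tune so members * gossip_rate * REMOTE_PROB fits router

def pick_peer(me, local_cluster, remote_cluster, rng):
    """Gossip mostly within the local cluster; cross the router
    (to the remote cluster) with small fixed probability."""
    pool = remote_cluster if rng.random() < REMOTE_PROB else local_cluster
    pool = [p for p in pool if p != me]
    return rng.choice(pool)

rng = random.Random(0)
local = [f"a{i}" for i in range(10)]
remote = [f"b{i}" for i in range(10)]
picks = [pick_peer("a0", local, remote, rng) for _ in range(1000)]
remote_frac = sum(p.startswith("b") for p in picks) / 1000
```

Since each cross-cluster gossip happens with fixed probability per round, the router sees a roughly constant rate regardless of how fast information needs to propagate; raising the overall gossip rate compensates for the longer propagation delay.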

  41. Hierarchical Gossip

  42. Remainder of talk • Show results of formal analysis • We developed a model (won’t do the math here -- nothing very fancy) • Used model to solve for expected reliability • Then show more experimental data • Real question: what would pbcast “do” in the Internet? Our experience: it works!

  43. Idea behind analysis • Can use the mathematics of epidemic theory to predict reliability of the protocol • Assume an initial state • Now look at result of running B rounds of gossip: converges exponentially quickly towards atomic delivery
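The exponential convergence can be illustrated with the standard push-gossip recurrence (a generic epidemic model, not the paper's exact analysis): with i infected processes out of n, a susceptible process misses every gossip in a round with probability (1 - 1/n)^i.

```python
def rounds_to_cover(n, start=1, target=0.9999):
    """Iterate the deterministic epidemic recurrence until the expected
    number of infected processes reaches target * n."""
    infected = float(start)
    rounds = 0
    while infected < target * n:
        p_miss = (1 - 1 / n) ** infected          # susceptible hears nothing
        infected += (n - infected) * (1 - p_miss)  # expected new infections
        rounds += 1
    return rounds

r1 = rounds_to_cover(1000)
r2 = rounds_to_cover(2000)   # doubling n adds only a round or two
```

The logarithmic growth of `rounds_to_cover` in n is the sense in which B rounds of gossip converge exponentially quickly toward atomic delivery.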

  44. Failure analysis • Suppose someone tells me what they hope to “avoid” • Model as a predicate on final system state • Can compute the probability that pbcast would terminate in that state, again from the model

  45. Two predicates • Predicate I: A faulty outcome is one where more than 10% but less than 90% of the processes get the multicast …call this the “General’s predicate” because he views it as a disaster if many but not most of his troops attack

  46. Two predicates • Predicate II: A faulty outcome is one where roughly half get the multicast and failures might “conceal” true outcome … this would make sense if using pbcast to distribute quorum-style updates to replicated data. The costly hence undesired outcome is the one where we need to rollback because outcome is “uncertain”

  47. Two predicates • Predicate I: More than 10% but less than 90% of the processes get the multicast • Predicate II: Roughly half get the multicast but crash failures might “conceal” outcome • Easy to add your own predicate. Our methodology supports any predicate over final system state
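The claim that the methodology supports any predicate over final system state can be illustrated with a crude Monte Carlo check of Predicate I. The independent-loss end-state model below is a simplifying assumption for illustration, not the paper's analytic model:

```python
import random

def predicate_I(delivered, n):
    """Faulty outcome: more than 10% but less than 90% of the
    processes got the multicast (the General's predicate)."""
    return 0.10 * n < delivered < 0.90 * n

def run_once(n, p_reach, rng):
    """Assumed end-state model: each process independently ends up
    with the message with probability p_reach."""
    return sum(rng.random() < p_reach for _ in range(n))

rng = random.Random(42)
n, trials = 100, 2000
faulty = sum(predicate_I(run_once(n, 0.99, rng), n) for _ in range(trials))
rate = faulty / trials   # estimated probability of a bimodally-bad outcome
```

With high per-process reach, runs land overwhelmingly in the "almost everyone delivered" mode, so the faulty middle region is essentially never hit; swapping in a different predicate function evaluates any other property of the final state the same way.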
