
Gossip and its application



Presentation Transcript


  1. Gossip and its application Presented by Anna Kaplun

  2. Agenda • Technical preliminaries • Gossip algorithms • Randomized unbalanced gossip • Unbalanced gossip • Consensus • Distributed computing

  3. Technical preliminaries • The system is synchronous. • There are n processors, each with a unique integer name in the interval {1,...,n}; n is known to all processors. • Each processor can send a message to any subset of processors in one round. The size of a message is assumed to be sufficiently large to carry a complete local state.

  4. Technical preliminaries – Performance metrics • Communication denotes the total number of point-to-point messages sent. • Time is measured as the number of rounds. A round is the number of clock cycles sufficient to: • receive the messages delivered in the previous round • perform local computations • send messages to an arbitrary set of processors and deliver them.

  5. Gossip • In the beginning each processor has an input value called its rumor. • The goal: every non-faulty processor p • knows the rumor of every other processor q, OR • p knows that q has crashed.

  6. Randomized unbalanced gossip • Processors are partitioned into m groups of balanced size. • m = min{n, 2t} • n – number of processors • t – maximum number of faulty processors • Every group has a leader. • Only leaders send messages, while regular nodes may only answer leaders’ requests.

  7. Randomized unbalanced gossip • Processors are partitioned into w chunks of balanced size. • w is: • if 2t < n then w = 2t • else w = n − t. Note: if 2t < n then chunks and groups are the same.
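
To make the parameters concrete, here is a minimal Python sketch (an illustration, not taken from the presentation) that computes m and w and builds balanced groups and chunks; the round-robin split and picking the first member of each group as its leader are assumptions made only for this example.

```python
# Hypothetical helper: computes m, w and balanced groups/chunks for n processors
# and at most t crash failures (assumes 1 <= t < n).
def partition(n: int, t: int):
    assert 1 <= t < n
    m = min(n, 2 * t)                       # number of groups (= leaders)
    w = 2 * t if 2 * t < n else n - t       # number of chunks
    procs = list(range(1, n + 1))
    groups = [procs[i::m] for i in range(m)]    # balanced, round-robin split
    chunks = [procs[i::w] for i in range(w)]
    leaders = [g[0] for g in groups]            # one leader per group (assumed choice)
    return m, w, groups, chunks, leaders

if __name__ == "__main__":
    m, w, groups, chunks, leaders = partition(n=10, t=3)
    print(m, w)        # 6 6  -> 2t < n, so chunks and groups coincide
    print(leaders)     # [1, 2, 3, 4, 5, 6]
```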

  8. Communication graph • Every node is connected to the appropriate leader. • The leaders form a graph that consists of m nodes and has the following property: for each subgraph R of size at least m − t, there is a subgraph P(R) containing at least (m − t)/7 nodes within radius 2 + 30 ln(m) (cf. slide 23); its degree is bounded.

  9. Communication graph (m = 2t < n) • m groups, m leaders • connect the nodes to their leaders • the leaders form the communication graph • chunks and groups are the same

  10. Communication graph (m = n ≤ 2t) • m groups, m leaders • connect the nodes to their leaders • the leaders form the communication graph • n − t chunks

  11. Randomized unbalanced gossip – local view • Rumors – all known rumors; initialized: Rumors_p[p] = myRumor • Active – list of crashed processors; initialized: Active_p[q] = nil (for every q) • Pending – list of fully informed processors; initialized: Pending_p[q] = nil (for every q)
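
A minimal sketch of how processor p’s local state from this slide could be represented; the dict-based encoding is an assumption for illustration, with names mirroring Rumors, Active and Pending.

```python
# Local state of processor p in a system of n processors.
class LocalState:
    def __init__(self, p: int, n: int, my_rumor):
        self.p = p
        # Rumors_p[q]: q's rumor once learned, else None; own rumor known from the start.
        self.rumors = {q: None for q in range(1, n + 1)}
        self.rumors[p] = my_rumor
        # Active_p[q]: None ("nil") until q is discovered to have crashed.
        self.active = {q: None for q in range(1, n + 1)}
        # Pending_p[q]: None ("nil") until q is known to be fully informed.
        self.pending = {q: None for q in range(1, n + 1)}

state = LocalState(p=3, n=5, my_rumor="r3")
print(state.rumors)   # {1: None, 2: None, 3: 'r3', 4: None, 5: None}
```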

  12. Randomized unbalanced gossip – messages • graph – carries the whole local state, sent along the communication graph • inquiry – requests the local state of a specific node • notification – carries the whole local state, sent when the sender knows all rumors (or knows that a certain processor crashed) • reply – carries the whole local state, sent as a reply to an inquiry message

  13. Randomized unbalanced gossip – the algorithm • Only leaders send messages; regular nodes only answer queries. • A leader starts as a collector; when it knows about all nodes it becomes a disseminator. • The algorithm consists of phases: • Regular phase – executed T times • Ending phase – executed 4 times

  14. Randomized unbalanced gossip – the algorithm (regular phase, executed T times) • Update the local arrays. • If p is a collector that has already heard about all nodes, then it becomes a disseminator. For each processor q: • If q is active and q is my neighbor in the communication graph, then send a graph message to q. • If I’m a collector and q is in the first chunk with a processor about which I haven’t heard yet, then send an inquiry message. • If I’m a disseminator and q is in the first chunk with a processor that needs to be notified, then send a notification message. • If q is a collector from which an inquiry message was received, then send a reply message to q. Chunks are ordered according to the permutation π_p.
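
A minimal sketch of the message decisions a leader p makes in one regular phase. It assumes the data structures from the previous sketch (state is a dict with 'role', 'rumors', 'active', 'pending'); chunks is a list of chunk member lists, pi_p is p’s permutation of chunk indices, and inquiries_received lists the collectors that queried p — all assumed names, not the presentation’s code.

```python
def regular_phase_messages(state, neighbors, chunks, pi_p, inquiries_received):
    """Return (kind, destination) pairs that leader p sends in this phase."""
    heard = lambda q: state["rumors"][q] is not None or state["active"][q] is not None
    informed = lambda q: state["pending"][q] is not None

    # A collector that has heard about every processor becomes a disseminator.
    if state["role"] == "collector" and all(heard(q) for q in state["rumors"]):
        state["role"] = "disseminator"

    msgs = []
    # Graph messages to neighbors not known to have crashed.
    msgs += [("graph", q) for q in neighbors if state["active"][q] is None]
    # Collector: inquire the first chunk (in pi_p order) containing an unknown node.
    if state["role"] == "collector":
        for c in pi_p:
            if any(not heard(q) for q in chunks[c]):
                msgs += [("inquiry", q) for q in chunks[c]]
                break
    # Disseminator: notify the first chunk containing a node not yet fully informed.
    if state["role"] == "disseminator":
        for c in pi_p:
            if any(not informed(q) for q in chunks[c]):
                msgs += [("notification", q) for q in chunks[c]]
                break
    # Reply to every collector whose inquiry arrived in the previous round.
    msgs += [("reply", q) for q in inquiries_received]
    return msgs
```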

  15. Randomized unbalanced gossip – the algorithm (regular phase) • Illustration for some leader p: send graph messages; take the first unknown chunk from π_p and send a query; regular nodes answer the queries.

  16. Randomized unbalanced gossip – the algorithm (regular phase) • Illustration for some leader p that has collected all rumors: send graph messages; take the first uninformed chunk from π_p and send a notification; regular nodes answer the queries.

  17. Randomized unbalanced gossip – the algorithm (ending phase) • For 4 times: • Update the local arrays. • If p is a collector that has already heard about all nodes, then it becomes a disseminator. For each processor q: • If I’m a collector and I don’t know about q, then send an inquiry message. • If I’m a disseminator and q needs to be notified, then send a notification message. • If q is a collector from which an inquiry message was received, then send a reply message to q.
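
For comparison, a sketch of one ending phase under the same assumed data structures: the only difference from a regular phase is that inquiries and notifications go to every relevant processor rather than to a single chunk.

```python
def ending_phase_messages(state, inquiries_received):
    """Return (kind, destination) pairs that leader p sends in one ending phase."""
    heard = lambda q: state["rumors"][q] is not None or state["active"][q] is not None
    informed = lambda q: state["pending"][q] is not None

    if state["role"] == "collector" and all(heard(q) for q in state["rumors"]):
        state["role"] = "disseminator"

    msgs = []
    if state["role"] == "collector":
        msgs += [("inquiry", q) for q in state["rumors"] if not heard(q)]
    if state["role"] == "disseminator":
        msgs += [("notification", q) for q in state["rumors"] if not informed(q)]
    msgs += [("reply", q) for q in inquiries_received]
    return msgs
```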

  18. Randomized unbalanced gossip – updating lists • Rumors – when a message is received, the new rumors it carries are merged into the local list of known rumors.

  19. Randomized unbalanced gossip – updating lists (cont.) • Active – q can be marked as faulty if: • a message was received in which q is marked as faulty • q is my neighbor in the communication graph and I didn’t receive a graph message from it • I sent a query to q and didn’t receive a reply within two rounds.

  20. Randomized unbalanced gossip – updating lists (cont.) • Pending – q can be marked as fully informed if: • a message was received in which q is marked as fully informed • a notification message was received from q • I’m a disseminator and I sent a notification to q.
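
A sketch of the update rules from slides 18–20, again under the assumed dict representation. Here received is a list of (sender, sender_state, kind) triples, while silent_graph_neighbors, timed_out_queries and notifications_sent encode the two timeout rules and the disseminator rule; all parameter names are assumptions.

```python
def update_lists(state, received, silent_graph_neighbors, timed_out_queries,
                 notifications_sent):
    for sender, sender_state, kind in received:
        # Rumors: merge every rumor the sender knows.
        for q, r in sender_state["rumors"].items():
            if r is not None:
                state["rumors"][q] = r
        # Active / Pending: adopt the sender's knowledge about crashed and
        # fully informed processors.
        for q, crashed in sender_state["active"].items():
            if crashed is not None:
                state["active"][q] = crashed
        for q, done in sender_state["pending"].items():
            if done is not None:
                state["pending"][q] = done
        # A notification also proves that its sender is fully informed.
        if kind == "notification":
            state["pending"][sender] = True
    # Neighbors that sent no graph message, and processors that did not answer
    # a query within two rounds, are marked as faulty.
    for q in silent_graph_neighbors:
        state["active"][q] = True
    for q in timed_out_queries:
        state["active"][q] = True
    # A disseminator marks every processor it notified as fully informed.
    if state["role"] == "disseminator":
        for q in notifications_sent:
            state["pending"][q] = True
    return state
```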

  21. Randomized unbalanced gossip – correctness • Claim: there is at least one leader that never fails, provided t < n. • If 2t < n then m = 2t, hence at least half of the leaders won’t fail. • If 2t ≥ n then m = n, hence at least one leader won’t fail. • Conclusion: at least one leader will run an ending phase. During that phase it will learn about all processors and will disseminate this knowledge.
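
A one-line worked check of the claim, using only m = min{n, 2t}, at most t crashes, and t < n:

```latex
\[
2t < n:\; m = 2t \;\Rightarrow\; m - t = t = \tfrac{m}{2}, \qquad
2t \ge n:\; m = n \;\Rightarrow\; m - t = n - t \ge 1 \ \ (\text{since } t < n).
\]
```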

  22. Randomized unbalanced gossip – complexity • R_p – a conceptual list of the chunks that contain at least one node that p has not heard about • r_K(i) – defined with respect to a subgraph K of the leaders’ graph • S_p – a conceptual list of the chunks that contain at least one node that p still has to notify • s_K(i) – defined with respect to a subgraph K of the leaders’ graph

  23. Randomized unbalanced gossip – complexity • Look at the graph formed by the m leaders. • At least m − t leaders never fail. • There are at least (m − t)/7 nodes in a connected component with radius 2 + 30 ln(m). Let us call this subgraph K.

  24. Randomized unbalanced gossip – complexity (cont.) • Lemma: if a stage takes … phases, then … with probability at least … • Proof: if a chunk is not in all R_p lists, it will be removed from all the other lists in one stage. The worst case is when all chunks are in all lists.

  25. Randomized unbalanced gossip – complexity (cont.) • Let us consider the choices of chunks made by the processors in K as occurring sequentially. • Consider a sequence of 30·|K|·ln(m) consecutive trials X_1, X_2, . . . , which represents the case c = 1.

  26. Randomized unbalanced gossip – complexity (cont.) • Case |K|·ln(m) > r(i−1)/2 • Let us consider it to be a success in trial X_i if either a new chunk is selected or the number of chunks selected already by this trial is at least r(i − 1)/2. • The probability of success in a trial is at least 1/2.

  27. Randomized unbalanced gossip – complexity (cont.) • Case |K|·ln(m) ≤ r(i−1)/2 • Let us consider it to be a success in trial X_i if either a new chunk is selected or the number of chunks selected already by this trial is at least |K|·ln(m). • The probability of success in a trial is at least 1/2.

  28. Randomized unbalanced gossip – complexity (cont.) • In both cases we have 30·|K|·ln(m) Bernoulli trials with success probability at least 1/2, so the desired event holds with probability at least … .
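
The probability bound on the slide did not survive transcription; as a hedged reconstruction (the exact constants on the original slide may differ), a standard Chernoff-type lower-tail bound for N = 30·|K|·ln(m) independent trials, each succeeding with probability at least 1/2, gives for instance:

```latex
\[
\Pr\Bigl[\textstyle\sum_{i=1}^{N} X_i < \tfrac{N}{4}\Bigr] \;\le\; e^{-N/16},
\qquad N = 30\,|K|\ln m ,
\]
```

so with probability at least 1 − e^{−N/16} there are at least N/4 successes in the stage.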

  29. Randomized unbalanced gossip – complexity (cont.) • Lemma: for each … there is … such that gossiping is successfully completed by regular phase … , while the communication by this phase is … , with probability at least … .

  30. Randomized unbalanced gossip – complexity (cont.) • Proof: let L be a fixed subgraph of … induced by m − t vertices. • There are O(1) stages in which … • There are at most 1 + log(w) stages in which … • The other stages are called useless.

  31. Randomized unbalanced gossip – complexity (cont.) • There is a constant β > 0 such that if there is no useless stage among the first β + lg(w) even ones, then r(β + lg(w)) = 0. • The probability that there is a useless stage is … • There are at most … subgraphs L.

  32. Randomized unbalanced gossip – complexity (cont.) • Hence the probability that there is a useless stage among the first β + lg(w) even ones, for an arbitrary subgraph L, is … • This is the probability that some collector did not become a disseminator.

  33. Randomized unbalanced gossip – complexity (cont.) • The same probability bound applies to the event that after β + lg(w) even stages all leaders have become disseminators, but in the following β + lg(w) even stages there is still some uninformed node. • The probability that there is a disseminator after 4(β + lg(w)) stages is … .

  34. Randomized unbalanced gossip – time complexity • … stages • Each stage takes … phases • … phases in total

  35. Randomized unbalanced gossip – message complexity • Number of graph messages: … • Number of inquiry messages: … • Message complexity: …

  36. From random to deterministic – unbalanced gossip • Take a number … such that … . Let α > 0 be the corresponding number such that there exists a family of local permutations Π for which the termination threshold T = α·lg(w)·ln(m) guarantees completion of gossiping without invoking the ending phases. Make this threshold value T and such a family Π part of the code of algorithm UNBALANCED-GOSSIP. • UNBALANCED-GOSSIP has time complexity … and message complexity … • If … then we get: time complexity … and message complexity … • Π is only proved to exist. • If a = 0 then the message complexity is … .

  37. Consensus • Every processor starts with some initial value in {0,1}. • Each processor decides on a decision value. • Termination: each processor eventually chooses a decision value, unless it crashes. • Agreement: no two processors choose different decision values. • Validity: only a value among the initial ones may be chosen as a decision value.

  38. Consensus • Gossip → consensus? Let the processors gossip and then decide on the maximum value… • What if some processor has crashed and its input value is known only to a subset of the processors? • Gossip can be solved in O(1) time, while consensus with failures can’t be solved in fewer than t + 1 rounds.

  39. Consensus • The algorithm is designed for t failures, if … • Time complexity: … • Communication complexity: …

  40. Consensus – White knights consensus • The leaders reach a consensus and then tell their decision to the regular nodes. • Leaders send messages along a communication graph. • In order to handle a partition of the communication graph in case of failures, the nodes run a gossip algorithm.

  41. Consensus – White knights consensus • Set rumor to the initial value • Repeat … times: • if rumor = 1 then send a message to every neighbor • Repeat m times: • receive short messages • if rumor = 0 and a message was received then set rumor = 1 • if rumor was set to 1 in this round, send a message to all neighbors • Repeat 2 + 30·log(m) times: • receive compactness messages • merge Nearby lists • send compactness messages to all neighbors • If the Nearby list contains fewer than (m − t)/7 nodes, set rumor = 0 • Perform gossiping • Decide on a value
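
A minimal sketch of one iteration of this scheme from one leader’s point of view (an illustration, not the presentation’s code). The helpers send_preference, receive_preferences, exchange_compactness and run_gossip are assumed synchronous communication primitives; the names rumor and Nearby follow the slides.

```python
import math

def white_knights_iteration(my_id, rumor, neighbors, m, t,
                            send_preference, receive_preferences,
                            exchange_compactness, run_gossip):
    # 1. If my preference is 1, announce it; then for m rounds forward any
    #    newly learned 1, so a "1" floods its whole connected component.
    if rumor == 1:
        send_preference(my_id, neighbors)
    for _ in range(m):
        got_one = 1 in receive_preferences(my_id)
        if rumor == 0 and got_one:
            rumor = 1
            send_preference(my_id, neighbors)   # forward only when newly set
    # 2. Compactness check: merge Nearby lists for 2 + 30*log(m) rounds.
    nearby = {my_id}
    for _ in range(2 + int(30 * math.log(m))):
        nearby |= exchange_compactness(my_id, nearby, neighbors)
    if len(nearby) < (m - t) / 7:
        rumor = 0      # component too small: give up the "1" preference
    # 3. Gossip the rumor; a "1" learned from anyone is kept for the next
    #    iteration (after the last iteration this is the decision value).
    all_rumors = run_gossip(my_id, rumor)
    return 1 if 1 in all_rumors else 0
```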

  42. Consensus – White knights consensus • (Illustration: send your preference → check compactness → gossip, shown with example 0/1 preference values.)

  43. White knights consensus – intuition • In every round, if a node’s preference value is 1 it sends it to its neighbors; if it receives a new preference value of 1, it sends it to its neighbors. • This is done for m rounds to ensure that ‘1’ propagates to all nodes in the connected component. • All nodes in a connected component have the same rumor value after step 2.b.b.

  44. White knights consensus – intuition: why so many phases? • Let us look at some node that has rumor = 1 before gossiping. • According to the algorithm, its connected component contains at least (m − t)/7 nodes and they should have rumor = 1 too. • What if they all crash while gossiping? After gossiping some nodes may have rumor = 1 while others don’t.

  45. White knights consensus – intuition: why so many phases? (cont.) • If this scenario happened in every iteration, then the nodes wouldn’t reach consensus. • But in every such iteration at least (m − t)/7 nodes must fail. • It is impossible that all m nodes crash.
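
A short worked bound makes this precise: each such “bad” iteration kills at least (m − t)/7 leaders, and at most t leaders can ever crash, so (the exact iteration count used by the algorithm did not transfer, but it is chosen larger than this bound):

```latex
\[
\#\{\text{bad iterations}\} \;\le\; \frac{t}{(m-t)/7} \;=\; \frac{7t}{m-t}.
\]
```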

  46. White knights consensus – correctness • A processor is said to be a white knight in an iteration if it starts gossiping in that iteration with rumor equal to 1.

  47. White knights consensus – correctness (cont.) The decision value is among the input values: • all inputs are “0” – no “1” ever appears • all inputs are “1” – at least one processor that never fails stays compact through all iterations and spreads its “1” value at the last gossiping step.

  48. White knights consensus – correctness (cont.) All processors decide on the same value: • If there is an iteration without a white knight, all nodes have rumor “0”. • If there is an iteration with a white knight and it survives gossiping, every processor learns its rumor, and in the next iteration all processors start with rumor = 1. At least one processor stays compact through all iterations and spreads its “1” value at the last gossiping step.

  49. White knights consensus – correctness (cont.) • There are white knights in each iteration, but no white knight survives gossiping in any iteration – we have shown that this can’t happen, since there are “too many” iterations.

  50. White knights consensus – time complexity • Number of phases: … • Number of phases: …
