
Lower Bounds on the Communication of Distributed Graph Algorithms: Progress and Obstacles




Presentation Transcript


  1. Lower Bounds on the Communication of Distributed Graph Algorithms: Progress and Obstacles Rotem Oshman ADGA 2013

  2. Overview: Network Models • LOCAL • CONGESTED CLIQUE • ASYNC MESSAGE-PASSING • CONGEST / general network

  3. Talk Overview • Lower bound techniques • CONGEST / general networks: reductions from 2-party communication complexity • Asynchronous message passing: reductions from multi-party communication complexity • Obstacles to proving lower bounds for the congested clique

  4. Communication Complexity • Two players, Alice and Bob, hold private inputs x and y and want to compute f(x, y) = ? • Cost: the number of bits they must exchange in the worst case

  5. Example: Disjointness • Disj(X, Y) = 1 iff X ∩ Y = ∅, for X, Y ⊆ {1, …, n} • Ω(n) bits needed, even for randomized protocols [Kalyanasundaram and Schnitger, Razborov '92]
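
A minimal Python sketch (function names are illustrative) of the Disjointness predicate and of the trivial protocol in which Alice sends her whole n-bit characteristic vector; the Ω(n) lower bound above says this trivial protocol is essentially the best possible.

```python
# Two-party set disjointness over the universe {0, ..., n-1}.
# Disj(X, Y) = 1 iff X and Y share no element.

def disj(x: set, y: set) -> bool:
    """The Disjointness predicate."""
    return len(x & y) == 0

def trivial_protocol(x: set, y: set, n: int):
    """Alice sends her n-bit characteristic vector to Bob, who answers."""
    alice_message = [1 if i in x else 0 for i in range(n)]       # n bits sent
    answer = all(not (bit and i in y) for i, bit in enumerate(alice_message))
    return answer, len(alice_message)                            # (Bob's answer, bits used)

# X = {0, 2} and Y = {1, 3} are disjoint; the trivial protocol uses n = 4 bits.
print(disj({0, 2}, {1, 3}), trivial_protocol({0, 2}, {1, 3}, 4))  # True (True, 4)
```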

  6. Applying 2-Party Communication Complexity Lower Bounds • Textbook reduction: given an algorithm for solving a task in the network… • Alice and Bob simulate the algorithm on a network encoding their inputs • A solution for the task gives an answer for Disjointness • So the bits used by the simulation are at least the communication complexity of Disjointness

  7. Example: Spanning Trees • Setting: directed, strongly-connected network • Communication by local broadcast with bandwidth B bits per round • Nodes have unique identifiers (UIDs) • Diameter 2 • Question: how many rounds to find a rooted spanning tree?

  8. New Problem: Partition • Inputs: X, Y ⊆ [n], with the promise that X ∪ Y = [n] • Goal: Alice outputs X' ⊆ X, Bob outputs Y' ⊆ Y, such that X' and Y' partition [n].

  9. The Partition Problem • Trivial algorithm: • Alice sends her input to Bob • Alice outputs all tasks in her input • Bob outputs all remaining tasks • Communication complexity: O(n) bits • Lower bound?

  10. Reduction from Disj to Partition • Given input X, Y for Disj, let A = [n] \ X and B = [n] \ Y • Notice: X ∩ Y = ∅ iff A ∪ B = [n] • To test whether X ∩ Y = ∅: • Try to solve Partition on (A, B) • Ensure A' ⊆ A and B' ⊆ B (each player checks its side locally) • Check if (A', B') is a partition of [n]: Alice sends Bob hash(A'), Bob compares it to hash([n] \ B')
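
A minimal Python sketch of this reduction as reconstructed above, assuming a Partition protocol that is only guaranteed to be correct when the promise A ∪ B = [n] holds; the fingerprint here is an ordinary Python hash of a frozenset and stands in for the short hash the players exchange.

```python
# Reduction from Disjointness to Partition over [n] = {0, ..., n-1}.
# Assumption (illustrative): partition_protocol(A, B, n) returns (A', B') and is
# only guaranteed to satisfy A' ⊆ A, B' ⊆ B, A' ∪ B' = [n], A' ∩ B' = ∅
# when the promise A ∪ B = [n] holds; otherwise its output may be arbitrary.

def fingerprint(s: set) -> int:
    # Stand-in for the short hash the players exchange.
    return hash(frozenset(s))

def disj_via_partition(x: set, y: set, n: int, partition_protocol) -> bool:
    universe = set(range(n))
    a, b = universe - x, universe - y              # X ∩ Y = ∅  iff  A ∪ B = [n]
    a_out, b_out = partition_protocol(a, b, n)
    if not (a_out <= a and b_out <= b):            # each player checks its side locally
        return False
    # Alice sends hash(A'); Bob compares it with hash([n] \ B'):
    return fingerprint(a_out) == fingerprint(universe - b_out)

def trivial_partition(a: set, b: set, n: int):
    # Trivial O(n)-bit Partition protocol: Alice keeps her whole input, Bob takes the rest.
    return a, set(range(n)) - a

print(disj_via_partition({0, 2}, {1, 3}, 4, trivial_partition))  # True  (disjoint)
print(disj_via_partition({0, 2}, {2, 3}, 4, trivial_partition))  # False (they intersect)
```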

  11. From Partition to Spanning Tree • Given a spanning tree algorithm… [Figure: a network on nodes 1–6 with two designated nodes a and b]

  12. From Partition to Spanning Tree • Simulating one round of the algorithm: [Figure: the network on nodes 1–6, a, b, highlighting node a's message and node b's message]

  13. From Partition to Spanning Tree • When the algorithm outputs a spanning tree: [Figure: the resulting spanning tree on nodes 1–6, a, b]

  14. From Partition to Spanning Tree • If the algorithm runs for R rounds, we use O(R · B) bits • One detail: randomness • Solution: Alice and Bob use public randomness
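
A schematic Python sketch of the simulation in slides 11–14, under an assumed toy interface: Alice simulates the nodes on her side of the graph, Bob the nodes on his side, and each round only the B-bit broadcasts of the boundary nodes a and b cross between the players, so R rounds cost O(R · B) bits. The simulated algorithm here is a toy max-ID flood, not the spanning-tree algorithm of the talk.

```python
# Schematic two-party simulation of a broadcast-CONGEST algorithm (illustrative
# interface). Alice owns the nodes on her side of the cut, Bob the rest; only the
# B-bit broadcasts of the two boundary nodes cross between the players each round.

def two_party_simulation(step, alice_nodes, bob_nodes, rounds, B):
    """step(own_state, incoming_boundary_msg) -> (new_state, boundary_broadcast).
    Returns both final states and the bits exchanged: O(rounds * B)."""
    alice_state = {v: v for v in alice_nodes}   # toy initial state: each node knows its ID
    bob_state = {v: v for v in bob_nodes}
    msg_a = msg_b = None
    bits = 0
    for _ in range(rounds):
        # Both players advance one synchronous round using last round's boundary messages.
        alice_state, new_msg_a = step(alice_state, msg_b)
        bob_state, new_msg_b = step(bob_state, msg_a)
        msg_a, msg_b = new_msg_a, new_msg_b
        bits += 2 * B                           # only the two boundary broadcasts cross the cut
    return alice_state, bob_state, bits

# Toy simulated algorithm: flood the maximum ID seen so far.
def max_id_step(state, incoming):
    best = max(list(state.values()) + ([incoming] if incoming is not None else []))
    return {v: best for v in state}, best

print(two_party_simulation(max_id_step, {1, 2, 3}, {4, 5, 6}, rounds=3, B=32))
```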

  15. When Two Players Just Aren’t Enough • No bottlenecks in the network

  16. When Two Players Just Aren’t Enough • Too much information revealed

  17. Multi-Player Communication Complexity • Communication by shared blackboard • Number-on-forehead • Number-in-hand ??

  18. The Message-Passing Model • k players • Private pairwise channels • Private n-bit inputs • Private randomness • Goal: compute a joint function f(x_1, …, x_k) • Cost: total communication

  19. The Coordinator Model • k players, one coordinator • The coordinator has no input

  20. Message-Passing vs. Coordinator • The two models are equivalent up to an O(log k) factor: a message between two players can be routed through the coordinator together with a destination label

  21. Prior Work on Message-Passing • For k players with n-bit inputs… • Phillips, Verbin, Zhang '12: Ω(kn) lower bounds for bitwise problems (AND/OR, MAJ, …) • Woodruff, Zhang '12, '13: lower bounds for threshold and graph problems • Braverman, Ellen, O., Pitassi, Vaikuntanathan '13: Ω(kn) for set disjointness

  22. Set Disjointness • The k players hold sets X_1, …, X_k ⊆ [n] • Question: is X_1 ∩ … ∩ X_k = ∅, and how many bits does answering this require?
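
For contrast with the Ω(kn) lower bound mentioned on slide 21, here is a sketch (names illustrative) of the trivial O(kn)-bit coordinator protocol: every player sends its n-bit characteristic vector and the coordinator intersects them.

```python
# Trivial coordinator-model protocol for k-party set disjointness over {0, ..., n-1}:
# every player sends its n-bit characteristic vector, so the total cost is k * n bits.

def coordinator_disjointness(player_sets, n):
    bits = 0
    common = set(range(n))
    for s in player_sets:          # players speak one at a time
        bits += n                  # an n-bit characteristic vector to the coordinator
        common &= s                # coordinator intersects everything seen so far
    return len(common) == 0, bits  # disjoint iff no element lies in every set

# Three players, universe {0, 1, 2, 3}: no element is in all three sets.
print(coordinator_disjointness([{0, 1}, {1, 2}, {2, 3}], 4))  # (True, 12)
```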

  23. Notation • Π: a randomized protocol • Π also denotes the protocol's transcript • Π_i: player i's view of the transcript • CC(Π): the communication of Π in the worst case

  24. Entropy and Mutual Information • Entropy: H(X) = Σ_x Pr[X = x] · log(1 / Pr[X = x]) • A lossless encoding of X requires H(X) bits • Conditional entropy: H(X | Y) = E_y[ H(X | Y = y) ]

  25. Entropy and Mutual Information • Mutual information: I(X ; Y) = H(X) − H(X | Y) = H(Y) − H(Y | X) • Conditional mutual information: I(X ; Y | Z) = H(X | Z) − H(X | Y, Z)
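
A small Python sketch (not from the talk) that computes these quantities from an explicit joint distribution, which is handy for sanity-checking identities such as I(X ; Y) = H(X) − H(X | Y).

```python
import math
from collections import defaultdict

# A joint distribution of (X, Y) is a dict mapping (x, y) pairs to probabilities.

def entropy(dist):
    """H of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, index):
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[outcome[index]] += p
    return m

def conditional_entropy(joint):          # H(X | Y) = H(X, Y) - H(Y)
    return entropy(joint) - entropy(marginal(joint, 1))

def mutual_information(joint):           # I(X ; Y) = H(X) - H(X | Y)
    return entropy(marginal(joint, 0)) - conditional_entropy(joint)

# Example: X is a uniform bit and Y = X, so H(X) = 1, H(X | Y) = 0, I(X ; Y) = 1.
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(entropy(marginal(joint, 0)), conditional_entropy(joint), mutual_information(joint))
```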

  26. Information Cost for Two Players [Chakrabarti, Shi, Wirth, Yao '01], [Bar-Yossef, Jayram, Kumar, Sivakumar '04], [Braverman, Rao '10], … Fix a distribution μ on the inputs (X, Y): • External information cost: I(XY ; Π), what an external observer learns about the inputs from the transcript • Internal information cost: I(X ; Π | Y) + I(Y ; Π | X), what the players learn about each other's inputs • These definitions extend to the coordinator model.

  27. Why is Info Complexity Nice? • Formalizes a natural notion • Analogous to causality/knowledge • Admits a direct sum theorem: "The cost of solving n independent copies of a problem P is n times the cost of solving a single copy of P"

  28. Example

  29. Example (Work in Progress) • Triangle detection in general congested graphs • "Is there a triangle?" = OR over all triples {u, v, w} of "is {u, v, w} a triangle?"
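
A small Python sketch of this decomposition (names illustrative): each per-triple predicate is an AND of three edge indicators, and triangle detection is the OR over all C(n, 3) triples.

```python
from itertools import combinations

def is_triangle(adj, u, v, w):
    """Per-triple predicate: AND of three edge indicators."""
    return adj[u][v] and adj[v][w] and adj[u][w]

def has_triangle(adj):
    """'Is there a triangle?' = OR over all triples of 'is {u, v, w} a triangle?'."""
    n = len(adj)
    return any(is_triangle(adj, u, v, w) for u, v, w in combinations(range(n), 3))

# Example: a 4-cycle has no triangle; adding one chord creates one.
cycle = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(has_triangle(cycle))        # False
cycle[0][2] = cycle[2][0] = 1     # add the chord {0, 2}
print(has_triangle(cycle))        # True
```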

  30. Application of Disj Lower Bound • Open problem from Woodruff & Zhang '13: hardness of computing the diameter of a graph • We can show: a lower bound on the number of bits needed to distinguish diameter 3 from a larger diameter • Reduction from Disj: given the inputs, construct a graph • Notice: whether the sets are disjoint determines which of the two diameter values the constructed graph has

  31. Application of Disj Lower Bound • [Figures: the constructed graphs in the two cases, small diameter vs. larger diameter]

  32. Part II: The Power of the Congested Clique

  33. Conversion from Boolean Circuit • Suppose we have a Boolean circuit • Any type of gate, one input per potential edge of the graph • Fan-in 2 • Depth = d, #gates and wires = s • Step 1: reduce the fan-out to 2 • Convert large fan-out gates to a "copying tree" • Blowup: O(log s) factor in depth, constant factor in size • Step 2: convert to a layered circuit

  34. Conversion from Boolean Circuit • Now we have a layered circuit of depth d′ and size s′ • With fan-in and fan-out 2 • Design a CONGEST protocol: • Fix a partition of the inputs among the n nodes, with ≈ n inputs each • Assign each gate to a random CONGEST node • Simulate the circuit layer-by-layer

  35. Simulating a Layer • If a node "owns" gate g on layer i, it sends g's output to the nodes that need it on layer i + 1 • Size of layer i + 1 ≤ 2 · size of layer i • What is the load on a clique edge {u, v}? • Each wire from layer i to layer i + 1 crosses {u, v} with probability ≤ 2/n² • At most s′ wires in total • By Chernoff, w.h.p. the load on every edge is O(s′/n² + log n)
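
A toy Python sketch of the load argument (names illustrative, assumptions stated in the comments): gates are assigned to the n clique nodes uniformly at random, and the load on each clique edge, i.e. the number of wires whose endpoint gates land on that pair of nodes, concentrates around 2 · #wires / n², as the Chernoff step claims.

```python
import random
from collections import Counter

def max_edge_load(wires, n, seed=0):
    """Assign every gate to one of n clique nodes uniformly at random and return
    the maximum number of wires that any single clique edge must carry.
    wires: list of (gate, gate) pairs going from one layer to the next."""
    rng = random.Random(seed)
    gates = {g for wire in wires for g in wire}
    owner = {g: rng.randrange(n) for g in gates}    # random gate-to-node assignment
    load = Counter()
    for g1, g2 in wires:
        u, v = owner[g1], owner[g2]
        if u != v:                                  # a wire kept inside one node is free
            load[frozenset((u, v))] += 1
    return max(load.values(), default=0)

# Toy check: about n^2 wires over about n^2 / 2 clique edges, so the expected load
# per edge is roughly 2, and the maximum stays small, as the Chernoff bound predicts.
n = 50
wires = [(i, n * n + i) for i in range(n * n)]      # one wire per layer-i gate
print(max_edge_load(wires, n))
```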

  36. Conversion from Boolean Circuit • A union bound over all edges and layers finishes the proof • Corollary: explicit lower bounds in the congested clique imply explicit lower bounds on Boolean circuits with polylogarithmic depth and nearly-linear size • Even worse: there are reasons to believe that even a weak lower bound would be hard to prove

  37. Conclusion • LOCAL • CONGESTED CLIQUE • ASYNC MESSAGE-PASSING • CONGEST / general network
