
Sampling in Graphs


Presentation Transcript


  1. Sampling in Graphs Alexandr Andoni (Microsoft Research)

  2. Graph compression Why smaller graphs? • use less storage space • faster algorithms • easier visualization

  3. ? • Preserve some structure • Cuts, approximately • Other properties: distances, (multi-commodity) flows, effective resistances…

  4. Plan 1) Cut sparsifiers 2) More efficient cut sparsifiers 3) Node sparsifiers

  5. Cut sparsifiers • G has n nodes, m edges; unweighted • Want a sparser H such that, for any set S ⊆ V: cut_H(S) = (1 ± ε) · cut_G(S) • i.e., every cut preserved up to a 1 ± ε approximation

  6. Approach? [Karger'94,'96]: • Sample edges! • Each edge kept with probability p • Original: cut value C • New: expected cut value p·C • Set new edge weight = 1/p • Cut value C => expected cut value C again
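
As a concrete illustration (not from the talk), a minimal sketch of this uniform sampling step, assuming the graph is given as an unweighted edge list:

```python
import random

def uniform_sparsify(edges, p, seed=0):
    """Keep each edge independently with probability p and give kept
    edges weight 1/p: a cut of value C keeps Binomial(C, p) edges,
    so its expected weighted value is again C (Karger's approach)."""
    rng = random.Random(seed)
    return [(u, v, 1.0 / p) for (u, v) in edges if rng.random() < p]
```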

  7. How to set p? • p·m = expected # of edges after sampling • Small p => smaller sparsifier • can't hope to go below n − 1 edges (will disconnect!) • ideally p ≈ n/m • In fact, need p ≳ (log n)/d for a node of degree d • otherwise, can disconnect that vertex

  8. Setting p • Can we get away with p ≈ n/m? • Issue: small cuts • Settle for: p = ρ/λ, where • λ = min-cut value of the graph • ρ = O(ε⁻² log n) is the oversampling constant • any cut will then retain at least p·λ = ρ edges in expectation

  9. Concentration • Usually, will have: • expectation: ok (by "correct" rescaling) • hard part: concentration • up to a 1 ± ε factor • Chernoff bound: • given independent r.v. X₁, …, X_k ∈ [0, 1] • X = Σᵢ Xᵢ with expectation μ = E[X] • then Pr[|X − μ| > εμ] ≤ 2e^(−ε²μ/3)

  10. Applying Chernoff bound • Chernoff bound: • given independent r.v. X₁, …, X_k ∈ [0, 1] • X = Σᵢ Xᵢ with expectation μ • then Pr[|X − μ| > εμ] ≤ 2e^(−ε²μ/3) • Take any cut, value C • Then: Xₑ = 1 if edge e sampled, Xₑ = 0 otherwise • Intuition: E[#edges kept] = p·C ≥ ρ • enough to argue "high probability bound" • Pr[cut estimate not a 1 ± ε approximation] ≤ 2e^(−ε²pC/3) • Set ρ = Θ(ε⁻² log n) to obtain probability < n⁻³, since p·C ≥ ρ
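
Written out (a reconstruction of the slide's lost formulas, with an illustrative choice of constant in ρ), the per-cut bound is:

```latex
% one cut of value C \ge \lambda, sampled at rate p = \rho/\lambda,
% with \rho = 9 \ln(n) / \epsilon^2:
\Pr\big[\, |\hat{C} - C| > \epsilon C \,\big]
   \;\le\; 2 e^{-\epsilon^2 p C / 3}
   \;\le\; 2 e^{-\epsilon^2 \rho / 3}   % since pC = \rho C / \lambda \ge \rho
   \;=\; 2 n^{-3}.
```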

  11. Enough? • We need to argue that all cuts are preserved… • there are 2ⁿ cuts • Theorem [Karger]: the number of cuts of value α·λ is at most n^(2α) • E.g., at most n² cuts of min-cut size • Union bound: • n² cuts of size λ, each fails with probability ≤ n⁻³ • Cuts of size α·λ? • tighter Chernoff: each fails with probability ≤ n^(−3α) • Overall failure probability is Σ_{α≥1} n^(2α) · n^(−3α) = O(1/n)
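
Schematically (bucketing the cut values by integer α), the union bound reads:

```latex
\Pr[\text{some cut deviates by more than } \epsilon]
   \;\le\; \sum_{\alpha \ge 1}
      \underbrace{n^{2\alpha}}_{\#\text{cuts of value } \alpha\lambda}
      \cdot
      \underbrace{n^{-3\alpha}}_{\text{per-cut failure prob.}}
   \;=\; \sum_{\alpha \ge 1} n^{-\alpha}
   \;\le\; \frac{2}{n}.
```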

  12. Smaller size? • # sampled edges ≈ p·m = ρ·m/λ, where ρ = O(ε⁻² log n) • Useless if the min-cut λ is small… • Sample different edges with different probabilities!

  13. Non-uniform sampling [Benczur-Karger'96] • Theorem: • sample each edge e with probability pₑ = min(1, ρ/λₑ) • re-weight sampled edge as 1/pₑ • Then: a 1 ± ε cut sparsifier with • O(ε⁻² n log n) edges in total, with 99% probability • construction time: O(m log² n) • Where λₑ = "strong connectivity" of edge e • Example (two cliques joined by a bridge): λₑ is large inside the cliques, λₑ = 1 for the bridge
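
A hedged sketch of this sampling rule (illustrative names; it assumes the strong connectivities λₑ have already been computed, which is the subject of the next slide):

```python
import random

def benczur_karger_sample(edges, strength, rho, seed=0):
    """Keep edge e with probability p_e = min(1, rho/strength[e]) and
    re-weight it by 1/p_e, so every cut keeps its expected value;
    rho = O(log(n)/eps^2) is the oversampling constant."""
    rng = random.Random(seed)
    sparsifier = []
    for (u, v) in edges:
        p = min(1.0, rho / strength[(u, v)])
        if rng.random() < p:
            sparsifier.append((u, v, 1.0 / p))
    return sparsifier
```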

  14. Strong connectivity • Connectivity of edge e = (u, v): value of the min-cut separating u from v • k-strongly connected component: vertex-induced subgraph whose min-cut is ≥ k • Strong connectivity λₑ of e: highest k such that e lies in a k-strong component • Unique partitioning into k-strong components • (Figure: an edge whose connectivity is 5 but whose strong connectivity is only 2)
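
The definition can be turned into a direct (deliberately unoptimized) computation; the sketch below uses networkx's Stoer-Wagner global min-cut and the fact that the edges crossing a minimum cut of a component get exactly the connectivity accumulated so far. This is only an illustration; Benczur-Karger instead compute approximate strong connectivities in near-linear time.

```python
import networkx as nx

def strong_connectivities(G):
    """Strong connectivity of every edge of a simple undirected graph.
    Recursion invariant: in a component whose global min-cut value is c,
    every edge has strong connectivity >= max(c, floor), and the edges
    crossing a minimum cut have exactly max(c, floor)."""
    lam = {}

    def solve(H, floor):
        for nodes in nx.connected_components(H):
            comp = H.subgraph(nodes).copy()
            if comp.number_of_edges() == 0:
                continue
            c, (side_a, _) = nx.stoer_wagner(comp)  # global min cut
            k = max(c, floor)
            side_a = set(side_a)
            crossing = [(u, v) for u, v in comp.edges()
                        if (u in side_a) != (v in side_a)]
            for u, v in crossing:
                lam[frozenset((u, v))] = k
            comp.remove_edges_from(crossing)
            solve(comp, k)  # recurse into the two sides

    solve(G.copy(), 0)
    return lam
```

On the two-cliques-plus-bridge example above, this assigns λₑ = 1 to the bridge and (clique size − 1) to the edges inside each clique.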

  15. Proof of theorem • i) Number of edges is small, in expectation • ii) Each cut approximately preserved • i) Expected # edges is Σₑ pₑ = ρ · Σₑ 1/λₑ • Fact: if every edge has strong connectivity < k, then # edges < k(n − 1) • count edges by repeatedly cutting cuts of value < k • Take all edges in a k-strong component C, for maximum k • each has λₑ = k, so the sum of 1/λₑ over them is at most |C| − 1 • Contract C to a node, and repeat • Total sum: Σₑ 1/λₑ ≤ n − 1 • Hence: expected # edges ≤ ρ(n − 1) = O(ε⁻² n log n)
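
In symbols, the size bound reads:

```latex
\mathbb{E}[\#\text{edges}]
  \;=\; \sum_e p_e
  \;=\; \rho \sum_e \frac{1}{\lambda_e}
  \;\le\; \rho\,(n-1)
  \;=\; O\!\left(\epsilon^{-2}\, n \log n\right).
```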

  16. ii) Cut values are approximated • Iteratively apply the "uniform sampling" argument • Partition edges by approximate strong connectivity: • Eᵢ are the edges with strong connectivity in [2ⁱ, 2ⁱ⁺¹) • Sampling: as if done one class at a time • first sample the edges of one class Eᵢ, keeping the rest intact (Pr[sample] = 1) • then sample the next class • …

  17. Iterative sampling • In iteration i: • focus on cuts in the subgraph of edges with strong connectivity ≥ 2ⁱ • its min-cut is ≥ 2ⁱ • OK to sample Eᵢ with pᵢ ≈ ρ/2ⁱ • each iteration => a 1 ± ε error • Total error over the O(log n) iterations = 1 ± O(ε log n) • Use ε' = ε/Θ(log n) • Total size: O(ε⁻² n log³ n) • More attentive counting: O(ε⁻² n log n)

  18. Comments • Works for weighted graphs too • Can compute (approximate) strong connectivities in near-linear time • Can sample according to other measures: • connectivity, Nagamochi-Ibaraki index [FHHP'11] • random spanning trees [GRV'09, FHHP'11] • effective resistance [ST'04, SS'08] • distance spanners [KP'11] • All obtain O(ε⁻² n log n) edges (the log n is like coupon collector) • Can we do better? • Yes! • [BSS'09]: O(ε⁻² n) edges (deterministically) • But: construction time is a large polynomial • OPEN: an O(ε⁻² n)-size sparsifier in near-linear time?

  19. BREAK Improve dependence on ε?

  20. Improve dependence on ε? • Sometimes 1/ε² can be a large factor • Generally NO • Need a graph of size Ω(n/ε²) [Alon-Boppana, Alon'97, AKW'14] • But: YES if we relax the desiderata a bit…

  21. Smaller relaxed cut sparsifiers [A-Krauthgamer-Woodruff'14]: • Relaxations: • a small sketch instead of a small graph • each cut preserved with high probability • instead of all cuts at the same time • Theorem: can construct a relaxed cut sparsifier of size n · poly(log n)/ε • The sketch is also a sample of the graph, but "estimation" is different • a small data structure that can report any given cut's value whp

  22. Motivating example • Why does the same sampling not work? • consider the complete graph (or a random one) • each edge sampled with p ≈ ρ/n • degree of each node after sampling ≈ ρ = O(ε⁻² log n) • vertex (singleton) cuts: need Ω(ε⁻²) sampled edges per node for a 1 ± ε approximation • Alon-Boppana: essentially best possible (for regular graphs, for spectral approximation) • But, if interested in cut values only: • can store the exact degrees of all nodes => O(n log n) bits of space • for much larger cuts, sampling is enough
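
The singleton-cut calculation behind this slide, reconstructed as a standard variance estimate:

```latex
% K_n, singleton cut S = \{v\}, true value n-1; sampling at rate p gives
% the estimate \hat{C} = \tfrac{1}{p}\,\mathrm{Bin}(n-1, p), whose standard
% deviation forces p \gtrsim 1/(\epsilon^2 n) for a 1 \pm \epsilon answer:
\sigma[\hat{C}] = \sqrt{\tfrac{(n-1)(1-p)}{p}} \approx \sqrt{n/p},
\qquad
\epsilon\,(n-1) \gtrsim \sqrt{n/p}
\;\iff\;
p \gtrsim \tfrac{1}{\epsilon^2 n}.
```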

  23. Proof of theorem • Three parts: • i) sketch description • ii) sketch size: n · poly(log n)/ε • iii) estimation algorithm, correctness

  24. i) Sketch description • guess the value C of the unknown cut, up to factor 2: C ∈ {1, 2, 4, …}; so, after sampling, the cut of interest keeps only a small number of edges • 1) for each guess C: • down-sample edges with a probability p_C inversely proportional to C • one graph G_C for each possible guess • 2) for each G_C: decompose along sparse cuts: • in a connected component P, if there is a set S ⊆ P whose cut is small relative to |S| • store the cross-cut edges • delete these edges from the graph, and repeat • 3) store: • degrees of each node • a sample of ~1/ε edges out of each node

  25. ii) Sketch size: n · poly(log n)/ε space • For each of the O(log n) graphs G_C, we store: • edges across sparse cuts • degrees of all vertices • a sample of ~1/ε edges out of each node • Claim: there are a total of ~n/ε (up to polylogs) edges across sparse cuts in each G_C • for each cut along a sparse cut, we store few edges relative to the side's size • can assume |S| ≤ |P|/2 • "charge" the stored edges to the nodes in S => few edges per node per charge • if a node is charged, the size of its connected component drops by at least a factor 2 => a node can be charged only O(log n) times! • Overall: n · poly(log n)/ε space

  26. iii) Estimation • Given a set S, need to estimate cut(S) • Suppose we guess cut(S) up to factor 2: • will try a different guess if C turns out to be wrong • use G_C to estimate the cut value up to 1 ± ε • estimate: 1/p_C times the sum of • # of sparse-cut edges crossing S (stored exactly) • for each connected component P: let the contribution be the degree sum over S ∩ P minus twice the estimated # of edges inside S ∩ P

  27. Estimation illustration • (Figure: the estimate is 1/p_C times the sum of the # of sparse-cut edges crossing S plus, for each dense component P, the # of edges incident to S ∩ P minus twice the estimated # of edges inside the dense component.)
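
A hedged Python sketch of this estimator (all names and the data layout are assumptions for illustration): it takes, for the guessed value C, the exactly-stored sparse-cut edges, each node's dense-component id and within-component degree, a small uniform sample of each node's within-component neighbors, and the down-sampling rate p_C.

```python
def estimate_cut(S, sparse_cut_edges, comp, deg, samples, p):
    """Estimate cut(S) from the sketch of the down-sampled graph G_C."""
    S = set(S)
    # 1) sparse-cut edges are stored exactly: count those crossing S
    crossing = sum(1 for u, v in sparse_cut_edges if (u in S) != (v in S))
    # 2) dense components: edges leaving S∩P inside P equal the degree
    #    sum over S∩P minus twice the #edges inside S∩P; the inside
    #    count is estimated from the per-node neighbor samples
    deg_sum, inside_est = 0, 0.0
    for v in S:
        deg_sum += deg[v]
        if samples[v]:
            frac = sum(1 for u in samples[v]
                       if u in S and comp[u] == comp[v]) / len(samples[v])
            inside_est += deg[v] * frac / 2  # each inside edge has 2 endpoints
    dense_part = deg_sum - 2 * inside_est
    # 3) everything in G_C was down-sampled at rate p = p_C, so rescale
    return (crossing + dense_part) / p
```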

  28. iii) Correctness of estimation • Each of the 3 steps preserves the cut value up to 1 ± ε • 1) down-sampling: • the variance is small relative to C², which gives a 1 ± ε approximation • (like in "uniform sampling") • 2) edges crossing sparse cuts are preserved exactly! • 3) edges inside dense components… ? • intuition: there are fewer edges inside S ∩ P than edges leaving S ∩ P, hence smaller variance when estimating the former!

  29. Dense component • Estimate for component P: degree sum over S ∩ P minus twice the estimated # of edges inside S ∩ P • Claim: the # of edges inside S ∩ P is comparable to the # of edges leaving it • otherwise S ∩ P would be a sparse cut, and would have been cut in the decomposition • but the # of edges leaving S ∩ P is at most cut(S) ≈ C, since we "guessed" C • hence the # of edges inside S ∩ P is O(C) • this can be large: a const fraction of C • but only if the average degree in S ∩ P is ≥ 1/ε • then sampling ~1/ε edges/node suffices!

  30. Variance • Estimate for component P: • Let …

  31. Concluding remarks • Done with the proof! • Can get "high probability" by repeating a logarithmic number of times and taking the median • Construction time? • requires computation of sparse cuts… an NP-hard problem • OK to compute them approximately! • an α-approximation to sparsest cut => correspondingly larger sketch size • E.g., using [Madry'10]: near-linear runtime, at the cost of extra polylog factors in the size
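
The median trick mentioned above, as a generic snippet (illustrative):

```python
import statistics

def whp_estimate(run_estimator, repetitions):
    """Median trick: run O(log n) independent copies of an estimator
    that succeeds with constant probability; the median of the answers
    is correct with high probability (Chernoff on the # of bad runs)."""
    return statistics.median(run_estimator(i) for i in range(repetitions))
```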

  32. Open questions • Final result: a small data structure that, for a given cut, outputs the cut value with high probability • 1) Can a graph achieve the same guarantee? • i.e., use estimate = "cut value in the sample graph" instead of estimate = "sum of degrees − # edges inside"? • 2) Applications where "w.h.p. per cut" is enough? • E.g., good for sketching/streaming • Can compute the min-cut from the sketch: there are only polynomially many (≤ n⁴) 2-approximate min-cuts => can query all of them • 3) Same guarantee for spectral sparsification?
