A matrix Chernoff bound for strongly Rayleigh distributions and spectral sparsifiers from a few random spanning trees

Presentation Transcript


  1. A matrix Chernoff bound for strongly Rayleigh distributions and spectral sparsifiers from a few random spanning trees

  2. Concentration of scalar random variables. Independent random $X_1, \dots, X_m \ge 0$; let $X = \sum_i X_i$. Is $X \approx \mathbb{E}[X]$ with high probability?

  3. Concentration of scalar random variables. Chernoff inequality: independent random $X_i \in [0, R]$ with $X = \sum_i X_i$ and $\mu = \mathbb{E}[X]$ gives, for $\epsilon \in (0,1)$, $\Pr[|X - \mu| > \epsilon\mu] \le 2\exp(-\epsilon^2\mu / 3R)$. E.g. if $R = 1$ and $\mu \ge 3\log(2/\delta)/\epsilon^2$, then $|X - \mu| \le \epsilon\mu$ with probability at least $1 - \delta$.
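
To make the inequality concrete, here is a minimal numpy sketch (not from the talk; the parameters n, p, eps are illustrative) comparing the empirical tail of a Bernoulli sum against the Chernoff bound:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, eps, trials = 2000, 0.25, 0.1, 100_000
    mu = n * p                      # E[X] for X = sum of n Bernoulli(p) bits

    # X ~ Binomial(n, p) is exactly the sum of n independent Bernoulli(p) bits.
    X = rng.binomial(n, p, size=trials)
    empirical = np.mean(np.abs(X - mu) > eps * mu)

    # Chernoff: Pr[|X - mu| > eps*mu] <= 2 exp(-eps^2 mu / 3)   (here R = 1).
    bound = 2 * np.exp(-eps**2 * mu / 3)
    print(f"empirical tail {empirical:.4f} <= Chernoff bound {bound:.4f}")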

  4. Concentration of random matrices. Independent random symmetric $X_1, \dots, X_m \in \mathbb{R}^{n \times n}$; let $X = \sum_i X_i$. Is $X \approx \mathbb{E}[X]$ (spectrally) with high probability?

  5. Concentration of random matrices. Matrix Chernoff [Tropp '11]: independent random positive semi-definite $X_i$ with $\|X_i\| \le R$, $X = \sum_i X_i$, $\mu_{\max} = \lambda_{\max}(\mathbb{E}X)$ and $\mu_{\min} = \lambda_{\min}(\mathbb{E}X)$ gives $\Pr[\lambda_{\max}(X) \ge (1+\epsilon)\mu_{\max}] \le n\exp(-\epsilon^2\mu_{\max}/3R)$ and $\Pr[\lambda_{\min}(X) \le (1-\epsilon)\mu_{\min}] \le n\exp(-\epsilon^2\mu_{\min}/2R)$. E.g. if $\mathbb{E}X = I$ and $R = O(\epsilon^2/\log(n/\delta))$, then $(1-\epsilon)I \preceq X \preceq (1+\epsilon)I$ with probability $1-\delta$ [Rudelson '99, Ahlswede-Winter '02].
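
A numerical illustration (my sketch, not the talk's): summing many independent rank-one PSD matrices with mean $I$ and small norm $R$, the extreme eigenvalues land close to 1, as the matrix Chernoff bound predicts.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 50, 20_000

    # X = sum of m independent PSD matrices (n/m) * v v^T with v uniform on the
    # unit sphere: E[v v^T] = I/n, so E[X] = I, and each summand has norm R = n/m.
    V = rng.normal(size=(m, n))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    X = (n / m) * V.T @ V

    eigs = np.linalg.eigvalsh(X)
    print(f"R = {n/m:.4f}  lambda_min = {eigs[0]:.3f}  lambda_max = {eigs[-1]:.3f}")
    # Matrix Chernoff: deviations eps with eps^2/(3R) >> log n are very unlikely.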

  6. What if variables are not independent? Example: $X_1, X_2 \in \{0,1\}$ fair coin flips, $X = X_1 + X_2$. If $X_2 = X_1$ (positive dependence), $X$ is 0 or 2: not concentrated. If $X_2 = 1 - X_1$ (negative dependence), $X$ is always 1: very concentrated. Negative dependence: $X_i = 1$ makes $X_j = 1$ less likely, and vice versa.

  7. What if variables are not independent? Random variables $X_1, \dots, X_m \in \{0,1\}$. Negative pairwise correlation: for all pairs $i \ne j$, conditioning on $X_i = 1$ lowers the probability of $X_j = 1$. Formally, $\Pr[X_j = 1 \mid X_i = 1] \le \Pr[X_j = 1]$.

  8. What if variables are not independent? Negative correlation: for all subsets $S$, $\mathbb{E}[\prod_{i \in S} X_i] \le \prod_{i \in S} \mathbb{E}[X_i]$. Can we get a Chernoff bound? Yes: if the $X_i$ AND the negated bits $1 - X_i$ are negatively correlated, Chernoff-like concentration applies to $\sum_i X_i$ [Goyal-Rademacher-Vempala '09, Dubhashi-Ranjan '98].
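
For instance, sampling $k$ of $m$ items without replacement is negatively correlated. A small sketch (illustrative parameters) estimating the pairwise covariances of the indicator bits:

    import numpy as np

    rng = np.random.default_rng(2)
    m, k, trials = 10, 4, 100_000

    # X[t, i] = 1 iff item i is among the k items drawn without replacement.
    X = np.zeros((trials, m))
    for t in range(trials):
        X[t, rng.choice(m, size=k, replace=False)] = 1.0

    C = np.cov(X, rowvar=False)
    off = C[~np.eye(m, dtype=bool)]
    # The exact covariance is -k(m-k) / (m^2 (m-1)) < 0 for every pair.
    print("off-diagonal covariances: mean %.4f, max %.4f" % (off.mean(), off.max()))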

  9. Strongly Rayleigh distributions: a class of negatively dependent distributions of binary random variables $\xi \in \{0,1\}^m$ [Borcea-Brändén-Liggett '09]. Many nice properties: negative pairwise correlation, negative association, closed under conditioning and marginalization.

  10. Strongly Rayleigh distributions: a class of negatively dependent distributions of binary random variables $\xi \in \{0,1\}^m$ [Borcea-Brändén-Liggett '09]. Examples: uniformly sampling $k$ items without replacement, random spanning trees, determinantal point processes and volume sampling, symmetric exclusion processes.

  11. Strongly Rayleigh distributions [Borcea-Brändén-Liggett '09]. $k$-homogeneous: always exactly $k$ coordinates equal 1, i.e. $\sum_i \xi_i = k$ always. Strongly Rayleigh: the generating polynomial $g(z) = \mathbb{E}\prod_i z_i^{\xi_i}$ is real stable, i.e. $g$ has no root $z$ with $\mathrm{Im}(z_i) > 0$ for all $i$.
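
The name comes from the Rayleigh condition: real stability implies $\partial_i g(z)\,\partial_j g(z) \ge g(z)\,\partial_i\partial_j g(z)$ for all $z \ge 0$. A small sketch (mine, for illustration) checking this inequality numerically for the spanning-tree generating polynomial of $K_4$:

    import numpy as np
    from itertools import combinations

    n = 4
    edges = list(combinations(range(n), 2))        # the 6 edges of K_4

    def is_tree(tr):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]; x = parent[x]
            return x
        for u, v in tr:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    trees = [idx for idx in combinations(range(len(edges)), n - 1)
             if is_tree([edges[i] for i in idx])]  # the 16 spanning trees of K_4

    def partials(z, i, j):
        """g, dg/dzi, dg/dzj, d2g/dzidzj for g(z) = sum_T prod_{e in T} z_e."""
        g = gi = gj = gij = 0.0
        for idx in trees:
            t = np.prod([z[k] for k in idx])
            g += t
            if i in idx: gi += t / z[i]
            if j in idx: gj += t / z[j]
            if i in idx and j in idx: gij += t / (z[i] * z[j])
        return g, gi, gj, gij

    rng = np.random.default_rng(6)
    for _ in range(1000):
        z = rng.random(len(edges)) + 0.01          # a random nonnegative point
        i, j = rng.choice(len(edges), size=2, replace=False)
        g, gi, gj, gij = partials(z, i, j)
        assert gi * gj >= g * gij - 1e-9           # the Rayleigh inequality
    print("Rayleigh inequality held at all sampled points")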

  12. Concentration of random matrices. Strongly Rayleigh matrix Chernoff [K. & Song '18]: fixed positive semi-definite $A_1, \dots, A_m \in \mathbb{R}^{n \times n}$ with $\|A_e\| \le R$; $\xi \in \{0,1\}^m$ is $k$-homogeneous strongly Rayleigh; random $X = \sum_e \xi_e A_e$. This gives Chernoff-type tail bounds on $\lambda_{\max}(X)$ and $\lambda_{\min}(X)$, losing only logarithmic factors in the exponent compared to the independent case.

  13. Concentration of random matrices. Strongly Rayleigh matrix Chernoff [K. & Song '18], continued. E.g., roughly: if $\mathbb{E}X = I$ and $R = O(\epsilon^2/\log^2 n)$, then $(1-\epsilon)I \preceq X \preceq (1+\epsilon)I$ with high probability. Scalar version: Peres-Pemantle '14.

  14. An application: graph approximation using random spanning trees

  15. Spanning trees of a graph. Graph $G = (V, E)$ with edge weights $w_e$. The spanning trees of $G$ are the subsets of $n - 1$ edges that connect all vertices.

  16. Random spanning trees. Graph $G$ with weights $w_e$; tree distribution: pick a random spanning tree $T$ with $\Pr[T] \propto \prod_{e \in T} w_e$?

  17. Random spanning trees. Does the sum of a few random spanning trees resemble the graph? E.g. is the weight across each cut similar? Starter question: are the edge weights similar in expectation?

  18. Random spanning trees. Graph $G$; tree distribution $\Pr[T] \propto \prod_{e \in T} w_e$; pick a random tree $T$.

  19. Random spanning trees. Pick a random tree $T$; let $p_e = \Pr[e \in T]$, the probability of edge $e$ being present.

  20. Random spanning trees. The marginals $p_e = \Pr[e \in T]$ can differ a lot across edges: an edge whose removal disconnects $G$ has $p_e = 1$, while an edge in a dense part of $G$ can have small $p_e$.
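
For a concrete feel, here is a sketch (my example graph, not the talk's) that computes the marginals $p_e$ of a weighted random spanning tree two ways: by enumerating all spanning trees, and via Kirchhoff's formula $p_e = w_e \cdot R_{\mathrm{eff}}(e)$, which slide 43 returns to:

    import numpy as np
    from itertools import combinations

    n = 4
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    w = np.array([1.0, 2.0, 1.0, 3.0, 2.0])

    def is_tree(tr):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]; x = parent[x]
            return x
        for u, v in tr:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    # Exact marginals by enumeration: Pr[T] is proportional to prod_{e in T} w_e.
    marg, Z = np.zeros(len(edges)), 0.0
    for idx in combinations(range(len(edges)), n - 1):
        if is_tree([edges[i] for i in idx]):
            weight = np.prod(w[list(idx)])
            Z += weight
            for i in idx:
                marg[i] += weight
    marg /= Z

    # Kirchhoff: Pr[e in T] = w_e * R_eff(e), via the Laplacian pseudoinverse.
    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        L[u, u] += we; L[v, v] += we; L[u, v] -= we; L[v, u] -= we
    Lp = np.linalg.pinv(L)
    for (u, v), we, pe in zip(edges, w, marg):
        reff = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
        print(f"edge {(u, v)}: enumeration {pe:.4f}, Kirchhoff {we * reff:.4f}")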

  21. Random spanning trees. Getting the expectation right: set $\tilde w_e = w_e / p_e$ with probability $p_e$ (i.e. when $e \in T$) and $\tilde w_e = 0$ otherwise, so that $\mathbb{E}[\tilde w_e] = p_e \cdot w_e / p_e = w_e$.

  22. Reweighted random spanning trees. Tree weights $\tilde w_e = w_e / p_e$; original weights $w_e$.

  23. Reweighted random spanning trees. Tree weights $\tilde w_e = w_e / p_e$ versus original weights $w_e$: edges that are unlikely to appear get scaled up a lot when they do appear.

  24. Reweighted random spanning trees. Sample a random tree $T$ and keep the reweighted edges: $\tilde w_e = w_e / p_e$ for $e \in T$.

  25. Reweighted random spanning trees. The average weight over trees equals the original weight: $\mathbb{E}[\tilde w_e] = w_e$. Does the tree "behave like" the original graph?
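
A sketch of the pipeline on a small unweighted graph (Aldous-Broder sampling is a standard method, not necessarily the talk's): sample uniform spanning trees, reweight each tree edge by $1/p_e$, and check that the tree Laplacians average to the graph Laplacian.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 9                                      # vertices of a 3x3 grid graph

    def nbrs(u):
        r, c = divmod(u, 3)
        out = []
        if r > 0: out.append(u - 3)
        if r < 2: out.append(u + 3)
        if c > 0: out.append(u - 1)
        if c < 2: out.append(u + 1)
        return out

    edges = sorted({tuple(sorted((u, v))) for u in range(n) for v in nbrs(u)})

    def laplacian(weights):
        L = np.zeros((n, n))
        for (u, v), we in zip(edges, weights):
            L[u, u] += we; L[v, v] += we; L[u, v] -= we; L[v, u] -= we
        return L

    def aldous_broder():
        """Uniform spanning tree: keep the first-entry edge of each vertex."""
        u, seen, tree = 0, {0}, set()
        while len(seen) < n:
            v = int(rng.choice(nbrs(u)))
            if v not in seen:
                seen.add(v)
                tree.add(tuple(sorted((u, v))))
            u = v
        return tree

    L = laplacian(np.ones(len(edges)))
    Lp = np.linalg.pinv(L)
    p = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

    avg, samples = np.zeros((n, n)), 2000
    for _ in range(samples):
        T = aldous_broder()
        avg += laplacian([1.0 / pe if e in T else 0.0 for e, pe in zip(edges, p)])
    avg /= samples
    print("relative error of the E[L_T] estimate:",
          np.linalg.norm(avg - L) / np.linalg.norm(L))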

  26. Preserving cuts? Given a cut $S \subseteq V$, want $\tilde w(\delta S) \approx w(\delta S)$ for all $S$ with high probability? Too much to ask of one tree: a tree has only $n - 1$ edges. How many edges are necessary?

  27. Independent edge samples. Start from the graph $G$ and flip a coin for each edge to decide if it is present.

  28. Independent edge samples. The result is a random graph $H$ with independent edges.

  29. Independent edge samples. Getting the expectation right: $\tilde w_e = w_e / p_e$ with probability $p_e$, and $0$ otherwise.

  30. Preserving cuts? Benczur-Karger '96: sample edges independently with "well-chosen" coin probabilities $p_e$, s.t. $H$ has on average $O(n \log n / \epsilon^2)$ edges; then w.h.p. $\tilde w(\delta S) = (1 \pm \epsilon)\, w(\delta S)$ for all cuts $S$. Proof sketch: count the number of cuts of each size; apply a Chernoff concentration bound per cut.
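
A toy version of the experiment (my sketch; uniform $p_e$ is reasonable here because $K_{10}$ is edge-transitive, whereas Benczur-Karger choose $p_e$ adaptively): sample edges independently, reweight by $1/p$, and brute-force the worst relative cut error.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    n, p = 10, 0.5

    # Complete graph K_10: keep each edge with prob. p at weight 1/p, so every
    # cut is right in expectation; then check all 2^(n-1) - 1 proper cuts.
    edges = list(combinations(range(n), 2))
    wt = (rng.random(len(edges)) < p) / p

    worst = 0.0
    for mask in range(2 ** (n - 1) - 1):       # vertex 0 always on the S side
        inS = [True] + [(mask >> i) & 1 == 1 for i in range(n - 1)]
        cut_G = sum(1.0 for u, v in edges if inS[u] != inS[v])
        cut_H = sum(we for (u, v), we in zip(edges, wt) if inS[u] != inS[v])
        worst = max(worst, abs(cut_H - cut_G) / cut_G)
    print(f"worst relative cut error: {worst:.3f}")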

  31. Reweighted random tree (recap). Tree weights $\tilde w_e = w_e / p_e$ for $e$ in a random tree $T$; original weights $w_e$. The average weight over trees equals the original weight. Does the tree "behave like" the original graph?

  32. Combining trees. Maybe it's better if we average a few trees? $\tilde G = \frac{1}{t}(T_1 + \cdots + T_t)$ for independent reweighted random spanning trees $T_1, \dots, T_t$?

  33. Preserving cuts? Fung-Harvey & Hariharan-Panigrahi '10: let $\tilde G$ be the average of $O(\log^2 n / \epsilon^2)$ reweighted random spanning trees of $G$; then w.h.p. $\tilde w(\delta S) = (1 \pm \epsilon)\, w(\delta S)$ for all cuts $S$. Proof sketch: Benczur-Karger cut counting; scalar Chernoff works for negatively correlated variables.

  34. Preserving cuts? Goyal-Rademacher-Vempala '09: given an unweighted bounded-degree graph $G$, the average of a few unweighted random spanning trees of $G$ already approximates all cuts well w.h.p. Proof sketch: Benczur-Karger cut counting, plus the observation that the first tree takes care of the small cuts; scalar Chernoff works for negatively correlated variables.

  35. Preserving more than cuts: matrices and quadratic forms

  36. Laplacians: it's springs! Weighted, undirected graph $G = (V, E, w)$. The Laplacian $L$ is an $n \times n$ matrix describing $G$: on each edge, put a spring between the vertices; nail down each vertex $u$ at position $x(u)$ along the real line.

  37. Laplacians: it's springs! A spring of constant $w_{uv}$ stretched to length $|x(u) - x(v)|$ stores energy $w_{uv}(x(u) - x(v))^2$ (spring constant times length squared); the total energy is the quadratic form $x^\top L x = \sum_{(u,v) \in E} w_{uv}(x(u) - x(v))^2$.

  38. Laplacians. A "baby Laplacian" per edge: $L_{uv} = w_{uv}\, b_{uv} b_{uv}^\top$, where $b_{uv} = e_u - e_v$; restricted to $\{u, v\}$ it is $w_{uv}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$.

  39. Laplacian of a graph: $L = \sum_{(u,v) \in E} w_{uv}\, b_{uv} b_{uv}^\top$, the sum of the baby Laplacians.
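
In code (a minimal sketch with a made-up 4-vertex example):

    import numpy as np

    # L = sum over edges of w_e * b_e b_e^T with b_e = e_u - e_v (baby Laplacians).
    n = 4
    edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
    w = [1.0, 2.0, 1.0, 3.0]

    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        L += we * np.outer(b, b)               # one rank-one baby Laplacian

    print(L)   # diagonal = weighted degrees, off-diagonal = -w_uv, rows sum to 0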

  40. Preserving matrices? Suppose $\tilde G$ is a random weighted graph s.t. for every edge $e$, $\mathbb{E}[\tilde w_e] = w_e$. Then $\mathbb{E}[\tilde L] = L$. Does $\tilde L$ "behave like" $L$?

  41. Preserving quadratic forms? $x^\top \tilde L x = (1 \pm \epsilon)\, x^\top L x$ for all $x \in \mathbb{R}^V$ with high probability? Useful? Yes: since $\mathbf{1}_S^\top L\, \mathbf{1}_S = w(\delta S)$, cuts are preserved by letting $x = \mathbf{1}_S$, and the quadratic form is crucial for solving linear equations in $L$.
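
A quick check of the cut identity (same toy-graph style as above; $S$ is an arbitrary example cut):

    import numpy as np

    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
    w = [1.0, 2.0, 1.0, 1.0, 3.0, 2.0]

    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        L += we * np.outer(b, b)

    S = {0, 1}
    x = np.array([1.0 if u in S else 0.0 for u in range(n)])  # indicator of S
    cut = sum(we for (u, v), we in zip(edges, w) if (u in S) != (v in S))
    print(x @ L @ x, cut)              # both equal w(delta S) = 7.0 here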

  42. Preserving quadratic forms? Spielman-Srivastava '08 (a la Tropp): sample edges independently with "well-chosen" coin probabilities $p_e$, s.t. $\tilde G$ has on average $O(n \log n / \epsilon^2)$ edges; then w.h.p. $x^\top \tilde L x = (1 \pm \epsilon)\, x^\top L x$ for all $x$. Proof sketch: bound the spectral norm of the sampled edge "baby Laplacians"; apply matrix Chernoff concentration.

  43. What sampling probabilities? Spielman-Srivastava '08 choose $p_e \propto w_e \cdot R_{\mathrm{eff}}(e)$, the leverage score of the edge. What is the marginal probability of an edge being present in a random spanning tree? Also proportional to $w_e \cdot R_{\mathrm{eff}}(e)$ (!); in fact $\Pr[e \in T] = w_e \cdot R_{\mathrm{eff}}(e)$ exactly, by Kirchhoff's formula. Random spanning trees similar to sparsification?
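
A sketch computing leverage scores (illustrative graph): the scores $w_e R_{\mathrm{eff}}(e)$ are exactly the spanning-tree marginals, and they sum to $n - 1$, the number of edges of any spanning tree.

    import numpy as np

    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
    w = np.array([1.0, 2.0, 1.0, 1.0, 3.0, 2.0])

    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        L[u, u] += we; L[v, v] += we; L[u, v] -= we; L[v, u] -= we

    Lp = np.linalg.pinv(L)
    reff = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    lev = w * reff                # leverage score = Pr[e in random spanning tree]
    print(lev, lev.sum())         # the marginals sum to n - 1 = 4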

  44. Preserving quadratic forms? K.-Song '18: let $\tilde G$ be the average of $O(\log^2 n / \epsilon^2)$ reweighted random spanning trees of $G$; then w.h.p. $x^\top \tilde L x = (1 \pm \epsilon)\, x^\top L x$ for all $x$. Proof sketch: bound the norms of the sampled matrices (immediate via SS'08); apply the strongly Rayleigh matrix Chernoff concentration.
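
A sketch of this statement in action on a small grid (Aldous-Broder sampling again; the tree counts are illustrative): average $t$ reweighted uniform spanning trees and measure the worst relative error of the quadratic form over all $x$ orthogonal to the all-ones vector.

    import numpy as np

    rng = np.random.default_rng(5)
    side = 4; n = side * side                  # 4x4 grid graph

    def nbrs(u):
        r, c = divmod(u, side)
        out = []
        if r > 0: out.append(u - side)
        if r < side - 1: out.append(u + side)
        if c > 0: out.append(u - 1)
        if c < side - 1: out.append(u + 1)
        return out

    edges = sorted({tuple(sorted((u, v))) for u in range(n) for v in nbrs(u)})

    def laplacian(weights):
        L = np.zeros((n, n))
        for (u, v), we in zip(edges, weights):
            L[u, u] += we; L[v, v] += we; L[u, v] -= we; L[v, u] -= we
        return L

    def aldous_broder():
        u, seen, tree = 0, {0}, set()
        while len(seen) < n:
            v = int(rng.choice(nbrs(u)))
            if v not in seen:
                seen.add(v); tree.add(tuple(sorted((u, v))))
            u = v
        return tree

    L = laplacian(np.ones(len(edges)))
    Lp = np.linalg.pinv(L)
    p = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

    vals, vecs = np.linalg.eigh(L)             # build L^{+1/2} for relative error
    isqrt = np.array([1.0 / np.sqrt(v) if v > 1e-9 else 0.0 for v in vals])
    Sq = vecs @ np.diag(isqrt) @ vecs.T

    for t in (1, 10, 100):
        Lt = np.zeros((n, n))
        for _ in range(t):
            T = aldous_broder()
            Lt += laplacian([1 / pe if e in T else 0 for e, pe in zip(edges, p)])
        Lt /= t
        ev = np.linalg.eigvalsh(Sq @ Lt @ Sq)  # ev[0] ~ 0 is the all-ones kernel
        print(f"t = {t:3d}: quadratic forms within [{ev[1]:.2f}, {ev[-1]:.2f}] of L's")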

  45. Random spanning trees. Lower bound (K.-Song '18): $\Omega(\log n / \epsilon^2)$ trees are needed for an $\epsilon$-spectral sparsifier. Open question: what is the right number of logs, $\log n$ or $\log^2 n$ trees?

  46. Random spanning trees. More results (K.-Song '18): the quadratic form $x^\top \tilde L x$ of a single reweighted random tree concentrates for each fixed $x$ w.h.p., and in sufficiently spectrally connected graphs a random tree is spectrally thin. Lower bounds: in some graphs, with constant probability there exists an $x$ whose tree quadratic form deviates from $x^\top L x$ by a large factor; in a ring graph, a single tree likewise fails to be a good spectral sparsifier.

  47. Proving the strongly Rayleigh matrix Chernoff bound

  48. An illustrative case: show that $(1 - \epsilon)\,\mathbb{E}X \preceq X \preceq (1 + \epsilon)\,\mathbb{E}X$ w.h.p.

  49. Loewner order. $A \preceq B$ iff $x^\top A x \le x^\top B x$ for all $x$, i.e. $B - A$ is positive semi-definite; e.g. $(1 - \epsilon)L \preceq \tilde L \preceq (1 + \epsilon)L$ expresses the sparsifier guarantee for all $x$ at once.

  50. Proof strategy? Convert the problem to Doob martingales; matrix martingale concentration; control the effect of conditioning via coupling; norm bound from the coupling; variance bound via coupling symmetry plus shrinking marginals.
