
Algorithms for submodular objectives: continuous extensions & dependent randomized rounding








  1. Algorithms for submodular objectives: continuous extensions & dependent randomized rounding
Chandra Chekuri, Univ. of Illinois, Urbana-Champaign

  2. Combinatorial Optimization
• N a finite ground set
• w : N → R weights on N
max/min w(S) s.t. S ⊆ N satisfies constraints

  3. Combinatorial Optimization
• N a finite ground set
• w : N → R weights on N
• 𝒮 ⊆ 2^N feasible solutions to the problem
max/min w(S) s.t. S ∈ 𝒮

  4. Examples: poly-time solvable
• max weight matching
• s-t shortest path in a graph
• s-t minimum cut in a graph
• max weight independent set in a matroid and in the intersection of two matroids
• ...

  5. Examples: NP-Hard
• max cut
• min-cost multiway/multiterminal cut
• min-cost (metric) labeling
• max weight independent set in a graph
• ...

  6. Approximation Algorithms
A is an approximation algorithm for a problem if:
• A runs in polynomial time
• maximization problem: for all instances I of the problem, A(I) ≥ α·OPT(I)
• minimization problem: for all instances I of the problem, A(I) ≤ α·OPT(I)
• α is the worst-case approximation ratio of A

  7. This talk
min/max f(S) s.t. S ∈ 𝒮
f is a non-negative submodular set function on N
Motivation:
• several applications
• mathematical interest
• modeling power and new results

  8. Submodular Set Functions
A function f : 2^N → R+ is submodular if
f(A+j) − f(A) ≥ f(B+j) − f(B) for all A ⊂ B, j ∈ N\B
Equivalently: f(A+j) − f(A) ≥ f(A+i+j) − f(A+i) for all A ⊆ N, i, j ∈ N\A

  9. Submodular Set Functions
A function f : 2^N → R+ is submodular if
f(A+j) − f(A) ≥ f(B+j) − f(B) for all A ⊂ B, j ∈ N\B
f(A+j) − f(A) ≥ f(A+i+j) − f(A+i) for all A ⊆ N, i, j ∈ N\A
Equivalently: f(A) + f(B) ≥ f(A∪B) + f(A∩B) for all A, B ⊆ N
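The equivalent pairwise condition above can be checked directly on small ground sets. A minimal brute-force sketch (the function name `is_submodular` is ours, for illustration):

```python
from itertools import chain, combinations

def is_submodular(f, ground):
    """Check f(A) + f(B) >= f(A|B) + f(A&B) over all pairs of subsets.

    Exponential in |ground|; intended only for small illustrative examples."""
    subsets = [frozenset(c) for c in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-9
               for A in subsets for B in subsets)

# The cardinality function |S| is modular, hence submodular;
# |S|^2 is strictly supermodular and fails the check.
print(is_submodular(len, {1, 2, 3}))                 # True
print(is_submodular(lambda S: len(S) ** 2, {1, 2}))  # False
```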

  10. Cut functions in graphs
• G=(V,E) undirected graph
• f : 2^V → R+ where f(S) = |δ(S)|
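The cut function is a one-liner to evaluate; a small sketch (helper name ours):

```python
def cut_value(edges, S):
    """f(S) = |delta(S)|: the number of edges with exactly one endpoint in S."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

triangle = [(1, 2), (2, 3), (1, 3)]
print(cut_value(triangle, {1}))     # 2
print(cut_value(triangle, {1, 2}))  # 2
print(cut_value(triangle, set()))   # 0
```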

  11. Coverage in Set Systems
• X1, X2, ..., Xn subsets of a set U
• f : 2^{1,2,...,n} → R+ where f(A) = |∪_{i ∈ A} Xi|
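The coverage function is equally direct; a sketch (name ours, with the sets stored 0-indexed):

```python
def coverage(sets, A):
    """f(A) = |union of X_i for i in A|, for a list of sets X_1..X_n (0-indexed)."""
    return len(set().union(*(sets[i] for i in A))) if A else 0

X = [{1, 2}, {2, 3}, {4}]
print(coverage(X, {0, 1}))     # 3  (covers {1, 2, 3})
print(coverage(X, {0, 1, 2}))  # 4
```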

  12. Submodular Set Functions
• Non-negative submodular set functions: f(A) ≥ 0 ∀A ⟹ f(A) + f(B) ≥ f(A∪B) (sub-additive)
• Monotone submodular set functions: f(∅) = 0 and f(A) ≤ f(B) for all A ⊆ B
• Symmetric submodular set functions: f(A) = f(N\A) for all A

  13. Other examples
• Cut functions in hypergraphs (symmetric, non-negative)
• Cut functions in directed graphs (non-negative)
• Rank functions of matroids (monotone)
• Generalizations of coverage in set systems (monotone)
• Entropy/mutual information of a set of random variables
• ...

  14. Max-Cut
max f(S) s.t. S ∈ 𝒮
• f is the cut function of a given graph G=(V,E)
• 𝒮 = 2^V: unconstrained
• NP-Hard!

  15. Unconstrained problem
min/max f(S)
• minimization is poly-time solvable assuming a value oracle for f
• Ellipsoid method [GLS’79]
• Strongly polynomial time combinatorial algorithms [Schrijver, Iwata-Fleischer-Fujishige’00]
• maximization is NP-Hard even for an explicit cut function

  16. Techniques
min/max f(S) s.t. S ∈ 𝒮
f is a non-negative submodular set function on N
• Greedy
• Local Search
• Mathematical Programming Relaxation + Rounding

  17. Math. Programming approach
min/max w(S) s.t. S ∈ 𝒮
min/max w·x s.t. x ∈ P(𝒮)
xi ∈ [0,1] is an indicator variable for i
Exact algorithm: P(𝒮) = convexhull( {1S : S ∈ 𝒮} )

  18. Math. Programming approach
min/max w(S) s.t. S ∈ 𝒮
min/max w·x s.t. x ∈ P(𝒮)
Round x* ∈ P(𝒮) to S* ∈ 𝒮
Exact algorithm: P(𝒮) = convexhull( {1S : S ∈ 𝒮} )
Approx. algorithm: P(𝒮) ⊇ convexhull( {1S : S ∈ 𝒮} )
P(𝒮) solvable: can do linear optimization over it

  19. Math. Programming approach
min/max f(S) s.t. S ∈ 𝒮
min/max g(x) s.t. x ∈ P(𝒮)
Round x* ∈ P(𝒮) to S* ∈ 𝒮
P(𝒮) ⊇ convexhull( {1S : S ∈ 𝒮} ) and solvable

  20. Math. Programming approach
min/max f(S) s.t. S ∈ 𝒮
min/max g(x) s.t. x ∈ P(𝒮)
Round x* ∈ P(𝒮) to S* ∈ 𝒮
• What is the continuous extension g?
• How to optimize with objective g?
• How do we round?

  21. Continuous extensions of f
For f : 2^N → R+ define g : [0,1]^N → R+ s.t.
• for any S ⊆ N we want f(S) = g(1S)
• given x = (x1, x2, ..., xn) ∈ [0,1]^N we want a polynomial time algorithm to evaluate g(x)
• for minimization we want g to be convex, and for maximization we want g to be concave

  22. Canonical extensions: convex and concave closure
x = (x1, x2, ..., xn) ∈ [0,1]^N
min/max Σ_S αS f(S)
s.t. Σ_S αS = 1
Σ_{S : i ∈ S} αS = xi for all i
αS ≥ 0 for all S
f−(x) for minimization and f+(x) for maximization: convex and concave respectively, for any f

  23. Submodular f
• For minimization, f−(x) can be evaluated in poly-time via submodular function minimization
• Equivalent to the Lovász extension
• For maximization, f+(x) is NP-Hard to evaluate even when f is monotone submodular
• Rely on the multilinear extension

  24. Lovász extension of f
f^L(x) = E_{θ ∈ [0,1]}[ f(x_θ) ] where x_θ = { i | xi ≥ θ }
Example: x = (0.3, 0, 0.7, 0.1)
x_θ = {1,3} for θ = 0.2 and x_θ = {3} for θ = 0.6
f^L(x) = (1−0.7)·f(∅) + (0.7−0.3)·f({3}) + (0.3−0.1)·f({1,3}) + (0.1−0)·f({1,3,4}) + (0−0)·f({1,2,3,4})
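Because x_θ changes only at the values x_i, the expectation is a finite weighted sum, exactly as in the example above. A small evaluation sketch (names ours; indices are 0-based in code):

```python
def lovasz_extension(f, x):
    """Evaluate the Lovasz extension E_theta[ f({i : x_i >= theta}) ] exactly.

    Between consecutive thresholds the level set x_theta is constant, so the
    integral over theta in [0,1] reduces to a weighted sum of f values."""
    thresholds = sorted(set(x) | {0.0, 1.0})
    total = 0.0
    for lo, hi in zip(thresholds, thresholds[1:]):
        mid = (lo + hi) / 2.0  # any theta inside the open interval (lo, hi)
        level = frozenset(i for i, xi in enumerate(x) if xi >= mid)
        total += (hi - lo) * f(level)
    return total

# For the modular function f(S) = |S| the extension equals sum(x):
print(lovasz_extension(len, [0.3, 0.0, 0.7, 0.1]))  # ~1.1, up to float error
```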

  25. Properties of f^L
• f^L is convex iff f is submodular
• f^L(x) = f−(x) for all x when f is submodular
• Easy to evaluate f^L
• For submodular f: solve the relaxation via convex optimization
min f^L(x) s.t. x ∈ P(𝒮)

  26. Multilinear extension of f
[Calinescu-C-Pal-Vondrak’07] inspired by [Ageev-Sviridenko]
For f : 2^N → R+ define F : [0,1]^N → R+ as follows:
x = (x1, x2, ..., xn) ∈ [0,1]^N
R: random set, include i independently with prob. xi
F(x) = E[ f(R) ] = Σ_{S ⊆ N} f(S) Π_{i ∈ S} xi Π_{i ∈ N\S} (1−xi)
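Unlike the Lovász extension, F(x) has exponentially many terms, but the expectation can be estimated by sampling R directly. A sketch (names and the sample count are ours):

```python
import random

def multilinear_estimate(f, x, samples=4000, seed=0):
    """Estimate F(x) = E[f(R)], where R contains each i independently w.p. x_i."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = frozenset(i for i, xi in enumerate(x) if rng.random() < xi)
        total += f(R)
    return total / samples

# For modular f(S) = |S|, F(x) = sum(x) exactly; the estimate is close:
est = multilinear_estimate(len, [0.5, 0.5])
print(abs(est - 1.0) < 0.1)  # True (with high probability over the samples)
```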

  27. Properties of F
• F(x) can be evaluated by random sampling
• F is a smooth submodular function
• ∂²F/∂xi∂xj ≤ 0 for all i, j. Recall f(A+j) − f(A) ≥ f(A+i+j) − f(A+i) for all A, i, j
• F is concave along any non-negative direction vector
• ∂F/∂xi ≥ 0 for all i if f is monotone

  28. Maximizing F
max { F(x) | xi ∈ [0,1] for all i } is NP-Hard
equivalent to unconstrained maximization of f
When f is monotone,
max { F(x) | Σi xi ≤ k, xi ∈ [0,1] for all i } is NP-Hard

  29. Approximately maximizing F
[Vondrak’08]
Theorem: For any monotone f, there is a (1−1/e) approximation for the problem max { F(x) | x ∈ P } where P ⊆ [0,1]^N is any solvable polytope.
Algorithm: Continuous-Greedy
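For intuition, here is a much-simplified sketch of continuous greedy over the cardinality polytope {x : Σ xi ≤ k}: repeatedly move x a small step toward the polytope vertex supported on the k coordinates with the largest estimated marginal value of F. All names and parameter choices are ours; the actual algorithm uses finer steps and more careful estimates.

```python
import random

def continuous_greedy(f, n, k, steps=20, samples=200, seed=0):
    """Toy continuous greedy for max F(x) s.t. sum(x) <= k, x in [0,1]^n."""
    rng = random.Random(seed)
    x = [0.0] * n

    def marginal(i):
        # Estimate E[f(R + i) - f(R)] with R sampled from the current x.
        tot = 0.0
        for _ in range(samples):
            R = frozenset(j for j in range(n) if rng.random() < x[j])
            tot += f(R | {i}) - f(R)
        return tot / samples

    for _ in range(steps):
        best = sorted(range(n), key=marginal, reverse=True)[:k]
        for i in best:  # move toward the vertex 1_best of the polytope
            x[i] = min(1.0, x[i] + 1.0 / steps)
    return x

# Coverage instance: the first set dominates, so x concentrates on it.
X = [{1, 2, 3}, {1}, {4}]
cov = lambda A: len(set().union(*(X[i] for i in A))) if A else 0
x = continuous_greedy(cov, n=3, k=1)
print(x[0] > x[1] and x[0] > x[2])  # True
```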

  30. Approximately maximizing F
[C-Vondrak-Zenklusen’11]
Theorem: For any non-negative f, there is a ¼ approximation for the problem max { F(x) | x ∈ P } where P ⊆ [0,1]^N is any down-closed solvable polytope.
Remark: a 0.325-approximation can be obtained
Remark: current best is 1/e ≈ 0.3678 [Feldman-Naor-Schwartz’11]
Algorithms: variants of local search and continuous greedy

  31. Math. Programming approach
min/max f(S) s.t. S ∈ 𝒮
min/max g(x) s.t. x ∈ P(𝒮)
Round x* ∈ P(𝒮) to S* ∈ 𝒮
• What is the continuous extension g? ✔ Lovász extension for min and multilinear extension for max
• How to optimize with objective g? ✔ Convex optimization for min and O(1)-approx. algorithms for max
• How do we round?

  32. Rounding
Rounding and approximation depend on 𝒮 and P(𝒮)
Two competing issues:
• Obtain a feasible solution S* from fractional x*
• Want f(S*) to be close to g(x*)

  33. Rounding approach
Viewpoint: objective function is complex
• round x* to S* to approximately preserve the objective
• fix/alter S* to satisfy the constraints
• analyze the loss in fixing/altering

  34. Rounding to preserve objective
x*: fractional solution to the relaxation
Minimization: f^L(x) = E_{θ ∈ [0,1]}[ f(x_θ) ]
Pick θ uniformly at random in [0,1] (or [a, b])
S* = { i | x*i ≥ θ }
Maximization: F(x) = E[ f(R) ]
S* = pick each i ∈ N independently with probability α·x*i (α ≤ 1)
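Both rounding primitives above are one-liners; a minimal sketch (names ours):

```python
import random

def threshold_round(x, theta):
    """Minimization side: S* = { i : x_i >= theta }."""
    return {i for i, xi in enumerate(x) if xi >= theta}

def independent_round(x, alpha=1.0, seed=0):
    """Maximization side: keep each i independently with prob. alpha * x_i."""
    rng = random.Random(seed)
    return {i for i, xi in enumerate(x) if rng.random() < alpha * xi}

print(threshold_round([0.3, 0.0, 0.7, 0.1], 0.2))  # {0, 2}
print(independent_round([1.0, 1.0]))               # {0, 1}: random() < 1 always
```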

  35. Maximization
max f(S) s.t. S ∈ I
I ⊆ 2^N is a downward closed family:
A ∈ I & B ⊂ A ⟹ B ∈ I
Captures “packing” problems

  36. Maximization
High-level results:
• optimal rounding in matroid polytopes [Calinescu-C-Vondrak-Pal’07, C-Vondrak-Zenklusen’09]
• contention resolution scheme based rounding framework [C-Vondrak-Zenklusen’11]

  37. Max k-Coverage
max f(S) s.t. S ∈ I
• X1, X2, ..., Xn subsets of U and integer k
• N = {1, 2, ..., n}
• f is the set coverage function (monotone)
• I = { A ⊆ N : |A| ≤ k } (cardinality constraint)
• NP-Hard

  38. Greedy
[Nemhauser-Wolsey-Fisher’78, FNW’78]
• Greedy gives a (1−1/e)-approximation for the problem max { f(S) | |S| ≤ k } when f is monotone
• Obtaining a (1−1/e+ε)-approximation requires exponentially many value queries to f
• Greedy gives a ½-approximation for maximizing monotone f over a matroid constraint
• Unless P=NP, no (1−1/e+ε)-approximation for the special case of Max k-Coverage [Feige’98]
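The greedy rule for Max k-Coverage is short enough to spell out: at each step take the set with the largest marginal coverage. A sketch (names ours, sets 0-indexed):

```python
def greedy_max_coverage(sets, k):
    """Greedy (1-1/e)-approximation for max k-coverage:
    repeatedly pick the set covering the most still-uncovered elements."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((j for j in range(len(sets)) if j not in chosen),
                   key=lambda j: len(sets[j] - covered),
                   default=None)
        if best is None or not (sets[best] - covered):
            break  # nothing left to gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

X = [{1, 2, 3}, {3, 4}, {5}]
chosen, covered = greedy_max_coverage(X, k=2)
print(chosen[0], len(covered))  # 0 4  (greedy takes {1,2,3} first, 4 covered total)
```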

  39. Matroid Rounding
[Calinescu-C-Pal-Vondrak’07] + [Vondrak’08] = [CCPV’09]
Theorem: There is a randomized (1−1/e) ≈ 0.632 approximation for maximizing a monotone f subject to any matroid constraint.
[C-Vondrak-Zenklusen’09]
Theorem: (1−1/e−ε)-approximation for monotone f subject to a matroid and a constant number of packing/knapsack constraints.

  40. Rounding in Matroids
[Calinescu-C-Pal-Vondrak’07]
Theorem: Given any point x in P(M), there is a randomized polynomial time algorithm to round x to a vertex X (hence an independent set of M) such that
• E[X] = x
• f(X) = F(X) ≥ F(x)
[C-Vondrak-Zenklusen’09] Different rounding with additional properties and applications.

  41. Contention Resolution Schemes
• I an independence family on N
• P(I) a relaxation for I and x ∈ P(I)
• R: random set from independent rounding of x
CR scheme for P(I): given x, R, outputs R′ ⊆ R s.t.
• R′ ∈ I
• and for all i, Pr[i ∈ R′ | i ∈ R] ≥ c

  42. Rounding and CR schemes
max F(x) s.t. x ∈ P(I)
Round x* ∈ P(I) to S* ∈ I
Theorem: A monotone CR scheme for P(I) can be used to round s.t. E[f(S*)] ≥ c·F(x*)
Via the FKG inequality

  43. Summary for maximization
• Optimal results in some cases
• Several new technical ideas and results
• Questions led to results even for the modular case
• Similar results for modular and submodular (within constant factors) for most known problems

  44. Minimization
• Landscape is more complex
• Many problems that are “easy” in the modular case are hard in the submodular case: shortest paths, spanning trees, sparse cuts, ...
• Some successes via the Lovász extension
• Future: need to understand special families of submodular functions and applications

  45. Submodular-cost Vertex Cover
• Input: G=(V,E) and f : 2^V → R+
• Goal: min f(S) s.t. S is a vertex cover in G
• 2-approx for the modular case is well-known
• 2-approx for submodular costs [Koufogiannakis-Young’09, Iwata-Nagano’09, Goel et al.’09]

  46. Submodular-cost Vertex Cover
• Input: G=(V,E) and f : 2^V → R+
• Goal: min f(S) s.t. S is a vertex cover in G
min f^L(x)
xi + xj ≥ 1 for all ij ∈ E
xi ≥ 0 for all i ∈ V

  47. Rounding
min f^L(x)
xi + xj ≥ 1 for all ij ∈ E
xi ≥ 0 for all i ∈ V
Pick θ ∈ [0, 1/2] uniformly at random
Output S = { i | x*i ≥ θ }

  48. Rounding Analysis
min f^L(x)
xi + xj ≥ 1 for all ij ∈ E
xi ≥ 0 for all i ∈ V
Pick θ ∈ [0, 1/2] uniformly at random
Output S = { i | x*i ≥ θ }
Claim 1: S is a vertex cover with probability 1
Claim 2: E[ f(S) ] ≤ 2 f^L(x*)
Proof: 2 f^L(x*) = 2 ∫₀¹ f(x*_θ) dθ ≥ 2 ∫₀^{1/2} f(x*_θ) dθ = E[ f(S) ]
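Claim 1 is visible directly in code: every edge ij has x*i + x*j ≥ 1, so one endpoint satisfies x*i ≥ 1/2 ≥ θ and lands in S. A sketch of the rounding step (names ours):

```python
import random

def round_vertex_cover(x, seed=0):
    """Pick theta uniformly in [0, 1/2]; output S = { i : x_i >= theta }.

    Feasibility: each edge ij has x_i + x_j >= 1, so max(x_i, x_j) >= 1/2
    >= theta, hence S touches every edge with probability 1."""
    theta = random.Random(seed).uniform(0.0, 0.5)
    return {i for i, xi in enumerate(x) if xi >= theta}

# Path 0-1-2 with a fractional solution putting weight on the endpoints:
edges = [(0, 1), (1, 2)]
S = round_vertex_cover([1.0, 0.0, 1.0])
print(all(u in S or v in S for u, v in edges))  # True
```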

  49. Submodular-cost Set Cover
• Input: subsets X1, ..., Xn of U, f : 2^N → R+
• Goal: min f(S) s.t. ∪_{i ∈ S} Xi = U
• Rounding according to the objective gives only a k-approx, where k is the maximum element frequency. Also an integrality gap of Ω(k)
• [Iwata-Nagano’09] Ω(k/log k)-hardness
