This talk focuses on continuous extensions and dependent randomized rounding methods for optimizing submodular objectives. Key aspects include the definition of submodular set functions, the relevant mathematical principles, and applications across various combinatorial optimization problems. The session also covers examples of polynomial-time solvable problems, such as maximum weight matching, as well as NP-hard cases like max-cut. We will explore approximation algorithms that guarantee performance ratios against optimal solutions using polynomial-time strategies.
Algorithms for submodular objectives: continuous extensions & dependent randomized rounding. Chandra Chekuri, Univ. of Illinois, Urbana-Champaign
Combinatorial Optimization • N a finite ground set • w : N → R weights on N. Problem: max/min w(S) s.t. S ⊆ N satisfies constraints
Combinatorial Optimization • N a finite ground set • w : N → R weights on N • 𝒮 ⊆ 2^N the feasible solutions to the problem. Problem: max/min w(S) s.t. S ∈ 𝒮
Examples: poly-time solvable • max weight matching • s-t shortest path in a graph • s-t minimum cut in a graph • max weight independent set in a matroid and intersection of two matroids • ...
Examples: NP-Hard • max cut • min-cost multiway/multiterminal cut • min-cost (metric) labeling • max weight independent set in a graph • ...
Approximation Algorithms A is an approx. alg. for a problem: • A runs in polynomial time • maximization problem: for all instances I of the problem, A(I) ≥ α·OPT(I) • minimization problem: for all instances I of the problem, A(I) ≤ α·OPT(I) • α is the worst-case approximation ratio of A
This talk min/max f(S) s.t. S ∈ 𝒮, where f is a non-negative submodular set function on N. Motivation: • several applications • mathematical interest • modeling power and new results
Submodular Set Functions A function f : 2^N → R+ is submodular if f(A+j) − f(A) ≥ f(B+j) − f(B) for all A ⊂ B, j ∈ N\B. Equivalently, f(A+j) − f(A) ≥ f(A+i+j) − f(A+i) for all A ⊆ N and i, j ∈ N\A. Equivalently, f(A) + f(B) ≥ f(A∪B) + f(A∩B) for all A, B ⊆ N.
Cut functions in graphs • G=(V,E) undirected graph • f : 2^V → R+ where f(S) = |δ(S)|, the number of edges crossing the cut (S, V\S)
Coverage in Set Systems • X1, X2, ..., Xn subsets of a set U • f : 2^{1,2,...,n} → R+ where f(A) = |∪_{i ∈ A} Xi|
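To make the definition concrete, here is a minimal Python sketch (not from the talk; the helper names and the tiny ground set are illustrative assumptions) that brute-forces the equivalent condition f(A) + f(B) ≥ f(A∪B) + f(A∩B) on a small instance of the coverage function.

```python
# Illustrative sketch (not from the talk): brute-force submodularity check
# via f(A) + f(B) >= f(A | B) + f(A & B) on a small ground set.
from itertools import combinations

def all_subsets(ground):
    ground = list(ground)
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_submodular(f, ground):
    """Check f(A) + f(B) >= f(A union B) + f(A intersect B) for all A, B."""
    subsets = all_subsets(ground)
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-9
               for A in subsets for B in subsets)

# Coverage function as on the slide: f(A) = |union of X_i for i in A|.
X = {1: {1, 2}, 2: {2, 3}, 3: {3, 4, 5}}
coverage = lambda A: len(set().union(*(X[i] for i in A))) if A else 0
print(is_submodular(coverage, X.keys()))  # True
```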
Submodular Set Functions • Non-negative submodular set functions: f(A) ≥ 0 ∀A ⟹ f(A) + f(B) ≥ f(A∪B) (sub-additive) • Monotone submodular set functions: f(∅) = 0 and f(A) ≤ f(B) for all A ⊆ B • Symmetric submodular set functions: f(A) = f(N\A) for all A
Other examples • Cut functions in hypergraphs (symmetric non-negative) • Cut functions in directed graphs (non-negative) • Rank functions of matroids (monotone) • Generalizations of coverage in set systems (monotone) • Entropy/mutual information of a set of random variables • ...
Max-Cut max f(S) s.t. S ∈ 𝒮 • f is the cut function of a given graph G=(V,E) • 𝒮 = 2^V: unconstrained • NP-Hard!
Unconstrained problem min/max f(S) • minimization is poly-time solvable assuming a value oracle for f • Ellipsoid method [GLS’79] • Strongly polynomial-time combinatorial algorithms [Schrijver, Iwata-Fleischer-Fujishige’00] • maximization is NP-Hard even for an explicit cut function
Techniques min/max f(S) s.t. S ∈ 𝒮, where f is a non-negative submodular set function on N • Greedy • Local Search • Mathematical Programming Relaxation + Rounding
Math. Programming approach min/max w(S) s.t. S ∈ 𝒮 → min/max w·x s.t. x ∈ P(𝒮), where x_i ∈ [0,1] is the indicator variable for i. Exact algorithm: P(𝒮) = convexhull( {1_S : S ∈ 𝒮} )
Math. Programming approach min/max w(S) s.t. S ∈ 𝒮 → min/max w·x s.t. x ∈ P(𝒮) → round x* ∈ P(𝒮) to S* ∈ 𝒮. Exact algorithm: P(𝒮) = convexhull( {1_S : S ∈ 𝒮} ). Approx. algorithm: P(𝒮) ⊇ convexhull( {1_S : S ∈ 𝒮} ). P(𝒮) solvable: can do linear optimization over it
Math. Programming approach min/max f(S) s.t. S ∈ 𝒮 → min/max g(x) s.t. x ∈ P(𝒮) → round x* ∈ P(𝒮) to S* ∈ 𝒮, where P(𝒮) ⊇ convexhull( {1_S : S ∈ 𝒮} ) and is solvable
Math. Programming approach min/max f(S) s.t. S ∈ 𝒮 → min/max g(x) s.t. x ∈ P(𝒮) → round x* ∈ P(𝒮) to S* ∈ 𝒮. • What is the continuous extension g? • How to optimize with objective g? • How do we round?
Continuous extensions of f For f : 2^N → R+ define g : [0,1]^N → R+ s.t. • for any S ⊆ N we want f(S) = g(1_S) • given x = (x1, x2, ..., xn) ∈ [0,1]^N we want a polynomial-time algorithm to evaluate g(x) • for minimization we want g to be convex and for maximization we want g to be concave
Canonical extensions: convex and concave closure For x = (x1, x2, ..., xn) ∈ [0,1]^N: min/max Σ_S α_S f(S) s.t. Σ_S α_S = 1, Σ_{S : i ∈ S} α_S = x_i for all i, α_S ≥ 0 for all S. The minimum value f−(x) and the maximum value f+(x) are convex and concave respectively, for any f
Submodular f • For minimization, f−(x) can be evaluated in poly-time via submodular function minimization • equivalent to the Lovász extension • For maximization, f+(x) is NP-Hard to evaluate even when f is monotone submodular • instead rely on the multilinear extension
Lovász extension of f f_L(x) = E_{θ ∈ [0,1]}[ f(x_θ) ] where x_θ = { i | x_i ≥ θ }. Example: x = (0.3, 0, 0.7, 0.1); x_θ = {1,3} for θ = 0.2 and x_θ = {3} for θ = 0.6. f_L(x) = (1−0.7) f(∅) + (0.7−0.3) f({3}) + (0.3−0.1) f({1,3}) + (0.1−0) f({1,3,4}) + (0−0) f({1,2,3,4})
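A minimal Python sketch (assumed code, not from the talk) of how f_L can be evaluated exactly: sort the coordinates and integrate f over the at most n+1 distinct level sets x_θ, exactly as in the worked example above.

```python
# Illustrative sketch (not from the talk): exact evaluation of the Lovasz
# extension f_L(x) = E_{theta in [0,1]}[ f(x_theta) ], x_theta = { i : x_i >= theta }.

def lovasz_extension(f, x):
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])       # coordinates, largest first
    thresholds = [1.0] + [x[i] for i in order] + [0.0]  # breakpoints of theta
    value, level_set = 0.0, set()
    for k in range(n + 1):
        # For theta in (thresholds[k+1], thresholds[k]], x_theta equals level_set.
        value += (thresholds[k] - thresholds[k + 1]) * f(frozenset(level_set))
        if k < n:
            level_set.add(order[k])
    return value

# Sanity check: for a modular f (here cardinality), f_L(x) = sum(x).
print(lovasz_extension(len, [0.3, 0.0, 0.7, 0.1]))  # ~1.1, up to float error
```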
Properties of f_L • f_L is convex iff f is submodular • f_L(x) = f−(x) for all x when f is submodular • f_L is easy to evaluate • For submodular f: solve the relaxation min f_L(x) s.t. x ∈ P(𝒮) via convex optimization
Multilinear extension of f [Calinescu-C-Pal-Vondrak’07], inspired by [Ageev-Svir.]. For f : 2^N → R+ define F : [0,1]^N → R+ as follows: given x = (x1, x2, ..., xn) ∈ [0,1]^N, let R be a random set that includes each i independently with probability x_i; then F(x) = E[ f(R) ] = Σ_{S ⊆ N} f(S) Π_{i ∈ S} x_i Π_{i ∈ N\S} (1 − x_i)
Properties of F • F(x) can be evaluated by random sampling • F is a smooth submodular function: ∂²F/∂x_i∂x_j ≤ 0 for all i, j (recall f(A+j) − f(A) ≥ f(A+i+j) − f(A+i) for all A, i, j) • F is concave along any non-negative direction vector • ∂F/∂x_i ≥ 0 for all i if f is monotone
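The first property above is the one used algorithmically; here is a minimal Python sketch (assumed code, not from the talk) of estimating F(x) by sampling R, including each i independently with probability x_i.

```python
# Illustrative sketch (not from the talk): Monte Carlo estimate of the
# multilinear extension F(x) = E[ f(R) ], R including each i w.p. x_i.
import random

def multilinear_estimate(f, x, samples=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = frozenset(i for i, xi in enumerate(x) if rng.random() < xi)
        total += f(R)
    return total / samples  # accuracy improves as samples grows

# Example with a coverage function on sets indexed 0..2.
X = [{1, 2}, {2, 3}, {3, 4, 5}]
coverage = lambda A: len(set().union(*(X[i] for i in A))) if A else 0
print(multilinear_estimate(coverage, [0.5, 0.5, 0.5]))
```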
Maximizing F max { F(x) | x_i ∈ [0,1] for all i } is NP-Hard: it is equivalent to unconstrained maximization of f. When f is monotone, max { F(x) | Σ_i x_i ≤ k, x_i ∈ [0,1] for all i } is NP-Hard
Approximately maximizing F [Vondrak’08] Theorem: For any monotone f, there is a (1−1/e)-approximation for the problem max { F(x) | x ∈ P } where P ⊆ [0,1]^N is any solvable polytope. Algorithm: Continuous Greedy
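As a rough illustration only, here is a Python sketch of the continuous-greedy idea specialized to the simplest solvable polytope P = { x : Σ_i x_i ≤ k, 0 ≤ x_i ≤ 1 }. This specialization, the step and sample counts, and the function names are assumptions of the sketch, not Vondrak's actual algorithm or analysis: estimate the gradient of F by sampling, move toward the best vertex of P, and repeat.

```python
# Illustrative sketch (not from the talk): continuous greedy for the
# cardinality polytope { x : sum(x) <= k, 0 <= x_i <= 1 }.
import random

def continuous_greedy_cardinality(f, n, k, steps=20, samples=100, seed=0):
    rng = random.Random(seed)
    x = [0.0] * n
    for _ in range(steps):
        # Estimate dF/dx_i = E[ f(R + i) - f(R - i) ] with R sampled from x.
        gains = [0.0] * n
        for _ in range(samples):
            R = {i for i in range(n) if rng.random() < x[i]}
            for i in range(n):
                base = frozenset(R - {i})
                gains[i] += f(base | {i}) - f(base)
        # Best direction in P: indicator vector of the k largest estimated gains.
        for i in sorted(range(n), key=lambda i: -gains[i])[:k]:
            x[i] = min(1.0, x[i] + 1.0 / steps)
    return x  # a fractional point in P; it still has to be rounded

# Example: coverage function; mass concentrates on the two most valuable sets.
X = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
coverage = lambda A: len(set().union(*(X[i] for i in A))) if A else 0
print(continuous_greedy_cardinality(coverage, n=3, k=2))
```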
Approximately maximizing F [C-Vondrak-Zenklusen’11] Theorem: For any non-negative f, there is a 1/4-approximation for the problem max { F(x) | x ∈ P } where P ⊆ [0,1]^N is any down-closed solvable polytope. Remark: a 0.325-approximation can be obtained. Remark: the current best is 1/e ≈ 0.3678 [Feldman-Naor-Schwartz’11]. Algorithms: variants of local search and continuous greedy
Math. Programming approach min/max f(S) s.t. S ∈ 𝒮 → min/max g(x) s.t. x ∈ P(𝒮) → round x* ∈ P(𝒮) to S* ∈ 𝒮. • What is the continuous extension g? ✔ Lovász extension for min and multilinear extension for max • How to optimize with objective g? ✔ Convex optimization for min and O(1)-approx. algorithms for max • How do we round?
Rounding Rounding and approximation depend on 𝒮 and P(𝒮). Two competing issues: • obtain a feasible solution S* from the fractional x* • want f(S*) to be close to g(x*)
Rounding approach Viewpoint: objective function is complex • round x* to S* to approximately preserve objective • fix/alter S* to satisfy constraints • analyze loss in fixing/altering
Rounding to preserve objective x*: fractional solution to the relaxation. Minimization: f_L(x) = E_{θ ∈ [0,1]}[ f(x_θ) ]; pick θ uniformly at random in [0,1] (or [a, b]) and set S* = { i | x*_i ≥ θ }. Maximization: F(x) = E[ f(R) ]; form S* by picking each i ∈ N independently with probability α·x*_i (α ≤ 1)
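A minimal Python sketch (assumed code, not from the talk) of the two roundings just described: threshold rounding of x* for minimization and α-scaled independent rounding for maximization.

```python
# Illustrative sketch (not from the talk) of the two objective-preserving roundings.
import random

def threshold_round(x_star, lo=0.0, hi=1.0, rng=random):
    """Pick theta uniformly in [lo, hi]; return S* = { i : x*_i >= theta }."""
    theta = rng.uniform(lo, hi)
    return {i for i, xi in enumerate(x_star) if xi >= theta}

def independent_round(x_star, alpha=1.0, rng=random):
    """Include each i independently with probability alpha * x*_i (alpha <= 1)."""
    return {i for i, xi in enumerate(x_star) if rng.random() < alpha * xi}

print(threshold_round([0.3, 0.0, 0.7, 0.1]))
print(independent_round([0.3, 0.0, 0.7, 0.1], alpha=0.5))
```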
Maximization max f(S) s.t. S ∈ I, where I ⊆ 2^N is a downward closed family: A ∈ I and B ⊂ A ⟹ B ∈ I. Captures “packing” problems
Maximization High-level results: • optimal rounding in matroid polytopes [Calinescu-C-Vondrak-Pal’07, C-Vondrak-Zenklusen’09] • contention resolution scheme based rounding framework [C-Vondrak-Zenklusen’11]
Max k-Coverage max f(S) s.t. S ∈ I • X1, X2, ..., Xn subsets of U and an integer k • N = {1,2,...,n} • f is the set coverage function (monotone) • I = { A ⊆ N : |A| ≤ k } (cardinality constraint) • NP-Hard
Greedy [Nemhauser-Wolsey-Fisher’78, FNW’78] • Greedy gives a (1−1/e)-approximation for the problem max { f(S) | |S| ≤ k } when f is monotone (see the sketch below); obtaining a (1−1/e+ε)-approximation requires exponentially many value queries to f • Greedy gives 1/2 for maximizing a monotone f over a matroid constraint • Unless P=NP, there is no (1−1/e+ε)-approximation for the special case of Max k-Coverage [Feige’98]
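Here is a minimal Python sketch (assumed code, not from the talk) of the greedy algorithm referenced in the first bullet: repeatedly add the element with the largest marginal gain, k times.

```python
# Illustrative sketch (not from the talk): the Nemhauser-Wolsey-Fisher greedy
# rule for max { f(S) : |S| <= k } with a monotone submodular f.

def greedy_cardinality(f, ground, k):
    S = set()
    for _ in range(k):
        # Element with the largest marginal gain f(S + e) - f(S).
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S),
                   default=None)
        if best is None:
            break
        S.add(best)
    return S

# Max k-Coverage example: choose k = 2 of the sets below.
X = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}}
coverage = lambda A: len(set().union(*(X[i] for i in A))) if A else 0
print(greedy_cardinality(coverage, set(X), 2))  # {1, 3}, covering 6 elements
```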
Matroid Rounding [Calinescu-C-Pal-Vondrak’07]+[Vondrak’08]=[CCPV’09] Theorem: There is a randomized (1−1/e) ≈ 0.632 approximation for maximizing a monotone f subject to any matroid constraint. [C-Vondrak-Zenklusen’09] Theorem: (1−1/e−ε)-approximation for monotone f subject to a matroid and a constant number of packing/knapsack constraints.
Rounding in Matroids [Calinescu-C-Pal-Vondrak’07] Theorem: Given any point x in P(M), there is a randomized polynomial-time algorithm to round x to a vertex X (hence an independent set of M) such that • E[X] = x • E[f(X)] = E[F(X)] ≥ F(x). [C-Vondrak-Zenklusen’09] A different rounding with additional properties and applications.
Contention Resolution Schemes • I an independence family on N • P(I) a relaxation for I and x ∈ P(I) • R: the random set obtained by independent rounding of x. A CR scheme for P(I): given x and R, it outputs R' ⊆ R s.t. • R' ∈ I • for all i, Pr[ i ∈ R' | i ∈ R ] ≥ c
Rounding and CR schemes max F(x) s.t. x ∈ P(I) → round x* ∈ P(I) to S* ∈ I. Theorem: A monotone CR scheme for P(I) can be used to round so that E[ f(S*) ] ≥ c·F(x*). Proof via the FKG inequality.
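To illustrate the interface only, here is a Python sketch (a toy example of mine, not from the talk) of a CR scheme for the rank-one constraint |S| ≤ 1 with relaxation Σ_i x_i ≤ 1: round independently, then keep a single uniformly random element of R. For this scheme Pr[ i ∈ R' | i ∈ R ] = E[ 1/|R| given i ∈ R ] ≥ 1/2 by Jensen's inequality, so it satisfies the definition with a constant c.

```python
# Illustrative toy CR scheme (not from the talk) for the constraint |S| <= 1.
import random

def cr_round_rank_one(x, rng=random):
    """Independent rounding followed by contention resolution: keep one element."""
    R = [i for i, xi in enumerate(x) if rng.random() < xi]  # independent rounding
    return [rng.choice(R)] if R else []                     # R' subset of R, |R'| <= 1

print(cr_round_rank_one([0.4, 0.3, 0.3]))
```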
Summary for maximization • Optimal results in some cases • Several new technical ideas and results • Questions led to results even for the modular case • Similar results for modular and submodular (within constant factors) for most known problems
Minimization • The landscape is more complex • Many problems that are “easy” in the modular case are hard in the submodular case: shortest paths, spanning trees, sparse cuts, ... • Some successes via the Lovász extension • Future: need to understand special families of submodular functions and applications
Submodular-cost Vertex Cover • Input: G=(V,E) and f : 2^V → R+ • Goal: min f(S) s.t. S is a vertex cover in G • A 2-approx for the modular case is well-known • 2-approx for submodular costs [Koufogiannakis-Young’09, Iwata-Nagano’09, Goel et al.’09]
Submodular-cost Vertex Cover • Input: G=(V,E) and f : 2^V → R+ • Goal: min f(S) s.t. S is a vertex cover in G. Relaxation: min f_L(x) s.t. x_i + x_j ≥ 1 for all ij ∈ E, x_i ≥ 0 for all i ∈ V
Rounding min f_L(x) s.t. x_i + x_j ≥ 1 for all ij ∈ E, x_i ≥ 0 for all i ∈ V. Pick θ ∈ [0, 1/2] uniformly at random; output S = { i | x*_i ≥ θ }
Rounding Analysis min f_L(x) s.t. x_i + x_j ≥ 1 for all ij ∈ E, x_i ≥ 0 for all i ∈ V. Pick θ ∈ [0, 1/2] uniformly at random; output S = { i | x*_i ≥ θ }. Claim 1: S is a vertex cover with probability 1. Claim 2: E[ f(S) ] ≤ 2 f_L(x*). Proof: 2 f_L(x*) = 2 ∫_0^1 f(x*_θ) dθ ≥ 2 ∫_0^{1/2} f(x*_θ) dθ = E[ f(S) ]
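A minimal Python sketch (assumed code, not from the talk) of the rounding step only, taking the fractional solution x* as given: every edge ij has max(x*_i, x*_j) ≥ 1/2 ≥ θ, which is why S is a vertex cover with probability 1.

```python
# Illustrative sketch (not from the talk): threshold rounding for
# submodular-cost vertex cover, given x* with x*_i + x*_j >= 1 on every edge.
import random

def round_vertex_cover(x_star, rng=random):
    theta = rng.uniform(0.0, 0.5)                 # theta uniform in [0, 1/2]
    return {i for i, xi in enumerate(x_star) if xi >= theta}

# Path 0-1-2 with edges (0,1), (1,2) and fractional solution x* = (0.5, 0.5, 0.5).
print(round_vertex_cover([0.5, 0.5, 0.5]))        # always {0, 1, 2} here
```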
Submodular-cost Set Cover • Input: subsets X1, ..., Xn of U and f : 2^N → R+ • Goal: min f(S) s.t. ∪_{i ∈ S} Xi = U • Rounding according to the objective gives only a k-approximation, where k is the maximum element frequency; there is also an integrality gap of Ω(k) • [Iwata-Nagano’09] Ω(k/log k)-hardness