
Improved Approximation for Minimum Power Covering Problems, G. Kortsarz


Presentation Transcript


  1. Improved Approximation for Minimum Power Covering Problems. G. Kortsarz, joint work with Calinescu and Nutov.

  2. A description of IRR (Iterative Randomized Rounding) • Write a Set-Cover-like LP that seeks to solve some covering problem using a union of a collection of subsets. • Full Steiner tree: all leaves are terminals. Zelikovsky: use trees with k terminals. • Decompose a tree into (non-disjoint) full Steiner k-trees.

  3. Decomposition into non-disjoint full Steiner trees • Using non-disjoint full Steiner 3-trees leads to a penalty of 5/3 = 1.666… in the ratio. • For k=3 there is a PTAS for approximating the best cover by 3-trees. • This easily gives a 1.667 ratio.

  4. Reduction in the optimum value • Byrka, Grandoni, Rothvoß and Sanità, in their seminal paper, used the bidirected cut relaxation and the directed-component cut relaxation. • The sets in the LP are the directed components. • That is, in the Set-Cover-like view, the terminals need to be covered by directed components.

  5. Remark on their LP • The number of variables and constraints is exponential, but only small components are taken. • It is good to know that, in general, exponentially many constraints and variables need not be a problem, as one can project the variables into a convex function.

  6. How the ratio for Steiner tree improved • Ratio 2 (folklore?) • 1.83 [Zelikovsky '93] • 1.667 [Prömel & Steger '97] • 1.644 [Karpinski & Zelikovsky '97] • 1.598 [Hougardy & Prömel '99] • 1.55 [Robins & Zelikovsky '00] • Byrka et al.: ln 4 < 1.39 using the LP.

  7. Reduction in the optimum • In Set Cover, in a combinatorial analysis of Greedy, we might as well assume R ∩ OPT = ∅ for the solution R built so far. To get a constant ratio we must show that the optimum goes down. • In this paper they show that the (expected) cost of the residual problem goes down.

  8. An intro to a different objective: not the sum of edge costs • In a wired network each edge chosen has to be installed and we need to pay for it. • What if we deal with a wireless network? Of course we discuss the static case only, which is basically a sensor network.

  9. Motivation: wireless networks • Nodes in the network correspond to transmitters • More power ⇒ larger transmission range • Transmitting to distance r requires r^α power, with 2 ≤ α ≤ 4 • Transmission range = disk centered at the node • Battery operated ⇒ power conservation is critical • Type of problems: find a min-power range assignment so that the resulting communication network satisfies prescribed properties.

  10. The power of a vertex and of a graph • p_E'(v) = max{c(e) : e ∈ E' touching v} • Many classical problems have been studied with respect to the (more difficult) min-power model • p(G) = Σ_{v∈V} p(v): the sum over the vertices of the largest cost of an edge touching each vertex.
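As a concrete reading of these two definitions, here is a minimal Python sketch; the edge representation and the function names are illustrative, not from the talk.

```python
# A minimal sketch of the power definitions above (illustrative names).
# An edge set E' is a dict mapping an edge (u, v) to its cost c(e).

def vertex_power(v, edges):
    """p_E'(v): the largest cost among edges of E' touching v (0 if none)."""
    return max((c for (u, w), c in edges.items() if v in (u, w)), default=0)

def graph_power(edges):
    """p(G) = sum over the vertices of their largest incident edge cost."""
    vertices = {x for e in edges for x in e}
    return sum(vertex_power(v, edges) for v in vertices)

# Tiny usage example: a path a-b-c with costs 3 and 5.
E = {("a", "b"): 3, ("b", "c"): 5}
print(graph_power(E))   # p(a)=3, p(b)=5, p(c)=5 -> 13, while the cost c(G) is 8
```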

  11. Figure: a range assignment on nodes a through g and the resulting communication network.

  12. Figure: an example with unit costs, comparing the cost c(G) = n with the power p(G) = n + 1.

  13. Example of power versus cost (figure with vertices A, B, C, D, F, H, J and edge costs). • p(A)=7, p(B)=8, p(C)=6, p(D)=6, p(F)=8, p(H)=6, p(J)=7 • p(G) = 7+8+6+6+8+6+7 = 48, while c(G) = 57.

  14. Comparing power and cost: the spanning tree case • The min-cost version is a polynomial problem • The min-power spanning tree is NP-hard [Clementi, Penna, Silvestri, 2000] • Best known approximation ratio: 5/3 [Althaus, Calinescu, Prasad, Tchervensky, Zelikovsky, 2004]

  15. The Min-Power Vertex k-Connectivity Problem • We are given a graph G(V,E) with edge costs and an integer k • Design a min-power subgraph G'(V,E') so that every pair u, v ∈ V admits at least k vertex-disjoint paths from u to v • May seem unrelated to min-cost vertex k-connectivity, but the two are equivalent with respect to approximation (difficult proof).

  16. The problem we study • Only undirected graphs • The Minimum Power Terminal Cover: • Input: a graph G(V,E), costs c: E → N⁺, and a set T ⊆ V of terminals • Output: a minimum-power subgraph in which each terminal has degree at least 1.
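Since a terminal only needs one incident edge, a tiny instance can be solved exactly by brute force: let each terminal pick one incident edge and take the best combination. A minimal sketch follows; the names and the toy instance are mine, and this is only a sanity check for small examples, not the algorithm of the paper.

```python
from itertools import product

def min_power_terminal_cover(edges, terminals):
    """Exact min-power cover of `terminals` by brute force: each terminal
    picks one incident edge and we take the cheapest combination.
    `edges` maps (u, v) to the cost c(e). Only sensible for tiny instances."""
    incident = {t: [e for e in edges if t in e] for t in terminals}
    best_power, best_edges = float("inf"), None
    for choice in product(*(incident[t] for t in terminals)):
        chosen = set(choice)                      # union of the picked edges
        power = 0
        for v in {x for e in chosen for x in e}:  # vertex power = max incident cost
            power += max(edges[e] for e in chosen if v in e)
        if power < best_power:
            best_power, best_edges = power, chosen
    return best_power, best_edges

# Example: terminals a and c, both adjacent to b.
E = {("a", "b"): 3, ("c", "b"): 5, ("a", "c"): 7}
print(min_power_terminal_cover(E, ["a", "c"]))   # (13, {('a','b'), ('c','b')})
```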

  17. The Set Cover we choose • We reduce to the problem of covering the terminals with stars having at most k leaves, for some constant k. • In [KN] it is shown that the penalty is only 1+1/k, which is negligible for a large constant k. • This penalty can be avoided by using a separation oracle for general stars.

  18. Our paper is technically not exactly IRR • The Set-Cover LP covers terminals with stars with a constant number k of leaves. • Such a star S is selected at random using the LP. • Since k is constant, we can take the OPTIMUM COVER for the leaves of S; we DO NOT choose S itself.

  19. What is known about covering k terminals for a constant k • As far as I know, the best running time for covering k terminals with minimum power is 2^k · poly(n). • Thus, for constant k, negligible penalty and polynomial time.

  20. We do not show a reduction in the optimum • Motivated by a paper of Grandoni, we define an interesting potential function Φ(R), with R the optimal solution for the remaining terminals. We show that our power is at most p(R) + Φ(R). • The proof shows that Φ(R) goes down at every iteration.

  21. Power problems are harder • For the sum of costs, terminal cover is a matching-like problem and can be solved in polynomial time. • The minimum-power version is NP-hard, even if the terminal set T is an independent set. • Minimum power is NP-hard even for uniform costs.

  22. Our Results • For the general Minimum Power Terminal Cover we give a ratio of 1.41 < 1.5 using IRR. • If T is an independent set, the head is not a terminal, and thus we get a 1.22 ratio using IRR. • For uniform costs, a 1.2 ratio.

  23. Finally, the Backup Terminals Problem • We study the following problem: choose a minimum-power subgraph H so that for every terminal t there is in H a path to another terminal t'. • There is a trivial 2 ratio. • We give a 1.5 ratio.

  24. Selecting edges so that every terminal has degree at least 1 • The solution is a collection of stars. • The only difference is the root: if the costs of the edges in a star S are ck ≤ ck-1 ≤ … ≤ c1, then p(S) = 2·c1 + c2 + c3 + … + ck. • It is interesting that counting c1 twice makes such a difference.
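A short sketch of this star-power formula, assuming the star's edge costs are given as a list (illustrative):

```python
def star_power(costs):
    """Power of a star with edge costs `costs`: the head pays the largest cost,
    each leaf pays its own edge, so p(S) = 2*c1 + c2 + ... + ck with c1 largest."""
    return max(costs) + sum(costs) if costs else 0

# Example: edge costs 6, 4, 1 -> head pays 6, leaves pay 6 + 4 + 1 -> p(S) = 17.
print(star_power([4, 1, 6]))   # 17
```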

  25. Figure: the unfilled nodes are terminals; a minimum power cover (edge costs shown).

  26. Figure: the unfilled nodes are terminals; the cover is a collection of stars (edge costs shown).

  27. The result most closely related to ours • Grandoni gives a 1.91-ratio algorithm for the Minimum Power Steiner Tree. • We use the same potential function, but in different ways. • Grandoni covers cuts, we cover terminals. • Combined with an algorithm by [KN]; IRR alone gives nothing.

  28. A Set-Cover-like LP with k-stars • Minimize Σ_S pS·xS • Subject to Σ_{S: t∈S} xS ≥ 1 for every terminal t, and xS ≥ 0 • The LP has only stars with k leaves (not mentioned again). • We prove Σ_S xS ≤ |T| ≤ n.
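To make the LP concrete, here is a minimal sketch on a toy instance using scipy.optimize.linprog. The star enumeration, the function names and the instance are mine, not the paper's implementation; the head is counted as covered when it happens to be a terminal, which is an assumption of this sketch.

```python
# A minimal sketch of the Set-Cover-like LP over k-stars, on a toy instance.
from itertools import combinations
from scipy.optimize import linprog

def enumerate_k_stars(edges, terminals, k):
    """All stars with a common head and at most k terminal leaves.
    Returns a list of (power, frozenset_of_covered_terminals)."""
    vertices = {x for e in edges for x in e}
    cost = {}
    for (u, v), c in edges.items():
        cost[(u, v)] = cost[(v, u)] = c
    stars = []
    for head in vertices:
        nbr_terms = [t for t in terminals if (head, t) in cost]
        for r in range(1, k + 1):
            for leaves in combinations(nbr_terms, r):
                cs = [cost[(head, t)] for t in leaves]
                power = max(cs) + sum(cs)      # p(S) = 2*c1 + c2 + ... + ck
                covered = set(leaves) | ({head} if head in terminals else set())
                stars.append((power, frozenset(covered)))
    return stars

def solve_cover_lp(stars, terminals):
    """min sum_S pS*xS  s.t.  sum_{S covering t} xS >= 1,  xS >= 0."""
    c = [p for p, _ in stars]
    A_ub = [[-1.0 if t in cov else 0.0 for _, cov in stars] for t in terminals]
    b_ub = [-1.0] * len(terminals)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(stars))
    return res.x, res.fun

edges = {("a", "b"): 3, ("c", "b"): 5, ("a", "c"): 7}
terminals = ["a", "c"]
stars = enumerate_k_stars(edges, terminals, k=2)
x, value = solve_cover_lp(stars, terminals)
print(value)   # fractional optimum; here it matches the integral cover of power 13
```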

  29. A sample space • The values {xS/n} form a sample space of polynomial size. • The algorithm: 1. While there is an uncovered terminal: 1.1 Write the LP. 1.2 Choose some star S at random (S with probability xS/n). 1.3 Find the optimum cover for the leaves of S and add it. 1.4 Delete all the terminals of S. 2. Return the solution.
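A sketch of this loop, reusing the enumerate_k_stars and solve_cover_lp helpers and the toy instance from the previous sketch. It is again illustrative: the real algorithm adds an optimal cover of the leaves of the chosen star, while this sketch simply charges p(S).

```python
# Iterative randomized rounding loop (sketch); assumes enumerate_k_stars and
# solve_cover_lp from the previous sketch are in scope.
import random

def irr_cover(edges, terminals, k):
    uncovered = set(terminals)
    solution_power = 0.0
    n = len({x for e in edges for x in e})
    while uncovered:
        # 1.1 Write the LP for the terminals that are still uncovered.
        stars = enumerate_k_stars(edges, uncovered, k)
        x, _ = solve_cover_lp(stars, sorted(uncovered))
        # 1.2 Choose star S with probability xS/n (possibly no star this round).
        r, chosen = random.random() * n, None
        for (power, covered), xs in zip(stars, x):
            r -= xs
            if r <= 0:
                chosen = (power, covered)
                break
        if chosen is None:
            continue
        power, covered = chosen
        # 1.3 The algorithm adds an OPTIMAL cover of the leaves of S (doable for
        #     constant k); this sketch just charges the power of S itself.
        solution_power += power
        # 1.4 Delete the terminals of S.
        uncovered -= covered
    # 2. Return (the power of) the solution.
    return solution_power

random.seed(0)
print(irr_cover(edges, terminals, k=2))   # toy instance from the previous sketch
```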

  30. The probability that a terminal is covered • By the LP constraints, each uncovered terminal is covered with probability at least 1/n in every iteration. • Thus after O(n·log n) iterations we are done (with high probability). • Our solution for covering the chosen S is optimal, hence no worse than the fractional solution. • In particular, it is at most the cost the optimum uses to cover S.

  31. The following proof works for any potential function • Let Ri-1 be the optimal collection of stars of size k for the terminals still uncovered before iteration i-1. • Ri results from Ri-1 by removing the terminals covered in iteration i-1. • We elaborate later; denote the potential function by Φ(Ri).

  32. The axiom needed • Φ(Ri-1) - Φ(Ri) ≥ p(Ri-1)/n (in expectation). • Let J be the solution output by our algorithm: the union of the optimal integral solutions for the stars chosen at all iterations. • We show that p(J) ≤ Φ(R0). • This can be proved for any such Φ.

  33. Proof • Denote opti = Σ_{S∈Ri-1} p(S). • The expected fractional cost when the algorithm ends is Σ_i opti/n. • Thus Σ_i opti/n ≤ Σ_i p(Ri-1)/n. • But in IRR, J takes the optimum for each chosen star. • Hence p(J) ≤ Σ_i opti/n.

  34. Proof • p(J) ≤ Σ_i opti/n ≤ Σ_i p(Ri-1)/n ≤ Σ_i (Φ(Ri-1) - Φ(Ri)) = Φ(R0), since the last sum telescopes. • The algorithm itself takes the minimum between the above solution and a solution by [KN]; more precisely, a convex combination.

  35. The potential function • Φ(R) = Σ_{i≥1} ci/i. • The bound we obtain is Φ(R) + p(R). • The value of the solution of [KN] is p(R) + c3 + c4 + … + cq. • In [KN] it is shown that choosing 2-stars gives a 3/2 ratio.
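A quick numeric sketch of the two quantities on this slide, evaluated on a single star with edge costs sorted in decreasing order; the function names and the sample costs are mine.

```python
def potential(costs):
    """Phi(R) = sum_{i >= 1} ci/i, with c1 >= c2 >= ... the sorted costs."""
    cs = sorted(costs, reverse=True)
    return sum(c / i for i, c in enumerate(cs, start=1))

def star_power(costs):
    """p(R) = 2*c1 + c2 + ... + cq for a single star."""
    return max(costs) + sum(costs) if costs else 0

def kn_value(costs):
    """The [KN]-style value quoted above: p(R) + c3 + c4 + ... + cq."""
    cs = sorted(costs, reverse=True)
    return star_power(costs) + sum(cs[2:])

costs = [6, 4, 2, 1]
print(star_power(costs) + potential(costs))   # IRR-style bound: p(R) + Phi(R)
print(kn_value(costs))                        # [KN]-style value
```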

  36. Different behavior • The ratio of Φ(R) = Σ_{i≥1} ci/i over p(R) = c1 + Σ_{i≥1} ci goes down as q goes up. • For small q, [KN] is good, as c1 and c2 start to dominate. • We take a convex combination, with [KN] getting weight 2/3 and IRR getting weight 1/3. The optimization is unusually hard.

  37. Inequality I • What happens if we add a collection of edges H? Let the reduction in the potential be denoted by Δ(H). • How does this compare to Σ_{v∈H} Δ(v)? • It turns out that Δ(H) ≥ Σ_{v∈H} Δ(v).

  38. A technically complex proof • This holds only for the Δ(H) we defined. • But now we explain how to finish the proof of the main lemma. • And only in the case where the head is not a terminal; if it is a terminal, the proof is much more complex.

  39. Proof in two parts • Denote by Pr(H) the probability that exactly H is hit. • Exp(Δ) = Σ_H Pr(H)·Δ(H). • Indeed, every H that can be hit contributes a change of Δ(H), and the probability that this happens, i.e. that H is hit, is Pr(H).

  40. Two inequalities • We will show two things: 1) Exp(Δ) = Σ_H Pr(H)·Δ(H) ≥ Σ_{v∈R} Δ(v)/n; 2) Σ_{v∈R} Δ(v) ≥ p(R). Together these give Exp(Δ) ≥ p(R)/n.

  41. First inequality • Exp(Δ) = Σ_H Pr(H)·Δ(H) ≥ Σ_H Pr(H)·Σ_{v∈H} Δ(v); this holds by Inequality I. • Exchanging the order of summation, this equals Σ_{v∈R} Δ(v)·Pr(v is hit). • The reason is that summing the probability of every H that contains v gives exactly the probability that v is hit.

  42. Continued • Σ_{v∈R} Δ(v)·Pr(v is hit) ≥ Σ_{v∈R} Δ(v)/n. • Recall that every vertex is hit with probability at least 1/n. • Note that this ends the proof of the first inequality. • The second inequality is hard to prove; see the paper.

  43. And this is only the easy case • A much more complex case is when the head of the star is a terminal. • One thing is clear: this paper is technically very hard. A simpler proof would be welcome, but seems unlikely.

  44. Can we turn this into a collection of axioms that imply a good ratio? • Unfortunately, many of the properties hold only because of the specific potential function Φ that was chosen. • A nice (probably hard) question: can we give some axioms that imply some ratio? • Can we get something general?

  45. Is there a lesson from this paper? • It does not seem so, even though the same potential function appears in two papers. • I like general theorems that work for many problems, like the GW primal-dual and the Jain result. • Otherwise it's just one approximation after another.

  46. Open problems • There are very few results that use IRR. • The potential function seems to be an idea that can help. • Can we solve the LP only once? • A non-constant lower bound for Minimum Cost Vertex k-Connected Subgraph?
