
Weak Kernels and Their Applications

Weak Kernels and Their Applications. Binhai Zhu Computer Science Department Montana State University *Joint work with Haitao Jiang, Chihao Zhang* Early version posted at ECCC, downloaded 600+ times by mid-Oct. Latest version http://www.cs.montana.edu/bhz/weak-kernel2.pdf


Presentation Transcript


  1. Weak Kernels and Their Applications Binhai Zhu Computer Science Department Montana State University *Joint work with Haitao Jiang, Chihao Zhang* Early version posted at ECCC, downloaded 600+ times by mid-Oct. Latest version http://www.cs.montana.edu/bhz/weak-kernel2.pdf Applications in this talk include results in 2 extra papers.

  2. Background • For intractable problems, approximation algorithms and parameterized algorithms are the two dominant methods (which generate results with performance guarantees). Heuristic methods (like evolutionary computation) are beyond the scope of this talk. • In computational biology and bioinformatics, due to the inaccuracy and errors in the datasets, sometimes even a 1.5-factor approximation is useless to the biologists. • So parameterized (or FPT) algorithms become a natural choice for many problems.

  3. Background • For intractable problems, approximation algorithms and parameterized algorithms are the two dominant methods (which generate results with performance guarantees). Heuristic methods (like evolutionary computation) are beyond the scope of this talk. • In computational biology and bioinformatics, due to the inaccuracy and errors in the datasets, sometimes even a 1.5-factor approximation is useless to the biologists. • So parameterized (or FPT) algorithms become a natural choice for many problems. • Of course, not all problems admit FPT algorithms.

  4. Background An FPT algorithm for a decision problem with optimal solution value k runs in O(f(k)·n^c) or O*(f(k)) time, where f(·) is any function depending only on k, c is some constant not related to k, and n is the input size.

  5. Background An FPT algorithm for a decision problem with optimal solution value k runs in O(f(k)·n^c) or O*(f(k)) time, where f(·) is any function depending only on k, c is some constant not related to k, and n is the input size. • As a convention, we typically consider the decision version of the problem. • For example, with Vertex Cover, instead of computing the minimum-size subset of vertices which covers all the edges, we ask “Can the edges in the input graph be covered by k vertices?”

  6. Background An FPT algorithm for a decision problem with optimal solution value k runs in O(f(k)·n^c) or O*(f(k)) time, where f(·) is any function depending only on k, c is some constant not related to k, and n is the input size. • As a convention, we typically consider the decision version of the problem. • For example, with Vertex Cover, instead of computing the minimum-size subset of vertices which covers all the edges, we ask “Can the edges in the input graph be covered by k vertices?” • The current best FPT algorithm for Vertex Cover runs in O*(1.27^k) time. In other words, as long as k ≤ ~150, Vertex Cover can be solved exactly in a reasonable amount of time (even though it is NP-complete).
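
The branching behind such FPT bounds is easy to sketch. The snippet below is an illustration (not from the talk): it decides Vertex Cover by naive 2-way branching on an uncovered edge, which gives O*(2^k); the O*(1.27^k) bound requires much more refined branching rules.

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of size <= k.

    Bounded search tree: pick any uncovered edge (u, v); at least one
    endpoint must be in every cover, so branch on both choices.
    """
    if not edges:
        return True   # no edges left to cover
    if k == 0:
        return False  # edges remain but the budget is exhausted
    u, v = edges[0]
    # Branch 1: put u in the cover and drop all edges touching u.
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    if has_vertex_cover(rest_u, k - 1):
        return True
    # Branch 2: put v in the cover.
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return has_vertex_cover(rest_v, k - 1)
```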

  7. Background • Kernelization is a standard method (arguably the most fundamental one) in parameterized computation. Intuitively, it is data reduction. So once we have a (small) kernel for a problem, besides solving it exactly by brute-force, we can try to handle the problem with Integer Linear Programming and/or Branch-and-Bound, etc.

  8. Background • Kernelization is a standard method (arguably the most fundamental one) in parameterized computation. Intuitively, it is data reduction. So once we have a (small) kernel for a problem, besides solving it exactly by brute-force, we can try to handle the problem with Integer Linear Programming and/or Branch-and-Bound, etc. • Let’s take Vertex Cover as an example. The input is a graph G=(V,E), with |V|=n, the question is whether one can find a subset V’ of V, |V’|=k, whose removal leaves no edge left (or, all the edges in E are covered by V’).

  9. Background • Let’s take Vertex Cover as an example. The input is a graph G=(V,E), with |V|=n, the question is whether one can find a subset V’ of V, |V’|=k, whose removal leaves no edge left (or, all the edges in E are covered by V’). An easy kernelization algorithm for Vertex Cover: (1) Repeat: as long as there is a vertex u with degree larger than k, include u in the solution, delete u (and the edges incident to u). (2) Return the resulting graph G’.

  10. Background An easy kernelization algorithm for Vertex Cover: (1) Repeat: as long as there is a vertex u with degree larger than k, include u in the solution, delete u (and the edges incident to u). (2) Return the resulting graph G’. It is easy to see that G has a VC of size k iff G’ has a VC of size k’ (with k’≤ k).

  11. Background An easy kernelization algorithm for Vertex Cover: (1) Repeat: as long as there is a vertex u with degree larger than k, include u in the solution, delete u (and the edges incident to u). (2) Return the resulting graph G’. It is easy to see that G has a VC of size k iff G’ has a VC of size k’ (with k’≤ k). G’ has size at most k^2: G’ admits a VC of at most k vertices, and the degree of each vertex is at most k.

  12. Background An easy kernelization algorithm for Vertex Cover: (1) Repeat: as long as there is a vertex u with degree larger than k, include u in the solution, delete u (and the edges incident to u). (2) Return the resulting graph G’. It is easy to see that G has a VC of size k iff G’ has a VC of size k’ (with k’≤ k). G’ has size at most k^2: G’ admits a VC of at most k vertices, and the degree of each vertex is at most k. We usually say that G’ is a kernel of size k^2 for Vertex Cover.
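
The high-degree rule above can be sketched in a few lines; the following is an illustration with an assumed adjacency-dict input, not code from the talk. A vertex whose degree exceeds the remaining budget must be in every small-enough cover, so it is forced into the solution and deleted.

```python
def vc_kernelize(adj, k):
    """Kernelize a Vertex Cover instance via the high-degree rule.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns (reduced graph, remaining budget k', forced vertices),
    or None once a no-instance is detected.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    forced = []
    progress = True
    while progress and k - len(forced) >= 0:
        progress = False
        budget = k - len(forced)
        for u in list(adj):
            if len(adj[u]) > budget:
                forced.append(u)          # u must be in the cover
                for w in adj[u]:
                    adj[w].discard(u)     # delete u and its incident edges
                del adj[u]
                progress = True
                break
    budget = k - len(forced)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    # Kernel size check: a yes-instance has at most budget^2 edges left.
    if budget < 0 or edges > budget * budget:
        return None
    return adj, budget, forced
```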

  13. Kernel (formal definition) Kernelization is a polynomial time algorithm which transforms a problem instance (I,k) into (I’,k’) such that: (1) (I,k) is a yes-instance iff (I’,k’) is a yes-instance; (2) k’ ≤ k; and (3) |I’| ≤ f(k) for some function f(·). (I’,k) or I’ is usually called the kernel for the problem.

  14. Kernel (formal definition) Kernelization is a polynomial time algorithm which transforms a problem instance (I,k) into (I’,k’) such that: (1) (I,k) is a yes-instance iff (I’,k’) is a yes-instance; (2) k’ ≤ k; and (3) |I’| ≤ f(k) for some function f(·). (I’,k) or I’ is usually called the kernel for the problem. For the Vertex Cover example, I’=G’, f(k)=k^2.

  15. Kernel (more information) It is well known that a problem admits an FPT algorithm iff it has a kernel. All this information can be found in standard textbooks on FPT algorithms; e.g., Downey and Fellows (1999), Flum and Grohe (2006), and Niedermeier (2006).

  16. Weak Kernel While kernelization is really data reduction, weak kernel is about “search space” reduction. How do we get this idea?

  17. Weak Kernel While kernelization is really data reduction, weak kernel is about “search space” reduction. How do we get this idea? Well, initially by mistake!

  18. Search Problem Given a search problem Π, let Π(I) be an instance of Π of size n, where a solution of certain size can be searched or drawn from a space S(I) which can be constructed from some components of Π(I). //Non-mathematical def

  19. Search Problem Given a search problem Π, let Π(I) be an instance of Π of size n, where a solution of certain size can be searched or drawn from a space S(I) which can be constructed from some components of Π(I). //Non-mathematical def The “decision vs search” question was raised by Valiant in 1974 and was largely settled in 1994. The main conclusion is that, even within NP, there is a problem whose search version cannot be reduced to its decision version. Loosely speaking, search problems are harder than decision problems.

  20. Weak Kernel Given a search problem Π, let Π(I) be an instance of Π of size n, where a solution of size k can be searched or drawn from a space S(I) which can be constructed from some components of Π(I). We denote the search problem as (Π(I),S(I),k).

  21. Weak Kernel Given a search problem Π, let Π(I) be an instance of Π of size n, where a solution of size k can be searched or drawn from a space S(I) which can be constructed from some components of Π(I). We denote the search problem as (Π(I),S(I),k). Vertex Cover Example: (G=(V,E),V,k).

  22. Weak Kernel Given a search problem Π, let Π(I) be an instance of Π of size n, where a solution of size k can be searched or drawn from a space S(I) which can be constructed from some components of Π(I). We denote the search problem as (Π(I),S(I),k). k-LEAF OUTBRANCHING (finding a rooted oriented spanning tree with at least k leaves in an input digraph D)

  23. Weak Kernel Given a search problem Π, let Π(I) be an instance of Π of size n, where a solution of size k can be searched or drawn from a space S(I) which can be constructed from some components of Π(I). We denote the search problem as (Π(I),S(I),k). k-LEAF OUTBRANCHING (finding a rooted oriented spanning tree with at least k leaves in an input digraph D): (D=(V,E),L,k), where L is the collection of vertex sets, each being the leaf set of some rooted oriented spanning tree of D.

  24. Weak Kernel (semi-formal def) A weak kernelization is a polynomial time algorithm which transforms a search problem instance (Π(I),S(I),k) into (Π(I’),S’(I),k) such that: (1) |Π(I’)| ≤ |Π(I)|; (2) |S’(I)| ≤ f(k) for some function f(·); and (3) (Π(I),S(I),k) is a yes-instance iff (Π(I’),S’(I),k) is a yes-instance. (Π(I’),S’(I),k) or simply S’(I) is called a weak kernel for Π, with size |S’(I)|.

  25. Weak Kernel (properties) Lemma 1. There is a search problem beyond P which has a weak kernel but its corresponding decision version does not have a kernel. So Weak Kernel ≠ Kernel

  26. Weak Kernel (properties) Lemma 1. There is a search problem beyond P which has a weak kernel but its corresponding decision version does not have a kernel. Lemma 2. If a problem Π∈NP has a weak kernel, then it admits an FPT algorithm. Proof Sketch:

  27. Weak Kernel (properties) Lemma 1. There is a search problem beyond P which has a weak kernel but its corresponding decision version does not have a kernel. Lemma 2. If a problem Π∈NP has a weak kernel, then it admits an FPT algorithm. Proof Sketch: As the weak kernel has size at most f(k), we can enumerate all possible solutions of size k. For each of them, we can check whether it is a solution for Π in polynomial time (because Π∈NP). Hence we have an FPT algorithm.
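
The proof of Lemma 2 is essentially the following generic scheme (a sketch with hypothetical names, not code from the paper): enumerate the at most C(f(k), k) size-k subsets of the weak kernel and hand each to the polynomial-time verifier that membership in NP guarantees.

```python
from itertools import combinations

def fpt_via_weak_kernel(weak_kernel, k, is_solution):
    """FPT algorithm from the proof of Lemma 2: the search space S'(I)
    has at most f(k) elements, so a size-k solution -- if one exists --
    is among the C(f(k), k) subsets tried below, each checked in
    polynomial time."""
    for candidate in combinations(weak_kernel, k):
        if is_solution(set(candidate)):
            return set(candidate)
    return None

# e.g. Vertex Cover on a path, with the trivial weak kernel S(I) = V:
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
covers = lambda s: all(u in s or v in s for u, v in edges)
fpt_via_weak_kernel([1, 2, 3, 4, 5], 2, covers)  # -> {2, 4}
```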

  28. Weak Kernel (properties) Lemma 1. There is a search problem beyond P which has a weak kernel but its corresponding decision version does not have a kernel. Lemma 2. If a problem Π∈NP has a weak kernel, then it admits an FPT algorithm. Corollary. If a problem Π∈NP has a weak kernel, then it has a kernel.

  29. Weak Kernel (properties) Lemma 1. There is a search problem beyond P which has a weak kernel but its corresponding decision version does not have a kernel. Lemma 2. If a problem Π∈NP has a weak kernel, then it admits an FPT algorithm. Corollary. If a problem Π∈NP has a weak kernel, then it has a kernel. Implication: For problems belonging to NP, weak kernels might be more powerful than kernels in designing FPT algorithms.

  30. Weak Kernel (Applications) 1. Max LEAF (complement of the minimum connected dominating set): weak kernel 3.5k (Bonsma, Brueggemann and Woeginger, 2003). --- communicated by M. Fellows. 2. Minimum co-Path Set: weak kernel 5k. 3. Sorting with Minimum Unsigned Reversals: weak kernel 4k. 4. Sorting with Minimum Unsigned Translocations: weak kernel 4k. 5. Sorting with Minimum Unsigned DCJ Operations: weak kernel 2k. --- for (3),(4),(5) the weak kernels are indirect. 6. CMSR (complement of Maximal Strip Recovery): weak kernel 18k. --- first non-trivial and tight weak kernel. Hopefully, more to come …

  31. Application 1. Min co-Path Set Motivation: Gene clusters. Given a set of gene clusters, like {a,b,c}, we want to make a sequence from them; and when it is impossible to do so, delete the minimum number of gene clusters.

  32. Application 1. Min co-Path Set Motivation: Gene clusters. Given a set of gene clusters, like {a,b,c}, we want to make a sequence from them; and when it is impossible to do so, delete the minimum number of gene clusters. Example. {{a,b,c},{a,c},{b,e}} → acbe

  33. Application 1. Min co-Path Set Motivation: Gene clusters. Given a set of gene clusters, like {a,b,c}, we want to make a sequence from them; and when it is impossible to do so, delete the minimum number of gene clusters. Example. {{a,b,c},{a,c},{b,e}} → acbe {{a,b,c},{c,e},{b,e}} → abce, {b,e} has to be deleted

  34. Application 1. Min co-Path Set Motivation: Gene clusters. Given a set of gene clusters, like {a,b,c}, we want to make a sequence from them; and when it is impossible to do so, delete the minimum number of gene clusters. When each cluster has size 2, we can formulate this as a graph problem: Delete the minimum number of edges from a graph such that the resulting graph is composed of a set of disjoint paths.

  35. Application 1. Min co-Path Set When each cluster has size 2, we can formulate this as a graph problem: Delete the minimum number of edges from a graph such that the resulting graph is composed of a set of disjoint paths. Status: Min co-Path Set is NP-complete (Cheng et al., 2008); it is APX-hard, and the best approximation has a factor of 10/7 (Chen et al., 2010). I will sketch a simple FPT algorithm using weak kernels.

  36. Application 1. Min co-Path Set Lemma. There is a solution R for the minimum co-path set such that R contains only edges incident to some vertices of degree at least 3 in the input graph G.

  37. Application 1. Min co-Path Set Lemma. There is a solution R for the minimum co-path set such that R contains only edges incident to some vertices of degree at least 3 in the input graph G. Weak Kernelization: (1) Identify the vertices of G with degree at least 3. Let this set be V3(G). (2) Let the set of edges which are incident to some vertices in V3(G) be E3(G). Return (G,E3(G),k) as a weak kernel.

  38. Application 1. Min co-Path Set Lemma. There is a solution R for the minimum co-path set such that R contains only edges incident to some vertices of degree at least 3 in the input graph G. Weak Kernelization: (1) Identify the vertices of G with degree at least 3. Let this set be V3(G). (2) Let the set of edges which are incident to some vertices in V3(G) be E3(G). Return (G,E3(G),k) as a weak kernel. Lemma. The minimum co-path set problem has a solution of size k iff the solution can be obtained by deleting k edges in E3(G).
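
This weak kernelization takes only a few lines; the sketch below is an illustration with an assumed adjacency-dict input, not code from the paper. It computes V3(G) and returns E3(G).

```python
def copath_weak_kernel(adj):
    """Compute the weak kernel E3(G) for Min co-Path Set.

    adj: dict mapping each vertex to the set of its neighbours.
    V3(G) holds the vertices of degree >= 3; E3(G) holds every edge
    incident to some vertex of V3(G).  By the lemma, some optimal
    co-path set lies entirely inside E3(G).
    """
    v3 = {v for v, nbrs in adj.items() if len(nbrs) >= 3}
    e3 = {frozenset((u, w)) for u in v3 for w in adj[u]}
    return e3
```

Edges are stored as frozensets so that {u,w} and {w,u} coincide.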

  39. Application 1. Min co-Path Set Lemma. There is a solution R for the minimum co-path set such that R contains only edges incident to some vertices of degree at least 3 in the input graph G. Lemma. The minimum co-path set problem has a solution of size k iff the solution can be obtained by deleting k edges in E3(G). Lemma. Let D be the solution of size k (obtained from E3(G)). Then |E3(G)| ≤ 5k (or, we have a weak kernel of size 5k).

  40. Application 1. Min co-Path Set Lemma. There is a solution R for the minimum co-path set such that R contains only edges incident to some vertices of degree at least 3 in the input graph G. Lemma. The minimum co-path set problem has a solution of size k iff the solution can be obtained by deleting k edges in E3(G). Lemma. Let D be the solution of size k (obtained from E3(G)). Then |E3(G)| ≤ 5k (or, we have a weak kernel of size 5k).

  41. Application 1. Min co-Path Set Theorem. Let k be the size of the minimum co-path set. The minimum co-path set problem has a weak kernel of size 5k, hence can be solved in O(2^(3.61k)(n+k)) time.

  42. Application 1. Min co-Path Set Theorem. Let k be the size of the minimum co-path set. The minimum co-path set problem has a weak kernel of size 5k, hence can be solved in O(2^(3.61k)(n+k)) time. With bounded search tree, the time bound can be improved to O(2.3^k(n+k)).
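
Putting the pieces together, the brute-force search behind the theorem can be sketched as follows (assumed helpers, not from the paper): enumerate k-subsets of E3(G), delete each, and test whether what remains is a disjoint union of paths.

```python
from itertools import combinations

def is_disjoint_paths(adj):
    """A graph is a disjoint union of paths iff every degree is <= 2
    and it is acyclic (a forest satisfies |E| = |V| - #components)."""
    if any(len(nbrs) > 2 for nbrs in adj.values()):
        return False
    seen, comps = set(), 0
    for s in adj:                      # count connected components
        if s in seen:
            continue
        comps += 1
        stack = [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return edges == len(adj) - comps

def min_copath_brute(adj, k, e3):
    """Try each of the C(|E3|, k) <= C(5k, k) size-k subsets of the
    weak kernel e3 (edges as frozensets) as the deletion set."""
    for deleted in combinations(e3, k):
        dset = set(deleted)
        rest = {v: {w for w in nbrs if frozenset((v, w)) not in dset}
                for v, nbrs in adj.items()}
        if is_disjoint_paths(rest):
            return dset
    return None
```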

  43. Application 2. Sorting with Minimum Unsigned Reversals Problem: Given a sequence H which is a permutation of {1,2,3,…,n-1,n}, use the minimum number of unsigned reversals to convert H into the identity permutation I=<1,2,3,…,n>. Example: <3,4,1,2> → <3,2,1,4> (unsigned reversal)

  44. Application 2. Sorting with Minimum Unsigned Reversals Problem: Given a sequence H which is a permutation of {1,2,3,…,n-1,n}, use the minimum number of unsigned reversals to convert H into the identity permutation I=<1,2,3,…,n>. Example: <3,4,1,2> → <3,2,1,4> → <1,2,3,4>

  45. Application 2. Sorting with Minimum Unsigned Reversals Problem: Given a sequence H which is a permutation of {1,2,3,…,n-1,n}, use the minimum number of unsigned reversals to convert H into the identity permutation I=<1,2,3,…,n>. Example: <3,4,1,2> → <3,2,1,4> → <1,2,3,4> Related problem: Sorting with Minimum Signed Reversals.

  46. Application 2. Sorting with Minimum Unsigned Reversals Problem: Given a sequence H which is a permutation of {1,2,3,…,n-1,n}, use the minimum number of unsigned reversals to convert H into the identity permutation I=<1,2,3,…,n>. Example: <3,4,1,2> → <3,2,1,4> → <1,2,3,4> Related problem: Sorting with Minimum Signed Reversals. Example: <3,4,1,2> → <-2,-1,-4,-3> (signed reversal)

  47. Application 2. Sorting with Minimum Unsigned Reversals Problem: Given a sequence H which is a permutation of {1,2,3,…,n-1,n}, use the minimum number of unsigned reversals to convert H into the identity permutation I=<1,2,3,…,n>. Example: <3,4,1,2> → <3,2,1,4> → <1,2,3,4> Related problem: Sorting with Minimum Signed Reversals. Example: <3,4,1,2> → <-2,-1,-4,-3> → <1,2,-4,-3> → <1,2,3,4>
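
The two operations can be sketched in a few lines (an illustration with 0-based indices, not part of the talk):

```python
def unsigned_reversal(perm, i, j):
    """Reverse the segment perm[i..j] (0-based, inclusive)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def signed_reversal(perm, i, j):
    """Reverse the segment and flip the sign of each element in it."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

# The slide's examples:
unsigned_reversal([3, 4, 1, 2], 1, 3)   # -> [3, 2, 1, 4]
signed_reversal([3, 4, 1, 2], 0, 3)     # -> [-2, -1, -4, -3]
```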

  48. Application 2. Sorting with Minimum Unsigned Reversals Status: Two of the most important problems in computational genomics. Sorting with Minimum Signed Reversals: (1) ∈P (Hannenhalli and Pevzner, STOC’95), bounds subsequently improved by Kaplan, Shamir and Tarjan (SODA’97), and the current best bound is O(n log n) (Swenson et al., RECOMB’09). (2) The distance (not the actual reversals) can be computed in O(n) time (Bader, Moret and Yan; J. Computational Biology, 2001).

  49. Application 2. Sorting with Minimum Unsigned Reversals Status: Two of the most important problems in computational genomics. Sorting with Minimum Unsigned Reversals: (1) NP-complete (Caprara, RECOMB’97; best paper award). (2) Best approximation factor is 1.375 (Berman, Hannenhalli and Karpinski, ESA’02). (3) No non-trivial FPT algorithm is known. (4) We can have a simple FPT algorithm running in O(4^k·n + n log n) time, using (indirect) weak kernels.

  50. Application 2. Sorting with Minimum Unsigned Reversals Idea: Use the solutions for Sorting with Minimum Signed Reversals as subroutines, searching in the (indirect) weak kernel of size at most 4k.
