
  1. A Little Lemma on Treewidth: An Effective Upperbound on Treewidth Using Partial Fill-in of Separators. Boi Faltings and Martin Charles Golumbic

  2. Definitions. A tree decomposition for a graph G = (V,E) is a tree T whose nodes are labelled by subsets of V called "clusters" (or "bags") such that (1) every vertex v in V appears in at least one cluster, (2) every (u,v) in E must have u and v co-occur in some cluster, and (3) for every v in V, the set of nodes of T which include v in their cluster induces a connected subgraph, i.e., a subtree T(v). The width(T) of a tree decomposition T is the size of the largest cluster minus 1. The treewidth tw(G) of G is the minimum width over all tree decompositions for G. Such a tree decomposition is called a minimum tree decomposition for G.
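
To make the three conditions concrete, the following minimal sketch (in Python; not part of the original slides, with the function name, graph representation and tree representation chosen only for illustration) checks whether a candidate labelled tree is a tree decomposition of a given graph.

    def is_tree_decomposition(vertices, edges, bags, tree_edges):
        """Check conditions (1)-(3) of the definition.

        vertices   : set of graph vertices V
        edges      : set of frozenset({u, v}) graph edges E
        bags       : dict mapping tree node -> set of vertices (its cluster)
        tree_edges : list of pairs of tree nodes forming the tree T
        """
        # (1) every vertex appears in at least one cluster
        if not vertices <= set().union(*bags.values()):
            return False
        # (2) both endpoints of every edge co-occur in some cluster
        for e in edges:
            if not any(e <= bag for bag in bags.values()):
                return False
        # (3) the tree nodes whose cluster contains v induce a subtree T(v)
        for v in vertices:
            nodes = {t for t, bag in bags.items() if v in bag}
            start = next(iter(nodes))
            reached, frontier = {start}, [start]
            while frontier:
                t = frontier.pop()
                for a, b in tree_edges:
                    for x, y in ((a, b), (b, a)):
                        if x == t and y in nodes and y not in reached:
                            reached.add(y)
                            frontier.append(y)
            if reached != nodes:
                return False
        return True

    # Example: a 4-cycle with one chord, decomposed into its two triangles.
    V = {1, 2, 3, 4}
    E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
    print(is_tree_decomposition(V, E, {"a": {1, 2, 3}, "b": {1, 3, 4}}, [("a", "b")]))  # True; width 2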

  3. Treewidth. Remark 1. The treewidth of a
• tree equals 1,
• chordless cycle equals 2,
• clique on k vertices equals k-1,
• stable (independent) set equals zero,
• chordal graph is the size of its largest clique minus 1.
The theory of treewidth, introduced by Robertson and Seymour in 1986, is a very rich topic in discrete mathematics. It has important algorithmic significance, since many NP-complete problems may be solved efficiently on graphs with bounded treewidth.
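
As a small worked example for one entry of Remark 1, the sketch below (our own illustration, not from the slides; the function name is ours) constructs the standard width-2 tree decomposition of a chordless cycle C_n: a path of bags of the form {v_0, v_i, v_(i+1)}.

    def cycle_decomposition(n):
        """Width-2 tree decomposition of the chordless cycle C_n (n >= 3):
        a path of bags {0, i, i+1}, fanning the cycle out from vertex 0."""
        bags = [{0, i, i + 1} for i in range(1, n - 1)]
        tree_edges = [(j, j + 1) for j in range(len(bags) - 1)]  # bags lie on a path
        return bags, tree_edges

    bags, _ = cycle_decomposition(6)
    print(bags)                            # e.g. [{0, 1, 2}, {0, 2, 3}, {0, 3, 4}, {0, 4, 5}]
    print(max(len(b) for b in bags) - 1)   # 2, matching tw(C_n) = 2 from Remark 1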

  4. A Jaca (Jaqueira) Tree Decomposition

  5. Algorithms using Treewidth.
• First, a tree decomposition of the graph with small width is found.
• Then, this tree decomposition is used in a dynamic programming algorithm to solve the original problem.
Bodlaender (1996) gave a linear time algorithm that decides whether a graph has treewidth at most k and, if so, constructs a tree decomposition. However, this algorithm is generally “not useful due to the huge constant factor. Thus, there is a need for practical algorithms that find tree decompositions of given graphs of small width.” From: Treewidth Computations I. Upper Bounds (Bodlaender and Koster, 2009).
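
One simple practical method of this kind is the greedy minimum-degree elimination heuristic. The sketch below (our own illustration, not taken from Bodlaender and Koster's text; the function name and the adjacency-dictionary representation are ours) returns the upper bound on treewidth that the heuristic yields, together with the elimination bags.

    def greedy_min_degree_width(adj):
        """Upper bound on treewidth via minimum-degree elimination.

        adj : dict vertex -> set of neighbours (undirected graph).
        Repeatedly eliminate a vertex of minimum degree, turning its
        neighbourhood into a clique; the largest neighbourhood seen
        bounds the treewidth from above.
        """
        adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
        width, bags = 0, []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))      # minimum-degree vertex
            nbrs = adj[v]
            width = max(width, len(nbrs))
            bags.append({v} | nbrs)
            for a in nbrs:                               # make N(v) a clique
                adj[a] |= nbrs - {a}
                adj[a].discard(v)
            del adj[v]
        return width, bags

    # Example: the 4-cycle 1-2-3-4-1 has treewidth 2.
    C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
    print(greedy_min_degree_width(C4)[0])  # 2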

  6. Treewidth Upper Bounds using Separators. Partitioning a graph using graph separators, particularly clique separators, is a well-known technique for decomposing a graph into smaller units which can be treated independently. Previously known: the treewidth is bounded above by the size of the separator plus the treewidth of the disjoint components,
tw(G) ≤ |S| + tw(G_{V \ S}).
This bound was obtained by the heuristic of filling in all edges of the separator, making it into a clique.
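
To spell out this previously known construction, here is a short sketch (our own illustration; the function name and representation are ours): given a separator S, fill in all edges between vertices of S and split the remainder into the connected components of G_{V \ S}; the bound |S| + tw(G_{V \ S}) then refers to these pieces.

    from itertools import combinations

    def full_fill_in(adj, S):
        """Classical clique-separator construction.

        adj : dict vertex -> set of neighbours, S : set of separator vertices.
        Returns the graph with S turned into a clique, plus the vertex sets
        of the connected components of G - S.
        """
        filled = {v: set(nbrs) for v, nbrs in adj.items()}
        for u, v in combinations(S, 2):          # make the separator a clique
            filled[u].add(v)
            filled[v].add(u)

        components, seen = [], set()
        for start in adj:
            if start in S or start in seen:
                continue
            comp, stack = set(), [start]         # DFS inside G - S
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x)
                stack.extend(adj[x] - S - comp)
            seen |= comp
            components.append(comp)
        return filled, components

    # Example: the path 1-2-3-4-5 with separator S = {3} splits into {1, 2} and {4, 5}.
    path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    print(full_fill_in(path, {3})[1])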

  7. Our New Result on Treewidth Upper Bounds using Separators. A new, tighter upper bound on the treewidth of a graph, obtained by only partially filling in the edges of a separator:
tw(G) ≤ tw(H_S) + tw(G_{V \ S}) + 1.
The method: complete (fill in) just those pairs of separator vertices that are adjacent to a common component. This is a more effective heuristic than filling in the entire separator. [Figure: separator H_S surrounded by components H_1, H_2, H_3, H_4.]

  8. An implication of the Helly property. A well known (folklore) result that already appears in a paper by Bodlaender and Möhring, The pathwidth and treewidth of cographs, SIAM J. Discrete Math. 6 (1993) 181-188. Remark 2. Let T be a tree decomposition for G. If C is a clique of G, then there is a cluster X such that C is a subset of X. (The subtrees T(v) for v in C pairwise intersect by condition (2), so by the Helly property of subtrees of a tree they share a common node, whose cluster contains C.) We will use this in the proof of our result.

  9. Notation. For a subset S ⊆ V, consider the connected components H_1, … , H_t of G_{V \ S}, i.e., the connected subgraphs obtained from G by deleting all vertices of S and their incident edges. Denote H_i = (V_i, E_i), and let S_i ⊆ S denote the set of all vertices of S with neighbors in H_i. Define (x,y) to be a fill-in edge if (x,y) ∉ E and x, y ∈ S_i for some i, and let F be the set of all fill-in edges. Define the graph H = (V, E′) to be the supergraph of G, where E′ = E ∪ F, and let H_S denote the subgraph of H induced by S. In other words, an edge is filled in between u, v of S in E′ if there is a path in G from u to v using only intermediate vertices of some component H_i. Thus, each S_i becomes a clique in H_S.
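
This notation can be phrased directly as code. The following sketch (our own illustration, reusing the adjacency-dictionary representation and naming conventions of the earlier sketches) computes the components H_i, the attachment sets S_i, the fill-in edges F, and the filled-in separator graph H_S.

    def partial_fill_in(adj, S):
        """Partial fill-in of a separator S in the graph given by adj.

        Returns (components, attachment_sets, F, H_S) where
          components      : vertex sets V_i of the components H_i of G - S,
          attachment_sets : S_i = vertices of S with a neighbour in H_i,
          F               : fill-in edges between separator vertices that
                            share a component,
          H_S             : adjacency dict of the subgraph of H induced by S.
        """
        # connected components of G - S
        components, seen = [], set()
        for start in adj:
            if start in S or start in seen:
                continue
            comp, stack = set(), [start]
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x)
                stack.extend(adj[x] - S - comp)
            seen |= comp
            components.append(comp)

        attachment_sets = [{s for s in S if adj[s] & comp} for comp in components]

        F = set()
        for S_i in attachment_sets:              # each S_i becomes a clique
            for u in S_i:
                for v in S_i:
                    if u != v and v not in adj[u]:
                        F.add(frozenset((u, v)))

        H_S = {s: (adj[s] & S) for s in S}       # edges of G inside S ...
        for e in F:                              # ... plus the fill-in edges
            u, v = tuple(e)
            H_S[u].add(v)
            H_S[v].add(u)
        return components, attachment_sets, F, H_S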

  10. Our Result. The following is our new little lemma:
tw(G) ≤ max_i { tw(H_S) , |S_i| + tw(H_i) } ≤ tw(H_S) + tw(G_{V \ S}) + 1.
[Figure: separator H_S surrounded by components H_1, H_2, H_3, H_4.]
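
As a rough illustration of how the two bounds compare on a concrete graph, the sketch below assumes that partial_fill_in from the previous sketch and greedy_min_degree_width from the sketch after slide 5 are in scope; both are our own illustrative helpers, and since the heuristic only upper-bounds treewidth, the two returned quantities are estimates of the respective bounds rather than exact values.

    # Assumes partial_fill_in(...) and greedy_min_degree_width(...) from the
    # earlier sketches are in scope; the heuristic stands in for exact treewidth.

    def separator_bounds(adj, S):
        components, attachment_sets, _, H_S = partial_fill_in(adj, S)

        def tw_estimate(vertex_set, graph):
            # heuristic treewidth bound of the subgraph induced by vertex_set
            sub = {v: graph[v] & vertex_set for v in vertex_set}
            return greedy_min_degree_width(sub)[0]

        tw_HS = greedy_min_degree_width(H_S)[0]
        tw_comps = [tw_estimate(comp, adj) for comp in components]

        old_bound = len(S) + max(tw_comps, default=0)        # |S| + tw(G_{V \ S})
        new_bound = max([tw_HS] + [len(S_i) + t
                                   for S_i, t in zip(attachment_sets, tw_comps)])
        return old_bound, new_bound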

  11. Motivation Although this result is purely mathematical, it has its motivation in an important heuristic method for solving distributed constraint satisfaction problems (DCSP). [Ref: Rina Dechter and Boi Faltings] Specifically, when a separator (cutset) S of the constraint graph of a DCSP can be found which has certain good treewidth properties, it will allow an efficient solution to the DCSP using a hybrid algorithm combining search with dynamic programming.

  12. Motivation, cont. In search algorithms, there is a tradeoff between (1) the time complexity of searching for a solution, (2) the size of the memory (or cache) to store intermediate computations, and (3) the communication complexity for sending and sharing information between parts of the graph. Balancing these three parameters within the available resources is the basis of our motivation.

  13. Proof of the Result: tw(G) ≤ max_i { tw(H_S) , |S_i| + tw(H_i) }.
Let T_S be a minimum tree decomposition for the subgraph H_S and let T_i be a minimum tree decomposition for H_i. We construct a tree decomposition T for G as follows. Since each set S_i forms a clique in H_S, by Remark 2 there is a cluster X_i in T_S containing S_i. To form T, we now
• add all members of S_i to each cluster of T_i, and
• add a new edge from the node x_i with label X_i to an arbitrary node u_i of T_i.
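
This construction can be written out directly. The sketch below (our own illustration, using the same bag-dictionary and edge-list representation as the earlier checker sketch; the function name is ours) glues the component decompositions T_i onto T_S exactly as described.

    def glue_decompositions(TS_bags, TS_edges, component_decomps, attachment_sets):
        """Build the tree decomposition T of the proof.

        TS_bags, TS_edges : tree decomposition T_S of H_S
                            (dict node -> set of vertices, list of node pairs)
        component_decomps : list of (bags_i, edges_i), one pair per H_i
        attachment_sets   : the sets S_i, in the same order
        """
        T_bags = dict(TS_bags)
        T_edges = list(TS_edges)
        for i, ((bags_i, edges_i), S_i) in enumerate(
                zip(component_decomps, attachment_sets)):
            # Remark 2: S_i is a clique of H_S, so some cluster X_i contains it
            x_i = next(t for t, bag in TS_bags.items() if S_i <= bag)
            # add all members of S_i to every cluster of T_i (nodes relabelled by i)
            for t, bag in bags_i.items():
                T_bags[(i, t)] = set(bag) | set(S_i)
            T_edges += [((i, a), (i, b)) for a, b in edges_i]
            # connect an arbitrary node u_i of T_i to the node x_i
            u_i = next(iter(bags_i))
            T_edges.append((x_i, (i, u_i)))
        return T_bags, T_edges

On small examples, the output can be passed to the is_tree_decomposition check from the sketch after the Definitions slide to confirm conditions (1)-(3).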

  14. Proof of the Result, cont.
Claim: T is a tree decomposition for H, and thus also for G.
Condition (1) of the definition of tree decomposition is trivial.
Condition (3) is proven as follows:
• For v ∈ V \ S: T(v) remains unchanged, therefore a subtree of T.
• For x ∈ S: T(x) consists of T_S(x) together with T_i and the connecting edge (u_i, x_i) for each S_i for which x has neighbors in H_i; since x ∈ S_i ⊆ X_i, the node x_i lies in T_S(x), so this union is a subtree of T.
Condition (2) is proven in 3 cases:
• Case 1: u, v ∈ V \ S: If (u,v) ∈ E, then u and v are in the same connected component, say H_j, and they appear together in some cluster at a node of T_j.
• Case 2: u ∈ V \ S and v ∈ S: If (u,v) ∈ E where u is in H_j, then v ∈ S_j, and they appear together in some cluster of T_j (in fact, v appears in every cluster of T_j).
• Case 3: u, v ∈ S: If (u,v) ∈ E, then (u,v) ∈ E′_S, so u and v co-occur in some cluster of T_S, hence in T.
Thus, T is a tree decomposition for G.

  15. Proof of the Result, cont.
Claim: w = width(T) is at most max_i { tw(H_S) , |S_i| + tw(H_i) }.
Let Y be the largest cluster in T, that is, width(T) = |Y| - 1, and let y be the node of T with label Y.
If y is a node of T_S (i.e., Y ⊆ S), then width(T) = tw(H_S) and the claim holds trivially.
If y is a node of some T_j (i.e., Y ∩ V_j ≠ ∅), then Y = S_j ∪ B where B is the largest (original) cluster in T_j and tw(H_j) = |B| - 1. Therefore, since S_j ⊆ S and B ⊆ V_j are disjoint,
width(T) = |Y| - 1 = |S_j| + |B| - 1 = |S_j| + tw(H_j). Q.E.D.

  16. Corollary: tw(G) ≤ tw(H_S) + tw(G_{V \ S}) + 1.
Proof. We have tw(G) ≤ max_i { tw(H_S) , |S_i| + tw(H_i) }. Since |S_i| ≤ |X_i| ≤ tw(H_S) + 1 for all i, and tw(G_{V \ S}) = max_i { tw(H_i) }, the result follows.

  17. Example. Consider the tree decomposition into cliques. [The example graph, its clique tree decomposition, and the resulting bound are shown in figures not included in the transcript.]

  18. Example. [Figures comparing the bound obtained in previous work with the bound from our new work are not included in the transcript.]

  19. Example, cont. A non-optimal separator. [Figure not included in the transcript.]

  20. An Application to Constraint Satisfaction Problems.
• Using dynamic programming, a CSP can be solved in time and memory exponential in the treewidth of the constraint graph.
• Using search, it can be solved in time exponential in the number of nodes but in space linear in the number of nodes.
• Our example: 10 variables, d possible values each; arcs correspond to arbitrary unstructured constraints. [The example constraint graph is shown in a figure not included in the transcript.]

  21. CSP: Our Example.
• Dynamic programming: cubic time and quadratic space in d.
• Tree search: O(d^10) time but memory linear in d.
When the treewidth of a constraint graph makes the memory required for dynamic programming exceed what is available, it becomes desirable to decompose the problem into pieces of lower treewidth that are solved using dynamic programming, and to use search over the variables in the separator.
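
A minimal sketch of this hybrid scheme (our own illustration; the function name, the constraint representation and the brute-force component solver are placeholders, with brute force standing in for dynamic programming over a tree decomposition): search enumerates assignments to the separator, and each component of the constraint graph minus the separator is then solved independently.

    from itertools import product

    def hybrid_solve(variables, domains, constraints, separator):
        """Search over the separator variables; solve each component of the
        constraint graph minus the separator independently.  'constraints'
        maps a pair (u, v) to a predicate on the values of u and v."""
        def consistent(assignment):
            return all(pred(assignment[u], assignment[v])
                       for (u, v), pred in constraints.items()
                       if u in assignment and v in assignment)

        adj = {v: set() for v in variables}          # constraint graph
        for u, v in constraints:
            adj[u].add(v)
            adj[v].add(u)

        rest, components = set(variables) - set(separator), []
        while rest:                                  # components of G - separator
            comp, stack = set(), [next(iter(rest))]
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x)
                stack.extend((adj[x] & rest) - comp)
            rest -= comp
            components.append(sorted(comp))

        for sep_values in product(*(domains[v] for v in separator)):   # search phase
            solved = dict(zip(separator, sep_values))
            if not consistent(solved):
                continue
            for comp in components:                  # solve each piece on its own
                for vals in product(*(domains[v] for v in comp)):
                    candidate = {**solved, **dict(zip(comp, vals))}
                    if consistent(candidate):
                        solved = candidate
                        break
                else:                                # this piece has no extension
                    break
            else:                                    # every piece extended: done
                return solved
        return None

In this brute-force form the running time is roughly d^|S| times the cost of solving the components; replacing the inner loops by dynamic programming over a tree decomposition brings the per-component cost down to exponential in its treewidth, which is the tradeoff discussed on the following slide.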

  22. CSP: Our Example, cont.
• The overall complexity becomes exponential in the size of the separator plus the largest treewidth of a component.
• Choosing the separator S0 = {v3, v5, v8} would reduce the space complexity from O(d^2) to O(d), but the time complexity would grow to O(d^5).
• Our method would pick the larger S = {v3, v4, v5, v8}, since it establishes a bound of tw(G) ≤ 2 rather than 3. This algorithm requires only linear space and cubic time in the domain size d, much better than the decomposition pointed to by earlier results.

  23. Conclusion. We thus believe that Theorem 2 can provide a useful heuristic for decomposing combinatorial problems and solving them efficiently. Bodlaender and Koster (2009), Treewidth Computations I. Upper Bounds, devote their Section 4 to the topic of building a tree decomposition using separators.

  24. Thank you
