  1. HARDNESS OF APPROXIMATIONS

  2. Gap-Introducing Reduction For simplicity we assume that we are always reducing from SAT (or any other NP-hard problem). Let Π be a minimization problem. A gap-introducing reduction from SAT to Π comes with two parameters f and α. • f is a function of the instance. • α is a function of the size of the instance. Given an instance Φ of SAT, it outputs, in polynomial time, an instance x of Π such that: • if Φ is satisfiable, OPT(x) ≤ f(x), and • if Φ is not satisfiable, OPT(x) > α(|x|)·f(x). The gap α(|x|) represents the hardness factor established by the gap-introducing reduction for the NP-hard optimization problem Π. (If Π is a maximization problem, the two conditions become OPT(x) ≥ f(x) and OPT(x) < α(|x|)·f(x), with α(|x|) < 1.)

  3. Gap-Preserving Reduction Once we have obtained a gap-introducing reduction from SAT to an optimization problem, say Π1, a hardness result for another optimization problem, say Π2, can be proved by a gap-preserving reduction from Π1 to Π2. We assume • Π1 is a minimization problem, and • Π2 is a maximization problem. A gap-preserving reduction Γ from Π1 to Π2 comes with 4 parameters (functions), f1, α, f2, β. Given an instance x of Π1, it computes, in polynomial time, an instance y of Π2 such that • OPT(x) ≤ f1(x) implies OPT(y) ≥ f2(y), and • OPT(x) > α(|x|)·f1(x) implies OPT(y) < β(|y|)·f2(y). Since Π1 is a minimization problem, α(|x|) ≥ 1; since Π2 is a maximization problem, β(|y|) ≤ 1. The composed reduction (SAT to Π1 to Π2) shows that there is no β(|y|)-factor approximation algorithm for Π2, assuming P ≠ NP.

  4. Remarks on reductions • The “gap” β can, in general, be bigger or smaller than α. In this sense, “gap-preserving” is a slight misnomer. • An approximation algorithm for Π2 together with a gap-preserving reduction Γ from Π1 to Π2 does not necessarily yield an approximation algorithm for Π1. • A gap-preserving reduction Γ together with an appropriate gap-introducing reduction from SAT to Π1 does suffice for proving a hardness of approximation result for Π2.

  5. Probabilistically Checkable Proof Systems A verifier is a polynomial-time probabilistic Turing machine containing • an input tape, • a work tape, • a tape that contains a random string, and • a tape that contains the proof string, denoted Π. The proof should be thought of as an array of bits, out of which the verifier will examine only a few.

  6. Definitions on PCP I A verifier is (r(n), q(n))-restricted if on each input of size n it • uses at most O(r(n)) random bits for its computation, and • queries at most O(q(n)) bits of the proof. In other words, an (r(n), q(n))-restricted verifier has two associated integers c and k: • the random string has length c·r(n), and • the verifier reads the random string R, computes a sequence of at most k·q(n) locations, and queries these locations in Π.

  7. Definitions on PCP II A verifier M probabilistically checks membership proofs for language L if • for every input x in L, there is a proof Πx that causes M to accept for every random string, i.e., M accepts (x, Πx) with probability 1, and • for any input x not in L, every proof is rejected with probability at least 1/2, i.e., for every proof Π, M accepts (x, Π) with probability at most 1/2. The probability of accepting in the case x not in L is called the error probability. Observation: The choice of the probability 1/2 in the second condition is arbitrary. By repeating the verifier’s program O(1) times, and rejecting if the verifier rejects even once, the error probability can be reduced to any arbitrary positive constant.

  8. A language L is in PCP(r(n), q(n)), written L ∈ PCP(r(n), q(n)), if there is an (r(n), q(n))-restricted verifier M that probabilistically checks membership proofs for L. Note that NP = PCP(0, poly(n)), since NP is the set of languages for which membership proofs can be checked in deterministic polynomial time.

  9. PCP Theorem Gives an alternative characterization of NP in terms of PCP. Theorem: NP = PCP(log n, 1). The proof is divided into two parts: • PCP(log n, 1) ⊆ NP (easy to prove), and • NP ⊆ PCP(log n, 1) (difficult to prove). In terms of the 3SAT problem, the interesting and difficult part of the PCP theorem is decreasing the error probability to < 1/2 (i.e., maximizing the acceptance probability of M), even though the verifier M is allowed to read only a constant number of bits. Use of the PCP theorem: the PCP theorem directly gives an optimization problem, in particular a maximization problem, for which there is no factor 1/2 approximation algorithm, assuming P ≠ NP.

  10. Maximizing Accept Probability Maximization problem: Let M be a PCP(log n, 1) verifier for SAT. On input Φ, a SAT instance, find a proof that maximizes the probability of acceptance of M. Theorem: Assuming P ≠ NP, there is no factor 1/2 approximation algorithm for the above problem. Proof: Suppose there is a factor 1/2 approximation algorithm. If Φ is satisfiable, then this algorithm must provide a proof on which M’s acceptance probability is at least 1/2; if Φ is not satisfiable, every proof is accepted with probability < 1/2. But the acceptance probability of a given proof can be computed in polynomial time, by simply simulating M for all random strings of length O(log n). Thus SAT can be decided in polynomial time, contradicting the assumption that P ≠ NP.
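
As a minimal illustration of the last step, the Python sketch below enumerates all random strings to compute the acceptance probability exactly; the verifier interface (a callable verifier(phi, proof, r)) and the parameter c are assumptions made for this example, not part of the slides.

    from itertools import product
    import math

    def acceptance_probability(verifier, phi, proof, c):
        # Exactly computes Pr_r[ verifier accepts (phi, proof, r) ] for a
        # PCP(log n, 1) verifier: there are only 2^(c*log2 n) = n^c random
        # strings, so brute-force enumeration runs in polynomial time.
        n = max(len(phi), 2)
        r_len = c * math.ceil(math.log2(n))
        accepted = 0
        for bits in product("01", repeat=r_len):      # all n^c random strings
            if verifier(phi, proof, "".join(bits)):
                accepted += 1
        return accepted / 2 ** r_len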

  11. Reductions Using PCP-theorem

  12. Inapproximability Results Recent inapproximability results divide into four broad classes, based on the approximation ratio that is provably hard to achieve: constant factors (e.g., MAX-3SAT, Vertex Cover, Steiner Tree), Ω(log n) (e.g., Set Cover), 2^(log^(1−ε) n), and n^ε (e.g., Clique).

  13. Hardness of MAX-3SAT MAX-3SAT is the restriction of MAX-SAT to instances in which each clause has at most 3 literals. In the MAX-3SAT optimization problem, feasible solutions are truth assignments and the objective function is the number of satisfied clauses. MAX-3SAT plays a similar role in hardness of approximation as 3SAT plays in the theory of NP-hardness. We will prove the next theorem. Theorem: There is a constant ε > 0 for which there is a gap-introducing reduction from SAT to MAX-3SAT that transforms a boolean formula Φ to a formula ψ such that • if Φ is satisfiable, OPT(ψ) = m, and • if Φ is not satisfiable, OPT(ψ) < (1 − ε)m, where m denotes the number of clauses in ψ.

  14. Strategy The proof will be accomplished in two stages. Definition of the MAX k-FUNCTION SAT problem. Given • n boolean variables x1, x2, …, xn, and • m functions f1, f2, …, fm, each of which is a function of k of the boolean variables. Find a truth assignment to x1, …, xn that maximizes the number of functions satisfied. Here k is assumed to be a fixed constant.

  15. From SAT to MAX k-FUNCTION SAT Lemma: There is a constant k for which there is a gap-introducing reduction from SAT to MAX k-FUNCTION SAT that transforms a boolean formula Φ to an instance I of MAX k-FUNCTION SAT such that • if Φ is satisfiable, OPT(I) = m, and • if Φ is not satisfiable, OPT(I) < m/2, where m is the number of functions in I. Proof • Φ: instance of SAT of length n. • M: a PCP(log n, 1) verifier for SAT, with associated parameters c and q (the constant k of the lemma can be taken to be q). Corresponding to each random string r of length c·log n, M reads q bits of the proof; thus M reads a total of at most q·n^c bits of the proof. • B: the set of boolean variables corresponding to each of these bits. • fr: a boolean function of q variables from B corresponding to each random string r; fr evaluates to true iff M accepts on random string r when the queried positions are set accordingly. There is a polynomial-time algorithm which, given input Φ, outputs the m = n^c functions fr. If Φ is satisfiable, there is a proof Π that makes M accept with probability 1; the corresponding truth assignment to B satisfies all n^c functions fr. If Φ is not satisfiable, then on every proof Π, M accepts with probability < 1/2; thus every truth assignment satisfies fewer than (1/2)·n^c functions fr.

  16. Proof of Hardness of MAX-3SAT We show how to obtain a 3SAT formula from the n^c functions. • Ψ: the boolean formula defined as the conjunction of the Ψr over all r. • Ψr: the function fr written as a CNF formula containing at most 2^q clauses; each clause contains at most q literals. • ψ: the 3SAT formula obtained by using the standard trick of introducing new variables into every clause of Ψ containing more than 3 literals. The resulting formula contains at most q·2^q·n^c clauses. If a truth assignment satisfies fr, then it satisfies all clauses of Ψr; on the other hand, if it does not satisfy fr, then it must leave at least one clause of Ψr unsatisfied. Thus, if Φ is not satisfiable, any truth assignment must leave more than (1/2)·n^c clauses of ψ unsatisfied, i.e., more than a 1/(2q·2^q) fraction of all clauses. If Φ is satisfiable, then there is a truth assignment satisfying all clauses of ψ. This proves the theorem with ε = 1/(2q·2^q).
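
A minimal sketch of the “standard trick” referred to above, under the assumption that clauses are lists of signed integers (a negative integer denotes a negated variable) and that next_aux is the first unused variable index; the function name and representation are illustrative.

    def clause_to_3sat(clause, next_aux):
        # Split (l1 v ... v lk) with k > 3 into an equisatisfiable set of
        # 3-literal clauses using fresh auxiliary variables:
        # (l1 v l2 v y1)(-y1 v l3 v y2)...(-y_{k-3} v l_{k-1} v lk).
        # Returns (clauses, next_aux) with next_aux updated.
        if len(clause) <= 3:
            return [list(clause)], next_aux
        clauses = []
        prev = next_aux
        clauses.append([clause[0], clause[1], prev])
        next_aux += 1
        for lit in clause[2:-2]:
            clauses.append([-prev, lit, next_aux])
            prev = next_aux
            next_aux += 1
        clauses.append([-prev, clause[-2], clause[-1]])
        return clauses, next_aux

If the original clause is satisfied, the auxiliary variables can be set so that every new clause is satisfied; if it is not, at least one new clause stays unsatisfied under any setting of the auxiliaries, which is exactly the property used in the argument above.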

  17. MAX-3SAT with Bounded Occurrences of Variables Useful notions and some notation. Expander graph G(V,E): • every vertex has the same degree, and • for any nonempty subset S ⊂ V, |E(S, S̄)| > min(|S|, |S̄|), where E(S, S̄) is the set of edges in the cut (S, S̄). • Gx: a degree-14 expander graph on k vertices; label the vertices with distinct boolean variables x1, x2, …, xk. • ψx: a CNF formula that, corresponding to each edge (xi, xj) of Gx, includes the clauses (xi ∨ ¬xj) and (¬xi ∨ xj). • B: the set of boolean variables occurring in Φ. • Consistent truth assignments to x1, …, xk: all the variables are set to true or all are set to false. The purpose of the expander graph is to ensure that in any optimal truth assignment, a given set of boolean variables must have a consistent assignment, i.e., all true or all false.

  18. Description of Reduction • W.l.o.g. it is assumed that every variable occurs in Φ at least N0 times, for a fixed constant N0; if not, we can replicate each clause N0 times. • For each variable x in B which occurs k times in Φ: Vx = {x1, …, xk} is a set of completely new variables; let Gx be a degree-14 expander on k vertices, label its vertices with the variables of Vx, and obtain the formula ψx; replace each occurrence of variable x in Φ by a distinct variable from Vx. After the end of the above loop, every occurrence of a variable in Φ has been replaced by a distinct new variable; Φ′ denotes the formula obtained from Φ after the replacement, and ψ is Φ′ together with all the formulas ψx. • Type I clauses: the clauses of Φ′. • Type II clauses: the remaining clauses of ψ, i.e., the clauses of the ψx. Each variable of Vx occurs exactly 29 times in ψ: once in a Type I clause and 28 times in Type II clauses (its vertex has degree 14, and each incident edge contributes two 2-literal clauses). (A sketch of this replacement step follows below.)
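
A sketch of the replacement step described in this slide, assuming a helper expander_edges(k) that returns the edge list of a degree-14 expander on vertices 0..k-1 (its construction is not shown) and clauses represented as lists of signed integers:

    def bound_occurrences(clauses, expander_edges):
        # Transform `clauses` into an instance in which every variable occurs
        # 29 times: each occurrence of x is replaced by a distinct fresh copy,
        # and two consistency clauses are added per expander edge.
        occurrences = {}                      # variable -> list of (clause, slot)
        for ci, clause in enumerate(clauses):
            for si, lit in enumerate(clause):
                occurrences.setdefault(abs(lit), []).append((ci, si))

        new_clauses = [list(c) for c in clauses]     # will become Type I clauses
        next_var, copies = 1, {}
        for x, occs in occurrences.items():
            k = len(occs)
            vx = list(range(next_var, next_var + k))  # fresh variables x_1..x_k
            next_var += k
            copies[x] = vx
            for (ci, si), xi in zip(occs, vx):        # replace each occurrence
                sign = 1 if clauses[ci][si] > 0 else -1
                new_clauses[ci][si] = sign * xi
            for i, j in expander_edges(k):            # Type II clauses
                new_clauses.append([vx[i], -vx[j]])
                new_clauses.append([-vx[i], vx[j]])
        return new_clauses, copies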

  19. Proof (continued) An inconsistent truth assignment partitions the vertices of Gx into two sets, S and S̄ (the vertices set to true and those set to false). Corresponding to each edge in the cut (S, S̄), ψx has an unsatisfied clause. Since, by the expander property, |E(S, S̄)| > min(|S|, |S̄|), the number of unsatisfied clauses in ψx is at least |S| + 1, assuming w.l.o.g. that S is the smaller set. Claim: An optimal truth assignment for ψ must satisfy all Type II clauses, and therefore must be consistent on each set Vx. Proof: By contradiction. Suppose τ is an optimal assignment for ψ that is not consistent on some Vx, with x in B; thus τ partitions the vertices of Gx into two nonempty sets, S and S̄, with S the smaller one. Flip the truth assignment to the variables in S, keeping the rest of the assignment the same as τ. As a result, some Type I clauses that were satisfied under τ may now be unsatisfied; each of these must contain a variable of S, so their number is at most |S|. On the other hand, we gain at least |S| + 1 newly satisfied Type II clauses, corresponding to the edges in the cut. Consequently the flipped assignment satisfies more clauses than τ, contradicting the optimality of τ.

  20. Gap analysis • m: number of clauses in Φ. • m′: number of clauses in ψ. • The total number of occurrences of all variables in Φ is at most 3m. Each occurrence participates in 28 Type II two-literal clauses; since each such clause contains two of the new variables, this gives at most (28·3m)/2 = 42m Type II clauses. In addition, ψ has m Type I clauses. Therefore m′ ≤ m + 42m = 43m. If Φ is satisfiable, then by construction ψ is satisfiable, i.e., OPT(ψ) = m′. If Φ is unsatisfiable, then more than εm clauses of Φ, and hence more than εm Type I clauses of ψ, remain unsatisfied under any truth assignment, where ε is the constant from the MAX-3SAT hardness theorem. Thus OPT(ψ) < m′ − εm ≤ (1 − ε/43)·m′, i.e., the reduction is gap-introducing for MAX-3SAT(29) with constant εb = ε/43. (The arithmetic is written out below.)
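
The counting above, written out in LaTeX (no quantities beyond those on the slide are used):

    m' \le m + \frac{28 \cdot 3m}{2} = 43m, \qquad
    \mathrm{OPT}(\psi) < m' - \varepsilon m \le m' - \frac{\varepsilon}{43}\, m' = \Big(1 - \frac{\varepsilon}{43}\Big) m'.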

  21. Hardness of Vertex Cover Input • An undirected graph G=(V,E), and • a cost function on vertices. Find A minimum cost vertex cover, i.e., a subset V′ ⊆ V such that every edge has at least one endpoint incident at V′. Cardinality vertex cover is the special case in which all vertices are of unit cost. VC(d): the restriction of the cardinality vertex cover problem to instances in which each vertex has degree at most d. Theorem: There is a gap-preserving reduction from MAX-3SAT(29) to VC(30) that transforms a boolean formula Φ to a graph G=(V,E) of degree at most 30 such that • if OPT(Φ) = m, then OPT(G) ≤ (2/3)|V|, and • if OPT(Φ) < (1 − εb)m, then OPT(G) > (2/3)(1 + εb/2)|V|, where m is the number of clauses in Φ and εb is the constant established for MAX-3SAT(29).

  22. Description of reduction • W.l.o.g. it is assumed that each clause of Φ has exactly three literals. • Corresponding to each clause of Φ, graph G has 3 vertices, each labeled with one literal of the clause. Thus |V| = 3m. • Graph G has two types of edges: for each clause of Φ, G has 3 edges connecting its three vertices (a triangle), and for each u, v in V, if the literals labeling u and v are negations of each other, then (u,v) is an edge in E. By construction each vertex of G has two edges of the first type and, since every variable occurs at most 29 times in Φ, at most 28 edges of the second type. Hence G has degree at most 28 + 2 = 30.
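
A minimal sketch of this construction, assuming clauses are lists of signed integers; the vertex names (clause index, slot) are illustrative:

    from itertools import combinations

    def build_vc_graph(clauses):
        # Build the VC(30) instance: a triangle for every clause, plus an edge
        # between every pair of vertices labeled by complementary literals.
        vertices = [(ci, si) for ci, clause in enumerate(clauses)
                             for si in range(len(clause))]
        label = {(ci, si): clauses[ci][si] for ci, si in vertices}
        edges = set()
        for ci, clause in enumerate(clauses):            # clause triangles
            for (s1, _), (s2, _) in combinations(enumerate(clause), 2):
                edges.add(((ci, s1), (ci, s2)))
        for u, v in combinations(vertices, 2):           # complementary literals
            if label[u] == -label[v]:
                edges.add((u, v))
        return vertices, edges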

  23. Proof (continued) Claim: The size of a maximum independent set in G is precisely OPT(Φ). Proof: • Consider an optimal truth assignment for Φ and, from each satisfied clause, pick one vertex corresponding to a satisfied literal. Clearly the picked vertices form an independent set of size OPT(Φ). Conversely: • Consider an independent set I in G and set the literals corresponding to its vertices to true (this is consistent, since complementary literals are adjacent in G). Any extension of this truth setting to all variables satisfies at least |I| clauses of Φ. Gap Analysis: Note that the complement of a maximum independent set in G is a minimum vertex cover, so OPT_VC(G) = |V| − OPT(Φ) = 3m − OPT(Φ), and the two conditions of the theorem follow.
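
The gap analysis implied by the claim, written out (using only |V| = 3m and the MAX-3SAT(29) gap εb):

    \mathrm{OPT}_{\mathrm{VC}}(G) = |V| - \mathrm{OPT}(\Phi) = 3m - \mathrm{OPT}(\Phi);
    \mathrm{OPT}(\Phi) = m \;\Rightarrow\; \mathrm{OPT}_{\mathrm{VC}}(G) = 2m = \tfrac{2}{3}|V|;
    \mathrm{OPT}(\Phi) < (1-\varepsilon_b)m \;\Rightarrow\; \mathrm{OPT}_{\mathrm{VC}}(G) > (2+\varepsilon_b)m = \tfrac{2}{3}\big(1 + \tfrac{\varepsilon_b}{2}\big)|V|.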

  24. Hardness of Steiner Tree Input • An undirected graph G=(V,E), • nonnegative edge costs, and • a partition of the vertices into two sets, required and Steiner. Find A minimum cost tree in G that contains all the required vertices and any subset of the Steiner vertices. Theorem: There is a gap-preserving reduction from VC(30) to the Steiner tree problem. It transforms an instance G=(V,E) of VC(30) to an instance H(R,S,cost) of Steiner tree, where R and S are the required and Steiner vertices of H, and cost is a metric on R ∪ S. It satisfies • if OPT_VC(G) ≤ (2/3)|V|, then OPT(H) ≤ |R| + (2/3)|V| − 1, and • if OPT_VC(G) > (2/3)(1 + εb/2)|V|, then OPT(H) > |R| + (2/3)(1 + εb/2)|V| − 1.

  25. Description of reduction (construction of graph H) Construct a graph H(R, S, cost) such that G(V,E) has a vertex cover of size c iff H has a Steiner tree of cost |R| + c − 1. • H has a required vertex r_e corresponding to each edge e in E. • H has a Steiner vertex s_u corresponding to each vertex u in V. • Edge costs in H: an edge (r_e, s_u) has cost 1 if edge e is incident at vertex u in G; an edge between two Steiner vertices has cost 1; all remaining edges have cost 2. Since all costs are 1 or 2, cost is a metric.
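
A sketch of the construction of H, assuming edges of G are given as pairs (tuples) of vertices and that edges between two Steiner vertices receive cost 1 (this is the choice that makes the forward direction of the claim go through); the node encodings are illustrative:

    def build_steiner_instance(vertices, edges):
        # R: one required vertex per edge of G; S: one Steiner vertex per
        # vertex of G.  cost is 1 for incident (required, Steiner) pairs and
        # for Steiner-Steiner pairs, 2 otherwise (a metric).
        R = [("r", e) for e in edges]
        S = [("s", u) for u in vertices]
        cost, all_nodes = {}, None
        all_nodes = R + S
        for i, a in enumerate(all_nodes):
            for b in all_nodes[i + 1:]:
                if a[0] == "s" and b[0] == "s":
                    c = 1                                  # Steiner-Steiner
                elif a[0] != b[0]:
                    e = a[1] if a[0] == "r" else b[1]
                    u = a[1] if a[0] == "s" else b[1]
                    c = 1 if u in e else 2                 # incidence check
                else:
                    c = 2                                  # required-required
                cost[(a, b)] = cost[(b, a)] = c
        return R, S, cost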

  26. Proof Claim: G(V,E) has a vertex cover of size c iff H has a Steiner tree of cost |R| + c − 1. Proof of the forward direction: • Let G have a vertex cover of size c, and • let Sc be the set of Steiner vertices in H corresponding to the c vertices in the cover. There is a Steiner tree in H spanning R ∪ Sc that uses cost-1 edges only, since every edge e in E must be incident at a vertex in the cover. The cost of this Steiner tree is |R| + c − 1. Reverse direction: Let T be a Steiner tree in H of cost |R| + c − 1. We will show that • T can be transformed into a Steiner tree of the same cost that uses edges of cost 1 only; such a tree must contain exactly c Steiner vertices, and • every required vertex of H then has a unit cost edge to one of these Steiner vertices. Therefore the corresponding c vertices of G form a cover.

  27. Proof (continued) Let (u,v) be an edge of cost 2 in T (w.l.o.g. vertices u and v are both required). • Let eu be the edge in G corresponding to the required vertex u in T, and • let ev be the edge in G corresponding to the required vertex v in T. Since G is connected, there is a path p in G from one of the endpoints of eu to one of the endpoints of ev. Removing edge (u,v) from T gives two connected components: • let R1 be the set of required vertices in the first connected component, and • let R2 be the set of required vertices in the second connected component. Vertices u and v lie in different components, so path p in G must have two adjacent edges, say (a,b) and (b,c), such that their corresponding required vertices in H, say w and w′, lie in R1 and R2 respectively.

  28. Proof (continued) Let sb be the Steiner vertex in H corresponding to b. Now adding the edges (w, sb) and (sb, w′), each of unit cost (since edges (a,b) and (b,c) are both incident at b), reconnects the two components; if a cycle is created, delete any one of its edges, so the total cost does not increase. Repeating this for every cost-2 edge yields a Steiner tree of no greater cost that uses unit-cost edges only. Gap Analysis: by the claim, OPT(H) = |R| + OPT_VC(G) − 1, which gives the two conditions of the theorem; since |R| = |E| ≤ 15|V| (each vertex of G has degree at most 30), the vertex cover gap translates into a constant-factor gap for the Steiner tree problem. (The arithmetic is written out below.)
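
A worked version of the missing gap analysis, reconstructed from the claim OPT(H) = |R| + OPT_VC(G) − 1 and the bound |R| = |E| ≤ 15|V| (degree at most 30); OPT_no and OPT_yes denote the optimal Steiner tree cost in the two cases of the VC(30) gap:

    \frac{\mathrm{OPT_{no}}}{\mathrm{OPT_{yes}}}
    \;\ge\; \frac{|R| + \tfrac{2}{3}\big(1+\tfrac{\varepsilon_b}{2}\big)|V| - 1}{|R| + \tfrac{2}{3}|V| - 1}
    \;\ge\; 1 + \frac{\tfrac{1}{3}\varepsilon_b\,|V|}{15|V| + \tfrac{2}{3}|V|}
    \;=\; 1 + \frac{\varepsilon_b}{47}.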

  29. HARDNESS OF CLIQUE PROBLEM

  30. A First Approach to the Hardness of the Clique Problem Input • An undirected graph G=(V,E), and • nonnegative weights on vertices. Cardinality version: all weights are equal to 1. Find A clique of maximum weight. A clique in G is a subset S of vertices such that for each pair u, v in S, (u,v) is in E; its weight is the sum of the weights of its vertices. Theorem: For fixed constants b and q, there is a gap-introducing reduction from SAT to Clique that transforms a boolean formula Φ of size n to a graph G=(V,E), where |V| = 2^q·n^b, such that • if Φ is satisfiable, OPT(G) ≥ n^b, and • if Φ is not satisfiable, OPT(G) < n^b/2.

  31. Preparation for the proof • F: a PCP(log n, 1) verifier for SAT; F requires b·log n random bits and queries q bits of the proof. • r: a binary string of b·log n bits. • τ: a truth assignment to q boolean variables. • Q(r): the q positions in the proof that F queries when it is given r as the random string. • p(r): the truth setting assigned by proof p to the positions Q(r). • u_{r,τ}: a vertex in G for each choice of the random string r of b·log n bits and each truth assignment τ to q boolean variables. The vertex u_{r,τ} is called accepting if F accepts when its random string is r and the queried positions Q(r) are set according to τ.

  32. Proof • Vertices u_{r1,τ1} and u_{r2,τ2} are consistent if τ1 and τ2 agree at each position at which Q(r1) and Q(r2) overlap. • Two distinct vertices u_{r1,τ1} and u_{r2,τ2} are connected by an edge in G iff they are consistent and both accepting. • If Φ is satisfiable, there is a proof p on which F accepts for every choice r of the random string. There are 2^{b·log n} = n^b possible random strings, and the vertices u_{r,p(r)}, one for each random string r, are pairwise consistent and accepting; thus they form a clique of size n^b. • If Φ is not satisfiable, suppose that C is a clique in G. Since the vertices of C are pairwise consistent, there is a proof p such that, for each vertex u_{r,τ} of C, the positions Q(r) of p contain the truth assignment τ. The |C| random strings of the clique vertices are distinct (two vertices sharing the same random string cannot both be in the clique), and F accepts on each of them with proof p. The total number of vertices in G is 2^q·n^b and the total number of random strings is n^b, so the probability of acceptance is at least |C|/n^b; since this probability is < 1/2, |C| < n^b/2.

  33. Generalization of the definition of PCP In the previous proof, the hardness factor established is precisely the bound on the error probability of the PCP verifier for SAT. Question: Why do we need to generalize the definition of PCP? Answer: The error probability needs to be made inverse polynomial. Definition of PCP_{c,s}[r(n), q(n)]: we introduce two parameters, • c, which stands for completeness, and • s, which stands for soundness. Definition: L ∈ PCP_{c,s}[r(n), q(n)] if there is a verifier V which, on input x of length n, obtains a random string of length O(r(n)), queries O(q(n)) bits of the proof, and satisfies: • if x is in L, there is a proof y that makes V accept with probability ≥ c, and • if x is not in L, for every proof y, V accepts with probability < s. According to this notation, the previous definition corresponds to PCP(r(n), q(n)) = PCP_{1, 1/2}[r(n), q(n)].

  34. How to reduce the parameter s Two ways: • Obvious way: simulate a PCP(log n, 1) verifier k times and accept iff the verifier accepts every time. Simulating k times reduces the soundness to (1/2)^k; however, this increases the number of random bits needed to O(k·log n) and the number of query bits to O(k). • Clever way: use a constant-degree expander graph to generate O(log n) strings of b·log n bits each, using only O(log n) truly random bits, and simulate the verifier using these O(log n) strings as its random strings. Clearly these are not truly random strings, but the properties of expanders show that they are almost random: the probability of error still drops exponentially in the number of times the verifier is simulated.

  35. A useful theorem Let H be a graph with the following properties: • a constant-degree expander on n^b vertices, • each vertex having a unique label of b·log n bits. Theorem: Let S be any set of vertices of H with |S| < n^b/2. Then there is a constant k such that a random walk on H of length k·log n stays entirely inside S with probability less than 1/n. Question 1: Why do we introduce the graph H? Answer: We will use it to generate O(log n) strings of b·log n bits each, using only O(log n) truly random bits; the verifier will be simulated using these O(log n) strings as ‘random’ strings. Question 2: How do we construct a random walk on graph H of length O(log n)? Answer: We use only O(log n) random bits: • b·log n bits to pick the starting vertex, and • a constant number of bits to pick each successive vertex. (A sketch of this generator follows below.)
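
A sketch of the walk-based pseudo-random string generator described in the answer to Question 2; the neighbor function of the explicit expander H is assumed to be given, and the start-vertex sampling is simplified for illustration:

    import random, math

    def expander_walk_strings(n, b, k, neighbors):
        # Generate k*log(n) + 1 pseudo-random b*log(n)-bit labels from a single
        # random walk on an expander H with n**b vertices.  Only O(log n) truly
        # random bits are consumed: b*log(n) for the start vertex and O(1) per
        # step (one choice among a constant number of neighbors).
        bits = b * math.ceil(math.log2(n))
        v = random.randrange(n ** b)                   # random start vertex
        labels = [format(v, "0%db" % bits)]
        for _ in range(k * math.ceil(math.log2(n))):   # k*log n steps
            v = random.choice(neighbors(v))            # O(1) random bits each
            labels.append(format(v, "0%db" % bits))
        return labels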

  36. Theorem Theorem: NP = PCP_{1, 1/n}[log n, log n]. The proof is divided into two parts; we prove the harder containment, NP ⊆ PCP_{1, 1/n}[log n, log n]. Let L ∈ NP = PCP(log n, 1), which means there is a verifier F for L which requires b·log n random bits and queries q bits of the proof, where b and q are constants. We give a verifier F′ for language L which • constructs an expander graph H as above, • constructs a random walk on H of length k·log n using only O(log n) random bits (by the construction of H, the label of each vertex on this walk specifies a b·log n bit string), and • uses these k·log n + 1 strings as the ‘random’ strings on which it simulates verifier F, accepting iff F accepts on all of them.

  37. Proof • Consider x in L. Let p be a proof that makes verifier F accept with probability 1. Given proof p, F′ also accepts x with probability 1; hence the completeness parameter is 1 (c = 1). • Consider x not in L, and let p be an arbitrary proof supplied to F′. Given proof p, verifier F accepts on fewer than half of the n^b random strings of length b·log n. Let S denote the corresponding set of vertices of H, so |S| < n^b/2. Since F′ accepts x iff F accepts x on all k·log n + 1 random strings, F′ accepts x only if the random walk remains entirely inside S; by the previous theorem, the probability of this event is < 1/n. Thus the soundness of F′ is 1/n. Observe that F′ requires only O(log n) random bits and queries O(log n) bits of the proof.

  38. Attack on the Hardness of the Clique Problem Theorem: For fixed constants b and q, there is a gap-introducing reduction from SAT to Clique that transforms a boolean formula Φ of size n to a graph G=(V,E), where |V| = n^{b+q}, such that • if Φ is satisfiable, OPT(G) ≥ n^b, and • if Φ is not satisfiable, OPT(G) < n^{b−1}. Proof: Let F be a PCP_{1,1/n}[log n, log n] verifier for SAT that requires b·log n random bits and queries q·log n bits of the proof. The graph G is built as before: a vertex u_{r,τ} for each of the n^b random strings r and each of the 2^{q·log n} = n^q truth assignments τ to the queried positions Q(r), with edges between consistent, accepting vertices. • If Φ is satisfiable and p is a good proof, choose the n^b vertices u_{r,p(r)} of G: the q·log n positions of p associated with each chosen vertex contain the corresponding assignment τ, so these vertices form a clique. • If Φ is not satisfiable, suppose that C is a clique in G. We have shown that any clique C in G gives rise to a proof that is accepted by F with probability at least |C|/n^b. Since the soundness of F is 1/n, |C|/n^b < 1/n, i.e., the largest clique in G is of size < n^{b−1}.

  39. Attack on the Hardness of the Clique Problem II Corollary: There is no factor 1/n_G^ε approximation algorithm for the cardinality clique problem, assuming P ≠ NP, where n_G = |V| is the number of vertices of the graph and ε = 1/(b+q).

  40. HARDNESS OF SET-COVER PROBLEM

  41. A known approximation algorithm Input • A universe U of n elements, • a collection of subsets of U, S = {S1, …, Sk}, and • a cost function c: S → Q+. Find A minimum cost subcollection of S that covers all elements of U. A greedy approximation algorithm with factor H_n = 1 + 1/2 + … + 1/n = O(log n): C ← ∅; while C ≠ U do: define the cost-effectiveness of a set S to be c(S)/|S − C|; find the set whose cost-effectiveness is smallest, say S; pick S, and for each e in S − C, set price(e) = c(S)/|S − C|; C ← C ∪ S; end while; output the picked sets. (A runnable sketch follows below.)
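
A minimal runnable version of the greedy algorithm above (the argument names and dictionary-based representation are illustrative):

    def greedy_set_cover(universe, sets, cost):
        # `sets` maps a set name to a set of elements, `cost` maps the same
        # names to positive costs.  Repeatedly picks the most cost-effective
        # set; returns (picked_names, price) as in the slide's pseudocode.
        uncovered = set(universe)
        picked, price = [], {}
        while uncovered:
            # cost-effectiveness = cost(S) / number of newly covered elements
            best = min(
                (name for name in sets if sets[name] & uncovered),
                key=lambda name: cost[name] / len(sets[name] & uncovered),
            )
            newly = sets[best] & uncovered
            for e in newly:
                price[e] = cost[best] / len(newly)
            picked.append(best)
            uncovered -= newly
        return picked, price

    # Example:
    # greedy_set_cover({1, 2, 3, 4},
    #                  {"A": {1, 2, 3}, "B": {3, 4}},
    #                  {"A": 1.0, "B": 1.0})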

  42. Two-prover one-round proof systems (introduction) So far, for the purpose of showing hardness of MAX-3SAT and Clique, we did not require a detailed description of the kinds of queries made by the verifier; the only restriction was a bound on the number of queries made to the proof. Question: What is the idea behind a two-prover one-round proof system? Answer: Think of the proof system as a game between the provers and the verifier. A prover is trying to cheat, in the sense that it is trying to convince the verifier that a “no” instance of language L is actually in L. Question: Is there a verifier that can ensure that the probability of getting cheated is < 1/2 for every “no” instance?

  43. Two-prover one-round proof systems Two-prover model: • the verifier is allowed to query two non-communicating provers, denoted P1 and P2, and • the verifier can cross-check the provers’ answers, so the provers’ ability to cheat is restricted in this model. One-round proof system: the verifier is allowed one round of communication with each prover. The simplest way of formalizing this is as follows: • proof P1 is written in an alphabet Σ1, whose size |Σ1| may be unbounded, • proof P2 is written in an alphabet Σ2, whose size |Σ2| may be unbounded, and • the verifier is allowed to query one position of each of the two proofs.

  44. Two-prover one-round proof systems Such a system comes with 3 parameters: • completeness (c), • soundness (s), and • the number of random bits provided to the verifier (r(n)). The two-prover one-round model defines the class 2P1R_{c,s}(r(n)): L ∈ 2P1R_{c,s}(r(n)) if there is a polynomial time bounded verifier M that receives O(r(n)) truly random bits and satisfies • for every input x ∈ L, there is a pair of proofs y1 and y2 that makes M accept with probability ≥ c, and • for every input x ∉ L, every pair of proofs y1 and y2 makes M accept with probability < s.

  45. Theorem Theorem: There is a constant εP > 0 such that SAT ∈ 2P1R_{1, 1−εP}(log n). Proof: divided into two parts; we prove the second (harder) part. We use the fact that there is a gap-introducing reduction from SAT to MAX-3SAT(5), transforming Φ to Ψ, such that for some constant ε5 > 0: • if Φ is satisfiable, OPT(Ψ) = m, and • if Φ is not satisfiable, OPT(Ψ) < (1 − ε5)m, where m is the number of clauses of Ψ.

  46. Proof (continued) The two-prover one-round verifier M for SAT works as follows. • Given a SAT formula Φ, it uses the aforementioned reduction to obtain a MAX-3SAT(5) instance Ψ. • M assumes that P1 contains an optimal truth assignment τ for Ψ (|Σ1| = 2), and that P2 contains, for each clause, the assignment to its three boolean variables under τ (|Σ2| = 2^3). • It uses its O(log n) random bits to pick • a random clause C from Ψ, and • a random boolean variable x occurring in clause C. • M obtains the truth assignment to x and to the three variables of C by querying P1 and P2, respectively. • M accepts iff C is satisfied by the assignment returned by P2 and the two proofs agree on their assignment for variable x. (A sketch of this verifier follows below.)
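
A minimal sketch of one round of the verifier M just described, assuming P1 is represented as an array of bits indexed by variable and P2 as a table giving, for each clause, the bits of its three variables in clause order (these interfaces are illustrative):

    import random

    def two_prover_verifier(clauses, proof1, proof2):
        # `clauses`: list of 3-literal clauses (signed ints); `proof1[v]`: the
        # bit P1 claims for variable v; `proof2[i]`: the triple of bits P2
        # claims for the three variables of clause i.  Returns True iff M accepts.
        i = random.randrange(len(clauses))            # random clause C
        clause = clauses[i]
        j = random.randrange(3)                       # random variable of C
        x = abs(clause[j])
        bits = proof2[i]                              # P2's assignment to C
        satisfied = any((lit > 0) == bool(bits[k]) for k, lit in enumerate(clause))
        consistent = proof1[x] == bits[j]             # cross-check with P1
        return satisfied and consistent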

  47. Proof (continued) • If Φ is satisfiable, then so is Ψ. Clearly there are proofs y1 and y2 (an honest assignment and its restrictions to the clauses) such that M accepts with probability 1. • If Φ is unsatisfiable, any truth assignment must leave strictly more than an ε5 fraction of the clauses unsatisfied. Consider any pair of proofs (y1, y2), and view y1 as a truth assignment, say τ. Let A and B be the two events: A: the random clause C picked by M is not satisfied by τ; B: the random clause C is satisfied by the assignment contained in y2. The clause C is not satisfied by τ with probability > ε5. If A occurs and B does not, M rejects because the clause it checks is unsatisfied; if A and B both occur, then y1 and y2 must be inconsistent on at least one of the three variables of C, and M detects this with probability at least 1/3. Hence, overall, verifier M must reject with probability > ε5/3, so we may take εP = ε5/3. (The calculation is written out below.)
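
The rejection-probability calculation, written out using the events A and B defined above:

    \Pr[\text{M rejects}] \;\ge\; \Pr[A \wedge \bar{B}] + \tfrac{1}{3}\Pr[A \wedge B]
    \;\ge\; \tfrac{1}{3}\Pr[A] \;>\; \tfrac{\varepsilon_5}{3}.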

  48. Main Reduction Theorem: There is a constant c > 0 for which there is a randomized gap-introducing reduction Γ, requiring time n^{O(log log n)}, from SAT to the cardinality set cover problem that transforms a Boolean formula Φ to a set system S over a universal set of size n^{O(log log n)}, such that the size of an optimal cover is larger by a factor of Ω(log n) when Φ is unsatisfiable than when it is satisfiable, where • n is the length of each of the two proofs for SAT under the two-prover one-round model (polynomial in the size of Φ), and • k = O(log log n) is a parameter of the construction. Observation: There is a slight abuse of notation here, since gap-introducing reductions were defined to run in polynomial time.
