
NP-Completeness

  1. NP-Completeness Turing Machine

  2. Hard problems • There are many important problems for which no polynomial-time algorithm is known. • We show that a polynomial-time algorithm for one “hard” problem would imply a polynomial-time algorithm for many (almost all) well-known hard combinatorial problems. • We consider a machine model, i.e. a precise definition of a polynomial-time algorithm. • We will use a very simple model of computation: the Turing machine.

  3. Turing machine • The Turing machine can be regarded as a sequence of simple instructions working on a string. Both the input and the output will be binary strings.

  4. Alphabet • An alphabet is a finite set with at least two elements, not containing the special symbol ⊔ (which we shall use for blanks).

  5. Strings • For an alphabet A we denote by A* the set of all (finite) strings whose symbols are elements of A. We use the convention that A^0 contains exactly one element, the empty string.

  6. Language • A language over A is a subset of A*. • The elements of a language are often called words. • If x ∈ A^n, we write size(x) = n for the length of the string.

  7. Binary strings • We shall often work with the alphabet A = {0, 1} and the set {0, 1}∗ of all 0-1-strings (or binary strings). The components of a 0-1-string are sometimes called its bits. So there is exactly one 0-1-string of zero length, the empty string. A language over {0, 1} is a subset of {0, 1}∗ .

  8. Input (informally) • A Turing machine gets as input a string x ∈ A∗ for some fixed alphabet A. • The input is completed by blank symbols (denoted by ⊔) to a two-way infinite string s ∈ (A ∪ {⊔})^Z. • This string s can be regarded as a tape with a read-write head; only a single position can be read and modified at each step, and the read-write head can be moved by one position in each step.

  9. Turing machine (informally) • A Turing machine consists of a set of N + 1 statements numbered 0, . . . , N . • In the beginning statement 0 is executed and the current position of the string is position 1. • Now each statement is of the following type: Read the bit at the current position, and depending on its value do the following: Overwrite the current bit by some element of A∪{⊔}, possibly move the current position by one to the left or to the right, and go to a statement which will be executed next.

  10. Output • There is a special statement denoted by −1 which marks the end of the computation. • The components of our infinite string s indexed by 1, 2, 3, . . . up to the first ⊔ then yield the output string.

  11. Turing machine (formally) • A Turing machine (with alphabet A) is defined by a function Φ : {0,…,N} × (A ∪ {⊔}) → {−1,…,N} × (A ∪ {⊔}) × {−1,0,1} for some N ∈ Z+. • The computation of Φ on input x, where x ∈ A*, is the finite or infinite sequence of triples (n(i), s(i), π(i)) with n(i) ∈ {−1,…,N}, s(i) ∈ (A ∪ {⊔})^Z and π(i) ∈ Z (i = 0,1,2,…), defined recursively, where • n(i) denotes the current statement, • s(i) represents the string, • π(i) is the current position.

  12. Turing machine (the beginning of computing) • n(0) := 0. • s_j(0) := x_j for 1 ≤ j ≤ size(x), and s_j(0) := ⊔ for all j ≤ 0 and j > size(x). • π(0) := 1.

  13. Turing machine (recursion) • If (n(i), s(i), π(i)) is already defined, we distinguish two cases. • If n(i) ≠ −1, then let (m, σ, δ) := Φ(n(i), s_{π(i)}(i)). • We set s_{π(i)}(i+1) := σ and s_j(i+1) := s_j(i) for j ≠ π(i), • n(i+1) := m, • π(i+1) := π(i) + δ.

  14. Turing machine (the end of computing) • If n(i) = −1, then this is the end of the sequence. • We then define • time(Φ, x) := i, • output(Φ, x) ∈ A^k, where k := min{j ∈ ℕ : s_j(i) = ⊔} − 1, • output(Φ, x)_j := s_j(i) for j = 1,…,k. • If this sequence is infinite (i.e. n(i) ≠ −1 for all i), then we set • time(Φ, x) := ∞. • In this case output(Φ, x) is undefined.
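To make the recursive definition of slides 11–14 concrete, here is a minimal Python sketch of a Turing machine simulator; the function name run_turing_machine, the dictionary-based transition table phi and the step limit max_steps are illustrative choices, not part of the formal definition.

```python
BLANK = "_"  # stands in for the blank symbol ⊔

def run_turing_machine(phi, x, max_steps=10_000):
    """Simulate the computation of Phi on input x.

    phi maps (statement, symbol) -> (next_statement, written_symbol, move),
    where move is -1, 0 or +1 and statement -1 means "halt".
    Returns (time, output), or (inf, None) if no halt within max_steps.
    """
    tape = {j + 1: c for j, c in enumerate(x)}   # s(0): input at positions 1..size(x)
    n, pos = 0, 1                                # n(0) := 0, pi(0) := 1
    for step in range(max_steps):
        if n == -1:                              # end of the computation: time = step
            output = []
            j = 1
            while tape.get(j, BLANK) != BLANK:   # read positions 1, 2, ... up to the first blank
                output.append(tape[j])
                j += 1
            return step, "".join(output)
        m, sigma, delta = phi[(n, tape.get(pos, BLANK))]
        tape[pos] = sigma                        # overwrite the current symbol
        n, pos = m, pos + delta                  # next statement, move head by delta
    return float("inf"), None                    # treated as a non-halting computation

# Example: a one-statement machine that overwrites the first bit with 1 and halts.
phi = {(0, "0"): (-1, "1", 0), (0, "1"): (-1, "1", 0)}
print(run_turing_machine(phi, "011"))            # -> (1, '111')
```

The dictionary tape plays the role of the two-way infinite string s: every position not stored explicitly is read as the blank.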

  15. Polynomial-time Turing machine • Let A be an alphabet, S, T ⊆ A∗ two languages, and f : S → T a function. Let Φ be a Turing machine with alphabet A such that time(Φ, s) < ∞ and output(Φ, s) = f(s) for each s ∈ S. Then we say that Φ computes f. • If there exists a polynomial p such that for all s ∈ S we have time(Φ, s) ≤ p(size(s)), then Φ is a polynomial-time Turing machine. • In the case T = {0,1} we say that Φ decides the language L := {s ∈ S : f(s) = 1}. • If there exists some polynomial-time Turing machine computing a function f (or deciding a language L), then we say that f is computable in polynomial time (or L is decidable in polynomial time, respectively).

  16. Church’s thesis • The Turing machine is the most customary theoretical model for algorithms. Although it seems to be very restricted, it is as powerful as any other reasonable model: the set of computable functions (sometimes also called recursive functions) is always the same.

  17. Decision problems (informally) • Any language L ⊆ {0, 1}∗ can be interpreted as a decision problem: given a 0-1-string, decide whether it belongs to L. • However, we are more interested in problems like the following (Hamiltonian Circuit problem): given an undirected graph G, decide whether G has a Hamiltonian circuit. • Graph → adjacency list → binary string of length O(n + m log n); a possible encoding is sketched below. • For most interesting decision problems the instances are a proper subset of the 0-1-strings. • Not all binary strings are instances of Hamiltonian Circuit, but only those representing an undirected graph. • We require that we can decide in polynomial time whether an arbitrary string is an instance or not.
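As an illustration of the adjacency-list encoding mentioned above, here is a small Python sketch; the separator convention ('1' before each list entry, '0' to close a list) is an assumption made for this example, and any other convention of the same length would do.

```python
def encode_graph(n, edges):
    """Encode an undirected graph on vertices 0..n-1 as a 0-1 string of
    length O(n + m log n): each list entry costs 1 + ceil(log2 n) bits,
    and each of the n adjacency lists is closed by a single '0'."""
    width = max(1, (n - 1).bit_length())                # bits per vertex index
    adjacency = {v: [] for v in range(n)}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    bits = []
    for v in range(n):
        for w in adjacency[v]:
            bits.append("1" + format(w, f"0{width}b"))  # '1' + index of the neighbour
        bits.append("0")                                # end of v's adjacency list
    return "".join(bits)

# A triangle: 2m = 6 entries of 1 + 2 bits each, plus 3 list terminators = 21 bits.
print(encode_graph(3, [(0, 1), (1, 2), (0, 2)]))
```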

  18. Decision problems (formally) • A decision problem is a pair Π = (X, Y), where X is a language decidable in polynomial time and Y ⊆ X. • The elements of X are called instances of Π; • the elements of Y are yes-instances, • the elements of X \ Y are no-instances. • An algorithm for a decision problem (X, Y) is an algorithm computing the function f : X → {0,1}, defined by f(x) = 1 for x ∈ Y and f(x) = 0 for x ∈ X \ Y.

  19. Class P • The class of all decision problems for which there is a polynomial-time algorithm is denoted by P. • In other words, a member of P is a pair (X, Y) with Y ⊆ X ⊆ {0,1}∗ where both X and Y are languages decidable in polynomial time. • To prove that a problem is in P one usually describes a polynomial-time algorithm. • Church’s thesis implies that there is a polynomial-time Turing machine for each problem in P.

  20. Class NP (informally) • It is not known whether Integer Linear Inequalities or Hamiltonian Circuit belong to P. • We do not insist on a polynomial-time algorithm, but we require that for each yes-instance there is a certificate which can be checked in polynomial time. • Note that we do not require a certificate for no-instances.

  21. Class NP (formally) • A decision problem Π = (X, Y) belongs to NP if there is a polynomial p and a decision problem Π' = (X', Y') in P, where • X' = {x#c : x ∈ X, c ∈ {0,1}^⌊p(size(x))⌋}, such that • Y = {y ∈ X : ∃c ∈ {0,1}^⌊p(size(y))⌋ : y#c ∈ Y'}. • Here x#c denotes the concatenation of the string x, the symbol # and the string c. • A string c with y#c ∈ Y' is called a certificate for y. • An algorithm for Π' is called a certificate-checking algorithm.

  22. P ⊆ NP Proposition 9.1 P ⊆ NP.

  23. P ⊆ NP Proposition 9.1 P ⊆ NP. • One can choose p to be identically zero. An algorithm for Π' just deletes the last symbol of the input “x#” and then applies an algorithm for Π.

  24. Problem from NP Proposition 9.2 Hamiltonian Circuit belongs to NP. • For each yes-instance G we take any Hamiltonian circuit of G as a certificate. • To check whether a given edge set is in fact a Hamiltonian circuit of a given graph is obviously possible in polynomial time.
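A minimal Python sketch of such a certificate-checking algorithm; representing the certificate as a vertex sequence (rather than an edge set) and the function name are assumptions made for this illustration.

```python
def is_hamiltonian_circuit(n, edges, certificate):
    """Check in polynomial time whether `certificate` (a sequence of vertices)
    describes a Hamiltonian circuit of the graph on vertices 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    if len(certificate) != n or len(set(certificate)) != n:
        return False                                      # must visit every vertex exactly once
    for i in range(n):                                    # consecutive vertices must be adjacent,
        u, v = certificate[i], certificate[(i + 1) % n]   # including the closing edge
        if frozenset((u, v)) not in edge_set:
            return False
    return True

# The 4-cycle 0-1-2-3-0 is a certificate that C4 has a Hamiltonian circuit.
print(is_hamiltonian_circuit(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3]))  # True
```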

  25. NP • The name NP stands for “nondeterministic polynomial”. To explain this we have to define what a nondeterministic algorithm is. This is a good opportunity to define randomized algorithms in general, a concept mentioned before. The common feature of randomized algorithms is that their computation depends not only on the input but also on some random bits.

  26. Randomized algorithm • A randomized algorithm for computing a function f : S → T can be defined as an algorithm computing a function g : {s#r : s ∈ S, r ∈ {0, 1}^k(s)} → T. • So for each instance s ∈ S the algorithm uses k(s) ∈ Z+ random bits. We measure the running time as a function of size(s) only. • Hence randomized algorithms running in polynomial time can read only a polynomial number of random bits.

  27. Las Vegas algorithm • If g(s#r) = f(s) for all s ∈ S and all r ∈ {0,1}^k(s), we speak of a Las Vegas algorithm. • A Las Vegas algorithm always computes the correct result; only the running time may vary.

  28. Quicksort • Pick an element, called a pivot, from the array. • Reorder the array so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation. • Recursively apply the above steps to the sub-array of elements with smaller values and separately to the sub-array of elements with greater values.
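Quicksort with a randomly chosen pivot is a standard example of a Las Vegas algorithm: the output is always the correctly sorted list, and only the running time depends on the random bits. A minimal Python sketch (the list-comprehension partition is an illustrative, not in-place, variant):

```python
import random

def quicksort(a):
    """Randomized quicksort: always returns the sorted list,
    but the running time depends on the random pivot choices."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                     # the random bits choose the pivot
    less    = [x for x in a if x < pivot]        # partition step
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))       # [1, 1, 2, 3, 4, 5, 6, 9]
```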

  29. Monte Carlo algorithm • If there is at least a positive probability p of a correct answer, independent of the instance, i.e. if for every s ∈ S at least a fraction p of the strings r ∈ {0,1}^k(s) satisfy g(s#r) = f(s), then we have a Monte Carlo algorithm.

  30. Estimation of the value of π • The area of a quarter-circle of radius 1 is π/4. If points are chosen uniformly at random in a square with sides of length 1, the probability that a point falls within the quarter-circle of radius 1 is therefore π/4. A Monte Carlo algorithm randomly places points in the square and uses the fraction of points falling inside the quarter-circle to estimate the value of π.
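A minimal Python sketch of this Monte Carlo estimator; the sample size num_points is an arbitrary choice, and the answer is only approximately correct and varies between runs, which is exactly the Monte Carlo behaviour.

```python
import random

def estimate_pi(num_points=100_000):
    """Estimate pi from the fraction of random points in the unit square
    that fall inside the quarter-circle of radius 1 (area pi/4)."""
    inside = 0
    for _ in range(num_points):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:                 # point lies inside the quarter-circle
            inside += 1
    return 4.0 * inside / num_points

print(estimate_pi())   # roughly 3.14; a different value on every run
```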

  31. Nondeterministic algorithm • If T = {0,1}, and for each s ∈ S with f(s) = 0 we have g(s#r) = 0 for all r ∈ {0,1}^k(s), then we have a randomized algorithm with one-sided error. • If in addition for each s ∈ S with f(s) = 1 there is at least one r ∈ {0,1}^k(s) with g(s#r) = 1, then the algorithm is called a nondeterministic algorithm.

  32. Nondeterministic algorithm • Alternatively a randomized algorithm can be regarded as an oracle algorithm where the oracle produces a random bit (0 or 1) whenever called. • A nondeterministic algorithm for a decision problem always answers “no” for a no-instance, and for each yes-instance there is a chance that it answers “yes”.

  33. Class NP Proposition 9.3 A decision problem belongs to NP if and only if it has a polynomial-time nondeterministic algorithm.

  34. Proof Let Π = (X, Y) be a decision problem in NP, and let p and Π' = (X', Y') be as in the definition of NP, i.e. • X' = {x#c : x ∈ X, c ∈ {0,1}^⌊p(size(x))⌋} and • Y = {y ∈ X : ∃c ∈ {0,1}^⌊p(size(y))⌋ : y#c ∈ Y'}. Then a polynomial-time algorithm for Π' is in fact also a nondeterministic algorithm for Π: the unknown certificate is simply replaced by random bits. Since the number of random bits is bounded by a polynomial in size(x), x ∈ X, so is the running time of the algorithm.

  35. Proof • Conversely, if Π = (X, Y) has a polynomial-time nondeterministic algorithm, computing a function g and using k(x) random bits for instance x, then there is a polynomial p such that k(x) ≤ p(size(x)) for each instance x. • We define X' = {x#c : x ∈ X, c ∈ {0,1}^⌊p(size(x))⌋} and Y' = {x#c ∈ X' : g(x#r) = 1, where r consists of the first k(x) bits of c}. • Then by the definition of nondeterministic algorithms we have (X', Y') ∈ P and Y = {y ∈ X : ∃c ∈ {0,1}^⌊p(size(y))⌋ : y#c ∈ Y'}.

  36. Polynomial reductions • Most decision problems encountered in combinatorial optimization belong to NP. For many of them it is not known whether they have a polynomial-time algorithm. However, one can say that certain problems are not easier than others. To make this precise we introduce the important concept of polynomial reductions.

  37. Polynomial reduction • Let Π1 and Π2 = (X, Y) be decision problems. Let f : X → {0,1} with f(x) = 1 for x ∈ Y and f(x) = 0 for x ∈ X \ Y. We say that Π1 polynomially reduces to Π2 if there exists a polynomial-time oracle algorithm for Π1 using f.

  38. Polynomial reduction Proposition 9.4 If Π1 polynomially reduces to Π2 and there is a polynomial-time algorithm for Π2, then there is a polynomial-time algorithm for Π1.

  39. Proof • Let A2 be an algorithm for Π2 with time(A2, y) ≤ p2(size(y)) for all instances y of Π2, and let f (x) := output(A2, x). Let A1 be an oracle algorithm for Π1 using f with time(A1,x) ≤ p1(size(x)) for all instances x of Π1. Then replacing the oracle calls in A1 by subroutines equivalent to A2 yields an algorithm A3 for Π1. For any instance x of Π1 with size(x) = n we have time(A3,x) ≤ p1(n) · p2(p1(n)): there can be at most p1(n) oracle calls in A1, and none of the instances of Π2 produced by A1 can be longer than p1(n). Since we can choose p1 and p2 to be polynomials we conclude that A3 is a polynomial-time algorithm.

  40. Homework • Prove: If Π = (X, Y) ∈ NP, then there exists a polynomial p such that Π can be solved by a deterministic algorithm having time complexity O(2^p(n)), where n is the input size.
