
Chapter 3 Algorithm Design and Analysis



  1. Chapter 3 Algorithm Design and Analysis

  2. 3.1 Introduction
• Complexity of algorithms: size of problem, measuring running time, efficiency criteria, …
• Techniques used for developing polynomial-time network algorithms: geometric improvement, (bit) scaling, dynamic programming, binary search
• Search algorithms (identifying basic subgraphs): finding all nodes reachable from s, finding all nodes that can reach node t, identifying connected components, numbering the nodes of an acyclic graph, …
• Flow decomposition

  3. 3.2 Complexity Analysis
• Complexity measures:
• Empirical analysis: write a program and test it on some classes of problem instances; results may not be consistent.
• Average-case analysis (statistical analysis): estimate the expected number of steps; needs a probability distribution; the analysis is usually hard to do.
• Worst-case analysis (guaranteed performance): the analysis is usually easier to do, but a few bad instances may dominate it (e.g., the simplex method for LP); the most popular measure.

  4. Problem Size: the amount of computer storage needed to describe the problem.
integer x: ⌊log2 x⌋ + 1 bits ( O(log2 x) )
rational number p/q: ⌊log2 p⌋ + ⌊log2 q⌋ + 2 bits
size of a network problem: approximately n log n + m log m + m log C + m log U ( = f(m, n, log C, log U) )

  5. Worst-Case Complexity: Counting running time: assume unit time for each elementary operation (comparison, +/-, *, /), as long as the size of the numbers arising during the computation remains a polynomial function of the size of the input data (e.g., a number as large as k^k has size log k^k = k log k).
The running time g(n) for problems of size n is the largest time (number of steps) taken over all problems of size n (worst-case viewpoint). Take an asymptotic upper bound function f(n).
• Def: An algorithm is said to run in O(f(n)) time if for some numbers c and n0, the time taken by the algorithm is at most c·f(n) for all n ≥ n0.
• Similarity Assumption: C = O(n^k), U = O(n^k)

  6. Polynomial and Exponential-Time Algorithms:
• Def: An algorithm is a polynomial-time algorithm if its worst-case complexity is bounded by a polynomial function of the problem size (problem parameters and data size: m, n, log C, log U).
• Def: An exponential-time algorithm is one that is not a polynomial-time algorithm.
• Def: A strongly polynomial-time algorithm has complexity bounded by a polynomial function of the problem parameters alone (the size of the problem data is not involved), e.g., O(n^2 m) or O(m^2), but not O(n log U). Very desirable as an algorithm.
• The running time is bounded by a polynomial function of the problem size if and only if it is bounded by a polynomial in the problem parameters and the length of the encoding of the data.

  7. Note that if the running time is a polynomial function of C, U (cost, capacity), it is not a polynomial-time algorithm (C = 2^{log C}, which is exponential in log C).
Ex: the binary knapsack problem has a dynamic programming algorithm that runs in O(nb) time, where b is the knapsack capacity – a pseudopolynomial-time algorithm.
• Def: An algorithm is said to run in Ω(f(n)) time if for some numbers c' and n0, and all n ≥ n0, the algorithm takes at least c'·f(n) time on some problem instance. (asymptotic lower bound)
• Def: An algorithm is said to be Θ(f(n)) if it is both O(f(n)) and Ω(f(n)). (tight bound)

  8. Potential Functions and Amortized Complexity:
(maximum possible steps in each iteration) × (maximum possible iterations) can be an extreme overestimate.
• Ex: Stack – two operations:
push(x, S): add element x to the top of the stack S.
popall(S): pop (i.e., take out) every element of S.
What is the worst-case complexity of a sequence of n operations?
Naïve approach: push(x, S) – O(1); popall(S) – O(|S|); at most n popall operations, each O(n) ⇒ O(n^2).
Potential function approach: let Φ(k) = |S| denote the number of items in the stack at the end of the k-th step. Each push increases Φ(k) by one unit and takes 1 unit of time.

  9. Each popall decreases Φ(k) by at least 1 and requires time proportional to the decrease in Φ(k). The total increase in Φ(k) is at most n, hence the total decrease in Φ(k) is at most n ⇒ the total time is O(n).
• Amortized complexity: An operation is said to be of amortized complexity O(f(n)) if the time to perform a sequence of k operations is O(k·f(n)) for sufficiently large k. (average worst-case complexity)
The amortized complexity of the popall operation is O(1).
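The stack argument above can be checked numerically. A minimal Python sketch (the `Stack` class and its `work` counter are my own illustration, not from the slides):

```python
class Stack:
    def __init__(self):
        self.items = []
        self.work = 0          # total unit operations performed so far

    def push(self, x):
        self.items.append(x)   # 1 unit of work
        self.work += 1

    def popall(self):
        # |S| units of work: each element on the stack is removed once
        self.work += len(self.items)
        self.items.clear()

# n mixed operations: every element is pushed once and popped at most once,
# so total work is at most 2n, i.e. O(1) amortized per operation.
s = Stack()
n = 1000
for i in range(n):
    s.push(i)
    if i % 100 == 99:          # an occasional popall
        s.popall()
print(s.work)                  # 2000 here, and never more than 2n
```

For any mix of n operations the counter stays at most 2n, matching the O(n) total time derived with the potential function.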

  10. 3.3 Developing Polynomial-Time Algorithms
• Geometric Improvement Approach: Assume an integer optimal objective function value. Let H be the difference between the maximum and minimum objective function values. Minimization problem.
• Thm 3.3. Let z^k be the objective function value at the k-th iteration and z* the optimal value. If the algorithm guarantees, for every k, (z^k – z^{k+1}) ≥ α(z^k – z*), with 0 < α < 1, then the algorithm terminates in O((log H)/α) iterations.
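The O((log H)/α) bound follows by unrolling the improvement inequality; a short derivation (z^0 denotes the starting value, my notation):

```latex
% Unrolling the hypothesis of Thm 3.3:
\begin{align*}
z^{k} - z^{k+1} \ge \alpha\,(z^{k} - z^{*})
  &\;\Longrightarrow\; z^{k+1} - z^{*} \le (1-\alpha)\,(z^{k} - z^{*}) \\
  &\;\Longrightarrow\; z^{k} - z^{*} \le (1-\alpha)^{k}\,(z^{0} - z^{*})
                       \le (1-\alpha)^{k} H .
\end{align*}
% Since (1-\alpha)^{k} \le e^{-\alpha k}, the gap drops below 1 once
% k > (\ln H)/\alpha; integrality of the objective then gives z^{k} = z^{*}.
```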

  11. (Bit) Scaling Approach: Represent the data in binary (up to K digits). Consider P1, P2, …, PK, where Pi is the problem whose data uses the i leading digits. Use the optimal solution of Pi as the starting solution of Pi+1.
• Ex: maximum flow problem with an arc (i, j) of capacity uij = 5 = (101)2: the arc has capacity (1)2 = 1 in P1, (10)2 = 2 in P2, and (101)2 = 5 in P3.

  12. Capacity of an arc in Pk = 2 × (capacity of the arc in Pk-1) + (0 or 1).
Set the initial solution for Pk to 2 × (optimal solution of Pk-1), which is still feasible in Pk.
Let vk be the optimal value of Pk. Then vk – 2vk-1 ≤ m.
• The total number of reoptimizations is O(log C) or O(log U). (polynomial)
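The doubling relation between consecutive problems can be seen directly on the capacities. A small Python sketch (the function name and encoding are assumptions of this illustration):

```python
def scaled_capacities(u, K):
    """Capacity of an arc in P1, ..., PK: the k leading bits of u (K bits total)."""
    return [u >> (K - k) for k in range(1, K + 1)]

caps = scaled_capacities(5, 3)      # u = 5 = (101) in binary
print(caps)                         # [1, 2, 5]
# each capacity is 2 * (previous capacity) + (0 or 1):
for k in range(1, len(caps)):
    assert caps[k] in (2 * caps[k - 1], 2 * caps[k - 1] + 1)
```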

  13. Dynamic Programming: the table-filling approach in the text.
• Computing Binomial Coefficients: How can we compute C(p, q) = p!/((p–q)! q!) easily?
Use C(i, j) = C(i–1, j) + C(i–1, j–1).
Define a lower triangular table D = {d(i, j)} with p rows and q columns, where d(i, j) = C(i, j) for i ≥ j.
Scan rows from 1 to p; when scanning row i, scan columns from 1 to min(i, q).
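The table-filling scheme above is a few lines of Python (a sketch; I index the table from 0 and add a row/column for the base cases):

```python
def binomial(p, q):
    """Compute C(p, q) by filling the table d(i, j) = C(i-1, j) + C(i-1, j-1)."""
    d = [[0] * (q + 1) for _ in range(p + 1)]
    for i in range(p + 1):
        d[i][0] = 1                          # C(i, 0) = 1
        for j in range(1, min(i, q) + 1):    # only the lower triangle is needed
            d[i][j] = d[i - 1][j] + d[i - 1][j - 1]
    return d[p][q]

print(binomial(5, 2))   # 10
```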

  14. Knapsack problem: maximize Σi=1..p ui xi subject to Σi=1..p wi xi ≤ W, xi ∈ {0, 1} for all i.
Construct a p × W table D, where d(i, j) is the maximum objective value when we use items 1~i with knapsack capacity j.
Recursive equation: d(i, j) = max{ d(i–1, j), ui + d(i–1, j–wi) } (the second term only when j ≥ wi).
Scan rows from 1 to p; when scanning row i, scan columns from 1 to W. The running time is O(pW).
• Binary search: x ∈ [L, U]; take the center point of the interval and discard half. Runs in O(log(U–L)) iterations.
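The O(pW) knapsack table can be sketched directly from the recursion (a minimal Python version; variable names mirror the slide's u, w, W):

```python
def knapsack(u, w, W):
    """u: item values, w: item weights, W: capacity; returns the max total value."""
    p = len(u)
    d = [[0] * (W + 1) for _ in range(p + 1)]   # d[i][j]: items 1..i, capacity j
    for i in range(1, p + 1):
        for j in range(W + 1):
            d[i][j] = d[i - 1][j]               # skip item i
            if j >= w[i - 1]:                    # take item i if it fits
                d[i][j] = max(d[i][j], u[i - 1] + d[i - 1][j - w[i - 1]])
    return d[p][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))   # 220
```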

  15. 3.4 Search Algorithms
• Basic techniques for graphs that attempt to find all the nodes in a network satisfying a particular property. Frequently used as subroutines of other, more complex algorithms.
• Finding all nodes that are reachable by directed paths from a specific node s.
• Finding all nodes that can reach a specific node t along directed paths.
• Identifying all connected components.
• Determining whether a given network is bipartite.
• Identifying whether a directed cycle exists. If the network is acyclic, find a numbering of the nodes such that if (i, j) ∈ A, then i < j (topological ordering).

  16. Finding all nodes that are reachable by directed paths from a specific node s.
Starting from node s, identify the nodes reachable from s.
States for nodes: marked or unmarked.
Arc (i, j) is an admissible arc if i is marked and j is unmarked; inadmissible otherwise.
• Initially, only the source node s is marked. From a marked node, mark another node using an admissible arc, then add the newly marked node to the LIST of marked nodes. Different results are obtained depending on the data structure used for LIST.
• To identify admissible arcs, use the current-arc data structure. In the adjacency list A(i) of node i, the current arc (i, j) is the next candidate arc that we wish to examine. Initially, the current arc is the first arc in the list A(i).
• Running time is O(n + m) = O(m).
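A minimal Python sketch of the marking procedure (the dict-of-lists graph encoding is my own; LIST here is a plain list, so its discipline affects the order nodes are marked but not the set found):

```python
def mark_reachable(adj, s):
    """All nodes reachable from s by directed paths; adj maps node -> successors."""
    marked = {s}                         # initially only s is marked
    LIST = [s]
    while LIST:
        i = LIST.pop()                   # take some node from LIST
        for j in adj.get(i, []):         # scan arcs (i, j)
            if j not in marked:          # admissible: i marked, j unmarked
                marked.add(j)
                LIST.append(j)           # add the newly marked node to LIST
    return marked

adj = {1: [2, 3], 2: [4], 3: [4], 5: [1]}
print(sorted(mark_reachable(adj, 1)))    # [1, 2, 3, 4]  (5 is not reachable)
```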

  17. Breadth-First Search: Maintain LIST as a queue. Select nodes from the front of LIST and add nodes to the rear. Results in a breadth-first search tree.
• Depth-First Search: Maintain LIST as a stack. Select nodes from the front of LIST and add nodes to the front. Results in a depth-first search tree.
• Reverse Search Algorithm: Identify all the nodes from which we can reach a given node t along directed paths.
• Initialize LIST = {t}.
• While scanning a node, scan its incoming arcs instead of its outgoing arcs.
• Arc (i, j) is admissible if i is unmarked and j is marked.
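BFS and DFS differ only in how LIST is maintained (queue vs stack), and reverse search is the same procedure run on reversed arcs. A sketch under my own encoding (dict of adjacency lists):

```python
from collections import deque

def search(adj, s, order="bfs"):
    """Visit nodes reachable from s; order='bfs' uses a queue, 'dfs' a stack."""
    marked, LIST, visit = {s}, deque([s]), []
    while LIST:
        i = LIST.popleft() if order == "bfs" else LIST.pop()
        visit.append(i)
        for j in adj.get(i, []):
            if j not in marked:
                marked.add(j)
                LIST.append(j)
    return visit

def reverse_search(adj, t):
    """Nodes from which t is reachable: search from t over reversed arcs."""
    radj = {}
    for i, nbrs in adj.items():          # reverse every arc (i, j) -> (j, i)
        for j in nbrs:
            radj.setdefault(j, []).append(i)
    return set(search(radj, t))

adj = {1: [2, 3], 2: [4], 3: [4]}
print(search(adj, 1, "bfs"))    # [1, 2, 3, 4]
print(search(adj, 1, "dfs"))    # [1, 3, 4, 2]
print(sorted(reverse_search(adj, 4)))   # [1, 2, 3, 4]
```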

  18. Determining Strong Connectivity: G is strongly connected if there exists a directed path from i to j for every node pair i and j. G is strongly connected if and only if we can reach every node from an arbitrary node s, and s is reachable from every node in G ⇒ use two applications of the search algorithm.
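The two-search test can be sketched as follows (self-contained; the graph encoding and function names are mine):

```python
def reachable(adj, s):
    """Nodes reachable from s by directed paths."""
    marked, stack = {s}, [s]
    while stack:
        i = stack.pop()
        for j in adj.get(i, []):
            if j not in marked:
                marked.add(j)
                stack.append(j)
    return marked

def strongly_connected(nodes, adj):
    """Search from an arbitrary s in G, then in G with all arcs reversed."""
    s = next(iter(nodes))
    radj = {}
    for i, nbrs in adj.items():
        for j in nbrs:
            radj.setdefault(j, []).append(i)
    return reachable(adj, s) == nodes and reachable(radj, s) == nodes

print(strongly_connected({1, 2, 3}, {1: [2], 2: [3], 3: [1]}))   # True
print(strongly_connected({1, 2, 3}, {1: [2], 2: [3]}))           # False
```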

  19. Topological Ordering: a labeling order(i) of the nodes such that (i, j) ∈ A ⇒ order(i) < order(j).
If G contains a directed cycle, a topological ordering does not exist. (The contrapositive of this statement is: ∃ topological ordering ⇒ G is acyclic.)
We give an algorithm for finding a topological ordering of an acyclic graph; then G is acyclic ⇒ ∃ topological ordering. Hence together: G is acyclic ⇔ ∃ topological ordering.
• Thm: Let G = (N, A) be directed. If each node has indegree at least one, the network contains a directed cycle. (exercise 3.38)
• Hence, if G is acyclic, there exists a node with indegree 0.

  20. Algorithm: Choose a node with indegree 0. Give it the label 1 and eliminate the node and all arcs emanating from it. Select a node with indegree 0 in the remaining graph and give it the label 2, and so on. (The remaining graph is still acyclic.) Repeat the process until no node has indegree 0. If some nodes and arcs remain, the remaining subnetwork contains a directed cycle; otherwise, we have a topological ordering.
• Implementation: start with a set LIST containing the nodes with indegree 0. Choose a node i in LIST, and for every arc (i, j) ∈ A(i) reduce the indegree of node j by 1; if the indegree of node j becomes 0, add node j to LIST.
• Running time is O(m).
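The indegree-based algorithm above is a short Python routine (a sketch; returning None to signal a directed cycle is my own convention):

```python
def topological_order(nodes, adj):
    """Order the nodes so every arc (i, j) has i before j; None if a cycle exists."""
    indeg = {i: 0 for i in nodes}
    for i in adj:
        for j in adj[i]:
            indeg[j] += 1
    LIST = [i for i in nodes if indeg[i] == 0]
    order = []
    while LIST:
        i = LIST.pop()
        order.append(i)
        for j in adj.get(i, []):        # eliminate arcs emanating from i
            indeg[j] -= 1
            if indeg[j] == 0:
                LIST.append(j)
    # nodes left over all have positive indegree: a directed cycle remains
    return order if len(order) == len(nodes) else None

print(topological_order([1, 2, 3, 4], {1: [2, 3], 2: [4], 3: [4]}))  # [1, 3, 2, 4]
print(topological_order([1, 2], {1: [2], 2: [1]}))                   # None
```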

  21. 3.5 Flow Decomposition Algorithms
• The current model uses arc flow variables xij. We may instead use path and cycle flows as decision variables.
• Arc flow representation ⇒ path and cycle flow representation (?)
• Arc flow: Σ{j: (i, j)∈A} xij – Σ{j: (j, i)∈A} xji = –e(i) for all i ∈ N, and 0 ≤ xij ≤ uij for all (i, j) ∈ A, where Σi=1..n e(i) = 0.
• e(i) = inflow – outflow of node i
• e(i) > 0 ⇒ node i is an excess node (inflow > outflow)
• e(i) < 0 ⇒ node i is a deficit node (inflow < outflow)
• e(i) = 0 ⇒ node i is balanced

  22. Let P be the set of all directed paths and W the set of all directed cycles.
f(P) = decision variable for the flow value on path P
f(W) = decision variable for the flow value on directed cycle W
δij(P) = 1 if (i, j) is contained in the path P, and 0 otherwise; similarly for δij(W).
• xij = ΣP∈P δij(P) f(P) + ΣW∈W δij(W) f(W). Does the converse hold?

  23. Thm 3.5 (Flow Decomposition Theorem). Every arc flow can be decomposed into path and cycle flows such that:
• every positive path flow connects a deficit node to an excess node;
• at most n + m paths and cycles have positive flow, and at most m cycles have positive flow.
Pf) We show that arc flow ⇒ path and cycle flow. Choose a deficit node i0 (inflow < outflow) and follow directed arcs with positive flow until an excess node ik is met or a cycle is found.
• Path found: f(P) = min{ –e(i0), e(ik), min{xij: (i, j) ∈ P} }. Update the flow.
• Cycle found: f(W) = min{xij: (i, j) ∈ W}; set xij := xij – f(W).
Continue until all node imbalances are zero; then eliminate all remaining flows using directed cycles. When a path is found, we reduce the excess/deficit of some node to 0 or the flow on some arc to 0; when a cycle is found, the flow on some arc becomes 0. Hence there are at most n + m paths and cycles in total, and at most m directed cycles. ∎
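The constructive proof translates almost line by line into code. A sketch under my own encoding (flows as a dict keyed by arcs; paths and cycles returned as (arc list, flow value) pairs):

```python
def decompose(n, x):
    """Decompose arc flow x = {(i, j): flow} into path and cycle flows."""
    x = {a: v for a, v in x.items() if v > 0}
    e = {i: 0 for i in range(1, n + 1)}          # e(i) = inflow - outflow
    for (i, j), v in x.items():
        e[i] -= v
        e[j] += v

    def forward(i):                              # a positive-flow arc leaving i
        return next(a for a in x if a[0] == i)

    def cancel(arcs, f):                         # subtract f along the arcs
        for a in arcs:
            x[a] -= f
            if x[a] == 0:
                del x[a]

    paths, cycles = [], []
    while any(v < 0 for v in e.values()):        # until all imbalances are zero
        i0 = next(i for i, v in e.items() if v < 0)
        node, seen, arcs = i0, {i0: 0}, []
        while e[node] <= 0:                      # walk until an excess node
            a = forward(node)
            arcs.append(a)
            node = a[1]
            if node in seen:                     # directed cycle found
                W = arcs[seen[node]:]
                f = min(x[a] for a in W)
                cycles.append((W, f))
                cancel(W, f)
                node, seen, arcs = i0, {i0: 0}, []   # restart the walk
            else:
                seen[node] = len(arcs)
        f = min(-e[i0], e[node], min(x[a] for a in arcs))
        paths.append((arcs, f))
        cancel(arcs, f)
        e[i0] += f
        e[node] -= f
    while x:                                     # leftover flow is a circulation
        node, seen, arcs = next(iter(x))[0], {}, []
        while node not in seen:
            seen[node] = len(arcs)
            a = forward(node)
            arcs.append(a)
            node = a[1]
        W = arcs[seen[node]:]
        f = min(x[a] for a in W)
        cycles.append((W, f))
        cancel(W, f)
    return paths, cycles

paths, cycles = decompose(3, {(1, 2): 3, (2, 3): 3, (3, 1): 1})
print(paths)    # [([(1, 2), (2, 3)], 2)]
print(cycles)   # [([(1, 2), (2, 3), (3, 1)], 1)]
```

In the example, node 1 is a deficit node (e(1) = -2) and node 3 an excess node (e(3) = 2), so 2 units go on the path 1-2-3 and the remaining unit circulates around 1-2-3-1.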

  24. Note that the flow decomposition may not be unique.
• Property 3.6. A circulation x can be represented as cycle flows along at most m directed cycles.
• Implementation: maintain the LIST of deficit nodes as a doubly linked list. When LIST eventually becomes empty, initialize it with the set of arcs with positive flow. To identify an arc with positive flow emanating from a node (an admissible arc), use the current-arc data structure.
In an iteration: O(n) + the time to scan arcs to identify admissible arcs. Flows are nonincreasing, hence once an arc becomes inadmissible, it remains inadmissible. Using the current-arc structure to scan the arcs ⇒ O(m) in total. The total number of iterations is O(n + m) ⇒ running time O(m + (n + m)n) = O(nm).

  25. Given two feasible solutions x and x0 to the MCF, the flow difference x – x0 has e(i) = 0 for all i (interpret xij < 0 as sending |xij| units of flow on (j, i)). Hence x – x0 is a circulation. By the flow decomposition theorem, we can construct any x from x0 by adding flows on directed cycles (when opposite-direction arcs are considered).
• Augmenting cycle: A cycle W (not necessarily directed) in G is called an augmenting cycle with respect to the flow x if, by augmenting a positive amount of flow f(W) around the cycle, the flow remains feasible. W is an augmenting cycle in G if xij < uij for every forward arc (i, j) (increase flow) and xij > 0 for every backward arc (i, j) (decrease flow). Hence we can construct any feasible flow x from a feasible flow x0 by using augmenting cycles.
Let δij(W) = 1 if arc (i, j) is a forward arc in W, –1 if arc (i, j) is a backward arc in W, and 0 otherwise.
The cost of an augmenting cycle W is c(W) = Σ(i, j)∈W cij δij(W).

  26. Now interpret augmenting cycles using the residual network G(x0). Each augmenting cycle in G with respect to a flow x0 corresponds to a directed cycle W in the residual network G(x0), and vice versa.
• Cost of a feasible flow x in G: cx = cx0 + cx1 (x, x0: feasible flows in G; x1: a feasible circulation in G(x0)).
• (Augmenting Cycle Theorem) Let x and x0 be any two feasible solutions of a network flow problem. Then x equals x0 plus the flow on at most m directed cycles in G(x0). Furthermore, the cost of x equals the cost of x0 plus the cost of the flow on these augmenting cycles.
• Thm 3.8 (Negative Cycle Optimality Theorem). A feasible solution x* of the MCF is an optimal solution if and only if the residual network G(x*) contains no negative cost directed cycle.
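Thm 3.8 suggests an optimality check: look for a negative cost directed cycle in G(x*), e.g. with a Bellman-Ford pass. A sketch (the arc-list encoding of the residual network is my own; initializing all labels to 0 lets the test find a negative cycle anywhere, without a designated source):

```python
def has_negative_cycle(nodes, arcs):
    """arcs: list of residual arcs (i, j, cost). Bellman-Ford cycle test."""
    dist = {i: 0 for i in nodes}
    for _ in range(len(nodes) - 1):          # n - 1 relaxation rounds
        for i, j, c in arcs:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    # one more round: any further improvement implies a negative cycle
    return any(dist[i] + c < dist[j] for i, j, c in arcs)

print(has_negative_cycle([1, 2, 3], [(1, 2, 1), (2, 3, 1), (3, 1, -3)]))  # True
print(has_negative_cycle([1, 2, 3], [(1, 2, 1), (2, 3, 1), (3, 1, -1)]))  # False
```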
