
Analysis & Design of Algorithms (CSCE 321)








  1. Analysis & Design of Algorithms (CSCE 321) Prof. Amr Goneid, Department of Computer Science, AUC Part 10. Dynamic Programming

  2. Dynamic Programming

  3. Dynamic Programming • Introduction • What is Dynamic Programming? • How To Devise a Dynamic Programming Approach • The Sum of Subset Problem • The Knapsack Problem • Minimum Cost Path • Coin Change Problem • Optimal BST • DP Algorithms in Graph Problems • Comparison with Greedy and D&Q Methods

  4. 1. Introduction We have demonstrated that • Sometimes, the divide and conquer approach seems appropriate but fails to produce an efficient algorithm. • One of the reasons is that D&Q produces overlapping subproblems.

  5. Introduction • Solution: • Buy speed using space • Store previous instances to compute the current instance • Instead of dividing the large problem into two (or more) smaller problems and solving those problems (as we did in the divide and conquer approach), we start with the simplest possible problems. • We solve them (usually trivially) and save these results. These results are then used to solve slightly larger problems, which are in turn saved and used to solve still larger problems. • This method is called Dynamic Programming

  6. 2. What is Dynamic Programming? • An algorithm design method used when the solution is the result of a sequence of decisions (e.g. Knapsack, Optimal Search Trees, Shortest Path, etc.). • Makes decisions one at a time, never making an erroneous decision. • Solves a sub-problem by making use of previously stored solutions for all other sub-problems.

  7. Dynamic Programming • Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems • “Programming” here means “planning”

  8. When is Dynamic Programming Applicable? Two main properties of a problem suggest that it can be solved using Dynamic Programming: • Overlapping Subproblems • Optimal Substructure

  9. Overlapping Subproblems • Like Divide and Conquer, Dynamic Programming combines solutions to subproblems. Dynamic Programming is mainly used when solutions to the same subproblems are needed again and again. • Examples are computing the Fibonacci Sequence, Binomial Coefficients, etc.

  10. Optimal Substructure: Principle of Optimality • Dynamic Programming uses the Principle of Optimality to avoid non-optimal decision sequences. • For an optimal sequence of decisions, the remaining decisions must constitute an optimal sequence. • Example: Shortest Path Find the shortest path from vertex (i) to vertex (j)

  11. Principle of Optimality Let k be an intermediate vertex on a shortest i-to-j path i, a, b, …, k, l, m, …, j. Then the path i, a, b, …, k must be a shortest i-to-k path, and the path k, l, m, …, j must be a shortest k-to-j path.

  12. 3. How To Devise a Dynamic Programming Approach • Given a problem that is solvable by a Divide & Conquer method • Prepare a table to store results of sub-problems • Replace base case by filling the start of the table • Replace recursive calls by table lookups • Devise for-loops to fill the table with sub-problem solutions instead of returning values • Solution is at the end of the table • Notice that previous table locations also contain valid (optimal) sub-problem solutions

  13. Example (1): Fibonacci Sequence • The Fibonacci call graph is not a tree, indicating overlapping subproblems. • Optimal Substructure: If F(n-2) and F(n-1) are optimal, then F(n) = F(n-2) + F(n-1) is optimal

  14. Fibonacci Sequence Dynamic Programming Solution: • Buy speed with space, a table F(n) • Store previous instances to compute current instance

  15. Fibonacci Sequence Recursive: Fib(n): if (n < 2) return 1; else return Fib(n-1) + Fib(n-2); Dynamic Programming with table F[0..n]: F[0] = F[1] = 1; if (n >= 2) for i = 2 to n F[i] = F[i-1] + F[i-2]; return F[n];

  16. Fibonacci Sequence Dynamic Programming Solution: • Space Complexity is O(n) • Time Complexity is T(n) = O(n)
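The table method above can be sketched in runnable form. The following C++ is a minimal sketch (the names fibTable and fibO1 are ours, not from the slides); the second version exploits the fact that only the last two table entries are ever read, reducing space to O(1).

```cpp
#include <algorithm>
#include <vector>

// Tabulated Fibonacci as on the slide: F[0] = F[1] = 1.
long long fibTable(int n) {
    std::vector<long long> F(std::max(n + 1, 2));
    F[0] = F[1] = 1;
    for (int i = 2; i <= n; i++) F[i] = F[i - 1] + F[i - 2];
    return F[n];
}

// Constant-space variant: only the last two table entries are live.
long long fibO1(int n) {
    long long a = 1, b = 1;          // play the roles of F[i-2] and F[i-1]
    for (int i = 2; i <= n; i++) {
        long long c = a + b;         // F[i]
        a = b;
        b = c;
    }
    return b;
}
```

With this indexing, fibTable(10) and fibO1(10) both give 89.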

  17. Example (2): Counting Combinations • Overlapping Subproblems: the recursion tree for Comb(5,3) repeats subproblems such as Comb(3,2), Comb(2,1), Comb(2,2), Comb(1,0) and Comb(1,1).

  18. Counting Combinations • Optimal Substructure: The value of Comb(n, m) can be calculated recursively using the standard formula for Binomial Coefficients: Comb(n, m) = Comb(n-1, m-1) + Comb(n-1, m), with Comb(n, 0) = Comb(n, n) = 1

  19. Counting Combinations Dynamic Programming Solution: • Buy speed with space: Pascal’s Triangle. Use a table T[0..n, 0..m]. • Store previous instances to compute the current instance

  20. Counting Combinations Recursive: comb(n, m): if ((m == 0) || (m == n)) return 1; else return comb(n-1, m-1) + comb(n-1, m); Dynamic Programming with table T[n, m]: for (i = 0 to n-m) T[i, 0] = 1; for (i = 0 to m) T[i, i] = 1; for (j = 1 to m) for (i = j+1 to n-m+j) T[i, j] = T[i-1, j-1] + T[i-1, j]; return T[n, m];

  21. Counting Combinations Dynamic Programming Solution: • Space Complexity is O(nm) • Time Complexity is T(n) = O(nm)
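As a concrete check of the tabulation, here is a hedged C++ sketch. The name comb and the full-triangle table are our own simplification: the slide fills only a band of Pascal's Triangle, which saves space but computes the same values.

```cpp
#include <vector>

// Pascal-triangle tabulation of C(n, m); assumes 0 <= m <= n.
long long comb(int n, int m) {
    std::vector<std::vector<long long>> T(n + 1, std::vector<long long>(n + 1, 0));
    for (int i = 0; i <= n; i++) {
        T[i][0] = T[i][i] = 1;                       // base cases C(i,0) = C(i,i) = 1
        for (int j = 1; j < i; j++)
            T[i][j] = T[i - 1][j - 1] + T[i - 1][j]; // Comb(n,m) = Comb(n-1,m-1) + Comb(n-1,m)
    }
    return T[n][m];
}
```

For example, comb(5, 3) fills the table bottom up and returns 10.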

  22. Exercise Consider the following function: Let T(n) be the number of arithmetic operations used. • Show that a direct recursive algorithm would give exponential complexity. • Explain how, by not re-computing the same F(i) value twice, one can obtain an algorithm with T(n) = O(n2) • Give an algorithm for this problem that uses only O(n) arithmetic operations.

  23. 4. The Sum of Subset Problem • Given a set of positive integers W = {w1, w2, ..., wn} • The problem: is there a subset of W that sums exactly to m? i.e., is SumSub(w, n, m) true? • Example: W = {11, 13, 27, 7}, m = 31 A possible subset that sums exactly to 31 is {11, 13, 7} Hence, SumSub(w, 4, 31) is true

  24. The Sum of Subset Problem • Consider the partial problem SumSub(w, i, j) • SumSub(w, i, j) is true if: • wi is not needed: {w1, .., wi-1} has a subset that sums to (j), i.e., SumSub(w, i-1, j) is true, OR • wi is needed to fill the rest of (j): {w1, .., wi-1} has a subset that sums to (j - wi) • If there are no elements, i.e. (i = 0), then SumSub(w, 0, j) is true if (j = 0) and false otherwise

  25. Divide & Conquer Approach Algorithm: bool SumSub (w, i, j) { if (i == 0) return (j == 0); else if (SumSub (w, i-1, j)) return true; else if ((j - wi) >= 0) return SumSub (w, i-1, j - wi); else return false; }

  26. Dynamic Programming Approach Use a table t[i, j], i = 0..n, j = 0..m • Base case: set t[0, 0] = true and t[0, j] = false for (j != 0) • Recursive calls are replaced by table lookups, with loops on i = 1 to n and j = 1 to m: • the test on SumSub(w, i-1, j) is replaced by t[i, j] = t[i-1, j] • return SumSub(w, i-1, j - wi) is replaced by t[i, j] = t[i-1, j] OR t[i-1, j - wi]

  27. Dynamic Programming Algorithm bool SumSub (w, n, m) { t[0, 0] = true; for (j = 1 to m) t[0, j] = false; for (i = 1 to n) for (j = 0 to m) { t[i, j] = t[i-1, j]; if ((j - wi) >= 0) t[i, j] = t[i-1, j] || t[i-1, j - wi]; } return t[n, m]; }

  28. Dynamic Programming Algorithm Analysis: • The initialization costs O(1) + O(m) • The outer loop runs O(n) times and the inner loop O(m) times • Each loop body statement costs O(1) • Hence, space complexity is O(nm) Time complexity is T(n) = O(m) + O(nm) = O(nm)
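The algorithm and its analysis can be checked with a runnable sketch. The following C++ is an assumed translation of the slide-27 pseudocode (the name sumSub and the 0-indexed weight vector are ours, so w[i-1] plays the role of the slide's wi):

```cpp
#include <vector>

// DP sum-of-subset: t[i][j] is true iff some subset of the first i weights sums to j.
bool sumSub(const std::vector<int>& w, int m) {
    int n = (int)w.size();
    std::vector<std::vector<bool>> t(n + 1, std::vector<bool>(m + 1, false));
    t[0][0] = true;                                   // the empty set sums to 0 only
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            t[i][j] = t[i - 1][j];                    // w_i not used
            if (j - w[i - 1] >= 0)
                t[i][j] = t[i][j] || t[i - 1][j - w[i - 1]]; // w_i used
        }
    return t[n][m];
}
```

On the slide-23 example, sumSub({11, 13, 27, 7}, 31) is true (via {11, 13, 7}), while a target such as 25 is not reachable.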

  29. 5. The (0/1) Knapsack Problem • Given n indivisible objects with positive integer weights W = {w1, w2, ..., wn} and positive integer values V = {v1, v2, ..., vn}, and a knapsack of size (m) • Find the highest valued subset of objects with total weight at most (m) (figure: n object types with weights wi and values vi, and a sack of capacity m)

  30. The Decision Instance • Assume that we have tried objects of type (1, 2, ..., i-1) to fill the sack up to a capacity (j) with a maximum profit of P(i-1, j) • If j ≥ wi, then P(i-1, j - wi) is the maximum profit after removing the equivalent weight wi of an object of type (i). • By trying to add object (i), we expect the maximum profit to change to P(i-1, j - wi) + vi

  31. The Decision Instance • If this change is better, we take it; otherwise we leave things as they were, i.e., P(i, j) = max { P(i-1, j), P(i-1, j - wi) + vi } for j ≥ wi P(i, j) = P(i-1, j) for j < wi • The above instance can be solved for P(n, m) by initializing P(0, j) = 0 and successively computing P(1, j), P(2, j), ..., P(n, j) for all 0 ≤ j ≤ m

  32. Divide & Conquer Approach Algorithm: int Knapsackr (int w[ ], int v[ ], int i, int j) { if (i == 0) return 0; else { int a = Knapsackr (w, v, i-1, j); if ((j - w[i]) >= 0) { int b = Knapsackr (w, v, i-1, j - w[i]) + v[i]; return (b > a ? b : a); } else return a; } }

  33. Divide & Conquer Approach Analysis: T(n) = no. of calls to Knapsackr (w, v, n, m): • For n = 0, one main call, T(0) = 1 • For n > 0, one main call plus two calls each with n-1 • The recurrence relation is: T(n) = 2T(n-1) + 1 for n > 0 with T(0) = 1 • Hence T(n) = 2^(n+1) - 1 = O(2^n) = exponential time

  34. Dynamic Programming Approach • The following approach gives the maximum profit, but not the collection of objects that produced this profit • Initialize P(0, j) = 0 for 0 ≤ j ≤ m • Initialize P(i, 0) = 0 for 0 ≤ i ≤ n • for each object i from 1 to n do for a capacity j from 0 to m do P(i, j) = P(i-1, j); if (j >= wi) if (P(i-1, j) < P(i-1, j - wi) + vi) P(i, j) = P(i-1, j - wi) + vi

  35. DP Algorithm int Knapsackdp (int w[ ], int v[ ], int n, int m) { int p[N][M]; for (int j = 0; j <= m; j++) p[0][j] = 0; for (int i = 0; i <= n; i++) p[i][0] = 0; for (int i = 1; i <= n; i++) for (int j = 0; j <= m; j++) { int a = p[i-1][j]; p[i][j] = a; if ((j - w[i]) >= 0) { int b = p[i-1][j - w[i]] + v[i]; if (b > a) p[i][j] = b; } } return p[n][m]; } Hence, space complexity is O(nm) Time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm)

  36. Example Example: Knapsack capacity m = 5 item 1: weight 2, value $12; item 2: weight 1, value $10; item 3: weight 3, value $20; item 4: weight 2, value $15

  37. Example (DP table layout: each cell P(i, j) is computed from P(i-1, j) and P(i-1, j - wi) using wi, vi; the goal is P(n, m))

  38. Example (the filled DP table for the above data)
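To verify the worked example, here is a hedged C++ translation of the slide-35 DP algorithm (the name knapsack and the 0-indexed vectors are ours, so w[i-1] plays the role of wi). With the slide-36 data (capacity m = 5, weights {2, 1, 3, 2}, values {12, 10, 20, 15}) hand-checking gives a maximum profit of 37, from items 1, 2 and 4.

```cpp
#include <algorithm>
#include <vector>

// 0/1 knapsack DP: p[i][j] = best profit using the first i objects at capacity j.
int knapsack(const std::vector<int>& w, const std::vector<int>& v, int m) {
    int n = (int)w.size();
    std::vector<std::vector<int>> p(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            p[i][j] = p[i - 1][j];                    // leave object i out
            if (j >= w[i - 1])                        // try putting object i in
                p[i][j] = std::max(p[i][j], p[i - 1][j - w[i - 1]] + v[i - 1]);
        }
    return p[n][m];
}
```

knapsack({2, 1, 3, 2}, {12, 10, 20, 15}, 5) returns 37, matching the hand check.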

  39. Exercises • Modify the previous Knapsack algorithm so that it could also list the objects contributing to the maximum profit. • Explain how to reduce the space complexity of the Knapsack problem to only O(m). You need only to find the maximum profit, not the actual collection of objects.

  40. Exercise: Longest Common Subsequence Problem • Given two sequences A = {a1, . . . , an} and B = {b1, . . . , bm}. • Find the longest sequence that is a subsequence of both A and B. For example, if A = {aaadebcbac} and B = {abcadebcbec}, then {adebcb} is a subsequence of length 6 of both sequences. • Give the recursive Divide & Conquer algorithm and the Dynamic Programming algorithm together with their analyses • Hint: Let L(i, j) be the length of the longest common subsequence of {a1, . . . , ai} and {b1, . . . , bj}. If ai = bj then L(i, j) = L(i-1, j-1) + 1. Otherwise, one can see that L(i, j) = max (L(i, j-1), L(i-1, j)).
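A tabulated version of the hinted recurrence can be sketched as follows (the name lcsLength and the 0-indexed strings are our own choices, not part of the exercise):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// L[i][j] = length of the LCS of the prefixes a[0..i) and b[0..j).
int lcsLength(const std::string& a, const std::string& b) {
    int n = (int)a.size(), m = (int)b.size();
    std::vector<std::vector<int>> L(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            L[i][j] = (a[i - 1] == b[j - 1])
                          ? L[i - 1][j - 1] + 1                 // matching tails extend the LCS
                          : std::max(L[i][j - 1], L[i - 1][j]); // otherwise drop one tail
    return L[n][m];
}
```

Note that on the example strings the table actually gives a longest common subsequence of length 8 (e.g. aadebcbc): the quoted {adebcb} is a common subsequence of length 6, but not a longest one.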

  41. 6. Minimum Cost Path • Given a cost matrix C[ ][ ] and a position (n, m) in C[ ][ ], find the cost of the minimum cost path to reach (n, m) from (0, 0). • Each cell of the matrix represents a cost to traverse through that cell. The total cost of a path to reach (n, m) is the sum of all the costs on that path (including both source and destination). • From a given cell (i, j), you can only traverse down to cell (i+1, j), right to cell (i, j+1), or diagonally to cell (i+1, j+1). Assume that all costs are positive integers

  42. Minimum Cost Path Example: what is the minimum cost path to (2, 2)? The path is (0, 0) –> (0, 1) –> (1, 2) –> (2, 2), and its cost is 8 (1 + 2 + 2 + 3). • Optimal Substructure: the minimum cost to reach (n, m) is the minimum over the 3 predecessor cells plus C[n][m], i.e., minCost(n, m) = min (minCost(n-1, m-1), minCost(n-1, m), minCost(n, m-1)) + C[n][m]

  43. Minimum Cost Path (D&Q) • Overlapping Subproblems: The recursive definition suggests a D&Q approach with overlapping subproblems: int MinCost(int C[ ][M], int n, int m) { if (n < 0 || m < 0) return ∞; else if (n == 0 && m == 0) return C[n][m]; else return C[n][m] + min( MinCost(C, n-1, m-1), MinCost(C, n-1, m), MinCost(C, n, m-1) ); } Analysis: For m = n, T(n) = 3T(n-1) + 3 for n > 0 with T(0) = 0. Hence T(n) = O(3^n), exponential complexity

  44. Dynamic Programming Algorithm In the Dynamic Programming (DP) algorithm, recomputation of the same subproblems can be avoided by constructing a temporary array T[ ][ ] in a bottom-up manner. int minCost(int C[ ][M], int n, int m) { int i, j; int T[N][M]; T[0][0] = C[0][0]; /* Initialize first column */ for (i = 1; i <= n; i++) T[i][0] = T[i-1][0] + C[i][0]; /* Initialize first row */ for (j = 1; j <= m; j++) T[0][j] = T[0][j-1] + C[0][j]; /* Construct rest of the array */ for (i = 1; i <= n; i++) for (j = 1; j <= m; j++) T[i][j] = min(T[i-1][j-1], T[i-1][j], T[i][j-1]) + C[i][j]; return T[n][m]; } Space complexity is O(nm) Time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm)
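The example's cost matrix was shown in a figure that is not reproduced here, so the concrete values below are an assumption: a 3x3 matrix consistent with the stated answer (path cost 8 = 1 + 2 + 2 + 3) is used for illustration only. The C++ itself is a direct sketch of the bottom-up algorithm above.

```cpp
#include <algorithm>
#include <vector>

// Bottom-up minimum cost path over an n x m cost matrix C.
int minCostPath(const std::vector<std::vector<int>>& C) {
    int n = (int)C.size(), m = (int)C[0].size();
    std::vector<std::vector<int>> T(n, std::vector<int>(m));
    T[0][0] = C[0][0];
    for (int i = 1; i < n; i++) T[i][0] = T[i - 1][0] + C[i][0]; // first column
    for (int j = 1; j < m; j++) T[0][j] = T[0][j - 1] + C[0][j]; // first row
    for (int i = 1; i < n; i++)                                  // rest of the array
        for (int j = 1; j < m; j++)
            T[i][j] = std::min({T[i - 1][j - 1], T[i - 1][j], T[i][j - 1]}) + C[i][j];
    return T[n - 1][m - 1];
}
```

With the assumed matrix {{1, 2, 3}, {4, 8, 2}, {1, 5, 3}}, minCostPath returns 8, matching the example on slide 42.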

  45. 7. Coin Change Problem • We want to make change for N cents, and we have an infinite supply of each of S = {S1, S2, ..., Sm} valued coins; how many ways can we make the change? (For simplicity's sake, the order does not matter.) • Mathematically, how many ways can we express N as N = x1 S1 + x2 S2 + ... + xm Sm with integers xi >= 0 • For example, for N = 4, S = {1, 2, 3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}. • We are trying to count the number of distinct sets. • Since order does not matter, we will impose that our solutions (sets) are all sorted in non-decreasing order (thus, we are looking at sorted-set solutions: collections).

  46. Coin Change Problem • With S1 < S2 < ... < Sm, the number of possible sets C(N, m) is composed of: • Those sets that contain at least one Sm, i.e. C(N - Sm, m) • Those sets that do not contain any Sm, i.e. C(N, m-1) • Hence, the solution can be represented by the recurrence relation: C(N, m) = C(N, m-1) + C(N - Sm, m) with the base cases: C(N, m) = 1 for N = 0; C(N, m) = 0 for N < 0; C(N, m) = 0 for N >= 1, m <= 0 • Therefore, the problem has the optimal substructure property, as it can be solved using solutions to subproblems. • It also has the property of overlapping subproblems.

  47. D&Q Algorithm int count( int S[ ], int m, int n ) { // If n is 0 there is 1 solution (do not include any coin) if (n == 0) return 1; // If n is less than 0 then no solution exists if (n < 0) return 0; // If there are no coins and n is greater than 0, then no solution if (m <= 0 && n >= 1) return 0; // count is the sum of solutions (i) including S[m-1] (ii) excluding S[m-1] return count( S, m-1, n ) + count( S, m, n - S[m-1] ); } The algorithm has exponential complexity.

  48. DP Algorithm int count( int S[ ], int m, int n ) { int i, j, x, y; int table[n+1][m]; // n+1 rows to include the case (n = 0) for (i = 0; i < m; i++) table[0][i] = 1; // Fill for the case (n = 0) // Fill the rest of the table entries bottom up for (i = 1; i < n+1; i++) for (j = 0; j < m; j++) { x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0; // solutions including S[j] y = (j >= 1) ? table[i][j-1] : 0; // solutions excluding S[j] table[i][j] = x + y; // total count } return table[n][m-1]; } Space complexity is O(nm) Time complexity is O(m) + O(nm) = O(nm)
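The table can be exercised on the running example (N = 4, S = {1, 2, 3}, expecting 4 ways). This C++ sketch mirrors the slide's code; the name countWays and the std::vector table are our own choices (variable-length arrays as in the slide are not standard C++):

```cpp
#include <vector>

// table[i][j] = number of ways to make i cents using the first j+1 coin values.
int countWays(const std::vector<int>& S, int n) {
    int m = (int)S.size();
    std::vector<std::vector<int>> table(n + 1, std::vector<int>(m, 0));
    for (int j = 0; j < m; j++) table[0][j] = 1;              // one way to make 0 cents
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < m; j++) {
            int x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0; // include S[j]
            int y = (j >= 1) ? table[i][j - 1] : 0;           // exclude S[j]
            table[i][j] = x + y;
        }
    return table[n][m - 1];
}
```

countWays({1, 2, 3}, 4) returns 4, matching {1,1,1,1}, {1,1,2}, {2,2}, {1,3}.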

  49. 8. Optimal Binary Search Trees • Problem: Given a set of keys K1 , K2 , … , Kn and their corresponding search frequencies P1 , P2 , … , Pn , find a binary search tree for the keys such that the total search cost is minimum. • Remark: The problem is similar to that of the optimal merge trees (Huffman Coding) but more difficult because now: - Keys can exist in internal nodes - The binary search tree condition (Left < Parent < Right) is imposed

  50. (a) Example • A Binary Search Tree of 5 words, with search frequencies (total 100): a: 22, am: 18, and: 20, if: 30, two: 10 • A Greedy Algorithm: Insert words in the tree in order of decreasing frequency of search.
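To make the greedy baseline concrete, the sketch below (all names are ours; a minimal hand-rolled BST is used because std::map rebalances itself) inserts the five words in decreasing order of frequency and scores the resulting tree by the sum of frequency times node depth. For this table the greedy tree puts "if" at the root, and the cost works out to 30·1 + 22·2 + 10·2 + 20·3 + 18·4 = 226 comparisons per 100 searches.

```cpp
#include <string>
#include <utility>
#include <vector>

// Unbalanced BST built by plain insertion (no rebalancing), as the greedy method needs.
struct Node { std::string key; Node* left = nullptr; Node* right = nullptr; };

Node* insert(Node* root, const std::string& key) {
    if (!root) { Node* n = new Node; n->key = key; return n; }
    if (key < root->key) root->left = insert(root->left, key);
    else root->right = insert(root->right, key);
    return root;
}

// Depth of an existing key, counting the root as depth 1 (one comparison).
int depthOf(Node* root, const std::string& key, int d = 1) {
    if (key == root->key) return d;
    return depthOf(key < root->key ? root->left : root->right, key, d + 1);
}

// Greedy total search cost for the slide's 5-word example (frequencies per 100 searches).
int greedyCost() {
    std::vector<std::pair<std::string, int>> byFreq =
        {{"if", 30}, {"a", 22}, {"and", 20}, {"am", 18}, {"two", 10}};
    Node* root = nullptr;
    for (auto& p : byFreq) root = insert(root, p.first);   // insert by decreasing frequency
    int cost = 0;
    for (auto& p : byFreq) cost += p.second * depthOf(root, p.first);
    return cost;
}
```

greedyCost() returns 226; the DP method this section builds toward asks whether a different tree shape can do better.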
