Example 1: Coin-row problem [Page 285-286, Levitin]

  1. Example 1: Coin-row problem [Page 285-286, Levitin] • Given: A row of n coins whose values are some positive integers c1, c2, …, cn, not necessarily distinct. • Output: Pick up the maximum amount of money subject to the constraint that no two coins adjacent in the initial row can be picked up. • Solution: Partition all allowed coin selections into two groups: those that include the last coin and those that do not. • The largest amount we can get from the first group is equal to cn + F(n – 2): the value of the nth coin plus the maximum amount we can pick up from the first n – 2 coins. • The largest amount we can get from the second group is equal to F(n – 1), by the definition of F(n). • Thus, we have the following recurrence, subject to the obvious initial conditions: • F(n) = max(cn + F(n – 2), F(n – 1)) for n > 1 … (8.3) • F(0) = 0, F(1) = c1.

  2. Algorithm CoinRow(C[1 .. n]) //Applies formula (8.3) bottom up to find the maximum amount //of money that can be picked up from a coin row without //picking two adjacent coins. Input: Array C[1 .. n] of positive integers indicating the coin values Output: The maximum amount of money that can be picked up F[0] ← 0; F[1] ← C[1]; for i ← 2 to n do F[i] ← max(C[i] + F[i – 2], F[i – 1]); return F[n] T(n) = Θ(n)
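The pseudocode above can be rendered as a short runnable Python sketch (Python is not used in the slides; the function name and the sample instance are mine):

```python
def coin_row(coins):
    """Bottom-up DP for the coin-row problem (formula 8.3).

    coins: list of positive coin values c_1..c_n (1-based in the text,
    0-based here). Returns the maximum amount that can be picked up
    without taking two adjacent coins.
    """
    n = len(coins)
    if n == 0:
        return 0
    F = [0] * (n + 1)          # F[i] = best amount using the first i coins
    F[1] = coins[0]
    for i in range(2, n + 1):
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]
```

For instance, on the row 5, 1, 2, 10, 6, 2 the best non-adjacent selection is 5 + 10 + 2 = 17.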

  3. Using CoinRow to find F(n), the largest amount of money that can be picked up, as well as the coins composing an optimal set, clearly takes Θ(n) time and Θ(n) space. This is by far superior to the alternatives: the straightforward top-down application of recurrence (8.3) (which is exponential) and solving the problem by exhaustive search (which is at least exponential).

  4. This is by far superior to the alternatives: the straightforward top-down application of recurrence (8.3) is exponential. [Recursion tree for the top-down computation of F[6]: F[6] calls F[5] and c[6] + F[4]; F[5] calls F[4] and c[5] + F[3]; F[4] calls F[3] and c[4] + F[2]; and so on down to F[1] = c[1] and F[0]. Subproblems such as F[3], F[2], and F[1] are recomputed many times, so the number of calls grows exponentially with n.]

  5. Example 2: Change-making problem [page 287-288, Levitin] Consider the general instance of the following well-known problem: give change for amount n using the minimum number of coins of denominations d1 < d2 < … < dm, assuming the availability of unlimited quantities of coins for each of the m denominations, where d1 = 1. Consider a dynamic programming algorithm for the general case. Let F(n) be the minimum number of coins whose values add up to n; define F(0) = 0. The amount n can only be obtained by adding one coin of denomination dj to the amount n – dj for j = 1, 2, …, m such that n ≥ dj.

  6. Change-making problem. Therefore we can consider all such denominations and select the one minimizing F(n – dj) + 1. Since 1 is a constant, we can find the smallest F(n – dj) first and then add 1 to it. Hence, we have the following recurrence for F(n): F(n) = min{j: n ≥ dj} F(n – dj) + 1 for n > 0 … (8.4) F(0) = 0. We can compute F(n) by filling a one-row table left to right, in a manner similar to the way it was done above for the coin-row problem, but computing a table entry here requires finding the minimum of up to m numbers.

  7. Algorithm ChangeMaking(D[1 .. m], n) //Applies dynamic programming to find the minimum number of //coins of denominations d1 < d2 < … < dm, where d1 = 1, that add up //to a given amount n. Input: Positive integer n and array D[1 .. m] of increasing positive integers indicating the coin denominations, where D[1] = 1 Output: The minimum number of coins that add up to n F[0] ← 0 for i ← 1 to n do { temp ← ∞; j ← 1; while j ≤ m and i ≥ D[j] do { //consider every coin D[j] that fits into amount i: temp ← min(F[i – D[j]], temp); //candidates F[i – D[1]], F[i – D[2]], …, F[i – D[k]] j ← j + 1 } //end while F[i] ← temp + 1; } //end for return F[n] T(m, n) = O(nm)
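A runnable Python sketch of the ChangeMaking pseudocode above (function name is mine; the early `break` mirrors the `i ≥ D[j]` loop guard):

```python
def change_making(denoms, n):
    """Bottom-up DP for the change-making problem (formula 8.4).

    denoms: increasing list of denominations with denoms[0] == 1.
    Returns the minimum number of coins adding up to n.
    """
    INF = float('inf')
    F = [0] + [INF] * n
    for i in range(1, n + 1):
        best = INF
        for d in denoms:
            if d > i:
                break          # denominations are sorted, none further fits
            best = min(best, F[i - d])
        F[i] = best + 1        # add the one coin of the chosen denomination
    return F[n]
```

On the slide's instance (denominations 1, 3, 4, amount 6) it yields 2 coins, matching Figure 8.2.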

  8. The application of the algorithm to amount n = 6 and denominations 1, 3, 4 is shown in Figure 8.2. The answer it yields is two coins. The time and space efficiencies of the algorithm are obviously O(nm) and Θ(n), respectively.

  9. F[0] = 0 F[1] = min{F[1 – 1]} + 1 = 1 F[2] = min{F[2 – 1]} + 1 = 2 F[3] = min{F[3 – 1], F[3 – 3]} + 1 = 1 F[4] = min{F[4 – 1], F[4 – 3], F[4 – 4]} + 1 = 1 F[5] = min{F[5 – 1], F[5 – 3], F[5 – 4]} + 1 = 2

  10. F[6] = min { F[6 - 1], F[6 – 3], F[6 – 4] } + 1 = 2 Figure 8.2 Application of Algorithm MinCoinChange to amount n = 6 and coin denominations 1, 3, 4.

  11. F[6] = min{F[6 – 1], F[6 – 3], F[6 – 4]} + 1 = 2 Figure 8.2 Application of Algorithm MinCoinChange to amount n = 6 and coin denominations 1, 3, 4. To find the coins of an optimal solution, we need to trace back the computations to see which of the denominations produced the minima in formula (8.4). For the instance considered, in the last application of the formula (for n = 6), the minimum was produced by d2 = 3. The second minimum (for n = 6 – 3) was also produced by a coin of that denomination. Thus, the minimum-coin set for n = 6 is two 3's. F[6] = min(F[6 – 4], F[6 – 3], F[6 – 1]) + 1 = F[3] + 1 = 1 + 1 = 2. That is, we need D[2] = 3 plus D[2] = 3 to get total n = 6.
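The trace-back described above can be sketched in Python as follows (a sketch under the same assumptions as before: `denoms` increasing with first element 1; the function name and tie-breaking by first matching denomination are mine):

```python
def change_making_with_coins(denoms, n):
    """Fills the table of formula (8.4), then traces the computation
    backward to recover one optimal multiset of coins."""
    INF = float('inf')
    F = [0] + [INF] * n
    for i in range(1, n + 1):
        F[i] = min(F[i - d] for d in denoms if d <= i) + 1
    coins = []
    i = n
    while i > 0:
        # pick a denomination that produced the minimum at amount i
        d = next(d for d in denoms if d <= i and F[i - d] + 1 == F[i])
        coins.append(d)
        i -= d
    return F[n], coins
```

For the instance considered it returns two coins of denomination 3, as in the slide's trace.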

  12. The top-down approach for MinCoinChange applied to amount n = 6 and coin denominations 1, 3, 4. [Recursion tree: F[6] calls F[6 – 1], F[6 – 3] (the branch marked * that produces the minimum), and F[6 – 4]; each of these in turn calls up to three smaller subproblems, down to F[0] = 0. Subproblems such as F[2 – 1] and F[1 – 1] appear repeatedly.] The top-down approach without a table is exponential; the bottom-up table-filling algorithm runs in T(m, n) = O(nm).

  13. Example 3: Coin-collecting problem [page 288, Levitin] • Several coins are placed in cells of an n × m board (that is, a board with n rows and m columns), no more than one coin per cell. • A robot, located in the upper left cell (1, 1) of the board, needs to collect as many of the coins as possible and bring them to the bottom right cell (n, m). • On each step, the robot can move either one cell to the right (→) or one cell down (↓) from its current location. • When the robot visits a cell with a coin, it always picks up that coin. • Design an algorithm to find the maximum number of coins the robot can collect and a path it needs to follow to do this.

  14. Therefore, the largest number of coins the robot can bring to cell (i, j) is the maximum of these two numbers plus one possible coin at cell (i, j) itself. Thus, we have the following formula for F(i, j): F(i, j) = max{F(i – 1, j) (from above), F(i, j – 1) (from the left)} + cij for 1 ≤ i ≤ n, 1 ≤ j ≤ m … (8.5) F(0, j) = 0 for 1 ≤ j ≤ m and F(i, 0) = 0 for 1 ≤ i ≤ n, where cij = 1 if there is a coin in cell (i, j), and cij = 0 otherwise.

  15. Algorithm RobotCoinCollection(C[1 .. n, 1 .. m]) //Applies dynamic programming to compute the largest number of //coins a robot can collect on an n × m board by starting at (1, 1) //and moving right and down to the lower right corner Input: Matrix C[1 .. n, 1 .. m] whose elements are equal to 1 and 0 for cells with and without a coin, respectively Output: Largest number of coins the robot can bring to cell (n, m) F[1, 1] ← C[1, 1]; for j ← 2 to m do //j refers to the columns F[1, j] ← F[1, j – 1] + C[1, j]; //compute all the entries of the first row for i ← 2 to n do { //i refers to the rows F[i, 1] ← F[i – 1, 1] + C[i, 1]; //compute all the entries of the first column for j ← 2 to m do F[i, j] ← max(F[i – 1, j], F[i, j – 1]) + C[i, j]; //best of coming from above or from the left, //plus the coin (if any) in the current cell } return F[n, m]

  16. Algorithm RobotCoinCollection(C[1 .. n, 1 .. m]) //Applies dynamic programming to compute the largest number of //coins a robot can collect on an n × m board by starting at (1, 1) //and moving right and down to the lower right corner Input: Matrix C[1 .. n, 1 .. m] whose elements are equal to 1 and 0 for cells with and without a coin, respectively Output: Largest number of coins the robot can bring to cell (n, m) F[1, 1] ← C[1, 1]; for j ← 2 to m do F[1, j] ← F[1, j – 1] + C[1, j]; for i ← 2 to n do { F[i, 1] ← F[i – 1, 1] + C[i, 1]; for j ← 2 to m do F[i, j] ← max(F[i – 1, j], F[i, j – 1]) + C[i, j]; } return F[n, m] T(m, n) = O(nm)
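A runnable Python sketch of RobotCoinCollection (function name and test boards are mine; a border row 0 and column 0 of zeros replace the separate first-row/first-column loops, which is equivalent because of the F(0, j) = F(i, 0) = 0 initial conditions):

```python
def robot_coin_collection(C):
    """Bottom-up DP (formula 8.5) on an n x m board.

    C: list of n rows of m entries, 1 where a cell holds a coin, else 0.
    Returns the largest number of coins the robot can bring to (n, m).
    """
    n, m = len(C), len(C[0])
    # Row 0 and column 0 implement the F(0, j) = F(i, 0) = 0 borders.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j], F[i][j - 1]) + C[i - 1][j - 1]
    return F[n][m]
```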

  17. Figure 8.3 (a) Coins to collect.

  18. F(i, j) = max{F(i – 1, j), F(i, j – 1)} + cij for 1 ≤ i ≤ n, 1 ≤ j ≤ m F(0, j) = 0 for 1 ≤ j ≤ m and F(i, 0) = 0 for 1 ≤ i ≤ n … (8.5) where cij = 1 if there is a coin in cell (i, j), and cij = 0 otherwise. Figure 8.3 (a) Coins to collect. Figure 8.3 (b) Dynamic programming algorithm results. F[1, 1] ← C[1, 1]; F[1, j] ← F[1, j – 1] + C[1, j], 2 ≤ j ≤ m;

  19. F(i, j) = max{F(i – 1, j), F(i, j – 1)} + cij for 1 ≤ i ≤ n, 1 ≤ j ≤ m F(0, j) = 0 for 1 ≤ j ≤ m and F(i, 0) = 0 for 1 ≤ i ≤ n … (8.5) where cij = 1 if there is a coin in cell (i, j), and cij = 0 otherwise. Use backtracking. Figure 8.3 (c) Two paths to collect 5 coins, the maximum number of coins possible.

  20. Tracing the computation backward makes it possible to get an optimal path: • if F(i – 1, j) > F(i, j – 1), an optimal path to cell (i, j) must come down from the adjacent cell above it, i.e., cell (i – 1, j); • if F(i – 1, j) < F(i, j – 1), an optimal path to cell (i, j) must come from the adjacent cell on the left, cell (i, j – 1); and • if F(i – 1, j) = F(i, j – 1), it can reach cell (i, j) from either direction. • This yields two optimal paths for the instance of Figure 8.3a, which are shown in Figure 8.3c. • If ties are ignored, one optimal path can be obtained in Θ(n + m) time.
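The backward trace described above can be sketched in Python as follows (a sketch: it rebuilds the full F table first, breaks ties toward the cell above, and returns the path as 1-based (row, col) cells; the function name is mine):

```python
def optimal_path(C):
    """Traces the computation of formula (8.5) backward from (n, m)
    to (1, 1) to recover one optimal path."""
    n, m = len(C), len(C[0])
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j], F[i][j - 1]) + C[i - 1][j - 1]
    path, i, j = [(n, m)], n, m
    while (i, j) != (1, 1):
        if j == 1 or (i > 1 and F[i - 1][j] >= F[i][j - 1]):
            i -= 1             # the maximum came from the cell above
        else:
            j -= 1             # the maximum came from the cell on the left
        path.append((i, j))
    return list(reversed(path))
```

Following the returned cells and summing their coins reproduces the optimal total; ignoring ties, the trace visits n + m − 1 cells, hence the Θ(n + m) bound.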

  21. The Knapsack Problem and Memory Functions • Design a dynamic programming algorithm for the knapsack problem: • Given n items of known weights w1, w2, …, wn and • their corresponding values v1, v2, …, vn and • a knapsack of capacity W (i.e., the total weight it can hold), • find the most valuable subset of the items that fit into the knapsack. • Assume that all the weights and the knapsack capacity are positive integers; • the item values do not have to be integers. • First, derive a recurrence relation that expresses • a solution to an instance of the knapsack problem in terms of solutions to its smaller sub-instances.

  22. Consider an instance defined by the first i items, 1 ≤ i ≤ n, with • weights w1, w2, …, wi, • values v1, v2, …, vi, and • knapsack capacity j, for 1 ≤ j ≤ W. • Let F(i, j) be the value of an optimal solution to this instance, i.e., • the value of the most valuable subset of the first i items that fit into the knapsack of capacity j. • Our goal is to find F(n, W), the maximal value of a subset of the n given items that fit into the knapsack of capacity W, and an optimal subset itself.

  23. Thus, the maximum of these two values is the value of an optimal solution among all feasible subsets of the first i items. If the ith item does not fit into the knapsack (j – wi < 0), the value of an optimal subset selected from the first i items is the same as the value of an optimal subset selected from the first i – 1 items. These observations lead to the following recurrence: F(i, j) = max{F(i – 1, j), vi + F(i – 1, j – wi)} if j – wi ≥ 0, F(i, j) = F(i – 1, j) if j – wi < 0. … (8.6) Define the initial conditions as follows: F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0. … (8.7) Our goal is to find F(n, W), the maximal value of a subset of the n given items that fit into the knapsack of capacity W, and an optimal subset itself.

  24. For i, j > 0, to compute F(i, j), the entry in the ith row and the jth column, we take the maximum of the entry in the previous row and the same column and the sum of vi and the entry in the previous row and wi columns to the left (i.e., column j – wi). The table can be filled either row by row or column by column. Figure 8.4 Table for solving the knapsack problem by dynamic programming.
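The table-filling scheme of formulas (8.6) and (8.7) can be sketched in Python as follows (the function name and the sample instance in the test are mine, not the slide's omitted Example 1 data):

```python
def knapsack(weights, values, W):
    """Bottom-up DP for the 0/1 knapsack (formulas 8.6 and 8.7).

    weights, values: the n items (1-based in the text, 0-based lists
    here), with positive integer weights; W: integer capacity.
    Returns F(n, W), the value of the most valuable feasible subset.
    """
    n = len(weights)
    # Row 0 and column 0 implement initial conditions (8.7).
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):       # fill row by row
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:   # item i fits: take max of both cases
                F[i][j] = max(F[i - 1][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
            else:                          # item i does not fit
                F[i][j] = F[i - 1][j]
    return F[n][W]
```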

  25. Example 1 Let us consider the instance given by the following data: capacity W = 5. F(i, j) = max{F(i – 1, j), vi + F(i – 1, j – wi)} if j – wi ≥ 0, F(i, j) = F(i – 1, j) otherwise.

  26. The time efficiency and space efficiency of this algorithm are both in ϴ( nW ). The time needed to find the composition of an optimal solution is in O( n ).

  27. Memory Functions • A naïve recursive solution is inefficient because it solves the same subproblems repeatedly. Instead, • we arrange for each subproblem to be solved only once, saving its solution. • If we need to refer to this subproblem's solution again later, we can just look it up, rather than recompute it. • …

  28. Dynamic programming thus uses additional memory to save computation time; • It is an example of a time-memory trade-off. • The savings may be dramatic: an exponential-time solution may be transformed into a polynomial-time solution. • A dynamic-programming approach runs in polynomial time when the number of distinct subproblems involved is polynomial in the input size and we can solve each such subproblem in polynomial time. •  …

  29. There are usually two equivalent ways to implement a dynamic programming approach: • The first approach is top-down with memoization. • We write the procedure recursively in a natural manner, but modified to save the result of each subproblem (usually in an array or hash table). • The procedure • first checks to see whether it has previously solved this subproblem. • If so, it returns the saved value, saving further computation at this level; • if not, the procedure computes the value in the usual manner. • We say that the recursive procedure has been memoized; it remembers what results it has computed previously.

  30. The second approach is the bottom-up method. • This approach depends on the “size” of a subproblem, such that solving any particular subproblem depends only on solving “smaller” subproblems. • We sort the subproblems by size and solve them in size order, smallest first. • When solving a particular subproblem, we have already solved all of the smaller subproblems its solution depends upon, and we have saved their solutions. • We solve each subproblem only once, and when we first see it, we have already solved all of its prerequisite subproblems.

  31. These two approaches yield algorithms with the same asymptotic running time, except in unusual circumstances where • the top-down approach does not actually recurse to examine all possible subproblems. • The bottom-up approach often has much better constant factors, since it has less overhead for procedure calls. [pp. 360-370, Cormen et al.]

  32. There is an alternative approach to dynamic programming that often offers the efficiency of the bottom-up dynamic programming approach while maintaining a top-down strategy. • The idea is to memoize the natural, but inefficient, recursive algorithm. • As in the bottom-up approach, we maintain a table with subproblem solutions, but the control structure for filling in the table is more like the recursive algorithm. • A memoized recursive algorithm maintains an entry in a table for the solution to each subproblem. • Each table entry initially contains a special value to indicate that the entry has yet to be filled in. • When the subproblem is first encountered as the recursive algorithm unfolds, its solution is computed and then stored in the table. • Each subsequent time that we encounter this subproblem, we simply look up the value stored in the table and return it.

  33. A bottom-up dynamic-programming algorithm usually outperforms the corresponding top-down memoized algorithm by a constant factor, if all subproblems must be solved at least once. • This is because the bottom-up algorithm has • no overhead for recursion and • less overhead for maintaining the table, • and it can sometimes exploit the regular pattern of table accesses to reduce time or space requirements even further.

  34. Alternatively, if some subproblems in the subproblem space need not be solved at all, the memoized solution (top-down memoized algorithm) has the advantage of solving only those subproblems that are definitely required. Using memory functions, a method solves a given problem in the top-down manner but, in addition, maintains a table of the kind that would have been used by a bottom-up dynamic programming algorithm.

  35. Initially, all the table's entries are initialized with a special "null" symbol to indicate that they have not yet been calculated. • Thereafter, whenever a new value needs to be calculated, the method checks the corresponding entry in the table first: • if this entry is not "null", it is simply retrieved from the table; • otherwise, it is computed by the recursive call, whose result is then recorded in the table.

  36. The following algorithm implements this idea for the knapsack problem. After initializing the table, the recursive function needs to be called with i = n (the number of items) and j = W (the knapsack capacity).

  37. Algorithm MFKnapsack(i, j) //Implements the memory function method for the knapsack problem Input: A nonnegative integer i indicating the number of the first items being considered and a nonnegative integer j indicating the knapsack capacity Output: The value of an optimal feasible subset of the first i items //Note: uses as global variables input arrays Weights[1 .. n], Values[1 .. n], //and table F[0 .. n, 0 .. W] whose entries are initialized with –1's except //for row 0 and column 0, which are initialized with 0's if F[i, j] < 0 { if j < Weights[i] value ← MFKnapsack(i – 1, j) else value ← max{MFKnapsack(i – 1, j), Values[i] + MFKnapsack(i – 1, j – Weights[i])}; F[i, j] ← value; } return F[i, j]
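MFKnapsack can be mirrored in runnable Python (a sketch: the wrapper function sets up the table and the global arrays become parameters captured by an inner function; names are mine):

```python
def mf_knapsack(weights, values, W):
    """Memory-function (top-down memoized) knapsack, mirroring
    Algorithm MFKnapsack. Entries start at -1 ('not yet computed');
    row 0 and column 0 are fixed at 0 per conditions (8.7)."""
    n = len(weights)
    F = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(n)]

    def solve(i, j):
        if F[i][j] < 0:                       # not yet computed
            if j < weights[i - 1]:            # item i does not fit
                value = solve(i - 1, j)
            else:
                value = max(solve(i - 1, j),
                            values[i - 1] + solve(i - 1, j - weights[i - 1]))
            F[i][j] = value                   # record in the table
        return F[i][j]

    return solve(n, W)
```

Unlike the bottom-up version, this computes only the entries the recursion actually reaches, as slide 38 illustrates.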

  38. Example 2 Let us apply the memory function method to the instance considered in Example 1. The table in Figure 8.6 gives the results. Only 11 out of 20 nontrivial values (i.e., not those in row 0 or in column 0) have been computed. Just one nontrivial entry, F(1, 2), is retrieved rather than being recomputed. For larger instances, the proportion of such entries can be significantly larger. Figure 8.6 Example of solving an instance of the knapsack problem by the memory function algorithm

  39. In general, we cannot expect more than a constant-factor gain in using the memory function method for the knapsack problem, because its time efficiency class is the same as that of the bottom-up algorithm (why?). • A more significant improvement can be expected for dynamic programming algorithms in which a computation of one value takes more than constant time. • A memory function algorithm may be less space-efficient than a space efficient version of a bottom-up algorithm.

  40. Optimal Binary Search Trees [page 297 – 304, Levitin] • One of the principal applications of a binary search tree is to implement a dictionary, a set of elements with the operations of searching, insertion, and deletion. • It is natural to pose a question about an optimal binary search tree for which the average number of comparisons in a search is the smallest possible. • For simplicity, our discussion will be limited to minimizing the average number of comparisons in a successful search. • The method can be extended to include unsuccessful searches.

  41. As an example, consider four keys A, B, C, and D to be searched for with probabilities 0.1, 0.2, 0.4, and 0.3, respectively. Figure 8.7 depicts two of the 14 possible binary search trees containing these keys: in the first, the keys form a chain A, B, C, D, each the right child of the previous one; in the second, B is the root with left child A and right child C, and D is the right child of C. Figure 8.7 Two out of 14 possible binary search trees with keys A, B, C, and D.

  42. The average number of comparisons in a successful search in the first of these trees is 0.1 * 1 + 0.2 * 2 + 0.4 * 3 + 0.3 * 4 = 2.9, and for the second one it is 0.1 * 2 + 0.2 * 1 + 0.4 * 2 + 0.3 * 3 = 2.1. Neither of these two trees is, in fact, optimal. (Can you tell which binary search tree is optimal?)

  43. [Optimal tree: C (0.4) is the root, with left child B (0.2), whose left child is A (0.1), and right child D (0.3).] The average number of comparisons in a successful search of this optimal tree is: 0.4 * 1 + 0.3 * 2 + 0.2 * 2 + 0.1 * 3 = 1.7.
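The averages quoted on these slides are just probability-weighted key levels, which a one-line Python helper makes explicit (the helper and its dictionary-based representation are mine):

```python
def average_comparisons(level, prob):
    """Average number of comparisons in a successful BST search:
    sum over all keys of (search probability) * (level of the key),
    counting the root as level 1.

    level, prob: dicts mapping each key to its level and probability.
    """
    return sum(prob[k] * level[k] for k in prob)
```

Applied to the chain tree, the B-rooted tree, and the optimal C-rooted tree, it reproduces 2.9, 2.1, and 1.7 respectively.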

  44. For this example, we could find the optimal tree by generating all 14 binary search trees with these keys. As a general algorithm, this exhaustive-search approach is unrealistic: the total number of binary search trees with n keys is equal to the nth Catalan number, which grows to infinity as fast as 4^n / n^1.5.

  45. For such a binary search tree (Figure 8.8), the root contains key ak, the left subtree contains keys ai, ai+1, …, ak–1 optimally arranged, and the right subtree contains keys ak+1, ak+2, …, aj, also optimally arranged. Figure 8.8 Binary search tree (BST) with root ak and two optimal binary search subtrees: an optimal BST for ai, ai+1, …, ak–1 and an optimal BST for ak+1, ak+2, …, aj.

  46. If we count tree levels starting with 1 to make the comparison numbers equal the keys’ levels, the following recurrence relation is obtained: