
Chapter 10: Algorithm Design Techniques


Presentation Transcript


1. Chapter 10: Algorithm Design Techniques
• Greedy Algorithms
• Divide-And-Conquer Algorithms
• Dynamic Programming
• Randomized Algorithms
• Backtracking Algorithms

2. Greedy Algorithms

Certain problems lend themselves to a greedy approach, i.e., to obtain an optimal global solution, just make a series of optimal local steps.

Greedy Example 1: Multiprocessor Scheduling

To schedule n jobs j1, j2, …, jn, with respective running times t1, t2, …, tn, on p processors, just cycle through the processors, assigning the job with the smallest running time to the next processor. The result will be a schedule that minimizes the average completion time for all of the jobs.

JOB:              j1  j2  j3  j4  j5  j6  j7  j8  j9  j10  j11  j12  j13  j14  j15
RUNNING TIME:      8  19  24  37   3  30  13  40   5   21   32   10   27   35   16
COMPLETION TIME:   8  24  34  71   3  49  13  80   5   29   56   10   40   64   19

Processor 1: j5 (3) → j15 (16) → j6 (30)
Processor 2: j9 (5) → j2 (19) → j11 (32)
Processor 3: j1 (8) → j10 (21) → j14 (35)
Processor 4: j12 (10) → j3 (24) → j4 (37)
Processor 5: j7 (13) → j13 (27) → j8 (40)

Average Completion Time: 33.67
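The greedy rule is short enough to state directly in code. A minimal sketch (the function name is ours): sort the jobs by running time, deal them out to the processors in round-robin order, and sum each job's completion time on its processor.

```python
def greedy_schedule(times, p):
    """Assign jobs (shortest first) to p processors in round-robin order."""
    order = sorted(range(len(times)), key=lambda j: times[j])
    schedule = [[] for _ in range(p)]
    for slot, job in enumerate(order):
        schedule[slot % p].append(job)
    # A job's completion time is the sum of the running times of all
    # jobs scheduled on its processor up to and including itself.
    total = 0
    for proc in schedule:
        elapsed = 0
        for job in proc:
            elapsed += times[job]
            total += elapsed
    return schedule, total / len(times)

# The job list from the table above.
times = [8, 19, 24, 37, 3, 30, 13, 40, 5, 21, 32, 10, 27, 35, 16]
schedule, avg = greedy_schedule(times, 5)
print(round(avg, 2))  # 33.67 (i.e., 505/15), matching the slide
```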

3. Greedy Example 2: Huffman Codes

To compress the binary representation of textual data as much as possible, a greedy approach known as Huffman coding can be used.

Original Text: I DO NOT LIKE GREEN EGGS AND HAM. I DO NOT LIKE THEM, SAM-I-AM.

CHARACTER:  A  D  E  G  H  I  K  L  M  N  O  R  S  T  space  ,  .  -
FREQUENCY:  4  3  6  3  2  5  2  2  4  4  4  1  2  3     14  1  2  2

Create a binary tree by placing the characters in the leaf nodes and repeatedly joining the pair of least frequent characters with a common parent, replacing that pair with a new “character” that has their combined frequencies. The first two merges:

Merge R (1) and , (1) into a subtree {R,} of frequency 2.
Merge H (2) and K (2) into a subtree {HK} of frequency 4.

4. Huffman tree construction, continued:

Merge L (2) and S (2) into a subtree {LS} of frequency 4.
Merge . (2) and - (2) into a subtree {.-} of frequency 4.
Merge {R,} (2) and D (3) into a subtree {DR,} of frequency 5.
Merge G (3) and T (3) into a subtree {GT} of frequency 6.
Merge A (4) and M (4) into a subtree {AM} of frequency 8.

5. Huffman tree construction, continued:

Merge {HK} (4) and {LS} (4) into a subtree {HKLS} of frequency 8.
Merge N (4) and O (4) into a subtree {NO} of frequency 8.
Merge {.-} (4) and {DR,} (5) into a subtree {DR,.-} of frequency 9.
Merge I (5) and E (6) into a subtree {EI} of frequency 11.

6. Huffman tree construction, continued:

Merge {GT} (6) and {AM} (8) into a subtree {AMGT} of frequency 14.
Merge {HKLS} (8) and {NO} (8) into a subtree {HKLSNO} of frequency 16.
Merge {DR,.-} (9) and {EI} (11) into a subtree {EIDR,.-} of frequency 20.

7. Huffman tree construction, concluded:

Merge {AMGT} (14) and space (14) into a subtree of frequency 28.
Merge {HKLSNO} (16) and {EIDR,.-} (20) into a subtree of frequency 36.
Merge the two remaining subtrees (28 and 36) to form the root.

The Huffman code for each character is determined by traversing the tree from the root to the character’s leaf node, adding a zero to the code for each left offspring encountered and adding a one to the code for each right offspring encountered.

CHARACTER  HUFFMAN CODE
A          0000
D          10000
E          1001
G          0010
H          11000
I          1010
K          11001
L          11010
M          0001
N          1110
O          1111
R          100010
S          11011
T          0011
space      01
,          100011
.          10110
-          10111

Note that if five bits per character had been used, then the original message would have needed 320 bits, but with the Huffman code it only needs 247 bits.
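As a sketch of the construction, the following Python uses a binary heap to repeatedly merge the two least frequent subtrees (the helper names are ours). Tie-breaking among equal frequencies may differ from the figure above, but any Huffman tree for these frequencies yields the same optimal total of 247 bits.

```python
import heapq

def huffman_codes(freq):
    # Each heap entry is (frequency, tie-breaker, tree); a tree is
    # either a single character or a (left, right) pair of subtrees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")      # left edge contributes a 0
            walk(tree[1], prefix + "1")      # right edge contributes a 1
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

freq = {'A': 4, 'D': 3, 'E': 6, 'G': 3, 'H': 2, 'I': 5, 'K': 2, 'L': 2,
        'M': 4, 'N': 4, 'O': 4, 'R': 1, 'S': 2, 'T': 3, ' ': 14,
        ',': 1, '.': 2, '-': 2}
codes = huffman_codes(freq)
print(sum(freq[ch] * len(codes[ch]) for ch in freq))  # 247
```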

8. Divide-And-Conquer Algorithms

Another common algorithmic technique is the divide-and-conquer approach, i.e., recursively divide the problem into smaller problems, from which the global solution can be obtained.

Divide-And-Conquer Example 1: Closest-Points Problem

Given n points p1, p2, …, pn in two-space, to find the pair with the smallest distance between them, just sort the points by their x-coordinates, sort them in a separate list by their y-coordinates, and split the problem at the median x-coordinate. The closest pair is either the pair discovered recursively on the left side of the partition, the pair discovered recursively on the right side of the partition, or some pair that straddles the partition.

(Figure: the point set split by a vertical partition line, with the minimum-distance pair in the left partition and the minimum-distance pair in the right partition marked.)

9. Let δ be the minimum of the two recursively computed distances, and examine all possible straddling pairs in the strip of width 2δ surrounding the partition line. For each point in the strip, only the δ × δ square below the point’s y-coordinate and on the opposite side of the partition needs to be examined; since the points in that square lie on one side of the partition and are therefore at least δ apart, it can contain at most four points, resulting in at most four comparisons. This yields O(n log n) time complexity for the entire algorithm.
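A sketch of the recursion, with the strip check as described above. For brevity this version re-sorts the strip by y at each level (costing an extra log factor) rather than carrying the y-sorted list through the recursion as the O(n log n) version does; the function names are ours.

```python
import math

def closest_pair(points):
    pts = sorted(points)                      # sort once by x-coordinate
    return _closest(pts)

def _closest(pts):
    n = len(pts)
    if n <= 3:                                # brute-force the small cases
        return min(math.dist(p, q)
                   for i, p in enumerate(pts) for q in pts[i + 1:])
    mid = n // 2
    x_line = pts[mid][0]                      # vertical partition line
    delta = min(_closest(pts[:mid]), _closest(pts[mid:]))
    # Examine straddling pairs: only points within delta of the line,
    # and only neighbors within delta of each other in y, can matter.
    strip = sorted((p for p in pts if abs(p[0] - x_line) < delta),
                   key=lambda p: p[1])
    for i, p in enumerate(strip):
        for q in strip[i + 1:]:
            if q[1] - p[1] >= delta:          # too far apart vertically
                break
            delta = min(delta, math.dist(p, q))
    return delta

print(closest_pair([(0, 0), (5, 4), (3, 1), (9, 6), (2, 7)]))  # 3.162...
```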

10. Divide-And-Conquer Example 2: Matrix Multiplication

Traditional matrix multiplication of two n × n matrices takes Θ(n³) time. A divide-and-conquer approach can reduce this to O(n^(log₂ 7)) ≈ O(n^2.81). Splitting every n × n matrix into four n/2 × n/2 matrices, we note that if:

    [ C11 C12 ]   [ A11 A12 ] [ B11 B12 ]
    [ C21 C22 ] = [ A21 A22 ] [ B21 B22 ]

then C11 = A11·B11 + A12·B21, C12 = A11·B12 + A12·B22, C21 = A21·B11 + A22·B21, and C22 = A21·B12 + A22·B22.

Using the following matrices:

    P = (A11 + A22)(B11 + B22)    T = (A11 + A12)B22
    Q = (A21 + A22)B11            U = (A21 − A11)(B11 + B12)
    R = A11(B12 − B22)            V = (A12 − A22)(B21 + B22)
    S = A22(B21 − B11)

we obtain C11 = P + S − T + V, C12 = R + T, C21 = Q + S, and C22 = P + R − Q + U. This technique uses 7 matrix multiplications and 18 matrix additions, compared to the traditional 8 multiplications and 4 additions. (Note that one matrix multiplication performs n³ scalar multiplications and n³ − n² scalar additions, while one matrix addition performs only n² scalar additions.)
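The seven products translate directly into code. Here is a sketch for matrices whose size is a power of two; NumPy is assumed only for convenient block arithmetic, and the cutoff below which we fall back to ordinary multiplication is an arbitrary choice.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                      # small case: ordinary product
        return A @ B
    h = n // 2                           # split into n/2 x n/2 blocks
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    P = strassen(A11 + A22, B11 + B22)   # the 7 recursive products
    Q = strassen(A21 + A22, B11)
    R = strassen(A11, B12 - B22)
    S = strassen(A22, B21 - B11)
    T = strassen(A11 + A12, B22)
    U = strassen(A21 - A11, B11 + B12)
    V = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = P + S - T + V            # C11
    C[:h, h:] = R + T                    # C12
    C[h:, :h] = Q + S                    # C21
    C[h:, h:] = P + R - Q + U            # C22
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```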

11. Dynamic Programming

Dynamic programming techniques determine a problem’s solution by means of a sequence of smaller decisions.

Dynamic Programming Example: Optimal Binary Search Trees

Given n words w1, w2, …, wn with respective probabilities of occurrence p1, p2, …, pn, we wish to place the words in a binary search tree in such a way that the average access time is minimized. We take advantage of the fact that subtrees in the binary search tree contain complete ranges of values. To minimize the cost of the entire tree, then, we select the root word wi for which the sum of the left subtree’s cost, the right subtree’s cost, and the sum of all word probabilities in the range is minimal. For instance, assume the following word probabilities:

WORD:         a    and  he   I    it   not  or   she  the  you
PROBABILITY:  .09  .13  .07  .14  .07  .10  .07  .07  .15  .11

The minimum-cost trees for each range can be obtained dynamically, each represented by its range of word values, its cost, and its root.

12. Range Size 1:
RANGE     ROOT  COST
a..a      a     .09
and..and  and   .13
he..he    he    .07
I..I      I     .14
it..it    it    .07
not..not  not   .10
or..or    or    .07
she..she  she   .07
the..the  the   .15
you..you  you   .11

Range Size 2:
RANGE     ROOT  COST
a..and    and   .31
and..he   and   .27
he..I     I     .28
I..it     I     .28
it..not   not   .24
not..or   not   .24
or..she   or    .21
she..the  the   .29
the..you  the   .37

Range Size 3:
RANGE     ROOT  COST
a..he     and   .45
and..I    he    .61
he..it    I     .42
I..not    it    .55
it..or    not   .38
not..she  or    .41
or..the   the   .50
she..you  the   .51

Range Size 4:
RANGE     ROOT  COST
a..I      and   .80
and..it   I     .75
he..not   I     .69
I..or     not   .73
it..she   not   .59
not..the  or    .78
or..you   the   .72

Range Size 5:
RANGE     ROOT  COST
a..it     and   1.01
and..not  I     1.02
he..or    I     .90
I..she    not   .94
it..the   or    .99
not..you  the   1.02

Range Size 6:
RANGE     ROOT  COST
a..not    I     1.29
and..or   not   1.40
he..she   not   1.15
I..the    not   1.38
it..you   the   1.27

Range Size 7:
RANGE     ROOT  COST
a..or     I     1.50
and..she  I     1.51
he..the   not   1.59
I..you    not   1.71

Range Size 8:
RANGE     ROOT  COST
a..she    I     1.78
and..the  not   2.05
he..you   not   1.92

Range Size 9:
RANGE     ROOT  COST
a..the    I     2.33
and..you  not   2.38

Range Size 10:
RANGE     ROOT  COST
a..you    I     2.72

13. For instance, to determine the best size-5 subtree for the range I..she (total probability .45), this process compares the five possibilities:

Root I:   left = NULL,   right = it..she;   COST: 0 + .59 + .45 = 1.04
Root it:  left = I..I,   right = not..she;  COST: .14 + .41 + .45 = 1.00
Root not: left = I..it,  right = or..she;   COST: .28 + .21 + .45 = .94  (cheapest)
Root or:  left = I..not, right = she..she;  COST: .55 + .07 + .45 = 1.07
Root she: left = I..or,  right = NULL;      COST: .73 + 0 + .45 = 1.18

The cheapest binary search tree (cost 2.72):

                I
            /       \
         and         the
        /   \       /   \
       a     he   not    you
                 /   \
                it    or
                        \
                        she
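The tables above can be reproduced with a few lines of dynamic programming. This sketch fills cost and root tables by increasing range size, exactly as on the previous slide; the variable names are ours.

```python
words = ['a', 'and', 'he', 'I', 'it', 'not', 'or', 'she', 'the', 'you']
p = [.09, .13, .07, .14, .07, .10, .07, .07, .15, .11]
n = len(words)

cost = [[0.0] * n for _ in range(n)]   # cost[i][j]: best cost for words i..j
root = [[None] * n for _ in range(n)]  # root[i][j]: root achieving that cost

for size in range(1, n + 1):
    for i in range(n - size + 1):
        j = i + size - 1
        weight = sum(p[i:j + 1])       # every probability in the range
        best, best_r = float('inf'), None
        for r in range(i, j + 1):      # try each word as the root
            left = cost[i][r - 1] if r > i else 0.0
            right = cost[r + 1][j] if r < j else 0.0
            if left + right + weight < best:
                best, best_r = left + right + weight, r
        cost[i][j], root[i][j] = best, best_r

print(words[root[0][n - 1]], round(cost[0][n - 1], 2))  # I 2.72
```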

14. Randomized Algorithms

Randomized algorithms use random numbers at certain key points in the program to make decisions. While the worst-case time complexity of such algorithms is unaffected, the expected time complexity becomes correlated to the distribution function for the random number generator, which can improve the performance significantly.

Randomized Example: Skip Lists

Let’s alter the definition of a linked list to facilitate the application of binary searches to it. Specifically, assuming that the list will have at most n elements, let each node have between 1 and log n pointers to later nodes, and let’s ensure that the ith pointer of each node points to a node with at least i pointers. We do this by implementing the insert operation as follows: starting at the header’s highest pointer, traverse the list until the next node is larger than the new value (or null), at which point the process is continued at the next lower level pointer. When this process halts, a new node is inserted after the last node where a level shift occurred; this node contains the new value and a number of pointers between 1 and log n. Here’s the randomized part: this number is chosen randomly, with a distribution such that 1 is chosen half the time, 2 is chosen one-quarter of the time, 3 is chosen one-eighth of the time, etc.

15. (Figure: the skip list after each of the following insertions, where R is the randomly chosen number of pointers for the new node.)

Insert 35 (R=2), Insert 50 (R=1), Insert 20 (R=1), Insert 65 (R=1), Insert 40 (R=3), Insert 10 (R=2), Insert 60 (R=2), Insert 25 (R=1), Insert 55 (R=4), Insert 15 (R=1).
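A minimal sketch of the insert operation, with node levels drawn from the geometric distribution described above; the class layout and the max_level cap are our assumptions.

```python
import random

class Node:
    def __init__(self, value, level):
        self.value = value
        self.next = [None] * level          # next[i]: successor at level i

class SkipList:
    def __init__(self, max_level=4):
        self.max_level = max_level          # roughly log2(n) for n elements
        self.head = Node(None, max_level)   # header node with all levels

    def random_level(self):
        level = 1
        while level < self.max_level and random.random() < 0.5:
            level += 1                      # 1 w.p. 1/2, 2 w.p. 1/4, ...
        return level

    def insert(self, value):
        # Walk down from the highest level, recording the node at which
        # each downward level shift occurs.
        update = [self.head] * self.max_level
        node = self.head
        for i in reversed(range(self.max_level)):
            while node.next[i] is not None and node.next[i].value < value:
                node = node.next[i]
            update[i] = node
        new = Node(value, self.random_level())
        for i in range(len(new.next)):      # splice in at each of its levels
            new.next[i] = update[i].next[i]
            update[i].next[i] = new

s = SkipList()
for v in [35, 50, 20, 65, 40, 10, 60, 25, 55, 15]:
    s.insert(v)
node, out = s.head.next[0], []
while node is not None:                     # bottom level holds every value
    out.append(node.value)
    node = node.next[0]
print(out)  # [10, 15, 20, 25, 35, 40, 50, 55, 60, 65]
```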

16. Backtracking Algorithms

Backtracking algorithms are usually a variation of exhaustive searching, where the search is halted whenever the situation becomes untenable, and the algorithm “backtracks” to the last point where a dubious decision was made.

Backtracking Example: Game Playing

When developing a computer game program, one common technique is the minimax procedure, in which the program determines the next move to take based upon its attempt to maximize its chances of victory while assuming that its human opponent will try to minimize those chances. A tree structure is used for this purpose: at the even levels, choose the move that will minimize the computer’s chances of victory (i.e., the move the human would make); at the odd levels, choose the move that will maximize the computer’s chances of victory.

(Figure: a game tree with the current game status at the root, the computer’s possible moves #1 through #4 at the first level, the human’s possible responses at the second level, and the computer’s possible moves again at the third.)

17. Let’s try a simple tic-tac-toe example (assuming that the computer is ‘X’ and that it is X’s turn to move):

X O –
O – X
– O –

(Figure: the full game tree below this position, expanded to every possible conclusion.) The computer’s four possible moves lead to subtrees with the following outcomes:

• Six outcomes: 2 human wins, 2 computer wins, 2 draws
• Five outcomes: 3 human wins, 2 draws
• Five outcomes: 3 human wins, 2 computer wins
• Five outcomes: 4 human wins, 1 computer win

Although the human could win no matter which move the computer makes next, the computer’s odds are better with the second move (taking the center square). A more thorough analysis reveals that if the computer makes that move, the human’s response will lead to either a computer victory or a draw. Furthermore, each of the other moves leads to an inevitable human victory.
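A sketch of the minimax recursion applied to this position, scoring a computer win as +1, a human win as -1, and a draw as 0; it confirms that only the center move avoids a forced human victory. The board encoding and function names are ours.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '-' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1           # computer win / human win
    moves = [i for i, cell in enumerate(board) if cell == '-']
    if not moves:
        return 0                               # board full: draw
    scores = []
    for m in moves:                            # try each move, then undo it
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = '-'
    # Maximize at the computer's levels, minimize at the human's.
    return max(scores) if player == 'X' else min(scores)

board = list('XO-O-X-O-')                      # the position shown above
for m in (i for i, c in enumerate(board) if c == '-'):
    board[m] = 'X'
    print(m, minimax(board, 'O'))  # center (cell 4) scores 0; the rest -1
    board[m] = '-'
```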
