CS 332: Algorithms


Presentation Transcript


  1. CS 332: Algorithms Amortized Analysis Continued Longest Common Subsequence Dynamic Programming

  2. Administrivia • Midterm almost graded • Homework 4 assigned • Due: Tuesday the 28th (after Thanksgiving break)

  3. Review: MST Algorithms • In a connected, weighted, undirected graph, will the edge with the lowest weight be in the MST? Why or why not? • Yes: • If T is an MST of G, A ⊆ T is a subtree of T, and (u,v) is the minimum-weight edge connecting A to V−A, then (u,v) ∈ T • The lowest-weight edge must be in the tree (take A = ∅)

  4. Review: MST Algorithms • What do the disjoint sets in Kruskal’s algorithm represent? • A: The parts of the graph we have connected together so far

  5. Kruskal’s Algorithm Run the algorithm on the example graph (edge weights 1, 2, 5, 8, 9, 13, 14, 17, 19, 21, 25):

Kruskal() {
    T = ∅;
    for each v ∈ V
        MakeSet(v);
    sort E by increasing edge weight w;
    for each (u,v) ∈ E (in sorted order)
        if FindSet(u) ≠ FindSet(v)
            T = T ∪ {{u,v}};
            Union(FindSet(u), FindSet(v));
}

[Slides 5–19 repeat this slide as animation frames, considering each edge of the example graph in increasing weight order (1, 2, 5, 8, 9, 13, 14, …) and adding it to T when its endpoints lie in different sets. The graph figure itself does not survive in the transcript; only its edge weights do.]
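A minimal, self-contained C sketch of the algorithm above, run on a small hypothetical edge list (not the graph pictured on the slides). The disjoint sets here use a union-by-size forest, one common representation; the linked-list scheme discussed on slides 22–23 would work equally well.

/* Kruskal's algorithm on a hypothetical 5-vertex graph. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

static int parent[100], size[100];

static void make_set(int v) { parent[v] = v; size[v] = 1; }

static int find_set(int v) {
    while (parent[v] != v) v = parent[v];   /* walk up to the root */
    return v;
}

/* Union by size: merge the smaller tree into the larger. */
static void union_sets(int a, int b) {
    a = find_set(a); b = find_set(b);
    if (a == b) return;
    if (size[a] < size[b]) { int t = a; a = b; b = t; }
    parent[b] = a;
    size[a] += size[b];
}

static int cmp_edge(const void *x, const void *y) {
    return ((const Edge *)x)->w - ((const Edge *)y)->w;
}

int main(void) {
    Edge E[] = { {0,1,2}, {1,2,19}, {2,3,9}, {3,4,14},
                 {0,4,17}, {1,3,8}, {0,2,25}, {1,4,5} };
    int n = 5, m = sizeof E / sizeof E[0];

    for (int v = 0; v < n; v++) make_set(v);
    qsort(E, m, sizeof(Edge), cmp_edge);        /* sort E by weight */

    int total = 0;
    for (int i = 0; i < m; i++)                 /* edges in sorted order */
        if (find_set(E[i].u) != find_set(E[i].v)) {
            printf("take edge (%d,%d), weight %d\n", E[i].u, E[i].v, E[i].w);
            union_sets(E[i].u, E[i].v);
            total += E[i].w;
        }
    printf("MST total weight: %d\n", total);    /* 2+5+8+9 = 24 here */
    return 0;
}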

  20. Review: Shortest-Path Algorithms • How does the Bellman-Ford algorithm work? • How can we do better for DAGs? • Under what conditions can we use Dijkstra’s algorithm?

  21. Review: Running Time of Kruskal’s Algorithm • Expensive operations: • Sort edges: O(E lg E) • O(V) MakeSet()’s • O(E) FindSet()’s • O(V) Union()’s • Upshot: • Comes down to the efficiency of the disjoint-set operations, particularly Union()

  22. Review: Disjoint Set Union • So how do we represent disjoint sets? • Naïve implementation: use a linked list to represent each set’s elements, with pointers back to the set: • MakeSet(): O(1) • FindSet(): O(1) • Union(A,B): “copy” elements of A into set B by adjusting the elements of A to point to B: O(|A|) • How long could n Union()s take? O(n²), worst case

  23. Disjoint Set Union: Analysis • Worst-case analysis: O(n²) time for n Union()’s:
Union(S1, S2): “copy” 1 element
Union(S2, S3): “copy” 2 elements
…
Union(Sn−1, Sn): “copy” n−1 elements — O(n²) total
• Improvement: always copy the smaller set into the larger • How long would the above sequence of Union()’s take? • Worst case: n Union()’s take O(n lg n) time • Proof uses amortized analysis
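A minimal C sketch of the “copy the smaller set into the larger” heuristic, assuming a fixed universe of 8 elements. An array of set labels stands in for the lecture’s linked lists, so the relabeling scan below costs O(N) per call; with a real per-set list it would cost O(|A|), the bound used in the analysis. All names here are illustrative, not from the lecture.

#include <stdio.h>

#define N 8

int label[N];   /* label[e] = which set element e belongs to */
int count[N];   /* count[s] = how many elements carry label s */

void make_sets(void) {
    for (int e = 0; e < N; e++) { label[e] = e; count[e] = 1; }
}

int find_set(int e) { return label[e]; }   /* O(1), as on the slide */

/* "Copy" the smaller set into the larger by relabeling its members.
 * An element is relabeled only when its set at least doubles in size,
 * so each element moves at most lg n times: O(n lg n) total. */
void union_sets(int a, int b) {
    if (a == b) return;
    if (count[a] > count[b]) { int t = a; a = b; b = t; }  /* a = smaller */
    for (int e = 0; e < N; e++)
        if (label[e] == a) label[e] = b;
    count[b] += count[a];
    count[a] = 0;
}

int main(void) {
    make_sets();
    union_sets(find_set(0), find_set(1));   /* {0,1}                    */
    union_sets(find_set(2), find_set(0));   /* singleton {2} relabeled  */
    union_sets(find_set(3), find_set(0));   /* {3} relabeled into {0,1,2} */
    for (int e = 0; e < N; e++)
        printf("element %d is in set %d\n", e, find_set(e));
    return 0;
}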

  24. Amortized Analysis of Disjoint Sets • If elements are always copied from the smaller set into the larger, an element can be copied at most lg n times • Worst case: each time an element is copied, it was in the smaller set, so:
1st time copied: resulting set size ≥ 2
2nd time: resulting set size ≥ 4
…
(lg n)th time: resulting set size ≥ n

  25. Amortized Analysis of Disjoint Sets • Since we have n elements, each copied at most lg n times, n Union()’s take O(n lg n) time • Therefore we say the amortized cost of a Union() operation is O(lg n) • This is the aggregate method of amortized analysis: • n operations take time T(n) • Average cost of an operation = T(n)/n

  26. Amortized Analysis: Accounting Method • Accounting method: • Charge each operation an amortized cost • The amount not immediately used is stored in a “bank” • Later operations can use the stored money • The balance must not go negative • The book also discusses the potential method • But we won’t worry about it here

  27. Accounting Method Example: Dynamic Tables • When implementing a table (e.g., a hash table) for dynamic data, we want to make it as small as possible • Problem: if too many items are inserted, the table may be too small • Idea: allocate more memory as needed

  28. Dynamic Tables
1. Init table size m = 1
2. Insert elements until number n > m
3. Generate new table of size 2m
4. Reinsert old elements into new table
5. (back to step 2)
• What is the worst-case cost of an insert? • One insert can be costly, but the total?
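A minimal C sketch of the doubling scheme above; the names (Table, table_insert) are illustrative, not from the lecture.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int *slot;   /* storage                       */
    int  n;      /* number of elements stored     */
    int  m;      /* current capacity (table size) */
} Table;

void table_init(Table *t) {
    t->m = 1;
    t->n = 0;
    t->slot = malloc(sizeof(int) * t->m);
}

/* Insert x; on overflow, allocate a table of size 2m and re-insert
 * (copy) the old elements, as in steps 3-4 on the slide. */
void table_insert(Table *t, int x) {
    if (t->n == t->m) {
        int *bigger = malloc(sizeof(int) * 2 * t->m);
        memcpy(bigger, t->slot, sizeof(int) * t->n);   /* cost: n copies */
        free(t->slot);
        t->slot = bigger;
        t->m *= 2;
    }
    t->slot[t->n++] = x;                               /* cost: 1 */
}

int main(void) {
    Table t;
    table_init(&t);
    for (int i = 1; i <= 9; i++) {
        table_insert(&t, i);
        printf("after Insert(%d): table size %d\n", i, t.m);
    }
    free(t.slot);
    return 0;
}

Running it reproduces the size column of the example on the next slide: 1, 2, 4, 4, 8, 8, 8, 8, 16.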

  29. Analysis Of Dynamic Tables • Let c_i = cost of the ith insert • c_i = i if i−1 is an exact power of 2, 1 otherwise • Example (slides 29–37 build this table one row at a time):

Operation    Table Size   Cost
Insert(1)         1       1
Insert(2)         2       1 + 1 = 2
Insert(3)         4       1 + 2 = 3
Insert(4)         4       1
Insert(5)         8       1 + 4 = 5
Insert(6)         8       1
Insert(7)         8       1
Insert(8)         8       1
Insert(9)        16       1 + 8 = 9

  38. Aggregate Analysis • n Insert() operations cost: Σ c_i ≤ n + (1 + 2 + 4 + … + 2^⌊lg n⌋) < n + 2n = 3n • Average cost of an operation = (total cost)/(# operations) < 3n/n = 3 • Asymptotically, then, a dynamic table costs the same as a fixed-size table • Both O(1) per Insert operation

  39. Accounting Analysis • Charge each operation a $3 amortized cost • Use $1 to perform the immediate Insert() • Store $2 • When the table doubles: • $1 reinserts the old item, $1 reinserts another old item • Point is, we’ve already paid these costs • Upshot: constant (amortized) cost per operation
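As a sanity check on the accounting argument, the hypothetical simulation below charges $3 per insert, pays $1 for the insert itself and $1 per copied element at each doubling, and confirms the bank balance never goes negative.

#include <stdio.h>

int main(void) {
    int  size = 1;    /* current table size        */
    int  n    = 0;    /* elements stored           */
    long bank = 0;    /* stored credit, in dollars */

    for (int i = 1; i <= 1000000; i++) {
        bank += 3;            /* amortized charge for this insert */
        if (n == size) {
            bank -= n;        /* $1 to re-insert each old element */
            size *= 2;
        }
        bank -= 1;            /* $1 for the immediate insert      */
        n++;
        if (bank < 0) { printf("bank went negative at i = %d\n", i); return 1; }
    }
    printf("after %d inserts, bank = $%ld (never negative)\n", n, bank);
    return 0;
}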

  40. Accounting Analysis • Suppose we must support insert & delete, so the table should contract as well as expand • Table overflows → double it (as before) • Table < 1/2 full → halve it: BAD IDEA (why?) • Better: table < 1/4 full → halve it • Charge $3 for Insert (as before) • Charge $2 for Delete: • Store the extra $1 in the emptied slot • Use it later to pay to copy the remaining items to the new table when the table shrinks
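A minimal C sketch of this expand/contract policy, reusing the hypothetical Table type from the earlier insert sketch; the comment in table_delete spells out why halving at < 1/2 full is the bad idea the slide alludes to.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int *slot; int n, m; } Table;

static void resize(Table *t, int new_m) {
    int *s = malloc(sizeof(int) * new_m);
    memcpy(s, t->slot, sizeof(int) * t->n);   /* re-insert the survivors */
    free(t->slot);
    t->slot = s;
    t->m = new_m;
}

void table_insert(Table *t, int x) {
    if (t->n == t->m) resize(t, 2 * t->m);    /* overflow: double */
    t->slot[t->n++] = x;
}

void table_delete(Table *t) {
    if (t->n == 0) return;
    t->n--;
    /* Halving at < 1/2 full would let alternating inserts/deletes right
     * at the boundary trigger a full resize on every operation; waiting
     * until < 1/4 full leaves slack on both sides of each resize. */
    if (t->m > 1 && t->n < t->m / 4) resize(t, t->m / 2);
}

int main(void) {
    Table t = { malloc(sizeof(int)), 0, 1 };
    for (int i = 0; i < 32; i++) table_insert(&t, i);
    for (int i = 0; i < 30; i++) table_delete(&t);
    printf("n = %d, m = %d\n", t.n, t.m);     /* table has contracted */
    free(t.slot);
    return 0;
}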

  41. Dynamic Programming • Another strategy for designing algorithms is dynamic programming • A metatechnique, not an algorithm (like divide & conquer) • The word “programming” is historical and predates computer programming • Use it when a problem breaks down into recurring small subproblems • This lecture: a driving problem • Next lecture: the algorithm

  42. Dynamic Programming Example: Longest Common Subsequence • Longest common subsequence (LCS) problem: • Given two sequences x[1..m] and y[1..n], find the longest subsequence that occurs in both • Ex: x = {A B C B D A B}, y = {B D C A B A} • {B C} and {A A} are both subsequences of both • What is the LCS? • Brute-force algorithm: for every subsequence of x, check whether it is a subsequence of y • How many subsequences of x are there? • What will be the running time of the brute-force algorithm?

  43. LCS Algorithm • Brute-force algorithm: 2^m subsequences of x to check against n elements of y: O(n·2^m) • We can do better: for now, let’s only worry about finding the length of the LCS • When finished, we will see how to backtrack from this solution to the actual LCS • Define c[i,j] to be the length of the LCS of x[1..i] and y[1..j] • What is the length of the LCS of x and y?

  44. Finding LCS Length • Theorem:
c[i,j] = 0                          if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1             if i,j > 0 and x[i] = y[j]
c[i,j] = max(c[i-1,j], c[i,j-1])    if i,j > 0 and x[i] ≠ y[j]
• What is this really saying?
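A minimal C sketch of this recurrence, computing only the LCS length (recovering the subsequence itself by backtracking is next lecture’s topic); it uses the example strings from slide 42. C strings are 0-indexed, so x[i-1] below plays the role of the slide’s x[i].

#include <stdio.h>
#include <string.h>

#define MAXN 100

int lcs_length(const char *x, const char *y) {
    int m = strlen(x), n = strlen(y);
    int c[MAXN + 1][MAXN + 1];

    for (int i = 0; i <= m; i++) c[i][0] = 0;   /* base case: j = 0 */
    for (int j = 0; j <= n; j++) c[0][j] = 0;   /* base case: i = 0 */

    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            if (x[i-1] == y[j-1])               /* x[i] = y[j]: extend */
                c[i][j] = c[i-1][j-1] + 1;
            else                                /* drop one symbol: take max */
                c[i][j] = c[i-1][j] > c[i][j-1] ? c[i-1][j] : c[i][j-1];

    return c[m][n];   /* length of LCS of x[1..m] and y[1..n] */
}

int main(void) {
    /* "BCBA" (or "BCAB") is an LCS of these, so this prints 4. */
    printf("LCS length = %d\n", lcs_length("ABCBDAB", "BDCABA"));
    return 0;
}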

  45. The End
