
UNIT-II



Presentation Transcript


  1. ANALYSIS AND DESIGN OF ALGORITHMS UNIT-II CHAPTER 5: DECREASE-AND-CONQUER

  2. OUTLINE : • Decrease-and-Conquer • Insertion Sort • Depth-First Search • Breadth-First Search • Topological Sorting • Algorithms for Generating Combinatorial Objects • Generating Permutations • Generating Subsets

  3. Decrease-and-Conquer • The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem. • Once such a relationship is established, it can be exploited either top down (recursively) or bottom up (without recursion).

  4. Decrease-and-Conquer There are three major variations of decrease-and-conquer: • Decrease by a constant: insertion sort; graph algorithms (DFS, BFS, topological sorting); algorithms for generating permutations and subsets. • Decrease by a constant factor: binary search. • Variable-size decrease: Euclid’s algorithm.

  5. In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant (typically, this constant is equal to one) on each iteration of the algorithm. Figure: Decrease (by one)-and-conquer technique: a problem of size n is reduced to a subproblem of size n-1, and the solution to the subproblem is extended to a solution to the original problem.

  6. Decrease-by-a-Constant • For example, consider the exponentiation problem of computing a^n for positive integer exponents. • The relationship between a solution to an instance of size n and an instance of size n-1 is given by the formula a^n = a^(n-1) · a. • So the function f(n) = a^n can be computed “top down” by using its recursive definition: f(n) = f(n-1) · a if n > 1, and f(n) = a if n = 1. • Function f(n) = a^n can also be computed “bottom up” by multiplying a by itself n-1 times.

  7. In the decrease-by-a-constant-factor variation, the size of a problem instance is reduced by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two. Figure: Decrease (by half)-and-conquer technique: a problem of size n is reduced to a subproblem of size n/2, and the solution to the subproblem is extended to a solution to the original problem.

  8. Decrease-by-a-Constant-Factor: • For example, consider the exponentiation problem of computing a^n for positive integer exponents. • If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))^2. • If n is odd, we have to compute a^(n-1) by using the rule for even-valued exponents and then multiply the result by a. • To summarize, we have the following formula: a^n = (a^(n/2))^2 if n is even and positive; a^n = (a^((n-1)/2))^2 · a if n is odd and greater than 1; a^n = a if n = 1. • If we compute a^n recursively according to the above formula and measure the algorithm’s efficiency by the number of multiplications, the algorithm is in O(log n) because, on each iteration, the size is reduced by at least one half at the expense of no more than two multiplications.

  9. Consider the problem of exponentiation: compute a^n.
Brute force: a^n = a * a * a * a * . . . * a
Divide and conquer: a^n = a^(n/2) * a^(n/2)
Decrease by one: a^n = a^(n-1) * a
Decrease by constant factor: a^n = (a^(n/2))^2 if n is even; a^n = (a^((n-1)/2))^2 · a if n is odd

  10. Variable-Size-Decrease: • In this variation, the size-reduction pattern varies from one iteration of the algorithm to another. • Euclid’s algorithm for computing the GCD provides a good example of such a situation: gcd(m, n) = gcd(n, m mod n). • Though the arguments on the right-hand side are always smaller than those on the left-hand side, they are smaller neither by a constant nor by a constant factor.

  11. Insertion Sort • The decrease-by-one technique is applied to sort an array A[0 . . . n-1]. • We assume that the smaller problem of sorting the array A[0 . . . n-2] has already been solved to give us a sorted array of size n – 1: A[0] ≤ . . . ≤ A[n – 2]. • How can we take advantage of this solution to the smaller problem to get a solution to the original problem by taking into account the element A[n-1]? • All we need is to find an appropriate position for A[n – 1] among the sorted elements and insert it there.

  12. Insertion Sort • There are three reasonable alternatives for finding an appropriate position for A[n – 1] among the sorted elements: • First, we can scan the sorted subarray from left to right until the first element greater than or equal to A[n – 1] is encountered and then insert A[n – 1] right before that element. • Second, we can scan the sorted subarray from right to left until the first element smaller than or equal to A[n – 1] is encountered and then insert A[n – 1] right after that element. • The second alternative is the one implemented in practice because it works better for sorted or almost-sorted arrays. The resulting algorithm is called straight insertion sort, or simply insertion sort.

  13. Insertion Sort • The third alternative is to use binary search to find an appropriate position for A[n – 1] in the sorted portion of the array. The resulting algorithm is called binary insertion sort. • It improves the number of comparisons (worst-case and average-case). • It still requires the same number of element shifts.

  14. Insertion Sort As shown in the figure below, starting with A[1] and ending with A[n – 1], A[i] is inserted in its appropriate place among the first i elements of the array, which have already been sorted (but, unlike in selection sort, are generally not in their final positions):
A[0] ≤ . . . ≤ A[j] < A[j + 1] ≤ . . . ≤ A[i – 1] | A[i] . . . A[n – 1]
(The elements A[0] through A[j] are smaller than or equal to A[i]; the elements A[j + 1] through A[i – 1] are greater than A[i].)

  15. Pseudocode of Insertion Sort
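The pseudocode image for this slide did not survive the transcript. The following is a sketch of the standard straight insertion sort pseudocode, consistent with the C implementation on the next slides:

```text
ALGORITHM InsertionSort(A[0..n-1])
// Sorts a given array by insertion sort
// Input: An array A[0..n-1] of n orderable elements
// Output: Array A[0..n-1] sorted in nondecreasing order
for i <- 1 to n - 1 do
    v <- A[i]
    j <- i - 1
    while j >= 0 and A[j] > v do
        A[j + 1] <- A[j]
        j <- j - 1
    A[j + 1] <- v
```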

  16. Insertion Sort program:

#include <stdio.h>

void insertion(int *, int);   /* defined on the next slide */

int main(void)
{
    int a[10], n, i;
    printf("\nEnter the size of the array\n");
    scanf("%d", &n);
    printf("\nEnter the array elements\n");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("\nArray before sorting\n");
    for (i = 0; i < n; i++)
        printf("%d\n", a[i]);
    insertion(a, n);
    printf("\nArray after sorting\n");
    for (i = 0; i < n; i++)
        printf("%d\n", a[i]);
    return 0;
}

  17. Insertion Sort program:

void insertion(int *a, int n)
{
    int i, j, v;
    for (i = 1; i < n; i++) {
        v = a[i];                        /* element to be inserted */
        j = i - 1;
        while ((j >= 0) && (a[j] > v)) { /* scan sorted part right to left */
            a[j + 1] = a[j];             /* shift larger elements right */
            j = j - 1;
        }
        a[j + 1] = v;                    /* insert v in its proper place */
    }
}

  18. Insertion Sort Example: Insert action, i = 1 (first iteration):
v | a[0] a[1] a[2] a[3] a[4]
8 |  20    8    5   10    7
8 |  20   20    5   10    7
8 |   8   20    5   10    7

  19. Insertion Sort Example: Insert action, i = 2 (second iteration):
v | a[0] a[1] a[2] a[3] a[4]
5 |   8   20    5   10    7
5 |   8   20   20   10    7
5 |   8    8   20   10    7
5 |   5    8   20   10    7

  20. Insertion Sort Example: Insert action, i = 3 (third iteration):
v  | a[0] a[1] a[2] a[3] a[4]
10 |   5    8   20   10    7
10 |   5    8   20   20    7
10 |   5    8   10   20    7

  21. Insertion Sort Example: Insert action, i = 4 (fourth iteration):
v | a[0] a[1] a[2] a[3] a[4]
7 |   5    8   10   20    7
7 |   5    8   10   20   20
7 |   5    8   10   10   20
7 |   5    8    8   10   20
7 |   5    7    8   10   20
Sorted array.

  22. Insertion Sort Example 1:
20 | 8  5  10  7
8  20 | 5  10  7
5  8  20 | 10  7
5  8  10  20 | 7
5  7  8  10  20
Figure: A vertical bar separates the sorted part of the array from the remaining elements.

  23. Insertion Sort Example 2:
89 | 45  68  90  29  34  17
45  89 | 68  90  29  34  17
45  68  89 | 90  29  34  17
45  68  89  90 | 29  34  17
29  45  68  89  90 | 34  17
29  34  45  68  89  90 | 17
17  29  34  45  68  89  90
Figure: A vertical bar separates the sorted part of the array from the remaining elements.

  24. Insertion Sort Problems: Apply insertion sort to sort the following list in alphabetical order: I, N, S, E, R, T, I, O, N, S, O, R, T D, E, C, R, E, A, S, E, A, N, D, C, O, N, Q, U, E, R

  25. Insertion Sort Analysis: • The number of key comparisons in this algorithm obviously depends on the nature of the input. • In the worst case, the comparison A[j] > v is executed the largest number of times, i.e., for every j = i – 1, . . . , 0. • Thus, for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), . . . , A[n – 2] > A[n – 1] (for i = n – 1). • The worst-case input is an array of strictly decreasing values. The number of key comparisons for such an input is
Cworst(n) = ∑(i = 1 to n–1) ∑(j = 0 to i–1) 1 = ∑(i = 1 to n–1) i = n(n – 1)/2 ∈ Θ(n²).
• Thus, in the worst case, insertion sort makes exactly the same number of comparisons as selection sort.

  26. Insertion Sort Analysis: • In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop. • This happens if and only if A[i – 1] ≤ A[i] for every i = 1, . . . , n – 1, i.e., if the input array is already sorted in ascending order:
Cbest(n) = ∑(i = 1 to n–1) 1 = n – 1 ∈ Θ(n).

  27. Insertion Sort Average-case Analysis: Consider an already sorted array with the single element 10. If the element 12 is to be inserted into this array, the number of times the while-loop condition is checked = 1. If the element 8 is to be inserted instead, the number of times the while-loop condition is checked = 2. Therefore, average = (1 + 2)/2 = 1.5.

  28. Insertion Sort Average-case Analysis: Consider an already sorted array with the two elements 10, 12. Inserting the element 13 checks the while-loop condition 1 time; inserting 11 checks it 2 times; inserting 8 checks it 3 times. Therefore, average = (1 + 2 + 3)/3 = 2.

  29. Insertion Sort Average-case Analysis: Consider an already sorted array with the three elements 10, 12, 14. Inserting the element 15 checks the while-loop condition 1 time; inserting 13 checks it 2 times; inserting 11 checks it 3 times; inserting 8 checks it 4 times. Therefore, average = (1 + 2 + 3 + 4)/4 = 2.5.

  30. Insertion Sort Average-case Analysis: Similarly, to insert an element X with index i into its proper position, the while-loop condition is checked 1, 2, . . . , or i + 1 times, depending on the position, so on average it is checked (1 + 2 + . . . + (i + 1))/(i + 1) = (i + 2)/2 times. Summing over all insertions:
T(n) = ∑(i = 1 to n–1) (i + 2)/2 = 1/2 ∑(i = 1 to n–1) i + ∑(i = 1 to n–1) 1 = n(n – 1)/4 + (n – 1) = n²/4 + 3n/4 – 1.
Therefore, Cavg(n) ≈ n²/4 ∈ Θ(n²).

  31. Graph Traversal Many graph algorithms require processing vertices or edges of a graph in a systematic fashion. There are two principal Graph traversal algorithms: Depth-first search (DFS) Breadth-first search (BFS)

  32. Graph Traversal Depth-First Search (DFS) and Breadth-First Search (BFS): Two elementary traversal algorithms that provide an efficient way to “visit” each vertex and edge exactly once. Both work on directed and undirected graphs. Many advanced graph algorithms are based on the concepts of DFS or BFS. The difference between the two algorithms is in the order in which each visits the vertices.

  33. Depth-First Search • DFS starts visiting vertices of a graph at an arbitrary vertex by marking it as having been visited. • On each iteration, the algorithm proceeds to an unvisited vertex that is adjacent to the last visited vertex. • This process continues until a dead end – a vertex with no adjacent unvisited vertices – is encountered. • At a dead end, the algorithm backs up one edge to the vertex it came from and tries to continue visiting unvisited vertices from there. • The algorithm eventually halts after backing up to the starting vertex, with the latter being a dead end.

  34. Depth-First Search • By this time, all the vertices in the same connected component as the starting vertex have been visited. • If unvisited vertices still remain, the depth-first search must be restarted at any one of them. • It is convenient to use a stack to trace the operation of DFS. We push a vertex onto the stack when the vertex is reached for the first time, and we pop a vertex off the stack when it becomes a dead end.

  35. Depth-First Search • It is also very useful to accompany a depth-first search traversal by constructing the so-called depth-first search forest. • The traversal’s starting vertex serves as the root of the first tree in such a forest. • Whenever a new unvisited vertex is reached for the first time, it is attached as a child to the vertex from which it is being reached. Such an edge is called a tree edge because the set of all such edges forms a forest. • The algorithm may also encounter an edge leading to a previously visited vertex other than its immediate predecessor (i.e., its parent in the tree). Such an edge is called a back edge because it connects a vertex to its ancestor, other than the parent, in the depth-first search forest.

  36. Example: Figure: (a) Graph on vertices a, b, c, d, e, f, g, h. (b) Traversal’s stack: a1,8 b2,7 f3,2 e4,1 g5,6 c6,5 d7,4 h8,3 (the first subscript number indicates the order in which a vertex was visited, i.e., pushed onto the stack; the second one indicates the order in which it became a dead end, i.e., was popped off the stack). (c) DFS forest with tree edges and back edges. (The drawings themselves did not survive the transcript.)

  37. Pseudocode of DFS
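The pseudocode image for this slide did not survive the transcript. The following is a sketch of the standard DFS pseudocode, consistent with the description on the preceding slides (count records the order in which vertices are first visited):

```text
ALGORITHM DFS(G)
// Implements a depth-first search traversal of a given graph
// Input: Graph G = <V, E>
// Output: Graph G with its vertices marked with consecutive integers
//         in the order they have first been encountered
mark each vertex in V with 0 as a mark of being "unvisited"
count <- 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
// Visits recursively all the unvisited vertices connected to vertex v
count <- count + 1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
```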

  38. Few Basic Facts about Directed Graphs • A directed graph, or digraph, is a graph with directions specified for all its edges. • There are only two notable differences between undirected and directed graphs in their adjacency matrix and adjacency list representations: • The adjacency matrix of a directed graph does not have to be symmetric. • An edge in a directed graph has just one (not two) corresponding node in the digraph’s adjacency lists.

  39. Directed graphs Figure: (a) Digraph on vertices a, b, c, d, e. (b) DFS forest of the digraph for the DFS traversal started at a, with tree edges, back edges, forward edges, and cross edges marked. (The drawings themselves did not survive the transcript.)

  40. Directed graphs • DFS and BFS are principal traversal algorithms for traversing digraphs, but the structure of corresponding forests can be more complex. • The DFS forest in the previous figure exhibits all four types of edges possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back edges (ba) from vertices to their ancestors, forward edges (ac) from vertices to their descendants in the tree other than their children, and cross edges (dc), which are none of the above mentioned types. Note: A back edge in a DFS forest of a directed graph can connect a vertex to its parent.

  41. Directed graphs • The presence of a back edge indicates that the digraph has a directed cycle. • If a DFS forest of a digraph has no back edges, the digraph is a dag, an acronym for directed acyclic graph. Note: A directed cycle in a digraph is a sequence of three or more of its vertices that starts and ends with the same vertex and in which every vertex is connected to its immediate predecessor by an edge directed from the predecessor to the successor.

  42.–47. PROBLEMS: Write the DFS forest for each of the following graphs, and specify the order in which each vertex is pushed onto the stack and the order in which it is popped off the stack. (Nineteen practice graphs, numbered (1)–(19), appeared on these slides; the drawings did not survive the transcript, only their vertex labels.)

  48. How efficient is Depth-First Search? • The DFS algorithm is quite efficient since it takes time proportional to the size of the data structure used for representing the graph in question. • Thus, for the adjacency matrix representation, the traversal time is in Θ(|V|²), and for the adjacency list representation, it is in Θ(|V| + |E|), where |V| and |E| are the number of the graph’s vertices and edges, respectively.

  49. Depth-First Search • We can look at the DFS forest as the given graph with its edges classified by the DFS traversal into two disjoint classes: tree edges and back edges. – Tree edges are edges used by the DFS traversal to reach previously unvisited vertices. – Back edges connect vertices to previously visited vertices other than their immediate predecessors in the traversal. • DFS yields two distinct orderings of the vertices: – the order in which the vertices are reached for the first time (pushed onto the stack); – the order in which the vertices become dead ends (popped off the stack). These orders are qualitatively different, and various applications can take advantage of either of them.

  50. Applications of DFS Checking connectivity: Since DFS halts after visiting all the vertices connected by a path to the starting vertex, checking a graph’s connectivity can be done as follows: start a DFS traversal at an arbitrary vertex and check, after the algorithm halts, whether all the graph’s vertices have been visited. If they have, the graph is connected; otherwise, it is not. Identifying the connected components of a graph. Checking acyclicity: if the DFS forest does not have back edges, then the graph is clearly acyclic.
