CSCI 4140 Advanced Algorithms Dr. Zahid Khan CUSSA Instructor
Outline • Introduction to recursion • Finding a recursive solution • Recursive examples: factorial, power, binary search • Methods of solving recurrences for recursive functions • Iteration method • Master theorem
Recursion • Self-referential functions (i.e., functions that call themselves) are called recursive • Recursive functions are very useful for many mathematical operations
Recursion: Basic idea • We have a bigger problem whose solution is difficult to find • We divide/decompose the problem into smaller (sub)problems • Keep decomposing until we reach the smallest sub-problem (base case), for which a solution is known or easy to find • Then go back in reverse order and build upon the solutions of the sub-problems • Recursion is applied when the solution of a problem depends on the solutions to smaller instances of the same problem
Recursive Function • A function that calls itself

int factorial(int n) {
    if (n == 0)            // base case
        return 1;
    else                   // general/recursive case
        return n * factorial(n - 1);
}
Recursion in Action: factorial(n)
• A concept from elementary maths: solve the innermost bracket first, then work outward
• The base case is reached at factorial(0)
factorial(5) = 5 x factorial(4)
= 5 x (4 x factorial(3))
= 5 x (4 x (3 x factorial(2)))
= 5 x (4 x (3 x (2 x factorial(1))))
= 5 x (4 x (3 x (2 x (1 x factorial(0)))))
= 5 x (4 x (3 x (2 x (1 x 1))))
= 5 x (4 x (3 x (2 x 1)))
= 5 x (4 x (3 x 2))
= 5 x (4 x 6)
= 5 x 24
= 120
Finding a recursive solution • Each successive recursive call should bring you closer to a situation in which the answer is known (cf. n-1 in the previous slide) • A case for which the answer is known (and can be expressed without recursion) is called a base case • Each recursive algorithm must have at least one base case, as well as the general recursive case
Recursion vs. Iteration: Computing N! • The factorial of a positive integer n, denoted n!, is defined as the product of the integers from 1 to n. For example, 4! = 4·3·2·1 = 24. • Iterative Solution • Recursive Solution
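The two solutions the slide refers to can be sketched in Java as follows (the slide's own code was not preserved, so the method names here are illustrative):

```java
public class Factorial {
    // Iterative solution: multiply the integers 1..n in a loop
    static long iterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++)
            result *= i;
        return result;
    }

    // Recursive solution: n! = n * (n-1)!, with base case 0! = 1
    static long recursive(int n) {
        if (n == 0) return 1;          // base case
        return n * recursive(n - 1);   // general case
    }

    public static void main(String[] args) {
        System.out.println(iterative(4) + " " + recursive(4)); // 24 24
    }
}
```

Both compute the same value; the iterative version uses a loop variable, while the recursive version delegates the smaller sub-problem to another call.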
Recursion: Do we really need it? • In some programming languages recursion is indispensable • For example, in declarative/logic languages (LISP, Prolog, etc.) • Variables cannot be updated more than once, so there is no looping – (think: why no looping?) • Heavy backtracking
Linear Recursion • The simplest form of recursion is linear recursion, where a method is defined so that it makes at most one recursive call each time it is invoked • This type of recursion is useful when we view an algorithmic problem in terms of a first or last element plus a remaining set that has the same structure as the original set
Summing the Elements of an Array • We can solve this summation problem using linear recursion by observing that the sum of all n integers in an array A is: • Equal to A[0], if n = 1, or • The sum of the first n − 1 integers in A plus the last element, A[n − 1]

int LinearSum(int[] A, int n) {
    if (n == 1)
        return A[0];                       // base case
    else
        return A[n - 1] + LinearSum(A, n - 1);
}
Analyzing Recursive Algorithms using Recursion Traces • Recursion trace for an execution of LinearSum(A,n) with input parameters A = [4,3,6,2,5] and n = 5
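The trace can be reproduced with an instrumented version of LinearSum; the depth parameter and the printing are added for illustration only and are not part of the algorithm:

```java
public class LinearSumTrace {
    // Linear recursion: sum of the first n elements of A,
    // printing one line per recursive call to show the trace
    static int linearSum(int[] A, int n, int depth) {
        String pad = "  ".repeat(depth);
        System.out.println(pad + "LinearSum(A," + n + ")");
        if (n == 1) return A[0];                        // base case
        int sum = A[n - 1] + linearSum(A, n - 1, depth + 1);
        System.out.println(pad + "returns " + sum);
        return sum;
    }

    public static void main(String[] args) {
        int[] A = {4, 3, 6, 2, 5};
        System.out.println("sum = " + linearSum(A, 5, 0)); // sum = 20
    }
}
```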
Linear recursion: Reversing an Array
• Swap the 1st and last elements, the 2nd and second-to-last, the 3rd and third-to-last, and so on
• If the (sub)array contains only one element, there is no need to swap (base case)
• Update i and j in such a way that they converge to the base case (i ≥ j)
• Example array: 45 80 5 50 10 60 18 65 30 70
Linear recursion: Reversing an Array

void reverseArray(int[] A, int i, int j) {
    if (i < j) {
        int temp = A[i];
        A[i] = A[j];
        A[j] = temp;
        reverseArray(A, i + 1, j - 1);
    }
    // in the base case (i >= j), do nothing
}
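A quick usage sketch (the class name and demo harness are illustrative), applying the method to the slide's example array:

```java
import java.util.Arrays;

public class ReverseDemo {
    // Linear recursion: swap the outer pair, then recurse inward
    static void reverseArray(int[] A, int i, int j) {
        if (i < j) {
            int temp = A[i];
            A[i] = A[j];
            A[j] = temp;
            reverseArray(A, i + 1, j - 1);
        }
        // base case (i >= j): do nothing
    }

    public static void main(String[] args) {
        int[] A = {45, 80, 5, 50, 10, 60, 18, 65, 30, 70};
        reverseArray(A, 0, A.length - 1);
        System.out.println(Arrays.toString(A));
        // [70, 30, 65, 18, 60, 10, 50, 5, 80, 45]
    }
}
```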
Linear recursion: run-time analysis • The time complexity of linear recursion is proportional to the problem size • Normally, it equals the number of times the function calls itself • In Big-O notation, the time complexity of a linear recursive function/algorithm is O(n)
Solving Recurrence Relations • There are five methods to solve recurrence relations that represent the running time of recursive methods: • Iteration method (unrolling and summing) • Substitution method (Guess the solution and verify by induction) • Recursion tree method • Master theorem (Master method) • Using Generating functions or Characteristic equations • In this course, we will use the Iteration method and a simplified Master theorem.
Solving – Iteration method • Steps: • Expand the recurrence • Express the expansion as a summation by plugging the recurrence back into itself until you see a pattern • Evaluate the summation • In evaluating the summation, one or more of the following summation formulae may be used: • Arithmetic series: 1 + 2 + … + n = n(n + 1)/2 • Geometric series: 1 + x + x² + … + xⁿ = (x^(n+1) − 1)/(x − 1), for x ≠ 1 • Special case of the geometric series (x = 2): 1 + 2 + 4 + … + 2ⁿ = 2^(n+1) − 1
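The arithmetic and geometric closed forms can be checked numerically; this small sketch (names are illustrative) compares a loop-computed sum against each formula:

```java
public class SumFormulas {
    // Arithmetic series 1 + 2 + ... + n, computed by a loop
    static long arithmeticSum(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    // Geometric series with x = 2: 2^0 + 2^1 + ... + 2^n
    static long geometricSum2(int n) {
        long s = 0;
        for (int i = 0; i <= n; i++) s += 1L << i;
        return s;
    }

    public static void main(String[] args) {
        int n = 10;
        // closed forms: n(n+1)/2 and 2^(n+1) - 1
        System.out.println(arithmeticSum(n) + " == " + (long) n * (n + 1) / 2);
        System.out.println(geometricSum2(n) + " == " + ((1L << (n + 1)) - 1));
    }
}
```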
Solving – Iteration method (Cont'd) • Harmonic series: Hₙ = 1 + 1/2 + 1/3 + … + 1/n ≈ ln n, i.e., Θ(log n) • Others:
Analysis of Recursive Factorial Method

long factorial(int n) {
    if (n == 0)
        return 1;
    else
        return n * factorial(n - 1);
}

• Example 1: Find the running time of the factorial method and hence determine its big-O complexity:
T(0) = c                        (1)
T(n) = b + T(n − 1)             (2)
     = b + b + T(n − 2)         by substituting T(n − 1) in (2)
     = b + b + b + T(n − 3)     by substituting T(n − 2) in (2)
     …
     = kb + T(n − k)
The base case is reached when n − k = 0, i.e., k = n; we then have:
T(n) = nb + T(n − n) = bn + T(0) = bn + c
Therefore, the factorial method is O(n)
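The O(n) bound can also be observed empirically by counting recursive calls; the `calls` counter is instrumentation added for illustration, not part of the algorithm:

```java
public class FactorialCount {
    static int calls = 0;

    static long factorial(int n) {
        calls++;                          // one increment per recursive call
        if (n == 0) return 1;
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        factorial(10);
        System.out.println(calls);        // 11 calls: n + 1, i.e., O(n)
    }
}
```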
Analysis of Recursive Selection Sort

public static void selectionSort(int[] x) {
    selectionSort(x, x.length - 1);
}

private static void selectionSort(int[] x, int n) {
    if (n > 0) {
        int maxPos = findMaxPos(x, n);   // move the largest of x[0..n] to position n
        swap(x, maxPos, n);
        selectionSort(x, n - 1);
    }
}

// Position of the largest element in x[0..n]
private static int findMaxPos(int[] x, int n) {
    int k = n;
    for (int i = 0; i < n; i++)
        if (x[i] > x[k]) k = i;
    return k;
}

private static void swap(int[] x, int maxPos, int n) {
    int temp = x[n];
    x[n] = x[maxPos];
    x[maxPos] = temp;
}
Analysis of Recursive Selection Sort (Cont'd)
Finding the position of the element to place is O(n) and swap is O(1), therefore the recurrence relation for the running time of the selectionSort method is:
T(0) = a                                  (1)
T(n) = T(n − 1) + n + c    if n > 0       (2)
     = [T(n − 2) + (n − 1) + c] + n + c = T(n − 2) + (n − 1) + n + 2c    by substituting T(n − 1) in (2)
     = [T(n − 3) + (n − 2) + c] + (n − 1) + n + 2c = T(n − 3) + (n − 2) + (n − 1) + n + 3c    by substituting T(n − 2) in (2)
     = T(n − 4) + (n − 3) + (n − 2) + (n − 1) + n + 4c
     = ……
     = T(n − k) + (n − k + 1) + (n − k + 2) + … + n + kc
The base case is reached when n − k = 0, i.e., k = n; we then have:
T(n) = T(0) + (1 + 2 + … + n) + nc = a + n(n + 1)/2 + nc
Therefore, recursive selection sort is O(n²)
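Counting comparisons makes the quadratic growth concrete: the scans perform (n − 1) + (n − 2) + … + 1 = n(n − 1)/2 comparisons. The counter and this compact, ascending (largest-to-the-end) variant are for illustration:

```java
public class SelectionCount {
    static long comparisons = 0;

    // Recursive selection sort, instrumented to count comparisons
    static void selectionSort(int[] x, int n) {
        if (n > 0) {
            int k = n;
            for (int i = 0; i < n; i++) {
                comparisons++;
                if (x[i] > x[k]) k = i;      // track position of the largest
            }
            int temp = x[n]; x[n] = x[k]; x[k] = temp;
            selectionSort(x, n - 1);
        }
    }

    public static void main(String[] args) {
        int[] x = {5, 1, 4, 2, 3};
        selectionSort(x, x.length - 1);
        // n(n-1)/2 comparisons for n = 5, matching the O(n^2) bound
        System.out.println(comparisons);     // 10
    }
}
```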
Analysis of Recursive Binary Search

public int binarySearch(int target, int[] array, int low, int high) {
    if (low > high)
        return -1;
    else {
        int middle = (low + high) / 2;
        if (array[middle] == target)
            return middle;
        else if (array[middle] < target)
            return binarySearch(target, array, middle + 1, high);
        else
            return binarySearch(target, array, low, middle - 1);
    }
}

• The recurrence relation for the running time of the method is:
T(1) = a            if n = 1 (one-element array)
T(n) = T(n/2) + b   if n > 1
Analysis of Recursive Binary Search (Cont'd)
Without loss of generality, assume the problem size n is a power of 2, i.e., n = 2ᵏ. Expanding:
T(1) = a                                  (1)
T(n) = T(n/2) + b                         (2)
     = [T(n/2²) + b] + b = T(n/2²) + 2b   by substituting T(n/2) in (2)
     = [T(n/2³) + b] + 2b = T(n/2³) + 3b  by substituting T(n/2²) in (2)
     = ……
     = T(n/2ᵏ) + kb
The base case is reached when n/2ᵏ = 1, i.e., n = 2ᵏ, so k = log₂ n; we then have:
T(n) = T(1) + b log₂ n = a + b log₂ n
Therefore, recursive binary search is O(log n)
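Counting calls confirms the logarithmic behavior: searching a 1024-element sorted array for an absent key triggers roughly log₂ n recursive calls. The `calls` counter is instrumentation added for illustration:

```java
public class BinarySearchCount {
    static int calls = 0;

    static int binarySearch(int target, int[] array, int low, int high) {
        calls++;                               // count each recursive call
        if (low > high) return -1;
        int middle = (low + high) / 2;
        if (array[middle] == target)
            return middle;
        else if (array[middle] < target)
            return binarySearch(target, array, middle + 1, high);
        else
            return binarySearch(target, array, low, middle - 1);
    }

    public static void main(String[] args) {
        int[] a = new int[1024];               // sorted: a[i] = i
        for (int i = 0; i < a.length; i++) a[i] = i;
        binarySearch(-1, a, 0, a.length - 1);  // worst case: key not present
        System.out.println(calls);             // 11 calls for n = 1024, i.e., O(log n)
    }
}
```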
Master Theorem (Master Method) • The master method provides an estimate of the growth rate of the solution for recurrences of the form:
T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1, and the overhead function f(n) > 0
• If T(n) is interpreted as the number of steps needed to execute an algorithm for an input of size n, this recurrence corresponds to a "divide and conquer" algorithm, in which a problem of size n is divided into a subproblems of size n/b, where a and b are positive constants:
• Divide-and-conquer algorithm:
• divide the problem into a number of subproblems
• conquer the subproblems (solve them)
• combine the subproblem solutions to get the solution to the original problem
• Example: Merge Sort
• divide the n-element sequence to be sorted into two n/2-element sequences
• conquer the subproblems recursively using merge sort
• combine the resulting two sorted n/2-element sequences by merging
Simplified Master Theorem • The Simplified Master Method for Solving Recurrences:
• Consider recurrences of the form:
T(1) = 1
T(n) = aT(n/b) + knᶜ + h
for constants a ≥ 1, b > 1, c ≥ 0, k ≥ 1, and h ≥ 0. Then:
• T(n) is O(n^(log_b a))   if a > bᶜ
• T(n) is O(nᶜ log n)      if a = bᶜ
• T(n) is O(nᶜ)            if a < bᶜ
• Note: Since k and h do not affect the result, they are sometimes not included in the above recurrence
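The three cases can be wrapped in a small helper that, given a, b, and c, reports the big-O class predicted by the simplified theorem (the helper and its string output format are illustrative):

```java
public class MasterTheorem {
    // Big-O class for T(n) = a*T(n/b) + n^c, per the simplified master
    // theorem (a >= 1, b > 1, c >= 0)
    static String solve(double a, double b, double c) {
        double bc = Math.pow(b, c);
        if (a > bc)
            return "O(n^" + (Math.log(a) / Math.log(b)) + ")"; // n^(log_b a)
        else if (a == bc)
            return "O(n^" + c + " log n)";
        else
            return "O(n^" + c + ")";
    }

    public static void main(String[] args) {
        System.out.println(solve(2, 2, 1));   // merge sort: O(n^1.0 log n)
        System.out.println(solve(4, 2, 3));   // O(n^3.0)
        System.out.println(solve(3, 4, 0.5)); // O(n^(log_4 3)) ~ O(n^0.79...)
    }
}
```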
Simplified Master Theorem (Cont'd)
Example 1: Find the big-O running time of the following recurrence, using the Master Theorem:
T(1) = 1
T(n) = 3T(n/4) + √n
Solution: a = 3, b = 4, c = ½; a > bᶜ (since 4^½ = 2 < 3), Case 1. Hence T(n) is O(n^(log₄ 3))
Example 2: Find the big-O running time of the following recurrence, using the Master Theorem:
T(1) = 1
T(n) = 2T(n/2) + n
Solution: a = 2, b = 2, c = 1; a = bᶜ, Case 2. Hence T(n) is O(n log n)
Example 3: Find the big-O running time of the following recurrence, using the Master Theorem:
T(1) = 1
T(n) = 4T(n/2) + kn³ + h, where k ≥ 1 and h ≥ 1
Solution: a = 4, b = 2, c = 3; a < bᶜ (4 < 8), Case 3. Hence T(n) is O(n³)
Lecture Summary • The lecture introduced recursive functions • Examples of recursive algorithms • Basic idea of analyzing recursive algorithms • Methods of solving recurrences • Iteration method and Master theorem • Examples • Factorial, power, binary search, selection sort, etc.
After Class Discussion (Self Test) • What are the basic parts of a recursive algorithm? • What is the complexity of linear recursive algorithms? • How can we solve a recurrence with the iteration method? • List the different methods of solving recurrences.