CSCI 621


Presentation Transcript


  1. CSCI 621 Design and Analysis of Algorithms Chapter 2

  2. Syllabus Requirements
  Fundamentals of Algorithmic Problem Solving - Chapter 1
  • Understanding the Problem
  • Deciding on Appropriate Data Structures
  • Algorithm Design Techniques
  • Methods of Specifying an Algorithm
  • Proving an Algorithm
  • Important Problem Types
  • Fundamental Data Structures
  • Linear Data Structures
  Fundamentals of the Analysis of Algorithm Efficiency - Chapter 2
  • Analysis Framework
  • Measuring an Input Size
  • Units of Measuring Running Time
  • Asymptotic Notations and Basic Efficiency Classes
  Brute Force - Chapter 3
  • Selection Sort, Bubble Sort
  • Sequential Search, Brute-Force String Matching, Exhaustive Search

  3. Chapter 2 - Algorithm Efficiency
  There are two kinds of efficiency: time efficiency and space efficiency. Time efficiency, also called time complexity, indicates how fast an algorithm in question runs. Space efficiency, also called space complexity, refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output. These days, the amount of extra space required by an algorithm is typically not of as much concern, with the caveat that there is still, of course, a difference between the fast main memory, the slower secondary memory, and the cache. The time issue, however, has not diminished to the same extent.
  How can we measure an input's size? For example, when we input a string for a search, or a number to test for primality? Each kind of input suggests a different measure of size: for a search string, we may measure input size by the number of characters in the string; for a matrix, by the number of its elements; if two matrices are being multiplied or added, perhaps by the total number of elements in both matrices. For primality testing, the input size could be taken as the magnitude of the number being checked: the bigger the number, the longer it takes to determine its prime status.
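
  A minimal sketch in Python of the size measures just mentioned (the function names are mine, for illustration only):

  # Input size is chosen per problem; there is no single universal measure.
  def string_size(s):
      return len(s)                        # search string: number of characters

  def matrix_size(m):
      return sum(len(row) for row in m)    # matrix: total number of elements

  print(string_size("algorithm"))               # 9
  print(matrix_size([[1, 2], [3, 4], [5, 6]]))  # 6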

  4. Chapter 2 - Algorithm Efficiency
  In this case it may be better to represent the input size by the number b of bits in the number n's binary representation: b = ⌊log2 n⌋ + 1.
  Measuring running time: Can we measure time in seconds, milliseconds, etc., by simply timing a program? No, because there are too many external factors, such as the particular computer being used, the language in which the algorithm is implemented, the compiler used to generate the machine code, and the difficulty of clocking the actual running time. Another approach is to count the number of times each of the algorithm's operations is executed. Since this is difficult and usually unnecessary, we instead identify and count the algorithm's basic operation. In a sorting program, the basic operation is the comparison of keys in the list; as a rule, it resides in the innermost loop, which is why it dominates the running time. In the case of arithmetic, the times taken by addition, subtraction, multiplication, and division differ: division takes the longest, followed by multiplication, then addition and subtraction.
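
  A quick check of the formula b = ⌊log2 n⌋ + 1 (a minimal Python sketch; Python's built-in int.bit_length() computes exactly this value for positive integers):

  import math

  for n in [1, 5, 16, 1000]:
      b = math.floor(math.log2(n)) + 1   # b = ⌊log2 n⌋ + 1
      assert b == n.bit_length()         # the built-in agrees
      print(n, "needs", b, "bits")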

  5. Chapter 2 - Algorithm Efficiency
  Thus the established method for finding the time efficiency of an algorithm is to measure the number of times its basic operation is executed on inputs of size n. Let cop be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula T(n) ≈ cop C(n).
  Of course, this formula should be used with caution. The count C(n) does not contain any information about operations that are not basic, and, in fact, the count itself is often computed only approximately. Further, the constant cop is also an approximation whose reliability is not always easy to assess. Still, unless n is extremely large or very small, the formula can give a reasonable estimate of the algorithm's running time. It also makes it possible to answer such questions as "How much faster would this algorithm run on a machine that is 10 times faster than the one we have?" The answer is, obviously, 10 times. Or, assuming that C(n) = ½ n(n − 1), how much longer will the algorithm run if we double its input size? The answer is about four times longer, because C(2n)/C(n) = 2n(2n − 1) / (n(n − 1)) ≈ 4 for large n, and the constant cop cancels out of the ratio.
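
  A minimal numeric check of the doubling claim, assuming C(n) = ½ n(n − 1) as on the slide (the function name is illustrative):

  def C(n):
      return n * (n - 1) / 2     # assumed basic-operation count

  # T(2n)/T(n) = C(2n)/C(n) because the constant cop cancels out.
  for n in [10, 100, 1000]:
      print(n, C(2 * n) / C(n))  # 4.22..., 4.02..., 4.002... -> about 4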

  6. Chapter 2 - Algorithm Efficiency
  Consider, as an example, sequential search. This is a straightforward algorithm that searches for a given item (some search key K) in a list of n elements by checking successive elements of the list until either a match with the search key is found or the list is exhausted. Here is the algorithm's pseudocode, in which, for simplicity, a list is implemented as an array. It also assumes that the second condition, A[i] ≠ K, will not be checked if the first one, which checks that the array's index does not exceed its upper bound, fails.

  ALGORITHM SequentialSearch(A[0..n − 1], K)
  // Searches for a given value in a given array by sequential search
  // Input: An array A[0..n − 1] and a search key K
  // Output: The index of the first element in A that matches K
  //         or −1 if there are no matching elements
  i ← 0
  while i < n and A[i] ≠ K do
      i ← i + 1
  if i < n return i
  else return −1
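
  A runnable Python rendering of the same pseudocode (a sketch; Python's short-circuiting "and" provides exactly the guarded second condition described above):

  def sequential_search(A, K):
      # Returns the index of the first element of A equal to K, or -1.
      i = 0
      while i < len(A) and A[i] != K:   # A[i] is never read when i == len(A)
          i += 1
      return i if i < len(A) else -1

  print(sequential_search([7, 3, 9, 3], 9))   # 2
  print(sequential_search([7, 3, 9, 3], 5))   # -1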

  7. Chapter 2 - Algorithm Efficiency
  Clearly, the running time of this algorithm can be quite different for the same list size n. In the worst case, when there are no matching elements or the first matching element happens to be the last one on the list, the algorithm makes the largest number of key comparisons among all possible inputs of size n: Cworst(n) = n.
  The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size. The way to determine the worst-case efficiency of an algorithm is, in principle, quite straightforward: analyze the algorithm to see what kind of inputs yield the largest value of the basic operation's count C(n) among all possible inputs of size n and then compute this worst-case value Cworst(n).
  The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size. Accordingly, we can analyze the best-case efficiency as follows. First, we determine the kind of inputs for which the count C(n) will be the smallest among all possible inputs of size n. (Note that the best case does not mean the smallest input; it means the input of size n for which the algorithm runs the fastest.) Then we ascertain the value of C(n) on these most convenient inputs. For example, the best-case inputs for sequential search are lists of size n whose first element equals the search key; accordingly, Cbest(n) = 1 for this algorithm.
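
  A small instrumented variant of the search that counts key comparisons, confirming Cworst(n) = n and Cbest(n) = 1 (a sketch; the counter is mine, not part of the algorithm):

  def sequential_search_counted(A, K):
      comparisons = 0
      for i in range(len(A)):
          comparisons += 1               # one key comparison of A[i] with K
          if A[i] == K:
              return i, comparisons
      return -1, comparisons

  A = list(range(10))                       # n = 10
  print(sequential_search_counted(A, 0))    # best case:  (0, 1)
  print(sequential_search_counted(A, 99))   # worst case: (-1, 10)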

  8. Chapter 2 - Algorithm Efficiency
  Asymptotic notations. There are three notations: O (big oh), Ω (big omega), and Θ (big theta). First, we introduce these notations informally, and then, after several examples, formal definitions are given. In the following discussion, t(n) and g(n) can be any nonnegative functions defined on the set of natural numbers. In the context we are interested in, t(n) will be an algorithm's running time (usually indicated by its basic operation count C(n)), and g(n) will be some simple function to compare the count with.
  O-notation. DEFINITION: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c g(n) for all n ≥ n0.
  As an example, let us formally prove one of the assertions made in the introduction: 100n + 5 ∈ O(n²). Indeed, 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n². Thus, as values of the constants c and n0 required by the definition, we can take 101 and 5, respectively. Note that the definition gives us a lot of freedom in choosing specific values for the constants c and n0. For example, we could also reason that 100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n to complete the proof with c = 105 and n0 = 1.
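
  A brute-force spot-check of the bound just proved, with c = 101 and n0 = 5 (a sketch; checking finitely many n illustrates the inequality but does not prove it):

  def t(n):
      return 100 * n + 5

  c, n0 = 101, 5
  assert all(t(n) <= c * n * n for n in range(n0, 10_000))
  print("100n + 5 <= 101n^2 holds for all checked n >= 5")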

  9. Chapter 2 - Algorithm Efficiency
  Ω-notation. DEFINITION: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c g(n) for all n ≥ n0. Here is an example of the formal proof that n³ ∈ Ω(n²): n³ ≥ n² for all n ≥ 0, i.e., we can select c = 1 and n0 = 0.
  Θ-notation. DEFINITION: A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0. For example, let us prove that ½ n(n − 1) ∈ Θ(n²).

  10. Chapter 2 - Algorithm Efficiency
  First, we prove the right inequality (the upper bound):
  ½ n(n − 1) = ½ n² − ½ n ≤ ½ n² for all n ≥ 0.
  Second, we prove the left inequality (the lower bound):
  ½ n(n − 1) = ½ n² − ½ n ≥ ½ n² − ½ n · ½ n (for all n ≥ 2) = ¼ n².
  Hence, we can select c2 = ¼, c1 = ½, and n0 = 2.
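
  And a numeric spot-check of both bounds with the constants just chosen (again a sketch, not a proof):

  def t(n):
      return n * (n - 1) / 2

  c1, c2, n0 = 1 / 2, 1 / 4, 2
  assert all(c2 * n * n <= t(n) <= c1 * n * n for n in range(n0, 10_000))
  print("n^2/4 <= n(n-1)/2 <= n^2/2 holds for all checked n >= 2")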

  11. Chapter 2 - Algorithm Efficiency
  General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms (a worked sketch follows the list):
  1. Decide on a parameter (or parameters) indicating an input's size.
  2. Identify the algorithm's basic operation. (As a rule, it is located in the innermost loop.)
  3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
  4. Set up a sum expressing the number of times the algorithm's basic operation is executed.
  5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth.
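
  A minimal sketch applying the plan to finding the largest element of a list (my example, not from the slides): the input size is n = len(A); the basic operation is the comparison in the innermost (and only) loop; its count does not depend on the particular input; and summing one comparison per iteration gives C(n) = n − 1 ∈ Θ(n).

  def max_element(A):
      # Basic operation: the comparison A[i] > maxval, executed n - 1 times.
      maxval = A[0]
      for i in range(1, len(A)):
          if A[i] > maxval:
              maxval = A[i]
      return maxval

  print(max_element([3, 1, 4, 1, 5, 9, 2]))   # 9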

  12. Chapter 2 - Algorithm Efficiency
  General Plan for Analyzing the Time Efficiency of Recursive Algorithms (a worked sketch follows the list):
  1. Decide on a parameter (or parameters) indicating an input's size.
  2. Identify the algorithm's basic operation.
  3. Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
  4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
  5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
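
  A minimal sketch applying the recursive plan to computing n! (my example, not from the slides): the input size is n; the basic operation is the multiplication; its count M(n) satisfies the recurrence M(n) = M(n − 1) + 1 for n > 0 with the initial condition M(0) = 0, whose solution is M(n) = n ∈ Θ(n).

  def factorial(n):
      # Basic operation: the multiplication below; M(n) = M(n - 1) + 1, M(0) = 0.
      if n == 0:
          return 1      # no multiplications: the initial condition M(0) = 0
      return factorial(n - 1) * n

  print(factorial(5))   # 120, computed with M(5) = 5 multiplications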

  13.–14. Chapter 2 - Algorithm Efficiency (figure-only slides; no transcript text)
