
Introduction to Algorithms


Presentation Transcript


  1. Introduction to Algorithms Rabie A. Ramadan rabie@rabieramadan.org http://www.rabieramadan.org Some of the slides are exported from different sources to clarify the topic.

  2. Importance of algorithms. Algorithms are used in every aspect of our lives. Let’s take an example…

  3. Suppose you are implementing a spreadsheet program, in which you must maintain a grid of cells. Some cells of the spreadsheet contain numbers, but other cells contain expressions that depend on other cells for their value. However, the expressions are not allowed to form a cycle of dependencies: for example, if the expression in cell E1 depends on the value of cell A5, and the expression in cell A5 depends on the value of cell C2, then C2 must not depend on E1. Example

  4. Describe an algorithm for making sure that no cycle of dependencies exists (or finding one and complaining to the spreadsheet user if it does exist). If the spreadsheet changes, all its expressions may need to be recalculated. Describe an efficient method for sorting the expression evaluations, so that each cell is recalculated only after the cells it depends on have been recalculated. Example

  5. Another Example: order the following items in a food chain: tiger, human, fish, sheep, shrimp, plankton, wheat.

  6. Solving the Topological Sorting Problem. Solution: verify whether a given digraph is a dag and, if it is, produce an ordering of its vertices. Two algorithms solve the problem; they may give different (alternative) solutions. DFS-based algorithm: perform a DFS traversal and note the order in which vertices become dead ends (that is, are popped off the traversal stack). Reversing this order yields the desired solution, provided that no back edge has been encountered during the traversal.
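A minimal Python sketch of this DFS-based approach, assuming the digraph is given as an adjacency-list dict that maps every vertex to its successors; the function and variable names, and the edge direction chosen for the spreadsheet example, are illustrative rather than taken from the slides.

```python
# Sketch of the DFS-based topological sort described above.
# The digraph is assumed to be a dict mapping every vertex to its successors.
def dfs_topological_sort(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on the traversal stack / finished
    color = {v: WHITE for v in graph}
    order = []                            # vertices in the order they become dead ends

    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:          # back edge: a cycle of dependencies exists
                raise ValueError("not a DAG: cycle detected")
            if color[w] == WHITE:
                dfs(w)
        color[v] = BLACK
        order.append(v)                   # dead end: popped off the traversal stack

    for v in list(graph):
        if color[v] == WHITE:
            dfs(v)
    return order[::-1]                    # reversing the dead-end order gives the sort

# Spreadsheet example, encoding an edge as "must be computed before":
# C2 -> A5 -> E1, so C2 is recalculated first and E1 last.
print(dfs_topological_sort({"C2": ["A5"], "A5": ["E1"], "E1": []}))  # ['C2', 'A5', 'E1']
```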

  7. Example. Complexity: the same as DFS.

  8. Solving the Topological Sorting Problem. Source removal algorithm: identify a source, that is, a vertex with no incoming edges, and delete it along with all edges outgoing from it. There must be at least one source for the problem to be solvable. Repeat this process on the remaining digraph. The order in which the vertices are deleted yields the desired solution.
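A sketch of the source-removal idea in the same style, again assuming an adjacency-list dict; it keeps an in-degree counter for every vertex so that sources can be identified cheaply (the names are illustrative, not from the slides).

```python
from collections import deque

# Sketch of the source-removal algorithm: repeatedly delete a vertex with no
# incoming edges. In-degree counters make identifying a source inexpensive.
def source_removal_sort(graph):
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1

    sources = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while sources:
        v = sources.popleft()             # delete a source ...
        order.append(v)
        for w in graph[v]:                # ... along with its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)

    if len(order) != len(graph):          # vertices remain but no source is left
        raise ValueError("not a DAG: cycle detected")
    return order

print(source_removal_sort({"C2": ["A5"], "A5": ["E1"], "E1": []}))  # ['C2', 'A5', 'E1']
```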

  9. Example

  10. Source removal algorithm: efficiency. Efficiency: the same as that of the DFS-based algorithm, but how would you identify a source? A big problem.

  11. Analysis of algorithms. Issues: correctness, space efficiency, time efficiency, optimality. Approaches: theoretical analysis, empirical analysis.

  12. Space Analysis. When considering space complexity, algorithms are divided into those that need extra space to do their work and those that work in place. Space analysis examines all of the data being stored to see whether there are more efficient ways to store it. Example: as a developer, how do you store real numbers? Suppose we are storing a real number that has only one place of precision after the decimal point and ranges between -10 and +10. How many bytes do you need?

  13. Space Analysis. Example: as a developer, how do you store real numbers? Suppose we are storing a real number that has only one place of precision after the decimal point and ranges between -10 and +10. How many bytes do you need? Most computers will use between 4 and 8 bytes of memory. If we first multiply the value by 10, we can store it as an integer between -100 and +100, which needs only 1 byte, a saving of 3 to 7 bytes. A program that stores 1,000 of these values can save 3,000 to 7,000 bytes. This makes a big difference when programming mobile devices or PDAs, or when the input is large.
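A small illustration of the scaling idea under the slide's assumptions (one decimal place, range -10 to +10); the helper names and the use of Python's struct module are illustrative choices, not part of the original slides.

```python
import struct

# The slide's idea: a real number with one decimal place in [-10, +10]
# can be scaled by 10 and stored as a small integer in [-100, +100].
def encode(value):
    scaled = round(value * 10)            # e.g. -3.7 -> -37
    return struct.pack("b", scaled)       # "b": signed 8-bit integer, 1 byte

def decode(packed):
    return struct.unpack("b", packed)[0] / 10

packed = encode(-3.7)
print(len(packed), decode(packed))        # 1 -3.7  -> one byte round-trips the value
print(len(struct.pack("d", -3.7)))        # 8       -> an IEEE-754 double, for comparison
```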

  14. Theoretical analysis of time efficiency. Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of the input size. Basic operation: the operation that contributes the most towards the running time of the algorithm. T(n) ≈ c_op · C(n), where n is the input size, T(n) is the running time, c_op is the execution time (cost) of the basic operation, and C(n) is the number of times the basic operation is executed. Note: different basic operations may cost differently!

  15. Importance of the analysis. T(n) ≈ c_op · C(n) gives an idea of how fast the algorithm is. If the number of basic operations is C(n) = ½ n(n-1) = ½ n² - ½ n ≈ ½ n², how much longer does the algorithm take if it doubles its input size? Increasing the input size increases the complexity. We tend to omit the constants because they do not affect the order of growth for large inputs. Everything is based on estimation.
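A short worked step for the doubling question above, using the slide's approximation T(n) ≈ c_op · C(n) with C(n) ≈ ½ n²:

```latex
\frac{T(2n)}{T(n)} \approx \frac{c_{op}\cdot\tfrac{1}{2}(2n)^2}{c_{op}\cdot\tfrac{1}{2}n^2} = 4
```

So doubling the input size roughly quadruples the running time; the constant c_op and the ½ cancel, which is exactly why the constants can be omitted.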

  16. Why Input Classes are Important. The input determines the path of execution through an algorithm. If we are interested in finding the largest value in a list of N numbers, we can use the following algorithm:
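The algorithm itself appears on the original slide only as an image; the following is a minimal sketch of the usual largest-value scan it refers to, with a counter added so that the assignment counts discussed on the next slide can be observed (the counter is an illustrative addition, not part of the original algorithm).

```python
# Largest-value scan: keep a running maximum, counting assignments to `largest`.
def find_largest(numbers):
    assignments = 1
    largest = numbers[0]                  # one assignment before the loop starts
    for x in numbers[1:]:
        if x > largest:
            largest = x                   # extra assignment only when a new maximum appears
            assignments += 1
    return largest, assignments

print(find_largest([5, 4, 3, 2, 1]))      # decreasing input: (5, 1) -- a single assignment
print(find_largest([1, 2, 3, 4, 5]))      # increasing input: (5, 5) -- N assignments
```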

  17. Why Input Classes are Important. If the list is in decreasing order, there will be only one assignment, done before the loop starts. If the list is in increasing order, there will be N assignments (one before the loop starts and N-1 inside the loop). Our analysis must consider more than one possible set of input, because if we only look at one set of input, it may be the set that is solved the fastest (or slowest).

  18. Input size and basic operation examples

  19. Empirical analysis of time efficiency. Select a specific (typical) sample of inputs. Use a physical unit of time (e.g., milliseconds), or count the actual number of executions of the basic operation. Analyze the empirical/experimental data.

  20. Cases to consider in analysis: best case, average case, worst case. For some algorithms, efficiency depends on the form of the input. Worst case: Cworst(n), the maximum over inputs of size n. Best case: Cbest(n), the minimum over inputs of size n. Average case: Cavg(n), the “average” over inputs of size n; the toughest to do.

  21. Best case, average case, worst case. Average case: Cavg(n), the “average” over inputs of size n. Determine the number of different groups into which all possible input sets can be divided. Determine the probability that the input will come from each of these groups. Determine how long the algorithm will run for each of these groups. Here n is the size of the input, m is the number of groups, p_i is the probability that the input will be from group i, and t_i is the time that the algorithm takes for input from group i.
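The formula this slide describes is not reproduced in the transcript; with m groups, probabilities p_i, and times t_i as defined above, the standard weighted-average expression is:

```latex
A(n) = \sum_{i=1}^{m} p_i \, t_i
```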

  22. Example: sequential search. Worst case: n key comparisons. Best case: 1 comparison. Average case: (n+1)/2 comparisons, assuming K is in A.
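A minimal Python sketch of the sequential search used in this example, returning the number of key comparisons so the three cases above can be checked directly (the names are illustrative).

```python
# Sequential search: scan A until the key K is found or the list is exhausted,
# counting the key comparisons along the way.
def sequential_search(A, K):
    comparisons = 0
    for i, item in enumerate(A):
        comparisons += 1
        if item == K:
            return i, comparisons         # best case: K is first, 1 comparison
    return -1, comparisons                # unsuccessful search: n comparisons

print(sequential_search([7, 3, 9, 1], 7))   # (0, 1)  -- best case
print(sequential_search([7, 3, 9, 1], 1))   # (3, 4)  -- worst successful case, n comparisons
print(sequential_search([7, 3, 9, 1], 5))   # (-1, 4) -- unsuccessful, n comparisons
```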

  23. Computing the average case for sequential search. Neither the worst case nor the best case reflects the actual performance of an algorithm on random input; the average case does. Assume that: the probability of a successful search is p (0 ≤ p ≤ 1); the probability of the first match occurring in the i-th position is the same for every i, so a match occurs at the i-th position with probability p/n for every i; in the case of an unsuccessful search, the number of comparisons is n, with probability (1-p).
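Putting these assumptions together gives the standard average-case count, which is not shown in the transcript but is what the special cases p = 1 and p = 0 on the next slide follow from:

```latex
C_{avg}(n) = \left[1\cdot\frac{p}{n} + 2\cdot\frac{p}{n} + \cdots + n\cdot\frac{p}{n}\right] + n(1-p)
           = \frac{p}{n}\cdot\frac{n(n+1)}{2} + n(1-p)
           = \frac{p(n+1)}{2} + n(1-p)
```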

  24. Computing the average case for sequential search. If p = 1 (the key K is always found), the average number of comparisons is (n+1)/2. If p = 0 (the key is never found), the average number of key comparisons is n. The average case is more difficult to analyze than the best and worst cases.

  25. Mathematical Background

  26. Mathematical Background: Logarithms

  27. Logarithms: which base? log_a n = (log_a b) · log_b n, i.e., log_a n = c · log_b n, where c = log_a b is a constant. In terms of complexity, we tend to ignore the constant, so the base of the logarithm does not matter.
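A quick numeric illustration of why the base is ignored: the ratio of logarithms taken in two different bases is a constant, independent of n (a Python sketch, purely illustrative).

```python
import math

# log2(n) / log10(n) equals the constant log2(10) for every n,
# which is why the base of the logarithm is ignored in complexity analysis.
for n in (10, 1_000, 1_000_000):
    print(n, math.log2(n) / math.log10(n))   # always ~3.3219... = log2(10)
```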

  28. Mathematical Background

  29. Mathematical Background

  30. Mathematical Background

  31. Types of formulas for the basic operation's count. Exact formula, e.g., C(n) = n(n-1)/2. Formula indicating order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5 n². Formula indicating order of growth with an unknown multiplicative constant, e.g., C(n) ≈ c n².

  32. Order of growth

  33. Of greater concern is the rate of increase in operations for an algorithm to solve a problem as the size of the problem increases. This is referred to as the rate of growth of the algorithm. Order of growth

  34. Order of growth. The function based on x² increases slowly at first, but as the problem size gets larger, it begins to grow at a rapid rate. The functions based on x both grow at a steady rate for the entire length of the graph. The function based on log x seems not to grow at all, but that is because it is actually growing at a very slow rate.

  35. Values of some important functions

  36. Order of growth. Second point to consider: because the faster-growing functions increase at such a significant rate, they quickly dominate the slower-growing functions. This means that if we determine that an algorithm's complexity is a combination of two of these classes, we will frequently ignore all but the fastest-growing of these terms. Example: if the complexity combines a faster-growing term with a 30x term, we tend to ignore the 30x term.

  37. Classification of growth: asymptotic order of growth. A way of comparing functions that ignores constant factors and small input sizes. O(g(n)): the class of functions f(n) that grow no faster than g(n), i.e., all functions with a smaller or the same order of growth as g(n). Ω(g(n)): the class of functions f(n) that grow at least as fast as g(n), i.e., all functions that are larger or have the same order of growth as g(n). Θ(g(n)): the class of functions f(n) that grow at the same rate as g(n), i.e., the set of functions that have the same order of growth as g(n).

  38. Big-oh. O(g(n)): the class of functions t(n) that grow no faster than g(n), i.e., t(n) ∈ O(g(n)) if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0. You may come up with different c and n0.

  39. Big-omega. Ω(g(n)): the class of functions t(n) that grow at least as fast as g(n), i.e., t(n) ∈ Ω(g(n)) if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.

  40. Big-theta. Θ(g(n)): the class of functions t(n) that grow at the same rate as g(n), i.e., t(n) ∈ Θ(g(n)) if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.

  41. Summary. ≥: Ω(g(n)), functions that grow at least as fast as g(n). =: Θ(g(n)), functions that grow at the same rate as g(n). ≤: O(g(n)), functions that grow no faster than g(n).

  42. Theorem. If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). The analogous assertions are true for the Ω-notation and Θ-notation. Implication: the algorithm's overall efficiency is determined by the part with the larger order of growth, i.e., its least efficient part. For example, 5n² + 3n log n ∈ O(n²). Proof: there exist constants c1, c2, n1, n2 such that t1(n) ≤ c1·g1(n) for all n ≥ n1 and t2(n) ≤ c2·g2(n) for all n ≥ n2. Define c3 = c1 + c2 and n3 = max{n1, n2}. Then t1(n) + t2(n) ≤ c3·max{g1(n), g2(n)} for all n ≥ n3.
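A small numeric sanity check of the slide's example 5n² + 3n log n ∈ O(n²); the constants c = 8 and n0 = 1 are one admissible choice among many, picked because 3n·log2(n) ≤ 3n² for n ≥ 1 (a Python sketch, not part of the original proof).

```python
import math

# Check that 5n^2 + 3n*log2(n) <= 8*n^2 for a range of n starting at n0 = 1.
c, n0 = 8, 1
assert all(5*n**2 + 3*n*math.log2(n) <= c * n**2 for n in range(n0, 10_000))
print("5n^2 + 3n log n <= 8 n^2 for all tested n >= 1")
```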

  43. Some properties of asymptotic order of growth. f(n) ∈ O(f(n)). f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n)). If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)). If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)}). Also, Σ_{1≤i≤n} Θ(f(i)) = Θ(Σ_{1≤i≤n} f(i)).

  44. Orders of growth of some important functions. All logarithmic functions log_a n belong to the same class Θ(log n), no matter what the logarithm's base a > 1 is, because log_a n = (log_a b) · log_b n. All polynomials of the same degree k belong to the same class: a_k n^k + a_(k-1) n^(k-1) + … + a_0 ∈ Θ(n^k). Exponential functions a^n have different orders of growth for different values of a.

  45. Basic asymptotic efficiency classes
