What is an algorithm?


Presentation Transcript


  1. What is an algorithm? • Algorithms are the ideas behind computer programs. • An algorithm is the thing which stays the same whether the program is in Pascal running on a Cray in New York or is in BASIC running on a PC in Taipei! • To be interesting, an algorithm has to solve a general, specified problem. An algorithmic problem is specified by describing the set of instances it must work on and what desired properties the output must have.

  2. What is an algorithm? • Problem: specified by its input/output behaviors. • Algorithm: a problem-solving procedure which can be implemented on a computer and satisfies the following conditions: it terminates, it is correct, and it is well-defined. • Program: implementation of an algorithm on a computer. • A problem will normally have many (usually infinitely many) instances. • An algorithm must work correctly on every instance of the problem it claims to solve.

  3. Example

  4. Another Example • Suppose you have a robot arm equipped with a tool, say a soldering iron. To enable the robot arm to do a soldering job, we must construct an ordering of the contact points, so the robot visits (and solders) the first contact point, then visits the second point, the third, and so forth until the job is done. • Since robots are expensive, we need to find the order which minimizes the time (i.e., travel distance) it takes to assemble the circuit board.

  5. Correctness is Not Obvious! • You are given the job of programming the robot arm. Give me an algorithm to find the best tour!

  6. Nearest Neighbor Tour • A very popular solution starts at some point p0, walks to its nearest neighbor p1 first, then repeats from p1, etc., until done.
      Pick and visit an initial point p0
      p = p0; i = 0
      While there are still unvisited points
          i = i + 1
          Let pi be the closest unvisited point to pi-1
          Visit pi
      Return to p0 from pi
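A minimal Python sketch of this heuristic (my translation of the pseudocode above; the function name and the use of Euclidean distance via math.dist are assumptions, since the slides leave the metric implicit):

```python
import math

def nearest_neighbor_tour(points):
    """Greedy tour: from the current point, always walk to the closest
    unvisited point. `points` is a list of (x, y) tuples; the returned
    list holds indices in visiting order, starting from p0 = points[0]."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        here = points[tour[-1]]
        nearest = min(unvisited, key=lambda j: math.dist(here, points[j]))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour  # the robot then returns to p0 from the last point
```

It is fast (O(n^2) distance evaluations), which is exactly why the next slide's observation stings: efficient, but wrong.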

  7. It Does Not Solve The Problem! • This algorithm is simple to understand and implement and very efficient. However, it is not correct! • Always starting from the leftmost point or any other point will not solve the problem.

  8. Closest Pair Tour • Always walking to the closest point is too restrictive, since that point might trap us into making moves we don't want. • Another idea would be to repeatedly connect the closest pair of points whose connection will not cause a cycle or a three-way branch to be formed, until we have a single chain with all the points in it.

  9. It is Still Not Correct!
      Let n be the number of points in the set
      d = ∞
      For i = 1 to n − 1 do
          For each pair of endpoints (x, y) of partial paths
              If dist(x, y) < d then xm = x, ym = y, d = dist(x, y)
          Connect (xm, ym) by an edge
          d = ∞
      Connect the two endpoints by an edge
  • Although it works correctly on the previous example, other data causes trouble:
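One way the closest-pair idea might look in Python (a sketch; representing partial paths as lists of indices and rescanning all endpoint pairs each round are my choices, not the slides'):

```python
import math

def closest_pair_tour(points):
    """Repeatedly connect the closest pair of endpoints belonging to two
    different partial paths; joining only endpoints of different paths
    rules out cycles and three-way branches."""
    paths = [[i] for i in range(len(points))]  # n single-point paths
    while len(paths) > 1:
        best = None  # (distance, path a, path b, endpoint x, endpoint y)
        for a in range(len(paths)):
            for b in range(a + 1, len(paths)):
                for x in (paths[a][0], paths[a][-1]):
                    for y in (paths[b][0], paths[b][-1]):
                        d = math.dist(points[x], points[y])
                        if best is None or d < best[0]:
                            best = (d, a, b, x, y)
        _, a, b, x, y = best
        if paths[a][-1] != x:
            paths[a].reverse()  # orient path a to end at x
        if paths[b][0] != y:
            paths[b].reverse()  # orient path b to start at y
        paths[a] += paths[b]    # merge; the closing edge is added at the end
        del paths[b]
    return paths[0]
```

As the slide warns, this is still only a heuristic: greedy merging can leave a long, expensive closing edge.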

  10. A Correct Algorithm • We could try all possible orderings of the points, then select the ordering which minimizes the total length:
      d = ∞
      For each of the n! permutations πi of the n points
          If cost(πi) < d then d = cost(πi) and Pmin = πi
      Return Pmin
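The exhaustive algorithm is equally short in Python (a sketch; taking cost to be the closed tour's total Euclidean edge length is my assumption):

```python
import math
from itertools import permutations

def optimal_tour(points):
    """Try all n! orderings and keep the cheapest closed tour.
    Correct by construction, but usable only for very small n."""
    n = len(points)

    def cost(order):
        # Length of the closed tour, including the edge back to the start.
        return sum(math.dist(points[order[k]], points[order[(k + 1) % n]])
                   for k in range(n))

    return min(permutations(range(n)), key=cost)
```

Already at n = 15 there are 15! ≈ 1.3 × 10^12 orderings, which is why the next slide calls this approach hopelessly slow.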

  11. It is Not Efficient • Since all possible orderings are considered, we are guaranteed to end up with the shortest possible tour. • Because it tries all n! permutations, it is extremely slow, much too slow to use when there are more than 10-20 points. • No efficient, correct algorithm is known for the traveling salesman problem, as we will see later.

  12. Efficiency • “Why not just use a supercomputer?” • Supercomputers are for people too rich and too stupid to design efficient algorithms! • A faster algorithm running on a slower computer will always win for sufficiently large instances, as we shall see. • Usually, problems don't have to get that large before the faster algorithm wins.

  13. Expressing Algorithms • We need some way to express the sequence of steps comprising an algorithm. • In order of increasing precision, we have English, pseudocode, and real programming languages. Unfortunately, ease of expression moves in the reverse order. • I prefer to describe the ideas of an algorithm in natural language, moving to pseudocode to clarify sufficiently tricky details of the algorithm.

  14. Pseudocode notation • Similar to any typical imperative programming language, such as Pascal, C, Modula, Java, ... • Liberal use of English. • Use of indentation for block structure. • Employs any clear and concise expressive methods. • Typically not concerned with software engineering issues such as: • error handling. • data abstraction. • modularity.

  15. Algorithm Analysis • Predicting the amount of resources required from the size of the input. We must have some quantity to count. Typically: • runtime. • memory. • number of basic operations, such as: • arithmetic operations (e.g., for multiplying matrices). • bit operations (e.g., for multiplying integers). • comparisons (e.g., for sorting and searching). • Types of Analysis: • worst-case. • average-case. • best-case.

  16. Best, Worst, and Average-Case • Types of Analysis: • The worst-case complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n. • The best-case complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n. • The average-case complexity of the algorithm is the function defined by the average number of steps taken on any instance of size n. • Each of these complexities defines a numerical function: time vs. size!

  17. Best, Worst, and Average-Case

  18. The RAM Model • Algorithms are the only important, durable, and original part of computer science because they can be studied in a machine- and language-independent way. • We will do most of our design and analysis for the RAM model of computation: • Each "simple" operation (+, -, =, if, call) takes exactly 1 step. • Loops and subroutine calls are not simple operations, but depend upon the size of the data and the contents of the subroutine. We do not want "sort" to be a single-step operation. • Each memory access takes exactly 1 step.

  19. Insertion Sort • One way to sort an array of n elements is to start with an empty list, then successively insert new elements in the proper position: a1 a2 … ak | ak+1 … an • At each stage, the insertion leaves a sorted list, and after n insertions it contains exactly the right elements. Thus the algorithm must be correct. • But how efficient is it? • Note that the run time changes with the permutation instance! (even for a fixed problem size)
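For concreteness, here is a Python rendering of insertion sort (a sketch; its while test plays the role of the "line 5" test whose execution count tj appears in the analysis on the following slides):

```python
def insertion_sort(A):
    """Sort list A in place by inserting each A[j] into the
    already-sorted prefix A[0..j-1]."""
    for j in range(1, len(A)):
        key = A[j]                        # next element to insert
        i = j - 1
        while i >= 0 and A[i] > key:      # the counted "while test"
            A[i + 1] = A[i]               # slide larger elements right
            i -= 1
        A[i + 1] = key                    # key lands in its proper place
    return A
```

On already-sorted input the while test fails immediately in every iteration (the best case); on reverse-sorted input it slides all j earlier elements (the worst case).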

  20. Example

  21. Exact Analysis of Insertion Sort • In the following table, n = length(A), and tj = number of times the while test in line 5 is executed in the jth iteration.

  22. The Total Cost • Add up the executed instructions for all pseudo-code lines to get the run-time of the algorithm: • What are the tj‘s? They depend on the particular input.

  23. Best Case • If it's already sorted, all tj‘s are 1. • Hence, the best case time is c1n + (c2 + c4 + c5 + c8)(n – 1) = Cn + D where C and D are constants.

  24. Worst Case • If the input is sorted in descending order, we will have to slide all of the already-sorted elements, so tj = j, and step 5 is executed Σ(j=2..n) j = n(n+1)/2 − 1 times. • Total runtime is therefore a quadratic function an^2 + bn + c of n, i.e., Θ(n^2).
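The arithmetic behind that count, written out (standard algebra; the slide's own formula images are missing from the transcript):

```latex
\sum_{j=2}^{n} t_j = \sum_{j=2}^{n} j = \frac{n(n+1)}{2} - 1,
\qquad\text{so}\qquad
T(n) = a n^{2} + b n + c = \Theta(n^{2}).
```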

  25. Average Case

  26. Exact Analysis is Hard! • Exact Analysis is difficult to work with because it is typically very complicated! • Thus it is usually cleaner and easier to talk about upper and lower bounds of the function.

  27. Life Cycle of Algorithm Development

  28. Related Courses • Formal Specification • Abstract computation models • Data structure design • Algorithm design techniques • Theory of Computation / Complexity • Software engineering • Program verification • Computability

  29. Classification of Algorithms • by methods (techniques) • by characteristics • by running environments (architectures)

  30. Classified by methods (techniques) • Divide and Conquer • Dynamic Programming • Greedy • Network Flow • Linear/Integer Programming • Backtracking • Branch and Bound • …

  31. Classified by characteristics • Heuristic • Approximation • Randomized (Probabilistic) • On-Line • Genetic • …

  32. Classified by running environments • Sequential • Parallel • Distributed • Systolic • …

  33. Asymptotic Notations Suppose f and g are functions defined on positive integers: • Asymptotic upper bound: O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n > n0}. • Asymptotic lower bound: Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n > n0}. • Asymptotic tight bound: Θ(g(n)) = {f(n): there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n > n0}.
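For example, a claim such as 3n^2 + 2n ∈ O(n^2) is certified by exhibiting the constants explicitly (my example, not from the slides):

```latex
3n^{2} + 2n \le 3n^{2} + 2n^{2} = 5n^{2} \quad \text{for all } n \ge 1,
\qquad\text{so } c = 5 \text{ and } n_0 = 1 \text{ witness } 3n^{2} + 2n \in O(n^{2}).
```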

  34. Useful (abuse of) notation • Write f(n) = O(g(n)) to mean f(n) ∈ O(g(n)). • Similarly for Ω and Θ. Very useful, e.g.:

  35. Big-O

  36. Big-Ω • f(n) = Ω(g(n))

  37. Big-Θ • f(n) = Θ(g(n))

  38. Ω, Θ, and O • f(n) = Θ(g(n)): tight bound • f(n) = O(g(n)): upper bound • f(n) = Ω(g(n)): lower bound

  39. Growth Rate of Functions

  40. A Revealing Table

  41. Another Revealing Table • If an algorithm runs in time T(n), what is the largest problem instance that it can solve in 1 minute?
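The missing table can be regenerated with a short script (a sketch; the budget of 6 × 10^10 steps per minute assumes a machine doing 10^9 simple steps per second, which is my assumption, not the slide's):

```python
import math

BUDGET = 60 * 10**9  # assumed simple steps available in one minute

def largest_n(T):
    """Largest n with T(n) <= BUDGET, by doubling then binary search."""
    hi = 2
    while T(hi) <= BUDGET:
        hi *= 2
    lo = 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if T(mid) <= BUDGET:
            lo = mid
        else:
            hi = mid - 1
    return lo

for name, T in [("n",       lambda n: n),
                ("n log n", lambda n: n * math.log2(n)),
                ("n^2",     lambda n: n**2),
                ("2^n",     lambda n: 2**n)]:
    print(f"T(n) = {name:8s} -> largest n = {largest_n(T)}")
```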

  42. Another definition of Ω • In using this notation, the left-hand side is more precise than the right; i.e., n^2 = O(n^4), 27n^3 = Θ(n^3), Θ(n) = O(n^2), but we do not say O(n^2) = n^2. • Another definition of the big omega notation: f(n) = Ω(g(n)) iff there exist a constant c > 0 and a positive integer n0 such that f(n) ≥ c · g(n) for infinitely many n ≥ n0. • Why define big-O and big-Ω notation in such an asymmetric way?

  43. Example • What is the asymptotic order of f(n)? • Clearly, f(n) is O(n) but not Θ(n) and hence not Ω(n). • What is the lower bound of f(n)? • According to the original definition, the best lower bound is 0.

  44. More Asymptotic Notations • Upper bound that is not asymptotically tight: o(g(n)) = {f(n): for any c > 0, there exists a positive constant n0 such that 0 ≤ f(n) < cg(n) for all n > n0}. • Lower bound that is not asymptotically tight: ω(g(n)) = {f(n): for any c > 0, there exists a positive constant n0 such that 0 ≤ cg(n) < f(n) for all n > n0}. • f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n)). • f(n) ∈ o(g(n)) iff g(n) ∈ ω(f(n)).

  45. Comparison of functions • Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that f(n) and g(n) are asymptotically positive. • Transitivity: • f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)), • f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)), • f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)), • f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)), • f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)).

  46. Relations • Reflexivity: f(n) = Θ(f(n)), f(n) = O(f(n)), f(n) = Ω(f(n)). • Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)). • Transpose symmetry: f(n) = O(g(n)) iff g(n) = Ω(f(n)); f(n) = o(g(n)) iff g(n) = ω(f(n)).

  47. Asymptotic vs. Real Numbers • Analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b: f(n) = Θ(g(n)) ↔ a = b; f(n) = O(g(n)) ↔ a ≤ b; f(n) = o(g(n)) ↔ a < b; f(n) = Ω(g(n)) ↔ a ≥ b; f(n) = ω(g(n)) ↔ a > b. • Any two real numbers can be compared, but not all functions are asymptotically comparable. That is, for two functions f(n) and g(n), it may be the case that neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. E.g.: f(n) = n^(1+sin n) and g(n) = n^(1−sin n).

  48. Properties of Asymptotic Notations 1. For all k > 0, kf is O(f). 2. If f is O(g) and f′ is O(g) then (f + f′) is O(g). If f is O(g) then (f + g) is O(g). 3. If f is O(g) and g is O(h) then f is O(h). 4. n^r is O(n^s) if 0 ≤ r ≤ s. 5. If p is a polynomial of degree d then p is Θ(n^d).
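As a sanity check, property 2 follows in one line (the standard argument; not spelled out on the slide): if f(n) ≤ c1·g(n) for n ≥ n1 and f′(n) ≤ c2·g(n) for n ≥ n2, then

```latex
f(n) + f'(n) \le (c_1 + c_2)\, g(n) \quad \text{for all } n \ge \max(n_1, n_2),
\qquad\text{so } f + f' \in O(g).
```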

  49. More Properties 6. If f is O(g) and h is O(r) then f · h is O(g · r). 7. n^k is O(b^n), for all b > 1, k ≥ 0. 8. log_b n is O(n^k), for all b > 1, k > 0. 9. log_b n is Θ(log_d n), for all b, d > 1. 10. Σ(k=1..n) k^r is O(n^(r+1)).

  50. Intractability • Definition: An algorithm has polynomial time complexity iff its time complexity is O(n^d) for some integer d. A problem is intractable iff no algorithm with polynomial time complexity is known for it. Exercises: • Is 3^n O(2^n)? • What is O(n^2 − n + 6) − O(n^2 − 6)? • Find functions f and g such that (1) f is O(g), (2) f is not Ω(g), and (3) f(n) > g(n) for infinitely many n.
