1.2.1 Complexity, 2nd edition: When size matters

Learn about computational complexity, "Big Oh" notation, and algorithm classification in this module. Explore different types, from computational to cognitive complexity. Discover key factors affecting execution time and algorithm efficiency. Enhance your understanding of complexity for better problem-solving in computer science and CAE.

Presentation Transcript


  1. Knowledge Component 1: Theoretical Foundations. 1.2.1 Complexity, 2nd edition: When size matters. Ian F. C. Smith, EPFL, Switzerland

  2. Module Information Intended audience: Intermediate. Key words: Complexity, Program execution time, “Big Oh” notation. Author: Ian Smith, EPFL, Switzerland. Reviewer (1st edition): William Rasdorf, NCSU

  3. What there is to learn • At the end of modules 1.2.1 and 1.2.2, there will be answers to the following questions: • Are there certain tasks that computers cannot do and if so, can faster computers help in these cases? • What are the cases when computational requirements are nearly independent of task size? • Can small changes have a big effect? • Can one classify tasks in order to know whether or not a program will perform well for full-scale tasks? • Why is engineering experience so valuable?

  4. Outline Complexity Execution Time “Big Oh” Notation Classification in “Big Oh” Notation

  5. Complexity Complexity is a central theme in computer science and is an important topic in CAE. One of the most practical aspects involves the classification of algorithms according to their ability to cope with different levels of computational complexity. There are three types of complexity: computational, descriptive and cognitive.

  6. Three Types of Complexity • Computational complexity: This type is used to classify well-structured tasks where the goal is to find an efficient solution. Although efficiency can be measured in terms of use of memory and hardware resources, algorithms are traditionally classified according to factors that influence execution time.

  7. Three Types of Complexity (cont’d.) • Descriptive complexity: A classification of the level of difficulty involved with providing descriptive structures to systems that have varying degrees of intricacy. • Cognitive complexity: A classification of the level of difficulty related to describing and simulating human thought for various activities.

  8. This course introduces only the fundamental concepts of computational complexity. The other two types of complexity do not yet have fixed classification schemas. Nevertheless, they are both important areas in computer science and their sub-domains are gradually attracting the interest of engineers.

  9. Outline Complexity Execution Time “Big Oh” Notation Classification in “Big Oh” Notation

  10. Execution Time • Rather than absolute values, trends with respect to particular factors are of most interest, viz. the growth of execution time in relation to the increase in the size of the task. • Important factors: • Formulation of the problem (task complexity) • Algorithm complexity • Size of input • Presentation of the desired results • Hardware capacity (memory, peripherals, etc.) • Operating environment

  11. Execution Time (cont’d.) In this course, we will concentrate on the first two aspects. It is assumed that the size of the input and requirements related to the results are not critical factors. It will be shown that under certain conditions the first two aspects can create extreme situations that are independent of hardware capacity and operating environments. These aspects will remain important regardless of advances in computing equipment.

  12. Example 1: Part A Estimation of trends in execution times of a program upon modifying key parameters. Task: Write a program that finds the minimum value of the following function by sampling the solution space at regular intervals.

  13. Example 1: Part A (cont’d.) f(x) = 1 + x²/4000 − cos(x) for x ∈ [−10, 10] (Griewank’s function)

  14. Example 1: Part A (cont’d.) What would be the best number of samples? All other aspects being equal, if the number of samples is doubled, the execution time is doubled. If the number of samples is tripled, the execution time is tripled. Therefore, a sampling algorithm of this type is said to be linear with respect to the number of samples required.
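A minimal sketch of such a sampling program is shown below, written in Python; the helper name sample_minimum and the choice of 1001 samples are illustrative assumptions rather than part of the original module. Because the loop body runs once per sample, the work grows linearly with the number of samples.

```python
import math

def griewank_1d(x):
    """One-dimensional Griewank function: f(x) = 1 + x**2/4000 - cos(x)."""
    return 1 + x * x / 4000 - math.cos(x)

def sample_minimum(f, lo, hi, n_samples):
    """Evaluate f at n_samples regularly spaced points in [lo, hi]
    and return the location and value of the smallest sample found."""
    best_x, best_f = lo, f(lo)
    for i in range(1, n_samples):
        x = lo + (hi - lo) * i / (n_samples - 1)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Doubling n_samples doubles the number of function evaluations: O(n) behaviour.
print(sample_minimum(griewank_1d, -10.0, 10.0, 1001))
```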

  15. Example 1: Part A (cont’d.) Note: This is not the best algorithm for this task. What is the best algorithm?

  16. Example 1: Part A (cont’d.) Answer: Take the derivative of f(x), set it to zero and solve for x. This approach requires no sampling, and its execution time is the same regardless of the range of values of x. Discussion: Most tasks can be solved in several ways, using algorithms whose execution times have very different sensitivities to task size.
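As an added illustration of that analytic route (not part of the original slides), the derivative of the one-variable Griewank function can be written out directly:

```latex
f(x) = 1 + \frac{x^{2}}{4000} - \cos x \;\ge\; 0,
\qquad
f'(x) = \frac{x}{2000} + \sin x .
```

The condition f'(x) = 0 is satisfied at x = 0, where f''(0) = 1/2000 + 1 > 0; since f(x) ≥ 0 with equality only at x = 0, the global minimum on [−10, 10] is f(0) = 0, obtained with a fixed amount of work.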

  17. Example 1: Part B Task: Write a program that finds the minimum value of the following function by sampling along each axis at regular intervals: f(x₁, …, x_N) = 1 + (x₁² + x₂² + … + x_N²)/4000 − cos(x₁/√1) · cos(x₂/√2) · … · cos(x_N/√N), where each xᵢ ∈ [−10, 10] (the general form of Griewank’s function in N variables).

  18. Example 1: Part B (cont’d.) For a given number of variables N, the number of samples increases by a factor of 2ᴺ if the number of sampling points on each axis is doubled, and by a factor of 3ᴺ if the number of axis points is tripled. For constant N, this algorithm is said to be polynomial with respect to the number of sampling points on each axis.

  19. Example 1: Part B (cont’d.) If the number of sampling points along each axis is constant while N varies, the algorithm is said to be exponential with respect to the number of variables.

  20. Example 1: Part B (cont’d.) • For 10 sampling points along each axis: • N = 2 requires 100 samples • N = 3 requires 1000 samples • N = 4 requires 10000 samples • N = 10 requires 10 billion samples. Growth is polynomial in the number of points per axis and exponential in the number of variables N.
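The same idea in N variables can be sketched as follows (again in Python; the helper names griewank and grid_minimum are illustrative, and the /√i factor is the standard general form of Griewank’s function). The total number of function evaluations is points_per_axis raised to the power N, which reproduces the counts listed above.

```python
import itertools
import math

def griewank(xs):
    """Standard N-variable Griewank function."""
    sum_term = sum(x * x for x in xs) / 4000.0
    prod_term = math.prod(math.cos(x / math.sqrt(i + 1)) for i, x in enumerate(xs))
    return 1.0 + sum_term - prod_term

def grid_minimum(f, n_vars, points_per_axis, lo=-10.0, hi=10.0):
    """Sample f on a regular grid: points_per_axis ** n_vars evaluations."""
    axis = [lo + (hi - lo) * i / (points_per_axis - 1) for i in range(points_per_axis)]
    best = min(itertools.product(axis, repeat=n_vars), key=f)
    return best, f(best)

# 10 points per axis: N = 2 -> 100 samples, N = 3 -> 1000, N = 10 -> 10 billion.
print(grid_minimum(griewank, 3, 10))
```

Doubling points_per_axis multiplies the work by 2ᴺ, while adding one more variable multiplies it by points_per_axis; this is the exponential growth described above.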

  21. Example 1: Part B (cont’d.) In practice, exponentially complex algorithms are feasible only for small tasks. The algorithm may be clear and concise and the program may only comprise a few lines, but execution may take hours, days or even longer.

  22. Review Quiz I • What are three types of complexity? • What is a good measure of computational complexity? • What is more desirable from a computational point of view; polynomial or exponential complexity? Why?

  23. Answers to Review Quiz I • What are three types of complexity? • Computational complexity • Descriptive complexity • Cognitive complexity Only computational complexity has fixed classification schemas. This course includes one of them.

  24. Answers to Review Quiz I • What is a good measure of computational complexity? Trends with respect to growth of execution time in relation to the increase in a parameter. • What is more desirable from a computational point of view; polynomial or exponential complexity? Polynomial complexity because execution time grows less rapidly with increases in task size.

  25. Outline Complexity Execution Time “Big Oh” Notation Classification in “Big Oh” Notation

  26. "Big Oh" Notation "Big Oh" notation is one of the most widely used notations for expressing the relationship between task size and the amount of computational resources required. It provides a model of execution time relative to task size. Therefore, “Big Oh” notation is very useful for classifying levels of computational complexity.

  27. "Big Oh" Notation (cont’d.) If n = task size (an integer, n > 0), f(n) = execution time (> 0) and g(n) = relative execution time, then there exists a positive constant c such that f(n) ≤ c · g(n). Note the inequality!

  28. "Big Oh" Notation (cont’d.) We are interested in upper-bound, or worst-case, estimates of execution time. For small values of n, say n < n₀, the inequality may not hold; the extra condition n ≥ n₀ is therefore added.
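Slides 27 and 28 can be combined into one formal statement (a standard textbook formulation rather than a verbatim quote from the module):

```latex
f(n) \in O\bigl(g(n)\bigr)
\iff
\exists\, c > 0,\ \exists\, n_{0} > 0 \ \text{such that}\
f(n) \le c \cdot g(n) \ \text{for all } n \ge n_{0}.
```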

  29. "Big Oh" Notation: O(g(n)) The constant c captures machine-specific aspects of execution (hardware, compiler, etc.). These aspects are assumed to be independent of the task size n (n ≥ n₀). The function g(n) represents the trend in execution time for varying values of the task size n. The notation O(g(n)) signifies "Order of g(n)".

  30. Examples Revisited Example 1, Part A: The complexity is O(n) with respect to the number of samples taken in the interval [−10, 10]. Example 1, Part B: With three variables (N = 3), the complexity is O(n³) with respect to the number of samples, n, taken in the interval [−10, 10] along each axis.

  31. Examples Revisited (cont’d.) Example 1, Part B: With 10 samples taken in the interval [−10, 10] on each axis, the complexity is O(10ᴺ) with respect to the number of variables (axes), N.

  32. Outline Complexity Execution Time “Big Oh” Notation Classification in “Big Oh” Notation

  33. Classifications in “Big Oh” Notation Logarithmic time O(log(n)): f(n) ≤ c · log(n). Linear time O(n): f(n) ≤ c · n. Polynomial time O(n²): f(n) ≤ c · n². Exponential time O(2ⁿ): f(n) ≤ c · 2ⁿ.

  34. Classifications in “Big Oh” Notation (cont’d.) Factorial time O(n!): f(n) ≤ c · n!. Superexponential time O(nⁿ): f(n) ≤ c · nⁿ. Note: “Big Oh” notation is independent of the number of commands a machine executes in a second.
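To make these growth rates concrete, the short Python script below (an illustrative addition, not part of the slides) tabulates each class for a few task sizes n:

```python
import math

def growth_table(ns=(5, 10, 20, 30)):
    """Tabulate the growth of each complexity class for a few task sizes n."""
    print(f"{'n':>4} {'log n':>8} {'n^2':>8} {'2^n':>12} {'n!':>12} {'n^n':>12}")
    for n in ns:
        print(f"{n:>4} {math.log(n):>8.2f} {n**2:>8} "
              f"{2**n:>12.2e} {math.factorial(n):>12.2e} {n**n:>12.2e}")

growth_table()
```

Already at n = 30 the exponential, factorial and nⁿ columns dwarf the polynomial ones, which is why algorithms in those classes are feasible only for small tasks.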

  35. Simplification • What are the “Big Oh” notations for the following? • f(n) = 5n³ + 4n² + 3n + 5 • f(n) = 3n³ + 2ⁿ • f(n) = 3n + log(n) • f(n) = 6n! + 2n⁴ + n

  36. Simplification – Answers • f(n) = 5n³ + 4n² + 3n + 5 → O(n³) • f(n) = 3n³ + 2ⁿ → O(2ⁿ) • f(n) = 3n + log(n) → O(n) • f(n) = 6n! + 2n⁴ + n → O(n!) Explanations follow.

  37. Simplification – Answers (cont’d.) • f(n) = 5n³ + 4n² + 3n + 5. For n ≥ 1, each term is at most the corresponding multiple of n³, so f(n) ≤ 5n³ + 4n³ + 3n³ + 5n³ = 17n³. Therefore, f(n) is O(n³).

  38. Simplification – Answers (cont’d.) • f(n) = 3n³ + 2ⁿ. Since n³ ≤ 4 · 2ⁿ for all n ≥ 1, 3n³ ≤ 12 · 2ⁿ and hence f(n) ≤ 13 · 2ⁿ. Therefore, f(n) is O(2ⁿ).

  39. Simplification – Answers (cont’d.) • f(n) = 3n + log n. Since log n < n for n ≥ 1, f(n) ≤ 4n. Therefore, f(n) is O(n).

  40. Simplification – Answers (cont’d.) • f(n) = 6n! + 2n⁴ + n. Since 2n⁴ + n ≤ 30 · n! for all n ≥ 1, f(n) ≤ 36 · n!. Therefore, f(n) is O(n!).
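A quick numerical check (an illustrative sketch, not from the original slides) confirms these simplifications: the ratio f(n)/g(n) settles at or below the constant c used in each worked answer, which is exactly what the definition f(n) ≤ c · g(n) requires for n ≥ n₀.

```python
import math

# Each entry: (f, g, constant c claimed in the worked answers above).
cases = [
    (lambda n: 5*n**3 + 4*n**2 + 3*n + 5,        lambda n: n**3,   17),
    (lambda n: 3*n**3 + 2**n,                    lambda n: 2**n,   13),
    (lambda n: 3*n + math.log(n),                lambda n: n,       4),
    (lambda n: 6*math.factorial(n) + 2*n**4 + n, math.factorial,   36),
]

for f, g, c in cases:
    worst = max(f(n) / g(n) for n in range(1, 31))
    print(f"max f(n)/g(n) over n = 1..30 is {worst:.2f} (claimed bound c = {c})")
```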

  41. Summary • Complexity is an important topic in engineering • Three types of complexity: computational, descriptive, cognitive • The “Big Oh” notation is useful for characterizing computational complexity • High complexity (e.g. exponential or factorial complexity) => excessive execution times for large problems

  42. Remarks • When complexity is exponential, it is not possible to find solutions in a reasonable amount of time for large values of n • Increases in computer power do not help • The presence of a well-defined algorithm is not the only criterion for computability. For example, a well-defined algorithm that has factorial complexity would not be computable in practice for large task sizes.

  43. Further reading • Computers and Intractability: A Guide to the Theory of NP-Completeness, M. Garey and D. Johnson, W. H. Freeman and Company, New York, 1979 • Algorithmics: The Spirit of Computing, 3rd ed., D. Harel, Addison-Wesley, 2004 • Fundamentals of Computer-Aided Engineering, B. Raphael and I. F. C. Smith, Wiley, 2003
