
Program Design and Algorithm Analysis



  1. Program Design and Algorithm Analysis

  2. INTRODUCTION TO PROGRAM DESIGN • Understand the program development life cycle. • Understand algorithms and the notation used to express them. • Understand the various control structures. • Understand algorithm complexity. • Understand big O notation. • Understand recursion.

  3. PROGRAM DEVELOPMENT LIFE CYCLE. The program development process is divided into the following phases: • Defining the problem. • Designing the program. • Coding the program. • Testing and debugging the program. • Documenting the program. • Implementing and maintaining the program.

  4. DEFINING THE PROBLEM • The goal of this phase is to form the program specification, which includes: • Input data • Processing that should take place • Format of the output report • User interface • Whether the job will be handled by an individual or a team

  5. DESIGNING THE PROGRAM • Program design begins by focusing on the main goal that the program is going to achieve and breaking the program into manageable components, each of which contributes to this goal. • This approach to program design is called top-down program design or modular programming.

  6. PROGRAM DESIGN TOOLS • Structure charts • Algorithms • Flowcharts • Pseudo codes

  7. CODING THE PROGRAM • Coding the program involves translating the algorithm into instructions of a specific programming language. • While writing the code, prefer well-defined control structures. • This technique of programming using only well-defined control structures is known as structured programming.

  8. TESTING AND DEBUGGING THE PROGRAM • The programmer must find and correct logical errors by carefully examining the program output for a set of data for which the results are already known; such data is known as test data. • If the software developed is complex and consists of a large number of modules, then testing is done in stages: unit testing followed by system testing. • Syntax errors and logical errors are collectively known as bugs. The process of identifying and eliminating these errors is known as debugging.

  9. DOCUMENTING THE PROGRAM • After testing, the software is almost complete. The structure charts, pseudocode and flowcharts developed during the design phase become documentation for others who are associated with the software project. • In addition, more documentation is produced as the program is being coded, such as a list of variable names and definitions, descriptions of the files the program needs to work with, and the format of the output the program produces.

  10. IMPLEMENTING AND MAINTAINING THE PROGRAM • In the final phase, the program is installed at the user’s site. Here too, the program is kept under watch until the user gives it the green signal. Users may discover errors that were not caught in the testing phase. • Even after the software project is complete, it needs to be maintained and evaluated regularly. In program maintenance, the programming team fixes program errors that users discover during day-to-day use.

  11. INTRODUCTION TO ALGORITHM An algorithm is a finite set of steps defining the solution of a particular problem. Every algorithm must satisfy the following criteria: • Input: There are zero or more values which are externally supplied. • Output: At least one value is produced • Definiteness: Each step must be clear and unambiguous. • Finiteness: Must terminate after a finite number of steps. • Effectiveness: Each step must be definite and must be feasible.

  12. EXAMPLE • Given a one-dimensional array a[0..n-1] of real values, we want to find the location of its largest element. • Algorithm 1: Let a[0..n-1] be a one-dimensional array with n real values. This algorithm finds the location loc of its largest element. The variable i is used as the index variable; max is a temporary variable that stores the current largest element.

  13. ALGORITHM
      Begin
          Read: n                        //enter value of n
          for i=0 to (n-1) by 1 do
              Read: a[i]                 //enter elements of array
          endfor
          Set max=a[0]                   //take first element as the largest
          Set loc=0                      //set location of the largest element to 0
          for i=1 to (n-1) by 1 do
              if (a[i]>max) then         //compare the next element with max
                  Set max=a[i]           //take this element as the current largest
                  Set loc=i              //update location of the largest element
              endif
          endfor
          Write: "location of the largest element =", loc
      End

  14. ALGORITHM DESCRIPTION The algorithm consists of two parts: • The first describes the input data and the purpose of the algorithm, and identifies the variables used in it. • The second is composed of the sequence of instructions that lead to the solution of the problem.

  15. SUMMARY OF CONVENTIONS USED FOR PRESENTING ALGORITHMS
      Comments: explain the purpose of an instruction. //this is a sample comment
      Variable names: italicized lowercase letters such as max, loc etc.
      Assignment statement: assignment uses the notation Set max=a[i]
      Input: data may be input and assigned to variables by means of a Read statement with the format Read: variable list
      Output: output is done by a Write statement: Write: message and/or variable list

  16. STRUCTURED PROGRAMMING CONSTRUCTS • Sequential programming constructs • Selection programming constructs • Iterative programming constructs

  17. THE if SELECTION STRUCTURE The if structure is a single-entry/single-exit structure. [Flowchart: if marks >= 40 is true, print “Passed”; otherwise do nothing.] A decision can be made on any expression: zero is false, nonzero is true. Example: 3 - 4 is true (nonzero).

  18. THE if/else SELECTION STRUCTURE [Flowchart of the if/else selection structure: if grade >= 40 is true, print “Passed”; otherwise print “Failed”.]

  19. SWITCH STATEMENT [Flowchart: starting from START, each case is tested in turn; when case k matches, its statement executes and control leaves the switch; otherwise the next case is tested until STOP.]
      switch (choice) {
          case 1: statement 1; break;
          case 2: statement 2; break;
          case 3: statement 3; break;
          case 4: statement 4; break;
      }

  20. THE while REPETITION STRUCTURE
      int number=1;
      while (number <= 10) {
          printf("number=%d", number);
          number = number + 1;
      }
      [Flowchart: while number <= 10 is true, the body executes and number is incremented; when it becomes false, the loop exits.]

  21. THE do/while REPETITION STRUCTURE [Flowchart: the action(s) execute first, then the condition is tested; while it is true, the loop repeats.]

  22. THE for LOOP Syntax: for (initialization; condition; increment/decrement)
      e.g.
      for (i = 0; i < 10; i++) {
          printf("%d", i);
      }

  23. What is Performance Analysis? In order to be accessed, data is stored in a data structure. Often, there is a choice of data structures that can be used to store the data. Almost any data structure can be accessed by more than one algorithm.

  24. Frequently, more than one algorithm performs the same action. In order to choose between data structures and algorithms, the efficiency of competing data structures (in terms of space required) and competing algorithms (in terms of time used) must be compared.

  25. Frequency Count Analysis • A frequency count of statements executed is the most direct form of a priori analysis of the time used by an algorithm. • Each statement in a program adds the value of 1 to the frequency count each time it is executed.

  26. Simple Program with FOR Loop
      int sum=0;
      for(int i=0; i<5; i++)
          sum += i;
      Print(sum);

  27. Analysis of FOR Loop The frequency count analysis of this fragment is:
      int sum=0;               // add 1 to the time count, add 1 to the space count
      for(int i=0; i<5; i++) { // add 6 to the time count, add 1 to the space count
          sum += i;            // add 5 to the time count
      }
      Print(sum);              // add 1 to the time count

  28. Analysis of FOR Loop This example has a frequency count of 13 for time and 2 for space. Note that compound-statement symbols (the braces) are not counted. Note also that the for statement requires 1 more test than the number of iterations of the loop body: the loop variable i must reach 5 to trigger the loop halt.

  29. Try this Program
      int b;
      for (int x=0; x<n; x++)
          for (int y=0; y<n; y++) {
              b = x * y;
              Print(b);
          }

  30. Analysis
      int b;                        // add 1 to space
      for (int x=0; x<n; x++)       // add 1 to space, n+1 to time
          for (int y=0; y<n; y++) { // add 1 to space, n(n+1) to time
              b = x * y;            // add n*n to time
              Print(b);             // add n*n to time
          }

  31. Solution This gives the time and space complexity:
      Time          Space
      n+1           1
      + n(n+1)      + 1
      + n²          + 1
      + n²
      __________    _____
      3n²+2n+1      3

  32. Analysis Notice that one loop dependent on n gives a result of the form (an + b), where the largest order of magnitude is 1. Two nested loops dependent on n give a result of the form (an² + bn + c), where the largest order of magnitude is 2. In fact, the pattern continues: with a triply nested set of loops that depend on a limit of n, the result has the form (an³ + bn² + cn + d), where the largest order of magnitude is 3. This pattern continues as the nesting of loops continues.

  33. Logarithmic loops • In a linear loop, the loop update either adds or subtracts. • In a logarithmic loop, the controlling variable is multiplied or divided in each iteration. • Multiply loop: for (i = 1; i < 1000; i *= 2) application code • Divide loop: for (i = 1000; i > 1; i /= 2) application code • To understand this, the value of i in each iteration is traced on the next slide.

  34. Logarithmic loops • As you can see, the number of iterations is about 10 in both cases (⌈log₂ 1000⌉ = 10). • The reason is that in each iteration the value of i doubles for the multiply loop and is cut in half for the divide loop. • Thus the number of iterations is a function of the multiplier or divisor. • Generalizing the analysis, we can say that the number of iterations in loops that multiply or divide by 2 is determined by the following formula: f(n) = log₂ n

  35. ALGORITHM COMPLEXITY Two aspects of computer programming: • Data organization, i.e. the data structures used to represent the data of the problem at hand. • Choosing the appropriate algorithm to solve the problem at hand.

  36. ALGORITHM The choice of a particular algorithm depends on the following considerations: • Performance requirements, i.e. time complexity • Memory requirements, i.e. space complexity • Programming requirements

  37. SPACE COMPLEXITY The space complexity of an algorithm is the amount of memory it needs to run to completion. Some reasons for studying space complexity are: • If the program is to run on a multi-user system, it may be necessary to specify the amount of memory to be allocated to the program. • We may want to know in advance whether sufficient memory is available to run the program. • There may be several possible solutions with different space requirements. • It can be used to estimate the size of the largest problem that a program can solve.

  38. SPACE COMPLEXITY The space needed by a program consists of the following components: • Instruction space: space needed to store the executable version of the program. • Data space: space needed to store all constants and variables. • Environment stack space: space needed to store information about partially completed function calls.

  39. TIME COMPLEXITY The time complexity of an algorithm is the amount of time it needs to run to completion. Some of the reasons for studying time complexity are: • We may want to know in advance whether the program will provide a satisfactory real-time response. • There may be several possible solutions with different time requirements.

  40. TIME-SPACE TRADE-OFF • The best algorithm, and hence the best program, to solve a given problem is one that requires less space in memory and takes less time to complete its execution. • In practice it is not always possible to achieve both objectives, and we have to sacrifice one at the cost of the other.

  41. EXPRESSING SPACE AND TIME COMPLEXITY The space and/or time complexity is usually expressed in the form of a function f(n), where n is the input size for a given instance of the problem being solved. Expressing time and space complexity as a function f(n) is important for the following reasons: • We may want to predict the rate of growth of the complexity as the size of the problem increases. • To compare the complexities of two or more algorithms solving the same problem in order to find out which is more efficient.

  42. NOTE: Since memory is not a severe constraint on modern computers, our analysis of algorithms will be carried out on the basis of time complexity.

  43. BIG Oh NOTATION Big Oh is a characterization scheme that allows us to measure properties of algorithms, such as performance and/or memory requirements, in a general fashion. The algorithm complexity can be determined while ignoring implementation-dependent factors, by eliminating constant factors from the analysis of the algorithm; these are essentially the constant factors that differ from computer to computer. Clearly, the complexity function f(n) of an algorithm increases as n increases; it is the rate of increase of f(n) that we want to examine.

  44. BIG Oh NOTATION Suppose f(n) and g(n) are functions defined on the positive integers n. Then f(n) = O(g(n)), read as “f of n is big oh of g of n”, or as “f(n) is of the order of g(n)”, if there exist positive constants c and n₀ such that f(n) <= c × g(n) for all values of n >= n₀. That is, for all sufficiently large inputs (n >= n₀), f(n) grows no faster than a constant multiple of g(n).

  45. THEOREM If f(n) = aₘnᵐ + aₘ₋₁nᵐ⁻¹ + … + a₂n² + a₁n + a₀ and aₘ > 0, then f(n) = O(nᵐ).
      Proof: for n >= 1,
          f(n) <= Σᵢ₌₀ᵐ |aᵢ| nⁱ
                = nᵐ Σᵢ₌₀ᵐ |aᵢ| nⁱ⁻ᵐ
                <= nᵐ Σᵢ₌₀ᵐ |aᵢ|
      Hence f(n) = O(nᵐ), with c = Σᵢ₌₀ᵐ |aᵢ| and n₀ = 1.

  46. EXAMPLES Exact constant values do not matter, and the relationship f(n) <= c × g(n) may not hold for small input sizes (n < n₀).

  47. CATEGORIES OF ALGORITHMS Based on the big Oh notation, algorithms can be categorized as follows: • Constant time (O(1)) algorithms • Logarithmic time (O(log n)) algorithms • Linear time (O(n)) algorithms • O(n log n) time algorithms • Quadratic time (O(n²)) algorithms • Polynomial time (O(nᵏ), for k > 1) algorithms • Exponential time (O(kⁿ), for k > 1) algorithms

  48. RATE OF GROWTH OF SOME STANDARD FUNCTIONS The logarithmic function log n grows most slowly, whereas the exponential function 2ⁿ grows most rapidly, and the polynomial function nᵏ grows according to the exponent k.

  49. LIMITATION OF BIG OH NOTATION • It contains no consideration of programming effort. • It masks (hides) potentially important constant. For example one algorithm use 500000n2 time, and other n3 time. The first is O(n2), which implies that it will take less time than the other algorithm which is O(n3). However, second algorithm will be faster for n<500000, and this would be faster for many applications.
