
Numerical Methods


Presentation Transcript


  1. Numerical Methods Lecture 1 – Introduction Dr Andy Phillips School of Electrical and Electronic Engineering University of Nottingham

  2. Today • Introduction to module • Why do we need numerical methods? • What will we cover? (plus laboratory and exam details) • Quick review of numbers • How numbers are stored and the effect on precision

  3. Topics to be Covered (cont.) • Numbers • Types of numbers (review) • How they are stored (IEEE standard) • Effect on Precision

  4. Why Numerical Methods • Why numerical methods (NM)? • On paper we can solve (some) equations; however, as they become more complex this may cease to be the case • We may have a simulation that requires millions of iterations – again, pen & paper is not a possibility

  5. Why Numerical Methods • Why numerical methods (NM) ? • We are testing a theory and NM are used to implement simulations before any experimental work is carried out • To save money ! • Compared to a more complicated and less versatile analytical approach

  6. Why Numerical Methods • NM techniques allow us to • Solve ODEs & PDEs • Perform numerical integration • Solve systems of linear equations • Find roots of equations • Simulate random processes • Obtain approximate solutions to problems where an exact one cannot be found (or is too difficult to determine)

  7. Why Numerical Methods • However we must remember that • Numerical techniques can provide a solution to a problem, but the answer will not always be the exact one • The accuracy we obtain depends on • The technique used • How accurately we store the values • The limits we work with • Step sizes, no. of iterations etc.

  8. What will we cover? • So what will we cover ? • Simple NM Techniques (using C) e.g.: • Solutions of Differential Equations • Numerical Integration • Gaussian Elimination • Random Numbers • Root finding • Optimisation of code • A mathematical programming language: MATLAB • Fourier synthesis, FFTs, curve fitting & interpolation

  9. Lecture Content • Lectures & Laboratories (part 1) • Following each lecture you will receive a laboratory sheet • This will contain exercises for you to complete, (normally) based on the material covered in the lecture • In Wednesday’s lab slot (1000-1200) • First lab is on 26 Jan 2005 • All the programming in the first part of the course will be in C (using the Salford Compiler)

  10. Lecture Content (Cont.) • Lectures & Laboratories (part 2) • This part of the module deals with Numerical Programming Languages • Specifically Matlab • Webpage: http://hermes.eee.nottingham.ac.uk/teaching/h61num/

  11. Assessment Methods • The final mark is made up of • Progress test questions (20%) • Examination (80%) • This will be in the form of a supervised programming session • Two sections: part one will be based on the programming part of the module; the second will require you to demonstrate the use of a mathematical programming language (MATLAB)

  12. And so we begin…

  13. Numbers • On a computer, the basic building block for a number is the BYTE • 8 bits • Can range from 00000000 to 11111111 • Can be signed or unsigned

  14. Numbers – Byte sizes • Each variable is made up of a number of bytes, and this defines the range of numbers it can hold • We can obtain the size using the sizeof operator in C • E.g. sizeof(int) gives the size of an integer in bytes • The size of a given type can differ from one machine to another (especially short int) – see the sketch below
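
A minimal C sketch (not part of the original slides) illustrating sizeof; the exact sizes printed depend on the machine and compiler:

    #include <stdio.h>

    int main(void)
    {
        /* sizeof yields the storage size in bytes; values are
           implementation-dependent (typical 32-bit values in comments) */
        printf("char  : %lu byte(s)\n", (unsigned long)sizeof(char));   /* always 1 */
        printf("short : %lu byte(s)\n", (unsigned long)sizeof(short));  /* commonly 2 */
        printf("int   : %lu byte(s)\n", (unsigned long)sizeof(int));    /* commonly 4 */
        printf("long  : %lu byte(s)\n", (unsigned long)sizeof(long));   /* 4 or 8 */
        printf("double: %lu byte(s)\n", (unsigned long)sizeof(double)); /* commonly 8 */
        return 0;
    }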

  15. Numbers (cont.) • The no. of bytes used defines (in the case of integer types) the range of numbers that can be stored • 2 bytes (16 bits) gives the range • 0 to 2^16 – 1 (unsigned), or • –2^15 to 2^15 – 1 (signed)

  16. Numbers: Limits • The limits of a variable are machine specific as they depend on the no. of bytes used for storage • Possible exception of char – generally one byte • Each machine/compiler ships with a header file ‘limits.h’ which defines the permissible ranges
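
As a quick illustration (assuming a standard C compiler), the constants defined in limits.h can be printed directly:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* limits.h records the ranges this particular compiler uses */
        printf("bits per char : %d\n",  CHAR_BIT);
        printf("short int     : %d to %d\n",  SHRT_MIN, SHRT_MAX);
        printf("int           : %d to %d\n",  INT_MIN,  INT_MAX);
        printf("unsigned int  : 0 to %u\n",   UINT_MAX);
        printf("long          : %ld to %ld\n", LONG_MIN, LONG_MAX);
        return 0;
    }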

  17. Fixed Point Numbers • A method of representing fractions using binary numbers • In a fixed point representation, the binary point is understood to always be in the same position. The bits to the left represent the integer part and the bits to the right represent the fractional part • The integer place values go as 2^n (1, 2, 4, 8 etc.) • The fractional place values go as 2^-n (0.5, 0.25, 0.125 etc.) • The overall value is formed by summing the place values of the bits that are set

  18. Fixed Point Numbers Example: A fixed point system uses 8-bit numbers. 4 bits for the integer part and 4 bits for the fraction: What number is represented by 00101100?

  19. Fixed Point Numbers Example: A fixed point system uses 8-bit numbers, 4 bits for the integer part and 4 bits for the fraction. What number is represented by 00101100? Place values: 8 4 2 1 (integer part) and 0.5 0.25 0.125 0.0625 (fraction part); bits: 0 0 1 0 and 1 1 0 0.

  20. Fixed Point Numbers Example (cont.): Place values: 8 4 2 1 (integer part) and 0.5 0.25 0.125 0.0625 (fraction part); bits: 0 0 1 0 and 1 1 0 0. NOTE: the headings for the fraction part are halved successively to the right. The integer part 0010 gives 2 and the fraction part 1100 gives 0.5 + 0.25 = 0.75, so 00101100 represents the number 2.75.
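
A small C sketch (illustrative, not from the slides) that decodes a byte in this 4.4 fixed-point format; the function name fixed44_to_double is made up for the example:

    #include <stdio.h>

    /* Interpret a byte as unsigned 4.4 fixed point:
       upper 4 bits = integer part, lower 4 bits = sixteenths */
    double fixed44_to_double(unsigned char b)
    {
        return (b >> 4) + (b & 0x0F) / 16.0;
    }

    int main(void)
    {
        /* 00101100 = 0x2C: integer 0010 (2) + fraction 1100 (0.75) = 2.75 */
        printf("%f\n", fixed44_to_double(0x2C));
        return 0;
    }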

  21. Fixed Point Numbers Example: Converting to fixed point – there is an easy way! E.g. 56.78125. Stage 1: Integer part is easy – 56 = 00111000.

  22. Fixed Point Numbers (56.78125) Example – fraction part: keep multiplying by 2, taking the digit that appears before the point (shown in brackets) as the next bit and then discarding it, until you reach zero or run out of bits (try 0.4, which gives a recurring pattern):
      0.78125 × 2 = (1).56250
      0.56250 × 2 = (1).12500
      0.12500 × 2 = (0).25000
      0.25000 × 2 = (0).50000
      0.50000 × 2 = (1).00000
  Reading down we get 11001, so the final answer is 00111000 11001000. (See the C sketch below.)
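
The same repeated-doubling procedure can be sketched in C (illustrative only; the 8-bit limit matches the example above):

    #include <stdio.h>

    int main(void)
    {
        double frac = 0.78125;   /* fractional part of 56.78125 */
        int i;

        /* Double the fraction repeatedly; each time it reaches 1 the next
           bit is 1 and the integer part is discarded.  Prints 11001000. */
        printf("fraction bits: ");
        for (i = 0; i < 8; i++) {
            frac *= 2.0;
            if (frac >= 1.0) {
                putchar('1');
                frac -= 1.0;
            } else {
                putchar('0');
            }
        }
        putchar('\n');
        return 0;
    }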

  23. Fixed Point Numbers Your Turn: Convert the value 25.375 to a fixed point representation, ( 8 bit integer part, 8 bit fractional part )

  24. Fixed Point Numbers Your Turn: Convert the value 25.375 to a fixed point representation (8 bit integer part, 8 bit fractional part). Answer: 00011001 01100000 (integer part 25 = 00011001, fraction part 0.375 = 01100000).

  25. Fixed Point Numbers • Disadvantage: • Fixed window of representation so can’t handle v. big or v. small numbers • Common solution is Floating Point Numbers • “sliding window” so v. big and v. small possible

  26. Floating Point Numbers • Any number can be represented in any base b (b = 2 for binary) floating point form as • X = ±m × b^e • where m is the mantissa and e is the exponent • The mantissa is usually “normalised” so that it has no leading zero digits • In binary this means the mantissa becomes 1.F where F is the binary fractional part • The representation error in X increases in proportion to the magnitude of the number being represented and decreases with the no. of bits used to form the mantissa

  27. Floating Point Numbers • These are held using the IEEE 754 Floating Point Model • IEEE 754-1985 governs binary floating-point arithmetic. It specifies number formats, basic operations, conversions, and exceptional conditions. • The related standard IEEE 854-1987 generalizes 754 to cover decimal arithmetic as well as binary.

  28. Single precision 32 bit floating point numbers • S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF • 1 overall sign bit, 8 bits exponent, 23 bits fraction • General case 0<E<255: the number X = (–1)^S × 1.F × 2^(E–127) • Allows an effective 24 bit mantissa due to the implied leading 1 • E.g. 0 10000001 11100000000000000000000 is (–1)^0 × 1.11100000000000000000000 × 2^(129–127) = 1.111 × 2^2 = 111.1 (notice the 2 place shift in the radix point) = 7.5

  29. Single precision 32 bit floating point numbers • E=255 [11111111] and E=0 [00000000] are special cases: • E=255, F not zero: X = NaN (“not a number”) • E=255, F=0, S=1: X = –Infinity • E=255, F=0, S=0: X = +Infinity • E=0, F not zero: X = (–1)^S × 0.F × 2^–126 [“unnormalised values” – no leading 1] • E=0, F=0, S=1: X = –0 • E=0, F=0, S=0: X = 0
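
A hedged C sketch of this decoding logic, including the special cases above (8388608 is 2^23; the bit pattern 0x40F00000 is slide 28’s example value, which decodes to 7.5):

    #include <stdio.h>
    #include <math.h>

    /* Decode a 32-bit IEEE 754 single-precision bit pattern "by hand" */
    double decode_ieee754(unsigned long bits)
    {
        int sign           = (int)((bits >> 31) & 0x1);
        int E              = (int)((bits >> 23) & 0xFF);
        unsigned long frac = bits & 0x7FFFFFUL;          /* 23 fraction bits */
        double s = sign ? -1.0 : 1.0;

        if (E == 255)                      /* special cases */
            return frac ? (0.0 / 0.0)      /* NaN on IEEE machines */
                        : s * HUGE_VAL;    /* +/- infinity */
        if (E == 0)                        /* unnormalised: (-1)^S * 0.F * 2^-126 */
            return s * (frac / 8388608.0) * pow(2.0, -126.0);

        /* general case: (-1)^S * 1.F * 2^(E-127) */
        return s * (1.0 + frac / 8388608.0) * pow(2.0, E - 127);
    }

    int main(void)
    {
        printf("%f\n", decode_ieee754(0x40F00000UL));    /* prints 7.500000 */
        return 0;
    }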

  30. Floating Point Numbers • Example • Using the IEEE 754 model convert the value below to denary (base 10) • 0 10000101 10100000000000000000000

  31. Floating Point Numbers Step 1: Note that E is neither 255 nor 0 so we have the standard case. The sign S=0, therefore the overall number is positive [(–1)^0 = 1]. The mantissa is 1.10100000000000000000000

  32. Floating Point Numbers Step 2: The stored exponent E is 10000101 (which includes a bias of 127). The actual exponent value is 128 + 4 + 1 – 127 = 6

  33. Floating Point Numbers Step 3: The mantissa (in binary) is 1.1010000000…(etc.). The actual exponent is 6. We know the number overall is positive. Hence X = 1.1010000000 × 2^6 = (1 + 0.5 + 0.125) × 64 = 104, or, perhaps more elegantly, move the radix point 6 places to the right giving X = 1101000.0000, i.e. X = 64 + 32 + 8 = 104

  34. Floating Point Numbers Your turn to do it (in reverse!): Convert 25.375 to an IEEE 754 floating point number. Hint: Find a suitable actual exponent e first, then divide 25.375 by 2^e and use the result as the mantissa

  35. Floating Point Numbers Your turn to do it (in reverse!): Convert 25.375 to an IEEE 754 floating point number. Hint: Find a suitable actual exponent e first, then divide 25.375 by 2^e and use the result as the mantissa. Hint 2: Don’t forget to add the bias (127) to the actual exponent before ‘storing’ it!

  36. Floating Point Numbers Your turn: Convert 25.375 to an IEEE 754 floating point number. Answer: 0 10000011 10010110000000000000000 (25.375 = 11001.011 in binary = 1.1001011 × 2^4, so the stored exponent is 4 + 127 = 131 = 10000011)
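
One way to check this answer (a sketch that assumes float and unsigned int are both 32 bits and that float uses IEEE 754 storage, as on virtually all modern machines) is to copy the raw bits of a float into an integer and print them:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float x = 25.375f;
        unsigned int bits;   /* assumed to be the same size as float (4 bytes) */
        int i;

        /* copy the raw 4-byte representation into an integer of the same size */
        memcpy(&bits, &x, sizeof bits);

        /* print as sign | exponent | fraction:
           expected 0 10000011 10010110000000000000000 */
        for (i = 31; i >= 0; i--) {
            putchar(((bits >> i) & 1) ? '1' : '0');
            if (i == 31 || i == 23)
                putchar(' ');
        }
        putchar('\n');
        return 0;
    }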

  37. Floating Point Numbers • This representation however has limitations • Even with a 23 bit fraction (24 bit mantissa) some numbers can appear the same (as 2^–23 ≈ 1.19 × 10^–7 we can only rely on about 7 significant decimal digits of precision) • We can extend to double precision (52 bit fraction, 11 bit exponent) but we can still have inaccuracies • Adding very large to very small numbers can cause considerable problems

  38. Variable Selection • We aim to use the most suitable variable for the type of number we are storing • The choice of variable has consequences both for overall memory usage and speed of calculation • Integer mathematics is considerably faster than floating point calculations.

  39. Variable Selection & Precision • For ‘whole’ numbers integer types may be used, but care must be taken when mixing them with ‘real’ numbers • Also, division can pose a problem if integer types are used by mistake • E.g. 4/8 • Integer mathematics: 0 • Floating point: 0.5 • This is a very common mistake in H61NUM students’ programs! (See the sketch below.)
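
A two-line C illustration of the pitfall:

    #include <stdio.h>

    int main(void)
    {
        int a = 4, b = 8;

        printf("%d\n", a / b);            /* integer division: prints 0    */
        printf("%f\n", (double)a / b);    /* promote to double: prints 0.5 */
        return 0;
    }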

  40. Variable Selection & Precision • When using floating point numbers, large numbers with very small fractional parts • e.g. 1000000000000000000000000.000000000000000000000000000001 (NB decimal!) • cause problems because the number of significant digits required exceeds the available precision.

  41. Variable Selection & Precision • Similar problems can occur when adding very small numbers to very large ones (for the same reasons!) • We can address the problems by using higher precision, but this requires more storage and reduces speed.
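
A short C sketch of both effects: a small value added to a large float is lost, while double precision (with its longer mantissa) still captures it. The magnitudes used here are illustrative choices, not from the slides:

    #include <stdio.h>

    int main(void)
    {
        float  big_f = 100000000.0f;   /* ~7 significant digits available   */
        double big_d = 100000000.0;    /* ~15-16 significant digits         */

        /* 0.5 is below single precision's resolution at this magnitude */
        printf("float : %.6f\n", big_f + 0.5f);   /* 100000000.000000 */
        printf("double: %.6f\n", big_d + 0.5);    /* 100000000.500000 */
        return 0;
    }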

  42. Laboratory • First laboratory is on Wednesday 26 Jan, 1000-1200, in Tower 4.02 • Unlike most labs, which will normally be directly linked to the subject of the preceding lecture, this first lab is mainly an introduction to Numerical Methods.
