Computational Physics 1

Presentation Transcript


  1. Computational Physics 1

  2. Limits of computers • Computers are powerful but finite. • There is a limited amount of space. • This affects how numbers can be represented and any calculations we carry out. • We first look at how numbers are stored and represented on computers.

  3. Number Representation

  4–6. Representation of Numbers • Computers represent all numbers, other than integers and some fractions, with imprecision. • Numbers are stored in some approximation which can be represented by a fixed number of bits or bytes. • There are different types of “representations” or “data types”.

  7. Representation of Numbers • Data types vary in the number of bits used (word length) and in representation - fixed point (int or long) or floating point (float or double) format.
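
As a concrete check, a short sketch (assuming NumPy is available) of the word lengths and ranges of common fixed point and floating point types:

```python
import numpy as np

# Word length (in bits) and range of common fixed point and floating point types.
for dtype in (np.int32, np.int64, np.float32, np.float64):
    bits = np.dtype(dtype).itemsize * 8
    info = np.iinfo(dtype) if np.issubdtype(dtype, np.integer) else np.finfo(dtype)
    print(f"{np.dtype(dtype).name}: {bits} bits, range [{info.min}, {info.max}]")
```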

  8. Fixed point Integer Representation

  9–11. Fixed point Integer Representation • A common method of integer representation is sign and magnitude representation. • One bit is used for the sign and the remaining bits for the magnitude. • Clearly there is a restriction on the numbers which can be represented.

  12. Fixed point Integer Representation • [Figure: 8-bit layouts showing the sign bit for a +ve number and for a -ve number.] • With 7 bits reserved for the magnitude, the largest and smallest numbers represented are +127 and –127.
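
A minimal sketch of 8-bit sign-and-magnitude encoding (function names are illustrative, not from the slides):

```python
def encode_sign_magnitude(n, bits=8):
    """Encode an integer as a sign-and-magnitude bit string."""
    max_mag = 2 ** (bits - 1) - 1              # 7 magnitude bits -> 127
    if abs(n) > max_mag:
        raise OverflowError(f"|{n}| exceeds {max_mag}")
    sign = '1' if n < 0 else '0'               # one bit for the sign
    return sign + format(abs(n), f'0{bits - 1}b')

def decode_sign_magnitude(s):
    """Decode a sign-and-magnitude bit string back to an integer."""
    return (-1 if s[0] == '1' else 1) * int(s[1:], 2)

print(encode_sign_magnitude(127))          # 01111111  (largest)
print(encode_sign_magnitude(-127))         # 11111111  (smallest)
print(decode_sign_magnitude('11111111'))   # -127
```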

  13. Fixed point Integer Representation • Things to note: • Fixed point numbers are represented exactly. • Arithmetic between fixed point numbers is also exact, provided the answer is within range. • Division is also exact if interpreted as producing an integer and discarding any remainder.
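
For example, in Python (whose integer arithmetic behaves as described, except that Python ints never overflow):

```python
# Fixed point arithmetic is exact (Python ints never overflow; fixed-width
# ints in C or Fortran are exact too, provided the result stays within range).
print(12345 + 67890, 12345 * 67890)   # 80235 838102050, both exact
print(7 // 2)   # 3: division interpreted as producing an integer
print(7 % 2)    # 1: the discarded remainder, available separately
```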

  14. Floating point Representation

  15. Floating point Representation • In floating point representation, numbers are represented by a sign bit s, an integer exponent e, and a positive mantissa M: • $x = (-1)^s \times M \times B^{e-E}$ • e – the exact exponent as stored. • E – the bias: a fixed integer, machine dependent. • B – the base, usually 2 or 16.
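
A minimal numerical sketch of this formula (the values are illustrative; the mantissa is read as a binary fraction, as in the deck's later examples):

```python
def float_value(s, M, B, e, E):
    """x = (-1)**s * M * B**(e - E): sign bit s, mantissa M, base B,
    stored exponent e, bias E."""
    return (-1) ** s * M * B ** (e - E)

# Illustrative: the slides' 7, with mantissa 0.111 (binary) = 0.875,
# base B = 2, stored exponent e = 130 and bias E = 127.
print(float_value(0, 0.875, 2, 130, 127))   # 7.0
```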

  16–17. Floating point Representation • Most computers use the IEEE representation, where the floating point number is normalized. • The two most common IEEE representations are: • IEEE Short Real (Single Precision): 32 bits – 1 for the sign, 8 for the exponent and 23 for the mantissa. • IEEE Long Real (Double Precision): 64 bits – 1 sign bit, 11 for the exponent and 52 for the mantissa.
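
A short Python sketch that unpacks these three fields from an actual float (note: real IEEE storage omits the mantissa's leading 1, the phantom bit of slide 20, so the fields differ slightly from the deck's explicit-mantissa examples):

```python
import struct

def single_precision_fields(x):
    """Split x into its IEEE single precision (sign, exponent, mantissa) bit strings."""
    bits = format(struct.unpack('>I', struct.pack('>f', x))[0], '032b')
    return bits[0], bits[1:9], bits[9:]   # 1 sign + 8 exponent + 23 mantissa bits

print(single_precision_fields(7.0))
# ('0', '10000001', '11000000000000000000000'): 1.75 x 2**2, leading 1 not stored
```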

  18. Floating point Representation • Example of a single precision number: 1 11111111 11111111111111111111111 • Bit 31: sign. Bits 30–23: exponent. Bits 22–0: mantissa.

  19. Floating point Representation • For example, 0.5 is represented as: 0 01111111 10000000000000000000000 • where the bias is 0111 1111₂ = 127₁₀.

  20. Floating point Representation • By convention, a floating point number is normalized, just as 31278 is normalized to 3.1278 × 10⁴. • As a result the left-most bit in the mantissa is 1 and does not have to be stored. The computer only needs to remember that there is a phantom bit.
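
The phantom bit can be made visible by decoding the stored fields by hand; a sketch for normalized single precision numbers:

```python
import struct

def decode_single(x):
    """Rebuild a normalized single precision number, restoring the phantom bit."""
    bits = format(struct.unpack('>I', struct.pack('>f', x))[0], '032b')
    sign = -1 if bits[0] == '1' else 1
    exponent = int(bits[1:9], 2) - 127          # remove the bias
    mantissa = 1 + int(bits[9:], 2) / 2 ** 23   # restore the hidden leading 1
    return sign * mantissa * 2.0 ** exponent

print(decode_single(0.5))   # 0.5
print(decode_single(7.0))   # 7.0
```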

  21. Precision • A consequence of the storage scheme is that there is a limit to precision. • Consider the following example to see how machine precision affects calculations.

  22–23. Precision • Consider the addition of two 32-bit words (single precision): 7 + 1.0 × 10⁻⁷. • These numbers are stored as: • 7 = 0 10000010 11100000000000000000000 • 10⁻⁷ = 0 01101000 11010110101111111001010

  24–25. Precision • Because the exponents of the numbers are different, the exponent of the smaller number is made larger while its mantissa is made smaller (shifting bits to the right while inserting zeros) until the exponents are the same. • Until: 10⁻⁷ = 0 10000010 00000000000000000000000 (0001101…) • NB: The bits in the brackets are lost.

  26. Precision • As a result, when the calculation is carried out the computer will get: 7 + 1.0 × 10⁻⁷ = 7 • Effectively the 10⁻⁷ term has been ignored. • The machine precision can be defined as the maximum positive value that can be added to 1 so that the stored value is not changed.
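
Both effects are easy to demonstrate; a sketch using NumPy's float32 for single precision (NumPy assumed available):

```python
import numpy as np

# 7 + 1.0e-7 in single precision: the small term is lost, as described above.
print(np.float32(7) + np.float32(1e-7) == np.float32(7))   # True

# Estimate the machine precision: halve eps until adding it to 1 changes nothing.
eps = np.float32(1.0)
while np.float32(1.0) + eps / np.float32(2) != np.float32(1.0):
    eps /= np.float32(2)
print(eps)   # ~1.19e-07 for single precision (2**-23)
```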

  27. Errors and Uncertainties • Living with errors

  28. The problem • Errors and uncertainty are an unavoidable part of computation. • Some are human errors, while others are computer errors (e.g. due to precision). • We therefore look at the types of errors and ways of reducing them.

  29. Errors and Uncertainties • There are five types of errors in computation: • Mistakes • Random error • Truncation error • Roundoff error • Propagated error

  30. Errors and Uncertainties • Mistakes: these are typographical errors entered with the program, or perhaps running the program with the wrong data, etc.

  31. Errors and Uncertainties • Random errors: these are caused by random fluctuations in electronics, due for example to power surges. They are rare, but there is no control over them.

  32–33. Errors and Uncertainties • Truncation or approximation errors: these occur from simplifications of the mathematics so that the problem may be solved, for example replacing an infinite series by a finite series. • E.g.: $\sin x = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}\, x^{2n-1}}{(2n-1)!} = \sum_{n=1}^{N} \frac{(-1)^{n-1}\, x^{2n-1}}{(2n-1)!} + \varepsilon_t(x, N)$

  34–35. Errors and Uncertainties • Where $\varepsilon_t(x, N)$ is the total absolute error. • The truncation error vanishes as N is taken to infinity. • For N much larger than x, the error is small. • If x and N are close then the truncation error will be large.
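
A short sketch of this behaviour, summing the sine series to N terms and comparing with math.sin:

```python
import math

def sin_series(x, N):
    """Partial sum of sin x = sum_{n=1..N} (-1)**(n-1) x**(2n-1) / (2n-1)!."""
    return sum((-1) ** (n - 1) * x ** (2 * n - 1) / math.factorial(2 * n - 1)
               for n in range(1, N + 1))

for x, N in [(1.0, 3), (1.0, 10), (10.0, 10), (10.0, 40)]:
    err = abs(sin_series(x, N) - math.sin(x))   # truncation error
    print(f"x = {x}, N = {N}: error = {err:.2e}")
# The error is tiny for N >> x but large when N is comparable to x.
```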

  36. Errors and Uncertainties • Roundoff error: since most numbers are represented with imprecision by computers (a general restriction), digits beyond the stored precision are lost. • The error resulting from this roundoff or truncation of digits is known as the roundoff error. • E.g. 1/3 = 0.333333… must be stored as a finite string of digits, say 0.3333333, and the discarded digits are a roundoff error.
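
A minimal demonstration in Python (double precision):

```python
# 0.1 has no exact binary representation; the stored value carries roundoff error.
print(0.1 + 0.2 == 0.3)      # False
print(f"{0.1 + 0.2:.17f}")   # 0.30000000000000004

# 1/3 keeps only ~16 significant decimal digits; the rest are lost.
print(f"{1 / 3:.20f}")       # 0.33333333333333331483
```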

  37–40. Errors and Uncertainties • Propagated error: this is defined as an error in later steps of a program due to an earlier error. • This error is in addition to the local error (e.g. to a roundoff error). • Propagated error is critical, as errors may be magnified, causing results to be invalid. • The stability of the program determines how errors are propagated.
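
A small illustration of an early roundoff error propagating through every later step:

```python
# Each addition commits a tiny roundoff error that propagates into all
# later steps; after ten million additions the accumulated error is visible.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)                     # about 999999.99984, not 1000000.0
print(abs(total - 1_000_000))    # the propagated, accumulated error
```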

  41. Stability of algorithms • For stable methods, early errors die out as the program runs. • However, unstable programs continuously magnify any error.
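
A classic illustration of stability (the golden mean recurrence, an example not from the slides): powers of φ = (√5−1)/2 satisfy φⁿ⁺¹ = φⁿ⁻¹ − φⁿ exactly, yet the recurrence is unstable, while direct multiplication is stable:

```python
phi = (5 ** 0.5 - 1) / 2   # golden mean, ~0.6180339887; phi**2 = 1 - phi exactly

# Unstable: the recurrence phi**(n+1) = phi**(n-1) - phi**n is exact in algebra,
# but it magnifies the initial roundoff by ~(1/phi)**n.
a, b = 1.0, phi            # phi**0 and phi**1
for _ in range(59):
    a, b = b, a - b        # b now holds the next power of phi

print(b)           # wrong magnitude (even sign): propagated error dominates
print(phi ** 60)   # stable direct evaluation: ~2.9e-13
```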

  42. Errors and Uncertainties • In a program, all or some of these types of errors may occur and will interact to some degree.

  43. Calculating the Error • A simple way of looking at the error is as the difference between the true value and the approximate value. • I.e.: Error (e) = True value – Approximate value

  44. Example: • Find the truncation error of a series expansion at a given x if the first 3 terms in the expansion are retained. • Sol: Error = True value – Approx value
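
A sketch of the method, assuming for illustration the series for e^x at x = 1 (the slide's own function and point are not shown above):

```python
import math

# Hypothetical stand-in for the slide's example: e**x at x = 1.
x = 1.0
approx = sum(x ** n / math.factorial(n) for n in range(3))   # 1 + x + x**2/2 = 2.5
true = math.exp(x)                                           # 2.71828...
print(true - approx)   # truncation error ~0.218
```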

  45. Calculating the Error • Three other ways of defining the error are: • Absolute error • Relative error • Percentage error

  46–47. Calculating the Error • Absolute error: ea = |True value – Approximate value| • Relative error is defined as: er = ea / |True value|

  48. Calculating the Error • Percentage error is defined as: ep = er × 100%
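
A small helper sketch implementing the three definitions (function names are illustrative):

```python
def absolute_error(true, approx):
    """ea = |True value - Approximate value|"""
    return abs(true - approx)

def relative_error(true, approx):
    """er = ea / |True value|"""
    return absolute_error(true, approx) / abs(true)

def percentage_error(true, approx):
    """ep = er * 100%"""
    return 100 * relative_error(true, approx)

print(percentage_error(10.0, 9.9))   # ~1.0 (percent)
```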

  49–50. Examples • Suppose 1.414 is used as an approximation to √2. Find the absolute, relative and percentage errors.
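
A worked check of this example in Python:

```python
import math

true, approx = math.sqrt(2), 1.414   # sqrt(2) = 1.41421356...
ea = abs(true - approx)              # absolute error
er = ea / abs(true)                  # relative error
ep = 100 * er                        # percentage error
print(f"ea = {ea:.3e}, er = {er:.3e}, ep = {ep:.4f}%")
# ea = 2.136e-04, er = 1.510e-04, ep = 0.0151%
```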
