
MATH 685/CSI 700 Lecture Notes


Presentation Transcript


  1. MATH 685/CSI 700 Lecture Notes Lecture 1. Intro to Scientific Computing

  2. Useful info • Course website: http://math.gmu.edu/~memelian/teaching/Spring10 • MATLAB instructions: http://math.gmu.edu/introtomatlab.htm • Mathworks, the creator of MATLAB: http://www.mathworks.com • Octave, a free MATLAB clone, available for download at http://octave.sourceforge.net/

  3. Scientific computing • Design and analysis of algorithms for numerically solving mathematical problems in science and engineering • Deals with continuous quantities, in contrast to the discrete quantities of, say, computer science • Considers the effect of approximations and performs error analysis • Is ubiquitous in modern simulations and algorithms modeling natural phenomena and in engineering applications • Closely related to numerical analysis

  4. Mathematical modeling Computational problems: attack strategy • Develop a mathematical model (usually requires a combination of math skills and some a priori knowledge of the system) • Come up with a numerical algorithm (numerical analysis skills) • Implement the algorithm (software skills) • Run, debug, test the software • Visualize the results • Interpret and validate the results

  5. Computational problems: well-posedness • The problem is well-posed if (a) a solution exists, (b) it is unique, and (c) it depends continuously on the problem data • A problem can be well-posed but still sensitive to perturbations. The algorithm should attempt to simplify the problem, but not make the sensitivity worse than it already is • Simplification strategies: infinite → finite, nonlinear → linear, high-order → low-order • Only an approximate solution can be obtained this way!

  6. Sources of numerical errors • Before computation (cannot be controlled): modeling approximations, empirical measurements, human errors, previous computations • During computation (can be controlled through error analysis): truncation or discretization, rounding errors • Accuracy depends on both, but we can only control the second part • Uncertainty in input may be amplified by the problem; perturbations during computation may be amplified by the algorithm • abs_error = approx_value − true_value, rel_error = abs_error / true_value, so approx_value = true_value × (1 + rel_error)
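These error measures are easy to check numerically. A small Python sketch (illustrative; the course tools are MATLAB/Octave), using 22/7 as an approximation of π:

```python
import math

true_value = math.pi
approx_value = 22 / 7          # classic rational approximation of pi

abs_error = approx_value - true_value
rel_error = abs_error / true_value

# The identity approx_value = true_value * (1 + rel_error) holds by construction.
print(abs_error)   # about 1.26e-3
print(rel_error)   # about 4.02e-4
```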

  7. Sources of numerical errors • Propagated vs. computational error • x = exact input, y = approximate input • F = exact function, G = its approximation • Total error G(y) − F(x) = [G(y) − F(y)] + [F(y) − F(x)] = computational error (affected by the algorithm) + propagated data error (not affected by the algorithm) • Rounding vs. truncation error • Rounding error: introduced by finite-precision calculations in the computer arithmetic • Truncation error: introduced by the algorithm via problem simplification, e.g. series truncation, iterative process truncation, etc. • Computational error = truncation error + rounding error
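The decomposition can be seen concretely. In this sketch (Python for illustration; the choice of exp and a truncated Taylor series as F and G is just an example, not from the slides), the computational error comes from truncating the series, and the propagated error from perturbing the input:

```python
import math

F = math.exp                                   # exact function
def G(t):                                      # approximation: truncated Taylor series
    return 1 + t + t*t/2 + t**3/6

x = 0.5          # exact input
y = 0.5001       # input contaminated by data error

total         = G(y) - F(x)
computational = G(y) - F(y)   # due to the algorithm (series truncation here)
propagated    = F(y) - F(x)   # due to the data error; independent of the algorithm

assert math.isclose(total, computational + propagated)
```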

  8. Backward vs. forward errors

  9. Backward error analysis • How much must original problem change to give result actually obtained? • How much data error in input would explain all error in computed result? • Approximate solution is good if it is the exact solution to a nearby problem • Backward error is often easier to estimate than forward error
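A tiny example of the idea (Python for illustration; sqrt(2) and the value 1.4142 are my choices, not from the slides): an approximation y of √2 is the *exact* square root of the nearby input y², so y² − 2 is its backward error.

```python
import math

x = 2.0
y = 1.4142          # approximate sqrt(2), e.g. read off a table

forward_error  = y - math.sqrt(x)     # error in the result
backward_error = y*y - x              # y is the exact sqrt of the nearby input y*y

x_hat = y * y                         # the "nearby problem" y solves exactly
assert math.isclose(math.sqrt(x_hat), y)
```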

  10. Backward vs. forward errors

  11. Conditioning • Well-conditioned (insensitive) problem: small relative change in input gives commensurate relative change in the solution • Ill-conditioned (sensitive): relative change in output is much larger than that in the input data • Condition number = measure of sensitivity • Condition number = |rel. forward error| / |rel. backward error| = amplification factor
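The amplification-factor view can be checked numerically. A sketch (Python for illustration; f(x) = x − 1 near x = 1 is my example of an ill-conditioned problem, with condition number |x/(x − 1)| ≈ 10⁶ here):

```python
x  = 1.000001
dx = x * 1e-10                   # small relative perturbation of the input

def f(t):
    return t - 1.0               # ill-conditioned near t = 1

rel_in  = dx / x                         # relative change in input
rel_out = (f(x + dx) - f(x)) / f(x)      # relative change in output

amplification = abs(rel_out / rel_in)    # ~ condition number |x / (x - 1)| ~ 1e6
print(amplification)
```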

  12. Conditioning

  13. Stability • Algorithm is stable if result produced is relatively insensitive to perturbations during computation • Stability of algorithms is analogous to conditioning of problems • From point of view of backward error analysis, algorithm is stable if result produced is exact solution to nearby problem • For stable algorithm, effect of computational error is no worse than effect of small data error in input
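A classic demonstration (not from the slides; Python for illustration) uses the integrals I_n = ∫₀¹ xⁿ/(x+5) dx, which satisfy I_n = 1/n − 5·I_{n−1}. Running this recurrence forward multiplies every error by 5 per step (unstable); running it backward divides the error by 5 (stable), so even a crude starting guess gives an accurate answer:

```python
import math

def forward(n):
    I = math.log(6 / 5)              # I_0 = ln(6/5), rounded to machine precision
    for k in range(1, n + 1):        # error is amplified 5x at every step
        I = 1 / k - 5 * I
    return I

def backward(n, start=60):
    I = 0.0                          # crude guess for I_start; its error shrinks 5x per step
    for k in range(start, n, -1):
        I = (1 / k - I) / 5
    return I

# Since 1/6 <= 1/(x+5) <= 1/5 on [0,1], the true I_30 lies between
# 1/(6*31) ~ 0.0054 and 1/(5*31) ~ 0.0065; forward(30) is garbage.
print(forward(30), backward(30))
```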

  14. Accuracy • Accuracy : closeness of computed solution to true solution of problem • Stability alone does not guarantee accurate results • Accuracy depends on conditioning of problem as well as stability of algorithm • Inaccuracy can result from applying stable algorithm to ill-conditioned problem or unstable algorithm to well-conditioned problem • Applying stable algorithm to well-conditioned problem yields accurate solution

  15. Floating point representation

  16. Floating point systems
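The sign/exponent/fraction layout of an IEEE double can be inspected directly. A sketch (Python for illustration; −6.25 is an arbitrary example value):

```python
import struct

x = -6.25
bits = struct.unpack('<Q', struct.pack('<d', x))[0]   # raw 64 bits of the double

sign     = bits >> 63                 # 1 sign bit
exponent = (bits >> 52) & 0x7FF       # 11 exponent bits, biased by 1023
fraction = bits & ((1 << 52) - 1)     # 52 fraction bits of the significand

# Reassemble: (-1)^s * (1 + f/2^52) * 2^(e - 1023)
value = (-1.0)**sign * (1 + fraction / 2**52) * 2.0**(exponent - 1023)
assert value == x
```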

  17. Normalized representation Not all numbers can be represented this way; those that can are called machine numbers
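For instance (Python sketch, illustrative): 0.5 = 1/2 is a machine number because it is a finite binary fraction, while 0.1 = 1/10 is not, so the stored double is only the nearest machine number.

```python
from fractions import Fraction

# 0.5 is a machine number: the stored double equals 1/2 exactly.
assert Fraction(0.5) == Fraction(1, 2)

# 0.1 is not: the stored double is the nearest machine number, not 1/10.
assert Fraction(0.1) != Fraction(1, 10)
print(Fraction(0.1))   # 3602879701896397/36028797018963968
```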

  18. Rounding rules • If a real number x is not exactly representable, then it is approximated by a “nearby” floating-point number fl(x) • This process is called rounding, and the error introduced is called rounding error • Two commonly used rounding rules • chop: truncate the base-β expansion of x after the (p − 1)st digit; also called round toward zero • round to nearest: fl(x) is the nearest floating-point number to x, using the floating-point number whose last stored digit is even in case of a tie; also called round to even • Round to nearest is the most accurate, and is the default rounding rule in IEEE systems
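Both rules are easy to try out in decimal arithmetic (Python sketch, illustrative; the values 2.675 and 2.665 are arbitrary tie cases):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

x = Decimal('2.675')

# chop / round toward zero: drop the digits after the kept position
assert x.quantize(Decimal('0.01'), rounding=ROUND_DOWN) == Decimal('2.67')

# round to nearest, ties to even (the IEEE default): 2.675 is a tie, 8 is even
assert x.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN) == Decimal('2.68')
# ...and the tie 2.665 goes the other way, toward the even digit 6
assert Decimal('2.665').quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN) == Decimal('2.66')
```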

  19. Floating point arithmetic
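The key point of floating-point arithmetic is that each operation rounds its exact result to the nearest machine number, so familiar decimal identities need not survive. The canonical example (Python sketch, illustrative):

```python
import math

# Each operation rounds, so this decimal identity fails in binary:
print(0.1 + 0.2)            # 0.30000000000000004
assert 0.1 + 0.2 != 0.3

# Compare floats with a tolerance, not with ==
assert math.isclose(0.1 + 0.2, 0.3)
```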

  20. Machine precision
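Machine epsilon, the gap between 1.0 and the next larger machine number, can be found by repeated halving (Python sketch, illustrative):

```python
import sys

eps = 1.0
while 1.0 + eps / 2 > 1.0:   # stop once eps/2 is absorbed by round-to-even
    eps /= 2

# For IEEE double precision: eps = 2^-52, about 2.22e-16
assert eps == sys.float_info.epsilon
```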

  21. Floating point operations

  22. Summing series in floating-point arithmetic
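One lesson when summing a series in floating point is that order matters: adding a tiny term to a large running sum loses the tiny term entirely, so small terms should be accumulated first. A sketch (Python for illustration; the 1e-16 terms are an assumed example, and math.fsum stands in for compensated summation):

```python
import math

small = [1e-16] * 10_000

big_first   = sum([1.0] + small)   # each 1e-16 is absorbed into 1.0 and lost
small_first = sum(small + [1.0])   # small terms accumulate to ~1e-12 first

print(big_first)     # 1.0
print(small_first)   # slightly more than 1.0

# math.fsum tracks the lost low-order bits (compensated summation),
# so its result does not depend on the ordering.
assert math.fsum([1.0] + small) == math.fsum(small + [1.0])
```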

  23. Loss of significance

  24. Loss of significance

  25. Loss of significance

  26. Loss of significance
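The standard illustration of loss of significance (not from the slides; Python sketch) is the quadratic formula: subtracting two nearly equal numbers cancels the leading digits, while an algebraically equivalent rearrangement avoids the subtraction entirely.

```python
import math

# Small root of x^2 - b*x + 1 = 0 with b = 1e8: roots near 1e8 and 1e-8.
b = 1e8
d = math.sqrt(b*b - 4)

naive  = (b - d) / 2       # subtracts two nearly equal numbers: cancellation
stable = 2 / (b + d)       # algebraically equal, but no cancellation

# naive retains only a couple of significant digits; stable is fully accurate
print(naive, stable)
```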
