
MATH 175: NUMERICAL ANALYSIS II


Presentation Transcript


  1. MATH 175: NUMERICAL ANALYSIS II Lecturer: Jomar Fajardo Rabajante IMSP, UPLB 2nd Semester AY 2012-2013

  2. CHAPTER 1: NUMERICAL METHODS IN SOLVING NONLINEAR EQUATIONS This chapter involves numerically solving for the roots of an equation or the zeros of a function. e.g. Solve for x in cos x + 5 sin x + e^(x^2) = 4. This problem boils down to solving for the root of cos x + 5 sin x + e^(x^2) − 4 = 0, or equivalently, solving for the zeros of f(x) = cos x + 5 sin x + e^(x^2) − 4.

  3. Go back to HS Math or Math 17 • The whole of Elementary Algebra (including college algebra) generally focuses on the ways and means of easily solving equations (or systems of equations)… Why do we factor? Why do we simplify? Do these methods have real-life applications? Probably none? Ahhh… Solving equations does have real applications…

  4. Recall MATH MODELING IN HIGH SCHOOL AND EARLY MATH SERIES… Example: Age problem!!! We need a working EQUATION to solve this problem… and how can we solve that equation?

  5. Recall Let us say we have a quadratic equation ax^2 + bx + c = 0: - we can use factoring - we can use completing the square - we can use the quadratic formula. But what if we have a cubic equation ax^3 + bx^2 + cx + d = 0? - we can use Cardano’s formula

  6. Deadlock… What if we have a polynomial of degree 8? Note that, from a theorem in Algebra (by Abel), we cannot have a general closed-form formula (in radicals) for the roots of polynomials of degree higher than 4. Or what if the equation involves transcendental functions? e.g. cos x + 5 sin x + e^(x^2) = 4

  7. A Naïve way… GRAPH IT!!! Then get the zeros of f !!!

  8. You can use MS Excel, GraphCalc, Grapes, etc…

  9. A Naïve way… Ah, eh… but what is the exact value? Or if we cannot get the exact value, what is an approximate solution close enough to it (say with error at most 10⁻⁴)?

  10. First Method: Exhaustive search or Incremental search Just use MS Excel… Let’s say we want to get this zero

  11. Fill down two columns in Excel: =A122+0.0001 (column A: advance x by the step size 0.0001) and =COS(A123)+5*SIN(A123)+EXP(A123^2)-4 (adjacent column: evaluate f(x) at that x).

  12. We want f(x)=0, so if f(0.3919) = -0.000082 ≈ 0, then our APPROXIMATE solution is 0.3919. Smaller step sizes increase our accuracy. However, we cannot measure the order of convergence of this method.
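A minimal Python sketch of the same exhaustive/incremental search, using the running example f(x) = cos x + 5 sin x + e^(x^2) − 4; the search interval, step size, and function names here are illustrative assumptions, not part of the original slides.

import math

def f(x):
    # Running example from the slides: cos x + 5 sin x + e^(x^2) - 4
    return math.cos(x) + 5 * math.sin(x) + math.exp(x**2) - 4

def incremental_search(f, a, b, step=1e-4):
    # Scan [a, b] with a fixed step and return an x where f changes sign.
    x, fx = a, f(a)
    while x < b:
        x_next = x + step
        fx_next = f(x_next)
        if fx * fx_next <= 0:  # sign change: a zero lies in [x, x_next]
            # return whichever endpoint has the smaller |f| value
            return x if abs(fx) < abs(fx_next) else x_next
        x, fx = x_next, fx_next
    return None  # no sign change detected on [a, b]

print(incremental_search(f, 0.0, 1.0))  # ≈ 0.3919, matching the Excel result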

  13. First Method: Exhaustive search or Incremental search Exhaustive search may miss a blip: if f crosses zero and comes back (or merely touches zero) entirely within one step, the sampled values show no sign change and that root is missed.

  14. Second Method: Bisection (a bracketing method) Recall the IVT (Intermediate Value Theorem, specifically the Intermediate Zero Theorem) and the Math 174 discussion of the Bisection Method.

  15. Second Method: Bisection • Given f(x), choose the initial interval [x1,1, x2,1] such that f(x1,1)*f(x2,1) < 0. Input tol, the tolerance error level. Input kmax, the maximum number of iterations. Define a counter, say k, to keep track of the number of bisections performed. • For k < kmax+2 (starting at k=2), compute x3,k := (x1,k-1 + x2,k-1)/2, with x3,1 := x1,1. If |x3,k − x3,k-1| < tol, then print the results and exit the program (the root is found in k−1 iterations). Otherwise, if f(x1,k-1)*f(x3,k) < 0, then set x2,k := x3,k and x1,k := x1,k-1; else set x1,k := x3,k and x2,k := x2,k-1. • If convergence is not obtained in kmax+1 iterations, inform the user that the tolerance criterion was not satisfied, and exit the program.
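A compact Python sketch of the bisection loop just described; the variable names a, b, mid and the wrapping into a function are assumptions for illustration, but the bracketing test, the |x3,k − x3,k-1| < tol stopping criterion, and the kmax safeguard follow the slide.

def bisection(f, a, b, tol=1e-4, kmax=100):
    # Bisection method: requires a sign change, f(a)*f(b) < 0, on [a, b].
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    mid_prev = a  # plays the role of x3,1 := x1,1
    for k in range(1, kmax + 1):
        mid = (a + b) / 2  # x3,k := midpoint of the current bracket
        if abs(mid - mid_prev) < tol:  # stopping criterion |x3,k - x3,k-1| < tol
            return mid, k  # root estimate and iterations used
        if f(a) * f(mid) < 0:  # sign change in the left half
            b = mid
        else:  # sign change in the right half
            a = mid
        mid_prev = mid
    raise RuntimeError("tolerance criterion not satisfied within kmax iterations")

For the running example f above, bisection(f, 0.0, 1.0, tol=1e-4) returns a root estimate near 0.392 in about 14 iterations.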

  16. DIFFERENT STOPPING CRITERION Recall Math 174. Terminate when the difference between successive iterates is small enough ("delikado", i.e., risky: a small change between iterates does not always mean the true error is equally small). If tol = 10⁻ⁿ, then x3,k should approximate the true value to n decimal places (using the absolute difference |x3,k − x3,k-1| < tol) or to n significant digits (using the relative difference).

  17. DIFFERENT STOPPING CRITERION Recall Math 174. Terminate when iterations have gone on “long enough”, i.e., when the iteration counter reaches the maximum kmax.

  18. Second Method: Bisection Notice that this algorithm locates only one root of the equation at a time. When an equation has multiple roots, it is the choice of the initial interval provided by the user that determines which root is located. The contrary condition, f(a)*f(b) > 0, does NOT imply that there are no real roots in the interval [a,b]. Convergence of the bisection method is guaranteed provided that the hypothesis of the IVT (IZT) is met in every iteration.
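A tiny illustration of that caveat, using a hypothetical function chosen just for this example: an even number of roots inside the interval still gives f(a)*f(b) > 0, so the bracketing test sees no sign change.

def g(x):
    # two real roots, 0.3 and 0.7, both inside [0, 1]
    return (x - 0.3) * (x - 0.7)

print(g(0.0) * g(1.0))  # 0.0441 > 0: no sign change at the endpoints despite the roots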

  19. Second Method: Bisection  The bisection algorithm will fail in this case: the function only touches the x-axis without crossing it (a root of even multiplicity), so no interval with f(a)*f(b) < 0 exists around the root.

  20. Second Method: Bisection  The bisection algorithm may also fail in these cases: - f is not continuous on the initial interval - multiple roots are bracketed (it may fail to converge to the desired root). It is better to bracket only one root.

  21. Second Method: Bisection  How many iterations do we need to have an answer accurate to at least m decimal places? Let n be the number of iterations, n > 1 (the derivation follows in the next two slides). Note that the computed n should be rounded up to the nearest integer.

  22. Second Method: Bisection DERIVATION Length of the 1st interval: b − a. Length of the 2nd interval: (b − a)/2. Length of the n-th interval: (b − a)/2^(n−1). At the n-th iteration we take the midpoint, so the length of the bracketing interval becomes (b − a)/2^n.

  23. Second Method: Bisection DERIVATION Let x* be the exact root. Since x* lies in the final bracketing interval, |x* − x3,n| ≤ (b − a)/2^n, and we require this bound to be at most tol = 10⁻ᵐ. HENCE, THE (worst-case) NUMBER OF ITERATIONS NEEDED IS: n ≥ log2((b − a)/tol) = log2((b − a)·10^m).
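A quick numerical check of this bound, using assumed values for illustration (the bracket [0, 1] of the running example and m = 4 decimal places):

import math

a, b = 0.0, 1.0  # initial bracketing interval (illustrative)
tol = 1e-4       # target accuracy, i.e., m = 4 decimal places
n = math.ceil(math.log2((b - a) / tol))  # worst-case number of iterations
print(n)  # 14, matching the roughly 14 iterations bisection took above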

  24. ORDER OF CONVERGENCE (Q-convergence) Given an asymptotic error constant λ > 0, the sequence {x3,k} converges to x* with order p > 0 iff lim (k→∞) |x3,k+1 − x*| / |x3,k − x*|^p = λ. Order of convergence: p. If p = 1, rate of convergence: O(λ^k).

  25. ORDER OF CONVERGENCE When p=1 If 0 < λ < 1, then the sequence {x3,k} converges to x* linearly. If λ = 1, then the sequence {x3,k} converges to x* sublinearly. If λ = 0, then the sequence {x3,k} converges to x* superlinearly, meaning it is possibly also of quadratic order, or possibly of higher order (but definitely faster than merely linear, e.g. p=1.6, p=2, p=3, etc…). As λ becomes nearer to 0, the convergence gets faster.

  26. ORDER OF CONVERGENCE When p=r>1 If 0 < λ < ∞, then the sequence {x3,k} converges to x* with order r. If λ = ∞, then the sequence {x3,k} converges to x* with order less than r. If λ = 0, then the sequence {x3,k} converges to x* possibly also with order r+1 or possibly of higher order (but definitely faster than mere order r). As λ becomes nearer to 0, the convergence gets faster.

  27. Second Method: Bisection ORDER OF CONVERGENCE For bisection, p = 1 and λ = 1/2 (in terms of the error bound), meaning the error is cut in half in every iteration. Recall: |x* − x3,k| ≤ (b − a)/2^k, and each iteration halves this bound. Hence the bisection method converges to the root linearly (p=1… slow!). Rate of convergence: O(1/2^k).
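A short sketch to observe this numerically, reusing the running example and the assumed bracket [0, 1]: the error bound (the bracket length) shrinks by exactly the factor λ = 1/2 each iteration.

import math

def f(x):
    return math.cos(x) + 5 * math.sin(x) + math.exp(x**2) - 4

a, b = 0.0, 1.0
bound_prev = b - a
for k in range(1, 8):
    mid = (a + b) / 2
    if f(a) * f(mid) < 0:   # keep the half-interval containing the sign change
        b = mid
    else:
        a = mid
    bound = b - a
    print(k, bound, bound / bound_prev)  # ratio is always 0.5 -> linear, lambda = 1/2
    bound_prev = bound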
