
CS-110 Computational Engineering Part 3





Presentation Transcript


  1. CS-110 Computational Engineering Part 3 • A. Avudainayagam • Department of Mathematics

  2. Root Finding: f(x) = 0. Method 1: The Bisection Method. Theorem: If f(x) is continuous in [a,b] and f(a)f(b) < 0, then there is at least one root of f(x) = 0 in (a,b). [Figure: two graphs of y = f(x) with f(a) < 0 and f(b) > 0, one with a single root in (a,b) and one with multiple roots.]

  3. The Method • Find an interval [x0, x1] so that f(x0)f(x1) < 0. (This may not be easy; use your knowledge of the physical phenomenon the equation represents.) • Cut the interval length in half with each iteration by examining the sign of the function at the mid point x2 = (x0 + x1)/2. If f(x2) = 0, x2 is the root. If f(x2) ≠ 0 and f(x0)f(x2) < 0, the root lies in [x0, x2]; otherwise the root lies in [x2, x1]. Repeat the process until the interval shrinks to a desired level.

  4. Number of Iterations and Error Tolerance • Length of the interval (where the root lies) after n iterations: (x1 - x0)/2^n. We can fix the number of iterations so that the root lies within an interval of chosen length ε (error tolerance): (x1 - x0)/2^n ≤ ε, i.e. n ≥ log2((x1 - x0)/ε). If n satisfies this, the midpoint of the final interval lies within ε/2 of the actual root. For example, with x1 - x0 = 1 and ε = 10^-6, n = 20 iterations suffice since log2(10^6) ≈ 19.9.

  5. Though the root lies in a small interval, |f(x)| may not be small if f(x) has a large slope. Conversely, if |f(x)| is small, x may not be close to the root if f(x) has a small slope. So we use both facts in the termination criterion: we choose an error tolerance ε on f(x), stopping when |f(x)| < ε, and a maximum number of iterations m.

  6. Pseudo code (Bisection Method) 1. Input ε > 0, m > 0, x1 > x0 so that f(x0)f(x1) < 0. Compute f0 = f(x0). k = 1 (iteration count). 2. Do { (a) Compute x2 = (x0 + x1)/2 and f2 = f(x2). (b) If f2 f0 < 0, set x1 = x2; otherwise set x0 = x2 and f0 = f2. (c) Set k = k + 1. } 3. While |f2| > ε and k ≤ m. 4. Set x = x2, the root.
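A minimal Python sketch of this pseudocode (the test function, interval, and tolerances are illustrative choices, not from the slides):

def bisect(f, x0, x1, eps=1e-10, m=100):
    """Bisection: assumes f is continuous and f(x0)*f(x1) < 0."""
    f0 = f(x0)
    for k in range(m):
        x2 = 0.5 * (x0 + x1)       # midpoint of the current bracket
        f2 = f(x2)
        if abs(f2) <= eps:         # |f| small enough: accept x2
            break
        if f0 * f2 < 0:            # root lies in [x0, x2]
            x1 = x2
        else:                      # root lies in [x2, x1]
            x0, f0 = x2, f2
    return x2

# Example: root of x^2 - 5 in [2, 3], i.e. sqrt(5) = 2.2360679...
print(bisect(lambda x: x*x - 5, 2.0, 3.0))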

  7. False Position Method (Regula Falsi) Instead of bisecting the interval [x0, x1], we choose as x2 the point where the straight line through the end points meets the x-axis, and bracket the root with [x0, x2] or [x2, x1] depending on the sign of f(x2).

  8. False Position Method [Figure: the chord through (x0,f0) and (x1,f1) cutting the x-axis at x2.] Straight line through (x0,f0), (x1,f1): y = f0 + ((f1 - f0)/(x1 - x0))(x - x0). New end point x2: x2 = x0 - f0(x1 - x0)/(f1 - f0) = (x0 f1 - x1 f0)/(f1 - f0).

  9. False Position Method (Pseudo Code) 1. Choose ε > 0 (tolerance on |f(x)|), m > 0 (maximum number of iterations), k = 1 (iteration count), x0, x1 (so that f0 f1 < 0); compute f0 = f(x0), f1 = f(x1). 2. Do { a. Compute x2 = (x0 f1 - x1 f0)/(f1 - f0) and f2 = f(x2). b. If f0 f2 < 0, set x1 = x2, f1 = f2; otherwise set x0 = x2, f0 = f2. c. k = k + 1 } 3. While (|f2| ≥ ε) and (k ≤ m). 4. x = x2, the root.
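The same loop with the false-position update in place of the midpoint (again a sketch; function and bracket are illustrative):

def false_position(f, x0, x1, eps=1e-10, m=100):
    """Regula falsi: assumes f(x0)*f(x1) < 0."""
    f0, f1 = f(x0), f(x1)
    for k in range(m):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # chord meets the x-axis
        f2 = f(x2)
        if abs(f2) <= eps:
            break
        if f0 * f2 < 0:            # root lies in [x0, x2]
            x1, f1 = x2, f2
        else:                      # root lies in [x2, x1]
            x0, f0 = x2, f2
    return x2

print(false_position(lambda x: x*x - 5, 2.0, 3.0))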

  10. Newton-Raphson Method / Newton's Method. At an approximation xk to the root, the curve is approximated by the tangent to the curve at xk, and the next approximation xk+1 is the point where the tangent meets the x-axis. [Figure: successive tangents carrying x0 to x1 to x2, approaching the root.]

  11. Tangent at (xk, fk): y = f(xk) + f′(xk)(x - xk). This tangent cuts the x-axis at xk+1 = xk - f(xk)/f′(xk). Warning: if f′(xk) is very small, the method fails. • Two function evaluations per iteration (f and f′).

  12. Newton's Method - Pseudo code 1. Choose ε > 0 (function tolerance |f(x)| < ε), m > 0 (maximum number of iterations), x0 (initial approximation), k = 1 (iteration count); compute f0 = f(x0). 2. Do { q = f′(x0) (evaluate derivative at x0); x1 = x0 - f0/q; x0 = x1; f0 = f(x0); k = k + 1 } 3. While (|f0| ≥ ε) and (k ≤ m). 4. x = x1, the root.
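A direct Python transcription of this pseudocode (a sketch; the caller supplies both f and its derivative):

def newton(f, fprime, x0, eps=1e-10, m=100):
    """Newton-Raphson; fails if f'(x_k) is (near) zero."""
    f0 = f(x0)
    k = 0
    while abs(f0) > eps and k < m:
        q = fprime(x0)          # evaluate derivative at the current point
        x0 = x0 - f0 / q        # tangent at x_k meets the x-axis here
        f0 = f(x0)
        k += 1
    return x0

print(newton(lambda x: x*x - 5, lambda x: 2 * x, 2.0))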

  13. Getting caught in a cycle of Newton's Method. Alternate iterations fall at the same pair of points (xk and xk+1 map to each other), so the sequence cycles and never converges. [Figure: tangents bouncing between xk and xk+1.]

  14. Newton's Method for finding the square root of a number: x = √a, f(x) = x² - a = 0. Example: a = 5, initial approximation x0 = 2. x1 = 2.25, x2 = 2.236111111, x3 = 2.236067978, x4 = 2.236067978.
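For f(x) = x² - a the Newton step simplifies to x_{k+1} = (x_k + a/x_k)/2, which reproduces the iterates on the slide:

a, x = 5.0, 2.0
for _ in range(4):
    x = 0.5 * (x + a / x)   # Newton step for f(x) = x^2 - a
    print(x)                # 2.25, 2.2361111..., 2.2360679..., 2.2360679...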

  15. The Secant Method. Newton's Method requires 2 function evaluations per iteration (f and f′). The Secant Method requires only 1 function evaluation and converges almost as fast as Newton's Method at a simple root. Start with two points x0, x1 near the root (no need to bracket the root as in the Bisection Method or Regula Falsi Method). xk-1 is dropped once xk+1 is obtained.

  16. The Secant Method (Geometrical Construction) • Two initial points x0, x1 are chosen. • The next approximation x2 is the point where the straight line joining (x0,f0) and (x1,f1) meets the x-axis: x2 = x1 - f1(x1 - x0)/(f1 - f0). • Take (x1, x2) and repeat. [Figure: the secant line through (x0,f0) and (x1,f1) cutting the x-axis at x2.]

  17. The Secant Method (Pseudo Code) 1. Choose ε > 0 (function tolerance on |f(x)|), m > 0 (maximum number of iterations), x0, x1 (two initial points near the root); f0 = f(x0), f1 = f(x1), k = 1 (iteration count). 2. Do { Compute x2 = x1 - f1(x1 - x0)/(f1 - f0); x0 = x1; f0 = f1; x1 = x2; f1 = f(x2); k = k + 1 } 3. While (|f1| ≥ ε) and (k ≤ m). 4. x = x2, the root.
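In Python (a sketch; note the single new function evaluation per pass):

def secant(f, x0, x1, eps=1e-10, m=100):
    """Secant method: two starting points, no bracketing required."""
    f0, f1 = f(x0), f(x1)
    k = 1
    while abs(f1) > eps and k <= m:
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # chord through last two points
        x0, f0 = x1, f1                        # drop x_{k-1}
        x1, f1 = x2, f(x2)                     # the only new f-evaluation
        k += 1
    return x1

print(secant(lambda x: x*x - 5, 2.0, 3.0))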

  18. General remarks on Convergence • The false position method in general converges faster than the bisection method (but not always; a counterexample exists). • The bisection method and the false position method are guaranteed to converge. • The secant method and the Newton-Raphson method are not guaranteed to converge.

  19. Order of Convergence • A measure of how fast an algorithm converges. Let α be the actual root: f(α) = 0. Let xk be the approximate root at the kth iteration, so the error at the kth iteration is ek = |xk - α|. The algorithm converges with order p if there exists a constant λ > 0 such that lim k→∞ e_{k+1}/e_k^p = λ.

  20. Order of Convergence of • Bisection method: p = 1 (linear convergence) • False position: generally superlinear (1 < p < 2) • Secant method: p = (1 + √5)/2 ≈ 1.618 (superlinear) • Newton-Raphson method: p = 2 (quadratic)

  21. Machine Precision • The smallest positive float εM that can be added to one and produce a sum that is greater than one.

  22. Pseudo code to find 'Machine Epsilon' 1. Set εM = 1. 2. Do { εM = εM/2; x = 1 + εM } 3. While (x > 1). 4. εM = 2 εM.
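The same loop in Python (on IEEE-754 doubles this prints 2.220446049250313e-16, i.e. 2^-52):

eps = 1.0
while True:
    eps = eps / 2.0
    x = 1.0 + eps
    if x <= 1.0:        # 1 + eps no longer distinguishable from 1
        break
eps = 2.0 * eps         # back up to the last eps that made a difference
print(eps)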

  23. Solutions of Linear Algebraic Systems AX = B, where A is an n × n matrix, B an n × 1 vector, and X an n × 1 vector (unknown). If |A| ≠ 0, the system has a unique solution. If |A| = 0, the matrix A is called a singular matrix. • Direct Methods • Iterative Methods

  24. The Augmented Matrix [A | B]

  25. Elementary row operations 1. Interchange rows j and k. 2. Multiply row k by λ (λ ≠ 0). 3. Add λ times row j to row k. Use elementary row operations to transform the equations so that the solution becomes apparent.

  26. Diagonal Matrix (D): Dij = 0 for i ≠ j (nonzero entries only on the main diagonal). Upper Triangular Matrix (U): Uij = 0 for i > j (zeros below the diagonal). Lower Triangular Matrix (L): Lij = 0 for i < j (zeros above the diagonal).

  27. Gauss Elimination Method. Using row operations, the matrix is reduced to an upper triangular matrix and the solution is obtained by back substitution.

  28. Partial Pivot Method Step 1: Look for the largest element in absolute value (called the pivot) in the first column and interchange that row with the first row. Step 2: Multiply the first row by an appropriate number and subtract it from the second row so that the first element of the second row becomes zero. Do this for the remaining rows so that all the elements in the first column except the topmost one are zero. Now the matrix looks like: [first column reduced below the pivot, shown on the slide]

  29. Now look for the pivot in the second column below the first row and interchange that row with the second row. Multiply the second row by appropriate numbers and subtract it from the remaining rows (below the second row) so that the matrix looks like: [first two columns reduced, shown on the slide]

  30. Now look for the pivot in the third column below the second row and interchange that row with the third row. Proceed till the matrix looks like: [upper triangular form, shown on the slide]. Now the solution is obtained by back substitution.

  31. Gauss Elimination (an example) Augmented Matrix: [shown on the slide; elimination reduces it to the upper triangular system solved on the next slide]

  32. Solution by back substitution x3 = 5.8333/1.1667 = 5, x2 = (2 - 5)/1.5 = -2, x1 = (2 - 8(5) + 2)/4 = -9. So x1 = -9, x2 = -2, x3 = 5.

  33. Gauss Elimination (Pseudo Code) 1. Input matrix A of order n × n and RHS vector B and form the augmented matrix [A|B]. 2. For j = 1 to n do { (a) Compute the pivot index p, j ≤ p ≤ n, such that |Apj| = max{|Aij| : j ≤ i ≤ n}. (b) If Apj = 0, print "singular matrix" and exit. (c) If p > j, interchange rows p and j. (d) For each i > j, subtract Aij/Ajj times row j from row i. } 3. Now the augmented matrix is [C|D] where C is upper triangular. 4. Solve by back substitution: for j = n down to 1 compute xj = (Dj - Σi=j+1..n Cji xi)/Cjj.
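A compact Python version of steps 1-4 (a sketch: plain lists, no optimization; the 2 × 2 demo system is an illustrative choice, not the slide's example):

def gauss_solve(A, B):
    """Gauss elimination with partial pivoting, then back substitution."""
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, B)]             # augmented matrix [A|B]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))   # pivot row index
        if M[p][j] == 0:
            raise ValueError("singular matrix")
        M[j], M[p] = M[p], M[j]                            # interchange rows p and j
        for i in range(j + 1, n):                          # eliminate below the pivot
            factor = M[i][j] / M[j][j]
            for k in range(j, n + 1):
                M[i][k] -= factor * M[j][k]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):                         # back substitution
        s = sum(M[j][i] * x[i] for i in range(j + 1, n))
        x[j] = (M[j][n] - s) / M[j][j]
    return x

# Example: 2x + y = 3, x + 3y = 5  ->  x = 0.8, y = 1.4
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))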

  34. Determinant of a Matrix Some properties of the determinant. 1. det(A^T) = det(A) 2. det(AB) = det(A) det(B) 3. det(U) = U11U22U33...Unn for triangular U 4. Interchanging rows k and j, E1(k,j): det → -det A 5. Multiplying row k by λ, E2(k,λ): det → λ det A 6. Adding λ times row j to row k, E3(k,j,λ): det unchanged

  35. Matrix Determinant The Gauss elimination procedure uses two types of row operations: 1. interchange of rows for pivoting, and 2. adding λ times row j to row k. The first operation changes the sign of the determinant while the second has no effect on it. To obtain the matrix determinant, follow Gauss elimination to the upper triangular form U, keeping count r of the row interchanges. Then det A = (-1)^r U11U22...Unn.

  36. Matrix Determinant (Pseudo Code) 1. Set r = 0. 2. For j = 1 to n do { (a) Compute the pivot index p, j ≤ p ≤ n, such that |Apj| = max{|Aij| : j ≤ i ≤ n}. (b) If Apj = 0, print "Singular Matrix" and exit. (c) If p > j, interchange rows p and j and set r = r + 1. (d) For each i > j, subtract Aij/Ajj times row j from row i. } 3. det = (-1)^r A11A22...Ann.
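The same elimination in Python, keeping only the sign changes and the diagonal product (a sketch):

def gauss_det(A):
    """Determinant via elimination with partial pivoting."""
    n = len(A)
    M = [row[:] for row in A]
    r = 0                                                  # row-interchange count
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        if M[p][j] == 0:
            return 0.0                                     # singular matrix
        if p != j:
            M[j], M[p] = M[p], M[j]
            r += 1
        for i in range(j + 1, n):
            factor = M[i][j] / M[j][j]
            for k in range(j, n):
                M[i][k] -= factor * M[j][k]
    det = (-1.0) ** r
    for j in range(n):
        det *= M[j][j]                                     # product of the diagonal
    return det

print(gauss_det([[0.0, 1.0], [1.0, 0.0]]))   # -1.0 (one interchange)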

  37. Matrix Determinant (example) A = [3 × 3 matrix shown on the slide]. After elimination with one row interchange (r = 1) the diagonal of U is 1, 4, 5.5, so det = (-1)^1 (1)(4)(5.5) = -22.

  38. LU Decomposition • When Gauss elimination is used to solve a linear system, the row operations are applied simultaneously to the RHS vector. • If the system is solved again for a different RHS vector, the row operations must be repeated. • The LU decomposition eliminates the need to repeat the row operations each time a new RHS vector is used.

  39. The matrix A (in AX = B) is written as the product of a lower triangular matrix L and an upper triangular matrix U. The diagonal elements of the upper triangular matrix are chosen to be 1 to get a consistent system of equations: for n = 3, A = LU gives 9 equations for the 9 unknowns L11, L21, L22, L31, L32, L33, U12, U13, U23.

  40. Writing out A = LU entry by entry: the first column gives (L11, L21, L31) = (A11, A21, A31); the first row gives (U11, U12, U13) = (1, A12/L11, A13/L11); the second column gives (L22, L32) = (A22 - L21U12, A32 - L31U12); the second row gives (U22, U23) = (1, (A23 - L21U13)/L22); the third column gives L33 = A33 - L31U13 - L32U23.

  41. LU Decomposition (Pseudo Code) Li1 = Ai1, i = 1, 2, ..., n; U1j = A1j/L11, j = 2, 3, ..., n; then for j = 2, 3, ..., n: Lij = Aij - Σk=1..j-1 LikUkj for i = j, j+1, ..., n, and Ujk = (Ajk - Σi=1..j-1 LjiUik)/Ljj for k = j+1, ..., n.
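A sketch of this scheme in Python (Crout form, unit diagonal on U; no pivoting, so a zero L[j][j] makes it fail):

def crout_lu(A):
    """Factor A = L U with U having 1s on its diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for k in range(j + 1, n):  # row j of U
            U[j][k] = (A[j][k] - sum(L[j][i] * U[i][k] for i in range(j))) / L[j][j]
    return L, U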

  42. LU Decomposition (Example): [the 3 × 3 matrix A and its factors L and U were shown on the slide].

  43. The values of U are stored in the zero space of L: Q = [L \ U]. After each element of A is employed, it is not needed again, so Q can be stored in place of A.

  44. After LU decomposition of the matrix A, the system AX = B is solved in 2 steps: (1) forward substitution, (2) backward substitution. AX = LUX = B; put UX = Y, so that LY = B. Forward substitution: Y1 = B1/L11, Y2 = (B2 - L21Y1)/L22, Y3 = (B3 - L31Y1 - L32Y2)/L33.

  45. UX = Y. Back substitution: x3 = Y3, x2 = Y2 - x3U23, x1 = Y1 - x2U12 - x3U13.
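Both substitution sweeps in Python (a sketch; L and U as produced by the crout_lu sketch above, so U has a unit diagonal):

def solve_lu(L, U, B):
    """Solve L U x = B: forward substitution for y, then backward for x."""
    n = len(B)
    y = [0.0] * n
    for i in range(n):                     # forward: L y = B
        y[i] = (B[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # backward: U x = y (unit diagonal)
        x[i] = y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x

# Usage with the earlier sketch: L, U = crout_lu(A); x = solve_lu(L, U, B)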

  46. As in Gauss elimination, LU decomposition must employ pivoting to avoid division by zero and to minimize round off errors. The pivoting is done immediately after computing each column.

  47. Matrix Inverse Using the LU Decomposition LU decomposition can be used to obtain the inverse of the original coefficient matrix. Each column j of the inverse is determined by using a unit vector (with 1 in the jth row ) as the RHS vector

  48. Matrix inverse using LU decomposition (an example). [The factors L and U and the matrix A were shown on the slide.] The first column of the inverse of A is obtained by solving LUX = (1, 0, 0)^T; the second and third columns are obtained by taking the RHS vector as (0, 1, 0) and (0, 0, 1).
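The same column-by-column idea using library routines (a sketch with SciPy's LU factorization; the 3 × 3 matrix is an arbitrary nonsingular example, not the slide's):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0, 8.0],
              [1.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])
lu, piv = lu_factor(A)                               # factor A once
cols = [lu_solve((lu, piv), e) for e in np.eye(3)]   # one solve per unit vector
A_inv = np.column_stack(cols)
print(np.allclose(A @ A_inv, np.eye(3)))             # True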

  49. Iterative Methods for the solution of Linear systems • Jacobi iteration • Gauss-Seidel iteration
