
CSE 245: Computer Aided Circuit Simulation and Verification


Presentation Transcript


  1. CSE 245: Computer Aided Circuit Simulation and Verification Fall 2004, Oct 19 Lecture 7: Matrix Solver II - Iterative Method

  2. Outline • Iterative Methods • Stationary Iterative Methods (SOR, GS, Jacobi) • Krylov Methods (CG, GMRES) • Multigrid Method Zhengyong (Simon) Zhu, UCSD

  3. Iterative Methods Stationary: x(k+1) = Gx(k) + c, where G and c do not depend on the iteration count k. Non-stationary: x(k+1) = x(k) + a(k)p(k), where the computation involves information that changes at each iteration. courtesy Alessandra Nardi, UCB

  4. Stationary: Jacobi Method In the i-th equation, solve for the value of xi while assuming the other entries of x remain fixed. In matrix terms the method becomes: x(k+1) = D-1(L + U)x(k) + D-1b, where D, -L and -U represent the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of M, with M = D - L - U. courtesy Alessandra Nardi, UCB
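
As a concrete illustration (not part of the original slides), here is a minimal NumPy sketch of the Jacobi iteration using the splitting M = D - L - U; the matrix, right-hand side, and tolerance are assumed example values.

```python
import numpy as np

def jacobi(M, b, x0=None, tol=1e-8, max_iter=500):
    """Jacobi iteration: x(k+1) = D^-1 ((L+U) x(k) + b), with M = D - L - U."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(np.diag(M))
    LU = D - M                          # L + U = D - M
    D_inv = np.diag(1.0 / np.diag(M))
    for k in range(max_iter):
        x_new = D_inv @ (LU @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example: a small diagonally dominant system (made-up numbers)
M = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(M, b)
print(x, iters, np.allclose(M @ x, b, atol=1e-6))
```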

  5. Stationary: Gauss-Seidel Like Jacobi, but now assume that previously computed results are used as soon as they are available. In matrix terms the method becomes: x(k+1) = (D - L)-1(Ux(k) + b), where D, -L and -U represent the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of M, with M = D - L - U. courtesy Alessandra Nardi, UCB

  6. Stationary: Successive Overrelaxation (SOR) Devised by applying extrapolation to Gauss-Seidel in the form of a weighted average. In matrix terms the method becomes: x(k+1) = (D - wL)-1((wU + (1 - w)D)x(k) + wb), where D, -L and -U represent the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of M, with M = D - L - U. courtesy Alessandra Nardi, UCB
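
A similar sketch of SOR in component form (again an assumed example, not from the slides); setting w = 1 reproduces Gauss-Seidel, as discussed on the next slide.

```python
import numpy as np

def sor(M, b, w=1.5, x0=None, tol=1e-8, max_iter=500):
    """SOR sweep: blend the Gauss-Seidel update with the previous value using weight w."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value for component i, using already-updated entries
            sigma = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - sigma) / M[i, i]
            x[i] = (1 - w) * x_old[i] + w * x_gs   # weighted average (w = 1 gives Gauss-Seidel)
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter
```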

  7. SOR • Choose w to accelerate convergence • w = 1: Gauss-Seidel • 1 < w < 2: over-relaxation • w < 1: under-relaxation Zhengyong (Simon) Zhu, UCSD

  8. Convergence of Stationary Methods • Linear equation: Mx = b • A sufficient condition for convergence (GS, Jacobi) is that the matrix M is strictly diagonally dominant. • If M is symmetric positive definite, SOR converges for any w with 0 < w < 2. • A necessary and sufficient condition for convergence is that the spectral radius of the iteration matrix G (the magnitude of its largest eigenvalue) is smaller than 1. • Jacobi: G = D-1(L + U) • Gauss-Seidel: G = (D - L)-1U • SOR: G = (D - wL)-1(wU + (1 - w)D) Zhengyong (Simon) Zhu, UCSD
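
The convergence condition can be checked numerically by forming the iteration matrix G for each splitting and computing its spectral radius; the test matrix below is an assumed example.

```python
import numpy as np

def iteration_matrices(M, w=1.5):
    """Iteration matrices G for Jacobi, Gauss-Seidel and SOR from the splitting M = D - L - U."""
    D = np.diag(np.diag(M))
    L = -np.tril(M, -1)
    U = -np.triu(M, 1)
    G_jacobi = np.linalg.solve(D, L + U)
    G_gs = np.linalg.solve(D - L, U)
    G_sor = np.linalg.solve(D - w * L, w * U + (1 - w) * D)
    return G_jacobi, G_gs, G_sor

def spectral_radius(G):
    return max(abs(np.linalg.eigvals(G)))

M = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
for name, G in zip(["Jacobi", "Gauss-Seidel", "SOR"], iteration_matrices(M)):
    print(name, spectral_radius(G))   # all < 1 here, so each method converges for this M
```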

  9. Outline • Iterative Methods • Stationary Iterative Methods (SOR, GS, Jacobi) • Krylov Methods (CG, GMRES) • Steepest Descent • Conjugate Gradient • Preconditioning • Multigrid Method Zhengyong (Simon) Zhu, UCSD

  10. Linear Equation: an optimization problem • Quadratic function of vector x: f(x) = 1/2 xTAx - bTx + c • Matrix A is positive-definite if xTAx > 0 for any nonzero vector x • If A is symmetric positive-definite, f(x) is minimized by the solution of Ax = b Zhengyong (Simon) Zhu, UCSD

  11. Linear Equation: an optimization problem • Quadratic function f(x) = 1/2 xTAx - bTx + c • Derivative: f'(x) = 1/2 ATx + 1/2 Ax - b • If A is symmetric: f'(x) = Ax - b • If A is positive-definite, f(x) is minimized by setting f'(x) to 0, i.e. by solving Ax = b Zhengyong (Simon) Zhu, UCSD
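
A quick numerical sanity check of these formulas (the matrix, vector, and constant below are made-up example values): for a symmetric positive-definite A, the gradient Ax - b vanishes exactly at the solution of Ax = b, which is the minimizer of f.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])          # symmetric positive-definite (example)
b = np.array([2.0, -8.0])
c = 0.0

def f(x):
    return 0.5 * x @ A @ x - b @ x + c

def grad_f(x):
    return A @ x - b                # gradient formula for symmetric A

x_star = np.linalg.solve(A, b)      # exact solution of Ax = b
print(grad_f(x_star))               # ~ [0, 0]: the gradient vanishes at the minimizer

# f is larger at a perturbed point, consistent with x_star being the minimum
print(f(x_star) < f(x_star + np.array([0.1, -0.2])))
```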

  12. For a symmetric positive-definite matrix A, the quadratic form f(x) is a paraboloid with a unique minimum. [Figure: surface plot of f(x)] from J. R. Shewchuk "painless CG"

  13. Gradient of the quadratic form The gradient f'(x) points in the direction of steepest increase of f(x). [Figure: gradient field of f(x)] from J. R. Shewchuk "painless CG"

  14. Symmetric Positive-Definite Matrix A • If A is symmetric positive-definite, p is an arbitrary point and x is the solution point (Ax = b), then f(p) = f(x) + 1/2 (p - x)TA(p - x). • Since A is positive-definite, we have f(p) > f(x) if p != x, so x minimizes f. Zhengyong (Simon) Zhu, UCSD

  15. If A is not positive definite • a) positive-definite matrix b) negative-definite matrix • c) singular (positive-indefinite) matrix d) indefinite matrix [Figure: quadratic forms for the four cases] from J. R. Shewchuk "painless CG"

  16. Non-stationary Iterative Methods • Start from an initial guess x0 and adjust it until it is close enough to the exact solution: x(i+1) = x(i) + a(i)p(i), i = 0, 1, 2, 3, ... where p(i) is the adjustment direction and a(i) is the step size • How to choose the direction and the step size? Zhengyong (Simon) Zhu, UCSD

  17. Steepest Descent Method (1) • Choose the direction in which f decreases most quickly: the direction opposite the gradient, -f'(x(i)) = b - Ax(i) • This is also the direction of the residual r(i) Zhengyong (Simon) Zhu, UCSD

  18. Steepest Descent Method (2) • How to choose the step size a(i)? • Line search: a(i) should minimize f along the direction of r(i), which means d/da f(x(i) + a r(i)) = 0, i.e. r(i+1)Tr(i) = 0 (successive residuals are orthogonal). This gives a(i) = r(i)Tr(i) / (r(i)TAr(i)). Zhengyong (Simon) Zhu, UCSD

  19. Steepest Descent Algorithm Given x0, iterate until the residual is smaller than the error tolerance: r(i) = b - Ax(i); a(i) = r(i)Tr(i) / (r(i)TAr(i)); x(i+1) = x(i) + a(i)r(i) Zhengyong (Simon) Zhu, UCSD
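
A minimal NumPy sketch of this loop; the 2x2 system and tolerance are illustrative assumptions.

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-8, max_iter=10000):
    """Steepest descent: step along the residual r = b - Ax with exact line search."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    for i in range(max_iter):
        r = b - A @ x                       # residual = negative gradient
        if np.linalg.norm(r) < tol:
            return x, i
        alpha = (r @ r) / (r @ (A @ r))     # exact line-search step size
        x = x + alpha * r
    return x, max_iter

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x, iters = steepest_descent(A, b)
print(x, iters)
```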

  20. Steepest Descent Method: example • a) Starting at (-2,-2), take the direction of steepest descent of f • b) Find the point on the intersection of these two surfaces that minimizes f • c) Intersection of surfaces • d) The gradient at the bottommost point is orthogonal to the gradient of the previous step from J. R. Shewchuk "painless CG"

  21. Iterations of Steepest Descent Method from J. R. Shewchuk "painless CG"

  22. Convergence of Steepest Descent-1 Let the error be expanded in the eigenvectors of A: e(i) = sum over j of ξ(j)v(j), with eigenvectors v(j) and eigenvalues λ(j) (Av(j) = λ(j)v(j)), j = 1, 2, ..., n. Energy norm: ||e||A = (eTAe)^(1/2) Zhengyong (Simon) Zhu, UCSD

  23. Convergence of Steepest Descent-2 Zhengyong (Simon) Zhu, UCSD

  24. Convergence Study (n=2) Assume λ1 ≥ λ2 > 0 and let the spectral condition number be κ = λ1/λ2; let μ = ξ2/ξ1 be the ratio of the error components along the two eigenvectors. The per-step reduction factor ω then depends only on κ and μ. Zhengyong (Simon) Zhu, UCSD

  25. Plot of the convergence factor ω as a function of κ and μ from J. R. Shewchuk "painless CG"

  26. Case Study from J. R. Shewchuk "painless CG"

  27. Bound of Convergence ||e(i)||A ≤ ((κ - 1)/(κ + 1))^i ||e(0)||A It can be proved that this bound is also valid for n > 2, where κ = λmax/λmin. from J. R. Shewchuk "painless CG"
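
The bound can be probed numerically: run a few steepest descent steps on a small SPD example (values assumed, not from the slides) and compare the energy-norm error with ((κ - 1)/(κ + 1))^i times the initial error.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])            # SPD example (assumed values)
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)

kappa = np.linalg.cond(A)             # = lambda_max / lambda_min for SPD A
rho = (kappa - 1) / (kappa + 1)

def energy_norm(e):
    return np.sqrt(e @ A @ e)

x = np.zeros(2)
e0 = energy_norm(x_star - x)
for i in range(1, 6):
    r = b - A @ x
    alpha = (r @ r) / (r @ (A @ r))
    x = x + alpha * r
    # observed energy-norm error vs. the bound rho**i * ||e(0)||_A
    print(i, energy_norm(x_star - x), rho**i * e0)
```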

  28. Conjugate Gradient Method • Steepest descent often repeats search directions • Why not take exactly one step in each direction? [Figure: zig-zag search directions of the steepest descent method] figure from J. R. Shewchuk "painless CG"

  29. Orthogonal Directions Pick orthogonal search directions d(0), d(1), ..., d(n-1) and require the remaining error e(i+1) to be orthogonal to d(i). • The resulting step size involves e(i), but we don't know e(i)!!! Zhengyong (Simon) Zhu, UCSD

  30. Orthogonal → A-orthogonal • Instead of orthogonal search directions, we make the search directions A-orthogonal (conjugate): d(i)TAd(j) = 0 for i != j from J. R. Shewchuk "painless CG"

  31. Search Step Size Requiring the new error to be A-orthogonal to the current direction gives a(i) = d(i)Tr(i) / (d(i)TAd(i)) Zhengyong (Simon) Zhu, UCSD

  32. Iteration finishes in n steps Expand the initial error in the A-orthogonal search directions: e(0) = sum over j of δ(j)d(j). The error component along direction d(j) is eliminated at step j. After n steps, all error components are eliminated. Zhengyong (Simon) Zhu, UCSD

  33. Conjugate Search Directions • How to construct A-orthogonal search directions, given a set of n linearly independent vectors? • Since each new residual in the steepest descent method is orthogonal to the previous one, the residuals are a good set to start with. Zhengyong (Simon) Zhu, UCSD

  34. Construct Search Direction -1 • In the steepest descent method, the new residual is just a linear combination of the previous residual and Ar(i-1): r(i) = r(i-1) - a(i-1)Ar(i-1) • Let D(i) = span{r(0), Ar(0), A^2r(0), ..., A^(i-1)r(0)}. This is a Krylov subspace: the space created by repeatedly applying a matrix to a vector. Zhengyong (Simon) Zhu, UCSD

  35. Construct Search Direction -2 Let d(0) = r(0). For i > 0, construct d(i) from r(i) by conjugate Gram-Schmidt: d(i) = r(i) + sum over k < i of β(i,k)d(k), with β(i,k) = -(r(i)TAd(k)) / (d(k)TAd(k)). Zhengyong (Simon) Zhu, UCSD

  36. Construct Search Direction -3 • We can get the next direction from the previous one, without saving them all. Let β(i) = r(i)Tr(i) / (r(i-1)Tr(i-1)); then d(i) = r(i) + β(i)d(i-1). Zhengyong (Simon) Zhu, UCSD

  37. Conjugate Gradient Algorithm Given x0, iterate until the residual is smaller than the error tolerance: d(0) = r(0) = b - Ax(0); a(i) = r(i)Tr(i) / (d(i)TAd(i)); x(i+1) = x(i) + a(i)d(i); r(i+1) = r(i) - a(i)Ad(i); β(i+1) = r(i+1)Tr(i+1) / (r(i)Tr(i)); d(i+1) = r(i+1) + β(i+1)d(i) Zhengyong (Simon) Zhu, UCSD
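
A compact NumPy sketch of the algorithm above; the test system and tolerance are assumed example values.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """CG for symmetric positive-definite A, following the update formulas above."""
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    d = r.copy()
    rs_old = r @ r
    for i in range(max_iter):
        if np.sqrt(rs_old) < tol:
            return x, i
        Ad = A @ d
        alpha = rs_old / (d @ Ad)      # a(i) = r(i)^T r(i) / (d(i)^T A d(i))
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        beta = rs_new / rs_old         # beta(i+1) = r(i+1)^T r(i+1) / (r(i)^T r(i))
        d = r + beta * d
        rs_old = rs_new
    return x, max_iter

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = conjugate_gradient(A, b)
print(x, iters)   # converges in at most n = 2 steps in exact arithmetic
```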

  38. Conjugate gradient: Convergence • In exact arithmetic, CG converges in n steps (completely unrealistic!!) • Accuracy after k steps of CG is related to: • consider polynomials of degree k that are equal to 1 at 0 • how small can such a polynomial be at all the eigenvalues of A? • Thus, eigenvalues close together are good. • Condition number: κ(A) = ||A||2 ||A-1||2 = λmax(A) / λmin(A) • Residual is reduced by a constant factor by O(κ(A)^(1/2)) iterations of CG. courtesy J.R.Gilbert, UCSB

  39. Other Krylov subspace methods • Nonsymmetric linear systems: • GMRES: for i = 1, 2, 3, ... find x(i) in Ki(A, b) that minimizes the residual norm ||Ax(i) - b||. But there is no short recurrence => save old vectors => lots more space. (Usually "restarted" every k iterations to use less space.) • BiCGStab, QMR, etc.: two spaces Ki(A, b) and Ki(AT, b) with mutually orthogonal bases. Short recurrences => O(n) space, but less robust. • Convergence and preconditioning are more delicate than for CG • Active area of current research • Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric) courtesy J.R.Gilbert, UCSB

  40. Preconditioners • Suppose you had a matrix B such that: (1) the condition number κ(B-1A) is small, and (2) By = z is easy to solve • Then you could solve (B-1A)x = B-1b instead of Ax = b • B = A is great for (1), not for (2) • B = I is great for (2), not for (1) • Domain-specific approximations sometimes work • B = diagonal of A sometimes works • Better: blend in some direct-methods ideas. . . courtesy J.R.Gilbert, UCSB

  41. Preconditioned conjugate gradient iteration • One matrix-vector multiplication per iteration • One solve with the preconditioner per iteration x(0) = 0, r(0) = b, d(0) = B-1r(0), y(0) = B-1r(0) for k = 1, 2, 3, . . . α(k) = (y(k-1)Tr(k-1)) / (d(k-1)TAd(k-1)) step length x(k) = x(k-1) + α(k)d(k-1) approx solution r(k) = r(k-1) - α(k)Ad(k-1) residual y(k) = B-1r(k) preconditioning solve β(k) = (y(k)Tr(k)) / (y(k-1)Tr(k-1)) improvement d(k) = y(k) + β(k)d(k-1) search direction courtesy J.R.Gilbert, UCSB
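
The preconditioned iteration translates almost line by line into NumPy. In this sketch B is taken to be the diagonal of A (one of the simple choices from the previous slide); the test matrix and tolerance are assumed example values.

```python
import numpy as np

def pcg(A, b, B_diag, tol=1e-10, max_iter=100):
    """Preconditioned CG with a diagonal preconditioner B (By = r solved by elementwise division)."""
    x = np.zeros_like(b, dtype=float)
    r = b.copy()
    y = r / B_diag                   # y(0) = B^-1 r(0)
    d = y.copy()
    ry_old = y @ r
    for k in range(1, max_iter + 1):
        Ad = A @ d
        alpha = ry_old / (d @ Ad)    # step length
        x = x + alpha * d            # approximate solution
        r = r - alpha * Ad           # residual
        if np.linalg.norm(r) < tol:
            return x, k
        y = r / B_diag               # preconditioning solve
        ry_new = y @ r
        beta = ry_new / ry_old       # improvement
        d = y + beta * d             # search direction
        ry_old = ry_new
    return x, max_iter

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = pcg(A, b, B_diag=np.diag(A))
print(x, iters)
```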

  42. Outline • Iterative Methods • Stationary Iterative Methods (SOR, GS, Jacobi) • Krylov Methods (CG, GMRES) • Multigrid Method Zhengyong (Simon) Zhu, UCSD

  43. What is multigrid? • A multilevel iterative method to solve Ax = b • Originated in PDEs on geometric grids • Extending the multigrid idea to unstructured problems gives algebraic multigrid (AMG) • Geometric multigrid is used here to present the basic ideas of the multigrid method. Zhengyong (Simon) Zhu, UCSD

  44. The model problem Ax = b [Figure: resistor network with nodes v1-v8 and a source vs] Zhengyong (Simon) Zhu, UCSD

  45. Simple iterative method • x(0) -> x(1) -> ... -> x(k) • Jacobi iteration • Matrix form: x(k) = RJx(k-1) + CJ • General form: x(k) = Rx(k-1) + C (1) • Stationary: x* = Rx* + C (2) Zhengyong (Simon) Zhu, UCSD

  46. Error and Convergence Definitions: error e = x* - x (3); residual r = b - Ax (4). Relation between e and r: Ae = r (5) (from (3) and (4)). e(1) = x* - x(1) = Rx* + C - Rx(0) - C = Re(0). Error equation: e(k) = Rke(0) (6) (from (1), (2) and (3)). Convergence: e(k) -> 0 for any e(0) if and only if the spectral radius ρ(R) < 1. Zhengyong (Simon) Zhu, UCSD

  47. Error of different frequencies • Wavenumber k and frequency θ = kπ/n • High-frequency error is more oscillatory between grid points [Figure: error modes for k = 1, 2, 4] Zhengyong (Simon) Zhu, UCSD

  48. Iterations reduce low-frequency error inefficiently • Smoothing iterations reduce high-frequency error efficiently, but not low-frequency error [Figure: error vs. iterations for modes k = 1, 2, 4] Zhengyong (Simon) Zhu, UCSD
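
The smoothing behaviour can be demonstrated directly: apply weighted-Jacobi sweeps (w = 2/3, an assumed textbook choice) to the 1D Poisson model matrix with a purely low-frequency and a purely high-frequency initial error, and compare how much of each survives. Grid size and number of sweeps are made-up example values.

```python
import numpy as np

n = 32                                                   # interior grid points (assumed example size)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1D Poisson model matrix
j = np.arange(1, n + 1)

def remaining_error(k, sweeps=10, w=2/3):
    """Fraction of a single-frequency error mode left after weighted-Jacobi sweeps on Ax = 0."""
    x = np.sin(j * k * np.pi / (n + 1))                  # initial guess; since x* = 0, the error is this mode
    norm0 = np.linalg.norm(x)
    for _ in range(sweeps):
        x = x + w * (0 - A @ x) / np.diag(A)             # weighted Jacobi sweep for Ax = 0
    return np.linalg.norm(x) / norm0

print("low-frequency  mode k = 1 :", remaining_error(1))    # barely reduced
print("high-frequency mode k = 16:", remaining_error(16))   # strongly damped
```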

  49. Multigrid - a first glance • Two levels: coarse and fine grid • Fine grid (spacing h): Ahxh = bh, i.e. the original Ax = b • Coarse grid (spacing 2h): A2hx2h = b2h [Figure: fine-grid nodes 1-8 and the corresponding coarse-grid nodes] Zhengyong (Simon) Zhu, UCSD

  50. Idea 1: the V-cycle iteration • Also called nested iteration • Start on the coarse grid 2h: iterate on A2hx2h = b2h • Prolongation: interpolate the coarse result x2h to the fine grid h • Iterate on Ahxh = bh to get xh • Restriction: map fine-grid quantities back to the coarse grid • Question 1: Why do we need the coarse grid? (a two-grid sketch follows below) Zhengyong (Simon) Zhu, UCSD
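
To make the two-grid idea concrete, here is a sketch of one two-grid correction cycle for the 1D Poisson model problem: pre-smooth on the fine grid, restrict the residual, solve the coarse-grid correction equation, prolongate the correction back, and post-smooth. The grid size, weighted-Jacobi smoother, and full-weighting/linear-interpolation transfer operators are standard textbook choices assumed for illustration, not taken from the slides.

```python
import numpy as np

def poisson_1d(n):
    """1D Poisson model matrix with n interior points."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def weighted_jacobi(A, x, b, sweeps=3, w=2/3):
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + w * (b - A @ x) / D
    return x

def two_grid_cycle(A_h, b, x, n):
    """One two-grid cycle for Ah xh = bh (coarse problem solved exactly here)."""
    nc = (n - 1) // 2                          # coarse interior points
    # restriction (full weighting) and prolongation (linear interpolation)
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, 2*i:2*i+3] = [0.25, 0.5, 0.25]
    P = 2 * R.T
    x = weighted_jacobi(A_h, x, b)             # pre-smoothing on the fine grid
    r_h = b - A_h @ x                          # fine-grid residual
    r_2h = R @ r_h                             # restrict the residual to the coarse grid
    A_2h = R @ A_h @ P                         # Galerkin coarse-grid operator A2h
    e_2h = np.linalg.solve(A_2h, r_2h)         # coarse-grid correction
    x = x + P @ e_2h                           # prolongate the correction back to the fine grid
    x = weighted_jacobi(A_h, x, b)             # post-smoothing
    return x

n = 31
A = poisson_1d(n)
b = np.ones(n)
x = np.zeros(n)
for cycle in range(5):
    x = two_grid_cycle(A, b, x, n)
    print(cycle, np.linalg.norm(b - A @ x))    # residual drops quickly per cycle
```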
