
Scientific Computing



Presentation Transcript


  1. Scientific Computing The Power Method for Finding Dominant Eigenvalue

  2. Eigenvalues-Eigenvectors • The eigenvectors of a matrix are the nonzero vectors that satisfy • Ax = λx • Or, (A – λI)x = 0 • Since this homogeneous system has a nonzero solution only when the matrix is singular, λ is an eigenvalue iff det(A – λI) = 0 • Example:
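This definition can be checked numerically with a short Python sketch (the 2×2 matrix here is an illustrative example of my own, not one from the slides): the characteristic equation det(A – λI) = 0 gives the eigenvalues, and Ax = λx can then be verified directly.

```python
import math

# Hypothetical 2x2 example (not from the slides): A = [[2, 1], [1, 2]]
A = [[2.0, 1.0], [1.0, 2.0]]

# det(A - lam*I) = lam^2 - (trace)*lam + det(A) = lam^2 - 4*lam + 3 = 0
b = -(A[0][0] + A[1][1])                   # -trace(A)
c = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # det(A)
disc = math.sqrt(b * b - 4 * c)
lam1, lam2 = (-b + disc) / 2, (-b - disc) / 2
print(lam1, lam2)  # 3.0 1.0

# Verify Ax = lam*x for lam = 3 with eigenvector x = [1, 1]:
x = [1.0, 1.0]
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
print(Ax)          # [3.0, 3.0] = 3 * [1, 1]
```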

  3. Eigenvalues-Eigenvectors • Eigenvalues are used in the solution of engineering problems involving vibrations, elasticity, oscillating systems, etc. • Eigenvalues are also important for the analysis of Markov chains in statistics. • The next set of slides is from the course “Computer Applications in Engineering and Construction” at Texas A&M (Fall 2008).

  4. Mass-Spring System Equilibrium positions

  5. Mass-Spring System • Homogeneous system • Find the eigenvalues λ from det[ ] = 0

  6. Polynomial Method • m1 = m2 = 40 kg, k = 200 N/m • Characteristic equation: det[ ] = 0 • Two eigenvalues ω = 3.873 s⁻¹ or 2.236 s⁻¹ • Period Tp = 2π/ω = 1.62 s or 2.81 s
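The slide's matrix is not reproduced in this transcript, but its numbers match the standard result for two equal masses coupled by three equal springs, ω² = k/m and ω² = 3k/m (that formula is an assumption here). A Python sketch reproducing the values:

```python
import math

m, k = 40.0, 200.0         # kg, N/m (values from the slide)
# Assumed mode frequencies for two equal masses, three equal springs:
w_out = math.sqrt(3 * k / m)  # out-of-phase mode: omega^2 = 3k/m
w_in = math.sqrt(k / m)       # in-phase mode:     omega^2 = k/m
Tp_out = 2 * math.pi / w_out  # period Tp = 2*pi/omega
Tp_in = 2 * math.pi / w_in
print(round(w_out, 3), round(w_in, 3))    # 3.873 2.236
print(round(Tp_out, 2), round(Tp_in, 2))  # 1.62 2.81
```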

  7. Principal Modes of Vibration • Tp = 1.62 s: X1 = −X2 (masses out of phase) • Tp = 2.81 s: X1 = X2 (masses in phase)

  8. Power Method • Power method for finding the dominant eigenvalue • Start with an initial guess for x • Calculate w = Ax • The largest value (in magnitude) in w is the estimate of the eigenvalue • Get the next x by rescaling w (this avoids computing the very large matrix power Aⁿ) • Continue until converged
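The steps above can be sketched in Python (a minimal illustration only; the slides' own implementation is the MATLAB function Power_eig shown later, and this sketch assumes the dominant eigenvalue is nonzero):

```python
def power_method(A, x0, tol=1e-6, max_iter=100):
    """Power iteration sketch: estimate the dominant eigenvalue/eigenvector.

    A is a square matrix as a list of row lists, x0 the initial guess.
    Returns (eigenvalue estimate, eigenvector estimate scaled to max entry 1).
    """
    x = x0[:]
    lam = 0.0
    for _ in range(max_iter):
        # w = A x
        w = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        # largest-magnitude entry of w is the new eigenvalue estimate
        lam_new = max(w, key=abs)
        # rescale so the next x has largest entry 1 (avoids computing A^n x)
        x = [wi / lam_new for wi in w]
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x
```

Running it on the 3×3 matrix from the MATLAB example later in the slides converges to roughly 19.0327, matching MATLAB's eig.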

  9. Power Method • Start with initial guess z = x0 • At each step compute w = Az, take the largest-magnitude entry as the estimate λk, and rescale z = w/λk • λk is the estimate of the dominant eigenvalue

  10. Power Method • For a large number of iterations, λ should converge to the largest eigenvalue • The normalization makes the right-hand side converge to λ, rather than λⁿ

  11. Example: Power Method • Consider the matrix A = [2 8 10; 8 3 4; 10 4 7] (the same matrix used in the MATLAB example below) • Since we don’t know which eigenvalue is dominant, weight all components equally: start with x0 = [1, 1, 1]T • Each iteration produces an eigenvalue estimate and an eigenvector estimate

  12. Example • Current estimate for the largest eigenvalue is 21 • Rescale w by the eigenvalue to get the new x • Check convergence (norm < tol?)
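The first iteration can be checked by hand or with a short Python sketch (Python used for illustration; the slides use MATLAB), using the 3×3 matrix from the MATLAB example later in the slides:

```python
A = [[2.0, 8.0, 10.0], [8.0, 3.0, 4.0], [10.0, 4.0, 7.0]]
x = [1.0, 1.0, 1.0]                 # initial guess x0
# w = A x0: each entry is a row of A dotted with x0
w = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(w)                            # [20.0, 15.0, 21.0]
lam = max(w, key=abs)               # current eigenvalue estimate: 21.0
x1 = [wi / lam for wi in w]         # rescale w to get the new x
print([round(v, 4) for v in x1])    # [0.9524, 0.7143, 1.0]
```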

  13. Update the estimated eigenvector and repeat • New estimate for the largest eigenvalue is 19.381 • Rescale w by the eigenvalue to get the new x

  14. Example • One more iteration • Convergence criterion: norm (or relative error) < tol

  15. Example: Power Method

  16. Script file: Power_eig.m

  17. MATLAB Example: Power Method

» A = [2 8 10; 8 3 4; 10 4 7]
A =
     2     8    10
     8     3     4
    10     4     7
» [z, m] = Power_eig(A, 100, 0.001);        % MATLAB function Power_eig
     it        m      z(1)     z(2)     z(3)
  1.0000  21.0000   0.9524   0.7143   1.0000
  2.0000  19.3810   0.9091   0.7101   1.0000
  3.0000  18.9312   0.9243   0.7080   1.0000
  4.0000  19.0753   0.9181   0.7087   1.0000
  5.0000  19.0155   0.9206   0.7084   1.0000
  6.0000  19.0396   0.9196   0.7085   1.0000
  7.0000  19.0299   0.9200   0.7085   1.0000
  8.0000  19.0338   0.9198   0.7085   1.0000
  9.0000  19.0322   0.9199   0.7085   1.0000
error = 8.3175e-004
» z                                          % eigenvector
z = 0.9199 0.7085 1.0000
» m                                          % eigenvalue
m = 19.0322
» x = eig(A)                                 % compare with MATLAB's eig
x = -7.7013 0.6686 19.0327
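A similar run can be reproduced outside MATLAB. This Python sketch mimics what Power_eig appears to do, stopping when the change in z drops below tol = 0.001 (the exact norm Power_eig uses is an assumption; this sketch uses the maximum componentwise change, so the stopping iteration may differ slightly from the MATLAB table):

```python
A = [[2.0, 8.0, 10.0], [8.0, 3.0, 4.0], [10.0, 4.0, 7.0]]
z = [1.0, 1.0, 1.0]
m = 0.0
for it in range(100):
    # w = A z
    w = [sum(A[i][j] * z[j] for j in range(3)) for i in range(3)]
    m_new = max(w, key=abs)              # eigenvalue estimate
    z_new = [wi / m_new for wi in w]     # rescaled eigenvector estimate
    # assumed convergence measure: max componentwise change in z
    err = max(abs(z_new[i] - z[i]) for i in range(3))
    z, m = z_new, m_new
    if err < 0.001:
        break
print(round(m, 2))   # 19.03 -- close to eig's 19.0327
```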

  18. MATLAB’s Methods • e = eig(A) gives the eigenvalues of A • [V, D] = eig(A) gives eigenvectors in V(:,k) and eigenvalues Dii on the diagonal of D, so that AV = VD • [V, D] = eig(A, B) solves the generalized eigenvalue problem Ax = λBx, so that AV = BVD

  19. Theorem: If A has a complete set of eigenvectors, then the Power Method converges to the dominant eigenvalue of the matrix A. Proof: A has n eigenvalues λ1, λ2, λ3, …, λn with |λ1| > |λ2| > |λ3| > … > |λn|, with a corresponding basis of eigenvectors w1, w2, w3, …, wn. Let the initial vector w0 be a linear combination of the vectors w1, w2, w3, …, wn: w0 = a1w1 + a2w2 + a3w3 + … + anwn. Then Aw0 = A(a1w1 + a2w2 + a3w3 + … + anwn) = a1Aw1 + a2Aw2 + a3Aw3 + … + anAwn = a1λ1w1 + a2λ2w2 + a3λ3w3 + … + anλnwn. Iterating, Akw0 = a1(λ1)kw1 + a2(λ2)kw2 + … + an(λn)kwn, and dividing by (λ1)k−1: Akw0/(λ1)k−1 = a1(λ1)k/(λ1)k−1 w1 + … + an(λn)k/(λ1)k−1 wn

  20. For large values of k (as k goes to infinity), the ratios (λi/λ1)k vanish for every i ≥ 2 since |λi/λ1| < 1, so only the w1 term survives. At each stage of the process we divide by the dominant term of the vector. If we write w1 as shown to the right and consider what happens between two consecutive estimates, dividing by the dominant term gives something that is approximately 1, so the scale factor itself converges to the dominant eigenvalue λ1.
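The geometric decay the proof predicts can be observed numerically: the error in the eigenvalue estimate should shrink by roughly |λ2/λ1| per step. A Python check using the 3×3 matrix from the earlier example, with λ1 ≈ 19.0327 and λ2 ≈ −7.7013 taken from the MATLAB eig output above:

```python
A = [[2.0, 8.0, 10.0], [8.0, 3.0, 4.0], [10.0, 4.0, 7.0]]
lam1_true = 19.0327        # dominant eigenvalue reported by MATLAB's eig
z = [1.0, 1.0, 1.0]
errors = []
for _ in range(8):
    w = [sum(A[i][j] * z[j] for j in range(3)) for i in range(3)]
    m = max(w, key=abs)    # current eigenvalue estimate
    z = [wi / m for wi in w]
    errors.append(abs(m - lam1_true))
ratio = errors[-1] / errors[-2]
# asymptotically, ratio ~ |lambda2/lambda1| = 7.7013/19.0327 ~ 0.40
```

The errors decrease monotonically, and the step-to-step ratio settles near 0.4, matching |λ2/λ1|.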
