
Engineering Analysis ENG 3420 Fall 2009

Presentation Transcript


  1. Engineering Analysis ENG 3420 Fall 2009 Dan C. Marinescu Office: HEC 439 B Office hours: Tu-Th 11:00-12:00

  2. Lecture 17 • Reading assignment: Chapters 10 and 11, Linear Algebra ClassNotes • Last time: Symmetric matrices; Hermitian matrices; Matrix multiplication • Today: Linear algebra functions in MATLAB; The inverse of a matrix; Vector products; Tensor algebra; Characteristic equation, eigenvectors, eigenvalues; Norms; Matrix condition number • Next time: More on LU factorization; Cholesky decomposition

  3. Matrix analysis in MATLAB
     norm      Matrix or vector norm
     normest   Estimate of the matrix 2-norm
     rank      Matrix rank
     det       Determinant
     trace     Sum of diagonal elements
     null      Null space
     orth      Orthogonalization
     rref      Reduced row echelon form
     subspace  Angle between two subspaces

  4. Eigenvalues and singular values
     eig       Eigenvalues and eigenvectors
     svd       Singular value decomposition
     eigs      A few eigenvalues
     svds      A few singular values
     poly      Characteristic polynomial
     polyeig   Polynomial eigenvalue problem
     condeig   Condition number with respect to eigenvalues
     hess      Hessenberg form
     qz        QZ factorization
     schur     Schur decomposition

  5. Matrix functions
     expm   Matrix exponential
     logm   Matrix logarithm
     sqrtm  Matrix square root
     funm   Evaluate general matrix function

  6. Linear systems of equations
     \ and /    Linear equation solution
     inv        Matrix inverse
     cond       Condition number with respect to inversion
     condest    1-norm condition number estimate
     chol       Cholesky factorization
     cholinc    Incomplete Cholesky factorization
     linsolve   Solve a system of linear equations
     lu         LU factorization
     ilu        Incomplete LU factorization
     luinc      Incomplete LU factorization
     qr         Orthogonal-triangular decomposition
     lsqnonneg  Nonnegative least squares
     pinv       Pseudoinverse
     lscov      Least squares with known covariance

  7. The inverse of a square matrix • If [A] is a square matrix, there is another matrix [A]^-1, called the inverse of [A], for which [A][A]^-1 = [A]^-1[A] = [I] • The inverse can be computed column by column by generating solutions with the unit vectors as the right-hand-side constants: solving [A]{x_j} = {e_j}, where {e_j} is the j-th canonical basis vector, yields the j-th column of [A]^-1.

  8. Canonical basis of an n-dimensional vector space • The canonical basis vectors {e_1}, {e_2}, ..., {e_n} are the columns of the n x n identity matrix [I]: {e_1} = (1, 0, 0, ..., 0)^T, {e_2} = (0, 1, 0, ..., 0)^T, ..., {e_n} = (0, 0, 0, ..., 1)^T.

  9. Matrix inverse (cont.) • LU factorization can be used to efficiently solve a system for multiple right-hand-side vectors; thus, it is ideal for evaluating the multiple unit vectors needed to compute the inverse, as the sketch below shows.
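A minimal MATLAB sketch of slides 7-9 (not from the lecture; the matrix A is an arbitrary nonsingular example): factor [A] once, then solve against each canonical basis vector to build the inverse column by column.

    A = [4 -2 1; -2 4 -2; 1 -2 4];   % arbitrary nonsingular example
    n = size(A, 1);
    [L, U, P] = lu(A);               % factor A once: P*A = L*U
    Ainv = zeros(n);
    for j = 1:n
        e = zeros(n, 1);
        e(j) = 1;                    % j-th canonical basis vector
        y = L \ (P * e);             % forward substitution
        Ainv(:, j) = U \ y;          % backward substitution
    end
    % Ainv agrees with inv(A) to within roundoff.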

  10. The response of a linear system • The response of a linear system to some stimuli can be found using the matrix inverse: each column of [A]^-1 is the response to a unit stimulus applied at the corresponding position of the right-hand side, and a general response is a superposition of these columns.
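A hedged illustration of this idea, with a made-up system matrix and stimulus vector:

    A = [4 -2 1; -2 4 -2; 1 -2 4];   % example system matrix (assumed)
    Ainv = inv(A);
    b = [1; 0.5; -2];                % made-up stimulus vector
    x = Ainv * b;                    % response: superposition of the columns of Ainv
    % For a single right-hand side, x = A \ b is cheaper and more accurate.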

  11. Distance and norms • Metric space: a set where the "distance" between elements of the set is defined, e.g., the 3-dimensional Euclidean space. The Euclidean metric defines the distance between two points as the length of the straight line connecting them. • A norm is a real-valued function that provides a measure of the size or "length" of an element of a vector space.

  12. Vector norms • The p-norm of a vector X is: ||X||_p = ( |x_1|^p + |x_2|^p + ... + |x_n|^p )^(1/p) • Important examples: p = 1, the sum of absolute values; p = 2, the Euclidean norm ||X||_2 = sqrt(x_1^2 + ... + x_n^2); p = ∞, the maximum-magnitude norm ||X||_∞ = max_i |x_i|.
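A quick MATLAB check of these definitions, using an arbitrary example vector; the hand-computed values should match the built-in norm function:

    x = [3; -4; 12];                 % arbitrary example vector
    p1   = sum(abs(x));              % 1-norm: 19
    p2   = sqrt(sum(x.^2));          % 2-norm (Euclidean): 13
    pinf = max(abs(x));              % infinity-norm: 12
    % These match norm(x,1), norm(x,2), and norm(x,inf).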

  13. Matrix norms • Common matrix norms for a matrix [A] include: the column-sum norm ||A||_1 = max_j Σ_i |a_ij|; the row-sum norm ||A||_∞ = max_i Σ_j |a_ij|; the Frobenius norm ||A||_F = sqrt( Σ_i Σ_j a_ij^2 ); and the spectral norm ||A||_2 = sqrt(λ_max) • Note: λ_max is the largest eigenvalue of [A]^T[A].
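The note about λ_max can be verified numerically; a small sketch with an arbitrary 2x2 example matrix:

    A = [1 2; 3 4];                       % arbitrary example matrix
    n1   = max(sum(abs(A), 1));           % 1-norm: largest column sum
    ninf = max(sum(abs(A), 2));           % infinity-norm: largest row sum
    nfro = sqrt(sum(A(:).^2));            % Frobenius norm
    n2   = sqrt(max(eig(A' * A)));        % 2-norm: sqrt of lambda_max of A'*A
    % n2 agrees with norm(A, 2) to within roundoff.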

  14. Matrix condition number • The matrix condition number Cond[A] is obtained by calculating Cond[A] = ||A|| · ||A^-1|| • It can be shown that: ||ΔX|| / ||X|| ≤ Cond[A] · ||ΔA|| / ||A|| • The relative error of the norm of the computed solution can be as large as the relative error of the norm of the coefficients of [A] multiplied by the condition number. • If the coefficients of [A] are known to t-digit precision, the solution {X} may be valid to only t - log10(Cond[A]) digits.
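A minimal sketch of the digit-loss estimate, using MATLAB's hilb function (the Hilbert matrix is a standard ill-conditioned example, not one used in the lecture):

    A = hilb(8);             % 8x8 Hilbert matrix, a classic ill-conditioned case
    c = cond(A);             % 2-norm condition number, about 1.5e10
    lost = log10(c);         % roughly 10 digits lost
    % Doubles carry about 16 decimal digits, so a computed solution of
    % A*x = b can be trusted to only about 16 - lost, i.e., ~6 digits.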

  15. Built-in functions to compute norms and condition numbers • norm(X,p): compute the p-norm of vector X, where p can be any number, inf, or 'fro' (for vectors the Frobenius norm equals the Euclidean norm) • norm(A,p): compute a norm of matrix A, where p can be 1, 2, inf, or 'fro' (the Frobenius norm) • cond(X,p) or cond(A,p): calculate the condition number of vector X or matrix A using the norm specified by p.
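A few usage lines for these built-ins, with arbitrary example data:

    x = [3; -4; 12];          % example vector
    n1 = norm(x, 1);          % 19
    n2 = norm(x);             % 13, default p = 2
    ni = norm(x, inf);        % 12
    A  = pascal(4);           % example matrix
    nf = norm(A, 'fro');      % Frobenius norm of the matrix
    kA = cond(A, 1);          % condition number in the 1-norm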

  16. LU factorization • LU factorization involves two steps: decompose the matrix [A] into the product of a lower triangular matrix [L], with 1 for each entry on the diagonal, and an upper triangular matrix [U]; then substitute to solve for {x} • Gauss elimination can be implemented using LU factorization • The forward-elimination step of Gauss elimination comprises the bulk of the computational effort • LU factorization methods separate the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {b}.

  17. Gauss elimination as LU factorization • To solve [A]{x} = {b}, first decompose [A] to get [L][U]{x} = {b} • MATLAB's lu function can be used to generate the [L] and [U] matrices: [L, U] = lu(A) • Step 1: solve [L]{y} = {b}; {y} can be found using forward substitution • Step 2: solve [U]{x} = {y}; {x} can be found using backward substitution • In MATLAB: [L, U] = lu(A), d = L\b, x = U\d • LU factorization requires the same number of floating-point operations (flops) as Gauss elimination • Advantage: once [A] is decomposed, the same [L] and [U] can be used for multiple {b} vectors, as sketched below.
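A sketch of the reuse advantage, assuming an example matrix A and two made-up right-hand sides; the three-output form [L,U,P] = lu(A) makes the row permutation explicit (the two-output form on the slide folds the permutation into L, which is why L\b still works there):

    A = [4 -2 1; -2 4 -2; 1 -2 4];   % example matrix (assumed)
    [L, U, P] = lu(A);               % factor once: P*A = L*U
    B = [1 0; 0 1; 2 -1];            % two made-up right-hand-side vectors
    X = zeros(size(B));
    for k = 1:size(B, 2)
        d = L \ (P * B(:, k));       % forward substitution
        X(:, k) = U \ d;             % backward substitution
    end
    % Each column of X solves A*x = B(:,k) without refactoring A.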

  18. Cholesky factorization • A symmetric matrix is a square matrix A that is equal to its transpose: A = A^T (T stands for transpose) • The Cholesky factorization is based on the fact that a symmetric positive-definite matrix can be decomposed as: [A] = [U]^T[U] • The rest of the process is similar to LU decomposition and Gauss elimination, except that only one matrix, [U], needs to be stored • Cholesky factorization with the built-in chol command: U = chol(A) • MATLAB's left-division operator \ examines the system to see which method will solve the problem most efficiently. This includes trying banded solvers, back and forward substitutions, and Cholesky factorization for symmetric systems. If these do not work and the system is square, Gauss elimination with partial pivoting is used.
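A minimal sketch of solving with the Cholesky factor, assuming a small symmetric positive-definite example matrix:

    A = [4 -2 1; -2 4 -2; 1 -2 4];   % symmetric positive-definite example (assumed)
    b = [1; 2; 3];
    U = chol(A);                     % A = U'*U; only U is stored
    y = U' \ b;                      % forward substitution with U'
    x = U \ y;                       % backward substitution
    % x agrees with A\b; chol raises an error if A is not positive definite.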
