This document explores the fundamental properties of matrices using Singular Value Decomposition (SVD). It covers key concepts including unitary and diagonal matrices, the rank of matrices, and the relationship between singular values and eigenvalues. Theorems are presented to illustrate how matrices can be expressed in different bases, revealing insights into their behavior and characteristics. Additionally, it discusses low-rank approximations, energy definitions based on norms, and practical applications such as rank determination, orthonormal basis retrieval, and least squares fitting, providing a comprehensive overview of SVD in linear algebra.
CPSC 491 Xin Liu November 22, 2010
A Change of Bases
• A = U Σ V^T, where A is m×n, U (m×m) and V (n×n) are unitary matrices, and Σ (m×n) is a diagonal matrix
• The columns of a unitary matrix form an orthonormal basis
• Any b in R^m can be expanded in {u1, u2, …, um}: b = Ub' <==> b' = U^T b
• Any x in R^n can be expanded in {v1, v2, …, vn}: x = Vx' <==> x' = V^T x
• b = Ax <==> U^T b = U^T A x = U^T U Σ V^T x <==> b' = Σ x'
• A reduces to the diagonal matrix Σ when the range is expressed in the basis of the columns of U and the domain is expressed in the basis of the columns of V
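These identities are easy to check numerically. The following NumPy sketch is an illustration added here (not part of the original slides); the random matrix A and the variable names are arbitrary:

```python
import numpy as np

# Illustrative sketch: in the bases given by the columns of U and V,
# the map x -> Ax is just multiplication by the diagonal matrix Sigma.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=True)   # U: m x m, Vt: n x n
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)

x = rng.standard_normal(n)
b = A @ x

b_prime = U.T @ b    # coordinates of b in the basis {u1, ..., um}
x_prime = Vt @ x     # coordinates of x in the basis {v1, ..., vn}

print(np.allclose(b_prime, Sigma @ x_prime))   # True: b' = Sigma x'
```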
Matrix Properties via SVD
• Theorem 1: The rank of A is r, the number of nonzero singular values.
  • Proof: A = U Σ V^T, rank(Σ) = r, and U, V are of full rank.
• Theorem 2: range(A) = <u1, u2, …, ur> and null(A) = <v_{r+1}, v_{r+2}, …, v_n>
• Theorem 3: ||A||_2 = σ1 and ||A||_F = sqrt(σ1^2 + σ2^2 + … + σr^2)
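As an added illustration (not from the slides), a small NumPy check of Theorems 1 and 3, plus the nullspace half of Theorem 2, on a matrix constructed to have rank 2; the tolerance `tol` is an assumption for deciding which singular values count as zero:

```python
import numpy as np

# Illustrative check of Theorems 1-3 on a matrix constructed to have rank 2.
rng = np.random.default_rng(1)
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

U, s, Vt = np.linalg.svd(A)
tol = 1e-10   # threshold for "numerically zero" singular values (an assumption)

# Theorem 1: rank(A) equals the number of nonzero singular values.
print(int(np.sum(s > tol)), np.linalg.matrix_rank(A))                 # 2 2

# Theorem 2 (nullspace half): the trailing n - r columns of V lie in null(A).
print(np.allclose(A @ Vt[r:].T, 0))                                   # True

# Theorem 3: ||A||_2 = sigma_1 and ||A||_F = sqrt(sigma_1^2 + ... + sigma_r^2).
print(np.isclose(np.linalg.norm(A, 2), s[0]))                         # True
print(np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2))))    # True
```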
Matrix Properties via SVD
• Theorem 4: The nonzero singular values of A are the square roots of the nonzero eigenvalues of A^T A (equivalently, of A A^T).
  • Recall: if Ax = λx for a nonzero vector x, then λ is an eigenvalue of A.
• Theorem 5: If A = A^T, then the singular values of A are the absolute values of the eigenvalues of A.
• Theorem 6: For A (m×m), |det(A)| = Π_{i=1}^{m} σi
  • This gives a way to compute |det(A)| from the singular values.
  • Proof: |det(A)| = |det(U Σ V^T)| = |det(U)| |det(Σ)| |det(V^T)| = |det(Σ)| = Π_{i=1}^{m} σi, since |det(U)| = |det(V^T)| = 1 for unitary matrices.
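Theorems 4 and 6 can likewise be verified numerically; this snippet is an added illustration using a random square matrix, not code from the lecture:

```python
import numpy as np

# Illustrative check of Theorems 4 and 6 on a random square matrix.
rng = np.random.default_rng(2)
m = 4
A = rng.standard_normal((m, m))

s = np.linalg.svd(A, compute_uv=False)          # singular values, descending

# Theorem 4: singular values are the square roots of the eigenvalues of A^T A.
eigs = np.linalg.eigvalsh(A.T @ A)              # real, non-negative, ascending
print(np.allclose(np.sqrt(eigs[::-1]), s))      # True

# Theorem 6: |det(A)| is the product of the singular values.
print(np.isclose(abs(np.linalg.det(A)), np.prod(s)))   # True
```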
Low-Rank Approximations
• Theorem 7: A is the sum of r rank-one matrices: A = Σ_{j=1}^{r} σj uj vj^T
  • Proof: write Σ = diag(σ1, 0, …, 0) + diag(0, σ2, 0, …, 0) + … + diag(0, …, 0, σr, 0, …, 0) and expand the matrix products.
• The partial sums capture as much of the energy of A as possible
  • "Energy" is measured by either the 2-norm or the Frobenius norm
  • For any 0 ≤ ν ≤ r, the rank-ν partial sum A_ν = Σ_{j=1}^{ν} σj uj vj^T is a best rank-ν approximation to A: ||A − A_ν||_2 = σ_{ν+1} (with the convention σ_{r+1} = 0).
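As an added illustration (not in the original slides), a short NumPy sketch that assembles the rank-ν partial sum from the SVD and checks that its 2-norm error is the first omitted singular value:

```python
import numpy as np

# Illustrative sketch: assemble the rank-nu partial sum of Theorem 7 and check
# that its 2-norm error equals the first omitted singular value.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

nu = 2
A_nu = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(nu))

print(np.isclose(np.linalg.norm(A - A_nu, 2), s[nu]))   # True: error = sigma_{nu+1}
```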
Applications
• Determine the rank of a matrix
• Find an orthonormal basis for the range or nullspace of a matrix
• Solve linear systems of equations
• Compute ||A||_2
• Least squares fitting (sketched below)
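As a concrete example of the last item (an addition here, not the lecture's own code): for a full-column-rank A, the least squares minimizer of ||Ax − b||_2 is x = V diag(1/σi) U^T b, using the reduced SVD:

```python
import numpy as np

# Illustrative sketch of least squares via the SVD (reduced form): for a
# full-column-rank A, the minimizer of ||Ax - b||_2 is x = V diag(1/sigma) U^T b.
rng = np.random.default_rng(4)
m, n = 10, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)          # divide U^T b componentwise by the sigmas

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_svd, x_ref))        # True: matches NumPy's least squares solver
```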