Presentation Transcript


  1. The rank of a product of two matrices X and Y is at most the smaller of the ranks of X and Y: rank(XY) ≤ min(rank(X), rank(Y))
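This can be checked numerically in MATLAB (a minimal sketch; the matrices below are arbitrary illustrations, not taken from the slides):

```matlab
% Check rank(X*Y) <= min(rank(X), rank(Y)) on two small examples
X = [1 2; 2 4; 3 6];                 % rank 1 (second column is twice the first)
Y = [1 0 1; 0 1 1];                  % rank 2
fprintf('%d <= min(%d, %d)\n', rank(X*Y), rank(X), rank(Y));     % 1 <= min(1, 2)

X2 = [1 0; 0 0];  Y2 = [0 0; 0 1];   % both have rank 1, but X2*Y2 is the zero matrix
fprintf('%d <= min(%d, %d)\n', rank(X2*Y2), rank(X2), rank(Y2)); % 0 <= min(1, 1): the bound need not be attained
```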

  2. Eigenvectors and Eigenvalues
For a symmetric, real matrix R, an eigenvector v is obtained from:
Rv = λv
where λ is an unknown scalar, the eigenvalue.
Rv - λv = 0, so (R - λI)v = 0
The vector v is orthogonal to all of the row vectors of the matrix (R - λI).

  3. Numerical example:
A = [0.1 0.2; 0.2 0.4; 0.3 0.6]        R = AᵀA = [0.14 0.28; 0.28 0.56]
Rv = λv,  (R - λI)v = 0:
[0.14-λ  0.28; 0.28  0.56-λ] [v1; v2] = [0; 0]

  4. Setting the determinant to zero:
det[0.14-λ  0.28; 0.28  0.56-λ] = 0
(0.14 - λ)(0.56 - λ) - (0.28)(0.28) = 0
λ² - 0.7λ = 0,   λ(λ - 0.7) = 0
λ1 = 0.7  and  λ2 = 0

  5. For λ1 = 0.7:
[0.14-0.7  0.28; 0.28  0.56-0.7] [v11; v21] = [-0.56  0.28; 0.28  -0.14] [v11; v21] = [0; 0]
-0.56 v11 + 0.28 v21 = 0
 0.28 v11 - 0.14 v21 = 0
so v21 = 2 v11.  If v11 = 1 then v21 = 2.
Normalized vector v1 = [0.4472; 0.8944]

  6. For λ2 = 0:
[0.14  0.28; 0.28  0.56] [v12; v22] = [0; 0]
0.14 v12 + 0.28 v22 = 0
0.28 v12 + 0.56 v22 = 0
so v12 = -2 v22.  If v22 = 1 then v12 = -2.
Normalized vector v2 = [-0.8944; 0.4472]

  7. Collecting the results for A = [0.1 0.2; 0.2 0.4; 0.3 0.6] and R = AᵀA = [0.14 0.28; 0.28 0.56]:
V = [0.4472  -0.8944; 0.8944  0.4472]        Λ = [0.7  0; 0  0]
Rv = λv written for all eigenvectors at once is RV = VΛ, and the eigenvectors are orthogonal: v1ᵀv2 = 0.
tr(R) = Σ λi = 0.7 + 0.0 = 0.7
More generally, if R (p x p) is symmetric and positive semidefinite (as AᵀA always is) with rank r ≤ p, then R possesses r positive eigenvalues and (p - r) zero eigenvalues.
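The worked example can be reproduced with MATLAB's eig (a minimal sketch; eig may return the eigenvalues in a different order and the eigenvectors with flipped signs):

```matlab
A = [0.1 0.2; 0.2 0.4; 0.3 0.6];   % 3 x 2 data matrix from the example
R = A' * A;                        % R = A'*A = [0.14 0.28; 0.28 0.56]
[V, L] = eig(R);                   % columns of V: eigenvectors; diag(L): eigenvalues
disp(diag(L)')                     % 0 and 0.7 (possibly in a different order)
disp(V)                            % compare with the slide's V, up to sign and ordering
disp(trace(R))                     % trace equals the sum of the eigenvalues: 0.7
```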

  8. Example

  9. Consider 15 samples, each containing 3 absorbing components.

  10. ? Show that in the presence of random noise the number of non-zero eigenvalues is larger than the number of components.
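One way to approach this exercise is a small simulation (a sketch only; the wavelength axis, pure spectra, and concentrations below are hypothetical and invented for illustration, not the slides' anal.m data):

```matlab
% Hypothetical simulation: 15 samples, 3 absorbing components, 50 wavelengths (A = C*S)
wl = 1:50;                                                            % wavelength axis (arbitrary units)
S  = [exp(-(wl-15).^2/40); exp(-(wl-25).^2/40); exp(-(wl-35).^2/40)]; % 3 pure-component spectra
C  = rand(15, 3);                                                     % random concentration profiles
A_ideal = C * S;                                                      % noise-free absorbance matrix, rank 3
A_noisy = A_ideal + 0.005 * randn(size(A_ideal));                     % add normally distributed noise
e_ideal = sort(eig(A_ideal' * A_ideal), 'descend');
e_noisy = sort(eig(A_noisy' * A_noisy), 'descend');
disp(e_ideal(1:6)')   % only the first 3 eigenvalues are non-zero (within rounding error)
disp(e_noisy(1:6)')   % with noise, eigenvalues beyond the 3rd are small but non-zero
```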

  11. Variance-Covariance Matrix
For the column mean-centered matrix
X = [x11 - mx1   x12 - mx2   ...   x1p - mxp
     x21 - mx1   x22 - mx2   ...   x2p - mxp
      ...          ...       ...     ...
     xn1 - mx1   xn2 - mx2   ...   xnp - mxp]
the cross-product matrix gives the variances and covariances of the variables (apart from the factor 1/(n - 1)):
XᵀX = [var(x1)       covar(x1x2)   ...   covar(x1xp)
       covar(x2x1)   var(x2)       ...   covar(x2xp)
        ...            ...         ...     ...
       covar(xpx1)   covar(xpx2)   ...   var(xp)]
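A quick numerical check of this relationship (a minimal sketch with random data; note that MATLAB's cov includes the 1/(n - 1) factor):

```matlab
X  = rand(10, 3);                        % 10 observations of 3 hypothetical variables
Xc = X - ones(10, 1) * mean(X);          % column mean-centering
disp(Xc' * Xc / (size(X, 1) - 1));       % variance-covariance matrix from the centered data
disp(cov(X));                            % agrees with MATLAB's built-in covariance
```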

  12. mmcn.m file for mean centering a matrix
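The mmcn.m listing appears only as an image in the original presentation; a minimal reconstruction of such a mean-centering function (my sketch, not the author's file) could look like this:

```matlab
function xmc = mmcn(x)
% MMCN  Column mean-centering of a data matrix.
%   Subtracts the mean of each column, so every column of xmc has zero mean.
n   = size(x, 1);                 % number of rows (samples)
xmc = x - ones(n, 1) * mean(x);   % replicate the row of column means and subtract
end
```

With this centering, xmcᵀ·xmc reproduces the variance-covariance structure shown on the previous slide.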

  13. ? Use the anal.m and mmcn.m files to verify that each eigenvalue of an absorbance data matrix is correlated with the variance of the data.

  14. Singular Value Decomposition
SVD of a rectangular matrix X is a method which yields, at the same time, a diagonal matrix of singular values S and the two matrices of singular vectors U and V such that:
X = U S Vᵀ,   UᵀU = VᵀV = Ir
The singular vectors in U and V are identical to the eigenvectors of XXᵀ and XᵀX, respectively, and the singular values are equal to the positive square roots of the corresponding eigenvalues:
X = U S Vᵀ,   Xᵀ = V S Uᵀ
X Xᵀ = U S Vᵀ V S Uᵀ = U S² Uᵀ
(X Xᵀ) U = U S²

  15. If the rank of the matrix X (m x n) is r, then:
X (m x n) = U (m x r) S (r x r) Vᵀ (r x n)
X = U S Vᵀ = s1 u1 v1ᵀ + ... + sr ur vrᵀ

  16. Singular value decomposition with MATLAB
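The corresponding MATLAB call, continuing the 3 x 2 example from the eigenvalue slides (a minimal sketch):

```matlab
A = [0.1 0.2; 0.2 0.4; 0.3 0.6];   % the 3 x 2 example matrix used earlier
[U, S, V] = svd(A);                % A = U*S*V', with U'*U = V'*V = I
sv = diag(S);                      % singular values: sqrt(0.7) = 0.8367 and 0
disp(sv.^2)                        % squared singular values = eigenvalues of A'*A (0.7 and 0)
disp(norm(A - U*S*V'))             % reconstruction error at machine precision
```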

  17. Consider 15 samples containing 2 components with strong spectral overlap and construct their absorbance data matrix together with random noise.
Ideal data (A) - Noised data (nd) = residual R1
Ideal data (A) - Reconstructed data (rd) = residual R2
It can be shown that the reconstructed data matrix is closer to the ideal data matrix.

  18. Anal.m file for constructing the data matrix

  19. Spectral overlapping of two absorbing species

  20. Ideal data matrix A

  21. Noised data matrix, nd, with 0.005 normally distributed random noise

  22. nf.m file for investigating the noise filtering property of SVD-reconstructed data
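The nf.m listing itself is an image in the original presentation; a sketch of the same idea, truncated SVD reconstruction of a simulated two-component data set (the spectra and concentrations below are hypothetical placeholders):

```matlab
% Sketch of the nf.m idea: truncated SVD reconstruction of a noisy two-component data set
wl = 1:50;                                            % hypothetical wavelength axis
S  = [exp(-(wl-24).^2/80); exp(-(wl-27).^2/80)];      % two strongly overlapping spectra
C  = rand(15, 2);                                     % 15 samples, 2 components
A  = C * S;                                           % ideal data matrix
nd = A + 0.005 * randn(size(A));                      % noised data matrix
[U, Sg, V] = svd(nd);
k  = 2;                                               % number of retained eigenvectors
rd = U(:, 1:k) * Sg(1:k, 1:k) * V(:, 1:k)';           % reconstructed (noise-filtered) data
fprintf('||A - nd|| = %.4f   ||A - rd|| = %.4f\n', ...
        norm(A - nd, 'fro'), norm(A - rd, 'fro'));    % rd is closer to the ideal data than nd
% Looping k over 1:15 and plotting 100*norm(A - rd, 'fro')/norm(A, 'fro') against k
% gives the %relative standard error curve asked for in the next slide.
```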

  23. ? Plot the % relative standard error as a function of the number of eigenvectors.

  24. Principal Component Analysis (PCA)
[Scatter plot of 14 samples with coordinates (x1i, x2i), i = 1 ... 14, in the original (x1, x2) coordinate system]

  25. [The same 14 samples after PCA, plotted against the new axes u1 and u2, with scores u11 ... u114 along u1]

  26. [Scatter plot of the 14 samples (x1i, x2i) in the original (x1, x2) coordinate system]

  27. [The same 14 samples in the (u1, u2) coordinate system, with scores u1i and u2i for each sample]

  28. Principal Components in Two Dimensions
[Plot of the three samples s1, s2, s3 with coordinates (0.1, 0.2), (0.2, 0.4), (0.3, 0.6)]
u1 = a x1 + b x2
u2 = c x1 + d x2
In the principal components model, new variables are found which give a clear picture of the variability of the data. This is best achieved by giving the first new variable maximum variance; the second new variable is then selected so as to be uncorrelated with the first one, and so on.

  29. x1 = [0.1 0.2 0.3],   x2 = [0.2 0.4 0.6]
With a = 1, b = 2:  u1 = [0.5 1.0 1.5],  var(u1) = 0.25  (and c = -1, d = 0.5)
With a = 2, b = 4:  u1 = [1.0 2.0 3.0],  var(u1) = 1.0   (and c = -2, d = 1)
The new variables can be uncorrelated if ac + bd = 0 (orthogonality constraint).
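The numbers on this slide can be checked directly (a minimal sketch; MATLAB's var uses the n - 1 denominator, which matches the values quoted):

```matlab
x1 = [0.1 0.2 0.3];   x2 = [0.2 0.4 0.6];
u1a = 1*x1 + 2*x2;                     % a = 1, b = 2  ->  u1 = [0.5 1.0 1.5]
u1b = 2*x1 + 4*x2;                     % a = 2, b = 4  ->  u1 = [1.0 2.0 3.0]
disp([var(u1a) var(u1b)])              % variances 0.25 and 1.0, as on the slide
disp([1*(-1) + 2*0.5,  2*(-2) + 4*1])  % a*c + b*d = 0 for both choices (orthogonality)
```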
