
SVD (Singular Value Decomposition) and Its Applications

Presentation Transcript


  1. SVD (Singular Value Decomposition) and Its Applications • Joon Jae Lee • 2006.01.10

  2. The plan today • Singular Value Decomposition • Basic intuition • Formal definition • Applications

  3. LCD Defect Detection • Defect types of TFT-LCD: Point, Line, Scratch, Region

  4. Shading • Scanners give us raw point cloud data • How to compute normals to shade the surface?

  5. Hole filling

  6. Eigenfaces • Same principal components analysis can be applied to images

  7. Video Copy Detection • Same image content, different source • Same video source, different image content (figure: AVI and MPEG frames, a face image, and their hue histograms)

  8. 3D animations • Connectivity is usually constant (at least on large segments of the animation) • The geometry changes in each frame → vast amount of data, huge file size! (13 seconds, 3000 vertices/frame, 26 MB)

  9. Geometric analysis of linear transformations • We want to know what a linear transformation A does • We need some simple and “comprehensible” representation of the matrix of A • Let’s look at what A does to some vectors • Since A(αv) = αA(v), it’s enough to look at vectors v of unit length

  10. The geometry of linear transformations • A linear (non-singular) transform A always takes hyper-spheres to hyper-ellipses.

  11. The geometry of linear transformations • Thus, one good way to understand what A does is to find which vectors are mapped to the “main axes” of the ellipsoid.

  12. Geometric analysis of linear transformations • If we are lucky: A = V Λ V^T, with V orthogonal (true if A is symmetric) • The eigenvectors of A are the axes of the ellipse

  13. Symmetric matrix: eigen-decomposition • In this case A is just a scaling matrix. The eigen-decomposition of A tells us which orthogonal axes it scales, and by how much.

  14. General linear transformations: SVD • In general, A will also contain rotations, not just scales.

  15. General linear transformations: SVD • A = U Σ V^T, where U and V are orthonormal (rotations/reflections) and Σ is a non-negative diagonal matrix (scales)
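A minimal NumPy sketch of this geometric picture (the 2×2 matrix is an arbitrary example, not one from the slides): each right singular vector is mapped onto an ellipse axis, and the ellipse semi-axes equal the singular values.

```python
import numpy as np

# An arbitrary 2x2 example matrix that both rotates and scales (not symmetric).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

U, s, Vt = np.linalg.svd(A)            # A = U @ diag(s) @ Vt

# Each right singular vector v_i is mapped onto an ellipse axis: A v_i = s_i u_i
for i in range(2):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))   # True, True

# The unit circle is mapped to an ellipse whose semi-axes are the singular values.
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
ellipse = A @ np.vstack([np.cos(theta), np.sin(theta)])
radii = np.linalg.norm(ellipse, axis=0)
print(radii.max(), radii.min())        # approximately s[0] and s[1]
print(s)
```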

  16. SVD more formally • SVD exists for any matrix • Formal definition: for square matrices A ∈ R^{n×n}, there exist orthogonal matrices U, V ∈ R^{n×n} and a diagonal matrix Σ, such that all the diagonal values σ_i of Σ are non-negative and A = U Σ V^T

  17. SVD more formally • The diagonal values of Σ (σ_1, …, σ_n) are called the singular values. It is customary to sort them: σ_1 ≥ σ_2 ≥ … ≥ σ_n • The columns of U (u_1, …, u_n) are called the left singular vectors. They are the axes of the ellipsoid. • The columns of V (v_1, …, v_n) are called the right singular vectors. They are the preimages of the axes of the ellipsoid.
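A minimal NumPy check of this definition on a random square matrix (the matrix itself is just a made-up example): the singular values come out sorted and non-negative, U and V are orthogonal, and the product reconstructs A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))                  # any square matrix

U, s, Vt = np.linalg.svd(A)                      # A = U @ diag(s) @ Vt

print(np.all(s >= 0) and np.all(s[:-1] >= s[1:]))   # non-negative and sorted
print(np.allclose(U @ U.T, np.eye(4)))               # U orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(4)))             # V orthogonal
print(np.allclose(U @ np.diag(s) @ Vt, A))           # A = U Sigma V^T
```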

  18. SVD is the “workhorse” of linear algebra • There are numerical algorithms to compute SVD. Once you have it, you have many things: • Matrix inverse → can solve square linear systems • Numerical rank of a matrix • Can solve least-squares systems • PCA • Many more…

  19. Matrix inverse and solving linear systems • Matrix inverse: A^{-1} = (U Σ V^T)^{-1} = V Σ^{-1} U^T • So, to solve Ax = b: x = V Σ^{-1} U^T b
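A minimal NumPy sketch of solving a square system through the SVD (the matrix and right-hand side are random example data); the result matches the direct solver.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))        # a generic (almost surely invertible) matrix
b = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(A)

# x = V Sigma^{-1} U^T b, applied without forming A^{-1} explicitly
x = Vt.T @ ((U.T @ b) / s)

print(np.allclose(A @ x, b))                       # x solves Ax = b
print(np.allclose(x, np.linalg.solve(A, b)))       # agrees with the direct solver
```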

  20. Matrix rank • The rank of A is the number of non-zero singular values

  21. Numerical rank • If there are very small singular values, then A is close to being singular. We can set a threshold t, so that numeric_rank(A) = #{ i | σ_i > t } • If rank(A) < n then A is singular. It maps the entire space R^n onto some subspace, like a plane (so A is some sort of projection).
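A small sketch of this thresholding in NumPy (the matrix and the threshold value are illustrative choices):

```python
import numpy as np

def numeric_rank(A, t=1e-10):
    """Number of singular values above the threshold t."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > t))

# 3x3 matrix whose third row is the sum of the first two -> rank 2
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(numeric_rank(A))                # 2
print(np.linalg.matrix_rank(A))       # NumPy's own SVD-based rank, also 2
```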

  22. Solving least-squares systems • We tried to solve Ax = b when A was rectangular (more equations than unknowns) • Seeking solutions in the least-squares sense: minimize ‖Ax − b‖²

  23. Solving least-squares systems • We proved that when A is full-rank, the least-squares solution satisfies A^T A x = A^T b (normal equations). • So: A^T A = (U Σ V^T)^T (U Σ V^T) = V Σ^T Σ V^T = V diag(σ_1², …, σ_n²) V^T

  24. Solving least-squares systems • Substituting the SVD in the normal equations: x = (A^T A)^{-1} A^T b = V diag(1/σ_1², …, 1/σ_n²) V^T · V Σ^T U^T b = V diag(1/σ_1, …, 1/σ_n) U^T b

  25. Pseudoinverse • The matrix we found is called the pseudoinverse of A. • Definition using the reduced SVD: A^+ = V diag(1/σ_1, …, 1/σ_n) U^T. If some of the σ_i are zero, put zero instead of 1/σ_i.

  26. Pseudoinverse • The pseudoinverse A^+ exists for any matrix A. • Its properties: • If A is m×n then A^+ is n×m • It acts a little like a real inverse: A A^+ A = A and A^+ A A^+ = A^+ (and A^+ = A^{-1} when A is square and invertible)

  27. Solving least-squares systems • When A is not full-rank, A^T A is singular • There are multiple solutions to the normal equations, and thus multiple solutions to the least-squares problem. • The SVD approach still works! In this case it finds the minimal-norm solution: x = A^+ b
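A minimal NumPy sketch of this construction (the rank-deficient matrix is a made-up example): the pseudoinverse is built from the reduced SVD, replacing the reciprocals of (near-)zero singular values by zero, and the result matches both NumPy's pinv and the minimum-norm lstsq solution.

```python
import numpy as np

def pinv_from_svd(A, t=1e-10):
    """Pseudoinverse A+ = V Sigma+ U^T, with 1/sigma_i replaced by 0 when sigma_i <= t."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)       # reduced SVD
    s_inv = np.array([1.0 / si if si > t else 0.0 for si in s])
    return Vt.T @ np.diag(s_inv) @ U.T

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))
A = np.hstack([A, A[:, :1]])          # repeat a column -> rank-deficient, A^T A singular
b = rng.standard_normal(6)

x = pinv_from_svd(A) @ b              # minimal-norm least-squares solution
print(np.allclose(x, np.linalg.pinv(A) @ b))
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```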

  28. PCA – the general idea • PCA finds an orthogonal basis that best represents a given data set. • The sum of squared distances from the x’ axis is minimized.

  29. PCA – the general idea • The projected data set approximates the original data set • This line segment (along the first principal axis v_1) approximates the original data set

  30. PCA – the general idea • PCA finds an orthogonal basis that best represents a given data set. • PCA finds a best approximating plane (again, in terms of squared distances) (figure: 3D point set in the standard basis)

  31. For approximation • In general dimension d, the eigenvalues are sorted in descending order: λ_1 ≥ λ_2 ≥ … ≥ λ_d • The eigenvectors are sorted accordingly. • To get an approximation of dimension d’ < d, we take the d’ first eigenvectors and look at the subspace they span (d’ = 1 is a line, d’ = 2 is a plane…)

  32. For approximation • To get an approximating set, we project the original data points onto the chosen subspace: x_i = m + α_1 v_1 + α_2 v_2 + … + α_{d’} v_{d’} + … + α_d v_d • Projection: x_i’ = m + α_1 v_1 + α_2 v_2 + … + α_{d’} v_{d’} + 0·v_{d’+1} + … + 0·v_d

  33. Principal components • Eigenvectors that correspond to big eigenvalues are the directions in which the data has strong components (= large variance). • If the eigenvalues are more or less the same – there is no preferable direction. • Note: the eigenvalues are always non-negative. Think why…

  34. Principal components • If there’s no preferable direction, the scatter matrix S looks like a multiple of the identity: any vector is an eigenvector • If there is a clear preferable direction, S has one dominant eigenvalue: λ_2 is close to zero, much smaller than λ_1.
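A minimal NumPy sketch of PCA computed through the SVD of the centered data matrix (the 2D data set is synthetic, made up just for illustration): it prints the principal axes and the eigenvalues of the scatter matrix, then builds a d’ = 1 approximation.

```python
import numpy as np

rng = np.random.default_rng(3)
# 200 noisy 2D points stretched along the direction (2, 1)
t = rng.standard_normal(200)
X = np.outer(t, [2.0, 1.0]) + 0.1 * rng.standard_normal((200, 2))

m = X.mean(axis=0)                     # centroid
Xc = X - m                             # centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal axes; eigenvalues of the scatter matrix Xc^T Xc are s**2
print("principal axes:\n", Vt)
print("eigenvalues:", s**2)            # lambda_1 >> lambda_2 here

# d' = 1 approximation: project every point onto the first principal axis
alpha = Xc @ Vt[0]
X_approx = m + np.outer(alpha, Vt[0])
print("mean squared residual:", np.mean(np.sum((X - X_approx) ** 2, axis=1)))
```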

  35. Application: finding tight bounding box • An axis-aligned bounding box agrees with the coordinate axes (it is defined by minX, maxX, minY, maxY)

  36. Application: finding tight bounding box • Oriented bounding box: we find better axes!

  37. Application: finding tight bounding box • Oriented bounding box: we find better axes!
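A minimal 2D sketch of this idea in NumPy (synthetic points; note that PCA axes are a common heuristic here and do not always give the truly minimal box): compute the principal axes, take the min/max extents in that basis, and map the box corners back.

```python
import numpy as np

def oriented_bounding_box(points):
    """Oriented box from PCA axes: min/max extents in the principal basis."""
    m = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - m, full_matrices=False)   # rows: principal axes
    coords = (points - m) @ Vt.T                                 # coordinates in that basis
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners @ Vt + m                                      # back to the original frame

rng = np.random.default_rng(4)
pts = rng.standard_normal((200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])
print(oriented_bounding_box(pts))      # the 4 corners of the oriented box
```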

  38. Scanned meshes

  39. Point clouds • Scanners give us raw point cloud data • How to compute normals to shade the surface?

  40. Point clouds • Local PCA: take the third vector (the direction of least variance in the neighborhood) as the normal
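A minimal NumPy sketch of this normal estimation for a single neighborhood (the points are a synthetic, nearly planar patch): the last right singular vector of the centered neighborhood is the estimated normal.

```python
import numpy as np

def estimate_normal(neighbors):
    """Normal of a local patch: the PCA direction of least variance."""
    m = neighbors.mean(axis=0)
    _, _, Vt = np.linalg.svd(neighbors - m, full_matrices=False)
    return Vt[-1]                      # third (smallest-variance) principal direction

rng = np.random.default_rng(5)
# Points near the plane z = 0 with small noise: the normal should be close to (0, 0, ±1)
patch = np.column_stack([rng.uniform(-1, 1, 50),
                         rng.uniform(-1, 1, 50),
                         0.01 * rng.standard_normal(50)])
print(estimate_normal(patch))
```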

  41. Eigenfaces • Same principal components analysis can be applied to images

  42. Eigenfaces • Each image is a vector in R^{250·300} • We want to find the principal axes – vectors that best represent the input database of images

  43. Reconstruction with a few vectors • Represent each image by the first few (n) principal components

  44. Face recognition • Given a new image of a face, w ∈ R^{250·300} • Represent w using the first n PCA vectors • Now find the image w’ in the database whose representation in the PCA basis is the closest: the angle between w and w’ is the smallest
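A minimal NumPy sketch of the eigenface pipeline; here random vectors stand in for real 250×300 face images, and nearest-neighbor distance in the PCA basis is used instead of the angle criterion for simplicity.

```python
import numpy as np

rng = np.random.default_rng(6)
h, w, n_images, n_components = 250, 300, 40, 10
images = rng.standard_normal((n_images, h * w))   # each row: one flattened "face"

m = images.mean(axis=0)
_, _, Vt = np.linalg.svd(images - m, full_matrices=False)
basis = Vt[:n_components]                          # the first n principal axes ("eigenfaces")

db_coeffs = (images - m) @ basis.T                 # PCA coefficients of every database image

def recognize(face):
    """Index of the database image whose PCA representation is closest to face's."""
    c = basis @ (face - m)
    return int(np.argmin(np.linalg.norm(db_coeffs - c, axis=1)))

query = images[7] + 0.1 * rng.standard_normal(h * w)   # a noisy version of image 7
print(recognize(query))                                  # 7
```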

  45. SVD for animation compression Chicken animation

  46. 3D animations • Each frame is a 3D model (mesh) • Connectivity – mesh faces

  47. 3D animations • Each frame is a 3D model (mesh) • Connectivity – mesh faces • Geometry – 3D coordinates of the vertices

  48. 3D animations • Connectivity is usually constant (at least on large segments of the animation) • The geometry changes in each frame → vast amount of data, huge file size! (13 seconds, 3000 vertices/frame, 26 MB)

  49. Animation compression by dimensionality reduction • The geometry of each frame is a vector in R^{3N} (N = #vertices) • Stacking the frames gives a 3N × f matrix

  50. Animation compression by dimensionality reduction • Find a few vectors of R^{3N} that will best represent our frame vectors! • SVD of the 3N × f frame matrix: A = U Σ V^T, with U of size 3N × f and Σ, V^T of size f × f (reduced SVD)
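A minimal NumPy sketch of this compression (the "animation" here is a synthetic low-rank motion, not the chicken sequence): keep only the k strongest columns of U and the corresponding rows of Σ V^T.

```python
import numpy as np

rng = np.random.default_rng(7)
N, f, k = 3000, 400, 10                # vertices, frames, kept basis vectors

# Synthetic animation: each column is one frame (a 3N-vector), generated as a
# smooth low-rank motion plus a little noise, just to exercise the machinery.
shapes = rng.standard_normal((3 * N, 5))
weights = np.sin(np.linspace(0.0, 10.0, f)[:, None] * np.arange(1, 6))
frames = shapes @ weights.T + 0.01 * rng.standard_normal((3 * N, f))   # 3N x f

# Truncated SVD: keep only the k strongest "basis shapes"
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
Uk, coeffs = U[:, :k], np.diag(s[:k]) @ Vt[:k]     # store these instead of frames

print("compression ratio:", frames.size / (Uk.size + coeffs.size))
print("max reconstruction error:", np.abs(Uk @ coeffs - frames).max())
```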
