
Computational Physics (Lecture 8)

Learn the straightforward method of inverting a matrix using the linear-equation approach and Gaussian elimination techniques. Understand LU decomposition and its application in matrix operations. Explore the Doolittle and Crout factorizations in matrix inversion. Discover how quadratic forms, steepest descent, errors, and residuals are applied in solving linear equations. This lecture provides a comprehensive overview of matrix inversion methods in computational physics.


Presentation Transcript


  1. Computational Physics (Lecture 8) PHY4061

  2. Inverse of a matrix • What is the most straightforward way to invert a matrix?

  3. Inverse of a matrix • The inverse of a matrix A using the linear-equation approach: (A^{-1})_{ij} = x_{ij}, • for i, j = 1, 2, . . . , n, • where x_{ij} is the i-th component of the solution of the equation Ax_j = b_j, with (b_j)_i = δ_{ij}. • If we expand b into a unit matrix in the program for solving a linear equation set, • the solution corresponding to each column of the unit matrix forms the corresponding column of A^{-1}.

  4. // An example of performing matrix inversion through the
  // partial-pivoting Gaussian elimination.
  import java.lang.*;
  public class Inverse {
    public static void main(String argv[]) {
      double a[][] = {{ 100,  100,  100},
                      {-100,  300, -100},
                      {-100, -100,  300}};
      int n = a.length;
      double d[][] = invert(a);
      for (int i=0; i<n; ++i)
        for (int j=0; j<n; ++j)
          System.out.println(d[i][j]);
    }

    public static double[][] invert(double a[][]) {
      int n = a.length;
      double x[][] = new double[n][n];
      double b[][] = new double[n][n];
      int index[] = new int[n];
      for (int i=0; i<n; ++i) b[i][i] = 1;

      // Transform the matrix into an upper triangle
      gaussian(a, index);

      // Update the matrix b[i][j] with the ratios stored
      for (int i=0; i<n-1; ++i)
        for (int j=i+1; j<n; ++j)
          for (int k=0; k<n; ++k)
            b[index[j]][k] -= a[index[j]][i]*b[index[i]][k];

      // Perform backward substitutions
      for (int i=0; i<n; ++i) {
        x[n-1][i] = b[index[n-1]][i]/a[index[n-1]][n-1];
        for (int j=n-2; j>=0; --j) {
          x[j][i] = b[index[j]][i];
          for (int k=j+1; k<n; ++k) {
            x[j][i] -= a[index[j]][k]*x[k][i];
          }
          x[j][i] /= a[index[j]][j];
        }
      }
      return x;
    }

    public static void gaussian(double a[][], int index[]) {...}
  }
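The gaussian method is only declared above. Below is a minimal sketch of the partial-pivoting elimination it refers to, consistent with how invert consumes index[] and the stored ratios; the details here are an assumption, not necessarily the lecture's exact routine.

  // Sketch of the elided gaussian routine: partial-pivoting elimination
  // that records the pivoting order in index[] and stores the
  // elimination ratios below the diagonal of a[][].
  public static void gaussian(double a[][], int index[]) {
    int n = index.length;
    double c[] = new double[n];
    for (int i=0; i<n; ++i) index[i] = i;

    // Find the largest element of each row for implicit rescaling
    for (int i=0; i<n; ++i) {
      double c1 = 0;
      for (int j=0; j<n; ++j)
        c1 = Math.max(c1, Math.abs(a[i][j]));
      c[i] = c1;
    }

    // Search for the pivoting element in each column
    int k = 0;
    for (int j=0; j<n-1; ++j) {
      double pi1 = 0;
      for (int i=j; i<n; ++i) {
        double pi0 = Math.abs(a[index[i]][j])/c[index[i]];
        if (pi0 > pi1) { pi1 = pi0; k = i; }
      }
      // Interchange rows according to the pivoting order
      int itmp = index[j];
      index[j] = index[k];
      index[k] = itmp;
      for (int i=j+1; i<n; ++i) {
        double pj = a[index[i]][j]/a[index[j]][j];
        a[index[i]][j] = pj;      // store the ratio below the diagonal
        for (int l=j+1; l<n; ++l)
          a[index[i]][l] -= pj*a[index[j]][l];
      }
    }
  }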

  5. LU decomposition • A scheme related to the Gaussian elimination is called LU decomposition, • which splits a nonsingular matrix into the product of a lower-triangular and an upper-triangular matrix. • The Gaussian elimination corresponds to • a set of n − 1 matrix operations that can be represented by a matrix multiplication • A^{(n−1)} = MA, • where • M = M_{n−1} M_{n−2} · · · M_1, • with each M_r, for r = 1, 2, . . . , n − 1, completing one step of the elimination.

  6. It is easy to show that M is a lower-triangular matrix with all the diagonal elements being 1, and M_{ij} = −A^{(j−1)}_{ij} / A^{(j−1)}_{jj} for i > j. • So A = LU, where U = A^{(n−1)} is an upper-triangular matrix and L = M^{−1} is a lower-triangular matrix.

  7. Furthermore, because M_{ii} = 1, we must have L_{ii} = 1 and L_{ij} = −M_{ij} = A^{(j−1)}_{ij} / A^{(j−1)}_{jj} for i > j, • which are exactly the ratios stored in the matrix in the method introduced earlier in this section to perform the partial-pivoting Gaussian elimination.

  8. The method performs the LU decomposition with the nonzero elements of U and the nonzero, nondiagonal elements of L stored in the returned matrix. • The choice of L_{ii} = 1 in the LU decomposition is called the Doolittle factorization. • An alternative is to choose U_{ii} = 1; this is called the Crout factorization. • The Crout factorization is equivalent to rescaling U_{ij} by U_{ii} for i < j and multiplying L_{ij} by U_{jj} for i > j in the Doolittle factorization.
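To make that rescaling concrete, here is a minimal sketch (assuming Doolittle factors l and u of the same matrix are already in hand; the method name is illustrative) that converts them into Crout factors in place:

  // Convert Doolittle factors (L with unit diagonal) into Crout
  // factors (U with unit diagonal) for the same n x n matrix.
  public static void doolittleToCrout(double l[][], double u[][]) {
    int n = l.length;
    for (int i=0; i<n; ++i) {
      for (int j=i+1; j<n; ++j) {
        u[i][j] /= u[i][i];   // rescale U_ij by U_ii for i < j
        l[j][i] *= u[i][i];   // multiply L_ji by U_ii for j > i
      }
      l[i][i] = u[i][i];      // diagonal moves from U to L
      u[i][i] = 1;            // Crout choice U_ii = 1
    }
  }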

  9. In general, the elements in L and U can be obtained from comparison of the elements of the product of L and U with the elements of A. • The result can be cast into a recursion, • L_{ij} = (A_{ij} − Σ_{k=1}^{j−1} L_{ik} U_{kj}) / U_{jj} for i > j, • U_{ij} = (A_{ij} − Σ_{k=1}^{i−1} L_{ik} U_{kj}) / L_{ii} for i ≤ j, • where L_{i1} = A_{i1}/U_{11} and U_{1j} = A_{1j}/L_{11} are the starting values.
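A compact sketch of this recursion with the Doolittle choice L_{ii} = 1, written without pivoting for clarity (the method name and the packed return format are illustrative):

  // Doolittle LU decomposition from the comparison recursion:
  //   U_ij = A_ij - sum_{k<i} L_ik U_kj          (i <= j)
  //   L_ij = (A_ij - sum_{k<j} L_ik U_kj)/U_jj   (i > j)
  // No pivoting, so A must not require row interchanges.
  public static double[][][] lu(double a[][]) {
    int n = a.length;
    double l[][] = new double[n][n];
    double u[][] = new double[n][n];
    for (int i=0; i<n; ++i) {
      l[i][i] = 1;                          // Doolittle choice L_ii = 1
      for (int j=i; j<n; ++j) {             // row i of U
        double s = a[i][j];
        for (int k=0; k<i; ++k) s -= l[i][k]*u[k][j];
        u[i][j] = s;
      }
      for (int j=i+1; j<n; ++j) {           // column i of L
        double s = a[j][i];
        for (int k=0; k<i; ++k) s -= l[j][k]*u[k][i];
        l[j][i] = s/u[i][i];
      }
    }
    return new double[][][] {l, u};         // packed as {L, U}
  }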

  10. We can apply the same pivoting technique and store the L and U values in place of the original matrix elements (the ratios of L go into the positions that the elimination zeroes out). • The determinant of A is given by the product of the diagonal elements of L and U, because |A| = |L||U|.
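For example, once L and U are available, the determinant is a single loop (a sketch; note that the row interchanges of partial pivoting would flip the sign, which is omitted here):

  // |A| = |L||U| = product of the diagonal elements of L and U.
  public static double determinant(double l[][], double u[][]) {
    double det = 1;
    for (int i=0; i<l.length; ++i)
      det *= l[i][i]*u[i][i];
    return det;   // sign changes from pivoting row swaps omitted
  }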

  11. As soon as we obtain L and U for a given matrix A, we can solve the linear equation set Ax = b with one set of forward substitutions and one set of backward substitutions. • The linear equation set can now be rewritten as • Ly = b, Ux = y. • If we compare the elements on both sides of these equations, we have • y_i = (b_i − Σ_{j=1}^{i−1} L_{ij} y_j) / L_{ii}, • with y_1 = b_1/L_{11} and i = 2, 3, . . . , n.

  12. We can obtain the solution of the original linear equation set from • x_i = (y_i − Σ_{j=i+1}^{n} U_{ij} x_j) / U_{ii}, • for i = n − 1, n − 2, . . . , 1, starting with x_n = y_n/U_{nn}.
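Combining slides 11 and 12, a minimal sketch of the two substitution sweeps that solve Ax = b from the factors L and U:

  // Solve Ax = b given A = LU: forward substitution for Ly = b,
  // then backward substitution for Ux = y.
  public static double[] luSolve(double l[][], double u[][], double b[]) {
    int n = b.length;
    double y[] = new double[n];
    double x[] = new double[n];
    for (int i=0; i<n; ++i) {                 // forward: solve L y = b
      double s = b[i];
      for (int j=0; j<i; ++j) s -= l[i][j]*y[j];
      y[i] = s/l[i][i];
    }
    for (int i=n-1; i>=0; --i) {              // backward: solve U x = y
      double s = y[i];
      for (int j=i+1; j<n; ++j) s -= u[i][j]*x[j];
      x[i] = s/u[i][i];
    }
    return x;
  }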

  13. Quadratic form • A quadratic form is simply a scalar, quadratic function of a vector, with the form • f(x) = (1/2) x^T A x − b^T x + c. • A is the matrix, x and b are vectors, and c is a scalar constant. If A is symmetric and positive definite, f(x) is minimized by the solution to Ax = b.
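A small sketch that evaluates this quadratic form directly, useful for checking numerically that f(x) is smallest at the solution of Ax = b when A is symmetric and positive definite:

  // f(x) = (1/2) x^T A x - b^T x + c
  public static double quadraticForm(double a[][], double x[], double b[], double c) {
    int n = x.length;
    double f = c;
    for (int i=0; i<n; ++i) {
      f -= b[i]*x[i];                        // - b^T x
      for (int j=0; j<n; ++j)
        f += 0.5*x[i]*a[i][j]*x[j];          // + (1/2) x^T A x
    }
    return f;
  }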

  14. Sample Problem • A two-dimensional example with c = 0 (the matrix A and vector b for this example were given as a figure on the slide).

  15. The gradient • f'(x) = (∂f/∂x_1, . . . , ∂f/∂x_n)^T = (1/2)A^T x + (1/2)Ax − b. • If A is symmetric, the equation becomes: f'(x) = Ax − b.

  16. Revisit of steepest descent • We start from an arbitrary point x_0 • and slide down to the bottom of the paraboloid. • At each step, we choose the direction in which f decreases most quickly: • −f'(x_i) = b − Ax_i. • Error: e_i = x_i − x tells how far we are from the solution. • Residual: r_i = b − Ax_i tells how far Ax_i is from the correct value of b.

  17. Suppose we start from (−2, −2). Moving along the steepest-descent direction, we will land somewhere on the solid line. • x_1 = x_0 + αr_0 • How big a step should we take? • Line search: • choose α to minimize f along the line, df(x_1)/dα = 0, • which gives f'(x_1)^T r_0 = 0. • So α is chosen to make r_0 and f'(x_1) orthogonal!

  18. Determine α • f'(x_1) = −r_1 • r_1^T r_0 = 0 • (b − Ax_1)^T r_0 = 0 • (b − A(x_0 + αr_0))^T r_0 = 0 • (b − Ax_0)^T r_0 = α(Ar_0)^T r_0 • r_0^T r_0 = α r_0^T (Ar_0) • α = (r_0^T r_0) / (r_0^T A r_0)

  19. General method of Steepest Descent
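The general method summarized on this slide iterates r_i = b − Ax_i, α_i = (r_i^T r_i)/(r_i^T A r_i), x_{i+1} = x_i + α_i r_i. A minimal sketch (the zero starting vector, tolerance, and iteration cap are illustrative choices):

  // Steepest descent for a symmetric positive-definite A:
  // repeat r = b - Ax, alpha = (r^T r)/(r^T A r), x = x + alpha*r.
  public static double[] steepestDescent(double a[][], double b[],
                                         double tol, int maxIter) {
    int n = b.length;
    double x[] = new double[n];               // start from x = 0
    for (int iter=0; iter<maxIter; ++iter) {
      double r[] = new double[n];
      double rr = 0;
      for (int i=0; i<n; ++i) {               // residual r = b - A x
        double s = b[i];
        for (int j=0; j<n; ++j) s -= a[i][j]*x[j];
        r[i] = s;
        rr += s*s;
      }
      if (Math.sqrt(rr) < tol) break;         // converged
      double rar = 0;                         // r^T A r
      for (int i=0; i<n; ++i) {
        double s = 0;
        for (int j=0; j<n; ++j) s += a[i][j]*r[j];
        rar += r[i]*s;
      }
      double alpha = rr/rar;                  // exact line-search step
      for (int i=0; i<n; ++i) x[i] += alpha*r[i];
    }
    return x;
  }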

  20. Can you perceive any problems with this method?

  21. Conjugate directions • Steepest descent often takes steps in the same direction! • If instead we take n orthogonal steps d_0, d_1, . . . , d_{n−1}, each with the correct length, then after n steps we are done! • In general, for each step we have • x_{i+1} = x_i + α_i d_i. • e_{i+1} should be orthogonal to d_i: • d_i^T e_{i+1} = 0 (with e_{i+1} = x_{i+1} − x = e_i + α_i d_i) • d_i^T (e_i + α_i d_i) = 0 • α_i = −d_i^T e_i / (d_i^T d_i)

  22. Useless! We don't know e_i. • Instead, we make d_i and d_j A-orthogonal, or conjugate: • d_i^T A d_j = 0.

  23. New requirement: • e_{i+1} is A-orthogonal to d_i. • Like steepest descent, we need to find the minimum along d_i.

  24. This time, we evaluate α_i again. From d_i^T A e_{i+1} = 0 and e_{i+1} = e_i + α_i d_i, we get α_i = −d_i^T A e_i / (d_i^T A d_i). • Since A e_i = A(x_i − x) = Ax_i − Ax = Ax_i − b = −r_i, • this becomes α_i = d_i^T r_i / (d_i^T A d_i), which is computable: it needs only the residual, not the unknown error.
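A minimal sketch of a single conjugate-direction step built on that identity, α_i = d_i^T r_i / (d_i^T A d_i); the direction d is assumed to be supplied by whatever scheme constructs the A-orthogonal set:

  // One conjugate-direction step: alpha = (d^T r)/(d^T A d), then
  // x <- x + alpha*d. Using A e_i = -r_i means the unknown error
  // e_i never appears explicitly.
  public static void conjugateStep(double a[][], double x[],
                                   double r[], double d[]) {
    int n = x.length;
    double dr = 0, dad = 0;
    for (int i=0; i<n; ++i) {
      dr += d[i]*r[i];                        // d^T r
      double s = 0;
      for (int j=0; j<n; ++j) s += a[i][j]*d[j];
      dad += d[i]*s;                          // d^T A d
    }
    double alpha = dr/dad;
    for (int i=0; i<n; ++i) x[i] += alpha*d[i];
  }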
